|
|
|
|
|
|
| |
Apparently, this was replaced by the dimensions set by
SD_CTRL_SET_VIDEO_PARAMS. But I can't find out when this happened -
possibly, these
fields were never used by sd_lavc.c, and only by the (long removed)
MPlayer dvdsub decoder.
|
|
|
|
|
| |
Usually not a problem, but could not be initialized early enough in some
corner cases.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Until now, feeding packets to the decoder in advance was done for text
subtitles only. This was possible because libass buffers all subtitle
data anyway (in ASS_Track). sd_lavc, responsible for bitmap subs, does
not do this. But it can buffer a small number of subtitle frames ahead.
Enable this.
Repurpose sub_accept_packets_in_advance(). Instead of "can take all
packets" it means "can take 1 packet" now. (The old meaning is still
needed locally in dec_sub.c; keep it there.) It asks the decoder whether
there is room for at least 1 subtitle packet. sd_lavc implements it and
returns true if its internal fixed-size subtitle queue still has a free
slot. (The implementation of this in dec_sub.c isn't entirely clean.
For one, decode_chain() ignores this mechanism, so it's implied that
bitmap subtitles do not use the subtitle filter chain in any advanced
way.)
Also fix 2 bugs in the sd_lavc queue handling. Subtitles must be checked
in reverse, because the first entry will often have endpts==NOPTS, which
would always match. alloc_sub() must cycle the queue buffer, because it
reuses memory allocations (like sub.imgs) by design.
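A rough sketch of the free-slot check and the reverse lookup described
above (illustrative C only; struct, field and constant names are
assumptions, not the actual sd_lavc.c code):

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_QUEUE 4
    #define NOPTS (-1e300)          /* stand-in for MP_NOPTS_VALUE */

    struct sub_entry {
        bool valid;
        double pts, endpts;         /* endpts may be NOPTS (unknown end) */
        void *imgs;                 /* reused allocation, hence cycling */
    };

    struct sub_queue {
        struct sub_entry subs[MAX_QUEUE];
    };

    /* "Can take 1 packet": true if at least one slot is free. */
    static bool accepts_packet(struct sub_queue *q)
    {
        for (int n = 0; n < MAX_QUEUE; n++) {
            if (!q->subs[n].valid)
                return true;
        }
        return false;
    }

    /* Find the sub to display at pts. Checked in reverse, so an entry
       with endpts == NOPTS (which matches any pts at or after its start)
       is only used if no entry after it in the array matches. */
    static struct sub_entry *find_sub(struct sub_queue *q, double pts)
    {
        for (int n = MAX_QUEUE - 1; n >= 0; n--) {
            struct sub_entry *s = &q->subs[n];
            if (s->valid && pts >= s->pts &&
                (s->endpts == NOPTS || pts < s->endpts))
                return s;
        }
        return NULL;
    }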
|
|
|
|
| |
Found by Coverity.
|
| |
|
|
|
|
|
|
|
|
|
| |
Helps with files that have occasional broken timestamps. For larger
discontinuities, e.g. caused by actual timestamp resets, we still want
to realign audio.
(I guess in general, this should be removed and replaced by a more
general resync-on-desync logic, but not now.)
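The gist, as a hedged sketch (the threshold value and the helper are
made up for illustration):

    #include <math.h>

    #define SMALL_DISCONT 0.1       /* seconds; illustrative value */

    extern void realign_audio(double pts);  /* hypothetical helper */

    static void handle_pts(double pts, double expected_pts)
    {
        double error = pts - expected_pts;
        if (fabs(error) <= SMALL_DISCONT) {
            /* occasional broken timestamp: ignore it and let normal
               A/V sync absorb the small error */
        } else {
            /* larger discontinuity, e.g. an actual timestamp reset:
               realign audio as before */
            realign_audio(pts);
        }
    }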
|
|
|
|
|
|
|
|
|
| |
This makes no sense, because the client is obligated to react to this
event.
This also happens to fix a deadlock with JSON IPC clients sending
"disable_event all", because MPV_EVENT_SHUTDOWN was used to stop the
thread driving the socket connection (fixes #2558).
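For context, a minimal client event loop using the public API; it shows
why MPV_EVENT_SHUTDOWN must always be deliverable:

    #include <mpv/client.h>

    static void run_event_loop(mpv_handle *ctx)
    {
        while (1) {
            /* negative timeout = block until an event arrives */
            mpv_event *event = mpv_wait_event(ctx, -1);
            if (event->event_id == MPV_EVENT_SHUTDOWN)
                break;  /* if this event could be disabled, the loop (and
                           the thread driving an IPC socket) never exits */
        }
    }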
|
|
|
|
|
|
|
|
|
| |
Requested. Don't overwrite permanent OSD text set with e.g. --osd-msg1.
Instead, append the OSD message to it (on the next line).
Note that with --osd-msg1, seeking will still overwrite the OSD with the
playback status for a while. If you do not want this, use --osd-msg3
--osd-level=3 instead.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
At least I hope so.
Deriving the duration from the pts was not really correct. It doesn't
include speed adjustments, and becomes completely wrong if the user e.g.
changes the playback speed by a huge amount. Pass through the accurate
duration value by adding a new vo_frame field.
The value for vsync_offset was not correct either. We don't need the
error for the next frame, but the error for the current one. This wasn't
noticed because it makes no difference in symmetric cases, like 24 fps
on 60 Hz.
I'm still not entirely confident in the correctness of this, but it sure
is an improvement.
Also, remove the MP_STATS() calls - they're not really useful to debug
anything anymore.
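Schematically (an illustrative struct, not the actual vo_frame
definition in video/out/vo.h):

    struct frame_timing {
        /* real display duration of this frame, with playback speed
           already applied - the value the new vo_frame field carries */
        double duration;
        /* estimated vsync error of the *current* frame, not of the
           frame that will be queued next */
        double vsync_offset;
    };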
|
|
|
|
| |
Well, this was stupid.
|
|
|
|
|
|
|
| |
This was just converting back and forth between int64_t/microseconds and
double/seconds. Remove this stupidity. The pts/duration fields are still
in microseconds, but they have no meaning in the display-sync case (also
drop printing the pts field from opengl/video.c - it's always 0).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of periodically trying to enable it again. There are two cases
that can happen:
1. A random discontinuity messed everything up,
2. Things are just broken and will desync all the time
Until now, it tried to deal with case 1 - but maybe this is really rare,
and we don't really need to care about it. On the other hand, case 2 is
kind of hard to diagnose if the user doesn't use the terminal.
Seeking will reenable display-sync, so you can fix playback if case 1
happens, but still get predictable behavior in case 2.
|
|
|
|
|
| |
Don't change video speed in this mode. This brings it closer to the
manpage's claim that it behaves like the "audio" mode.
|
|
|
|
|
|
|
|
|
|
| |
This is simply the average refresh rate. Including "bad" samples is
actually an advantage, because the property exists only for
informational purposes, and will reflect problems such as the driver
skipping a vsync.
Also export the standard deviation of the vsync frame duration
(normalized to the range 0-1) as vsync-jitter property.
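One plausible way to compute such a value (a sketch; the exact
normalization used for the property may differ):

    #include <math.h>

    /* Average refresh interval and jitter (stddev / mean) over the last
       n measured vsync intervals. */
    static double vsync_jitter(const double *intervals, int n)
    {
        double mean = 0, var = 0;
        for (int i = 0; i < n; i++)
            mean += intervals[i];
        mean /= n;
        for (int i = 0; i < n; i++) {
            double d = intervals[i] - mean;
            var += d * d;
        }
        return sqrt(var / n) / mean;    /* 0 = perfectly regular vsync */
    }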
|
| |
|
| |
|
|
|
|
| |
Utterly useless, but requested. Fixes #2444.
|
|
|
|
| |
I think this is much more informative. Maybe.
|
|
|
|
| |
But not much.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This was used with --no-sub-ass (aka --no-ass). This option (which is
not yet removed) strips all styling from the subtitles, and renders them
as plaintext only. For some reason, it originally seemed convenient to
reuse all the OSD text rendering code (osd_libass.c). While this was
indeed simple, it had a bad influence on the rest of the code. For
example, it had to decide whether to go through the OSD code path, or
the proper subtitle renderer in sd_ass.c.
Kill the OSD subtitle renderer. Reimplement --no-sub-ass and also
"secondary" subtitles in sd_ass.c. fill_plaintext() contains some rather
minor code duplication with osd_libass.c for setting up a dummy
ASS_Event and escaping the stripped text. Since sd_ass.c already has to
handle "normal" text subtitles, and has code for stripping ASS tags,
this remains all relatively simple.
Remove all the unnecessary crap from the rest of the code.
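For illustration, roughly what escaping stripped text for a dummy
ASS_Event involves (a hedged sketch, not the actual fill_plaintext();
it assumes libass accepts "\{" as a literal brace and "\N" as a forced
line break):

    #include <string.h>

    /* Append text to out (buffer size n), escaped so that libass shows
       it as plain text instead of interpreting override tags. */
    static void append_escaped(char *out, size_t n, const char *text)
    {
        for (; *text; text++) {
            char buf[3] = {*text, 0, 0};
            if (*text == '{')
                strcpy(buf, "\\{");     /* don't start an override block */
            else if (*text == '\n')
                strcpy(buf, "\\N");     /* forced line break in ASS */
            else if (*text == '\r')
                continue;
            strncat(out, buf, n - strlen(out) - 1);
        }
    }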
|
|
|
|
|
|
|
|
|
| |
Use the demux_set_ts_offset() added in the previous commit to make each
timeline segment use timestamps according to its relative position
within the overall timeline. As a consequence we don't need to care
about these timestamps anymore, and everything becomes simpler.
(Another minor but delicious nugget of sanity.)
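Conceptually (field names and the exact prototype are assumptions for
illustration; demux_set_ts_offset() itself is the function referenced
above):

    struct demuxer;
    void demux_set_ts_offset(struct demuxer *d, double offset); /* assumed */

    struct timeline_part {
        double start;           /* position of the segment in the timeline */
        double source_start;    /* where the segment starts in its source */
        struct demuxer *source;
    };

    static void rebase_parts(struct timeline_part *parts, int num_parts)
    {
        for (int n = 0; n < num_parts; n++) {
            struct timeline_part *p = &parts[n];
            /* after this, the demuxer returns timestamps that already sit
               at the segment's position in the overall timeline */
            demux_set_ts_offset(p->source, p->start - p->source_start);
        }
    }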
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Most of this is explained in the DOCS additions.
This gives us slightly more sanity, because there is less interaction
between the various parts. The goal is getting rid of the video_offset
entirely.
The simplification extends to the user API. In particular, we don't need
to fix missing parts in the API, such as the lack of a seek command
that seeks relative to the start time. All these things are now
transparent.
(If someone really wants to know the real timestamps/start time, new
properties would have to be added.)
|
|
|
|
|
|
|
|
|
|
|
| |
This adds support for the progress indicator taskbar extension
that was introduced with Windows 7 and Windows Server 2008 R2.
I don’t like this solution because it keeps its own state and
introduces another VOCTRL, but I couldn’t come up with anything
less messy.
closes #2399
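For reference, the Windows API involved, as a hedged sketch (not the
actual VOCTRL plumbing; assumes a valid window handle and that COM is
already initialized on the calling thread; link against ole32 and the
library providing CLSID_TaskbarList):

    #define COBJMACROS
    #include <windows.h>
    #include <shobjidl.h>

    static void set_taskbar_progress(HWND hwnd, ULONGLONG pos, ULONGLONG total)
    {
        ITaskbarList3 *taskbar = NULL;
        HRESULT hr = CoCreateInstance(&CLSID_TaskbarList, NULL,
                                      CLSCTX_INPROC_SERVER,
                                      &IID_ITaskbarList3, (void **)&taskbar);
        if (FAILED(hr))
            return;
        if (SUCCEEDED(ITaskbarList3_HrInit(taskbar))) {
            ITaskbarList3_SetProgressState(taskbar, hwnd, TBPF_NORMAL);
            ITaskbarList3_SetProgressValue(taskbar, hwnd, pos, total);
        }
        ITaskbarList3_Release(taskbar);
    }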
|
|
|
|
|
|
|
|
|
| |
If the player sends a frame with duration==0 to the VO, it can trivially
underrun. Don't panic, but keep the correct time.
Also, returning the absolute time from vo_get_next_frame_start_time()
just to turn it into a float with relative time was silly. Rename it and
make it return what the caller needs.
|
| |
|
|
|
|
|
| |
This computed nonsense if the user set a playback speed other than 1
(in addition to the display-sync speed change).
|
|
|
|
|
|
| |
80ms allowable desync was a bit too much. It'd allow for a range of
160ms, which everyone can notice. It might also be a bother to apply
compensation resampling speed for that long.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We always let audio slowly desync until a threshold is reached, and then
pushed it back by applying a maximum compensation speed. Refine what
comes afterwards: instead of playing with the nominal video speed, use
the actual required audio speed for keeping sync as measured by the A/V
difference. (The "actual" speed is the ideal speed with A/V differences
added.)
Although this works in theory, it's somewhat questionable how much this
works in practice. The ideal time value is actually not exact, but is
the time at which the frame is scheduled (could be compensated by using
the time_left calculations in handle_display_sync_frame()). It doesn't
account for speed changes or catastrophic discontinuities. It uses only
10 past frames.
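One way to read "ideal speed with A/V differences added", as a very
rough sketch (not the actual code; window size and sign convention are
assumptions):

    /* av_diff[] holds the measured A/V difference (seconds) for the last
       n displayed frames (n >= 2), real_time[] the wall-clock time each
       of them was shown. */
    static double required_audio_speed(double ideal_speed,
                                       const double *av_diff,
                                       const double *real_time, int n)
    {
        /* how fast the A/V difference drifts per second of real time;
           this is roughly the speed error that has to be cancelled */
        double drift = (av_diff[n - 1] - av_diff[0]) /
                       (real_time[n - 1] - real_time[0]);
        return ideal_speed + drift;
    }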
|
|
|
|
|
|
|
|
|
|
| |
As long as it's within the desync tolerance, do not change the audio
speed at all for resampling. This reduces speed changes which might be
caused by jittering timestamps and similar cases.
(While in theory you could just not care and change speed every single
frame, I'm afraid that such changes could possibly cause audio
artifacts. So better just avoid it in the first place.)
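The rule, sketched (threshold handling and names are illustrative):

    #include <math.h>

    static double audio_speed_factor(double av_diff, double tolerance,
                                     double max_change)
    {
        if (fabs(av_diff) <= tolerance)
            return 1.0;     /* within the allowed desync: don't resample */
        /* outside the tolerance: apply a bounded correction */
        double corr = av_diff;      /* real code scales this over time */
        if (corr > max_change)
            corr = max_change;
        if (corr < -max_change)
            corr = -max_change;
        return 1.0 + corr;
    }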
|
|
|
|
| |
We can implement it differently and drop a tiny bit of state.
|
|
|
|
|
|
|
|
| |
This is very "illustrative", unlike the video-speed-correction
property, and thus useful. It can also be used to observe scheduling
errors, which are not detected by the core. (These happen due to
rounding errors; possibly not even our fault, but coming from
files with rounded timestamps and so on.)
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of looking at the current frame duration for the intended
speedup, look at all past frames, and find a good average speed. This
ties in with not wanting to average _all_ frame durations, which
doesn't make sense in VFR situations.
This is currently done in the most naive way possible, but already sort
of works for VFR which switches between frame durations that are
integer multiples of a base rate. Certainly more improvements could
be made, such as trying to adjust directly on FPS changes, instead of
averaging everything, but for now this is not needed at all.
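In its most naive form this is just an average (illustrative sketch):

    /* Average the durations of the last n frames; VFR content that
       alternates between integer multiples of a base duration averages
       out to something usable. */
    static double average_frame_duration(const double *durations, int n)
    {
        double total = 0;
        for (int i = 0; i < n; i++)
            total += durations[i];
        return total / n;
    }
    /* The display-sync speedup is then derived from this average versus
       the display's vsync interval, instead of from the current frame
       duration alone. */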
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Helps somewhat with muxer-rounded timestamps.
There is some danger that this introduces a timestamp drift. But since
they are averaged values (unlike when using an incorrect container
framerate hint), any potential drift shouldn't be too brutal, or should
compensate itself soon. So I won't bother yet with comparing the results
with the real timestamp, unless we run into actual problems.
Of course we still prefer potentially real timestamps over the
approximated ones. But unless the timestamps match the container FPS,
we can't know whether they are real (no, checking whether they have
microsecond components would be cheating). Perhaps in future, we could
let the demuxer export the timebase - if the timebase is not 1000 (or
divisible by it), we know that millisecond-rounded timestamps won't
happen.
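The hypothetical timebase check, as a sketch (assuming the demuxer
exported its timebase as integer ticks per second):

    #include <stdbool.h>
    #include <stdint.h>

    /* Millisecond-rounded timestamps can only occur if milliseconds are
       exactly representable in the container timebase. */
    static bool ms_rounding_possible(int64_t ticks_per_second)
    {
        return ticks_per_second % 1000 == 0;    /* e.g. 1000, 90000 */
    }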
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Get rid of get_past_frame_durations(), which was a bit too messy. Add
a past_frames array, which contains the same information in a more
reasonable way. This also means that we can get the exact current and
past frame durations without going through awful stuff. (The main
problem is that vo_pts_history contains future frames as well, which is
needed for frame backstepping etc., but gets in the way here.)
Also disable the automatic disabling of display-sync if the frame
duration changes, and extend the frame durations allowed for display
sync. To allow arbitrarily high durations, vo.c would need to be changed
to pause and potentially redraw OSD while showing a single frame, so
for now they're still limited.
In an attempt to deal with VFR, calculate the overall speed using the
average FPS. The frame scheduling itself does not use the average FPS,
but the duration of the current frame. This does not work too well,
but provides a good base for further improvements.
Where this commit actually helps a lot is dealing with rounded
timestamps, e.g. if the container framerate is wrong or unknown, or
if the muxer wrote incorrectly rounded timestamps. While the rounding
errors apparently can't be gotten rid of completely in the general case,
this is still much better than e.g. disabling display-sync completely
just because some frame durations go out of bounds.
|
|
|
|
|
|
| |
We need a frame duration even on start, because the number of vsyncs
the frame is shown for is predetermined. (vo_opengl actually makes use of
this property in certain cases.)
|
|
|
|
|
|
|
|
|
| |
"Missed" implies the frame was dropped, but what really happens is that
the following frame will be shown later than intended (due to the
current frame skipping a vsync).
(As of this commit, this property is still inactive and always
returns 0. See git blame for details.)
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
When the audio format is not known yet and the audio chain is still
initializing, filter reinit will fail. Normally, attempts to
reinitialize filters at this stage should be rare (e.g. user commands
editing the filter chain). But it sometimes happened with track
switching in combination with the video code calling
update_playback_speed() at arbitrary times.
Get rid of the resulting error message by not trying to change the
filters for the sake of a playback speed update while decoding is still
being initialized.
|
| |
|
|
|
|
|
|
|
| |
Has the same function as setting the option.
This commit changes the property in a bunch of other ways. For example,
if the VO is not created, it will return the option value.
|
|
|
|
|
|
| |
This check disables the display-sync resample method. If the filters
convert PCM to AC3, we can still insert a filter to change speed. This
is because filters are inserted at the beginning of the filter chain.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Actually, it didn't really require that before (most work was avoided),
but some bits had to be run anyway. Separate the speed change into a
light-weight function, which merely updates already created filters, and
a heavy-weight one which messes with filter insertion.
This also happens to fix the case where the filters would "forget" the
current speed (force resampling, change speed, hit a volume control to
force af_volume insertion - it will reset speed and desync).
Since we now always run the light-weight function, remove the
af_scaletempo verbose message that is printed on speed setting. Other
than that, all setters are cheap.
|
|
|
|
|
|
|
|
|
| |
Move it (in a cosmetic sense), and also move its invocation to below all
the video handling.
All other changes remain cosmetic, including moving the framedrop
calculation code, and getting rid of the video_speed_correction
variable.
|
|
|
|
|
|
| |
We still have a sample-based buffer between filters and audio outputs.
In order to avoid cutting frames in half (which can upset receivers),
we strictly need to align the boundaries on which we cut the audio.
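The alignment itself is trivial (sketch; names are illustrative):

    /* Never cut in the middle of an audio frame (relevant for e.g.
       spdif/packed formats): round the number of samples to drop or
       insert down to a whole number of frames. */
    static int align_cut(int samples_to_cut, int samples_per_frame)
    {
        return samples_to_cut - samples_to_cut % samples_per_frame;
    }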
|
|
|
|
|
|
|
|
|
|
|
| |
Update msg.c state immediately if a terminal or logging setting is set.
Until now, this was delayed until mp[v]_initialize() was called. When
using the client API, you could easily miss logged error messages, even
when logging was initialized early on by calling
mpv_request_log_messages().
(Properties can't be used for this either, because properties do not
work before mpv_initialize().)
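Example of the client API pattern this change benefits - requesting log
messages right after mpv_create(), before mpv_initialize() (a minimal
sketch using the public API):

    #include <mpv/client.h>
    #include <stdio.h>

    int main(void)
    {
        mpv_handle *ctx = mpv_create();
        /* request log messages before mpv_initialize(); with this
           change, errors from early setup are no longer lost */
        mpv_request_log_messages(ctx, "info");
        mpv_initialize(ctx);
        while (1) {
            mpv_event *ev = mpv_wait_event(ctx, -1);
            if (ev->event_id == MPV_EVENT_LOG_MESSAGE) {
                mpv_event_log_message *msg = ev->data;
                printf("[%s] %s", msg->prefix, msg->text);
            } else if (ev->event_id == MPV_EVENT_SHUTDOWN) {
                break;
            }
        }
        mpv_terminate_destroy(ctx);
        return 0;
    }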
|
|
|
|
| |
Commit 49d94853 worked only at the start of playback.
|
|
|
|
|
|
|
|
|
|
| |
Discontinuities (like toggling fullscreen) can cause multiple audio
frames to be dropped in succession, which sounds very weird. It's better
to drop
some video frames instead to compensate for larger desyncs.
We roughly base it on the maximum allowed speed changes (audio change is
"additional" to the video change to account for deviations when playing
at max. video speed change).
|
|
|
|
|
|
|
| |
update_av_diff() works on the timestamps, while time_left is in real
time. When playing at a speed other than 1, these are very different,
and cause
the A/V difference to jitter. Fix this by scaling the expected A/V
desync to the correct range.
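The scaling itself is just the playback speed factor (sketch):

    /* time_left is measured in real time, the A/V difference in playback
       (timestamp) time; at 2x speed one real second covers two seconds
       of timestamps, so convert before using it as the expected desync. */
    static double expected_desync(double time_left_realtime, double speed)
    {
        return time_left_realtime * speed;
    }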
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This didn't show up in cases where the frame pattern has a cycle of 1
or 2, as is the case with 24-on-24 fps or 24-on-60 fps. It did show
up with 25-on-60 fps. (We don't slow down 25 fps video to 24 on default
settings.)
In this case, we must not add the timing error of the next frame to the
A/V difference estimation of the current frame. Use the previous timing
error instead.
This is another bug resulting from the confusion about whether we
calculate parameters for the currently playing frame, or the one we're
about to queue.
|
|
|
|
|
| |
Does what the manpage says. This is a replacement for incrementing the
dropped frame counter (see previous commit).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Commit a1315c76 broke this slightly. Frame drops got counted multiple
times, and also vo.c was actually trying to "render" the dropped frame
over and over again (normally not a problem, since frames are always
queued "tightly" in display-sync mode, but could have caused 100% CPU
usage in some rare corner cases).
Do not repeat already dropped frames, but still treat new frames with
num_vsyncs==0 as dropped frames. Also, strictly count dropped frames in
the VO. This means we don't count "soft" dropped frames anymore (frames
that are shown, but for fewer vsyncs than intended). This will be
adjusted in the next commit.
|
|
|
|
|
|
|
| |
Bump it to 80, and 2 vsyncs. This is another measure against vsync
jitter. Admittedly this is a bit simplistic (and we should probably
estimate a stable vsync phase instead), but for now this will
do.
|
|
|
|
| |
It's annoying.
|
|
|
|
|
|
| |
It's not needed, because the additional data is not appended, but is the
total size of the audio buffer. The maximum size is the static audio
drop size (or twice that, if the audio is duplicated).
|
|
|
|
|
|
|
|
| |
Calculate the A/V difference directly in the display sync code, instead
of the awkward current way, which reuses the fields for audio sync.
We still set time_frame, because it makes falling back to audio sync
somewhat smoother.
|
|
|
|
|
|
|
| |
When dropping or repeating frames, we essentially influence when the
frame after the next frame will be shown, not the next frame. This led
to dropping/repeating frames 2 times, because the A/V difference had a
delay of one frame. Compensate for it with the expected value.
|