| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
| |
It just causes annoying artifacts.
Interestingly, this means holding down the frame-stepping key (".") will
play video with interpolation disabled.
|
|
|
|
|
| |
This is not needed. It was used only temporarily in a development
branch, and is a leftover from earlier rebasing.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If this mode is enabled, the player tries to strictly synchronize video
to display refresh. It will adjust playback speed to match the display,
so if you play 23.976 fps video on a 24 Hz screen, playback speed is
increased by approximately 1/1000. Audio will be resampled to keep up
with playback.
This is different from the default sync mode, which will sync video to
audio, with the consequence that video might skip or repeat a frame once
in a while to make video keep up with audio.
This is still unpolished. There are some major problems as well; in
particular, mkv VFR files won't work well. The reason is that Matroska
is terrible and rounds timestamps to milliseconds. This makes it rather
hard to guess the framerate of a section of video that is playing. We
could probably fix this by just accepting jittery timestamps (instead
of explicitly disabling the sync code in this case), but I'm not ready
to accept such a solution yet.
Another issue is that we are extremely reliant on OS video and audio
APIs working in an expected manner, which of course is not too often
the case. Consequently, the new sync mode is a bit fragile.
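(As a rough illustration of the arithmetic involved, with made-up names;
the real implementation is more involved:)

    #include <math.h>

    /* Hypothetical sketch: the factor needed to sync 23.976 fps video to
     * a 24 Hz display is display_fps / video_fps = 24 / 23.976 ~= 1.001. */
    double display_sync_speed(double user_speed, double display_fps,
                              double video_fps)
    {
        if (video_fps <= 0 || display_fps <= 0)
            return user_speed;
        double ratio = display_fps / video_fps;
        /* Only apply tiny corrections; otherwise keep normal audio sync. */
        if (fabs(ratio - 1.0) < 0.01)
            return user_speed * ratio;  /* audio is resampled by this factor */
        return user_speed;
    }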
|
|
|
|
|
|
|
|
|
|
| |
For video sync, we want separate playback speed controls for user-
requested speed and the "correction" speed for video timing. Further, we
use this separation to make sure that only a resampler is inserted when
playback speed is changed purely for video sync correction.
As of this commit, this is basically inactive code. It's just
preparation for the video sync code (the following commit).
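(A sketch of the separation, using hypothetical names rather than the
actual mpv fields:)

    /* Hypothetical sketch: total audio speed is the product of both parts. */
    struct speed_state {
        double user_speed;        /* set by the user via the speed property */
        double sync_correction;   /* ~1.0, set only by the video sync code */
    };

    static double total_speed(const struct speed_state *s)
    {
        return s->user_speed * s->sync_correction;
    }
    /* A plain resampler suffices when only sync_correction differs from 1.0;
     * user speed changes may want a pitch-preserving filter instead. */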
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In addition to taking the average, this tries to use the demuxer FPS to
eliminate jitter, and applies some other heuristics to check if the
result is sane.
This code will also be used for the display sync code (it will actually
make use of the require_exact parameter).
(The value of doing this over keeping the simpler demux_mkv hack is
somewhat questionable. But at least it allows us to deal with other
container formats that use jittery timestamps, such as mp4 remuxed
from mkv.)
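(The rough idea as a simplified sketch; the snapping tolerance is an
assumption, not the actual value:)

    #include <math.h>

    /* Average the observed frame durations, then snap the result to the
     * container FPS if it is close, removing per-frame jitter such as
     * Matroska's millisecond rounding. */
    double estimate_fps(const double *durations, int n, double container_fps)
    {
        double sum = 0;
        for (int i = 0; i < n; i++)
            sum += durations[i];
        if (n <= 0 || sum <= 0)
            return container_fps;
        double avg_fps = n / sum;
        if (container_fps > 0 && fabs(avg_fps - container_fps) < 1.0)
            return container_fps;   /* trust the demuxer-reported value */
        return avg_fps;
    }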
|
|
|
|
|
|
|
| |
Normally when there's a timestamp reset, we make audio resync to make
sure audio and video line up (again). But in video-only mode, just
setting audio to resyncing breaks EOF detection, because there's no code
which would get audio_status out of this bogus state.
|
|
|
|
| |
Minor preparation for something else.
|
| |
|
|
|
|
|
|
|
| |
Remove the exception for decoding only 1 frame if VO framedrop is
disabled. This was originally done to be able to test potential
regressions when we enabled VO framedrop and decoding 2 frames by
default. It's not needed anymore.
|
|
|
|
|
|
|
|
|
| |
If filters are disabled or reconfigured, attempt to remove and probe the
deinterlace filter again. This fixes behavior if e.g. a software deint
filter was automatically inserted, and then hardware decoding is enabled
during playback. Without this commit, initializing hw decoding would
fail because of the software filter; with this commit, it'll replace it
with the hw deinterlacer instead.
|
|
|
|
|
|
|
|
|
|
| |
Instead of calling it "future frames" and adding or subtracting 1 from
it, always call it "requested frames". This simplifies it a bit.
MPContext.next_frames had 2 added to it; this was mainly to ensure a
minimum size of 2. Drop it and assume VO_MAX_REQ_FRAMES is at least 2;
together with the other changes, this can be the exact size of the
array.
|
| |
|
|
|
|
|
|
|
| |
This is a real pain: if a quit command is received, stop_play is set to
PT_QUIT, and then other code could overwrite it, making the player not
quit. The annoying bit is that stop_play is written and read in many
places. Just not overwriting it unconditionally seems to be the best
course of action.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
draw_image_timed is renamed to draw_frame. struct frame_timing is
renamed to vo_frame. flip_page_timed is merged into draw_frame (the
additional parameters are part of struct vo_frame). draw_frame also
deprecates VOCTRL_REDRAW_FRAME, and replaces it with a method that
works for both VOs which can cache the current frame, and VOs which
need to redraw it anyway.
This is preparation to making the interpolation and (work in progress)
display sync code saner.
Lots of other refactoring, and also some simplifications.
|
|
|
|
|
|
|
|
|
|
| |
Now the VO can request a number of future frames with the last parameter
of vo_set_queue_params(). This will be helpful to fix the interpolation
code.
Note that the first frame (after playback start or seeking) will usually
not have any future frames (to make seeking fast). Near the end of the
file, the number of future frames will become lower as well.
|
|
|
|
|
| |
I don't think most of these suggestions are overly helpful. Just get rid
of them.
|
| |
|
|
|
|
| |
Broken by e00e9d65.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When showing cover art, the decoding logic pretends that the source has
an infinite number of frames. This slightly simplifies dealing with
filter data flow. It was done by feeding the same packet repeatedly to
the decoder (each decode run produces new output).
Change this by decoding once at the video initialization. This is easier
to follow, and increases robustness in case of broken images. Usually,
we try to tolerate decoding errors, so decoding normally continues, but
in this case it would just burn the CPU for no reason.
Fixes #2056.
|
|
|
|
|
| |
All this information is already output otherwise. Except the FourCC,
which lost most of its importance in mpv.
|
| |
|
|
|
|
|
|
| |
There's no need for this, it just creates more corner cases.
Also, always reset it on seeks etc.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Reduce the default tolerance for timestamp jumps from 60 to 15 seconds.
For .ts files, where ts_resets_possible (coming from AVFMT_TS_DISCONT) is
set, apply a more sophisticated heuristic. It's clear that such a file
wouldn't have a framerate below, say, 23 Hz. If the demuxer reports a
lower FPS, we allow longer PTS jumps.
This should replace long pauses on discontinuities with .ts files with
at most a short stutter.
Of course, all kinds of things could go wrong anyway if the source is
VFR, or FFmpeg's frame rate detection fails in some other way. I haven't
found such a file yet, though.
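(One plausible shape for this heuristic, as an illustrative sketch; the
scaling is an assumption, not the actual code:)

    #include <stdbool.h>

    /* Default tolerance is 15 seconds; for containers that may reset
     * timestamps (.ts), an implausibly low reported FPS (< 23 Hz) means
     * the frame rate guess is off, so allow correspondingly longer jumps. */
    double pts_jump_tolerance(bool ts_resets_possible, double demuxer_fps)
    {
        double tolerance = 15.0;
        if (ts_resets_possible && demuxer_fps > 0 && demuxer_fps < 23.0)
            tolerance *= 23.0 / demuxer_fps;
        return tolerance;
    }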
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Fixes PNG cover art not showing up immediately (for example when running
with --pause).
libavformat exports embedded cover art as a single packet. For example,
a PNG attachment simply contains the PNG image, which can be sent to the
decoder. Normally you would expect that the PNG decoder would return 1
frame for 1 packet, without any delays. But this stopped working, and it
incurs a 1 frame delay.
This is perfectly legal (even if unexpected), so let our code feed the
decoder packets until we get something back. (In theory feeding the
packet instead of a real flush packet is still somewhat questionable.)
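(Conceptually, the decode loop becomes something like the following; the
identifiers are illustrative stand-ins, not mpv's decoder interface:)

    struct decoder;
    struct packet;
    struct frame;
    /* Hypothetical helper: feed one packet, return a frame if one is ready. */
    struct frame *decoder_feed(struct decoder *dec, struct packet *pkt);

    struct frame *decode_cover_art(struct decoder *dec, struct packet *pkt)
    {
        /* Keep feeding the same attachment packet until the decoder returns
         * the image; cap the attempts so a broken file cannot make us spin. */
        for (int tries = 0; tries < 8; tries++) {
            struct frame *f = decoder_feed(dec, pkt);
            if (f)
                return f;
        }
        return NULL;
    }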
|
|
|
|
|
|
|
| |
last_av_difference can be MP_NOPTS_VALUE under certain circumstances
(like no video timestamp yet). This triggered the desync message,
because fabs(MP_NOPTS_VALUE) is quite a large value. We don't want to
show a message in this situation.
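(The fix boils down to guarding the check, roughly like this; the
threshold, the warn() helper and the MP_NOPTS_VALUE stand-in are
illustrative:)

    #include <math.h>

    #define MP_NOPTS_VALUE (-1e300)   /* stand-in for "no timestamp" */
    void warn(const char *msg);       /* illustrative logging helper */

    void maybe_warn_desync(double last_av_difference)
    {
        if (last_av_difference == MP_NOPTS_VALUE)
            return;                   /* no video timestamp yet, no warning */
        if (fabs(last_av_difference) > 0.5)
            warn("possible audio/video desync");
    }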
|
|
|
|
|
| |
It was called in only 2 places, one of them redundant (the container FPS
cannot change).
|
|
|
|
|
| |
These are basically MPlayer leftovers, and barely useful due to being
redundant with other messages. The FPS message is used somewhere else.
|
|
|
|
|
|
|
|
|
|
|
| |
libavcodec makes it impossible to distinguish between dropped frames
(requested with AVCodecContext.skip_frame) and cases where the decoder
simply does not return a frame by default (such as with VP9, which has invisible
reference frames).
This confuses users when decoding VP9 video. It's basically a cosmetic
issue, so just paint it over by ignoring them if framedropping is
disabled.
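(The paint-over amounts to roughly this; names are illustrative:)

    #include <stdbool.h>

    /* Count "decoder consumed a packet but returned no frame" as a dropped
     * frame only when framedropping was actually requested. */
    void account_decoded_packet(bool framedrop_enabled, bool got_frame,
                                int *dropped_frames)
    {
        if (!got_frame && framedrop_enabled)
            (*dropped_frames)++;   /* otherwise: e.g. an invisible VP9 frame */
    }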
|
|
|
|
|
|
| |
When playing cover art, it conceptually reaches EOF as soon as the image
was put on the VO, causing the EOF message to be repeated every time new
audio was decoded. Just silence the message.
|
|
|
|
| |
Signed-off-by: wm4 <wm4@nowhere>
|
|
|
|
|
|
|
|
|
| |
Use OPT_CHOICE_C() instead of the custom parser. The functionality is
pretty much equivalent.
(On a side note, it seems --video-stereo-mode can't be removed, because
it controls whether to "reduce" stereo video to mono, which is also the
default. In fact I'm not sure how this should be handled at all.)
|
|
|
|
|
| |
Accidentally broken in 79779616; we really need to check for true EOF,
not just whether there are no frames yet.
|
| |
|
|
|
|
|
| |
Instead of touching the 2-entry queue in mpctx->next_frame directly,
move some of it to functions.
|
|
|
|
|
|
|
| |
"Non-monotonic" isn't even 100% correct; it's missing "strictly" (for
briefness I guess), and also the message is printed if the PTS jumps
forward. So just print something that is likely a bit easier to
understand.
|
|
|
|
| |
It obviously needs to be updated after the VO was destroyed.
|
|
|
|
|
|
|
|
| |
For some reason there were two points in the code where it warned
against non-monotonic video PTS. The one in video.c triggered on PTS
going backwards or making large jumps forwards, while dec_video.c
triggered on PTS going backwards or PTS not changing. Merge them into a
single check, which warns against all cases.
|
|
|
|
| |
Meh.
|
|
|
|
|
|
| |
Broken drivers are an issue rather often. Maybe this gives the user an
idea that this could be the reason. (We can't dump much more info on an
80x24 terminal.)
|
|
|
|
|
| |
This value is not necessarily trustworthy (it might change) and can be
0.
|
|
|
|
|
|
|
| |
In ancient times, this was needed because it was not default, and many
VOs had problems with it. But it was always default in mpv, and all VOs
are required to deal with it. Also, running --fixed-vo=no is not useful
and just creates weird corner cases. Get rid of it.
|
|
|
|
|
| |
This allows us to plot the difference between video timestamps, and the
adjusted video timestamps due to syncing video to audio speed.
|
|
|
|
| |
Just minor things.
|
|
|
|
|
|
|
|
| |
This reverts commit 7b3feecbc23e3e0b0d9cf66f02af53d127a0b681.
It's broken: hr-seek never ends at a video position before seek pts.
Not sure what I was thinking, although it did work anyway when
artificially forcing a video frame to display before seek pts.
|
|
|
|
|
|
|
|
| |
At least there is _some_ problem if this happens. It would mean that
audio is playing slower than video. Normally, video is synced to audio,
so if audio stops playback completely, video will not advance at all.
But when using things like --autosync, this kind of desync is entirely
possible.
|
|
|
|
|
|
|
|
|
|
|
|
| |
Move the update_avsync_before_frame() call further down. Moving it
closer to where the time_frame value is used (and which the function
updates) should make the code more readable. With this change, there's
no need anymore to reset the time_frame value on the video reconfig
path.
Move the update_avsync_after_frame() up. Now no meaningful amount of
time passes since the previous get_relative_time() call anymore, and the
second one can be removed.
|
|
|
|
|
|
| |
We use double for these things everywhere; only this code didn't. It
likely doesn't matter much, and this code is for an optional feature
too.
|
|
|
|
|
|
| |
mpctx->audio_delay always has the same value as opts->audio_delay. (This
was not the case a long time ago, when the audio-delay property didn't
actually write to opts->audio_delay. I think.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This allows seeking audio between two video frames that are relatively
far away.
The implementation of this is a bit subtle. It pretends the audio
position is different, and the actual PTS adjustment happens in audio.c
with this line:
sync_pts -= mpctx->audio_delay - mpctx->delay;
Effectively this is the same as setting sync_pts to hrseek_pts after
this line, though. (I'm actually not sure if this could be written in a
more straightforward way; probably yes.)
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
We still need to send the VO a duration in these cases. Disabling
framedrop has logically absolutely nothing to do with these cases; it
was overlooked in commit 918b06c4.
So we always send the frame duration (or a guess for it), and check
whether framedropping is actually enabled in the VO code. (It would
be cleaner to send framedrop as a flag, but I don't care about that
right now.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The last video frame is another case that has a separate code path,
although it's pretty similar to the one in commit 73e5aa87. Fix this
in a different way, which also takes care of the last frame case,
although without context the code becomes slightly more tricky.
As further cleanup, move the decision about framedropping itself to
the same place, so the check in vo.c becomes much simpler. The check
for the vo->driver->encode flag, which is removed completely, was
redundant too.
Fixes #1480.
|
|
|
|
|
|
|
|
|
|
|
| |
If the video format changes (e.g. different frame size), a special code
path is entered to wait until the currently displayed frame is done.
Otherwise, the frame before the change would be destroyed by the
vo_reconfig() call.
This code path didn't respect --untimed; correct this.
Fixes #1475.
|
|
|
|
|
|
| |
Using edl or --merge-files with .avi files didn't work, because the DTS
was not offset. Only the PTS was adjusted, which led to nonsense
timestamps.
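(The gist of the fix as a sketch; the struct is a minimal stand-in for
mpv's demux packet:)

    #define MP_NOPTS_VALUE (-1e300)   /* stand-in for "no timestamp" */
    struct packet { double pts, dts; };

    /* When splicing segments (EDL, --merge-files), both timestamps need the
     * same shift; adjusting only the PTS leaves DTS-only (.avi) streams with
     * nonsense timestamps. */
    void offset_packet_timestamps(struct packet *pkt, double offset)
    {
        if (pkt->pts != MP_NOPTS_VALUE)
            pkt->pts += offset;
        if (pkt->dts != MP_NOPTS_VALUE)
            pkt->dts += offset;
    }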
|
|
|
|
|
|
|
| |
There are currently 568 pixel formats (actually fewer, but the namespace
is this big), and for each format elaborate synchronization was done to
call it synchronously on the VO. This is completely unnecessary, and we
can do with just a single call.
|
|
|
|
|
|
|
|
| |
This is basically a hack; but apparently a needed one, since many
vapoursynth filters insist on having an FPS set.
We need to apply the FPS override before creating the filters. Also
change some terminal output related to the FPS value.
|
|
|
|
|
|
|
|
|
|
| |
Most of this is explained in the code comments. This change should
improve performance with vapoursynth, especially if concurrent requests
are used.
This should change nothing if vf_vapoursynth is not in the filter chain,
since non-threaded filters obviously cannot asynchronously finish
filtering of frames.
|
|
|
|
|
| |
This is simpler than setting the context after VO creation, which
requires the code to check for the context on every entrypoint.
|
|
|
|
|
| |
Not particularly elegant, but better than adding more and more stuff to
the relevant function signatures.
|
|
|
|
| |
This typo has been around for over a year. Oops.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|