| Commit message | Author | Age | Files | Lines |
| |
|
|
|
|
| |
Fixes bogus frame drop counter in cover art mode.
|
|
|
|
|
|
|
|
|
| |
Another crappy fix for timestamp reset issues. This time, we try to fix
files which have very weird but legitimate frame durations, such as
cdgraphics. It can have many short frames, but once in a while there are
potentially very long frames.
Fixes #3027.
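A heuristic along these lines can be sketched in C: treat a PTS jump as a discontinuity only if it goes backwards or exceeds a generous maximum frame duration, so formats like cdgraphics with occasional very long frames still pass. The threshold and function name below are illustrative assumptions, not mpv's actual code.

```c
#include <stdbool.h>

// Illustrative heuristic, not mpv's actual code: a PTS jump counts as a
// discontinuity only if it goes backwards or exceeds a generous maximum
// frame duration, so formats with rare but very long frames (such as
// cdgraphics) are still accepted.
#define MAX_FRAME_DURATION 10.0   // seconds; assumed tolerance

static bool is_pts_discontinuity(double prev_pts, double next_pts)
{
    double diff = next_pts - prev_pts;
    // Backward jumps or absurdly long gaps are treated as timestamp resets.
    return diff < 0 || diff > MAX_FRAME_DURATION;
}
```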
|
|
|
|
| |
Fixes #3069.
|
|
|
|
|
| |
Switches to a black window if --force-window is used while coverart
"video" is playing.
|
|
|
|
|
| |
In particular, this won't overwrite the playback PTS in coverart mode,
which actually fixes relative seeks.
|
| |
|
|
|
|
| |
Damn.
|
|
|
|
|
|
| |
The active texture and some pixelstore parameters are now always reset
to defaults when entering and leaving the renderer. Could be important
for libmpv.
|
|
|
|
| |
Truly dumb bug introduced with the previous commit.
| |
Commit 382bafcb changed the behavior for ab-loop-a. This commit changes
ab-loop-b so that the behavior is symmetric.
Adjust the OSD rendering accordingly to the two changes.
Also update mentions of the "ab_loop" command to the now preferred
"ab-loop".
|
| |
|
|
|
|
| |
Completely useless, except for some special purposes.
|
|
|
|
| |
Just a theoretical issue, most likely.
| |
The check whether video is ready yet was done only in STATUS_FILLING.
But it also switched to STATUS_READY, which means the next time
fill_audio_out_buffers() was called, audio would actually be started
before video.
In most situations, this bug didn't show up, because it was only
triggered if the demuxer didn't provide video packets quickly enough,
but did for audio packets.
Also log when audio is started.
(I hate fill_audio_out_buffers(), why did I write it?)
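A minimal model of the ordering bug, with hypothetical names (mpv's real fill_audio_out_buffers() is far more involved): the buggy gate checked video readiness only while still in STATUS_FILLING, so once the state had advanced to STATUS_READY a later call would start audio regardless.

```c
#include <stdbool.h>

// Toy model of the bug described above; enum and functions are
// illustrative, not mpv's actual audio code.
enum fill_status { STATUS_SYNCING, STATUS_FILLING, STATUS_READY };

// Buggy: video readiness was only checked while still FILLING. After the
// state advanced to READY, a later call would start audio even if video
// had not produced a frame yet.
static bool may_start_audio_buggy(enum fill_status st, bool video_ready)
{
    if (st == STATUS_FILLING && !video_ready)
        return false;
    return st == STATUS_READY;
}

// Fixed: the video-readiness check also applies in STATUS_READY.
static bool may_start_audio_fixed(enum fill_status st, bool video_ready)
{
    return st == STATUS_READY && video_ready;
}
```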
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Strictly schedule an update in regular intervals as long as either the
stream cache or the demuxer is prefetching. Don't update unconditionally
just because the stream cache is enabled ("idle != -1") or cache-related
properties are observed (mp_client_event_is_registered()).
Also, the "idle" variable was awkward; get rid of it with equivalent
code.
|
| |
Calculate the buffering percentage in the same code which determines
whether the player is or should be buffering. In particular, the
percentage and the buffering state can no longer get slightly out of
sync from calling DEMUXER_CTRL_GET_READER_STATE again and combining the
result with the previously determined buffering state.
Now it's also easier to guarantee that the buffering state is updated
properly.
Add some more verbose output as well.
(Damn I hate this code, why did I write it?)
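The idea can be sketched as deriving both values from one reader-state snapshot, so they can never diverge. Struct and field names here are illustrative, not mpv's actual demux API.

```c
#include <stdbool.h>

// Sketch: compute the buffering flag and the displayed percentage from the
// same reader-state snapshot, so the two cannot get out of sync. All names
// are illustrative assumptions.
struct reader_state {
    double ts_duration;   // seconds of data buffered ahead
    bool underrun;        // reader ran out of packets
};

struct buffering_info {
    bool buffering;
    int percent;          // 0..100 of the low-water mark
};

static struct buffering_info get_buffering(struct reader_state s,
                                           double low_water_secs)
{
    struct buffering_info r;
    r.buffering = s.underrun || s.ts_duration < low_water_secs;
    double frac = s.ts_duration / low_water_secs;
    if (frac > 1.0)
        frac = 1.0;
    if (frac < 0.0)
        frac = 0.0;
    r.percent = (int)(frac * 100);
    return r;
}
```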
|
| |
demux_playlist.c recognizes if the source stream points to a directory,
and adds its directory entries. Until now, only 1 level was added.
During playback, further directory entries could be resolved as
directory paths were "played".
While this worked fine, it led to frequent user confusion, because
playlist resuming and other things didn't work as expected. So just
recursively scan everything.
I'm unsure whether it's a good fix, but at least it gets rid of the
complaints. (And probably will add others.)
|
|
|
|
| |
Possibly slightly more useful/intuitive.
|
|
|
|
|
|
| |
This seems to cause problems, so only use it if H264_E is not available.
Fixes #3059.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
This wasn't updated over multiple iterations.
|
|
|
|
|
|
|
| |
Commit a9bd4535 generally changed how properties are set to string values.
This actually broke the fallback for non-string properties, because the
set string action was redirected directly to the property, instead of
the generic handler and its fallback code.
|
|
|
|
|
| |
OF COURSE Libav doesn't have AV_PICTURE_TYPE_NONE. Why the fuck would
it?
|
|
|
|
|
| |
Future code should always use mp_image_{to,from}_av_frame(). Everything
else is way too messy and fragile.
|
|
|
|
|
|
|
|
| |
As of ffmpeg git master, only the libavdevice decklink wrapper supports
this. Everything else has dropped support.
You're now supposed to use AV_CODEC_ID_WRAPPED_AVFRAME, which works a
bit differently. Normal AVFrames should still work for these encoders.
|
|
|
|
|
|
| |
This potentially makes it more efficient, and actually makes it simpler.
Yes, AV_PICTURE_TYPE_NONE is the default for pict_type.
|
|
|
|
|
| |
What mp_image_to_av_frame_and_unref() should have been. (The _unref
variant is still useful though.)
|
|
|
|
| |
Why was this so complex.
|
|
|
|
| |
In both directions.
|
|
|
|
| |
Should help debug problems with AC3 passthrough not working.
|
| |
|
|
|
|
| |
Fixes #3053.
|
|
|
|
|
|
|
| |
And remove the same thing from the client API code.
The command.c code has to deal with many specialized M_PROPERTY_SET_*
actions, and we bother with a subset only.
|
|
|
|
| |
In that case, it merely changes the underlying option value.
|
|
|
|
|
|
|
|
|
| |
If a mpv_node wrapped a string, the behavior was different from calling
mpv_set_property() with MPV_FORMAT_STRING directly. Change this.
The original intention was to be strict about types if MPV_FORMAT_NODE
is used. But I think the result was less than ideal, and the same change
towards less strict behavior was made to mpv_set_option() ages ago.
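The relaxed behavior can be sketched with a toy dispatcher: a node wrapping a string is unwrapped and takes exactly the same path as a direct string set. The types below only mirror the shape of the libmpv client API; all names are illustrative, not its actual implementation.

```c
#include <string.h>

// Toy dispatcher sketching the relaxed behavior; types mirror the shape of
// the libmpv client API only, and all names are illustrative.
enum fmt { FMT_STRING, FMT_INT64 };

struct node {
    enum fmt format;
    union { const char *string; long long int64; } u;
};

static char last_string_set[64];

static int set_property_string(const char *name, const char *val)
{
    (void)name;
    strncpy(last_string_set, val, sizeof(last_string_set) - 1);
    return 0;
}

static int set_property_node(const char *name, const struct node *n)
{
    // Unwrap a string node: identical to a direct string set.
    if (n->format == FMT_STRING)
        return set_property_string(name, n->u.string);
    /* ... strictly typed handling for the other formats ... */
    return 0;
}
```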
|
|
|
|
| |
Copy & paste bug.
|
|
|
|
| |
Probably fixes #3049.
|
|
|
|
|
| |
It's not used yet anywhere. Pushing this now so switching between
branches is less bothersome.
|
|
|
|
| |
In particular get rid of the semi-deprecated ":" separator.
|
|
|
|
| |
Requested in #3040.
|
|
|
|
|
|
|
|
|
| |
Since what we're doing is a linear blend of the four colors, we can just
do it for free by using GPU sampling.
This requires significantly fewer texture fetches and calculations to
compute the final color, making it much more efficient. The code is also
much shorter and simpler.
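The "free" blend relies on a standard property of bilinear texture filtering: a single fetch at a fractional coordinate already returns the linear blend of the four neighboring texels, computed by the sampler hardware. A plain-C model of that arithmetic (single channel, illustrative):

```c
// One bilinear fetch at fractional offset (fx, fy) between four texels
// returns exactly their linear blend; the GPU sampler computes this for
// free. This models the arithmetic for a single channel.
static double bilinear(double c00, double c10, double c01, double c11,
                       double fx, double fy)
{
    double top = c00 * (1.0 - fx) + c10 * fx;
    double bot = c01 * (1.0 - fx) + c11 * fx;
    return top * (1.0 - fy) + bot * fy;
}
```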
|
|
|
|
|
|
|
|
|
| |
Now it will always be able to seek back to the start, even if the index
is sparse or misses the first entry.
This can be achieved by reusing the logic for incremental index
generation (for files with no index), and start time probing (for making
sure the first block is always indexed).
|
| |
| |
Until now, we have made the assumption that a driver will use only 1
hardware surface format. The format is dictated by the driver (you
don't create surfaces with a specific format - you just pass a
rt_format and get a surface that will be in a specific driver-chosen
format).
In particular, the renderer created a dummy surface to probe the format,
and hoped the decoder would produce the same format. Due to a driver
bug this required a workaround to actually get the same format as the
driver did.
Change this so that the format is determined in the decoder. The format
is then passed down as hw_subfmt, which allows the renderer to configure
itself with the correct format. If the hardware surface changes its
format midstream, the renderer can be reconfigured using the normal
mechanisms.
This calls va_surface_init_subformat() each time after the decoder
returns a surface. Since libavcodec/AVFrame has no concept of sub-
formats, this is unavoidable. It creates and destroys a derived
VAImage, but this shouldn't have any bad performance effects (at
least I didn't notice any measurable effects).
Note that vaDeriveImage() failures are silently ignored as some
drivers (the vdpau wrapper) support neither vaDeriveImage, nor EGL
interop. In addition, we still probe whether we can map an image
in the EGL interop code. This is important as it's the only way
to determine whether EGL interop is supported at all. With respect
to the driver bug mentioned above, it doesn't matter which format
the test surface has.
In vf_vavpp, also remove the rt_format guessing business. I think the
existing logic was a bit meaningless anyway. It's not even a given
that vavpp produces the same rt_format for output.
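With the surface format carried in the image parameters, a midstream change reduces to a plain parameter comparison that triggers renderer reconfiguration. Field and function names here are illustrative, not mpv's actual structs.

```c
#include <stdbool.h>

// Sketch: storing the underlying surface format alongside the hwaccel
// wrapper format lets a midstream change be detected by comparing
// parameters. Names are illustrative, not mpv's actual mp_image_params.
struct img_params {
    int imgfmt;      // e.g. the hwaccel wrapper format
    int hw_subfmt;   // underlying surface format, e.g. NV12
};

static bool needs_reconfig(struct img_params cur, struct img_params next)
{
    return cur.imgfmt != next.imgfmt || cur.hw_subfmt != next.hw_subfmt;
}
```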
|
|
|
|
|
|
| |
Reverting because the use of deprecated API has been fixed.
This reverts commit d0238711dc776aeee2509452202ba4748f863ee4.
|
| |
| |
In the past, --video-unscaled also disabled zooming and aspect ratio
corrections. But this didn't make much sense in terms of being a useful
option. The new behavior just sets the initial video size to be
unscaled, but it's still affected by zoom commands and aspect ratio
corrections.
To get the old behavior back, --video-aspect=0 --video-zoom=0 need to be
added as well (in the general case). Most of the time it should not make
a difference though.
Also, there seems to have been some additional dst_rect clamping code
inside src_dst_split_scaling that seemed neither necessary nor ever
triggered. (The code immediately above it already makes sure to crop
the video if it's larger than the dst_rect.)
No idea why it was there, but I just removed it.
|
|
|
|
| |
Never needed them. This makes the code slightly more readable.
|
| |
Apply basic transformations like rotation by 90° and mirroring when
sampling from the source textures. The original idea was making this
part of img_tex.transform, but this didn't work: lots of code plays
tricks on the transform, so manipulating it is not necessarily
transparent, especially when width/height are switched. So add a new
pre_transform field, which is strictly applied before the normal
transform.
This fixes most glitches involved with rotating the image.
Cropping and rotation are now weirdly separated, even though they could
be done in the same step. I think this is not much of a problem, and
has the advantage that changing panscan does not trigger FBO
reallocations (I think...).
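The strict ordering can be modelled as two affine transforms applied in a fixed sequence: the pre-transform (rotation/mirroring) first, then the regular transform (crop/scale), so code that manipulates the main transform can never disturb the rotation step. Types and names are illustrative, not vo_opengl's.

```c
// Sketch of a strictly ordered transform pair. Names are illustrative,
// not vo_opengl's actual types.
struct mat2 {
    double m[2][2];   // linear part
    double t[2];      // translation
};

static void apply(const struct mat2 *tr, double v[2])
{
    double x = tr->m[0][0] * v[0] + tr->m[0][1] * v[1] + tr->t[0];
    double y = tr->m[1][0] * v[0] + tr->m[1][1] * v[1] + tr->t[1];
    v[0] = x;
    v[1] = y;
}

// pre runs strictly before the main transform, so scaler code that plays
// tricks on the main transform never disturbs the rotation/mirror step.
static void map_coord(const struct mat2 *pre, const struct mat2 *main_tr,
                      double v[2])
{
    apply(pre, v);
    apply(main_tr, v);
}
```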
|
|
|
|
|
|
| |
I'm an idiot.
Fixes #3032.
|
|
|
|
|
|
| |
Just a bridge to the option.
(Did I ever mention that I hate the property/option separation?)
|
| |
|
|
|
|
|
|
|
| |
...unless no files match. Fixes #2892.
To get the old behaviour back, use something like:
zstyle ':completion:*:*:mpv:*' tag-order
|
|
|
|
|
|
| |
Typically happens with some implementations if no context is current,
or is otherwise broken. This is particularly relevant to the opengl_cb
API, because the API user will have no other indication what went wrong.
|
|
|
|
|
|
| |
Commit f009d16f accidentally broke it.
Thanks to RiCON for noticing and testing.
|
|
|
|
| |
Why not.
|
| |
The underlying intention of this code is to make changing
--videotoolbox-format at runtime work. For this reason, the format can't
just be statically setup, but must be read from the option at runtime.
This means the format is not fixed anymore, and we have to make sure the
renderer is properly reinitialized if the format changes. There is
currently no way to trigger reinit on this level, which is why the
mp_image_params.hw_subfmt field was introduced.
One sketchy thing remains: normally, the renderer is supposed to be
involved with VO format negotiation, which would ensure that the VO
can take the format at all. Since the hw_subfmt is not part of this
format negotiation, it's implied the get_vt_fmt() callback only
returns formats supported by the renderer. This is not necessarily
clear because vo_opengl checks this with converted_imgfmt separately.
None of this matters in practice though, because we know all formats
are always supported.
(This still requires somehow triggering decoder reinit to make the
change effective.)
|
|
|
|
|
|
|
|
|
|
|
| |
For hwaccel formats, mp_image will merely point to a hardware surface
handle. In these cases, the mp_image_params.imgfmt field describes the
format insufficiently, because it mostly only describes the type of the
hardware format, not its underlying format.
Introduce hw_subfmt to describe the underlying format. It makes sense to
use it with most hwaccels, though for now it will be used with the
following commit only.
|
|
|
|
|
|
|
|
|
|
| |
Until now, the presence of the process_image() callback was used to set
up a delay queue with a hardcoded size. Change this to a vd_lavc_hwdec
field instead, so the decoder can explicitly set this if it's really
needed.
Do this so process_image() can be used in the VideoToolbox glue code for
something entirely unrelated.
|
|
|
|
|
|
|
|
|
|
| |
Some functions which expected a codec name (i.e. the name of the video
format itself) were passed a decoder name. Most "native" libavcodec
decoders have the same name as the codec, so this was never an issue.
This should mean that e.g. using "--vd=lavc:h264_mmal --hwdec=mmal"
should now actually enable native surface mode (instead of doing copy-
back).
|
| |
The sync-by-display mode relies on using the vsync statistics for
timing. As a consequence discontinuities must be handled somehow. Until
now we have done this by completely resetting these statistics.
This can be somewhat annoying, especially if the GL driver's vsync
timing is not ideal. So after e.g. a seek it could take a second until
it locked back to the proper values.
Change it not to reset all statistics. Some state obviously has to be
reset, because it's a discontinuity. To make it worse, the driver's
vsync behavior will also change on such discontinuities. To compensate,
we discard the timings for the first 2 vsyncs after each discontinuity
(via num_successive_vsyncs). This is probably not fully ideal, and
num_total_vsync_samples handling in particular becomes a bit
questionable.
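The retained-statistics idea can be sketched as follows: a discontinuity only resets the successive-vsync counter, and the first 2 samples after it are dropped instead of wiping the accumulated timing data. Names mirror the description above but are illustrative, not mpv's actual code.

```c
// Sketch: keep long-term vsync statistics across a discontinuity, but
// discard the first samples after it, since driver timing is unreliable
// right after e.g. a seek. Illustrative, not mpv's actual code.
#define DISCARD_AFTER_RESET 2

struct vsync_stats {
    double sum;                 // retained across discontinuities
    int num_samples;
    int num_successive_vsyncs;  // reset on each discontinuity
};

static void vsync_discontinuity(struct vsync_stats *s)
{
    s->num_successive_vsyncs = 0;   // the statistics themselves are kept
}

static void vsync_sample(struct vsync_stats *s, double duration)
{
    s->num_successive_vsyncs++;
    if (s->num_successive_vsyncs <= DISCARD_AFTER_RESET)
        return;                     // drop the first samples after a reset
    s->sum += duration;
    s->num_samples++;
}
```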
|
|
|
|
|
| |
Most players will interpret HTML-style tags (aka srt) in almost any kind
of text subtitles; make mpv do this too.
|
| |
|
|
|
|
| |
It's the same functionally.
|