| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
| |
Repeating frames (for display-sync) is not supposed to render the entire
frame again. When using hardware decoding, it unfortunately did: the
renderer uses the frame ID to check whether the frame data changed, and
unmapping the hwdec frame clears it.
Essentially reverts commit 761eeacf5407cab07. Back then I probably
thought it would be a good idea to release the hwdec image quickly in
order to return it to the decoder, but the frames are referenced anyway.
This should increase performance and reduce GPU work.
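Roughly, the mechanism looks like this (a minimal sketch; all names
here are hypothetical, not mpv's actual identifiers):

    struct renderer_state {
        uint64_t cached_frame_id;  // ID of the frame whose output is cached
    };

    static bool frame_needs_render(struct renderer_state *r, uint64_t frame_id)
    {
        // A repeated frame keeps its ID, so the cached result can be
        // reused. Unmapping the hwdec frame early cleared the ID, which
        // made every repeat look like new data and forced a re-render.
        if (frame_id && frame_id == r->cached_frame_id)
            return false;
        r->cached_frame_id = frame_id;
        return true;
    }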
|
|
|
|
| |
Not sure why there was 110, or why there is even a default.
|
|
|
|
|
|
|
|
|
| |
Normally such code is disabled by have_mglsl==false in
check_gl_features(), but apparently not this one.
Just fix it. This also seems more readable.
Fixes #5069.
|
|
|
|
|
|
|
| |
Apparently this is required, but the code didn't check for it. To be fair,
this was tested by creating a compatibility context and pretending it's
GL 2.1. GL_ARB_shader_storage_buffer_object actually requires GL 4.0 or
up, but GL_ARB_uniform_buffer_object requires only GL 2.0.
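A hedged sketch of the kind of check this implies (helper names are
made up; mpv's real logic lives in check_gl_features()):

    // gl_version is e.g. 210 for GL 2.1; has_ext() is a hypothetical
    // lookup in the extension string.
    bool have_ssbo = gl_version >= 400 &&
                     has_ext("GL_ARB_shader_storage_buffer_object");
    bool have_ubo  = gl_version >= 200 &&
                     has_ext("GL_ARB_uniform_buffer_object");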
|
|
|
|
|
| |
When the ifdeffery for the frame_params API was added, the new code
accidentally didn't include this.
|
|
|
|
|
|
|
|
|
|
|
| |
vo_gpu.c will call gl_video_icc_auto_enabled() to check whether it
should retrieve the ICC profile. But the value returned by this function
will be outdated, because gl_video_update_options() is not called yet.
Change the order of function calls so that this is done after updating
the options.
(This is fairly chaotic, but I guess this code will be refactored a
dozen times anyway in the future.)
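In sketch form, the fix is just reordering the calls (the two
gl_video_* names are from the commit text; the helper is hypothetical):

    // Before: gl_video_icc_auto_enabled() consulted stale options.
    // After: update the options first, then ask.
    gl_video_update_options(p->renderer);
    if (gl_video_icc_auto_enabled(p->renderer))
        vo_query_icc_profile(p);  // hypothetical ICC retrieval helper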
|
|
|
|
|
|
|
|
| |
All this code used to be required by the old variants of the libavcodec
hw decoding APIs. Almost all of that is gone, although the mediacodec
API unfortunately still pulls in some old stuff (but not all of it).
(mediacodec build/functionality is untested, but should work.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
All of this was dead code and completely unused.
get_buffer2_hwdec() is the biggest chunk. One unfortunate thing about it
is that, while it was active, it could perform a software fallback much
faster, because it didn't have to wait until a full frame was decoded
(it decoded a full frame itself, while the current code has to decode
many more frames due to the codec delay, because it waits until the API
returns a decoded frame). We should probably restore the faster
fallback, but since it's an optional optimization, and the current
behavior doesn't change with the removal of this code, don't actually do
anything about it.
|
|
|
|
|
| |
And move the remaining code (just 2 struct constant definitions) to
vd_lavc.c.
|
| |
|
|
|
|
| |
See #5062.
|
|
|
|
| |
Should fix #5062.
|
|
|
|
|
|
|
|
|
|
|
| |
This is where it should be. It only wasn't because of an old libavcodec
bug that returned the side data only on every IDR. This required some
sort of caching, which is now dropped. (mp_image wouldn't have been able
to do this kind of caching, because this code is stateless.) We don't
support those old libavcodec versions anymore, so the caching is no
longer needed.
Also move initialization of rotation/stereo stuff to dec_video.c.
|
|
|
|
| |
(Not tested on Windows and OSX.)
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
This simply didn't work. Unlike cuda-copy, this is a true hwaccel, and
obviously we need to provide it a device.
Implement this in a relatively generic way, which can probably be reused
directly by videotoolbox (not doing this yet because it would require
testing on OSX).
Like with cuda-copy, --cuda-decode-device is ignored. We might be able
to provide a more general way to select devices at some later point.
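For reference, providing a device to a true hwaccel via libavutil's
public hwcontext API looks roughly like this (a simplified sketch, not
mpv's exact code; error handling trimmed):

    #include <libavutil/hwcontext.h>

    static int attach_cuda_device(AVCodecContext *avctx)
    {
        AVBufferRef *device = NULL;
        if (av_hwdevice_ctx_create(&device, AV_HWDEVICE_TYPE_CUDA,
                                   NULL /* default device */, NULL, 0) < 0)
            return -1;
        // The decoder derives its frames context from this device.
        avctx->hw_device_ctx = device;
        return 0;
    }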
|
|
|
|
|
|
|
| |
This is just a dumb consequence of HWDEC_ types somehow being part of
both decoder and VO. Obviously, the VO should only care about supporting
specific hardware surface types or providing specific device types, but
until they are separated, stupid unintuitive mismatches will occur.
|
|
|
|
|
|
| |
First we were required to use them for ABI compat. reasons (and other
BS), now they're deprecated and we're supposed to access them directly
again.
|
| |
|
|
|
|
|
|
|
|
| |
See manpage additions.
(In ffmpeg-mpv and Libav, this is still called "cuvid". Libav won't work
yet, because it has no frame params support, but this could get fixed
soon.)
|
|
|
|
|
|
|
|
| |
This removes the need for codec- and API-specific knowledge in the
libavcodec hardware acceleration API user. For mpv, this removes the
need for vd_lavc_hwdec.pixfmt_map and a few other things. (For now, we
still keep the "old" parts for the sake of supporting older Libav, and
FFgarbage.)
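The core of the new API, roughly (real libavcodec/libavutil entry
points, but a simplified sketch): libavcodec fills in the frames
context parameters itself, so the caller needs no per-codec tables.

    static int init_hw_frames(AVCodecContext *avctx,
                              enum AVPixelFormat hw_pix_fmt)
    {
        // avctx->hw_device_ctx is assumed to be set up already.
        AVBufferRef *frames_ref = NULL;
        if (avcodec_get_hw_frames_parameters(avctx, avctx->hw_device_ctx,
                                             hw_pix_fmt, &frames_ref) < 0)
            return -1;
        if (av_hwframe_ctx_init(frames_ref) < 0) {
            av_buffer_unref(&frames_ref);
            return -1;
        }
        avctx->hw_frames_ctx = frames_ref;
        return 0;
    }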
|
|
|
|
|
|
| |
params->rc was ignored in the calculation for the buffer size. I fucking
hate this stupid ra_tex_upload signature where *rc is randomly relevant
or not.
|
|
|
|
|
|
| |
Coverity complains about this, but it's probably a false positive.
Anyway, rewrite it in a slightly more readable way. Now it's more
obvious that it is correct.
|
|
|
|
|
|
|
|
| |
Should speed up seeks.
(Unfortunately it's useless for backstepping. Backstepping is like
precise seeking, except we're unable to drop frames, as we can't know
the previous frame if we drop it.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Comparing mpv's implementation against the ACES ODR reference samples
and algorithms, it seems like they're happy desaturating highlights
_way_ more aggressively than mpv currently does. And indeed, looking at
some example clips like The Redwoods (which is actually well-mastered),
the current desaturation produces unnatural-looking brightness fringes
where the sky meets the treeline.
Adjust the algorithm to make it apply to a much larger, more gradual
brightness region; and change the interpretation of the parameter. As a
bonus, the new parameter is actually sanely scaled (higher values = more
desaturation). Also, make it scale based on the signal level instead of
the luminance, to avoid under-desaturating bright blues.
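The adjusted idea, sketched in C rather than mpv's actual GLSL, with
made-up names (strength is the re-scaled parameter; sig is the signal
level the desaturation is now keyed on):

    // Ramp the desaturation in gradually over a wide brightness region,
    // instead of kicking in sharply near the peak.
    float t = (sig - desat_base) / (sig_peak - desat_base);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    float coeff = t * strength;  // higher strength = more desaturation
    for (int i = 0; i < 3; i++)
        rgb[i] += coeff * (luma - rgb[i]);  // mix each channel towards gray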
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The new_segment field was used to inform the decoder data flow handler
about timeline boundaries, which are used for ordered chapters etc.
(anything that sets demuxer_desc.load_timeline). This broke seeking with
the demuxer cache enabled. The demuxer is expected to set the
new_segment field after every seek or segment boundary switch, so the
cached packets basically contained incorrect values for this, and the
decoders were not initialized correctly.
Fix this by getting rid of the flag completely. Let the decoders instead
compare the segment information by content, which is hopefully enough.
(In theory, two segments with same information could perhaps appear in
broken-ish corner cases, or in an attempt to simulate looping, and such.
I preferred the simple solution over others, such as generating unique
and stable segment IDs.)
We still add a "segmented" field to make it explicit whether segments
are used, instead of doing something silly like testing arbitrary other
segment fields for validity.
Cached seeking with timeline stuff is still slightly broken even with
this commit: the seek logic is not aware of the overlap that segments
can have, nor of the timestamp clamping that in theory needs to be
performed to account for the fact that a packet might contain a frame
that is always clipped off by segment handling. This can be fixed later.
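A hedged sketch of the "compare by content" approach (struct and field
names are illustrative): instead of trusting a per-packet new_segment
flag, the decoder resets whenever the segment information differs.

    static bool segment_equals(struct segment_info *a, struct segment_info *b)
    {
        // Two distinct segments with identical values would compare
        // equal; the commit accepts that corner case over generating
        // unique segment IDs.
        return a->start == b->start && a->end == b->end &&
               a->codec == b->codec;
    }

    // In the decoder: if (pkt->segmented && !segment_equals(...)) reinit.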
|
|
|
|
|
| |
This allows grouping them and, most of all, querying the group config
when needed and when we don't have access to the vo.
|
|
|
|
|
|
|
|
|
|
|
| |
This commit allows using the newly introduced AV_PIX_FMT_DRM_PRIME
format in FFmpeg, which allows decoders to provide an
AVDRMFrameDescriptor struct.
That struct holds dmabuf fds and information allowing zerocopy rendering
using KMS / DRM Atomic.
This has been tested on a RockChip ROCK64 device.
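For AV_PIX_FMT_DRM_PRIME frames, FFmpeg's public
<libavutil/hwcontext_drm.h> defines that data[0] points at the
descriptor; a minimal consumer looks roughly like this (the import
helper is hypothetical):

    #include <libavutil/hwcontext_drm.h>

    const AVDRMFrameDescriptor *desc =
        (const AVDRMFrameDescriptor *)frame->data[0];
    for (int i = 0; i < desc->nb_objects; i++) {
        // Each object carries a dmabuf fd that can be imported for
        // zerocopy display via KMS/DRM Atomic (or EGL).
        int fd = desc->objects[i].fd;
        import_dmabuf(fd);  // hypothetical import step
    }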
|
|
|
|
|
| |
We want the primary plane to be on top of the overlay (video) plane, so
we need it to be 32 bits.
|
|
|
|
|
| |
libva 2.0 (VAAPI 1.0.0) was released without it, but it is scheduled to
be included in libva 2.1.
|
|
|
|
|
|
|
| |
Since we divide by it in a couple of places and compositors can be
crazy, it's better to be safe than sorry.
Also check the cursor spawn during init (pointless since it happens
again on cursor entry, but it's more correct).
|
|
|
|
|
|
| |
It seems the cursor's position wasn't properly adjusted when scaled.
Hence, bring back correct buffer scaling to make the cursor look fine.
Also, the cursor surface now gets created sooner, which is better.
|
|
|
|
|
| |
Only GNOME does something as stupid as always applying scaling to the
cursor, rather than just using a larger-sized one with HiDPI.
|
|
|
|
|
|
|
|
|
| |
Regression since ec6e8a31e092a1d. Removing the explicit else case made
the conversion to premultiplied alpha always apply in the else branch.
We want to scale with premultiplied alpha, but we don't want to multiply
with alpha again on top of it.
Fixes #4983, hopefully.
|
|
|
|
|
|
|
| |
This should be functionally identical to rgba16f, since the formats only
differ in their representation on the CPU, but it could be useful for RA
backends that don't expose rgba16f, like Vulkan. It's definitely useful
for the WIP D3D11 backend.
|
|
|
|
| |
Untested. If it works, fixes #4919.
|
|
|
|
| |
That's just unnecessary.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
With video paused, changing the brightness controls (or similar) would
sometimes not rerender the video frame, so the OSD would redraw, but the
video wouldn't change. This is caused by output caching, and a redraw
request is free to return the cached frame. Change it to invalidate the
cached frame if any of the options or the equalizer change.
In theory, gl_video_reset_surfaces() could be called if the equalizer
changes - this would apparently force interpolation to redraw all
frames. But this looks kind of crappy when changing the equalizer during
playback. It'll "eventually" use the correct settings anyway, and
interpolation is off while paused.
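In sketch form (the struct name follows the commit text, the field is
illustrative), the fix amounts to:

    static void options_or_equalizer_changed(struct gl_video *p)
    {
        // Drop the cached output, so the next redraw request actually
        // re-renders the video instead of returning the stale frame.
        p->output_fbo_valid = false;
    }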
|
| |
|
|
|
|
|
|
|
| |
This was phased out, and by now was used only by vdpau. Drop the
mechanism and the vdpau special code, which means screenshots won't
include the vf_vdpaupp processing anymore. (I don't care enough about
vdpau; it's on its way out.)
|
|
|
|
| |
Sigh.
|
|
|
|
|
|
|
|
|
|
|
|
| |
The mechanism introduced in b135af6842bf assumed AVHWFramesContext would
be enough. Apparently it's not - the intended use with Rockchip (not
Rokchip btw.) requires accessing actual frame data in order to access
the AVDRMFrameDescriptor struct.
Just pass the entire mp_image to the new function. This is more
flexible, although it slightly worries me that it will be less reusable
for things which require setting up mp_image_params before any real
frames are processed (such as filters).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The same should happen with any other side data that matters to mpv,
otherwise filters will drop it.
(No, don't try to argue that mpv should use AVFrame. That won't work.)
ffmpeg_garbage() is copy&paste from frame_new_side_data() in FFmpeg
(roughly feed201849b8f91), because it's not public API. The name
reflects my opinion about FFmpeg's API.
In mp_image_to_av_frame(), change the too-fragile
*new_ref = (struct mp_image){0};
into explicitly zeroing out the fields that are "transferred" to the
created AVFrame.
|
|
|
|
|
|
|
|
| |
Merge mp_image_copy_fields_to_av_frame() into mp_image_from_av_frame(),
same for the other direction.
There isn't any good reason to keep them separate, and the refcounting
handling makes it only more awkward.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It seems this will be useful for Rockchip DRM hwcontext integration.
DRM hwcontexts have additional internal structure which can be different
depending on the decoder, and which is not part of the generic hwcontext
API. Rockchip has 1 layer, which EGL interop happens to translate to an
RGB texture, while VAAPI (mapped as DRM hwcontext) will use multiple
layers. Both will use sw_format=nv12, and thus are indistinguishable on
the mp_image_params level. But this is needed to initialize the EGL
mapping and the vo_gpu video renderer correctly.
We hope that the layer count is enough to tell whether EGL will
translate the data to an RGB texture (vs. 2 textures resembling raw nv12
data). For that we introduce MP_IMAGE_HW_FLAG_OPAQUE.
This commit adds the flag, infrastructure to set it, and an "example"
for D3D11.
The D3D11 addition is quite useless at this point. But later we want to
get rid of d3d11_update_image_attribs() anyway, while we still need a
way to force d3d11vpp filter insertion, so maybe it has some
justification (who knows). In any case it makes testing this easier.
Obviously it also adds some basic support for triggering the opaque
format for decoding, which will use a driver-specific format, but which
is not supported in shaders. The opaque flag is not used to determine
whether d3d11vpp needs to be inserted, though.
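The layer-count heuristic, sketched for the DRM case (simplified;
the mp_image field access is mpv-style but not guaranteed to match the
actual code):

    const AVDRMFrameDescriptor *desc =
        (const AVDRMFrameDescriptor *)mpi->planes[0];
    // 1 layer: EGL interop yields a single (opaque) RGB texture.
    // >1 layer: the textures resemble raw nv12 planes instead.
    if (desc->nb_layers == 1)
        mpi->params.hw_flags |= MP_IMAGE_HW_FLAG_OPAQUE;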
|
|
|
|
|
|
|
|
| |
Mostly an obscure option for testing. But --videotoolbox-format can be
deprecated, as it becomes redundant.
We rely on the libavutil hwcontext implementation to reject invalid
pixfmts, or not to blow up if they are incompatible.
|
|
|
|
|
| |
This was for the "opengl" compat VO entry, which is now handled
differently.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This was confusing at best. Change it to output the actual choices.
(Seems like in the end it's always me who has to clean up other people's
bullshit.)
Context names were not unique - but they should be, so fix it. The whole
point of the original --opengl-backend option was to side-step the
tricky auto-detection, so you know exactly what you get. The goal of
this commit is to make --gpu-context work the same way. Fix the
non-unique names by appending "vk" to the names.
Keep in mind that this was not suitable for slecting the "UI" backend
anyway, since "x11" would force GLX, whereas people on not-NVIDIA
actually want "x11egl". Users trying to use --gpu-context=x11 to force
the X11 backend would always end up with GLX, which would at least break
VAAPI hardware decoding for them. Basically the idea that this option
could select the "UI" type is completely broken - it selects an
implementation, which implies a UI. Selecting the UI type This would
require a separate mechanism. (Although in theory this separate
mechanism could be part of the --gpu-context option - in any case,
someone would have to implement it.)
To achieve help output that can actually be understood, just duplicate
the code. Most of that code is duplicated anyway, and trying to share
just the list code at the cost of making the output unreadable doesn't
make much sense. If we wanted to save code/effort, we could just remove
the help output altogether.
--gpu-api has non-unique entries, and it would be nice to group them
(e.g. list all OpenGL capable contexts with "opengl"), but C makes this
simple idea too much of a pain, so don't do it.
Also remove a stray tab from the android entry on the manpage.
|
|
|
|
|
|
|
|
|
|
| |
If the chroma location is missing, vo_gpu will use centered chroma.
Select a better chroma location by default: normally, it will always be
the MPEG video chroma location. If full levels are used, use the JPEG
chroma location, because that sort of makes sense: it might coincide
with JPEG content being decoded.
See e.g. #4804.
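The defaulting rule in sketch form (the enum names follow mpv's usual
conventions, but treat the snippet as illustrative):

    if (params->chroma_location == MP_CHROMA_AUTO) {
        // Full (PC/"JPEG") levels suggest JPEG-style centered chroma;
        // otherwise assume MPEG video chroma siting.
        params->chroma_location = params->color.levels == MP_CSP_LEVELS_PC
                                  ? MP_CHROMA_CENTER : MP_CHROMA_LEFT;
    }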
|
|
|
|
| |
Apparently the context was renamed.
|
|
|
|
|
| |
Otherwise, if the display connection or xkb init failed, the uninit
function could segfault.
|
|
|
|
|
|
| |
Every compositor (including toy compositors) has had support for
wl_output v2 since forever, so there's little point in supporting
degraded output for 5-year-old releases (especially considering we
require zxdg6, which is far more recent).
|
|
|
|
|
|
|
| |
This adds symbol information to the generated SPIR-V, which shows up in
the SPIR-V assembly dump. It's also useful for potential RA backends
that use SPIRV-Cross, since the symbol information is used in the
generated shader source.
|
|
|
|
|
|
|
|
| |
This should actually cover all of them, if you take into account that
some unchanged GPL source files include header files with such checks.
Also, this was already done for the libaf-derived code.
This is only for "safety" and to avoid misunderstandings.
|
|
|
|
|
|
|
| |
It turns out compositors which do scaling scale the cursor as well,
so every single surface needs to get scaled too.
Also, 32 corresponds to the default size for both GTK+ and KDE.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
This new interface in libva2 offers a cleaner way to export surfaces
which can then be imported to EGL. In particular, this works with
the Mesa driver, so we can have proper playback without a pointless
download and upload on AMD cards.
This change does nothing with libva1, and will fall back to the
libva1 interface (vaDeriveImage() + vaAcquireBufferHandle()) if
vaExportSurfaceHandle() is not present.
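The new export path, roughly (these are real libva 2.x API names, but
the snippet is a simplified sketch with error handling omitted):

    #include <va/va.h>
    #include <va/va_drmcommon.h>

    VADRMPRIMESurfaceDescriptor desc;
    vaExportSurfaceHandle(display, surface_id,
                          VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2,
                          VA_EXPORT_SURFACE_READ_ONLY |
                          VA_EXPORT_SURFACE_SEPARATE_LAYERS,
                          &desc);
    // desc now holds per-object dmabuf fds plus layer/plane layout,
    // which EGL can import via EGL_EXT_image_dma_buf_import.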
|