This fixes an issue where blend stages with multiple passes got
overwritten by later passes, with only the final pass (typically the
overlay pass) actually being shown.
|
This was ignored as an oversight.
|
Today, lavfi filters are provided a hw_device from the first
hwdec_interop that was loaded, regardless of whether it is the right
one. In most situations where a hardware-based filter is used, we need
more control over the device.
In this change, a `hwdec_interop` option is added to the lavfi wrapper
filter configuration and is used to pick the correct hw_device to
inject into the filter or graph (in the case of a graph, all filters
get the same device).
Note that this requires the use of the explicit lavfi syntax to allow
for the extra configuration, e.g.:
```
mpv --vf=hwupload
```
becomes
```
mpv --vf=lavfi=[hwupload]:hwdec_interop=cuda-nvdec
```
or
```
mpv --vf=lavfi-bridge=[hwupload]:hwdec_interop=cuda-nvdec
```
|
If we want to be able to handle conversion between hw formats in filter
chains, then we need to be able to load hwdec_interops from filters, as
the VO is only ever going to initialise one interop, based on its
configuration. That means that in almost all situations, only one of
the required interops will be loaded at the time the filter is
initialised.
The existing code assumed that new hwdec_interops would not be loaded
after the VO has picked one to use. This change fixes two instances:
* Refusing to load a new hwdec_interop if there is at least one
loaded already.
* Not recalculating the set of formats known to the autoconvert
filter when a new output format shows up. This leads to autoconvert
not knowing that a new format is supported when the hwdec interop is
lazily loaded.
|
It turns out that it's generally more useful to look up hwdecs by image
format, rather than device type. In the situations where we need to
find one, we generally know the image format we're dealing with. Doing
this avoids us having to create mappings from image format to device
type.
The most significant part of this change is filling in the image format
for the various hw interops. There is a hw_imgfmt field today, but
only a couple of the interops fill it in, and that seems to be because
we've never actually used this piece of metadata before. Well, now we
have a good use for it.
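
As a rough sketch of the idea (names here are illustrative, not mpv's
actual API), the lookup reduces to a scan keyed on the image format we
already have in hand:
```c
#include <stddef.h>

// Hypothetical registry entry: each interop advertises the hw image
// format it can map.
struct hwdec_interop_entry {
    const char *name;
    int hw_imgfmt;
};

static const struct hwdec_interop_entry *
find_interop_by_imgfmt(const struct hwdec_interop_entry *entries,
                       int num_entries, int imgfmt)
{
    for (int i = 0; i < num_entries; i++) {
        if (entries[i].hw_imgfmt == imgfmt)
            return &entries[i];
    }
    return NULL;
}
```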
|
One step closer to vo_gpu_next feature parity with vo_gpu!
|
Previously, running a debug build of mpv would crash with this output
when preinit() at vo_libmpv.c:732 calls control(vo, VOCTRL_PREINIT, NULL):
Swift/Optional.swift:247: Fatal error: unsafelyUnwrapped of nil optional
This comes from this line of code:
var data = UnsafeMutableRawPointer.init(bitPattern: 0).unsafelyUnwrapped
Unsafely unwrapping the result of UnsafeMutableRawPointer.init has
always been UB, but the Swift runtime began asserting on it in debug
builds a couple of macOS versions ago.
|
The current code also rejects e.g. rgb30, even though this format is
perfectly valid, because it only checks for alignment of the used bits.
|
Relies on a new upstream API for adding save/load callbacks to the ICC
3DLUT generation parameters.
Closes: https://github.com/mpv-player/mpv/issues/10252
|
`total_bits` may be less than the true pixel stride (`bpp`) for formats
which contain extra ignored components (e.g. rgb0).
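
The difference is visible with FFmpeg's pixdesc helpers (a sketch; the
actual call sites differ):
```c
#include <libavutil/pixdesc.h>

// For rgb0, the used bits (24) undersell the pixel stride (32),
// because the fourth byte is present in memory but ignored.
static void rgb0_example(void)
{
    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(AV_PIX_FMT_RGB0);
    int total_bits = av_get_bits_per_pixel(desc);        // 24: R + G + B
    int bpp        = av_get_padded_bits_per_pixel(desc); // 32: with padding
    (void)total_bits; (void)bpp;
}
```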
|
Fixes https://github.com/mpv-player/mpv/issues/10594
|
Facing down the multitude of ways to somehow wrangle the get_fn pointer
out of the GL environment and into libplacebo, I decided the easiest is
to just store it inside the GL struct itself.
The lifetime of this get_fn function is a bit murky, since it's not
clear whether it survives the VO init call (especially in the case of
the GL render API, which explicitly states that parameters do not need
to survive the call they're passed to), so just add a disclaimer.
(It's fine for us to use it like this because `gpu_ctx_create` is still
part of VO init.)
Closes https://code.videolan.org/videolan/libplacebo/-/issues/216
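
Schematically (a sketch only; mpv's real GL struct carries much more
state and different naming):
```c
// The loader callback and its opaque context live next to the rest of
// the GL state, so gpu_ctx_create can hand them to libplacebo while
// still inside VO init.
typedef void *(*gl_get_fn_t)(void *ctx, const char *name);

struct GL_sketch {
    gl_get_fn_t get_fn; // may not remain valid after VO init; see above
    void *get_fn_ctx;
    // ... function pointers resolved through get_fn ...
};
```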
|
In wayland-protocols 1.25, xdg-shell got a version bump which added the
configure_bounds event. The compositor can send this to clients to
indicate that they should not resize past a certain size. For mpv, we'll
choose to only listen to this on reconfig events (i.e. when the window
first appears and if the video resolution changes later in the
playlist). However, this behavior is still exposed as a user option
(default on) because it will necessarily conflict with a user setting a
specific geometry size and/or window scale. Presumably, if someone is
setting a really large size that goes beyond the bounds of their
monitor, they actually want it like that. The wayland-protocols version
is newer-ish, but we can get around having to poke the build system by
just using a define that exists in the generated xdg-shell header.
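
The handler has roughly this shape (the callback signature comes from
the generated xdg-shell header; the state struct here is illustrative):
```c
#include <stdint.h>
#include "xdg-shell-client-protocol.h" // generated by wayland-scanner

struct wl_state_sketch { int bounded_w, bounded_h; }; // illustrative

#ifdef XDG_TOPLEVEL_CONFIGURE_BOUNDS_SINCE_VERSION
static void handle_configure_bounds(void *data,
                                    struct xdg_toplevel *xdg_toplevel,
                                    int32_t width, int32_t height)
{
    // A 0x0 size means the compositor has no bounds to suggest.
    struct wl_state_sketch *wl = data;
    wl->bounded_w = width;
    wl->bounded_h = height;
}
#endif
```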
|
Unexpectedly, x11->screenrc actually doesn't update with randr events.
In a multimonitor configuration it could easily be wrong depending on
the user's layout. While it's tempting to modify the logic so screenrc
changes with randr events, this rectangle is currently used everywhere
and as far as we know, this pretty much works fine. Instead, just loop
over the randr displays that we have and select the one that overlaps with
the winrc. This follows the same logic as the fps selection in the case
of the mpv window overlapping multiple monitors (the last one is
selected).
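
In outline (a sketch; mp_rect and the field names only approximate the
real code):
```c
struct mp_rect { int x0, y0, x1, y1; }; // as in mpv's rect helpers

// Pick the last RandR display whose rectangle overlaps the window,
// mirroring how the fps is selected across spanned monitors.
static int pick_display(const struct mp_rect *rcs, int num,
                        struct mp_rect winrc)
{
    int current = -1;
    for (int i = 0; i < num; i++) {
        if (rcs[i].x0 < winrc.x1 && rcs[i].x1 > winrc.x0 &&
            rcs[i].y0 < winrc.y1 && rcs[i].y1 > winrc.y0)
            current = i; // last overlapping display wins
    }
    return current;
}
```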
|
ae768a1e141eb88243e46757d41ca0cada9502b4 forgot to bump the required
libdrm version. However, Debian 11 just barely misses the requirement,
which is a good reason not to require it unconditionally anyway.
|
Any error in page flipping caused mpv to wait indefinitely for a page
flip callback.
|
The older overlay-based drmprime hwdec should be preferred to the new
texture-mapping one. This is for a few reasons:
1. In any situation where both hwdecs work, it's probably right to use
the more mature one by default, for now.
2. It seems like the overlay path primarily works on older SoCs
where the texture path is less performant, and in at least one
tested case is visually buggy, so you definitely want it to be
tried first.
3. In situations where the old hwdec doesn't work, it will fall through
to the new one.
|
I accidentally included an adjustment for a pending change.
|
Obviously, this should be dmabuf_interop_priv as it's declared in a
header file that could get included anywhere.
|
sfan5 found a few things after I pushed the change, so this fixes them.
* Use-after-free on drm_device_Path
* Not comparing render_fd against -1
* Not handling dup() errors
|
In the confusing landscape of hardware video decoding APIs, we have had
a long-standing support gap for the v4l2-based APIs implemented for the
various SoCs from Rockchip, Amlogic, Allwinner, etc. While VAAPI is the
de facto default for desktop GPUs, the developers who work on these SoCs
(who are not the vendors!) have preferred to implement kernel APIs
rather than maintain a userspace driver as VAAPI would require.
While there are two v4l2 APIs (m2m and requests), and multiple forks of
ffmpeg where support for those APIs languishes without reaching
upstream, we can at least say that these APIs export frames as DRMPrime
dmabufs, and that they use the ffmpeg drm hwcontext.
With those two constants, it is possible for us to write a
hwdec-interop without worrying about the mess underneath - for the most
part.
Accordingly, this change implements a hwdec-interop for any decoder
that produces frames as DRMPrime dmabufs. The bulk of the heavy
lifting is done by the dmabuf interop code we already had from
supporting vaapi, and which I refactored for reusability in a previous
set of changes.
When we combine that with the fact that we can't probe for supported
formats, the new code in this change is pretty simple.
This change also includes the hwcontext_fns that are required for us to
be able to configure the hwcontext used by `hwdec=drm-copy`. This is
technically unrelated, but it seemed a good time to fill this gap.
From a testing perspective, I have directly tested on a RockPRO64,
while others have tested with different flavours of Rockchip and on
Amlogic, providing m2m coverage.
I have some other SoCs that I need to spin up to test with, but I don't
expect big surprises, and when we inevitably need to account for new
special cases down the line, we can do so - we won't be able to support
every possible configuration blindly.
|
I already added the equivalent logic for dmabuf_interop_pl previously
but I skipped the GL support because importing dmabufs into GL requires
explicitly providing the DRM format, and if you are taking a
multi-plane format and trying to treat each plane as a separate layer,
you need to come up with a DRM format for each synthetic layer.
But my initial testing has shown that the RockPRO64 board I've got to
work on drmprime hwdec will only produce NV12 in a single-layer,
multi-plane format, and it doesn't have Vulkan support, so I have had to
tackle the GL multi-plane problem.
To that end, this change introduces the infrastructure to provide new
formats for synthetic layers. We only have lookup code for NV12 and
P010 as these were the only ones I could test.
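
The lookup boils down to something like this (a sketch; the real table
also carries additional per-format metadata):
```c
#include <stdint.h>
#include <drm_fourcc.h>

// Per-plane DRM format when a plane of a multi-plane image is handed
// to GL as its own single-plane "synthetic" layer.
static uint32_t synthetic_layer_format(uint32_t frame_format, int plane)
{
    switch (frame_format) {
    case DRM_FORMAT_NV12: // 8-bit luma plane + interleaved chroma plane
        return plane == 0 ? DRM_FORMAT_R8 : DRM_FORMAT_GR88;
    case DRM_FORMAT_P010: // 10 bits stored in 16-bit words, same layout
        return plane == 0 ? DRM_FORMAT_R16 : DRM_FORMAT_GR1616;
    }
    return 0; // unknown: no synthetic layer format available
}
```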
|
Annoyingly, libva and libdrm use different structs to describe dmabufs,
and if we are going to support drmprime, we must pick one format and do
some shuffling in the other case.
I've decided to use AVDRMFrameDescriptor as our internal format as this
removes the libva dependency from dmabuf_interop. That means that the
future drmprime hwdec will be able to populate it directly and the
existing hwdec_vaapi needs to copy the struct members around, but
that's cheap and not a concern.
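
The shuffling is mechanical, roughly (a sketch; error handling and
bounds checks elided):
```c
#include <libavutil/hwcontext_drm.h>
#include <va/va_drmcommon.h>

// Copy libva's export descriptor into the ffmpeg struct that
// dmabuf_interop consumes internally.
static void va_desc_to_drm(const VADRMPRIMESurfaceDescriptor *va,
                           AVDRMFrameDescriptor *drm)
{
    drm->nb_objects = va->num_objects;
    for (int i = 0; i < drm->nb_objects; i++) {
        drm->objects[i].fd = va->objects[i].fd;
        drm->objects[i].size = va->objects[i].size;
        drm->objects[i].format_modifier = va->objects[i].drm_format_modifier;
    }
    drm->nb_layers = va->num_layers;
    for (int i = 0; i < drm->nb_layers; i++) {
        drm->layers[i].format = va->layers[i].drm_format;
        drm->layers[i].nb_planes = va->layers[i].num_planes;
        for (int j = 0; j < drm->layers[i].nb_planes; j++) {
            drm->layers[i].planes[j].object_index =
                va->layers[i].object_index[j];
            drm->layers[i].planes[j].offset = va->layers[i].offset[j];
            drm->layers[i].planes[j].pitch = va->layers[i].pitch[j];
        }
    }
}
```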
|
With the files renamed, we can now disentangle the shared private
struct between the interops and hwdec_vaapi. We need this separation
to allow the future drmprime hwdec to use the interops.
|
This is the first in a series of changes that will introduce a drmprime
hwdec. As our vaapi hwdec is based around exporting surfaces as
drmprime dmabufs, we've actually got a lot of useful code already in
place in the GL/PL interops. I'm going to reorganise and adjust this
code to make the interops usable with the new hwdec as well.
The first step is to rename the files and functions. There are no
functional or other changes here. They will come next.
|
Avoids another pitfall on systems where the first card has a primary
node but is not capable of KMS. With this change --drm-context=drm
should work correctly out-of-the-box in all cases.
|
On S905X (meson) boards drmModeAtomicCommit called from
disable_video_plane in hwdec_drmprime_drm.c might still be running when
another call is made from queue_flip in context_drm_egl.c.
This causes an EBUSY error in queue_flip and makes mpv hang.
|
This is somewhat academic for now, as we explicitly ask for separate
layers and the scenarios where multi-plane images are required also use
complex formats that cannot be decomposed after the fact, but
nevertheless it is possible for us to consume simple multi-plane
images where there is one layer with n planes instead of n layers with
one plane each.
In these cases, we just treat the planes the same as we would if they
were each in a separate layer and everything works out.
It ought to be possible to make this work for OpenGL but I couldn't
wrap my head around how to provide the right DRM fourcc when
pretending a plane is a layer by itself. So I've left that
unimplemented.
|
This check was wrong/outdated. PL_QUEUE_MORE does not imply an empty
frame mix, it can still contain partial frames.
|
A VRAM memory leak was present in d3d11 when `idle=yes` and playback
stops for an item. This patch re-enables some code that was previously
only used for diagnostics, which fixes the issue.
|
This path incorrectly assumes there is a current frame.
|
libplacebo 4.157 [1] renamed context.h to log.h and left a
compatibility header. In 5.x, this compatibility header has been removed.
Since we require libplacebo 4.157 to build mpv, we can just use log.h to
fix compatibility with 5.x.
[1]: https://github.com/haasn/libplacebo/commit/2459200a133d1a0f36f092a32a2d5d443cfbee55
Signed-off-by: Coelacanthus <coelacanthus@outlook.com>
|
Using cmsFLAGS_HIGHRESPRECALC results in Little-CMS generating an
internal 49x49x49 3DLUT, which it then samples to build our own 3DLUT.
This is completely pointless: it not only destroys the accuracy of the
3DLUT,
but also results in no additional gain from increasing the 3DLUT
precision further.
The correct flag for us to be using is cmsFLAGS_NOOPTIMIZE, which
suppresses this internal 3DLUT generation and gives us the full
precision. We can also specify cmsFLAGS_NOCACHE, which is negligible but
in theory prevents Little-CMS from unnecessary pixel equality tests.
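
Transform creation then ends up looking something like this (a sketch;
profile setup and the exact pixel formats are simplified):
```c
#include <lcms2.h>

// cmsFLAGS_NOOPTIMIZE keeps Little-CMS sampling the profiles directly
// rather than via its internal 49x49x49 LUT; cmsFLAGS_NOCACHE skips
// the pixel-equality cache, which is useless for our access pattern.
static cmsHTRANSFORM make_transform(cmsHPROFILE src, cmsHPROFILE dst)
{
    return cmsCreateTransform(src, TYPE_RGB_16, dst, TYPE_RGB_16,
                              INTENT_RELATIVE_COLORIMETRIC,
                              cmsFLAGS_NOOPTIMIZE | cmsFLAGS_NOCACHE);
}
```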
|
This was introduced in 7fb972fd3997bfa389caa7c1eb899ea4b8444083 and
later revised in f5a094db047ee0162774301a2ce4ed685ca9d539. Transparency
in EGL/X11 has unfortunately been broken upstream in Mesa for years.
The first commit claimed to have found a way to preserve transparency
with a trick when picking EGLConfigs (the second commit revises this
but keeps the logic in place). However, it doesn't appear that the
first commit actually fixed anything (transparency doesn't work on my
machine), and no one else seems to have reported it working. On the
other hand, if Mesa ever does fix this, transparency would immediately
be broken, since mpv would always set EGL_ALPHA_SIZE to 0. Go ahead and
remove this since it doesn't seem to have any actual utility and is
technically a bit of a timebomb (not that deleting two lines is a lot
of work, but still) if upstream ever does fix this.
|
Fixes #9451
|
This commit kind of mixes several related things together. The main
thing is to avoid calling any XPresent functions or internal functions
related to presentation when the feature is not auto-whitelisted or
enabled by the user. Internally rework this so it all works off of a
use_present bool (have_present is eliminated because having a non-zero
present_code covers exactly the same thing) and make sure it updates at
runtime. Finally, put some actual logging in here whenever XPresent is
enabled/disabled. Fixes #10326.
|
With the recent addition of libxpresent, frame timings should improve
for most users. However, there were known cases of bad behavior
(Nvidia) which led to the construction of a whitelist instead of just
enabling this all the time. Since there's no way to predict which
combination of hardware/drivers/etc. may work correctly, just give users
an option to switch the usage of xorg's presentation statistics on/off.
The default value, auto, works like before (basically, Mesa drivers and
no Nvidia are allowed), but now one can force it on/off if needed.
|
A user noted that this worked correctly (i.e. vsync jitter of 0) so go
ahead and add it here as a safe driver for xpresent to use.
|
The old logic always reset the x11->has_mesa/has_nvidia values on every
loop through the provider. This meant that it would always just match
whatever the last provider happened to be. So in the case of a dual GPU
system, if nvidia was the very first provider and the integrated
intel/amd card was the second (in practice, this is probably mostly the
other way around), then mpv would set has_mesa to true and has_nvidia to
false and thus try to use presentation. This is not the intended
behavior. Just rework this by also checking x11->has_mesa/has_nvidia in
the loop so a true value from the previous iteration is preserved.
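
The fix amounts to accumulating rather than overwriting (a sketch):
```c
#include <stdbool.h>

// OR-in each provider's result so a "true" from an earlier iteration
// survives to the end of the loop.
static void update_flags(bool *has_mesa, bool *has_nvidia,
                         bool is_mesa, bool is_nvidia)
{
    *has_mesa = *has_mesa || is_mesa;
    *has_nvidia = *has_nvidia || is_nvidia;
}
```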
|
This code was taken from the older vo_vaapi driver, which does
use the vaapi format list, but the new driver has no use for
these formats, as it is only interested in va surfaces that
can be mapped to wl buffers. The format doesn't enter into
it at all.
|
strcasestr is a GNU extension, but we can just use bstr instead to do
the same thing.
|
This builds off of present_sync which was introduced in a previous
commit to support xorg's present extension in all of the X11 backends
(sans vdpau) in mpv. It turns out there is an Xpresent library that
integrates the xorg present extension with Xlib (which barely anyone
seems to use), so this can be added without too much trouble. The
workflow is to first set up the event by telling Xorg we would like to
receive PresentCompleteNotify (there are others in the extension but
this is the only one we really care about). After that, just call
XPresentNotifyMSC after every buffer swap with a target_msc of 0. Xorg
then returns the last presentation through its usual event loop and we
go ahead and use that information to update mpv's values for vsync
timing purposes. One theoretical weakness of this approach is that the
present event is put on the same queue as the rest of the XEvents. It
would be nicer for it to be placed somewhere else so we could just wait
on that queue without having to deal with other possible events in
there. In theory, xcb could do that with special events, but it doesn't
really matter in practice.
Unsurprisingly, this doesn't work on NVIDIA. Well, NVIDIA does actually
receive presentation events, but for whatever reason the calculations
used make timings worse, which defeats the purpose. This works perfectly fine on
Mesa however. Utilizing the previous commit that detects Xrandr
providers, we can enable this mechanism for users that have both Mesa
and not NVIDIA (to avoid messing up anyone that has a switchable
graphics system or such). Patches welcome if anyone figures out how to
fix this on NVIDIA.
Unlike the EGL/GLX sync extensions, the present extension works with any
graphics API (good for vulkan since its timing extension has been in
development hell). NVIDIA also happens to have zero support for the
EGL/GLX sync extensions, so we can just remove it with no loss. Only
Xorg ever used it and other backends already have their own present
methods. vo_vdpau is a special case that has its own fancy timing
code in its flip_page. This presumably works well, and I have no way of
testing it so just leave it as it is.
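
The workflow sketched above, in code (version/opcode negotiation and
the event-loop handling of PresentCompleteNotify are elided):
```c
#include <X11/extensions/Xpresent.h>

static void present_setup_and_swap(Display *dpy, Window window)
{
    // One-time: ask for PresentCompleteNotify events on our window.
    XPresentSelectInput(dpy, window, PresentCompleteNotifyMask);

    // After every buffer swap: target_msc = 0 asks for a notification
    // at the next point the window's contents are presented.
    XPresentNotifyMSC(dpy, window, 0 /*serial*/, 0 /*target_msc*/,
                      0 /*divisor*/, 0 /*remainder*/);
}
```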
|
Unfortunately there's a certain company that makes graphics drivers that
are harder to deal with. The next commit aims to implement presentation,
but some empirical testing from users show that it's actually broken.
Give up and just tap into Xrandr so we can figure out what drivers (or,
well, providers in the extension's terminology) are driving the screen.
Basically if we find intel, amd, or radeon, assume it's a Mesa driver.
If we find nvidia, then it must be nvidia. This detection requires randr
1.4 (which means using presentation in mpv secretly depends on randr
1.4), but this protocol version is nearly a decade old anyway so
probably 99.9% of users are fine. Do the version query check and all
that anyway just to be on the safe side.
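
The detection loop looks roughly like this (a sketch; real code should
match names case-insensitively and handle missing RandR):
```c
#include <stdbool.h>
#include <string.h>
#include <X11/extensions/Xrandr.h>

// Classify the providers driving the screen: Intel/AMD/Radeon imply a
// Mesa driver; NVIDIA is NVIDIA.
static void detect_providers(Display *dpy, Window root,
                             XRRScreenResources *res,
                             bool *has_mesa, bool *has_nvidia)
{
    XRRProviderResources *pr = XRRGetProviderResources(dpy, root);
    for (int i = 0; i < pr->nproviders; i++) {
        XRRProviderInfo *info =
            XRRGetProviderInfo(dpy, res, pr->providers[i]);
        if (strstr(info->name, "Intel") || strstr(info->name, "AMD") ||
            strstr(info->name, "Radeon"))
            *has_mesa = true;
        if (strstr(info->name, "NVIDIA"))
            *has_nvidia = true;
        XRRFreeProviderInfo(info);
    }
    XRRFreeProviderResources(pr);
}
```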
|
Wayland had some specific code that it used for implementing the
presentation time protocol. It turns out that xorg's present extension
is extremely similar, so it would be silly to duplicate this whole mess
again. Factor this out to separate, independent code and introduce the
mp_present struct which is used for handling the ust/msc values and some
other associated values. Also, add in some helper functions so all the
dirty details live specifically in present_sync. The only
wayland-specific part is actually obtaining ust/msc values. Since only
wayland or xorg are expected to use this, add a conditional to the build
that only adds this file when either one of those are present.
You may observe that sbc is completely omitted. This field existed in
the wayland code, but was unused (presentation time doesn't return it).
Xorg's present extension also doesn't use it, so just get rid of it
altogether. The actual calculation is slightly altered so it is
correct for our purposes. We want to get the presentation event of the
last fr