|
|
|
|
|
|
|
|
|
|
| |
On machines with multiple GPUs, /dev/dri/renderD128 isn't guaranteed
to point to a valid vaapi device. This just adds the option to specify
what path to use.
The old fallback /dev/dri/card0 is gone, but that's no loss, as it's a
legacy interface that libva no longer accepts as valid.
Fixes #4320
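For illustration, a minimal sketch of what opening a user-specified
device boils down to with libva's DRM backend (function name and error
handling are ours, not the actual mpv code):

    #include <fcntl.h>
    #include <unistd.h>
    #include <va/va.h>
    #include <va/va_drm.h>

    // Open the render node the user configured instead of assuming
    // /dev/dri/renderD128; cleanup of a half-initialized display is
    // elided for brevity.
    static VADisplay open_vaapi_device(const char *path, int *out_fd)
    {
        int fd = open(path, O_RDWR | O_CLOEXEC);
        if (fd < 0)
            return NULL;
        VADisplay display = vaGetDisplayDRM(fd);
        int major, minor;
        if (!display || vaInitialize(display, &major, &minor) != VA_STATUS_SUCCESS) {
            close(fd);
            return NULL;
        }
        *out_fd = fd;
        return display;
    }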
|
|
|
|
|
| |
These were replaced by a different mechanism, but the old fields weren't
removed.
|
|
|
|
|
|
|
|
|
| |
Finally get rid of all the HWDEC_* things, and instead rely on the
libavutil equivalents. vdpau still uses a shitty hack, but fuck the
vdpau code.
Remove all the now-unneeded remains. The vdpau preemption thing was no
longer used; if someone cares, it could probably be restored.
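Roughly the direction this goes: use libavutil's device types directly
instead of a parallel HWDEC_* enum (a sketch against the current
libavutil API, not mpv's actual table):

    #include <libavutil/hwcontext.h>

    // Map a user-facing name ("vaapi", "vdpau", ...) straight to
    // libavutil's enum; returns AV_HWDEVICE_TYPE_NONE if unknown.
    static enum AVHWDeviceType hwdec_type_from_name(const char *name)
    {
        return av_hwdevice_find_type_by_name(name);
    }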
|
|
|
|
|
|
| |
This code is for trying to avoid using an emulation layer when using
auto probing, so that we end up using the actual API the drivers
provide. It was destroyed in the recent refactor.
|
|
|
|
|
|
|
|
|
|
| |
Lots of shit code for nothing. We probably could just use libavutil's
code for all of this. But for now go with this, since it tends to
prevent stupid terminal messages during probing (libavutil has no
mechanism to selectively suppress errors specifically during probing).
Ignores the "emulated" API flag (for avoiding vaapi/vdpau wrappers), but
it doesn't matter that much for -copy anyway.
|
|
|
|
| |
They are no longer global, so they work vaguely sensibly.
|
|
|
|
|
|
|
|
|
|
| |
The current check_va_status() function could arguably be considered
derived from the check_status() function in the original VAAPI patches,
and thus GPL-only. While I have my doubts that copyright applies to an
idiom on this
level, it's better to replace it. Similar idea, different expression
equals no copyright association.
An earlier commit message promised this, but it was forgotten.
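A fresh expression of the idiom might look like this (our sketch, not
the committed code):

    #include <stdbool.h>
    #include <stdio.h>
    #include <va/va.h>

    // Same idea, different expression: fold a VAStatus into a bool,
    // logging failures.
    static bool check_va_status(VAStatus status, const char *msg)
    {
        if (status == VA_STATUS_SUCCESS)
            return true;
        fprintf(stderr, "%s: %s\n", msg, vaErrorStr(status));
        return false;
    }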
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Originally mpv vaapi support was based on the MPlayer-vaapi patches.
These were never merged into upstream MPlayer. The license headers
indicated they were GPL-only. Although the actual author agreed to
relicensing, the company employing him to write this code did not, so
the original code is unusable to us.
Fortunately, vaapi support was refactored and rewritten several times,
meaning little code is actually left. The previous commits removed that
code or moved it into GPL-only files; namely, vo_vaapi.c remains
GPL-only. The
other code went away or became unnecessary mainly because libavcodec
itself gained the ability to manage the hw decoder, and libavutil
provides code to manage vaapi surfaces. We also changed to mainly using
EGL interop, making any of the old rendering code unnecessary.
hwdec_vaglx.c is still GPL. It's possibly relicensable, because much of
it was changed, but I'm not too sure and further investigation would be
required. Also, this has been disabled by default for a while now, so
bothering with this is a waste of time. This commit simply disables it
at compile time in LGPL mode as well.
|
|
|
|
|
|
| |
Done for license reasons. vo_vaapi.c is turned into some kind of
dumpster fire, and we'll remove it as soon as I'm mentally ready for
unkind users to complain about removal of this old POS.
|
|
|
|
|
| |
Another step to get rid of the legacy crap in vaapi.c. (Most is still
kept, because it's in use by vo_vaapi.c.)
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is for relicensing. Some of this code is loosely based on
vo_vaapi.c from the original MPlayer-vaapi patches. Most of the code has
changed, and only the initialization code and check_status() look
remotely similar. The initialization code is changed to be like Libav's
(hwcontext_vaapi.c). check_va_status() is just a C idiom, but to play it
safe, we'll either drop it from LGPL code or recreate it.
vaapi.c still contains plenty of code from the original patches, but the
next commits will move it out of the LGPL code paths.
|
|
|
|
|
|
|
|
|
| |
Now you need FFmpeg git, or something.
This also gets rid of the last real use of gpu_memcpy(). libavutil does
that itself. (vaapi.c still referenced it, but that code path is
essentially dead, so it was effectively unused. It wasn't even compiled
in, due to the d3d-hwaccel dependency in wscript.)
|
|
|
|
|
|
| |
Another legacy annoyance. The only place where packed YUV is still
important is slightly older Apple hardware or drivers, which require
it for efficient hardware decoding.
|
|
|
|
| |
It's a simple memory leak. (The API objects were destroyed anyway.)
|
|
|
|
|
|
|
|
| |
hw_vaapi.c no longer did much of interest. Other than the function to
create a device for decoding with vaapi-copy, everything can be done by
generic code. Other libavcodec hwaccels are planned to provide the same
API as vaapi; it will then be possible to drop the other hw_ files and
have them use this generic code instead.
|
|
|
|
|
|
|
|
| |
The lock was disabled recently. This commit gets rid of the dummied-out
calls. The main reason for removing it is that there is no apparent need
for it anymore, and the new FFmpeg vaapi code does not use or provide
such a lock (there are some places which we cannot control and which do
vaapi API calls, like frame destructors).
|
|
|
|
| |
Fixes vf_vavpp crashing with the new vaapi decode API.
|
|
|
|
| |
This makes "generic" interaction with libav* components easier.
|
|
|
|
|
|
| |
For convenience. Since we still have code that works even if creating an
AVHWDeviceContext fails, failure is ignored. (Although currently,
creation succeeds even with the stale/abandoned vdpau wrapper driver.)
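The core of it is presumably no more than this (names are ours; failure
just yields NULL):

    #include <libavutil/hwcontext.h>

    // Create an AVHWDeviceContext for vaapi; failure is not fatal, the
    // caller simply continues without the libavutil device.
    static AVBufferRef *create_hwdevice_ref(void)
    {
        AVBufferRef *ref = NULL;
        if (av_hwdevice_ctx_create(&ref, AV_HWDEVICE_TYPE_VAAPI,
                                   NULL, NULL, 0) < 0)
            return NULL; // ignored by design, see above
        return ref;
    }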
|
|
|
|
|
|
|
|
|
| |
Makes va_surface_download() call mp_image_hw_download() for
libavutil-allocated surfaces, which in turn calls
av_hwframe_transfer_data().
mp_image_hw_download() is actually not specific to vaapi, and can be
used for any hw surface allocated by libavutil.
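Roughly what mp_image_hw_download() boils down to, expressed with plain
AVFrames (a sketch, not the actual mpv function):

    #include <libavutil/frame.h>
    #include <libavutil/hwcontext.h>

    // Copy any libavutil-allocated hw surface back to system memory.
    static AVFrame *hw_download(const AVFrame *hw_frame)
    {
        AVFrame *sw_frame = av_frame_alloc();
        if (!sw_frame)
            return NULL;
        if (av_hwframe_transfer_data(sw_frame, hw_frame, 0) < 0 ||
            av_frame_copy_props(sw_frame, hw_frame) < 0) {
            av_frame_free(&sw_frame);
            return NULL;
        }
        return sw_frame;
    }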
|
|
|
|
|
|
| |
AVHWDeviceContext.user_opaque is reserved to libavutil under certain
circumstances, while AVHWFramesContext.user_opaque is truly free for use
by us. It's slightly simpler too.
|
|
|
|
|
|
|
|
|
| |
A recent commit added code that checks some HAVE_ symbols in this file.
No config.h include was added, so they could be unavailable and break
compilation (in practice, just --hwdec=vaapi-copy would break).
Not sure how I missed this; maybe waf defined these symbols on the
compiler command line for some reason.
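I.e. the fix amounts to something like (the HAVE_ symbol here is
illustrative):

    #include "config.h"  // the missing include: the HAVE_* symbols are
                         // defined here, not on the compiler command line

    #if HAVE_VAAPI_HWACCEL  // illustrative symbol name
    /* ... vaapi-copy specific code ... */
    #endif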
|
|
|
|
|
|
|
|
|
|
|
| |
We usually attach some significant metadata and context to "our"
surfaces. Surfaces created by libavutil (as we plan to do when using
the new vaapi decode API in the following commit) don't have this
context, so e.g. copy decoding mode won't work.
Add tons of hacks to make this somehow work.
Eventually we will use libavutil's mechanisms and drop the hacks.
|
|
|
|
| |
Preparation for the following commits.
|
|
|
|
|
|
|
|
|
|
|
| |
This has been available since the first commit after libva 0.39.4.
Since the version number wasn't bumped since then, we just check some
random other symbol that was added at the same time (I'd rather not add
a configure check).
The libva message callback repeats the endlessly recurring API mistake
of libraries using global message callback handlers. But it's the only
way to shut up libva's dumb messages to stderr, so add something
complicated and dumb to work around libva's stupidity.
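The workaround presumably centers on the callback setters; a sketch
using the current libva 2.x per-display signatures (the initial version
of this API, which this commit targets, was still global):

    #include <va/va.h>

    static void va_message_cb(void *context, const char *message)
    {
        (void)context;
        (void)message; // forward to our logger, or drop during probing
    }

    // Route libva's chatter to our handler instead of stderr.
    static void silence_libva(VADisplay display)
    {
        vaSetErrorCallback(display, va_message_cb, NULL);
        vaSetInfoCallback(display, va_message_cb, NULL);
    }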
|
|
|
|
|
|
|
|
| |
Just some minor refactoring within va_initialize() as preparation for
the next commit.
Also, do not call vaTerminate(display) on failures. All callers already
do this, so this would have led to a double-free.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The hw_subfmt field roughly corresponds to the field
AVHWFramesContext.sw_format in ffmpeg. The ffmpeg one is of type
AVPixelFormat (instead of the underlying hardware format), so it's a
good idea to switch to this too as preparation.
Now the hw_subfmt field is an mp_imgfmt instead of an opaque/API-
specific number. VDPAU and Direct3D11 already used mp_imgfmt, but
Videotoolbox and VAAPI had to be switched.
One somewhat user-visible change is that the verbose log will now always
show the hw_subfmt as an image format, instead of as a nonsensical
number.
(In the end it would be good if we could switch to AVHWFramesContext
completely, but the upstream API is incomplete and doesn't cover
Direct3D11 and Videotoolbox.)
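For reference, this is how the ffmpeg side expresses the same thing (a
sketch; NV12 as sw_format and the pool size are assumptions):

    #include <libavutil/hwcontext.h>

    // ffmpeg's equivalent of hw_subfmt: sw_format is a plain
    // AVPixelFormat describing what's inside the opaque hw surfaces.
    static AVBufferRef *alloc_vaapi_frames(AVBufferRef *device_ref, int w, int h)
    {
        AVBufferRef *frames_ref = av_hwframe_ctx_alloc(device_ref);
        if (!frames_ref)
            return NULL;
        AVHWFramesContext *fctx = (AVHWFramesContext *)frames_ref->data;
        fctx->format    = AV_PIX_FMT_VAAPI; // the hardware format
        fctx->sw_format = AV_PIX_FMT_NV12;  // the "subformat"
        fctx->width     = w;
        fctx->height    = h;
        fctx->initial_pool_size = 4;        // arbitrary for this sketch
        if (av_hwframe_ctx_init(frames_ref) < 0)
            av_buffer_unref(&frames_ref);   // sets frames_ref to NULL
        return frames_ref;
    }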
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
documented where not), which makes the whole thing saner and cleaner. In
particular, thread-safety rules become less subtle and more obvious.
The new internal API makes it easier to support multiple OpenGL interop
backends. (Although this is not done yet, and it's not clear whether it
ever will be.)
This also removes all the API-specific fields from mp_hwdec_ctx and
replaces them with a "ctx" field. For d3d in particular, we drop the
mp_d3d_ctx struct completely, and pass the interfaces directly.
Remove the emulation checks from vaapi.c and vdpau.c; they are
pointless, and the checks that matter are done on the VO layer.
The d3d hardware decoders might slightly change behavior: dxva2-copy
will not use the VO device anymore if the VO supports proper interop.
This pretty much assumes that in such cases the VO will not use any
form of exclusive mode, which makes using the VO device in copy mode
unnecessary.
This is a big refactor. Some things may be untested and could be broken.
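A hypothetical sketch of the new shape (names follow the description
above, not necessarily the real header):

    // mp_hwdec_devices is opaque; access goes through (mostly
    // thread-safe) functions instead of public fields.
    struct mp_hwdec_devices;

    struct mp_hwdec_ctx {
        const char *driver_name;
        void *ctx;  // API-specific (VADisplay, IDirect3DDevice9*, ...),
                    // replacing the old per-API fields
    };

    struct mp_hwdec_ctx *hwdec_devices_get(struct mp_hwdec_devices *devs,
                                           int hwdec_type);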
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Until now, we have made the assumption that a driver will use only one
hardware surface format. The format is dictated by the driver (you
don't create surfaces with a specific format - you just pass an
rt_format and get a surface that will be in a specific driver-chosen
format).
In particular, the renderer created a dummy surface to probe the format,
and hoped the decoder would produce the same format. Due to a driver
bug, this required a workaround to actually get the same format the
driver chose.
Change this so that the format is determined in the decoder. The format
is then passed down as hw_subfmt, which allows the renderer to configure
itself with the correct format. If the hardware surface changes its
format midstream, the renderer can be reconfigured using the normal
mechanisms.
This calls va_surface_init_subformat() each time after the decoder
returns a surface. Since libavcodec/AVFrame has no concept of sub-
formats, this is unavoidable. It creates and destroys a derived
VAImage, but this shouldn't have any bad performance effects (at
least I didn't notice any measurable effects).
Note that vaDeriveImage() failures are silently ignored as some
drivers (the vdpau wrapper) support neither vaDeriveImage nor EGL
interop. In addition, we still probe whether we can map an image
in the EGL interop code. This is important as it's the only way
to determine whether EGL interop is supported at all. With respect
to the driver bug mentioned above, it doesn't matter which format
the test surface has.
In vf_vavpp, also remove the rt_format guessing business. I think the
existing logic was a bit meaningless anyway. It's not even a given
that vavpp produces the same rt_format for output.
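The per-surface query presumably reduces to something like this (our
sketch):

    #include <stdint.h>
    #include <va/va.h>

    // Determine the driver-chosen format of a decoded surface by
    // deriving a VAImage and reading its fourcc; failure is non-fatal,
    // as described above.
    static uint32_t query_surface_fourcc(VADisplay dpy, VASurfaceID surface)
    {
        VAImage image;
        if (vaDeriveImage(dpy, surface, &image) != VA_STATUS_SUCCESS)
            return 0;
        uint32_t fourcc = image.format.fourcc;
        vaDestroyImage(dpy, image.image_id);
        return fourcc;
    }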
|
| |
|
|
|
|
|
|
|
|
| |
They are evil and should be eradicated. Some of these were pretty dumb
anyway.
There are probably some more around in platform-specific code, or in
other code not enabled by default on Linux.
|
|
|
|
|
|
|
| |
This VA_FOURCC isn't even defined by the latest drivers, so I'm just
assuming it doesn't exist and never existed. For planar 4:2:0,
VA_FOURCC_YV12 is normally preferred, and there's even a VA_FOURCC_IYUV
for 4:2:0 with unswapped planes.
|
|
|
|
|
|
|
|
|
| |
This makes it much faster if the surface is really mapped from GPU
memory. It's slightly slower than system memcpy if used on system
memory. We can't reliably determine which type of memory it's actually
located in, so we use the GPU memcpy in all cases.
Fixes #2317.
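The core trick is an SSE4.1 streaming load from the (possibly
write-combined) mapping; a stripped-down sketch assuming 16-byte
alignment, a size divisible by 16, and a build with SSE4.1 enabled:

    #include <stddef.h>
    #include <smmintrin.h> // SSE4.1

    static void gpu_memcpy_sketch(void *dst, const void *src, size_t size)
    {
        __m128i *d = dst;
        const __m128i *s = src;
        // _mm_stream_load_si128 uses the streaming load path, which is
        // what makes reads from GPU-mapped memory fast.
        for (size_t n = 0; n < size / 16; n++)
            _mm_store_si128(&d[n], _mm_stream_load_si128((__m128i *)&s[n]));
    }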
|
|
|
|
|
|
|
|
|
|
| |
Printing "Using vaDeriveImage()" every frame is too verbose, so raise
the log level.
mp_image strides are in int and not unsigned int; fix this. It's not
like it actually matters, though.
Finish a comment.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This can happen if the hw decoder allocates padded surfaces (e.g.
mod16), but the VPP output surface was allocated with the exact size.
Apparently VPP requires matching input and output sizes, or it will add
artifacts. In this case, it mirrored pixels into the bottom few rows.
Note that the previous commit should have fixed this. But it didn't
work, while this commit does.
Fixes #2320.
|
|
|
|
| |
Appears to be required by some hardware. Whatever.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
vaQueryImageFormats() returns a randomly ordered list - so we shouldn't
assume that the first format on the list which works is the best. This
effectively switches to nv12 instead of yuv420p on some drivers.
We handle this by reusing va_to_imgfmt[], and ordering it by preference.
We hardcode that GPUs prefer nv12 over yuv420p. In theory we could do
complicated probing (allocate a dummy surface + use vaDeriveImage on it,
then retrieve the FourCC) - but everything that could break this
assumption in the future is unsupported anyway (like 10 bit or 4:4:4),
so this is fine.
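A sketch of preference-ordered selection (simplified to two hardcoded
FourCCs; the real code goes through va_to_imgfmt[]):

    #include <stdint.h>
    #include <stdlib.h>
    #include <va/va.h>

    // The list from vaQueryImageFormats() has no meaningful order, so
    // scan it against our own preference list (nv12 before yv12).
    static uint32_t pick_image_format(VADisplay dpy)
    {
        static const uint32_t pref[] = { VA_FOURCC_NV12, VA_FOURCC_YV12 };
        int num = vaMaxNumImageFormats(dpy);
        VAImageFormat *formats = calloc(num > 0 ? num : 1, sizeof(*formats));
        if (!formats || vaQueryImageFormats(dpy, formats, &num) != VA_STATUS_SUCCESS)
            num = 0;
        uint32_t best = 0;
        for (int p = 0; p < 2 && !best; p++) {
            for (int i = 0; i < num; i++) {
                if (formats[i].fourcc == pref[p]) {
                    best = pref[p];
                    break;
                }
            }
        }
        free(formats);
        return best;
    }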
|
|
|
|
|
|
| |
This reverts commit d660e67be9cc7d79d81e0c09c2720ea6d0a35e3a.
Fixes #2123.
|
|
|
|
|
|
|
|
|
|
|
| |
Drop libva versions below 0.34.0. These are ancient, so I don't care.
Drop the vo_vaapi deinterlacer as well. With 0.34.0, VPP is always
available, and deinterlacing is done with vf_vavpp.
The vaCreateSurfaces() function changed its signature - actually it did
so in 0.34.0 or so, and <va/va_compat.h> defined a macro to make it use
the old signature.
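The new-style call looks like this (sketch; rt_format and attribute
usage kept minimal):

    #include <va/va.h>

    // Post-0.34.0 signature: rt_format plus an (optional) attribute
    // list, instead of the old fixed-argument form.
    static VAStatus create_surfaces(VADisplay dpy, unsigned w, unsigned h,
                                    VASurfaceID *ids, unsigned count)
    {
        return vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420,
                                w, h, ids, count,
                                NULL, 0); // no VASurfaceAttribs
    }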
|
|
|
|
|
|
|
|
|
|
|
|
| |
Work around the fact that FFmpeg doesn't distinguish between surface
size and cropped size. The decoder always aligns the surface size to
something
"convenient" (e.g. 16 for h264), and to get to the correct cropped size,
the output image's width/height is reduced. Using the cropped size
instead of the real surface size breaks the libva API in certain cases,
so we simply store and use the original size in our per-surface struct.
(If data is cropped on the left/top borders, hw decoding will simply
display these - FFmpeg doesn't let us do better.)
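The per-surface struct mentioned above might carry something like this
(hypothetical fields):

    #include <va/va.h>

    // Keep the real allocation size next to the surface ID, independent
    // of what FFmpeg reports after cropping.
    struct va_surface {
        VASurfaceID id;
        int rt_format;
        int w, h;   // original (aligned, uncropped) surface size
    };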
|
| |
|
|
|
|
|
|
|
| |
In theory, this code path avoids a copy. In practice, it never seems
to get enabled at all. But it does have potential for weird bugs or
performance issues (like being mapped from non-cacheable memory),
so kill it.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When using --hwdec=auto, about half of all systems will print:
"[vdpau] Error when calling vdp_device_create_x11: 1"
This happens because mpv will usually be linked against both vdpau and
vaapi libs, but the drivers are not necessarily available. Then trying
to load a driver will fail. This is a normal part of probing, but the
error messages were printed anyway. Silence them by explicitly
distinguishing the probing case.
This pretty much goes through all the layers. We actually consider
loading hw backends for vo_opengl always "auto probed", even if a hw
backend is explicitly requested. In this case vd_lavc will print a
warning message anyway (adjust this message a bit).
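In miniature, the mechanism is just a log-level switch (mp_msg() is
mpv's logging primitive; the MSGL_* values here are illustrative):

    #include <stdbool.h>

    struct mp_log;
    void mp_msg(struct mp_log *log, int lev, const char *fmt, ...);
    #define MSGL_ERR 1  // illustrative values; mpv defines these in msg.h
    #define MSGL_V   6

    // During probing, demote the error to verbose level so it stays
    // hidden unless the user asks for -v.
    static void log_probe_failure(struct mp_log *log, bool probing, int err)
    {
        int msgl = probing ? MSGL_V : MSGL_ERR;
        mp_msg(log, msgl, "Error when calling vdp_device_create_x11: %d\n", err);
    }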
|
|
|
|
|
|
|
|
|
| |
This is somewhat imperfect, because detection of hw decoding APIs is
mostly done on demand, and often avoided if not necessary. (For example,
we know very well that there are no hw decoders for certain codecs.)
This also requires every hwdec backend to identify itself (see hwdec.h
changes).
|
|
|
|
|
|
|
|
| |
Instead of converting the hw surface to an image in the VO, provide a
generic way to convert hw surfaces, and use this in the screenshot code.
It's all relatively straightforward, except vdpau is being terrible. It
needs a huge chunk of new code, because copying back is not simple.
|
|
|
|
|
|
|
|
|
|
|
| |
Before this commit, each hw backend had its own specific struct types
for context, and some, like VDA, had none at all. Add a context struct
(mp_hwdec_ctx) that provides a somewhat generic way to pass the hwdec
context around. Some things get slightly better, some slightly more
verbose.
mp_hwdec_info is still around; it's still needed, but is reduced to its
role of handling delayed loading of the hwdec backend.
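A hypothetical sketch of such a context struct (the members shown are
illustrative, not the real ones):

    struct mp_vaapi_ctx;
    struct mp_vdpau_ctx;

    // One generic context type with per-API members.
    struct mp_hwdec_ctx {
        int type;                        // which hwdec backend this is
        struct mp_vaapi_ctx *vaapi_ctx;  // set if type is vaapi
        struct mp_vdpau_ctx *vdpau_ctx;  // set if type is vdpau
    };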
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
So, talking to a certain Intel dev, it sounded like modern VA-API
drivers are reasonably thread-safe. But apparently that is not the case.
Not at all. So add approximate locking around all vaapi API calls.
The problem appeared once we moved decoding and display to different
threads. That means the "vaapi-copy" mode was unaffected, but decoding
with vo_vaapi or vo_opengl led to random crashes.
Untested on real Intel hardware. With the vdpau emulation, it seems to
work fine - but actually it worked fine even before this commit, because
vdpau was written and designed not by morons, but by competent people
(vdpau is guaranteed to be fully thread-safe).
There is some probability that this commit doesn't fix things entirely.
One problem is that the locking might not be complete. For one,
libavcodec _also_ accesses vaapi, so we have to rely on our own guesses
about how and when lavc uses vaapi (since we disable multithreading when
doing hw decoding, our guess should be relatively good, but it's still a
lavc implementation detail). The other reason this commit might not help
is Intel's amazing potential to fuck up anything that is good and holy.
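The locking itself is trivial; the hard part is putting it around every
call. A sketch of the pattern (not mpv's actual wrappers):

    #include <pthread.h>
    #include <va/va.h>

    // Serialize all libva calls through one mutex, since (contrary to
    // expectations) the drivers are not thread-safe.
    static pthread_mutex_t va_mutex = PTHREAD_MUTEX_INITIALIZER;

    static VAStatus locked_vaSyncSurface(VADisplay dpy, VASurfaceID surface)
    {
        pthread_mutex_lock(&va_mutex);
        VAStatus status = vaSyncSurface(dpy, surface);
        pthread_mutex_unlock(&va_mutex);
        return status;
    }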
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Until now, failure to allocate image data resulted in a crash (i.e.
abort() was called). This was intentional, because it's pretty silly to
degrade playback, and in almost all situations, the OOM will probably
kill you anyway. (And then there's the standard Linux overcommit
behavior, which also will kill you at some point.)
But I changed my opinion, so here we go. This change does not affect
_all_ memory allocations, just image data. Now in most failure cases,
the output will just be skipped. For video filters, this coincidentally
means that failure is treated as EOF (because the playback core assumes
EOF if nothing comes out of the video filter chain). In other
situations, output might be in some way degraded, like skipping frames,
not scaling OSD, and such.
Functions whose return values changed semantics:
mp_image_alloc
mp_image_new_copy
mp_image_new_ref
mp_image_make_writeable
mp_image_setrefp
mp_image_to_av_frame_and_unref
mp_image_from_av_frame
mp_image_new_external_ref
mp_image_new_custom_ref
mp_image_pool_make_writeable
mp_image_pool_get
mp_image_pool_new_copy
mp_vdpau_mixed_frame_create
vf_alloc_out_image
vf_make_out_image_writeable
glGetWindowScreenshot
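The new calling convention, sketched with hypothetical signatures:

    struct mp_image;  // mpv's image type, opaque in this sketch

    // New contract (hypothetical signature): NULL on allocation failure
    // instead of calling abort().
    struct mp_image *mp_image_alloc(int imgfmt, int w, int h);

    static struct mp_image *alloc_output(int imgfmt, int w, int h)
    {
        struct mp_image *img = mp_image_alloc(imgfmt, w, h);
        if (!img)
            return NULL;  // skipped output; a filter chain reads this as EOF
        return img;
    }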
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
mpv supports two hardware decoding APIs on Linux: vdpau and vaapi. Each
of these has emulation wrappers. The wrappers are usually slower and
have fewer features than their native counterparts. In particular, the
libva vdpau driver is practically unmaintained.
Check the vendor string and print a warning if emulation is detected.
Checking vendor strings is a very stupid thing to do, but I find the
thought of people using an emulated API for no reason worse.
Also, make --hwdec=auto never use an API that is detected as emulated.
This doesn't work quite right yet, because once one API is loaded,
vo_opengl doesn't unload it, so no hardware decoding will be used if the
first probed API (usually vdpau) is rejected. But good enough.
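The check presumably looks like this (the exact substring is an
assumption based on the libva-vdpau-driver's vendor string):

    #include <stdbool.h>
    #include <string.h>
    #include <va/va.h>

    // Heuristic sketch: identify the vdpau-backed wrapper by its
    // vendor string.
    static bool is_emulated_vaapi(VADisplay display)
    {
        const char *vendor = vaQueryVendorString(display);
        return vendor && strstr(vendor, "VDPAU backend");
    }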
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
VAAPI has some ambiguous image formats, like VA_FOURCC_I420,
VA_FOURCC_IYUV, VA_FOURCC_YV12 (the latter the same as the first two,
just with swapped planes). There is potentially a problem when one
specific VAAPI format was picked, and converting it to an mpv format and
back to a VAAPI FourCC would r