hw_vaapi.c didn't do much of interest anymore. Other than the function
to create a device for decoding with vaapi-copy, everything can be done
by generic code. Other libavcodec hwaccels are planned to provide the
same API as vaapi; once they do, it will be possible to drop the other
hw_ files in favor of this generic code.
|
The lock was disabled recently. This commit gets rid of the dummied-out
calls. The main reason for removing it is that there is no apparent need
for it anymore, and the new FFmpeg vaapi code does not use or provide
such a lock (there are some places which we cannot control and which do
vaapi API calls, like frame destructors).
|
Fixes vf_vavpp crashing with the new vaapi decode API.
|
This makes "generic" interaction with libav* components easier.
|
For convenience. Since we still have code that works even if creating an
AVHWDeviceContext fails, failure is ignored. (Although currently,
creation succeeds even with the stale/abandoned vdpau wrapper driver.)
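
A minimal sketch of what such a wrapping can look like with libavutil's
hwcontext API (function name ours; error handling trimmed):

    #include <libavutil/hwcontext.h>
    #include <libavutil/hwcontext_vaapi.h>

    // Sketch: wrap an existing VADisplay in an AVHWDeviceContext.
    // Failure is non-fatal; the caller just keeps the ref at NULL.
    static AVBufferRef *wrap_va_display(VADisplay display)
    {
        AVBufferRef *ref = av_hwdevice_ctx_alloc(AV_HWDEVICE_TYPE_VAAPI);
        if (!ref)
            return NULL;
        AVHWDeviceContext *hwctx = (void *)ref->data;
        AVVAAPIDeviceContext *vactx = hwctx->hwctx;
        vactx->display = display;
        if (av_hwdevice_ctx_init(ref) < 0)
            av_buffer_unref(&ref); // sets ref to NULL
        return ref;
    }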
|
Makes va_surface_download() call mp_image_hw_download() for
libavutil-allocated surfaces, which in turn calls
av_hwframe_transfer_data().
mp_image_hw_download() is actually not specific to vaapi, and can be
used for any hw surface allocated by libavutil.
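
The underlying operation is a single libavutil call; a rough sketch
(helper name ours):

    #include <libavutil/frame.h>
    #include <libavutil/hwcontext.h>

    // Sketch: copy a hw frame's pixels into a new software frame.
    static AVFrame *download_hw_frame(AVFrame *hw)
    {
        AVFrame *sw = av_frame_alloc();
        if (!sw)
            return NULL;
        // With sw->format left at AV_PIX_FMT_NONE, libavutil picks the
        // first supported download format itself.
        if (av_hwframe_transfer_data(sw, hw, 0) < 0) {
            av_frame_free(&sw);
            return NULL;
        }
        return sw;
    }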
|
AVHWDeviceContext.user_opaque is reserved to libavutil under certain
circumstances, while AVHWFramesContext.user_opaque is truly free for use
by us. It's slightly simpler too.
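
A sketch of the pattern (the attached struct is hypothetical):

    #include <libavutil/hwcontext.h>

    struct our_pool_state { int dummy; /* whatever we attach */ };

    // Sketch: private state goes on the frames context, whose
    // user_opaque field is truly ours, unlike the device context's.
    static void attach_state(AVHWFramesContext *fctx,
                             struct our_pool_state *st)
    {
        fctx->user_opaque = st;
    }

    static struct our_pool_state *get_state(AVHWFramesContext *fctx)
    {
        return fctx->user_opaque;
    }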
|
A recent commit added code that checks some HAVE_ symbols in this file.
No config.h include was added, so they could be unavailable and break
compilation (in practice, just --hwdec=vaapi-copy would break).
Not sure how I missed this; maybe waf defined these symbols on the
compiler command line for some reason.
|
We usually attach some significant metadata and context to "our"
surfaces. Surfaces created by libavutil (as we plan to do when using
the new vaapi decode API in the following commit) don't have this
context, so e.g. copy decoding mode won't work.
Add tons of hacks to make this somehow work.
Eventually we will use libavutil's mechanisms and drop the hacks.
|
Preparation for the following commits.
|
This is available since the first commit after libva 0.39.4. Since the
version hasn't been bumped since then, we just check some other symbol
that was added at the same time (I'd rather not add a configure check).
The libva message callback repeats the endlessly repeated API mistakes
of libraries using global message callback handlers. But it's the only
way to shut up libva's dumb messages to stderr, so add something
complicated and dumb to work around libva's stupidity.
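
In current libva the registration is per-display; the version targeted
here had a global variant, which is what required the complicated
workaround. A sketch against the newer API:

    #include <va/va.h>

    // Sketch: redirect libva's messages into our own logger instead of
    // letting them go to stderr.
    static void va_message_cb(void *context, const char *msg)
    {
        // forward msg to the logger passed as context
    }

    static void silence_libva(VADisplay display, void *log)
    {
        vaSetErrorCallback(display, va_message_cb, log);
        vaSetInfoCallback(display, va_message_cb, log);
    }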
|
Just some minor refactoring within va_initialize() as preparation for
the next commit.
Also, do not call vaTerminate(display) on failures. All callers already
do this, so this would have led to a double-free.
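
The resulting ownership rule, sketched (names and signatures are
approximations of the mpv ones, not exact):

    #include <va/va.h>
    #include <va/va_x11.h>

    // Sketch: the caller owns the display; va_initialize() itself must
    // not call vaTerminate() on failure, or we'd free it twice.
    static struct mp_vaapi_ctx *create_ctx(Display *x11, struct mp_log *log)
    {
        VADisplay display = vaGetDisplay(x11);
        struct mp_vaapi_ctx *ctx = va_initialize(display, log, false);
        if (!ctx)
            vaTerminate(display); // exactly one vaTerminate, here
        return ctx;
    }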
|
The hw_subfmt field roughly corresponds to the field
AVHWFramesContext.sw_format in ffmpeg. The ffmpeg one is of the type
AVPixelFormat (instead of the underlying hardware format), so it's a
good idea to switch to this too, as preparation.
Now the hw_subfmt field is an mp_imgfmt instead of an opaque/API-
specific number. VDPAU and Direct3D11 already used mp_imgfmt, but
Videotoolbox and VAAPI had to be switched.
One somewhat user-visible change is that the verbose log will now always
show the hw_subfmt as an image format, instead of as a nonsensical number.
(In the end it would be good if we could switch to AVHWFramesContext
completely, but the upstream API is incomplete and doesn't cover
Direct3D11 and Videotoolbox.)
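
The correspondence, sketched with mpv's existing pixfmt mapping helper
(function name ours; this is illustrative, since mpv doesn't use
AVHWFramesContext yet at this point):

    #include <libavutil/hwcontext.h>
    #include "video/fmt-conversion.h" // pixfmt2imgfmt()

    // Sketch: hw_subfmt now holds an mp_imgfmt that mirrors
    // AVHWFramesContext.sw_format.
    static void set_hw_subfmt(struct mp_image_params *p,
                              AVHWFramesContext *fctx)
    {
        p->hw_subfmt = pixfmt2imgfmt(fctx->sw_format);
    }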
|
The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
documented where not), which makes the whole thing saner and cleaner. In
particular, thread-safety rules become less subtle and more obvious.
The new internal API makes it easier to support multiple OpenGL interop
backends. (Although this is not done yet, and it's not clear whether it
ever will be.)
This also removes all the API-specific fields from mp_hwdec_ctx and
replaces them with a "ctx" field. For d3d in particular, we drop the
mp_d3d_ctx struct completely, and pass the interfaces directly.
Remove the emulation checks from vaapi.c and vdpau.c; they are
pointless, and the checks that matter are done on the VO layer.
The d3d hardware decoders might slightly change behavior: dxva2-copy
will not use the VO device anymore if the VO supports proper interop.
This pretty much assumes that in such cases the VO will not use any
form of exclusive mode, which makes using the VO device in copy mode
unnecessary.
This is a big refactor. Some things may be untested and could be broken.
|
Until now, we have made the assumption that a driver will use only one
hardware surface format. The format is dictated by the driver (you
don't create surfaces with a specific format - you just pass a
rt_format and get a surface that will be in a specific driver-chosen
format).
In particular, the renderer created a dummy surface to probe the format,
and hoped the decoder would produce the same format. Due to a driver
bug this required a workaround to actually get the same format as the
driver did.
Change this so that the format is determined in the decoder. The format
is then passed down as hw_subfmt, which allows the renderer to configure
itself with the correct format. If the hardware surface changes its
format midstream, the renderer can be reconfigured using the normal
mechanisms.
This calls va_surface_init_subformat() each time after the decoder
returns a surface. Since libavcodec/AVFrame has no concept of sub-
formats, this is unavoidable. It creates and destroys a derived
VAImage, but this shouldn't have any bad performance effects (at
least I didn't notice any measurable effects).
Note that vaDeriveImage() failures are silently ignored as some
drivers (the vdpau wrapper) support neither vaDeriveImage, nor EGL
interop. In addition, we still probe whether we can map an image
in the EGL interop code. This is important as it's the only way
to determine whether EGL interop is supported at all. With respect
to the driver bug mentioned above, it doesn't matter which format
the test surface has.
In vf_vavpp, also remove the rt_format guessing business. I think the
existing logic was a bit meaningless anyway. It's not even a given
that vavpp produces the same rt_format for output.
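
The probe itself is just a derive/inspect/destroy round trip; roughly
(helper name ours, error handling trimmed):

    #include <stdint.h>
    #include <va/va.h>

    // Sketch: ask the driver which format a decoded surface really has.
    static uint32_t probe_surface_fourcc(VADisplay dpy, VASurfaceID id)
    {
        VAImage image;
        if (vaDeriveImage(dpy, id, &image) != VA_STATUS_SUCCESS)
            return 0; // e.g. vdpau wrapper: silently ignore
        uint32_t fourcc = image.format.fourcc;
        vaDestroyImage(dpy, image.image_id);
        return fourcc;
    }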
|
They are evil and should be eradicated. Some of these were pretty dumb
anyway.
There are probably some more around in platform specific code or other
code not enabled by default on Linux.
|
This VA_FOURCC isn't even defined by the latest drivers, so I'm just
assuming it doesn't exist and never existed. For planar 4:2:0,
VA_FOURCC_YV12 is normally preferred, and there's even a VA_FOURCC_IYUV
for 4:2:0 with unswapped planes.
|
This makes it much faster if the surface is really mapped from GPU
memory. It's slightly slower than system memcpy if used on system
memory. We don't know for sure which type of memory it's in, so we use
the GPU memcpy in all cases.
Fixes #2317.
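
The trick is SSE4.1 streaming loads (movntdqa), which are the only fast
way to read from uncacheable write-combined (USWC) video memory. A much
simplified sketch of the inner loop (assumes 16-byte alignment and a
size that is a multiple of 16; the real code has to handle the edges):

    #include <smmintrin.h> // SSE4.1: _mm_stream_load_si128
    #include <stddef.h>

    // Sketch: copy from (potentially) USWC GPU memory via streaming loads.
    static void gpu_memcpy_aligned(void *dst, const void *src, size_t size)
    {
        __m128i *d = dst;
        __m128i *s = (__m128i *)src; // intrinsic wants a non-const pointer
        for (size_t i = 0; i < size / 16; i++)
            _mm_store_si128(&d[i], _mm_stream_load_si128(&s[i]));
    }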
|
Printing "Using vaDeriveImage()" every frame is too verbose, so raise
the log level.
mp_image strides are int, not unsigned int; fix this. It's not
like it actually matters, though.
Finish a comment.
|
This can happen if the hw decoder allocates padded surfaces (e.g.
mod16), but the VPP output surface was allocated with the exact size.
Apparently VPP requires matching input and output sizes, or it will add
artifacts. In this case, it added mirrored pixels in the bottom few
lines.
Note that the previous commit should have fixed this. But it didn't
work, while this commit does.
Fixes #2320.
|
Appears to be required by some hardware. Whatever.
|
vaQueryImageFormats() returns a randomly ordered list - so we shouldn't
assume the first format on the list which works is the best. This
effectively switches to nv12 instead of yuv420p on some drivers.
We handle this by reusing va_to_imgfmt[], and ordering it by preference.
We hardcode that GPUs prefer nv12 over yuv420p. In theory we could do
complicated probing (allocate a dummy surface + use vaDeriveImage on it,
then retrieve the FourCC) - but all things which could break this assumption
in the future are not supported yet (like 10 bit or 4:4:4), so this is
fine.
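
Conceptually (preference table and names illustrative):

    #include <stdint.h>
    #include <va/va.h>

    // Sketch: pick the first entry of our preference-ordered list that
    // the driver reports, instead of trusting the driver's order.
    static const uint32_t preferred[] = {VA_FOURCC_NV12, VA_FOURCC_YV12};

    static uint32_t pick_fourcc(VADisplay dpy)
    {
        int max = vaMaxNumImageFormats(dpy);
        if (max <= 0)
            return 0;
        VAImageFormat formats[max];
        int num = 0;
        if (vaQueryImageFormats(dpy, formats, &num) != VA_STATUS_SUCCESS)
            return 0;
        for (int p = 0; p < (int)(sizeof(preferred) / sizeof(preferred[0])); p++)
            for (int i = 0; i < num; i++)
                if (formats[i].fourcc == preferred[p])
                    return preferred[p];
        return 0;
    }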
|
This reverts commit d660e67be9cc7d79d81e0c09c2720ea6d0a35e3a.
Fixes #2123.
|
Drop libva versions below 0.34.0. These are ancient, so I don't care.
Drop the vo_vaapi deinterlacer as well. With 0.34.0, VPP is always
available, and deinterlacing is done with vf_vavpp.
The vaCreateSurfaces() function changes its signature - actually it did
in 0.34.0 or so, and <va/va_compat.h> defined a macro to make it use
the old signature.
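
For reference, the 0.34.0+ signature in use (attribute list unused here):

    #include <va/va.h>

    // Sketch: allocate a set of surfaces with the new signature; the
    // legacy variant had no attribute list and a different argument order.
    static VAStatus alloc_surfaces(VADisplay dpy, unsigned w, unsigned h,
                                   VASurfaceID out[4])
    {
        return vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, w, h,
                                out, 4,   // surface array + count
                                NULL, 0); // no VASurfaceAttribs
    }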
|
Work around the fact that FFmpeg doesn't distinguish between surface
size and cropped size. The decoder always aligns the surface size to something
"convenient" (e.g. 16 for h264), and to get to the correct cropped size,
the output image's width/height is reduced. Using the cropped size
instead of the real surface size breaks the libva API in certain cases,
so we simply store and use the original size in our per-surface struct.
(If data is cropped on the left/top borders, hw decoding will simply
display these - FFmpeg doesn't let us do better.)
|
In theory, this code path avoids a copy. In practice, it never seems
to get enabled at all. But it does have potential for weird bugs or
performance issues (like being mapped from non-cacheable memory),
so kill it.
|
When using --hwdec=auto, about half of all systems will print:
"[vdpau] Error when calling vdp_device_create_x11: 1"
This happens because mpv will usually be linked against both vdpau and
vaapi libs, but the drivers are not necessarily available. Then trying
to load a driver will fail. This is a normal part of probing, but the
error messages were printed anyway. Silence them by explicitly
distinguishing probing.
This pretty much goes through all the layers. We actually consider
loading hw backends for vo_opengl as always "auto probed", even if a hw
backend is explicitly requested. In this case vd_lavc will print a
warning message anyway (adjust this message a bit).
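
The pattern reduces to choosing the message level by probe state
(mpv-style sketch; the actual call sites differ):

    #include <stdbool.h>
    #include "common/msg.h"

    // Sketch: demote load errors to verbose level while probing, so a
    // missing driver during --hwdec=auto doesn't print scary errors.
    static void log_load_error(struct mp_log *log, bool probing, int err)
    {
        mp_msg(log, probing ? MSGL_V : MSGL_ERR,
               "Error when calling vdp_device_create_x11: %d\n", err);
    }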
|
This is somewhat imperfect, because detection of hw decoding APIs is
mostly done on demand, and often avoided if not necessary. (For example,
we know very well that there are no hw decoders for certain codecs.)
This also requires every hwdec backend to identify itself (see hwdec.h
changes).
|
Instead of converting the hw surface to an image in the VO, provide a
generic way to convert hw surfaces, and use this in the screenshot code.
It's all relatively straightforward, except vdpau is being terrible. It
needs a huge chunk of new code, because copying back is not simple.
|
Before this commit, each hw backend had their own specific struct types
for context, and some, like VDA, had none at all. Add a context struct
(mp_hwdec_ctx) that provides a somewhat generic way to pass the hwdec
context around. Some things get slightly better, some slightly more
verbose.
mp_hwdec_info is still around; it's still needed, but is reduced to its
role of handling delayed loading of the hwdec backend.
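
Schematically, the struct is little more than a type tag plus a
backend-specific pointer (fields illustrative, not the exact layout):

    // Sketch: one generic handle passed around the player, instead of
    // one context type per hwdec backend (or none at all, as with VDA).
    struct mp_hwdec_ctx {
        int type;   // which backend this is (vaapi/vdpau/vda/...)
        void *priv; // backend context, e.g. a struct mp_vaapi_ctx *
    };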
|
So talking to a certain Intel dev, it sounded like modern VA-API drivers
are reasonably thread-safe. But apparently that is not the case. Not at
all. So add approximate locking around all vaapi API calls.
The problem appeared once we moved decoding and display to different
threads. That means the "vaapi-copy" mode was unaffected, but decoding
with vo_vaapi or vo_opengl led to random crashes.
Untested on real Intel hardware. With the vdpau emulation, it seems to
work fine - but actually it worked fine even before this commit, because
vdpau was written and designed not by morons, but by competent people
(vdpau is guaranteed to be fully thread-safe).
There is some probability that this commit doesn't fix things entirely.
One problem is that locking might not be complete. For one, libavcodec
_also_ accesses vaapi, so we have to rely on our own guesses about how and
when lavc uses vaapi (since we disable multithreading when doing hw
decoding, our guess should be relatively good, but it's still a lavc
implementation detail). One other reason that this commit might not
help is Intel's amazing potential to fuck up anything that is good and
holy.
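
"Approximate locking" here means bracketing each libva call with a
mutex; the shape of it (sketch - the real lock lives in the vaapi
context rather than in a global):

    #include <pthread.h>
    #include <va/va.h>

    // Sketch: serialize vaapi calls, since the drivers turned out not
    // to be thread-safe after all.
    static pthread_mutex_t va_lock = PTHREAD_MUTEX_INITIALIZER;

    static VAStatus locked_vaSyncSurface(VADisplay dpy, VASurfaceID s)
    {
        pthread_mutex_lock(&va_lock);
        VAStatus res = vaSyncSurface(dpy, s);
        pthread_mutex_unlock(&va_lock);
        return res;
    }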
|
Until now, failure to allocate image data resulted in a crash (i.e.
abort() was called). This was intentional, because it's pretty silly to
degrade playback, and in almost all situations, the OOM will probably
kill you anyway. (And then there's the standard Linux overcommit
behavior, which also will kill you at some point.)
But I changed my opinion, so here we go. This change does not affect
_all_ memory allocations, just image data. Now in most failure cases,
the output will just be skipped. For video filters, this coincidentally
means that failure is treated as EOF (because the playback core assumes
EOF if nothing comes out of the video filter chain). In other
situations, output might be in some way degraded, like skipping frames,
not scaling OSD, and such.
Functions whose return values changed semantics (see the sketch after the list):
mp_image_alloc
mp_image_new_copy
mp_image_new_ref
mp_image_make_writeable
mp_image_setrefp
mp_image_to_av_frame_and_unref
mp_image_from_av_frame
mp_image_new_external_ref
mp_image_new_custom_ref
mp_image_pool_make_writeable
mp_image_pool_get
mp_image_pool_new_copy
mp_vdpau_mixed_frame_create
vf_alloc_out_image
vf_make_out_image_writeable
glGetWindowScreenshot
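
After this change, a caller handles NULL instead of relying on an
abort; e.g. (sketch, helper name ours):

    #include "video/mp_image.h"

    // Sketch: allocation failure now just skips the output.
    static struct mp_image *alloc_output(int w, int h)
    {
        struct mp_image *img = mp_image_alloc(IMGFMT_420P, w, h);
        if (!img)
            return NULL; // in a filter, this ends up treated as EOF
        return img;
    }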
|
mpv supports two hardware decoding APIs on Linux: vdpau and vaapi. Each
of these has emulation wrappers. The wrappers are usually slower and
have fewer features than their native counterparts. In particular, the libva
vdpau driver is practically unmaintained.
Check the vendor string and print a warning if emulation is detected.
Checking vendor strings is a very stupid thing to do, but I find the
thought of people using an emulated API for no reason worse.
Also, make --hwdec=auto never use an API that is detected as emulated.
This doesn't work quite right yet, because once one API is loaded,
vo_opengl doesn't unload it, so no hardware decoding will be used if the
first probed API (usually vdpau) is rejected. But good enough.
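
The detection can only be a substring match on the vendor string; a
sketch (the exact strings are driver-dependent):

    #include <stdbool.h>
    #include <string.h>
    #include <va/va.h>

    // Sketch: guess whether this display is the vdpau-backed wrapper.
    static bool va_is_emulated(VADisplay dpy)
    {
        const char *vendor = vaQueryVendorString(dpy);
        return vendor && strstr(vendor, "VDPAU backend");
    }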
|
VAAPI has some ambiguous image formats, like VA_FOURCC_I420,
VA_FOURCC_IYUV, VA_FOURCC_YV12 (the latter is exactly the same as the first
two, just with swapped planes). There is potentially a problem when one
specific VAAPI format was picked, and converting it to a mpv format and
back to a VAAPI FourCC would result in a numerically different format
(even if it's actually the same). Then it could e.g. happen that
functions like va_surface_upload() reallocate the underlying VAImage,
which would be inefficient. Change the code so that this can't happen.
(Probably not a problem in practice with the current VAAPI usage.)
|
Merge va_surface_priv into va_surface.
|
It's not really needed to be public. Other code can just use mp_image.
The only disadvantage is that the other code needs to call an accessor
to get the VASurfaceID.
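
The accessor is trivial; sketched under the assumption that the surface
ID is stashed in one of the mp_image plane pointers:

    #include <stdint.h>
    #include <va/va.h>
    #include "video/mp_image.h"

    // Sketch: recover the VASurfaceID from a plain mp_image.
    static VASurfaceID va_surface_id(struct mp_image *mpi)
    {
        return mpi && mpi->imgfmt == IMGFMT_VAAPI
               ? (VASurfaceID)(uintptr_t)mpi->planes[3] : VA_INVALID_ID;
    }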
|
Although I at first thought it would be better to have a separate
implementation for hwaccels because the differences from software images
are too large, it turns out you can actually save some code with it.
Note that the old implementation had a small memory management bug. This
got painted over in commit 269c1e1, but is hereby solved properly.
Also note that I couldn't test vf_vavpp.c (due to lack of hardware), and
I hope I didn't accidentally break it.
|
"res" can be uninitialized in the error case.
|
This ended up a little bit messy. In order to get a mp_log everywhere,
mostly make use of the fact that va_surface already references global
state anyway.
|
PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not
sure why they were even added, and they sound like they are just for
compatibility with XvMC-style decoding, which sucks anyway.
Now that there's only a single vaapi format, remove the
IMGFMT_IS_VAAPI() macro. Also get rid of IMGFMT_IS_VDA(), which was
unused.
|
Before, a VO could easily refuse to respond to VOCTRL_REDRAW_FRAME,
which means the VO wouldn't redraw OSD and window contents, and the
player would appear frozen to the user. This was a bit stupid, and made
dealing with some corner cases much harder (think of --keep-open, which
was hard to implement, because the VO gets into this state if there are
no new video frames after a seek reset).
Change this, and require VOs to always react to VOCTRL_REDRAW_FRAME.
There are two aspects of this: First, behavior after a (successful)
vo_reconfig() call, but before any video frame has been displayed.
Second, behavior after a vo_seek_reset().
For the first issue, we define that sending VOCTRL_REDRAW_FRAME after
vo_reconfig() should clear the window with black. This requires minor
changes to some VOs. In particular vaapi makes this horribly
complicated, because OSD rendering is bound to a video surface. We
create a black dummy surface for this purpose.
The second issue is much simpler and works already with most VOs: they
simply redraw whatever has been uploaded previously. The exception is
vdpau, which has a complicated mechanism to track and filter video
frames. The state associated with this mechanism is completely cleared
with vo_seek_reset(), so implementing this to work as expected is not
trivial. For now, we just clear the window with black.
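
In VO terms the contract looks roughly like this (sketch of a control
handler; the helpers are hypothetical and the details vary per VO):

    #include <stdint.h>
    #include "video/out/vo.h"

    // Sketch: VOCTRL_REDRAW_FRAME must always be serviced; with no
    // frame uploaded yet, the window is cleared to black instead.
    static int control(struct vo *vo, uint32_t request, void *data)
    {
        switch (request) {
        case VOCTRL_REDRAW_FRAME:
            if (have_current_frame(vo))
                redraw_current_frame(vo); // previously uploaded content
            else
                draw_black_and_osd(vo);   // e.g. vaapi's dummy surface
            return VO_TRUE;
        }
        return VO_NOTIMPL;
    }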
|
How embarrassing. This could make --hwdec=vaapi-copy as well as
screenshots with vo_vaapi randomly fail. Regression since
commit b8382aa.
|
This can just be not supported, so making it look like a real error
doesn't make much sense.
|
Just for robustness. Also print a warning in vo_vaapi if this happens.
|
Don't allocate a VAImage and an mp_image every time. VAImages are cached
in the surfaces themselves, and for mp_image an explicit pool is
created. The retry loop runs only once for each surface now.
This also makes use of vaDeriveImage() if possible.
|
The code using FFSWAP was moved from vo_vaapi.c to vaapi.c, which didn't
include libavutil/common.h anymore, just libavutil/avutil.h. The header
avutil.h doesn't include common.h recursively in Libav, so it broke
there.
Add FFSWAP as MPSWAP in mp_common.h (copy-pasted from FFmpeg) to make
sure this doesn't happen again. (This kind of stuff happens all too
often, so screw libavutil.)
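
The macro itself is FFmpeg's one-liner under a new name:

    // As added to mp_common.h: type-parametrized swap (FFSWAP clone).
    #define MPSWAP(type, a, b) \
        do { type SWAP_tmp = b; b = a; a = SWAP_tmp; } while (0)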
|
Merged from pull request