author    wm4 <wm4@nowhere>  2016-04-11 20:46:05 +0200
committer wm4 <wm4@nowhere>  2016-04-11 22:03:26 +0200
commit    f5ff2656e0d192a2e25fe5f65edf219972211a48 (patch)
tree      4400fd5f98c38a7c0923a5f8b468335684334f29 /video/vaapi.c
parent    49431626cb1bde400ab6de6adc68ce39cdbbf6f8 (diff)
vaapi: determine surface format in decoder, not in renderer
Until now, we have made the assumption that a driver will use only one
hardware surface format. The format is dictated by the driver (you
don't create surfaces with a specific format; you just pass an
rt_format and get a surface that will be in a specific driver-chosen
format).
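To make that distinction concrete, here is a minimal sketch of plain
libva surface allocation (illustrative only, not code from this commit):
the caller picks an rt_format class, and the driver picks the fourcc.

```c
#include <va/va.h>

// Request a 4:2:0 surface. VA_RT_FORMAT_YUV420 only names the chroma
// class; whether the surface ends up as NV12, YV12, etc. is the
// driver's choice, and this call does not report it.
static VASurfaceID alloc_yuv420_surface(VADisplay dpy, unsigned w, unsigned h)
{
    VASurfaceID surface = VA_INVALID_SURFACE;
    VAStatus status = vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, w, h,
                                       &surface, 1, NULL, 0);
    return status == VA_STATUS_SUCCESS ? surface : VA_INVALID_SURFACE;
}
```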
In particular, the renderer created a dummy surface to probe the format,
and hoped the decoder would produce the same format. Due to a driver
bug, this required a workaround to actually get the same format as the
one the driver used.
Change this so that the format is determined in the decoder. The format
is then passed down as hw_subfmt, which allows the renderer to configure
itself with the correct format. If the hardware surface changes its
format midstream, the renderer can be reconfigured using the normal
mechanisms.
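As an illustration of that reconfiguration path (all names below are
hypothetical, not mpv's actual renderer API), the renderer only has to
compare each frame's hw_subfmt with what it is currently configured for:

```c
#include <stdbool.h>
#include <stdint.h>

// Hypothetical renderer state: the fourcc the interop is set up for.
struct renderer_state {
    uint32_t configured_subfmt;
};

// Called once per frame: rebuild format-dependent state lazily if the
// surface format changed midstream, otherwise keep the current setup.
static bool ensure_subfmt(struct renderer_state *r, uint32_t hw_subfmt)
{
    if (r->configured_subfmt == hw_subfmt)
        return true;
    // ... tear down and re-create format-dependent interop state ...
    r->configured_subfmt = hw_subfmt;
    return true;
}
```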
This calls va_surface_init_subformat() each time after the decoder
returns a surface. Since libavcodec/AVFrame has no concept of sub-
formats, this is unavoidable. It creates and destroys a derived
VAImage, but this shouldn't have any bad performance effects (at
least I didn't notice any measurable effects).
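A hypothetical call site (decode_next_frame() and push_to_renderer()
are illustrative stand-ins, not mpv functions) would look roughly like:

```c
// Sketch only: tag every frame the decoder returns with its real
// sub-format before it reaches the renderer. va_surface_init_subformat()
// (added in the diff below) is a NOP once hw_subfmt is known.
static void process_one_frame(struct decoder *dec)
{
    struct mp_image *frame = decode_next_frame(dec);
    if (!frame)
        return;
    va_surface_init_subformat(frame);
    push_to_renderer(frame);
}
```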
Note that vaDeriveImage() failures are silently ignored, as some
drivers (the vdpau wrapper) support neither vaDeriveImage nor EGL
interop. In addition, we still probe whether we can map an image
in the EGL interop code. This is important as it's the only way
to determine whether EGL interop is supported at all. With respect
to the driver bug mentioned above, it doesn't matter which format
the test surface has.
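A simplified sketch of such a probe (not mpv's actual EGL interop code):
create a throwaway surface and check whether vaDeriveImage() succeeds on
it; drivers like the vdpau wrapper fail here, which rules out EGL
interop. As noted above, the test surface's format does not matter.

```c
#include <stdbool.h>
#include <va/va.h>

// Probe whether the driver can derive images from surfaces at all.
static bool probe_derive_image(VADisplay dpy)
{
    VASurfaceID surface = VA_INVALID_SURFACE;
    if (vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, 64, 64,
                         &surface, 1, NULL, 0) != VA_STATUS_SUCCESS)
        return false;

    VAImage img = { .image_id = VA_INVALID_ID };
    bool ok = vaDeriveImage(dpy, surface, &img) == VA_STATUS_SUCCESS;
    if (ok)
        vaDestroyImage(dpy, img.image_id);
    vaDestroySurfaces(dpy, &surface, 1);
    return ok;
}
```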
In vf_vavpp, also remove the rt_format guessing business. I think the
existing logic was a bit meaningless anyway. It's not even a given
that vavpp produces the same rt_format for output.
Diffstat (limited to 'video/vaapi.c')
 video/vaapi.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+), 0 deletions(-)
diff --git a/video/vaapi.c b/video/vaapi.c
index 61d94ef156..9a5820a98c 100644
--- a/video/vaapi.c
+++ b/video/vaapi.c
@@ -487,6 +487,38 @@ struct mp_image *va_surface_download(struct mp_image *src,
     return NULL;
 }
 
+// Set the hw_subfmt from the surface's real format. Because of this bug:
+// https://bugs.freedesktop.org/show_bug.cgi?id=79848
+// it should be assumed that the real format is only known after an arbitrary
+// vaCreateContext() call has been made, or even better, after the surface
+// has been rendered to.
+// If the hw_subfmt is already set, this is a NOP.
+void va_surface_init_subformat(struct mp_image *mpi)
+{
+    VAStatus status;
+    if (mpi->params.hw_subfmt)
+        return;
+    struct va_surface *p = va_surface_in_mp_image(mpi);
+    if (!p)
+        return;
+
+    VAImage va_image = { .image_id = VA_INVALID_ID };
+
+    va_lock(p->ctx);
+
+    status = vaDeriveImage(p->display, va_surface_id(mpi), &va_image);
+    if (status != VA_STATUS_SUCCESS)
+        goto err;
+
+    mpi->params.hw_subfmt = va_image.format.fourcc;
+
+    status = vaDestroyImage(p->display, va_image.image_id);
+    CHECK_VA_STATUS(p->ctx, "vaDestroyImage()");
+
+err:
+    va_unlock(p->ctx);
+}
+
 struct pool_alloc_ctx {
     struct mp_vaapi_ctx *vaapi;
     int rt_format;