path: root/video/out/opengl/hwdec_vaegl.c
* vo_opengl: EGL: fix hwdec probing (wm4, 2016-05-05, 1 file, -1/+1)
  If ANGLE was probed before (but rejected), the ANGLE API can remain "initialized", and eglGetCurrentDisplay() will return a non-NULL EGLDisplay. If a native GL context is then used, the ANGLE/EGL API will (apparently) keep working alongside the native OpenGL API. Since GL objects are just numbers, they simply fail to interact, and OpenGL gets invalid textures. For some reason this results in black textures.

  With VAAPI-EGL, something similar could happen in theory, but didn't in practice.
* vaapi: determine surface format in decoder, not in renderer (wm4, 2016-04-11, 1 file, -37/+13)
  Until now, we have made the assumption that a driver will use only one hardware surface format. The format is dictated by the driver (you don't create surfaces with a specific format - you just pass an rt_format and get a surface that will be in a specific driver-chosen format). In particular, the renderer created a dummy surface to probe the format, and hoped the decoder would produce the same format. Due to a driver bug, this required a workaround to actually get the same format as the driver did.

  Change this so that the format is determined in the decoder. The format is then passed down as hw_subfmt, which allows the renderer to configure itself with the correct format. If the hardware surface changes its format midstream, the renderer can be reconfigured using the normal mechanisms.

  This calls va_surface_init_subformat() each time the decoder returns a surface. Since libavcodec/AVFrame has no concept of sub-formats, this is unavoidable. It creates and destroys a derived VAImage, but this shouldn't have any bad performance effects (at least I didn't notice any measurable ones). Note that vaDeriveImage() failures are silently ignored, as some drivers (the vdpau wrapper) support neither vaDeriveImage nor EGL interop.

  In addition, we still probe whether we can map an image in the EGL interop code. This is important, as it's the only way to determine whether EGL interop is supported at all. With respect to the driver bug mentioned above, it doesn't matter which format the test surface has.

  In vf_vavpp, also remove the rt_format guessing business. I think the existing logic was a bit meaningless anyway; it's not even a given that vavpp produces the same rt_format for output.
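  A minimal sketch of the kind of probe this describes (illustrative only, not mpv's actual va_surface_init_subformat(); the helper name and error handling are made up here): derive a VAImage from the decoded surface, read its fourcc, and destroy the image again. The resulting fourcc is what would then be passed down as hw_subfmt.

      /* Illustrative sketch: probe the driver-chosen format of a decoded
       * surface by deriving a VAImage from it. vaDeriveImage() failures are
       * silently ignored, since some drivers (e.g. the vdpau wrapper) support
       * neither vaDeriveImage() nor EGL interop. */
      #include <stdint.h>
      #include <va/va.h>

      static uint32_t probe_surface_fourcc(VADisplay dpy, VASurfaceID surface)
      {
          VAImage img = { .image_id = VA_INVALID_ID };
          uint32_t fourcc = 0;
          if (vaDeriveImage(dpy, surface, &img) == VA_STATUS_SUCCESS) {
              fourcc = img.format.fourcc;      /* e.g. VA_FOURCC_NV12 */
              vaDestroyImage(dpy, img.image_id);
          }
          return fourcc;                       /* 0 if nothing could be derived */
      }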
* vo_opengl: hwdec: use IDs for API, and log which backend is used (wm4, 2016-02-01, 1 file, -1/+2)
  Since there can be multiple backends for a single API (vaapi can use GLX or EGL), not logging the exact backend name is annoying. So add it. At the same time, there is no need to duplicate the name as used by the --hwdec options, so replace it with the numeric hwdec API ID.
* vo_opengl: vaapi: don't expect EGL exts. to be in common ext. string (wm4, 2016-01-22, 1 file, -2/+6)
* vo_opengl: vaapi: reorganize platform entrypoints as table (wm4, 2016-01-21, 1 file, -15/+20)
* vo_opengl: add KMS/DRM VAAPI hardware decoding interop (wm4, 2016-01-20, 1 file, -0/+18)
  Just requires glueing it together with Bloat Super Glue (tm).
* vaapi: replace VA_STR_FOURCC (wm4, 2016-01-11, 1 file, -2/+2)
* vo_opengl: hwdec_vaegl: change license to LGPL 2.1 (wm4, 2016-01-07, 1 file, -9/+7)
  This file claims to be based on the "MPlayer VA-API patch", but this is untrue. Only some glue code was copied from hwdec_vaglx.c, and this glue code was never in MPlayer or the MPlayer VA-API patch in any form; it was instead part of the mpv-original way we do hardware decoding OpenGL interop. The EGL interop method didn't exist at the time the MPlayer VA-API patch was created either.
* vo_opengl: never load vaapi GLX interop by default (wm4, 2015-11-09, 1 file, -1/+1)
  Causes more harm than it helps. Will eventually be removed. Also rename the "reject_emulated" field to "probing" - this is more appropriate now.
* vo_opengl: vaapi: fix compilation failure on older systems (wm4, 2015-10-23, 1 file, -1/+2)
  Older systems have certain EGL extension definitions missing. We redefine them to make the build system easier, and because it's trivial. But we forgot to define the EGL_LINUX_DMA_BUF_EXT identifier. (I hope it's the only missing one.)
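  The fix amounts to the usual fallback-define pattern for old EGL headers (a sketch; the value is the one assigned by the EGL_EXT_image_dma_buf_import extension spec):

      /* Provide the identifier ourselves if the system's eglext.h is too old. */
      #ifndef EGL_LINUX_DMA_BUF_EXT
      #define EGL_LINUX_DMA_BUF_EXT 0x3270
      #endif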
* vo_opengl: remove leftover variable from vaglx in vaegl (Emmanuel Gil Peyrot, 2015-10-02, 1 file, -1/+0)
  This was preventing compilation on systems without X11 headers.
* vo_opengl: vaapi: add Wayland support (wm4, 2015-09-27, 1 file, -0/+15)
  Pretty trivial with the new EGL interop. Fixes #478.
* vaapi: remove dependency on X11 (wm4, 2015-09-27, 1 file, -6/+24)
  There are at least 2 ways of using VAAPI without X11 (Wayland, DRM). Remove the X11 requirement from the decoder part and the EGL interop. This will be used by a following commit, which adds Wayland support.

  The worst part of this is the decoder, which includes a bad hack for using the decoder without any VO interop (also known as "vaapi-copy" mode). Separate the X11 parts so that they're self-contained.

  For the EGL interop code we do something similar (it's kept slightly simpler, because it essentially only has to translate between our silly MPGetNativeDisplay abstraction and the vaGetDisplay...() call).
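  On the libva side, dropping the hard X11 dependency roughly means picking the right display constructor per platform. A sketch assuming all three libva backends are present at build time (the wrapper names here are invented for illustration, not mpv's):

      /* Each backend has its own header and its own way to obtain a VADisplay. */
      #include <va/va_x11.h>      /* vaGetDisplay(Display *)             */
      #include <va/va_wayland.h>  /* vaGetDisplayWl(struct wl_display *) */
      #include <va/va_drm.h>      /* vaGetDisplayDRM(int drm_fd)         */

      VADisplay va_display_from_x11(Display *x11_dpy)          { return vaGetDisplay(x11_dpy); }
      VADisplay va_display_from_wayland(struct wl_display *wl) { return vaGetDisplayWl(wl); }
      VADisplay va_display_from_drm(int drm_fd)                { return vaGetDisplayDRM(drm_fd); }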
* vo_opengl: vaapi: provide symbols for missing extensions (wm4, 2015-09-27, 1 file, -0/+14)
  We also could just check at build time, but since it's not much, just redefine them inline if not present.
* vo_opengl: vaapi: redo how EGL extensions are loaded (wm4, 2015-09-27, 1 file, -13/+32)
  It looks like my hope that we can unconditionally include EGL headers in the OpenGL code is not coming true, because OSX does not support EGL at all. So I prefer loading the VAAPI EGL/GL specific extensions manually, because it's less of a mess. Partially reverts commit d47dff3f.
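  Loading the entrypoints manually looks roughly like this (a sketch, not mpv's actual loader; the function-pointer typedefs come from EGL/eglext.h and GLES2/gl2ext.h):

      #include <EGL/egl.h>
      #include <EGL/eglext.h>
      #include <GLES2/gl2.h>
      #include <GLES2/gl2ext.h>

      /* Function pointers for the extensions the interop needs. */
      static PFNEGLCREATEIMAGEKHRPROC CreateImageKHR;
      static PFNEGLDESTROYIMAGEKHRPROC DestroyImageKHR;
      static PFNGLEGLIMAGETARGETTEXTURE2DOESPROC EGLImageTargetTexture2DOES;

      static int load_egl_interop_functions(void)
      {
          CreateImageKHR = (PFNEGLCREATEIMAGEKHRPROC)
              eglGetProcAddress("eglCreateImageKHR");
          DestroyImageKHR = (PFNEGLDESTROYIMAGEKHRPROC)
              eglGetProcAddress("eglDestroyImageKHR");
          EGLImageTargetTexture2DOES = (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
              eglGetProcAddress("glEGLImageTargetTexture2DOES");
          /* All three are required for the dmabuf import path. */
          return CreateImageKHR && DestroyImageKHR && EGLImageTargetTexture2DOES;
      }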
* vo_opengl: vaapi: probe the surface format (wm4, 2015-09-26, 1 file, -2/+68)
  Probe the surface format, and check whether it's really something we support. This also does a complete check of whether the EGL interop works at all (the only way to find this out is to actually run this code).

  Also, support YV12. Under some circumstances, vaapi (with Intel drivers) can be made to use this format.

  Unfortunately, the Intel drivers show some very weird behavior, which is hopefully a bug. insane_hack() provides a very evil workaround (see comments). A proper solution might be passing the hw format as part of mp_image_params, but as long as hw surfaces appear able to change their format on the fly, attempting this is probably not worth the extra complexity and likely fragility. The hack allows us to pretend that there is sane behavior for now.
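  The probe essentially creates a throwaway surface and checks which fourcc the driver really hands back. A simplified sketch (reusing probe_surface_fourcc() from the sketch further up; only NV12 and YV12 are treated as mappable here, and the real check would additionally try the EGL mapping itself):

      #include <stdbool.h>
      #include <va/va.h>

      static bool probe_mappable_format(VADisplay dpy)
      {
          VASurfaceID s;
          if (vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, 64, 64, &s, 1, NULL, 0)
                  != VA_STATUS_SUCCESS)
              return false;
          uint32_t fourcc = probe_surface_fourcc(dpy, s);   /* see earlier sketch */
          bool ok = fourcc == VA_FOURCC_NV12 || fourcc == VA_FOURCC_YV12;
          vaDestroySurfaces(dpy, &s, 1);
          return ok;
      }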
* vo_opengl: vaapi: undo vaAcquireBufferHandle() correctly on error (wm4, 2015-09-25, 1 file, -2/+4)
  Checking and resetting the VAImage.buf field is nonsense, even if it happened to work out in the normal case. buf is actually freed when vaDestroyImage() is called (not quite intuitive), and we need an extra field to know whether vaReleaseBufferHandle() has to be called.
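  The shape of the fix, sketched (struct and function names are illustrative, not mpv's): track the acquisition in an explicit flag instead of poking at VAImage.buf.

      #include <stdbool.h>
      #include <va/va.h>

      /* VAImage.buf is owned by the image and freed by vaDestroyImage(), so it
       * can't double as a "handle was acquired" marker; keep a separate flag. */
      struct va_mapped_image {
          VAImage image;            /* image.image_id == VA_INVALID_ID if unset */
          bool buffer_acquired;     /* true after vaAcquireBufferHandle() */
      };

      static void unmap_image(VADisplay dpy, struct va_mapped_image *p)
      {
          if (p->buffer_acquired) {
              vaReleaseBufferHandle(dpy, p->image.buf);
              p->buffer_acquired = false;
          }
          if (p->image.image_id != VA_INVALID_ID) {
              vaDestroyImage(dpy, p->image.image_id);
              p->image.image_id = VA_INVALID_ID;
          }
      }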
* vo_opengl: vaapi: handle YV12 correctly (wm4, 2015-09-25, 1 file, -0/+3)
  This specific FourCC has its planes swapped compared to FFmpeg yuv420p.
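  In practice this is just a swap of the two chroma plane descriptions when the surface fourcc is YV12 (illustrative snippet; swap_plane_layout() is a hypothetical helper, not a real function):

      /* FFmpeg's yuv420p orders planes Y, U, V; VA_FOURCC_YV12 orders them
       * Y, V, U - so swap the chroma plane layouts before texture setup. */
      if (fourcc == VA_FOURCC_YV12)
          swap_plane_layout(&planes[1], &planes[2]);   /* hypothetical helper */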
* vo_opengl: vaapi: document DRM fourcc upstream defines (wm4, 2015-09-25, 1 file, -3/+4)
  Add the upstream symbolic names as comments. Normally, these should be defined in libdrm's drm_fourcc.h header. But DRM_FORMAT_R8 and DRM_FORMAT_GR88 are not defined anywhere, except in the kernel userland headers of Linux 4.3 (!). We don't want mpv to depend on bleeding-edge Linux kernel headers, so this will have to do.

  Also, just for completeness, add fourccs for the 3- and 4-channel formats. I didn't manage to test them, though.
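  The fallback definitions themselves look roughly like this (a sketch, assuming libdrm's header is available as <libdrm/drm_fourcc.h>; the values follow the fourcc_code() encoding that header defines):

      #include <libdrm/drm_fourcc.h>   /* for fourcc_code(); may lack the names below */

      #ifndef DRM_FORMAT_R8            /* single-channel plane, e.g. NV12 luma */
      #define DRM_FORMAT_R8   fourcc_code('R', '8', ' ', ' ')
      #endif
      #ifndef DRM_FORMAT_GR88          /* two-channel plane, e.g. NV12 chroma */
      #define DRM_FORMAT_GR88 fourcc_code('G', 'R', '8', '8')
      #endif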
* vo_opengl: vaapi: use dummy image to determine plane layout (wm4, 2015-09-25, 1 file, -9/+8)
  Reduces the amount of hardcoded assumptions about the layout drastically. (Now adding yuv420 support would be just a matter of adjusting an if, if you ignore the other problems, such as determining the hw format early enough at all.)
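  Illustrative sketch of what "use a dummy image" means here (dpy and surface are assumed to come from the surrounding interop state): take the per-plane offsets and pitches from a derived VAImage instead of hardcoding them.

      VAImage img;
      if (vaDeriveImage(dpy, surface, &img) == VA_STATUS_SUCCESS) {
          for (unsigned n = 0; n < img.num_planes; n++) {
              uint32_t offset = img.offsets[n];   /* feeds EGL_DMA_BUF_PLANEn_OFFSET_EXT */
              uint32_t pitch  = img.pitches[n];   /* feeds EGL_DMA_BUF_PLANEn_PITCH_EXT  */
              /* ... build the dmabuf import attributes for plane n ... */
              (void)offset; (void)pitch;
          }
          vaDestroyImage(dpy, img.image_id);
      }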
* vo_opengl: vaapi: remove unnecessary loop (wm4, 2015-09-25, 1 file, -2/+1)
  Not sure what I was thinking.
* vo_opengl: vaapi: fix cleanup (wm4, 2015-09-25, 1 file, -0/+2)
  Don't call eglDestroyImageKHR() on the same ID possibly more than once. Clear the image reference on termination, or we would leak up to 1 image per VO recreation.
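  The gist of the fix (illustrative only; p->images, p->egl_display, and the DestroyImageKHR pointer are assumed interop state): destroy each EGLImage at most once and clear the reference, so unmap/uninit can run repeatedly without double-freeing or leaking.

      for (int n = 0; n < 2; n++) {                       /* one image per NV12 plane */
          if (p->images[n] != EGL_NO_IMAGE_KHR) {
              DestroyImageKHR(p->egl_display, p->images[n]);
              p->images[n] = EGL_NO_IMAGE_KHR;
          }
      }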
* vo_opengl: support new VAAPI EGL interop (wm4, 2015-09-25, 1 file, -0/+247)
  Should work much better than the old GLX interop code. Requires Mesa 11, and explicitly selecting the X11 EGL backend with:

      --vo=opengl:backend=x11egl

  Should it turn out that the new interop works well, we will try to autodetect EGL by default.

  This code still uses some bad assumptions, like expecting surfaces to be in NV12. (This is probably ok, because virtually all HW will use this format. But we should at least check this on init or so, instead of failing to render an image if our assumption doesn't hold up.)

  This repo was a lot of help: https://github.com/gbeauchesne/ffvademo
  The kodi code was also helpful (the magic FourCCs it uses for EGL_LINUX_DRM_FOURCC_EXT are documented nowhere, and EGL_IMAGE_INTERNAL_FORMAT_EXT as used in ffvademo does not actually exist).

  (This is the 3rd VAAPI GL interop implemented in this player.)
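  A condensed sketch of the import path for the luma plane (error handling, the chroma plane with DRM_FORMAT_GR88, and all cleanup are omitted; va_display, egl_display, surface, and luma_texture are assumed to exist, and CreateImageKHR / EGLImageTargetTexture2DOES are the function pointers loaded in the sketch further up):

      /* 1. Get a VAImage describing the surface, and export its buffer as a dmabuf. */
      VAImage img;
      vaDeriveImage(va_display, surface, &img);

      VABufferInfo buf_info = { .mem_type = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME };
      vaAcquireBufferHandle(va_display, img.buf, &buf_info);

      /* 2. Wrap the luma plane in an EGLImage via EGL_EXT_image_dma_buf_import. */
      EGLint attribs[] = {
          EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_R8,           /* 8-bit Y plane of NV12 */
          EGL_WIDTH,                     img.width,
          EGL_HEIGHT,                    img.height,
          EGL_DMA_BUF_PLANE0_FD_EXT,     (EGLint)buf_info.handle,
          EGL_DMA_BUF_PLANE0_OFFSET_EXT, (EGLint)img.offsets[0],
          EGL_DMA_BUF_PLANE0_PITCH_EXT,  (EGLint)img.pitches[0],
          EGL_NONE
      };
      EGLImageKHR image = CreateImageKHR(egl_display, EGL_NO_CONTEXT,
                                         EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

      /* 3. Attach the EGLImage to a GL texture; the shader samples it as a red/R8 plane. */
      glBindTexture(GL_TEXTURE_2D, luma_texture);
      EGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);

      /* 4. Release the dmabuf handle again; the EGLImage keeps its own reference. */
      vaReleaseBufferHandle(va_display, img.buf);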