path: root/video/out/opengl/hwdec_vaegl.c
Commit message | Author | Age | Files | Lines
* vo_gpu: hwdec_vaegl: Rename and move to hwdec_vaapi | Philip Langdale | 2019-07-08 | 1 | -558/+0
  In preparation for adding Vulkan interop support, let's rename to remove the egl reference and move to an API-neutral location.
* hwdec_vaegl: Fix VAAPI EGL interop used with gpu-context=drm | Anton Kindestam | 2018-07-09 | 1 | -5/+6
  Add another parameter to mpv_opengl_drm_params to hold the FD to the render node, so that the fd can be passed to hwdec_vaegl. The render node is opened in context_drm_egl and inferred from the primary device fd using drmGetRenderDeviceNameFromFd.
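The render-node lookup described above can be pictured with a minimal sketch using libdrm's drmGetRenderDeviceNameFromFd(); the helper name open_render_fd and the error handling are illustrative assumptions, not mpv's actual context_drm_egl code.

    // Sketch: derive a render node fd from a primary DRM fd via libdrm.
    // open_render_fd is a made-up helper; mpv's real code differs in detail.
    #include <fcntl.h>
    #include <stdlib.h>
    #include <xf86drm.h>

    static int open_render_fd(int primary_fd)
    {
        char *name = drmGetRenderDeviceNameFromFd(primary_fd); // e.g. "/dev/dri/renderD128"
        if (!name)
            return -1;
        int fd = open(name, O_RDWR | O_CLOEXEC);
        free(name); // the string is malloc'd by libdrm
        return fd;  // the caller would store this in mpv_opengl_drm_params for the hwdec
    }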
* client API: add a new way to pass X11 Display etc. to render API | wm4 | 2018-03-26 | 1 | -10/+10
  Hardware decoding things often need access to additional handles from the windowing system, such as the X11 or Wayland display when using vaapi. The opengl-cb API had nothing dedicated for this, and used the weird GL_MP_MPGetNativeDisplay GL extension (which was mpv-specific and not officially registered with OpenGL).

  This was awkward, and a pain due to having to emulate GL context behavior (like needing a TLS variable to store context for the pseudo GL extension function). In addition (and not inherently due to this), we could pass only one resource from mpv's builtin context backends to hwdecs. It was also all GL-specific.

  Replace this with a newer mechanism. It works for all RA backends, not just GL. The API user can explicitly pass the objects at init time via mpv_render_context_create(). Multiple resources are naturally possible.

  The API uses MPV_RENDER_PARAM_* defines, but internally we use strings. This is done for 2 reasons: 1. trying to keep libmpv and internal mechanisms decoupled, 2. not having to add public API for some of the internal resource types (especially D3D/GL interop stuff).

  To remain sane, drop support for obscure half-working opengl-cb things, like the DRM interop (was missing necessary things), the RPI window thing (nobody used it), and obscure D3D interop things (not needed with ANGLE, others were undocumented). In order not to break ABI and the C API, we don't remove the associated structs from opengl_cb.h.

  The parts which are still needed (in particular DRM interop) need to be ported to the render API.
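For readers unfamiliar with the render API, a minimal client-side sketch of "pass the objects at init time" follows. It assumes an application-provided GL loader callback (get_gl_proc) and omits error details; it illustrates the public API rather than reproducing code from this commit.

    #include <X11/Xlib.h>
    #include <mpv/client.h>
    #include <mpv/render_gl.h>

    // get_gl_proc is an application-supplied GL symbol loader (assumption).
    static mpv_render_context *create_render_ctx(mpv_handle *mpv, Display *x11_display,
        void *(*get_gl_proc)(void *ctx, const char *name))
    {
        mpv_opengl_init_params gl_init = { .get_proc_address = get_gl_proc };
        mpv_render_param params[] = {
            {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
            {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, &gl_init},
            {MPV_RENDER_PARAM_X11_DISPLAY, x11_display}, // handed down to hwdecs like vaegl
            {0}
        };
        mpv_render_context *ctx = NULL;
        return mpv_render_context_create(&ctx, mpv, params) < 0 ? NULL : ctx;
    }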
* Fix various typos in log messages | Nicolas F | 2017-12-03 | 1 | -1/+1
* vo_gpu: hwdec: remove redundant fields | wm4 | 2017-12-01 | 1 | -1/+0
  The testing_only field is not referenced anymore with vaglx removed and the previous commit dropping all uses.

  The ra_hwdec_driver.api field became unused with the previous commit, but all hwdec interop drivers still initialized it. Since this touches highly OS-specific code, build regressions are possible (plus the previous commit might break hw decoding at runtime). At least hwdec_cuda.c still used the .api field, other than initializing it.
* vo_opengl: hwdec_vaegl: Reenable vaExportSurfaceHandle() | Mark Thompson | 2017-11-30 | 1 | -3/+3
  It will be present from libva 2.1 (VAAPI 1.1.0 or higher).
* vo_opengl: hwdec_vaegl: Disable vaExportSurfaceHandle() | Mark Thompson | 2017-10-23 | 1 | -3/+3
  libva 2.0 (VAAPI 1.0.0) was released without it, but it is scheduled to be included in libva 2.1.
* vo_opengl: hwdec_vaegl: Use vaExportSurfaceHandle() if present | Mark Thompson | 2017-10-09 | 1 | -0/+80
  This new interface in libva2 offers a cleaner way to export surfaces which can then be imported to EGL. In particular, this works with the Mesa driver, so we can have proper playback without a pointless download and upload on AMD cards.

  This change does nothing with libva1, and will fall back to the libva1 interface (vaDeriveImage() + vaAcquireBufferHandle()) if vaExportSurfaceHandle() is not present.
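A rough sketch of the libva2 export path described above: vaExportSurfaceHandle() fills a VADRMPRIMESurfaceDescriptor whose dmabuf fds and layers can then be imported as EGLImages. Flags and error handling are simplified compared to the real code.

    #include <va/va.h>
    #include <va/va_drmcommon.h>

    static int export_surface(VADisplay dpy, VASurfaceID surface,
                              VADRMPRIMESurfaceDescriptor *desc)
    {
        // Available from libva 2.1 (VAAPI 1.1.0); older libva needs the
        // vaDeriveImage() + vaAcquireBufferHandle() fallback instead.
        VAStatus st = vaExportSurfaceHandle(dpy, surface,
                          VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME_2,
                          VA_EXPORT_SURFACE_READ_ONLY |
                          VA_EXPORT_SURFACE_SEPARATE_LAYERS,
                          desc);
        // desc->objects[].fd and desc->layers[] then feed the EGL dmabuf import.
        return st == VA_STATUS_SUCCESS ? 0 : -1;
    }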
* vo_opengl: refactor into vo_gpu | Niklas Haas | 2017-09-21 | 1 | -2/+2
  This is done in several steps:
  1. refactor MPGLContext -> struct ra_ctx
  2. move GL-specific stuff in vo_opengl into opengl/context.c
  3. generalize context creation to support other APIs, and add --gpu-api
  4. rename all of the --opengl- options that are no longer opengl-specific
  5. move all of the stuff from opengl/* that isn't GL-specific into gpu/ (note: opengl/gl_utils.h became opengl/utils.h)
  6. rename vo_opengl to vo_gpu
  7. to handle window screenshots, the short-term approach was to just add it to ra_swchain_fns. Long term (and for vulkan) this has to be moved to ra itself (and vo_gpu altered to compensate), but this was a stop-gap measure to prevent this commit from getting too big
  8. move ra->fns->flush to ra_gl_ctx instead
  9. some other minor changes that I've probably already forgotten

  Note: This is one half of a major refactor, the other half of which is provided by rossy's following commit. This commit enables support for all linux platforms, while his version enables support for all non-linux platforms.

  Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx so it benefits from the --opengl- options like --opengl-early-flush, --opengl-finish etc. Should be a strict superset of the old functionality.

  Disclaimer: Since I have no way of compiling mpv on all platforms, some of these ports were done blindly. Specifically, the blind ports included context_mali_fbdev.c and context_rpi.c. Since they're both based on egl_helpers, the port should have gone smoothly without any major changes required. But if somebody complains about a compile error on those platforms (assuming anybody actually uses them), you know where to complain.
* vo_opengl: separate hwdec context and mapping, port it to use ra | wm4 | 2017-08-10 | 1 | -141/+150
  This does two separate, rather intrusive things:

  1. Make the hwdec context (which does initialization, provides the device to the decoder, and other basic state) and frame mapping (getting textures from a mp_image) separate. This is more flexible, and you could map multiple images at once. It will help with removing some hwdec special-casing from video.c.

  2. Switch all hwdec API use to ra. Of course all code is still GL-specific, but in theory it would be possible to support other backends. The most important change is that the hwdec interop returns ra objects, instead of anything GL-specific. This removes the last dependency on GL-specific header files from video.c.

  I'm mixing these separate changes because both require essentially rewriting all the glue code, so better do them at once. For the same reason, this change isn't done incrementally.

  hwdec_ios.m is untested, since I can't test it. Apart from superficial mistakes, this also requires dealing with Apple's texture format fuckups: they force you to use GL_LUMINANCE[_ALPHA] instead of GL_RED and GL_RG. We also need to report the correct format via ra_tex to the renderer, which is done by find_la_variant(). It's unknown whether this works correctly.

  hwdec_rpi.c as well as vo_rpi.c are still broken. (I need to pull my RPI out of a dusty pile of devices and cables, so, later.)
* vo_opengl: restructure format setup | wm4 | 2017-06-30 | 1 | -2/+0
  Instead of setting up a weird swizzle (which is linked to how the internal renderer code works, rather than the generic format code), add per-component mapping to gl_imgfmt_desc. The renderer still computes the weird swizzle, but at least it's confined to itself.

  Also, it appears the hwdec backends don't need this anymore. It's really nice that the messy init_format() goes away too.
* Drop/move img_fourcc.h | wm4 | 2017-06-18 | 1 | -5/+5
  This file is a leftover from when img_format.h was changed from using the ancient FourCCs (based on Microsoft multimedia conventions) for pixel formats to a simple enum. The remaining cases still inherently used FourCCs for whatever reasons.

  Instead of worrying about residual copyrights in this file, just move it into code we don't want to relicense (the ancient Linux TV code). We have to fix some other code depending on it. For the most part, we just replace the MP_FOURCC macro with libavutil's MKTAG (although the macro definition is exactly the same).

  In demux_raw, we drop some pre-defined FourCCs, but it's not like it matters. (Instead of --demuxer-rawvideo-format use --demuxer-rawvideo-mp-format.)
* vo_opengl: drop TLS usage | wm4 | 2017-05-11 | 1 | -5/+3
  TLS is a headache. We should avoid it if we can.

  The involved mechanism is unfortunately entangled with the unfortunate libmpv API for returning pointers to host API objects. This has to be kept until we change the API somehow.

  Practically untested out of pure laziness. I'm sure I'll get a bunch of reports if it's broken.
* video: drop vaapi/vdpau hw decoding support with FFmpeg 3.2 | wm4 | 2017-04-23 | 1 | -47/+34
  This drops support for the old libavcodec APIs. Now FFmpeg 3.3 or FFmpeg git is required. Libav has no release with the new APIs yet, so Libav git from a few weeks or months ago is required if you want to use Libav.

  Not much actually changes in hwdec_vaegl.c - some code is removed, but the reindentation inflates the diff.
* vo_opengl: hwdec_vaegl: use new format setup function | wm4 | 2017-02-17 | 1 | -13/+19
  Plus add a helper.
* vo_opengl: hwdec_vaegl: fix potentially undefined memory access | wm4 | 2017-02-14 | 1 | -2/+2
* vaapi: remove central lock around vaapi API calls | wm4 | 2017-01-28 | 1 | -8/+0
  The lock was disabled recently. This commit gets rid of the dummied out calls.

  The main reason for removing it is that there is no apparent need for it anymore, and the new FFmpeg vaapi code does not use or provide such a lock (there are some places which we cannot control and which do vaapi API calls, like frame destructors).
* vo_opengl: hwdec_vaegl: add a lie for compatibility | wm4 | 2017-01-13 | 1 | -1/+1
  EGL rendering + new decode API didn't work due to a certain libva bug with sort-of legacy API use hitting again. It will report the wrong vaapi pixel format. It's old code and always nv12 anyway, so stop worrying about it.
* vo_opengl, vaapi: properly probe 10 bit rendering support | wm4 | 2017-01-13 | 1 | -28/+117
  There are going to be users who have a Mesa installation which does not support 10 bit, but a GPU which can decode to 10 bit. So it's probably better not to hardcode whether it is supported.

  Introduce a more general way to signal supported formats from renderer to decoder. Obviously this is imperfect, because it still isn't part of proper format negotiation (for example, what if there's a vavpp filter, which accepts anything). Still slightly better than before.

  I don't know any way to probe for vaapi dmabuf/EGL dmabuf support properly (in particular testing specific formats, not just general availability). So we stay with the current approach and try to create and map dummy surfaces on init to probe for support. Overdo it and check all formats that AVHWFramesConstraints reports, instead of only NV12 and P010 surfaces (see the sketch after this entry).

  Since we can support unknown formats now, add explicit checks to the EGL/dmabuf mapper code to reject unsupported formats. I also noticed that libavutil signals support for RGB0/BGR0, but couldn't get it to work. Remove the DRM formats that are unused/didn't work the way I tried to use them.

  With this, 10 bit decoding + rendering should work, provided you have a capable CPU and a patched Mesa. The required Mesa patch adds support for the R16 and GR32 formats. It was sent by a Kodi developer to the Mesa developer mailing list and was not accepted yet.
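A sketch of the AVHWFramesConstraints enumeration mentioned above, assuming an already-created vaapi AVHWDeviceContext reference; the actual probing (creating and mapping a dummy surface per format) is omitted, and the function name is illustrative.

    #include <stdio.h>
    #include <libavutil/hwcontext.h>
    #include <libavutil/pixdesc.h>

    static void list_candidate_formats(AVBufferRef *vaapi_device_ref)
    {
        AVHWFramesConstraints *c =
            av_hwdevice_get_hwframe_constraints(vaapi_device_ref, NULL);
        if (!c)
            return;
        for (int i = 0; c->valid_sw_formats &&
                        c->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++) {
            // Each reported sw format (nv12, p010, ...) is a probing candidate.
            printf("candidate: %s\n", av_get_pix_fmt_name(c->valid_sw_formats[i]));
        }
        av_hwframe_constraints_free(&c);
    }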
* vo_opengl: hwdec_vaegl: remove redundant vaapi surface format check | wm4 | 2017-01-13 | 1 | -8/+1
  For surfaces allocated by libavutil, we assume that the sw_format (i.e. in hw_subfmt in mp_image_params) is always correct. The API guarantees that it explicitly sets the equivalent vaapi format on surface allocation.

  For surfaces allocated by mpv's old vaapi code, we explicitly retrieve the format right after decoding. Unless the driver magically changes the format asynchronously, it will still be correct once the surface reaches the renderer.

  In both cases, checking the format again is obviously redundant. In addition, it doesn't require us to maintain a libva fourcc <-> mpfmt table and the va_fourcc_to_imgfmt() function. This also unbreaks 10 bit rendering support (still disabled by default).
* vo_opengl: hwdec_vaegl: fix terminology in comment | wm4 | 2017-01-13 | 1 | -2/+2
  Bad idea to call a component "pixel" - that's true only for the Y plane.
* vo_opengl: hwdec_vaegl: DRM_FORMAT_GR16 was renamed to DRM_FORMAT_GR32 | Mark Thompson | 2017-01-13 | 1 | -1/+1
  Signed-off-by: wm4 <wm4@nowhere>
* vo_opengl: hwdec_vaegl: add experimental P010 support | wm4 | 2017-01-12 | 1 | -6/+17
  This does not work, because Mesa has no support for the proposed DRM_FORMAT_R16 and DRM_FORMAT_GR16 formats. It's also untested of course. As long as video/decode/vaapi.c doesn't hand down P010 surfaces, this is fine anyway.

  This can be tested by removing the code that disables P010 output:

    diff --git a/video/decode/vaapi.c b/video/decode/vaapi.c
    --- a/video/decode/vaapi.c
    +++ b/video/decode/vaapi.c
    @@ -55,13 +55,6 @@ static int init_decoder(struct lavc_ctx *ctx, int w, int h)
         assert(!ctx->avctx->hw_frames_ctx);
    -    // If we use direct rendering, disallow 10 bit - it's probably not
    -    // implemented yet, and our downstream components can't deal with it.
    -    if (!p->own_ctx && required_sw_format != AV_PIX_FMT_NV12) {
    -        MP_WARN(ctx, "10 bit surfaces are currently supported.\n");
    -        return -1;
    -    }
    -
* vo_opengl: vaegl: log more debugging infos | wm4 | 2016-09-30 | 1 | -7/+12
* vo_opengl: hwdec: reset hw_subfmt field | wm4 | 2016-07-15 | 1 | -0/+1
  In theory, an mp_image_params with hw_subfmt set to non-0 while imgfmt is not a hwaccel format is invalid. (It worked fine because nothing checks this yet.)
* video: change hw_subfmt meaning | wm4 | 2016-07-15 | 1 | -4/+3
  The hw_subfmt field roughly corresponds to the field AVHWFramesContext.sw_format in ffmpeg. The ffmpeg one is of the type AVPixelFormat (instead of the underlying hardware format), so it's a good idea to switch to this too for preparation.

  Now the hw_subfmt field is an mp_imgfmt instead of an opaque/API-specific number. VDPAU and Direct3D11 already used mp_imgfmt, but Videotoolbox and VAAPI had to be switched.

  One somewhat user-visible change is that the verbose log will now always show the hw_subfmt as an image format, instead of as a nonsensical number.

  (In the end it would be good if we could switch to AVHWFramesContext completely, but the upstream API is incomplete and doesn't cover Direct3D11 and Videotoolbox.)
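To illustrate the correspondence, this is roughly how the ffmpeg side pairs the opaque hardware format with the underlying sw_format that mp_image_params.hw_subfmt now mirrors; it is a generic libavutil example under assumed NV12 surfaces, not mpv code.

    #include <libavutil/hwcontext.h>
    #include <libavutil/pixfmt.h>

    static AVBufferRef *make_vaapi_frames(AVBufferRef *device_ref, int w, int h)
    {
        AVBufferRef *frames_ref = av_hwframe_ctx_alloc(device_ref);
        if (!frames_ref)
            return NULL;
        AVHWFramesContext *fctx = (AVHWFramesContext *)frames_ref->data;
        fctx->format    = AV_PIX_FMT_VAAPI; // opaque hardware (wrapper) format
        fctx->sw_format = AV_PIX_FMT_NV12;  // underlying surface format (what hw_subfmt mirrors)
        fctx->width     = w;
        fctx->height    = h;
        if (av_hwframe_ctx_init(frames_ref) < 0)
            av_buffer_unref(&frames_ref); // sets frames_ref to NULL
        return frames_ref;
    }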
* vo_opengl: refactor how hwdec interop exports textures | wm4 | 2016-05-10 | 1 | -25/+27
  Rename gl_hwdec_driver.map_image to map_frame, and let it fill out a struct gl_hwdec_frame describing the exact texture layout. This gives more flexibility to what the hwdec interop can export. In particular, it can export strange component orders/permutations and textures with padded size. (The latter originating from cropped video.)

  The way gl_hwdec_frame works is in the spirit of the rest of the vo_opengl video processing code, which tends to put as much information in immediate state (as part of the dataflow), instead of declaring it globally. To some degree this duplicates the texplane and img_tex structs, but until we somehow unify those, it's better to give the hwdec state its own struct. The fact that changing the hwdec struct would require changes and testing on at least 4 platform/GPU combinations makes duplicating it almost a requirement to avoid pain later.

  Make gl_hwdec_driver.reinit set the new image format and remove the gl_hwdec.converted_imgfmt field. Likewise, gl_hwdec.gl_texture_target is replaced with gl_hwdec_plane.gl_target.

  Split out an init_image_desc function from init_format. The latter is not called in the hwdec case at all anymore. Setting up most of struct texplane is also completely separate in the hwdec and normal cases.

  video.c does not check whether the hwdec "mapped" image format is supported. This should not really happen anyway, and if it does, the hwdec interop backend must fail at creation time, so this is not an issue.
* video: refactor how VO exports hwdec device handles | wm4 | 2016-05-09 | 1 | -3/+4
  The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or documented where not), which makes the whole thing saner and cleaner. In particular, thread-safety rules become less subtle and more obvious.

  The new internal API makes it easier to support multiple OpenGL interop backends. (Although this is not done yet, and it's not clear whether it ever will.)

  This also removes all the API-specific fields from mp_hwdec_ctx and replaces them with a "ctx" field. For d3d in particular, we drop the mp_d3d_ctx struct completely, and pass the interfaces directly.

  Remove the emulation checks from vaapi.c and vdpau.c; they are pointless, and the checks that matter are done on the VO layer.

  The d3d hardware decoders might slightly change behavior: dxva2-copy will not use the VO device anymore if the VO supports proper interop. This pretty much assumes that in such cases the VO will not use any form of exclusive mode, which makes using the VO device in copy mode unnecessary.

  This is a big refactor. Some things may be untested and could be broken.
* vo_opengl: EGL: fix hwdec probing | wm4 | 2016-05-05 | 1 | -1/+1
  If ANGLE was probed before (but rejected), the ANGLE API can remain "initialized", and eglGetCurrentDisplay() will return a non-NULL EGLDisplay. If a native GL context is then used, the ANGLE/EGL API will (apparently) keep working alongside the native OpenGL API. Since GL objects are just numbers, they'll simply fail to interact, and OpenGL will get invalid textures. For some reason this will result in black textures.

  With VAAPI-EGL, something similar could happen in theory, but didn't in practice.
* vaapi: determine surface format in decoder, not in renderer | wm4 | 2016-04-11 | 1 | -37/+13
  Until now, we have made the assumption that a driver will use only 1 hardware surface format. The format is dictated by the driver (you don't create surfaces with a specific format - you just pass a rt_format and get a surface that will be in a specific driver-chosen format). In particular, the renderer created a dummy surface to probe the format, and hoped the decoder would produce the same format. Due to a driver bug this required a workaround to actually get the same format as the driver did.

  Change this so that the format is determined in the decoder. The format is then passed down as hw_subfmt, which allows the renderer to configure itself with the correct format. If the hardware surface changes its format midstream, the renderer can be reconfigured using the normal mechanisms.

  This calls va_surface_init_subformat() each time after the decoder returns a surface. Since libavcodec/AVFrame has no concept of sub-formats, this is unavoidable. It creates and destroys a derived VAImage, but this shouldn't have any bad performance effects (at least I didn't notice any measurable effects).

  Note that vaDeriveImage() failures are silently ignored as some drivers (the vdpau wrapper) support neither vaDeriveImage, nor EGL interop. In addition, we still probe whether we can map an image in the EGL interop code. This is important as it's the only way to determine whether EGL interop is supported at all. With respect to the driver bug mentioned above, it doesn't matter which format the test surface has.

  In vf_vavpp, also remove the rt_format guessing business. I think the existing logic was a bit meaningless anyway. It's not even a given that vavpp produces the same rt_format for output.
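The "create and destroy a derived VAImage" step can be pictured as follows; va_surface_init_subformat() itself is internal to mpv, so this is only an approximation of the idea, and query_surface_fourcc is a made-up name.

    #include <stdint.h>
    #include <va/va.h>

    static uint32_t query_surface_fourcc(VADisplay dpy, VASurfaceID surface)
    {
        VAImage img;
        if (vaDeriveImage(dpy, surface, &img) != VA_STATUS_SUCCESS)
            return 0; // e.g. the vdpau wrapper driver: failure is silently ignored
        uint32_t fourcc = img.format.fourcc; // e.g. VA_FOURCC_NV12
        vaDestroyImage(dpy, img.image_id);
        return fourcc;
    }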
* vo_opengl: hwdec: use IDs for API, and log which backend is used | wm4 | 2016-02-01 | 1 | -1/+2
  Since there can be multiple backends for a single API (vaapi can use GLX or EGL), not logging the exact backend name is annoying. So add it. At the same time, there is no need to duplicate the name as used by the --hwdec options, so replace it with the numeric hwdec API ID.
* vo_opengl: vaapi: don't expect EGL exts. to be in common ext. string | wm4 | 2016-01-22 | 1 | -2/+6
* vo_opengl: vaapi: reorganize platform entrypoints as table | wm4 | 2016-01-21 | 1 | -15/+20
* vo_opengl: add KMS/DRM VAAPI hardware decoding interop | wm4 | 2016-01-20 | 1 | -0/+18
  Just requires glueing it together with Bloat Super Glue (tm).
* vaapi: replace VA_STR_FOURCC | wm4 | 2016-01-11 | 1 | -2/+2
* vo_opengl: hwdec_vaegl: change license to LGPL 2.1 | wm4 | 2016-01-07 | 1 | -9/+7
  This file claims to be based on the "MPlayer VA-API patch", but this is untrue. Only some glue code was copied from hwdec_vaglx.c, and this glue code was never in MPlayer or the MPlayer VA-API patch in any form; it was instead part of the mpv-original way we do hardware decoding OpenGL interop. The EGL interop method didn't exist at the time the MPlayer VA-API patch was created either.
* vo_opengl: never load vaapi GLX interop by default | wm4 | 2015-11-09 | 1 | -1/+1
  Causes more harm than it helps. Will eventually be removed.

  Also rename the "reject_emulated" field to "probing" - this is more appropriate now.
* vo_opengl: vaapi: fix compilation failure on older systems | wm4 | 2015-10-23 | 1 | -1/+2
  Older systems have certain EGL extension definitions missing. We redefine them to make the build system easier, and because it's trivial. But we forgot to define the EGL_LINUX_DMA_BUF_EXT identifier. (I hope it's the only missing one.)
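The kind of compatibility defines the commit refers to look like this; the numeric values come from the EGL_EXT_image_dma_buf_import registry entry, though the exact set mpv redefines may differ.

    #ifndef EGL_LINUX_DMA_BUF_EXT
    #define EGL_LINUX_DMA_BUF_EXT 0x3270
    #endif
    #ifndef EGL_LINUX_DRM_FOURCC_EXT
    #define EGL_LINUX_DRM_FOURCC_EXT 0x3271
    #endif
    #ifndef EGL_DMA_BUF_PLANE0_FD_EXT
    #define EGL_DMA_BUF_PLANE0_FD_EXT 0x3272
    #endif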
* vo_opengl: remove leftover variable from vaglx in vaegl | Emmanuel Gil Peyrot | 2015-10-02 | 1 | -1/+0
  This was preventing compilation on systems without X11 headers.
* vo_opengl: vaapi: add Wayland support | wm4 | 2015-09-27 | 1 | -0/+15
  Pretty trivial with the new EGL interop.

  Fixes #478.
* vaapi: remove dependency on X11 | wm4 | 2015-09-27 | 1 | -6/+24
  There are at least 2 ways of using VAAPI without X11 (Wayland, DRM). Remove the X11 requirement from the decoder part and the EGL interop. This will be used by a following commit, which adds Wayland support.

  The worst about this is the decoder part, which includes a bad hack for using the decoder without any VO interop (also known as "vaapi-copy" mode). Separate the X11 parts so that they're self-contained.

  For the EGL interop code we do something similar (it's kept slightly simpler, because it essentially only has to translate between our silly MPGetNativeDisplay abstraction and the vaGetDisplay...() call).
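The translation mentioned above boils down to picking the right vaGetDisplay* entry point per windowing system; the wrapper names below are illustrative, and mpv's actual dispatch through MPGetNativeDisplay is structured differently.

    #include <va/va_x11.h>      // vaGetDisplay(Display *)
    #include <va/va_wayland.h>  // vaGetDisplayWl(struct wl_display *)
    #include <va/va_drm.h>      // vaGetDisplayDRM(int fd)

    VADisplay display_from_x11(Display *dpy)             { return vaGetDisplay(dpy); }
    VADisplay display_from_wayland(struct wl_display *w) { return vaGetDisplayWl(w); }
    VADisplay display_from_drm(int render_fd)            { return vaGetDisplayDRM(render_fd); }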
* vo_opengl: vaapi: provide symbols for missing extensions | wm4 | 2015-09-27 | 1 | -0/+14
  We also could just check at build time, but since it's not much, just redefine them inline if not present.
* vo_opengl: vaapi: redo how EGL extensions are loaded | wm4 | 2015-09-27 | 1 | -13/+32
  It looks like my hope that we can unconditionally include EGL headers in the OpenGL code is not coming true, because OSX does not support EGL at all. So I prefer loading the VAAPI EGL/GL specific extensions manually, because it's less of a mess. Partially reverts commit d47dff3f.
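Loading the extension entry points manually amounts to something like the following sketch (mpv wraps this in its own loader); the function pointer typedefs come from EGL/eglext.h and the static variable names are made up.

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    static PFNEGLCREATEIMAGEKHRPROC  p_eglCreateImageKHR;
    static PFNEGLDESTROYIMAGEKHRPROC p_eglDestroyImageKHR;

    static int load_egl_image_ext(void)
    {
        p_eglCreateImageKHR  = (PFNEGLCREATEIMAGEKHRPROC)
            eglGetProcAddress("eglCreateImageKHR");
        p_eglDestroyImageKHR = (PFNEGLDESTROYIMAGEKHRPROC)
            eglGetProcAddress("eglDestroyImageKHR");
        return p_eglCreateImageKHR && p_eglDestroyImageKHR ? 0 : -1;
    }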
* vo_opengl: vaapi: probe the surface format | wm4 | 2015-09-26 | 1 | -2/+68
  Probe the surface format, and check whether it's really something we support. This also does a complete check whether the EGL interop works at all (the only way to find this out is actually running this code).

  Also, support YV12. Under some circumstances, vaapi (with Intel drivers) can be made to use this format. Unfortunately, the Intel drivers show some very weird behavior, which is hopefully a bug. insane_hack() provides a very evil workaround (see comments).

  A proper solution might be passing the hw format as part of mp_image_params, but as long as hw surfaces appear to be able to change the format on the fly, attempting this is probably not worth the extra complexity and likely fragility. The hack allows us to pretend that there is sane behavior for now.
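The probe described above amounts to allocating a throwaway surface and checking that it can actually be derived and mapped, roughly as sketched here; the names, dummy size, and cleanup are assumptions, and the real code also attempts the EGL dmabuf import.

    #include <va/va.h>

    static int probe_va_interop(VADisplay dpy)
    {
        VASurfaceID surface;
        if (vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, 64, 64,
                             &surface, 1, NULL, 0) != VA_STATUS_SUCCESS)
            return -1;
        VAImage img;
        int ok = vaDeriveImage(dpy, surface, &img) == VA_STATUS_SUCCESS;
        if (ok)
            vaDestroyImage(dpy, img.image_id); // a real probe would map/import it first
        vaDestroySurfaces(dpy, &surface, 1);
        return ok ? 0 : -1;
    }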
* vo_opengl: vaapi: undo vaAcquireBufferHandle() correctly on error | wm4 | 2015-09-25 | 1 | -2/+4
  Checking and resetting the VAImage.buf field is nonsense, even if it happened to work out in the normal case. buf is actually freed when vaDestroyImage() is called (not quite intuitive), and we need an extra field to know whether vaReleaseBufferHandle() has to be called.
* vo_opengl: vaapi: handle YV12 correctly | wm4 | 2015-09-25 | 1 | -0/+3
  This specific FourCC has its planes swapped compared to FFmpeg yuv420p.