path: root/video/out/opengl/context_x11egl.c
* vo_gpu: opengl: make it work with EGL 1.4 (wm4, 2019-12-16, 1 file, -2/+5)
  This tries to deal with the crazy EGL situation. The summary is:

  - using eglGetDisplay() with multiple windowing platforms doesn't really work, but Mesa had an awful hack for it
  - this hack can be disabled at build time, and some distros sometimes accidentally or intentionally do so
  - Mesa will probably eventually disable it by default
  - we switched to eglGetPlatformDisplay(), but this requires EGL 1.5
  - the very regrettable graphics company (also known as Nvidia) ships drivers (for old hardware I think) that are EGL 1.4 only
  - that means even though we "require" EGL 1.5 and link against it, the runtime EGL may be 1.4
  - trying to run mpv there crashes in the dynamic linker
  - so we have to go through some more awful compatibility hacks

  This commit tries to do it "properly", but using EGL 1.4 as base. The platform selection mechanism is a messy extension there, which got elevated to core API in 1.5 (but OF COURSE in incompatible ways). I'm not sure whether the EGL 1.5 code path (by parsing the EGL_VERSION) is really needed, but if you ask me, it feels slightly saner not to rely on an EGL 1.4 kludge forever. But maybe this is just an instance of self-harm, since they will most likely never drop or not provide this API.

  Also, unlike before, we actually check the extension string for the individual platform extensions, because who knows, some EGL implementations might curse us if we pass unknown platform parameters. (But actually, the more I think about this, the more bullshit it is.)

  X11 and Wayland were the only ones trying to call eglGetPlatformDisplay(), so they're the only ones which are adjusted in this commit.

  Unfortunately, correct function of this commit is unconfirmed. It's possible that it crashes with the old drivers mentioned above.

  Why didn't they solve it like this:

      struct native_display {
          int platform_type;
          void *native_display;
      };

  Could have kept eglGetDisplay() without all the obnoxious extension BS.
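  For illustration, the EGL 1.4 extension path described above looks roughly like this. This is only a sketch, not the literal mpv code: the helper name is made up, while the extension, constant, and typedef names come from the Khronos registry (EGL_EXT_platform_base / EGL_EXT_platform_x11).

      #include <string.h>
      #include <EGL/egl.h>
      #include <EGL/eglext.h>

      /* Sketch: resolve a platform display on EGL 1.4 via the
       * EGL_EXT_platform_base extension. Real code would additionally
       * parse EGL_VERSION and call core eglGetPlatformDisplay() on 1.5. */
      static EGLDisplay get_x11_display(void *x11_display)
      {
          const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
          if (exts && strstr(exts, "EGL_EXT_platform_x11")) {
              PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
                  (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                      eglGetProcAddress("eglGetPlatformDisplayEXT");
              if (get_platform_display)
                  return get_platform_display(EGL_PLATFORM_X11_EXT, x11_display, NULL);
          }
          /* Old-style fallback; this is exactly the ambiguous call the
           * commit message complains about. */
          return eglGetDisplay((EGLNativeDisplayType)x11_display);
      }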
* vo_gpu: x11egl: log EGL config ID (wm4, 2019-12-15, 1 file, -2/+6)
  Somewhat useful for debugging.
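  The change amounts to something like the following sketch (EGL_CONFIG_ID is from the EGL spec; the printf stands in for mpv's internal logging):

      #include <stdio.h>
      #include <EGL/egl.h>

      /* Sketch: log the ID of the chosen EGLConfig, so it can be matched
       * against the output of tools like eglinfo when debugging. */
      static void log_egl_config_id(EGLDisplay display, EGLConfig config)
      {
          EGLint id = 0;
          if (eglGetConfigAttrib(display, config, EGL_CONFIG_ID, &id))
              printf("Chosen EGLConfig ID: 0x%x\n", (unsigned)id);
      }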
* vo_gpu: x11egl: cleanup EGL correctly (wm4, 2019-12-12, 1 file, -6/+3)
  ...probably.

  The EGL backend had a strange problem: when recreating the window, EGL surface creation sometimes mysteriously failed. For example, keeping the "_" key down (cycles video by default) destroys and recreates the window in rapid succession, which will often enough show the "Could not create EGL surface!" message.

  This was puzzling, because due to mpv's architecture, the X11 Window and even the X11 Display were fully destroyed, the thread on which they ran was destroyed, and then everything was recreated. There shouldn't have been any state that could make subsequent EGL initialization fail.

  It turns out mpv forgot to free EGLSurfaces in the x11 code. EGL is a pretty crazy API (full of thread-local and global state with weird lifetime requirements), and for example it seems EGLDisplay cannot be explicitly released, but apparently implicitly dies when the native display is closed (at least EGL 1.5 claims eglTerminate() does _not_ invalidate the display, only certain objects linked to it). It appears that Mesa still referenced at least the EGLSurface in some form, and either some pointer or some X11 ID was dangling, and when it randomly matched when eglCreateWindowSurface() was called, it failed.

  Fix this by calling eglTerminate(), which supposedly destroys (or rather unreferences) contexts and surfaces created from the display (but absurdly not the display itself).

  Now why can't you just destroy the display? If it's implicitly invalidated, why can't it just call eglTerminate() implicitly when this happens? Did Mesa do something wrong when they somehow didn't automatically remove the dangling object (so I could claim not to be responsible for the bug)? Who the fuck knows, and I'm too tired to figure this out (both because it's late, and because I'm tired of this EGL crap API).

  Still not sure if the code is correct now. I think EGL was designed to maximize implementation and API-use complications. How else could you possibly come up with something like the EGLDisplay life cycle? Or am I just making a fuss?

  Anyway, fuck EGL, fuck computers, fuck technology.

  Fixes: #7129
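  The teardown order implied by the above looks roughly like this (a sketch; mpv's real uninit path is structured differently):

      #include <EGL/egl.h>

      /* Sketch: release everything tied to the display before the native
       * X11 Display is closed. Per the spec, eglTerminate() destroys the
       * resources created from the display, but not the EGLDisplay itself. */
      static void uninit_egl(EGLDisplay display, EGLContext context, EGLSurface surface)
      {
          eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
          if (context != EGL_NO_CONTEXT)
              eglDestroyContext(display, context);
          if (surface != EGL_NO_SURFACE)
              eglDestroySurface(display, surface);
          eglTerminate(display);
          eglReleaseThread(); /* also drop EGL's thread-local state */
      }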
* x11: require EGL 1.5 and use eglGetPlatformDisplay() (wm4, 2019-11-16, 1 file, -6/+2)
  eglGetDisplay() is a broken API, since it takes a windowing-specific argument, yet is supposed to work for multiple APIs at the same time. On Linux, it can take both an X11 "Display" and a "wl_display". Obviously there is no way to specify what kind of display the argument is (it's just a void*).

  Mesa has _eglNativePlatformDetectNativeDisplay, which does funny stuff to try to guess the display type, including trying to call mincore() to determine whether the pointer can be accessed at all. I guess this recently accidentally broke (as a bug), but on the other hand, maybe it's time to do this properly.

  The fix is using eglGetPlatformDisplay(). This requires EGL 1.5, plus Mesa needs to support the associated platform extension (EGL_KHR_platform_x11).

  Since I see no reasonable way to do this in a compatible way, just require that EGL 1.5 is available. The problem is that EGL 1.4 seems to require you to create a display to query the EGL version and extensions, and you have a chicken-and-egg problem. It's very stupid. Maybe you could jump through some more hoops to get something compatible, but fuck that. Users on "too old" Mesa will fall back to GLX (which we keep around for a regrettable company known by the name of Nvidia).

  I think Wayland and GBM should do the same. They're sufficiently bleeding-edge that you can expect them to have EGL 1.5. On the other hand, the cursed RPI code will have to stay with eglGetDisplay().

  Speculative fix for #7154.

  (Rant about EGL follows. Actually I deleted it.)
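  With EGL 1.5, the whole dance reduces to one core call, since the platform is stated explicitly (a sketch; EGL_PLATFORM_X11_KHR is defined by EGL_KHR_platform_x11):

      #include <EGL/egl.h>
      #include <EGL/eglext.h>
      #include <X11/Xlib.h>

      /* Sketch: the display type is passed explicitly, so the EGL
       * implementation no longer has to guess what the void* argument is. */
      static EGLDisplay get_display_egl15(Display *x11_display)
      {
          return eglGetPlatformDisplay(EGL_PLATFORM_X11_KHR, x11_display, NULL);
      }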
* vo_gpu: context_x11egl: check eglGetConfigAttrib() for errors (wm4, 2019-11-08, 1 file, -1/+4)
  Not sure why it assumes that it always succeeds (although generally it won't fail).
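  A sketch of the checked pattern (eglGetError() per the EGL spec; the zero fallback is illustrative):

      #include <stdio.h>
      #include <EGL/egl.h>

      /* Sketch: return 0 on failure instead of reading an uninitialized
       * value, and report the EGL error code for debugging. */
      static EGLint get_config_attrib_checked(EGLDisplay display, EGLConfig config,
                                              EGLint attrib)
      {
          EGLint value = 0;
          if (!eglGetConfigAttrib(display, config, attrib, &value)) {
              fprintf(stderr, "eglGetConfigAttrib(0x%x) failed: error 0x%x\n",
                      (unsigned)attrib, (unsigned)eglGetError());
              return 0;
          }
          return value;
      }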
* vo_gpu: x11egl: support Mesa OML sync extension (wm4, 2019-09-08, 1 file, -0/+24)
  Mesa supports the EGL_CHROMIUM_sync_control extension, and it's available out of the box with AMD drivers. In practice, this is exactly the same as GLX_OML_sync_control, but for EGL. The extension specification is separate from the GLX one though, and buried somewhere in the Chromium code.

  This appears to work, although I don't know if it really works.

  In theory, this could be useful for other EGL targets. Support code for it could have been added to egl_helpers.c to avoid some minor duplicated glue code if another EGL target were to provide this extension. I didn't bother with that. ANGLE on Windows can't support it, because the extension spec explicitly requires POSIX timers. ANGLE on Linux/OSX is actively harmful for mpv and hopefully won't ever use it. Wayland uses EGL, but has its own fancy presentation feedback stuff (and besides, I don't think basic video player functionality works on Wayland at all). context_drm_egl maybe? But I think DRM has its own stuff.
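  Since the spec is buried in the Chromium tree, here is a hedged sketch of how such an entrypoint gets loaded; the signature is an assumption, mirroring glXGetSyncValuesOML:

      #include <stdint.h>
      #include <EGL/egl.h>

      /* Assumed signature, mirroring glXGetSyncValuesOML: returns the UST
       * timestamp, media stream counter (MSC) and swap buffer counter (SBC). */
      typedef EGLBoolean (*GetSyncValuesFn)(EGLDisplay, EGLSurface,
                                            int64_t *ust, int64_t *msc, int64_t *sbc);

      static EGLBoolean query_sync_values(EGLDisplay display, EGLSurface surface,
                                          int64_t *ust, int64_t *msc, int64_t *sbc)
      {
          /* Extension entrypoints must be resolved at runtime; real code
           * checks EGL_EXTENSIONS for "EGL_CHROMIUM_sync_control" first. */
          GetSyncValuesFn get_sync_values = (GetSyncValuesFn)
              eglGetProcAddress("eglGetSyncValuesCHROMIUM");
          if (!get_sync_values)
              return EGL_FALSE;
          return get_sync_values(display, surface, ust, msc, sbc);
      }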
* client API: add a new way to pass X11 Display etc. to render API (wm4, 2018-03-26, 1 file, -2/+2)
  Hardware decoding things often need access to additional handles from the windowing system, such as the X11 or Wayland display when using vaapi. The opengl-cb API had nothing dedicated for this, and used the weird GL_MP_MPGetNativeDisplay GL extension (which was mpv-specific and not officially registered with OpenGL).

  This was awkward, and a pain due to having to emulate GL context behavior (like needing a TLS variable to store context for the pseudo GL extension function). In addition (and not inherently due to this), we could pass only one resource from mpv builtin context backends to hwdecs. It was also all GL-specific.

  Replace this with a newer mechanism. It works for all RA backends, not just GL. The API user can explicitly pass the objects at init time via mpv_render_context_create(). Multiple resources are naturally possible.

  The API uses MPV_RENDER_PARAM_* defines, but internally we use strings. This is done for 2 reasons:

  1. trying to leave libmpv and internal mechanisms decoupled,
  2. not having to add public API for some of the internal resource types (especially D3D/GL interop stuff).

  To remain sane, drop support for obscure half-working opengl-cb things, like the DRM interop (was missing necessary things), the RPI window thing (nobody used it), and obscure D3D interop things (not needed with ANGLE, others were undocumented). In order not to break ABI and the C API, we don't remove the associated structs from opengl_cb.h.

  The parts which are still needed (in particular DRM interop) need to be ported to the render API.
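  From the client side, passing the X11 Display looks like this (the MPV_RENDER_PARAM_* names are real libmpv API from render.h / render_gl.h; the get_proc_address callback is a caller-provided GL loader, and the helper name is made up):

      #include <mpv/client.h>
      #include <mpv/render_gl.h>
      #include <X11/Xlib.h>

      /* Sketch: hand the X11 Display to the render context at creation
       * time, so hwdec backends such as vaapi can pick it up. */
      static mpv_render_context *create_render_ctx(mpv_handle *mpv, Display *x11_display,
              void *(*get_proc_address)(void *ctx, const char *name))
      {
          mpv_opengl_init_params gl_params = {
              .get_proc_address = get_proc_address,
          };
          mpv_render_param params[] = {
              {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
              {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, &gl_params},
              {MPV_RENDER_PARAM_X11_DISPLAY, x11_display},
              {0}
          };
          mpv_render_context *ctx = NULL;
          if (mpv_render_context_create(&ctx, mpv, params) < 0)
              return NULL;
          return ctx;
      }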
* vo_opengl: refactor into vo_gpu (Niklas Haas, 2017-09-21, 1 file, -34/+50)
  This is done in several steps:

  1. refactor MPGLContext -> struct ra_ctx
  2. move GL-specific stuff in vo_opengl into opengl/context.c
  3. generalize context creation to support other APIs, and add --gpu-api
  4. rename all of the --opengl- options that are no longer opengl-specific
  5. move all of the stuff from opengl/* that isn't GL-specific into gpu/ (note: opengl/gl_utils.h became opengl/utils.h)
  6. rename vo_opengl to vo_gpu
  7. to handle window screenshots, the short-term approach was to just add it to ra_swchain_fns. Long term (and for vulkan) this has to be moved to ra itself (and vo_gpu altered to compensate), but this was a stop-gap measure to prevent this commit from getting too big
  8. move ra->fns->flush to ra_gl_ctx instead
  9. some other minor changes that I've probably already forgotten

  Note: This is one half of a major refactor, the other half of which is provided by rossy's following commit. This commit enables support for all linux platforms, while his version enables support for all non-linux platforms.

  Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx so it benefits from the --opengl- options like --opengl-early-flush, --opengl-finish etc. Should be a strict superset of the old functionality.

  Disclaimer: Since I have no way of compiling mpv on all platforms, some of these ports were done blindly. Specifically, the blind ports included context_mali_fbdev.c and context_rpi.c. Since they're both based on egl_helpers, the port should have gone smoothly without any major changes required. But if somebody complains about a compile error on those platforms (assuming anybody actually uses them), you know where to complain.
* vo_opengl: add a generic EGL function loader function (wm4, 2017-04-06, 1 file, -4/+1)
  This is pretty trivial, but also quite annoying due to details like mismatching eglGetProcAddress() function signature (most callers just cast the function pointer), and ARM/Linux hacks. So move them all to one place.
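  The helper is essentially one cast in one place (a sketch; the wrapper name is made up):

      #include <EGL/egl.h>

      /* Sketch: centralize the eglGetProcAddress() cast instead of having
       * every caller cast the mismatching function pointer type itself.
       * On some ARM/Linux setups, real code also falls back to dlsym(). */
      static void *get_egl_fn(void *unused_ctx, const char *name)
      {
          (void)unused_ctx;
          return (void *)eglGetProcAddress(name);
      }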
* vo_opengl: x11egl: fix alpha mode (wm4, 2016-12-30, 1 file, -2/+32)
  The way it should (probably) work is that selecting an RGBA framebuffer format will simply make the compositor use the alpha. It works this way on Wayland.

  On X11, this is... not done. Instead, both GLX and EGL report two FB configs, which are exactly the same, except for the platform-specific visual. Only the latter (non-default) points to a visual that actually has alpha. So you can't make the pure GLX and EGL APIs select alpha mode, and you have to override manually.

  Or in other words, alpha was hacked violently into X11, in a way that doesn't really make sense for the sake of compatibility, and forces API users to wade through metaphorical cow shit to deal with it.

  To be fair, some other platforms actually also require you to enable alpha explicitly (rather than looking at the framebuffer type), but they skip the metaphorical cow shit step.
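  The manual override amounts to inspecting the X11 visual behind each EGLConfig. A sketch of one way to detect the alpha-capable visual, using XRender (an assumption about the detection method, not necessarily mpv's exact logic):

      #include <EGL/egl.h>
      #include <X11/Xlib.h>
      #include <X11/Xutil.h>
      #include <X11/extensions/Xrender.h>

      /* Sketch: EGL reports the native visual ID; whether that visual
       * actually carries alpha is only visible through its XRender
       * picture format. */
      static EGLBoolean config_has_alpha_visual(Display *dpy, EGLDisplay egl_dpy,
                                                EGLConfig config)
      {
          EGLint visual_id = 0;
          if (!eglGetConfigAttrib(egl_dpy, config, EGL_NATIVE_VISUAL_ID, &visual_id))
              return EGL_FALSE;

          XVisualInfo vinfo_template = { .visualid = (VisualID)visual_id };
          int num = 0;
          XVisualInfo *vi = XGetVisualInfo(dpy, VisualIDMask, &vinfo_template, &num);
          EGLBoolean has_alpha = EGL_FALSE;
          if (vi && num > 0) {
              XRenderPictFormat *fmt = XRenderFindVisualFormat(dpy, vi[0].visual);
              has_alpha = fmt && fmt->direct.alphaMask > 0;
          }
          if (vi)
              XFree(vi);
          return has_alpha;
      }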
* vo_opengl: factor some EGL context creation code (wm4, 2016-09-13, 1 file, -74/+16)
  Add a function to egl_helpers.c for creating an EGL context and make context_x11egl.c use it.

  This is meant to be generic, and should work with other windowing APIs as well. The other EGL-using code in mpv can be switched to it.
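  The shape of such a factored helper, as a sketch (names, signature, and config attributes are assumptions; the point is that nothing here is windowing-specific):

      #include <EGL/egl.h>

      /* Sketch: the windowing-independent part of EGL setup. The caller
       * still creates the native window and the EGLSurface afterwards. */
      static EGLBoolean create_egl_context(EGLDisplay display, EGLContext *out_ctx,
                                           EGLConfig *out_config)
      {
          static const EGLint config_attribs[] = {
              EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
              EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
              EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
              EGL_NONE
          };
          EGLConfig config;
          EGLint num_configs = 0;
          if (!eglChooseConfig(display, config_attribs, &config, 1, &num_configs) ||
              num_configs < 1)
              return EGL_FALSE;

          eglBindAPI(EGL_OPENGL_API);
          EGLContext ctx = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);
          if (ctx == EGL_NO_CONTEXT)
              return EGL_FALSE;

          *out_ctx = ctx;
          *out_config = config;
          return EGL_TRUE;
      }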
* x11: stop using vo.event_fd (wm4, 2016-07-20, 1 file, -0/+12)
  Instead let it do its own event loop wakeup handling.
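  Self-managed wakeup handling on X11 typically means polling the display connection alongside a wakeup pipe (a sketch of the common self-pipe pattern, not mpv's literal code):

      #include <poll.h>
      #include <unistd.h>
      #include <X11/Xlib.h>

      /* Sketch: wait on both the X11 connection and a wakeup pipe; another
       * thread interrupts the wait by writing a byte to the pipe. */
      static void wait_events(Display *dpy, int wakeup_read_fd)
      {
          struct pollfd fds[2] = {
              { .fd = ConnectionNumber(dpy), .events = POLLIN },
              { .fd = wakeup_read_fd,        .events = POLLIN },
          };
          poll(fds, 2, -1 /* no timeout */);
          if (fds[1].revents & POLLIN) {
              char drain[64];
              (void)read(wakeup_read_fd, drain, sizeof(drain)); /* clear wakeup */
          }
          while (XPending(dpy)) {
              XEvent ev;
              XNextEvent(dpy, &ev);
              /* ... dispatch ev ... */
          }
      }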
* vo_opengl: use standard functions to retrieve display depth (wm4, 2016-06-14, 1 file, -1/+0)
  Until now, we've used system-specific APIs (GLX, EGL, etc.) to retrieve the depth of the default framebuffer. (We equate this with the display depth, and use the determined depth for dithering.)

  We can actually retrieve this value through standard GL API, and it works everywhere (except GLES 2 of course). This simplifies everything a great deal.

  egl_helpers.c is empty now. But I expect that some EGL boilerplate will be moved to it, so don't remove it yet.
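  The standard-GL query looks like this (core since GL 3.0; this sketch assumes the entrypoints are resolved via GL_GLEXT_PROTOTYPES on Linux, where real code would use its own GL function loader):

      #define GL_GLEXT_PROTOTYPES
      #include <GL/gl.h>
      #include <GL/glext.h>

      /* Sketch: query the default framebuffer's per-channel bit depth via
       * core GL. GL_BACK_LEFT names the default framebuffer's back buffer
       * on desktop GL; GLES 3 uses GL_BACK instead, and GLES 2 has no
       * equivalent query (hence the exception noted above). */
      static GLint default_fb_red_bits(void)
      {
          GLint bits = 0;
          glBindFramebuffer(GL_FRAMEBUFFER, 0); /* bind the default framebuffer */
          glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                  GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, &bits);
          return bits;
      }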
* vo_opengl: request core profile on X11/EGL too (wm4, 2016-06-10, 1 file, -0/+11)
  Avoids that some OpenGL implementations pin the context version to 3.0.
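  Requesting a core profile through EGL is a matter of context attributes (a sketch; the attribute names are core in EGL 1.5 and come from EGL_KHR_create_context before that, and the 3.3 version is illustrative):

      #include <EGL/egl.h>
      #include <EGL/eglext.h>

      /* Sketch: without the explicit profile mask, some implementations
       * hand back a 3.0-era context regardless of what they support. */
      static EGLContext create_core_context(EGLDisplay display, EGLConfig config)
      {
          static const EGLint attrs[] = {
              EGL_CONTEXT_MAJOR_VERSION, 3,
              EGL_CONTEXT_MINOR_VERSION, 3,
              EGL_CONTEXT_OPENGL_PROFILE_MASK, EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT,
              EGL_NONE
          };
          eglBindAPI(EGL_OPENGL_API);
          return eglCreateContext(display, config, EGL_NO_CONTEXT, attrs);
      }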
* Change GPL/LGPL dual-licensed files to LGPL (wm4, 2016-01-19, 1 file, -12/+7)
  Do this to make the license situation less confusing.

  This change should be of no consequence, since LGPL is compatible with GPL anyway, and making it LGPL-only does not restrict the use with GPL code.

  Additionally, the wording implies that this is allowed, and that we can just remove the GPL part.
* vo_opengl: prefix per-backend source files with context_ (wm4, 2015-12-19, 1 file, -0/+197)