path: root/video/out/opengl
Commit message [Author, Date, Files, Lines]
* drm: add typedef for PFNEGLGETPLATFORMDISPLAYEXTPROC (#7314) [Jan Palus, 2020-05-14, 1 file, -0/+5]

    The extension is not mandatory and is not provided e.g. on the Raspberry Pi.
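    For reference, a sketch of the kind of typedef the commit presumably adds;
    this is the standard prototype from the EGL_EXT_platform_base extension,
    provided for builds whose headers lack it:

        #include <EGL/egl.h>

        /* Prototype from EGL_EXT_platform_base, for headers (e.g. on the
         * Raspberry Pi) that don't ship it: */
        typedef EGLDisplay (EGLAPIENTRYP PFNEGLGETPLATFORMDISPLAYEXTPROC)
            (EGLenum platform, void *native_display, const EGLint *attrib_list);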
* video: fix rgb30 component order [wm4, 2020-05-09, 1 file, -1/+1]

    Was broken with a zimg wrapper refucktor before the previous commit. In
    addition, it seems this didn't match the vo_drm format, or the format
    naming convention. So the order actually changes, and the format is
    redefined. (The img_format.h comment was probably wrong.)

    Change vo_gpu to the new format as well, so we can still test it.
* egl_helpers: add typedef for EGLAttrib (#7314) [Jan Palus, 2020-04-23, 1 file, -0/+1]

    Part of EGL 1.5, which is not present e.g. on the Raspberry Pi.
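    EGL 1.5 defines EGLAttrib as a pointer-sized integer, so the added typedef
    is presumably the standard one:

        #include <stdint.h>

        /* EGL 1.5: attribute type wide enough to also carry pointers. */
        typedef intptr_t EGLAttrib;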
* wayland: use mp_time deltas for presentation time [Dudemanguy, 2020-04-20, 1 file, -2/+1]

    One not-so-nice hack in the wayland code is the assumption of when a
    window is hidden (out of view from the compositor) and an arbitrary delay
    for enabling/disabling the usage of presentation time. Since you do not
    receive any presentation feedback when a window is hidden on wayland (a
    feature or misfeature depending on who you ask), the ust is updated based
    on the refresh_nsec statistic gathered from the previous feedback event.
    The flaw with this is that refresh_nsec basically just reports back the
    display's refresh rate (1 / refresh_rate * 10^9). It doesn't tell you how
    long the vsync interval really was. So as a video is left playing out of
    view, the wl->last_queue_display_time becomes increasingly inaccurate.
    This led to a vsync spike when bringing the mpv window back into sight
    after it was hidden for a period of time.

    The hack for working around this is to just wait a while before enabling
    presentation time again. The discrepancy between the "bogus"
    wl->last_queue_display_time and the actual value you get from the feedback
    only happens initially after a switch. If you just discard those values,
    you avoid the dramatic vsync spike.

    It turns out that there's a smarter way to do this. Just use mp_time_us
    deltas. The whole reason for these hacks is because
    wl->last_queue_display_time wasn't close enough to how long it would take
    for a frame to actually display if it wasn't hidden. Instead, mpv's
    internal timer can be used, and the difference between wayland_sync_swap
    calls is a close enough proxy for the vsync interval (certainly better
    than using the monitor's refresh rate). This avoids the entire conundrum
    of massive vsync spikes when bringing the player back into view, and it
    means we can get rid of extra crap like wl->hidden.
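    A minimal sketch of the idea (struct and field names hypothetical, not the
    actual patch): the delta between successive swap calls stands in for the
    vsync interval, and keeps ticking even while the window is hidden.

        #include <stdint.h>

        int64_t mp_time_us(void);  // mpv's monotonic clock, declared for the sketch

        struct swap_state {
            int64_t last_swap_us;      // time of the previous swap call
            int64_t vsync_interval_us; // estimated vsync interval
        };

        static void update_vsync_estimate(struct swap_state *s)
        {
            int64_t now = mp_time_us();
            if (s->last_swap_us)
                s->vsync_interval_us = now - s->last_swap_us;
            s->last_swap_us = now;
        }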
* vo_gpu: opengl: make sure to always clean up debug callbacks [Niklas Haas, 2020-04-15, 1 file, -0/+4]

    In theory this mostly happens automatically, especially since the 5-vsync
    limit disables this already. But if we uninit before 5 vsyncs are
    rendered, this can get left in a dangling 'enabled' state, which leaks a
    debug report callback. Always explicitly disable it, just to be on the
    safe side.
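    A sketch of the unconditional cleanup being described, using standard
    KHR_debug calls (any GL loader that exposes them will do):

        #include <epoxy/gl.h>

        /* Detach the report callback and disable debug output so no dangling
         * callback pointer survives uninit. */
        static void gl_debug_uninit(void)
        {
            glDebugMessageCallback(NULL, NULL);
            glDisable(GL_DEBUG_OUTPUT);
        }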
* options: change option macros and all option declarations [wm4, 2020-03-18, 3 files, -36/+35]

    Change all OPT_* macros such that they don't define the entire m_option
    initializer, and instead expand only to a part of it, which sets certain
    fields. This requires changing almost every option declaration, because
    they all use these macros. A declaration now always starts with {"name",
    ... followed by designated initializers only (possibly wrapped in macros).
    The OPT_* macros now initialize the .offset and .type fields only,
    sometimes also .priv and others.

    I think this change makes the option macros less tricky. The old code had
    to stuff everything into macro arguments (and attempted to allow setting
    arbitrary fields by letting the user pass designated initializers in the
    vararg parts). Some of this was made messy due to C99 and C11 not allowing
    0-sized varargs with ',' removal. It's also possible that this change is
    pointless, other than cosmetic preferences.

    Not too happy about some things. For example, the OPT_CHOICE() indentation
    I applied looks a bit ugly.

    Much of this change was done with regex search&replace, but some places
    required manual editing. In particular, code in "obscure" areas (which I
    didn't include in compilation) might be broken now.

    In wayland_common.c the author of some option declarations confused the
    flags parameter with the default value (though the default value was also
    properly set below). I fixed this with this change.
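    To illustrate the shape of the change (option and field names are
    hypothetical, not taken from the patch):

        // Old style: the macro expanded to the complete m_option initializer.
        //     OPT_INT("some-option", some_field, 0),

        // New style: the declaration starts with the name, and the macro only
        // fills in fields like .type and .offset.
        //     {"some-option", OPT_INT(some_field)},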
* options: remove intpair option type [wm4, 2020-03-13, 1 file, -1/+2]

    This was mostly unused, and has certain problems. Just get rid of it.

    It was still used in CDDA (--cdda-span) and a debug option for OpenGL
    (--opengl-check-pattern). Replace both of these with 2 options each, where
    each sets the start/end values of the former span. Both were undocumented
    somehow (normally we require all options to be documented), so I'm not
    caring about compatibility, and not bothering to add it to the API
    changelog.
* drm_prime: double free bug [Sven Kroeger, 2020-03-05, 1 file, -3/+9]

    This commit fixes a bug where the handle for a framebuffer gets double
    freed. It can happen that the same prime fd gets two framebuffers. As the
    prime fd is the same, the resulting prime handle is also the same. This
    means one handle but 2 framebuffers, which can lead to the following
    chain:

    1. The first framebuffer gets deleted; the handle also gets freed via the
       ioctl.
    2. In the startup phase, not all 4 dumb buffers for overlay drawing are
       set up yet. It can happen that the last dumb buffer gets the handle we
       freed above.
    3. The second framebuffer gets freed and the handle is freed again, with
       the result that the 4th dumb buffer's handle is no longer backed by a
       buffer.
    4. Drm prime continues to assign handles to its prime fds, and the handle
       that was just freed gets reassigned, but to a prime buffer.
    5. Now the overlay should be drawn into dumb buffer 4, which still has the
       same handle but is backed by the wrong buffer.

    This leads to two different behaviors:

    - mpv crashes, as the drm prime buffer's size is calculated from the
      decoder output format. The overlay output format differs and takes more
      space, so the size check in the kernel fails.
    - mpv continues to play. This happens when the decoder allocates a bigger
      buffer than needed for the overlay, for example when the overlay is
      Full HD and the decoder output is 4K. The overlay will be drawn into the
      wrong buffer (since it's a drm prime buffer), resulting in a flicker
      every fourth frame.
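    One way to avoid closing a shared handle twice is to refcount GEM handles;
    a sketch of that idea (not the actual patch):

        #include <stdbool.h>
        #include <stdint.h>

        struct handle_ref { uint32_t handle; int refs; };

        /* Returns true only when the caller should actually issue
         * DRM_IOCTL_GEM_CLOSE, i.e. on the last release of the handle. */
        static bool ref_release(struct handle_ref *t, int n, uint32_t handle)
        {
            for (int i = 0; i < n; i++) {
                if (t[i].refs && t[i].handle == handle)
                    return --t[i].refs == 0;
            }
            return false;  // unknown handle: nothing to close
        }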
* OpenGL: Also detect softpipe as a software driver [linkmauve, 2020-02-25, 1 file, -0/+1]

    Because it is.
* wayland: remove wayland-frame-wait-offset option [dudemanguy, 2020-01-31, 1 file, -1/+1]

    This originally existed as a hack for weston. In certain scenarios, a
    frame taking too long to render would cause vo_wayland_wait_frame to time
    out, which would result in a ton of dropped frames. The naive solution was
    just to add a slight delay to the time value. If a frame took too long, it
    would likely fall under the timeout value and all was well. This was
    exposed to the user since the default delay (1000) was completely
    arbitrary.

    However with presentation time, this doesn't appear to be necessary. Fresh
    frames that take longer than the display's refresh rate (16.666 ms in most
    cases) behave well in Weston. The other two main compositors without
    presentation time (GNOME and Plasma) also do not experience any ill
    effects. It's better not to overcomplicate things, so this "feature" can
    be removed now.
* cocoa-cb: add support for forcing the dedicated GPU for rendering [der richter, 2020-01-26, 1 file, -3/+5]

    This deprecates the old cocoa-backend-only option and moves it to the
    general macOS ones. Add support for the new option in the cocoa-cb layer
    creation, and use the new option in the old cocoa backend.

    Fixes #7272
* vo_gpu: hwdec_vdpau: remove direct_mode [Philip Langdale, 2019-12-28, 1 file, -124/+47]

    As we are less and less interested in vdpau, with nvdec and vaapi being
    better choices in general on nvidia and AMD respectively, we might
    consider removing direct_mode, where we bypass the vdpau mixer and work
    directly with yuv textures.

    Normally, working with yuv textures would be great, but vdpau built in an
    assumption that all frames are delivered as separate fields, causing us to
    have to re-interleave them. nvidia then introduced a new OpenGL extension
    that can return the yuv frames as frames, but we can't just
    unconditionally switch to that, as we'd want to keep supporting older
    hardware where the drivers are no longer getting new features. The end
    result is that we wouldn't be able to get rid of the old code paths.

    Removing direct_mode means we always use the mixer, and work with rgba
    frame textures. There are some theoretical limitations to this, but in
    practice they probably don't matter much: unsupported colourspaces don't
    matter because without 10bit decoding support, we can't use them anyway,
    and apparently we're not doing separate chroma scaling these days, so
    scaling the rgba doesn't really lose anything (and the vdpau hq scaling
    option remains available).
* vo_gpu: opengl: make it work with EGL 1.4 [wm4, 2019-12-16, 4 files, -4/+82]

    This tries to deal with the crazy EGL situation. The summary is:

    - using eglGetDisplay() with multiple windowing platforms doesn't really
      work, but Mesa had an awful hack for it
    - this hack can be disabled at build time, and some distros sometimes
      accidentally or intentionally do so
    - Mesa will probably eventually disable it by default
    - we switched to eglGetPlatformDisplay(), but this requires EGL 1.5
    - the very regrettable graphics company (also known as Nvidia) ships
      drivers (for old hardware I think) that are EGL 1.4 only
    - that means even though we "require" EGL 1.5 and link against it, the
      runtime EGL may be 1.4
    - trying to run mpv there crashes in the dynamic linker
    - so we have to go through some more awful compatibility hacks

    This commit tries to do it "properly", but using EGL 1.4 as base. The
    platform selection mechanism is a messy extension there, which got
    elevated to core API in 1.5 (but OF COURSE in incompatible ways).

    I'm not sure whether the EGL 1.5 code path (by parsing the EGL_VERSION) is
    really needed, but if you ask me, it feels slightly saner not to rely on
    an EGL 1.4 kludge forever. But maybe this is just an instance of
    self-harm, since they will most likely never drop or not provide this API.

    Also, unlike before, we actually check the extension string for the
    individual platform extensions, because who knows, some EGL
    implementations might curse us if we pass unknown platform parameters.
    (But actually, the more I think about this, the more bullshit it is.)

    X11 and Wayland were the only ones trying to call eglGetPlatformDisplay,
    so they're the only ones which are adjusted in this commit.

    Unfortunately, correct function of this commit is unconfirmed. It's
    possible that it crashes with the old drivers mentioned above.

    Why didn't they solve it like this:

        struct native_display {
            int platform_type;
            void *native_display;
        };

    Could have kept eglGetDisplay() without all the obnoxious extension BS.
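    A sketch of the selection logic described above (error handling and
    logging omitted): use eglGetPlatformDisplayEXT() from EGL_EXT_platform_base
    when the client extension string advertises it, otherwise fall back to
    plain eglGetDisplay().

        #include <string.h>
        #include <EGL/egl.h>
        #include <EGL/eglext.h>

        static EGLDisplay get_display(EGLenum platform, void *native_display)
        {
            // Client extensions are queryable without a display (if the
            // implementation supports EGL_EXT_client_extensions).
            const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
            if (exts && strstr(exts, "EGL_EXT_platform_base")) {
                PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
                    (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                        eglGetProcAddress("eglGetPlatformDisplayEXT");
                if (get_platform_display)
                    return get_platform_display(platform, native_display, NULL);
            }
            // EGL 1.4 fallback: hope the implementation guesses correctly.
            return eglGetDisplay((EGLNativeDisplayType)native_display);
        }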
* vo_gpu: x11egl: log EGL config ID [wm4, 2019-12-15, 1 file, -2/+6]

    Somewhat useful for debugging.
* vo_gpu: x11egl: cleanup EGL correctly [wm4, 2019-12-12, 1 file, -6/+3]

    ...probably.

    The EGL backend had a strange problem: when recreating the window, EGL
    surface creation sometimes mysteriously failed. For example, keeping the
    "_" key down (cycles video by default) destroys and recreates the window
    in rapid succession, which will often enough show the "Could not create
    EGL surface!" message.

    This was puzzling because due to mpv's architecture, the X11 Window and
    even the X11 Display were fully destroyed, the thread on which they ran
    was destroyed, and then everything was recreated. There shouldn't have
    been any state that could make subsequent EGL initialization fail.

    It turns out mpv forgot to free EGLSurfaces in the x11 code. EGL is a
    pretty crazy API (full of thread local and global state with weird
    lifetime requirements), and for example it seems EGLDisplay cannot be
    explicitly released, but apparently implicitly dies when the native
    display is closed (at least EGL 1.5 claims eglTerminate() does _not_
    invalidate the display, only certain objects linked to it). It appears
    that Mesa still referenced at least EGLSurface in some form, and either
    some pointer or some X11 ID was dangling, and when it randomly matched
    when eglCreateWindowSurface() was called, it failed.

    Fix this by calling eglTerminate(), which supposedly destroys (or rather
    unreferences) contexts and surfaces created from the display (but
    absurdly not the display itself).

    Now why can't you just destroy the display? If it's implicitly
    invalidated, why can't it just call eglTerminate() implicitly when this
    happens? Did Mesa do something wrong when they somehow didn't
    automatically remove the dangling object (so I could claim not to be
    responsible for the bug)? Who the fuck knows, and I'm too tired to figure
    this out (both because it's late, and because I'm tired of this EGL crap
    API).

    Still not sure if the code is correct now. I think EGL was designed to
    maximize implementation and API-use complications. How else could you
    possibly come up with something like the EGLDisplay life cycle? Or am I
    just making a fuss? Anyway, fuck EGL, fuck computers, fuck technology.

    Fixes: #7129
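    A sketch of the teardown order implied by the commit (standard EGL calls,
    not the literal patch): unbind the context, destroy what can be destroyed,
    then eglTerminate() to unreference everything else created from the
    display.

        #include <EGL/egl.h>

        static void egl_uninit(EGLDisplay display, EGLContext ctx, EGLSurface surf)
        {
            eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE,
                           EGL_NO_CONTEXT);
            if (surf != EGL_NO_SURFACE)
                eglDestroySurface(display, surf);
            if (ctx != EGL_NO_CONTEXT)
                eglDestroyContext(display, ctx);
            // There is no eglDestroyDisplay(); terminating the display is the
            // closest EGL offers, and it drops surfaces/contexts made from it.
            eglTerminate(display);
        }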
* rpi: destroy fullscreen change handling [wm4, 2019-12-11, 1 file, -3/+0]

    Get rid of the legacy VOCTRL (which will be removed later). I'm not sure
    what exactly fullscreen was supposed to do (toggling between using the
    entire display, and what --geometry forced?), but I don't care, just get
    rid of the VOCTRL. PRs to fix regressions caused by this will be accepted,
    but personally I don't care since this is excessively fringe and obscure.
* drm: avoid division by 0 in drm_pflip_cb with bad drivers [Anton Kindestam, 2019-12-07, 1 file, -0/+1]

    Seems like some drivers only increment msc every other page flip when
    running in interlaced mode (I'm looking at you nouveau), i.e. it seems to
    be incremented at the frame rate, rather than the field rate. Obviously we
    can't work with this, so shame the driver and bail. On intel this isn't an
    issue, as msc is incremented at field rate there.

    This means presentation feedback won't work correctly in interlaced modes
    with those drivers, but who in their right mind uses an interlaced mode
    these days, anyway?
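    The guard being described is essentially this (a sketch; names are
    illustrative):

        #include <stdint.h>

        /* Derive the per-vsync interval from the ust/msc deltas reported by
         * the page-flip event; bail if the driver didn't advance msc. */
        static int64_t vsync_interval_ns(int64_t ust_delta_ns, int64_t msc_delta)
        {
            if (msc_delta <= 0)
                return 0;  // shame the driver and bail
            return ust_delta_ns / msc_delta;
        }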
* vo_gpu: opengl: add hack for ancient Mesa/GLX [wm4, 2019-11-30, 1 file, -23/+47]

    glx.h recursively includes gl.h, and there is no way to prevent this. Old
    Mesa defines some GL symbols, but not all which mpv needs. In particular,
    one user who was too lazy to update his ancient Ubuntu and preferred to
    bother us with obscure bug reports had Mesa headers which did not define
    GL 3.2, so GLsync was not defined.

    All in all I still think the idea of providing the GL API definitions
    ourselves was a good idea; just GLX should have been isolated better. But
    isolating GLX now is too much effort. Not sure why I'm bothering with this
    at all.

    Fixes: #7201 (unconfirmed)
* vo_gpu: opengl: do not free "GL" sub-allocations [wm4, 2019-11-29, 1 file, -1/+1]

    This function always expects the GL struct pointer to be a talloc
    allocation. So far so bad.

    But the terrible thing is that _lots_ of code in mpv didn't quite get this
    (including the code which introduced the way it is used this way). For
    example, in context_glx.c you see this:

        struct priv {
            GL gl;
            ...

    GL is not a talloc allocation, but since it's at the start of a talloc
    allocation, it works anyway. So far so bad.

    But the really terrible thing is that mpgl_load_functions2() calls
    talloc_free_children() on the GL pointer, which means that all of priv's
    other allocations would be freed as well. This would be unintentional and
    could create dangling pointers. And this happens at about 1 dozen of the
    callers. I'm amazed it didn't break anywhere yet.

    Removing this anti-pattern by making GL "implicitly" a talloc allocation
    would be too much effort at this point. So just manually free the only
    allocation that the function attached to GL.
* vo_gpu: context_glx: Add X11 native resource [Philip Langdale, 2019-11-16, 1 file, -0/+2]

    Surprisingly, we've managed to get this far without context_glx ever
    adding the X11 display as a native resource. But with the recent change to
    attempt to enable vdpau when using EGL, the hwdec now requires the display
    to be added. So let's add it.
* wayland: use eglGetPlatformDisplay() [Dudemanguy, 2019-11-16, 1 file, -1/+2]

    See aacc194. All the same logic applies to Wayland. In fact, we already
    require EGL 1.5 for wayland anyway, so it's better to do it right.
* x11: require EGL 1.5 and use eglGetPlatformDisplay() [wm4, 2019-11-16, 1 file, -6/+2]

    eglGetDisplay() is a broken API, since it takes a windowing specific
    argument, yet is supposed to work for multiple APIs at the same time. On
    Linux, it can take both a X11 "Display" and a "wl_display". Obviously
    there is no way to specify what kind of display the argument is (it's just
    a void*).

    Mesa has _eglNativePlatformDetectNativeDisplay, which does funny stuff to
    try to guess the display type, including trying to call mincore() to
    determine whether the pointer can be accessed at all. I guess this
    recently accidentally broke (as a bug), but on the other hand, maybe it's
    time to do this properly.

    The fix is using eglGetPlatformDisplay(). This requires EGL 1.5, plus Mesa
    needs to support the associated platform extension (EGL_KHR_platform_x11).

    Since I see no reasonable way to do this in a compatible way, just require
    that EGL 1.5 is available. The problem is that EGL 1.4 seems to require
    you to create a display to query EGL version and extensions, and you have
    a chicken-and-egg problem. It's very stupid. Maybe you could jump through
    some more hoops to get something compatible, but fuck that. Users on "too
    old" Mesa will fall back to GLX (which we keep around for a regrettable
    company known by the name of Nvidia).

    I think Wayland and GBM should do the same. They're sufficiently
    bleeding-edge that you can expect them to have EGL 1.5. On the other hand,
    the cursed RPI code will have to stay with eglGetDisplay().

    Speculative fix for #7154.

    (Rant about EGL follows. Actually I deleted it.)
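    The EGL 1.5 call in question, as a sketch: the platform enum makes the
    display type explicit, so EGL no longer has to guess what the void*
    points at.

        #include <X11/Xlib.h>
        #include <EGL/egl.h>
        #include <EGL/eglext.h>  // EGL_PLATFORM_X11_KHR (EGL_KHR_platform_x11)

        static EGLDisplay get_x11_display(Display *x11_display)
        {
            return eglGetPlatformDisplay(EGL_PLATFORM_X11_KHR, x11_display, NULL);
        }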
* vo_gpu: context_x11egl: check eglGetConfigAttrib() for errors [wm4, 2019-11-08, 1 file, -1/+4]

    Not sure why it assumes that it always succeeds (although generally it
    won't fail).
* vo_gpu: vdpau actually works under EGL [wm4, 2019-11-07, 1 file, -4/+2]

    The use of glXGetCurrentDisplay() restricted this to the GLX backend. But
    actually it works under EGL as well. Removing the GLX-specific call and
    using the general mpv-internal method to get the X "Display" makes it work
    in mpv. I didn't know this. Nvidia didn't list this as extension in the
    EGL context when I still used their GPUs.

    Note that this might in theory break use of vdpau in some libmpv clients
    using the render API. But only if MPV_RENDER_PARAM_X11_DISPLAY is not
    used, and they relied on mpv using glXGetCurrentDisplay(). EGL does not
    provide such an API, and hwdec_vaapi.c also uses what hwdec_vdpau.c uses
    now. Considering that vaapi is preferable these days, it's not bad at all
    if these clients get "broken". They can be easily fixed by passing the
    display to mpv correctly.
* vo_gpu: opengl: add support for IMGFMT_RGB30 [wm4, 2019-11-02, 1 file, -0/+11]

    This integrates it as "special" format, with no alpha component, as the
    equivalent IMGFMT_RGB30 isn't meant to contain any.

    Nothing can produce this format in the video chain yet, so the next
    commits are needed to make this actually work.
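    The likely GL-side mapping for a 10-bits-per-component packed format (a
    sketch; the exact mapping in the patch may differ, and the 2 alpha bits
    carry no meaningful data):

        #include <epoxy/gl.h>

        static void describe_rgb30(GLint *internal_format, GLenum *format,
                                   GLenum *type)
        {
            *internal_format = GL_RGB10_A2;
            *format = GL_RGBA;
            *type = GL_UNSIGNED_INT_2_10_10_10_REV;
        }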
* vo_gpu/opengl: fully initialize FBO when passing it to rendering [Jan Ekström, 2019-10-30, 1 file, -2/+4]

    Until now, we only properly initialized two values, leaving the rest as
    garbage.

    Fixes #7104
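    A sketch of the general fix pattern (struct and field names hypothetical):
    zero-initialize the whole descriptor so no field is left as stack garbage,
    then fill in the known values.

        struct fbo_desc { unsigned gl_fbo; int width, height; int flags; };

        static struct fbo_desc make_fbo_desc(unsigned fbo, int w, int h)
        {
            struct fbo_desc d = {0};  // every field defined, nothing garbage
            d.gl_fbo = fbo;
            d.width = w;
            d.height = h;
            return d;
        }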
* wayland: fix presentation time [Dudemanguy911, 2019-10-20, 1 file, -1/+1]

    There are 2 stupid things here that need to be fixed. First of all, vulkan
    wasn't actually using presentation time, because somehow the get_vsync
    function in context.c disappeared. Secondly, if the mpv window was hidden,
    it was updating the ust time based on refresh_usec, but really it should
    simply not feed any information to the vsync info structure at all. So
    this adds some logic to assume whether or not a window is hidden.
* wayland: add various render-related options [dudemanguy, 2019-10-20, 1 file, -1/+2]

    The newest wayland changes have some new logic that makes sense to expose
    to users as configurable options.
* wayland: add presentation time [dudemanguy, 2019-10-20, 1 file, -2/+78]

    Use the ust/msc/refresh values from wayland's presentation time in mpv's
    ra_swapchain_fns.get_vsync for the wayland contexts.
* vo_gpu: hwdec_d3d11egl: add missing P010 format to supported list [wm4, 2019-10-17, 1 file, -1/+1]

    This was obviously missing from the recent commit, which probably broke
    10 bit decoding. The original commit didn't test this for lack of working
    hardware; this commit isn't tested either.

    Fixes: a1c7d613935424b69b3
* vo_gpu: hwdec_d3d11eglrgb: remove this [wm4, 2019-10-16, 1 file, -279/+0]

    Finally. Since with the previous commit we can (probably) handle P010
    directly, this hack isn't needed anymore.
* vo_gpu: hwdec_d3d11egl: adapt to newer ANGLE API [wm4, 2019-10-16, 1 file, -24/+33]

    2 years ago, ANGLE removed the old NV12-specific extension and added a new
    one that supports a number of formats, including P010. Actually they just
    renamed it and removed their initial annoying and obvious design error
    (bravo, Google).

    Since it broke 2 years ago, nobody should give a shit about this code, and
    it should just be removed. But for some reason I still dived into the
    shit-tank (Windows development). I guess Intel code monkeys can't write
    drivers (or maybe the issue is because we're doing zero-copy, which
    probably maybe is not actually allowed by D3D11 due to array textures, see
    --d3d11va-zero-copy), so the P010 path is completely untested. If it
    doesn't work, I'll delete all this ANGLE hwdec code.

    Fixes: #7054
* wayland: use callback flag + poll for buffer swap [dudemanguy, 2019-10-10, 1 file, -0/+26]

    The old way of using wayland in mpv relied on an external renderloop for
    semi-accurate timings. This had multiple issues though. Display sync would
    break whenever the window was hidden (since the frame callback stopped
    being executed), which was really annoying. Also the entire external
    renderloop logic was kind of fragile and didn't play well with mpv's
    internal structure (i.e. using presentation time in that old paradigm
    breaks stats.lua).

    Basically the problem is that swap buffers blocks on wayland, which is
    crap whenever you hide the mpv window, since it locks up the entire
    player. So you have to make swap buffers not block, but this has a
    different problem: timings will be terrible if you use the unblocked swap
    buffers call.

    Based on some discussion in #wayland, the trick here is relatively simple
    and works well enough for our purposes. Instead we basically build a way
    to block with a timeout in the wayland buffer swap functions (see the
    sketch below).

    A bool is set in the frame callback function that indicates whether or not
    mpv is waiting for a frame to be displayed. In the actual buffer swap
    function, we enter into a while loop waiting for this flag to be set. At
    the same time, the wl_display is polled to block the thread and wake up if
    it receives any events from the compositor. This loop only breaks if
    enough time has passed or if the frame callback bool is received.

    In the near future, it would be better to set whether or not a frame has
    been displayed in the presentation feedback. However, as a first pass,
    doing it in the frame callback is more than good enough.

    The "downside" is that we render frames that aren't actually shown on
    screen when the player is hidden (it seems like wayland people don't like
    that). But who cares. Accurate timings are way more important. It's
    probably not too hard to add that behavior back in the player though.
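    A minimal sketch of the bounded wait (helper names are illustrative; the
    real code goes through mpv's wrappers): block on the display fd until the
    frame callback fires, setting *frame_wait to false, or the deadline
    passes.

        #include <poll.h>
        #include <stdbool.h>
        #include <stdint.h>
        #include <time.h>
        #include <wayland-client.h>

        static int64_t now_ms(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
        }

        static void wait_for_frame(struct wl_display *display, bool *frame_wait,
                                   int timeout_ms)
        {
            struct pollfd fd = { wl_display_get_fd(display), POLLIN, 0 };
            int64_t deadline = now_ms() + timeout_ms;
            while (*frame_wait) {
                int remaining = (int)(deadline - now_ms());
                if (remaining <= 0)
                    break;  // timeout: swap anyway rather than stalling
                wl_display_flush(display);
                if (poll(&fd, 1, remaining) > 0)
                    wl_display_dispatch(display);  // may run the frame callback
            }
        }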
* wayland opengl: actually call uninit if init fails [dudemanguy, 2019-10-03, 1 file, -1/+3]

    This is the proper fix to the memory leak @wm4 pointed out. It turns out
    that when you autoprobe opengl and vo_wayland_init returns false,
    vo_wayland_uninit is never actually executed. So you have a leftover
    pointer. The vulkan context does this correctly, which was why my old,
    dumb "fix" broke it.
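    The control-flow fix amounts to this shape (a sketch; only the two
    vo_wayland_* functions are named in the commit text):

        #include <stdbool.h>

        struct vo;
        bool vo_wayland_init(struct vo *vo);    // declared for the sketch
        void vo_wayland_uninit(struct vo *vo);

        static bool egl_init(struct vo *vo)
        {
            if (!vo_wayland_init(vo)) {
                vo_wayland_uninit(vo);  // tear down partial state; no leak
                return false;
            }
            return true;
        }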
* vo: make swapchain-depth option generic for all VOs [Anton Kindestam, 2019-09-28, 4 files, -6/+7]

    In preparation for making vo_drm able to use swapchain-depth.
* drm: refactor page_flipped callback [Anton Kindestam, 2019-09-28, 1 file, -54/+6]

    Avoid duplicating the same callback function in both context_drm_egl and
    vo_drm.
* drm: move struct vsync_tuple into drm_common as drm_vsync_tuple [Anton Kindestam, 2019-09-28, 1 file, -9/+2]

    This struct will be useful in vo_drm as well.
* context_drm_egl: define EGL_PLATFORM_GBM_MESA, EGL_PLATFORM_GBM_KHR if not in system headers [Anton Kindestam, 2019-09-27, 1 file, -0/+8]

    To account for oddball setups where EGL_PLATFORM_GBM_MESA or
    EGL_PLATFORM_GBM_KHR might not be defined for whatever reason.
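    The guards presumably look like this; 0x31D7 is the registered value for
    both the MESA and the KHR GBM platform enums:

        #ifndef EGL_PLATFORM_GBM_MESA
        #define EGL_PLATFORM_GBM_MESA 0x31D7
        #endif

        #ifndef EGL_PLATFORM_GBM_KHR
        #define EGL_PLATFORM_GBM_KHR 0x31D7
        #endif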
* vo_gpu: hwdec_drmprime_drm: add hwdec ctx [Jonas Karlman, 2019-09-27, 1 file, -0/+14]

    This allows using drm hwaccels that require a hwdevice. Tested with the
    v4l2request hwaccel and the cedrus driver on an allwinner device, running
    mpv with --vo=gpu --gpu-context=drm --hwdec=drm.
* context_android: move common code to a separate file [sfan5, 2019-09-27, 1 file, -52/+9]

    In preparation for a Vulkan Android context. This also replaces querying
    for EGL_WIDTH and EGL_HEIGHT with equivalent ANativeWindow calls.