path: root/video/out/opengl
* Revert "wayland: conditionally commit surface on resize"Dudemanguy2020-11-081-2/+0
| | | | | | | | | | | | 30dcfbc is a workaround for incorrect border sizes that could occur on sway/wlroots in certain edge cases. This seemed harmless enough, but it turns out that on mutter the extra wl_surface_commit somehow causes the window always go to the top left of the screen after you leave fullscreen. No idea why this occurs, but the original commit is a workaround a sway bug and causing regressions for other users isn't right despite the author being biased towards sway/wlroots. This reverts commit 30dcfbc9cb3f77dbb729fb6f95ffde7dbdddc4cb.
* vo_gpu: EGL: hack for alpha on different platforms (Dudemanguy, 2020-10-15, 2 files, -1/+4)
  7fb972f fixed transparency on x11/EGL/Mesa but happened to also break it for wayland and nvidia. Ideally on wayland, you should just be able to pick the right EGLConfig that has alpha, but this doesn't seem to work because reasons. So just go back to setting the EGL_ALPHA_SIZE bit if the user asks for alpha. Apparently this worked before for nvidia as well.
  The hack is to just run an eglQueryString in the x11egl context. If it picks up Mesa as the EGL_VENDOR, then force ctx->opts.want_alpha to 0 and let pick_xrgba_config take care of the rest.
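  A minimal sketch of that vendor check, assuming hypothetical struct fields (the EGL calls themselves are real API):
      // Query the vendor of the X11 EGL display; on Mesa, skip the generic
      // alpha path and let pick_xrgba_config() choose an RGBA visual instead.
      const char *vendor = eglQueryString(p->egl_display, EGL_VENDOR);
      if (vendor && strstr(vendor, "Mesa"))
          ctx->opts.want_alpha = 0;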
* wayland: update opaque region on runtime (Dudemanguy, 2020-10-15, 1 file, -17/+19)
  Made possible with 00b9c81. 34b8adc let the wayland surface set an opaque region depending on whether alpha was set by the user or not. However, there was no attempt to detect runtime changes, and it is possible (at least in wayland vulkan) to toggle the alpha on and off. So this meant we could be incorrectly signalling an opaque region if the user happened to change the alpha. Additionally, add a helper function for this and use it everywhere we want to set the opaque region.
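  A sketch of what such a helper can look like (the wl_* calls are the real wayland-client API; the state fields are assumptions):
      // Mark the whole surface opaque unless alpha is requested; passing NULL
      // clears any previously set opaque region.
      static void set_opaque_region(struct vo_wayland_state *wl, bool alpha)
      {
          if (!alpha) {
              struct wl_region *region = wl_compositor_create_region(wl->compositor);
              wl_region_add(region, 0, 0, wl->geometry.width, wl->geometry.height);
              wl_surface_set_opaque_region(wl->surface, region);
              wl_region_destroy(region);
          } else {
              wl_surface_set_opaque_region(wl->surface, NULL);
          }
      }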
* wayland: be less strict about when to render (Dudemanguy, 2020-10-15, 1 file, -1/+1)
  efb0c5c changed the rendering logic of mpv on wayland and made it skip rendering when it did not receive frame callback in time. The idea was to skip rendering when the surface was hidden and be less wasteful. This unfortunately had issues in certain instances where a frame callback could be missed (but the window was still in view) due to imprecise rendering (like the default audio video-sync mode). This would lead to the video appearing to stutter, since mpv would skip rendering in those cases.
  To account for this case, simply re-add an old heuristic for detecting whether a window is hidden, since the goal is to not render only when a window is hidden. If the wait on the frame callback times out enough times in a row, then we consider the window hidden and begin to skip rendering. The actual threshold for considering a surface hidden is completely arbitrary (greater than your monitor's refresh rate), but it's safe enough, since realistically you're not going to miss 60+ frame callbacks in a row unless the surface actually is hidden. Fixes #8169.
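  The heuristic reduces to something like this (field names and the exact threshold are illustrative):
      // One timed-out wait per expected vsync: after roughly a second's worth
      // of consecutive misses, assume the surface is hidden and stop rendering.
      if (wl->frame_callback_timeouts > wl->current_output->refresh_rate)
          wl->hidden = true;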
* wayland: set an opaque region (Dudemanguy, 2020-10-01, 1 file, -0/+7)
  Apparently a part of the wayland spec. A compositor may use a surface that has set part of itself as opaque for various optimizations. For mpv, we simply set the entire surface as opaque as long as the user has not set alpha=yes (note: alpha is technically broken in the wayland EGL backend at the time of this commit, but oh well). wlshm is always opaque. Fixes #8125.
* wayland: only render if we have frame callback (Dudemanguy, 2020-09-21, 1 file, -9/+26)
  Back in the olden days, mpv's wayland backend was driven by the frame callback. This had several issues and was removed in favor of the current approach, which allowed some advanced features (like display-resample and presentation time) to actually work properly. However, as a consequence, it meant that mpv always rendered, even if the surface was hidden. Wayland people consider this "wasteful" (and well, they aren't wrong). This commit aims to avoid wasteful rendering by doing some additional checks in the swapchain. There are three main parts to this.
  1. Wayland EGL now uses an external swapchain (like the drm context). Before we start a new frame, we check to see if we are waiting on a callback from the compositor. If there is no wait, then go ahead and proceed to render the frame, swap buffers, and then initiate vo_wayland_wait_frame to poll (with a timeout) for the next potential callback. If we are still waiting on a callback from the compositor when starting a new frame, then we simply skip rendering it entirely until the surface comes back into view.
  2. Wayland on vulkan has essentially the same approach, although the details are a little different. The ra_vk_ctx does not have support for an external swapchain, and although such a mechanism could theoretically be added, it doesn't make much sense with libplacebo. Instead, start_frame was added as a param and used to check for the callback.
  3. For wlshm, it's simply a matter of adding frame callback to it, leveraging vo_wayland_wait_frame, and using the frame callback value to decide whether or not to draw the image.
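  The frame-callback plumbing shared by these paths looks roughly like this (the wl_callback API is real; the surrounding state is assumed):
      static void frame_callback(void *data, struct wl_callback *cb, uint32_t time)
      {
          struct vo_wayland_state *wl = data;
          wl_callback_destroy(cb);
          wl->frame_wait = false;   // compositor consumed the frame; safe to render again
      }
      static const struct wl_callback_listener frame_listener = {
          .done = frame_callback,
      };

      // Re-armed after each buffer swap:
      struct wl_callback *cb = wl_surface_frame(wl->surface);
      wl_callback_add_listener(cb, &frame_listener, wl);
      wl->frame_wait = true;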
* vo_gpu: EGL: fix transparency on X11/EGL/Mesa (wm4, 2020-08-27, 1 file, -1/+1)
  Transparent windows on X11/EGL/native Mesa GL didn't work for various reasons. From what I remember, the current code did work with nvidia at least. Mesa has made attempts to fix this, but they never really made it in.
  But it turns out you can make EGL/Mesa list the EGLConfigs that use X11 RGBA visuals, and context_x11egl.c contains code that explicitly selects them if alpha is requested (see pick_xrgba_config()).
  The reason EGL/Mesa did not list them (thus breaking transparency) is that we requested an EGL_ALPHA_SIZE != 0 if alpha is requested. But the transparent EGLConfigs use EGL_ALPHA_SIZE == 0. That's because EGL doesn't actually support the concept of transparent windows; the alpha size parameter is something else (memory rendering without FBOs or something, I don't care enough to look up the real reasons).
  This still won't work on Wayland. Every EGL backend needs platform specific code. (Good job, EGL, such an awesome platform independent standard.)
  Fixes: #6590
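  A hedged sketch of selecting a transparent config by its X11 visual instead of by EGL_ALPHA_SIZE (helper variables are illustrative):
      // Pick the EGLConfig whose native X11 visual has 32-bit depth, i.e. RGBA.
      for (int i = 0; i < num_configs; i++) {
          EGLint visual_id = 0;
          if (!eglGetConfigAttrib(display, configs[i],
                                  EGL_NATIVE_VISUAL_ID, &visual_id))
              continue;
          XVisualInfo vinfo = { .visualid = visual_id };
          int n = 0;
          XVisualInfo *vi = XGetVisualInfo(x11_display, VisualIDMask, &vinfo, &n);
          bool rgba = vi && n > 0 && vi->depth == 32;
          if (vi)
              XFree(vi);
          if (rgba) {
              chosen_config = configs[i];
              break;
          }
      }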
* vo_gpu: EGL: slightly better debug logging of EGL configs (wm4, 2020-08-27, 1 file, -1/+2)
* wayland: conditionally commit surface on resize (Dudemanguy, 2020-08-20, 1 file, -0/+2)
  It was possible for sway to get incorrectly sized borders if you resized the mpv window in a creative manner (e.g. open a video in a non-floating mode, set window scale to 2, then float it and witness wrong border sizes). This is possibly a sway bug (Plasma doesn't have these border issues at least), but there's a reasonable workaround for this.
  The reason for the incorrect border size is that it is possible for mpv to ignore the width/height from the toplevel listener and set its own size. This new size can differ from what sway/wlroots believes the size is, which is what causes the server-side decorations to be drawn with incorrect dimensions.
  A simple trick is to just explicitly commit the surface after a resize is performed. This is only done if mpv is not fullscreened or maximized, since we always obey the compositor's widths/heights in those cases. Sending the commit signals the compositor of the new change in the surface, and thus sway/wlroots updates its internal coordinates appropriately and borders are no longer broken.
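  The workaround itself is tiny (option field names assumed):
      // Only nudge the compositor when mpv chose its own size; in fullscreen or
      // maximized state the compositor-provided size is obeyed anyway.
      if (!wl->vo_opts->fullscreen && !wl->vo_opts->window_maximized)
          wl_surface_commit(wl->surface);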
* wayland: don't rely on presentation discarded (Dudemanguy, 2020-08-16, 1 file, -3/+0)
  When using presentation time, we have to be sure to update the ust when no presentation events are received, to make sure playback is still smooth and in sync. Part of the recent presentation time refactor was to use the presentation discarded event to signal that the window is hidden. Evidently, this doesn't work the same everywhere for whatever reason (drivers?? hardware??), and at least one user experienced issues with playback getting out of sync, since (presumably) the discarded event didn't occur when hiding the window. Instead, let's just go back to the old way of checking if the last_ust is equal to the ust value of the last member in the wayland sync queue. Fixes #8010.
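  Illustratively, with an assumed layout of the sync queue:
      // The newest queued ust did not change since the last swap, i.e. no
      // presentation event arrived: extrapolate instead of going stale.
      if (wl->last_ust == wl->sync[wl->sync_size - 1].ust)
          update_ust_from_timer = true;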
* wayland: refactor presentation time (Dudemanguy, 2020-08-16, 1 file, -30/+34)
  The motivation for this change was a segfault caused by e107342, which has complicated reasons for occurring (i.e. I'm not 100% sure, but I think it is a really weird race).
  The major part of this commit is moving the initialization of the presentation listener to the frame_callback function. Calling it in swap_buffers worked fine, but in practice it meant a lot of meaningless function calls if a window was hidden (the presentation would just be immediately discarded). By calling it in frame_callback, we ensure the listener is only created when it is possible to receive a presentation event. Of course, calling the presentation listener in feedback_presented or feedback_discarded was considered, but ultimately these events are too slow. Receiving the ust/msc/sbc triplet here and then passing it to mpv results in higher vsync judder, since there is (likely) not enough time before the next pageflip. By design, the frame callback is meant to give us as much time as possible before the next repaint, so calling it here is probably optimal.
  Additionally, we can make better use of the feedback_discarded event. The wp_presentation_feedback should not be destroyed here. It will be taken care of either when we get feedback again or when the player quits. Instead, what we can do is set a bool that tells wayland_sync_swap to update itself based on the mp_time delta. In practice, the result is not any different than before, but it should be more understandable what is going on now. Of course, the segfault mentioned at the beginning is fixed with this as well.
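  The relevant addition inside the frame callback is roughly this (wp_presentation_feedback() and its listener are the real presentation-time protocol API; the rest is assumed):
      // Request feedback only when a repaint is actually happening, so hidden
      // windows don't pile up immediately-discarded feedback objects.
      struct wp_presentation_feedback *fb =
          wp_presentation_feedback(wl->presentation, wl->surface);
      wp_presentation_feedback_add_listener(fb, &feedback_listener, wl);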
* wayland: fix build (wm4, 2020-06-04, 1 file, -1/+1)
  Broken by previous commit. I've split a commit incorrectly.
  Fixes: #7802
* drm: add typedef for PFNEGLGETPLATFORMDISPLAYEXTPROC (#7314) (Jan Palus, 2020-05-14, 1 file, -0/+5)
  The extension is not mandatory and is not provided on e.g. Raspberry Pi.
* video: fix rgb30 component order (wm4, 2020-05-09, 1 file, -1/+1)
  Was broken with a zimg wrapper refucktor before the previous commit. In addition, it seems this didn't match the vo_drm format, or the format naming convention. So the order actually changes, and the format is redefined. (The img_format.h comment was probably wrong.)
  Change vo_gpu to the new format as well, so we can still test it.
* egl_helpers: add typedef for EGLAttrib (#7314) (Jan Palus, 2020-04-23, 1 file, -0/+1)
  Part of EGL 1.5, which is not present e.g. on Raspberry Pi.
* wayland: use mp_time deltas for presentation time (Dudemanguy, 2020-04-20, 1 file, -2/+1)
  One not-so-nice hack in the wayland code is the assumption of when a window is hidden (out of view from the compositor) and an arbitrary delay for enabling/disabling the usage of presentation time. Since you do not receive any presentation feedback when a window is hidden on wayland (a feature or misfeature depending on who you ask), the ust is updated based on the refresh_nsec statistic gathered from the previous feedback event. The flaw with this is that refresh_nsec basically just reports back the display's refresh rate (1 / refresh_rate * 10^9). It doesn't tell you how long the vsync interval really was. So as a video is left playing out of view, wl->last_queue_display_time becomes increasingly inaccurate. This led to a vsync spike when bringing the mpv window back into sight after it was hidden for a period of time.
  The hack for working around this is to just wait a while before enabling presentation time again. The discrepancy between the "bogus" wl->last_queue_display_time and the actual value you get from the feedback only happens initially after a switch. If you just discard those values, you avoid the dramatic vsync spike.
  It turns out that there's a smarter way to do this. Just use mp_time_us deltas. The whole reason for these hacks is that wl->last_queue_display_time wasn't close enough to how long it would take for a frame to actually display if it wasn't hidden. Instead, mpv's internal timer can be used, and the difference between wayland_sync_swap calls is a close enough proxy for the vsync interval (certainly better than using the monitor's refresh rate). This avoids the entire conundrum of massive vsync spikes when bringing the player back into view, and it means we can get rid of extra crap like wl->hidden.
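  Sketched, with assumed field names (mp_time_us() is mpv's real monotonic clock):
      // The time between wayland_sync_swap calls stands in for the real vsync
      // interval while no presentation feedback arrives.
      int64_t now = mp_time_us();
      int64_t interval = now - wl->last_sync_time;
      wl->last_sync_time = now;
      wl->last_queue_display_time += interval;   // keep the ust estimate moving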
* vo_gpu: opengl: make sure to always clean up debug callbacks (Niklas Haas, 2020-04-15, 1 file, -0/+4)
  In theory this mostly happens automatically, especially after the 5 vsync limit disables this already. But if we uninit before 5 vsyncs are rendered, this can get left in a dangling 'enabled' state, which leaks a debug report callback.
  Always explicitly disable it, just to be on the safe side.
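  The explicit teardown amounts to something like this (the function-pointer wrapper follows mpv's GL struct style, but the exact call site is an assumption):
      // Unregister the callback unconditionally on uninit, so an early teardown
      // (before the 5-vsync auto-disable) cannot leak it.
      if (gl->DebugMessageCallback)
          gl->DebugMessageCallback(NULL, NULL);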
* options: change option macros and all option declarations (wm4, 2020-03-18, 3 files, -36/+35)
  Change all OPT_* macros such that they don't define the entire m_option initializer, and instead expand only to a part of it, which sets certain fields. This requires changing almost every option declaration, because they all use these macros. A declaration now always starts with {"name", ... followed by designated initializers only (possibly wrapped in macros). The OPT_* macros now initialize the .offset and .type fields only, sometimes also .priv and others.
  I think this change makes the option macros less tricky. The old code had to stuff everything into macro arguments (and attempted to allow setting arbitrary fields by letting the user pass designated initializers in the vararg parts). Some of this was made messy due to C99 and C11 not allowing 0-sized varargs with ',' removal. It's also possible that this change is pointless, other than cosmetic preferences.
  Not too happy about some things. For example, the OPT_CHOICE() indentation I applied looks a bit ugly.
  Much of this change was done with regex search&replace, but some places required manual editing. In particular, code in "obscure" areas (which I didn't include in compilation) might be broken now.
  In wayland_common.c, the author of some option declarations confused the flags parameter with the default value (though the default value was also properly set below). I fixed this with this change.
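  The shape of the change, on a hypothetical integer option:
      // Before: the macro expanded to the complete m_option initializer.
      OPT_INT("example", example_field, 0),

      // After: the declaration leads with the name; the macro now only fills
      // in .type/.offset, and everything else is designated initializers.
      {"example", OPT_INT(example_field)},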
* options: remove intpair option type (wm4, 2020-03-13, 1 file, -1/+2)
  This was mostly unused, and has certain problems. Just get rid of it.
  It was still used in CDDA (--cdda-span) and a debug option for OpenGL (--opengl-check-pattern). Replace both of these with 2 options, where each sets the start/end values of the former span. Both were undocumented somehow (normally we require all options to be documented), so I'm not caring about compatibility, and not bothering to add it to the API changelog.
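  Sketched on the CDDA case (the exact new option names are an assumption):
      // One intpair option becomes two plain integer options:
      {"cdda-span-a", OPT_INT(span[0])},   // start of the former span
      {"cdda-span-b", OPT_INT(span[1])},   // end of the former span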
* drm_prime: double free bug (Sven Kroeger, 2020-03-05, 1 file, -3/+9)
  This commit fixes a bug where the handle for a framebuffer gets double-freed. It can happen that the same prime fd gets two framebuffers. As the prime fd is the same, the resulting prime handle is also the same. This means one handle but 2 framebuffers, and it can lead to the following chain:
  1. The first framebuffer gets deleted, and its handle is freed via the ioctl.
  2. In the startup phase, not all 4 dumb buffers for overlay drawing are set up. It can happen that the last dumb buffer gets the handle we freed above.
  3. The second framebuffer gets freed, and the handle is freed again, resulting in the 4th dumb buffer's handle no longer being backed by a buffer.
  4. Drm prime continues to assign handles to its prime fds, which leads to the handle that was just freed being reassigned again, but to a prime buffer.
  5. Now the overlay should be drawn into dumb buffer 4, which still has the same handle but is backed by the wrong buffer.
  This leads to two different behaviors:
  - mpv crashes, as the drm prime buffer's size is calculated from the decoder output format. The overlay output format differs and takes more space, so the size check in the kernel fails.
  - mpv continues to play. This happens when the decoder allocates a bigger buffer than needed for the overlay, for example when the overlay is Full HD and the decoder output is 4k. The overlay is then drawn into the wrong buffer (since it's a drm prime buffer), resulting in a flicker every fourth step.
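  The message doesn't spell out the fix, but given the description, a refcounted close along these lines prevents the double free (entirely illustrative):
      #include <xf86drm.h>

      // Close the GEM handle only when the last framebuffer using it is gone.
      static void release_gem_handle(int fd, uint32_t handle, int *refcount)
      {
          if (--(*refcount) > 0)
              return;
          struct drm_gem_close gem_close = { .handle = handle };
          drmIoctl(fd, DRM_IOCTL_GEM_CLOSE, &gem_close);
      }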
* OpenGL: Also detect softpipe as a software driver (linkmauve, 2020-02-25, 1 file, -0/+1)
  Because it is.
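  The likely shape of the check (the exact substring list is an assumption):
      const char *renderer = (const char *)gl->GetString(GL_RENDERER);
      bool software = renderer && (strstr(renderer, "llvmpipe") ||
                                   strstr(renderer, "softpipe") ||
                                   strstr(renderer, "swrast"));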
* wayland: remove wayland-frame-wait-offset option (dudemanguy, 2020-01-31, 1 file, -1/+1)
  This originally existed as a hack for weston. In certain scenarios, a frame taking too long to render would cause vo_wayland_wait_frame to timeout, which would result in a ton of dropped frames. The naive solution was to just add a slight delay to the time value. If a frame took too long, it would be likely to fall under the timeout value and all was well. This was exposed to the user, since the default delay (1000) was completely arbitrary.
  However, with presentation time this doesn't appear to be necessary. Fresh frames that take longer than the display's refresh rate (16.666 ms in most cases) behave well in Weston. The other two main compositors without presentation time (GNOME and Plasma) also do not experience any ill effects. It's better not to overcomplicate things, so this "feature" can be removed now.
* cocoa-cb: add support for forcing the dedicated GPU for rendering (der richter, 2020-01-26, 1 file, -3/+5)
  this deprecates the old cocoa backend only option and moves it to the general macos ones. add support for the new option in the cocoa-cb layer creation and use the new option in the old cocoa backend.
  Fixes #7272
* vo_gpu: hwdec_vdpau: remove direct_mode (Philip Langdale, 2019-12-28, 1 file, -124/+47)
  As we are less and less interested in vdpau, with nvdec and vaapi being better choices in general on nvidia and AMD respectively, we might consider removing direct_mode, where we bypass the vdpau mixer and work directly with yuv textures. Normally, working with yuv textures would be great, but vdpau built in an assumption that all frames are delivered as separate fields, causing us to have to re-interleave them. nvidia then introduced a new OpenGL extension that can return the yuv frames as frames, but we can't just unconditionally switch to that, as we'd want to keep supporting older hardware where the drivers are no longer getting new features. The end result is that we wouldn't be able to get rid of the old code paths.
  Removing direct_mode means we always use the mixer and work with rgba frame textures. There are some theoretical limitations to this, but in practice they probably don't matter much - unsupported colourspaces don't matter because without 10bit decoding support, we can't use them anyway, and apparently we're not doing separate chroma scaling these days, so scaling the rgba doesn't really lose anything (and the vdpau hq scaling option remains available).
* vo_gpu: opengl: make it work with EGL 1.4 (wm4, 2019-12-16, 4 files, -4/+82)
  This tries to deal with the crazy EGL situation. The summary is:
  - using eglGetDisplay() with multiple windowing platforms doesn't really work, but Mesa had an awful hack for it
  - this hack can be disabled at build time, and some distros sometimes accidentally or intentionally do so
  - Mesa will probably eventually disable it by default
  - we switched to eglGetPlatformDisplay(), but this requires EGL 1.5
  - the very regrettable graphics company (also known as Nvidia) ships drivers (for old hardware I think) that are EGL 1.4 only
  - that means even though we "require" EGL 1.5 and link against it, the runtime EGL may be 1.4
  - trying to run mpv there crashes in the dynamic linker
  - so we have to go through some more awful compatibility hacks
  This commit tries to do it "properly", but using EGL 1.4 as the base. The platform selection mechanism is a messy extension there, which got elevated to core API in 1.5 (but OF COURSE in incompatible ways). I'm not sure whether the EGL 1.5 code path (by parsing the EGL_VERSION) is really needed, but if you ask me, it feels slightly saner not to rely on an EGL 1.4 kludge forever. But maybe this is just an instance of self-harm, since they will most likely never drop or not provide this API.
  Also, unlike before, we actually check the extension string for the individual platform extensions, because who knows, some EGL implementations might curse us if we pass unknown platform parameters. (But actually, the more I think about this, the more bullshit it is.)
  X11 and Wayland were the only ones trying to call eglGetPlatformDisplay, so they're the only ones which are adjusted in this commit.
  Unfortunately, correct function of this commit is unconfirmed. It's possible that it crashes with the old drivers mentioned above.
  Why didn't they solve it like this:
      struct native_display {
          int platform_type;
          void *native_display;
      };
  Could have kept eglGetDisplay() without all the obnoxious extension BS.
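  Roughly, the EGL-1.4-based platform selection described above (real EGL API; error handling elided, variable names assumed):
      // Prefer the platform extension if advertised; fall back to the old
      // eglGetDisplay() guesswork otherwise.
      const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
      EGLDisplay display = EGL_NO_DISPLAY;
      if (exts && strstr(exts, "EGL_EXT_platform_x11")) {
          PFNEGLGETPLATFORMDISPLAYEXTPROC get_display =
              (void *)eglGetProcAddress("eglGetPlatformDisplayEXT");
          if (get_display)
              display = get_display(EGL_PLATFORM_X11_EXT, x11_display, NULL);
      }
      if (display == EGL_NO_DISPLAY)
          display = eglGetDisplay((EGLNativeDisplayType)x11_display);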
* vo_gpu: x11egl: log EGL config ID (wm4, 2019-12-15, 1 file, -2/+6)
  Somewhat useful for debugging.
* vo_gpu: x11egl: cleanup EGL correctly (wm4, 2019-12-12, 1 file, -6/+3)
  ...probably.
  The EGL backend had a strange problem: when recreating the window, EGL surface creation sometimes mysteriously failed. For example, keeping the "_" key down (cycles video by default) destroys and recreates the window in rapid succession, which will often enough show the "Could not create EGL surface!" message.
  This was puzzling because, due to mpv's architecture, the X11 Window and even the X11 Display were fully destroyed, the thread on which they ran was destroyed, and then everything was recreated. There shouldn't have been any state that could make subsequent EGL initialization fail.
  It turns out mpv forgot to free EGLSurfaces in the x11 code. EGL is a pretty crazy API (full of thread local and global state with weird lifetime requirements), and for example it seems EGLDisplay cannot be explicitly released, but apparently implicitly dies when the native display is closed (at least EGL 1.5 claims eglTerminate() does _not_ invalidate the display, only certain objects linked to it). It appears that Mesa still referenced at least EGLSurface in some form, and either some pointer or some X11 ID was dangling, and when it randomly matched when eglCreateWindowSurface() was called, it failed.
  Fix this by calling eglTerminate(), which supposedly destroys (or rather unreferences) contexts and surfaces created from the display (but absurdly not the display itself).
  Now why can't you just destroy the display? If it's implicitly invalidated, why can't it just call eglTerminate() implicitly when this happens? Did Mesa do something wrong when they somehow didn't automatically remove the dangling object (so I could claim not to be responsible for the bug)? Who the fuck knows, and I'm too tired to figure this out (both because it's late, and because I'm tired of this EGL crap API).
  Still not sure if the code is correct now. I think EGL was designed to maximize implementation and API-use complications. How else could you possibly come up with something like the EGLDisplay life cycle? Or am I just making a fuss? Anyway, fuck EGL, fuck computers, fuck technology.
  Fixes: #7129
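  The resulting teardown order, approximately (state field names assumed):
      // Release context and surface, then terminate: eglTerminate() destroys
      // objects created from the display, though absurdly not the display itself.
      eglMakeCurrent(p->egl_display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
      eglDestroyContext(p->egl_display, p->egl_context);
      eglDestroySurface(p->egl_display, p->egl_surface);
      eglTerminate(p->egl_display);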
* rpi: destroy fullscreen change handling (wm4, 2019-12-11, 1 file, -3/+0)
  Get rid of the legacy VOCTRL (which will be removed later). I'm not sure what exactly fullscreen was supposed to do (toggling between using the entire display, and what --geometry forced?), but I don't care, just get rid of the VOCTRL. PRs to fix regressions caused by this will be accepted, but personally I don't care, since this is excessively fringe and obscure.
* drm: avoid division by 0 in drm_pflip_cb with bad drivers (Anton Kindestam, 2019-12-07, 1 file, -0/+1)
  Seems like some drivers only increment msc every other page flip when running in interlaced mode (I'm looking at you, nouveau). I.e. it seems to be incremented at the frame rate, rather than the field rate. Obviously we can't work with this, so shame the driver and bail.
  On intel this isn't an issue, as msc is incremented at field rate there.
  This means presentation feedback won't work correctly in interlaced modes with those drivers, but who in their right mind uses an interlaced mode these days, anyway?
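  A plausible form of the guard (names assumed):
      // A zero msc delta would divide by 0 below; shame the driver and bail.
      if (msc_delta == 0) {
          MP_WARN(vo, "Driver not incrementing msc per field in interlaced mode; "
                      "disabling presentation feedback.\n");
          return;
      }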
* vo_gpu: opengl: add hack for ancient Mesa/GLX (wm4, 2019-11-30, 1 file, -23/+47)
  glx.h recursively includes gl.h, and there is no way to prevent this. Old Mesa defines some GL symbols, but not all which mpv needs. In particular, one user who was too lazy to update his ancient Ubuntu and preferred to bother us with obscure bug reports had Mesa headers which did not define GL 3.2, so GLsync was not defined.
  All in all, I still think the idea of providing the GL API definitions ourselves was a good idea; just GLX should have been isolated better. But isolating GLX now is too much effort. Not sure why I'm bothering with this at all.
  Fixes: #7201 (unconfirmed)
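  The kludge presumably boils down to supplying the missing definitions before glx.h drags in the old gl.h, e.g.:
      // Ancient Mesa headers predate GL 3.2 and lack GLsync; provide the
      // standard typedef ourselves so the GL wrapper still compiles.
      #ifndef GL_VERSION_3_2
      typedef struct __GLsync *GLsync;
      #endif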
* vo_gpu: opengl: do not free "GL" sub-allocations (wm4, 2019-11-29, 1 file, -1/+1)
  This function always expects the GL struct pointer to be a talloc allocation. So far so bad. But the terrible thing is that _lots_ of code in mpv didn't quite get this (including the code which introduced the way it is used this way). For example, in context_glx.c you see this:
      struct priv {
          GL gl;
          ...
  GL is not a talloc allocation, but since it's at the start of a talloc allocation, it works anyway. So far so bad. But the really terrible thing is that mpgl_load_functions2() calls talloc_free_children() on the GL pointer, which means that all of priv's sub-allocations would be freed. This would be unintentional and could create dangling pointers. And this happens at about a dozen of the callers. I'm amazed it didn't break yet anywhere.
  Removing this anti-pattern by making GL "implicitly" a talloc allocation would be too much effort at this point. So just manually free the only allocation that the function attached to GL.
* vo_gpu: context_glx: Add X11 native resource (Philip Langdale, 2019-11-16, 1 file, -0/+2)
  Surprisingly, we've managed to get this far without context_glx ever adding the X11 display as a native resource. But with the recent change to attempt to enable vdpau when using EGL, the hwdec now requires the display to be added. So let's add it.
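  The addition is essentially one line (ra_add_native_resource() is mpv's real helper; the exact call site is assumed):
      // Expose the X11 Display so hwdecs like vdpau can find a native handle.
      ra_add_native_resource(ctx->ra, "x11", vo->x11->display);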
* wayland: use eglGetPlatformDisplay() (Dudemanguy, 2019-11-16, 1 file, -1/+2)
  See aacc194. The same logic all applies to Wayland. In fact, we already require EGL 1.5 for wayland anyway, so it's better to do it right.
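  I.e. the EGL 1.5 core call (real API; the surrounding names are assumed):
      EGLDisplay display = eglGetPlatformDisplay(EGL_PLATFORM_WAYLAND_KHR,
                                                 wl->display, NULL);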
* x11: require EGL 1.5 and use eglGetPlatformDisplay() (wm4, 2019-11-16, 1 file, -6/+2)
  eglGetDisplay() is a broken API, since it takes a windowing specific argument, yet is supposed to work for multiple APIs at the same time. On Linux, it can take both a X11 "Display" and a "wl_display". Obviously there is no way to specify what kind of display the argument is (it's just a void*).
  Mesa has _eglNativePlatformDetectNativeDisplay, which does funny stuff to try to guess the display type, including trying to call mincore() to determine whether the pointer can be accessed at all. I guess this recently accidentally broke (as a bug), but on the other hand, maybe it's time to do this properly.
  The fix is using eglGetPlatformDisplay(). This requires EGL 1.5, plus Mesa needs to support the associated platform extension (EGL_KHR_platform_x11).
  Since I see no reasonable way to do this in a compatible way, just require that EGL 1.5 is available. The problem is t