path: root/video/out/opengl
Commit log (each entry: commit message, author, date, files changed, lines -/+)
* wayland: use callback flag + poll for buffer swap (dudemanguy, 2019-10-10, 1 file, -0/+26)

    The old way of using wayland in mpv relied on an external renderloop for semi-accurate timings. This had multiple issues though. Display sync would break whenever the window was hidden (since the frame callback stopped being executed), which was really annoying. Also the entire external renderloop logic was kind of fragile and didn't play well with mpv's internal structure (i.e. using presentation time in that old paradigm breaks stats.lua).

    Basically the problem is that swap buffers blocks on wayland, which is crap whenever you hide the mpv window since it locks up the entire player. So you have to make swap buffers not block, but this has a different problem: timings will be terrible if you use the unblocked swap buffers call.

    Based on some discussion in #wayland, the trick here is relatively simple and works well enough for our purposes. Instead we basically build a way to block with a timeout in the wayland buffer swap functions. A bool is set in the frame callback function that indicates whether or not mpv is waiting for a frame to be displayed. In the actual buffer swap function, we enter a while loop waiting for this flag to be set. At the same time, the wl_display is polled to block the thread and wake up if it receives any events from the compositor. This loop only breaks if enough time has passed or if the frame callback bool is received.

    In the near future, it is better to set whether or not a frame has been displayed in the presentation feedback. However, as a first pass, doing it in the frame callback is more than good enough. The "downside" is that we render frames that aren't actually shown on screen when the player is hidden (it seems like wayland people don't like that). But who cares. Accurate timings are way more important. It's probably not too hard to add that behavior back in the player though.
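    A minimal sketch of the wait described above, with illustrative names and a hypothetical clock helper (not the actual mpv code): poll the wl_display fd until the frame callback clears the flag or a timeout expires.

        #include <poll.h>
        #include <stdbool.h>
        #include <stdint.h>
        #include <time.h>
        #include <wayland-client.h>

        struct wl_state {             /* illustrative stand-in for mpv's wayland state */
            struct wl_display *display;
            bool frame_wait;          /* set before commit, cleared by the frame callback */
        };

        static int64_t now_ms(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
        }

        /* Block until the frame callback fires or the timeout expires. */
        static void wait_for_frame(struct wl_state *wl, int timeout_ms)
        {
            int64_t deadline = now_ms() + timeout_ms;
            struct pollfd fd = { .fd = wl_display_get_fd(wl->display), .events = POLLIN };

            while (wl->frame_wait && now_ms() < deadline) {
                wl_display_flush(wl->display);              /* push pending requests */
                if (poll(&fd, 1, (int)(deadline - now_ms())) > 0)
                    wl_display_dispatch(wl->display);       /* runs the frame callback */
            }
        }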
* wayland opengl: actually call uninit if init fails (dudemanguy, 2019-10-03, 1 file, -1/+3)

    This is the proper fix to the memory leak @wm4 pointed out. It turns out that when you autoprobe opengl and vo_wayland_init returns false, vo_wayland_uninit is never actually executed. So you have a leftover pointer. The vulkan context does this correctly, which was why my old, dumb "fix" broke it.
* vo: make swapchain-depth option generic for all VOs (Anton Kindestam, 2019-09-28, 4 files, -6/+7)

    In preparation for making vo_drm able to use swapchain-depth.
* drm: refactor page_flipped callback (Anton Kindestam, 2019-09-28, 1 file, -54/+6)

    Avoid duplicating the same callback function in both context_drm_egl and vo_drm.
* drm: move struct vsync_tuple into drm_common as drm_vsync_tuple (Anton Kindestam, 2019-09-28, 1 file, -9/+2)

    This struct will be useful in vo_drm as well.
* context_drm_egl: define EGL_PLATFORM_GBM_MESA, EGL_PLATFORM_GBM_KHR if not in system headers (Anton Kindestam, 2019-09-27, 1 file, -0/+8)

    To account for oddball setups where EGL_PLATFORM_GBM_MESA or EGL_PLATFORM_GBM_KHR might not be defined for whatever reason.
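    A guard of this kind looks roughly like the following; 0x31D7 is the value the Khronos registry assigns to the GBM platform enum in both extensions (the exact form in mpv may differ).

        #include <EGL/egl.h>
        #include <EGL/eglext.h>

        /* Fallback definitions for headers that lack the GBM platform enums. */
        #ifndef EGL_PLATFORM_GBM_MESA
        #define EGL_PLATFORM_GBM_MESA 0x31D7
        #endif
        #ifndef EGL_PLATFORM_GBM_KHR
        #define EGL_PLATFORM_GBM_KHR 0x31D7
        #endif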
* vo_gpu: hwdec_drmprime_drm: add hwdec ctx (Jonas Karlman, 2019-09-27, 1 file, -0/+14)

    This allows using drm hwaccels that require a hwdevice. Tested with the v4l2request hwaccel and the cedrus driver on an Allwinner device running mpv with --vo=gpu --gpu-context=drm --hwdec=drm.
* context_android: move common code to a separate file (sfan5, 2019-09-27, 1 file, -52/+9)

    In preparation for a Vulkan Android context. This also replaces querying for EGL_WIDTH and EGL_HEIGHT with equivalent ANativeWindow calls.
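    The swap the commit describes looks roughly like this (a sketch, not the mpv code): the size query moves from the EGL surface to the native window, which also works when no EGL surface exists (e.g. with Vulkan).

        #include <EGL/egl.h>
        #include <android/native_window.h>

        /* Before: ask the EGL surface for its size. */
        static void get_size_egl(EGLDisplay dpy, EGLSurface surf, int *w, int *h)
        {
            EGLint ew = 0, eh = 0;
            eglQuerySurface(dpy, surf, EGL_WIDTH, &ew);
            eglQuerySurface(dpy, surf, EGL_HEIGHT, &eh);
            *w = ew;
            *h = eh;
        }

        /* After: ask the native window directly; no EGL required. */
        static void get_size_native(ANativeWindow *win, int *w, int *h)
        {
            *w = ANativeWindow_getWidth(win);
            *h = ANativeWindow_getHeight(win);
        }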
* context_drm_egl: Don't get stuck forever if drmHandleEvent fails (Anton Kindestam, 2019-09-22, 1 file, -1/+3)
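    The shape of such a fix, as an illustrative sketch (names are not the real mpv ones): check the return value of drmHandleEvent and abandon the wait instead of spinning forever.

        #include <stdbool.h>
        #include <stdio.h>
        #include <xf86drm.h>

        static void wait_for_flip(int drm_fd, drmEventContext *ev, bool *waiting)
        {
            while (*waiting) {                    /* cleared by the page_flip_handler */
                if (drmHandleEvent(drm_fd, ev) != 0) {
                    fprintf(stderr, "drmHandleEvent failed, aborting wait\n");
                    break;
                }
            }
        }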
* context_drm_egl: Use eglGetPlatformDisplayEXT if available (memeka, 2019-09-20, 1 file, -1/+20)

    Check if eglGetPlatformDisplayEXT is available and try to use it to obtain the display connection. Fall back to eglGetDisplay if eglGetPlatformDisplayEXT is not available or fails. From PR #5992.
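    A sketch of that probing order, under the assumption of a GBM-backed display (the real code handles more cases):

        #include <string.h>
        #include <EGL/egl.h>
        #include <EGL/eglext.h>
        #include <gbm.h>

        static EGLDisplay get_display(struct gbm_device *gbm)
        {
            const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
            if (exts && strstr(exts, "EGL_EXT_platform_base")) {
                PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
                    (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                        eglGetProcAddress("eglGetPlatformDisplayEXT");
                if (get_platform_display) {
                    EGLDisplay dpy = get_platform_display(EGL_PLATFORM_GBM_KHR, gbm, NULL);
                    if (dpy != EGL_NO_DISPLAY)
                        return dpy;
                }
            }
            /* Fallback when the extension is missing or fails. */
            return eglGetDisplay((EGLNativeDisplayType)gbm);
        }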
* rpi: Update for modern systems (Cameron Cawley, 2019-09-20, 1 file, -2/+2)
* oml_sync: fix typo in comment (wm4, 2019-09-20, 1 file, -2/+2)

    I think... Also reword another part of the text.
* vo_gpu: remove vdpau/GLX backend (wm4, 2019-09-19, 1 file, -417/+0)

    Useless garbage. This was once added to test whether vdpau presentation feedback could be used. Results were always unsatisfactory, and now vdpau is dead.
* vo_gpu: remove mali-fbdev (wm4, 2019-09-19, 1 file, -158/+0)

    Useless at this point, I don't even know if it still works, or how to test it.
* drm: fix libmpv ABI breakage introduced in 351c083487050c88adb0e3d60f2174850f869018 (Anton Kindestam, 2019-09-18, 2 files, -7/+7)

    Extending the client-allocated mpv_opengl_drm_params struct constituted a break of ABI that could cause UB. Create a clean break by deprecating "drm_params" and related structs and enum values, and replacing it with "drm_params_v2".

    Also fix some comments and code that wrongly assumed that open could return any other negative number than -1 for failure.

    This commit updates the libmpv version to 1.104.
* vo_gpu: x11: remove special vdpau probing, use EGL by default (wm4, 2019-09-15, 1 file, -25/+0)

    Originally, vo_gpu/vo_opengl considered the case of Nvidia proprietary drivers, which required vdpau/GLX, and Intel open source drivers, which require vaapi/EGL. Since window creation and GPU context creation are inseparable in mpv's internal API, it had to pick the correct API very early, or hardware decoding wouldn't work. "x11probe" was introduced for this reason. It created a GLX context (without showing the window yet), and checked whether vdpau was available. If yes, it used GLX, if not, it continued probing x11/EGL. (Obviously it couldn't always fail on GLX without vdpau, which is why it was a separate "probe" backend.)

    Years passed, and now the situation is different. Vdpau is dead. Nvidia drivers and libavcodec now provide CUDA interop, which requires EGL, and fixes some of the vdpau problems. AMD drivers now provide vaapi, which generally works better than vdpau. Intel didn't change.

    In particular, vaapi provides working HEVC Main10 support. In theory, it should work on vdpau too, with quality reduction (no 10 bit surfaces), but I couldn't get it to work.

    So always prefer EGL. And suddenly hardware decoding works. This is actually rather important, because HEVC is unfortunately on the rise, despite shitty encoders and unoptimized decoders. The latter may mean that hardware decoding works better than libavcodec.

    This should have been done a long, long time ago.
* vo_gpu: x11egl: support Mesa OML sync extension (wm4, 2019-09-08, 1 file, -0/+24)

    Mesa supports the EGL_CHROMIUM_sync_control extension, and it's available out of the box with AMD drivers. In practice, this is exactly the same as GLX_OML_sync_control, but for EGL. The extension specification is separate from the GLX one though, and buried somewhere in the Chromium code.

    This appears to work, although I don't know if it really works.

    In theory, this could be useful for other EGL targets. Support code for it could have been added to egl_helpers.c to avoid some minor duplicated glue code if another EGL target were to provide this extension. I didn't bother with that. ANGLE on Windows can't support it, because the extension spec. explicitly requires POSIX timers. ANGLE on Linux/OSX is actively harmful for mpv and hopefully won't ever use it. Wayland uses EGL, but has its own fancy presentation feedback stuff (and besides, I don't think basic video player functionality works on Wayland at all). context_drm_egl maybe? But I think DRM has its own stuff.
* vo_gpu: glx: move OML sync code to an independent file (wm4, 2019-09-08, 3 files, -96/+145)

    So the next commit can make EGL use it. EGL has a quite similar function, that practically works the same. Although it's relatively trivial, it's still tricky, and probably shouldn't end up as duplicated code.

    There are no functional changes, except initialization, and how failure of the glXGetSyncValues call is handled. Also, some comments mention the EGL extension.

    Note that there's no intention for this code to handle anything else than the very specific OML sync extension (and its EGL equivalent). This is just too weirdly specific to the weird idiosyncrasies of the extension, and it makes no sense to extend it to handle anything else. (Such as Wayland or DXGI presentation feedback.)
* vo_gpu: hwdec_vaegl: Rename and move to hwdec_vaapi (Philip Langdale, 2019-07-08, 1 file, -558/+0)

    In preparation for adding Vulkan interop support, let's rename to remove the egl reference and move to an api neutral location.
* vo/gpu: hwdec_vdpau: Support direct mode for 4:4:4 content (Philip Langdale, 2019-07-08, 1 file, -4/+15)

    New releases of VDPAU support decoding 4:4:4 content, and that comes back as NV24 when using 'direct mode' in OpenGL Interop. That means we need to be a little bit smarter about how we set up the OpenGL textures.
* opengl/context_wayland: Fix crash on configure before initial reconfig (Michael Forney, 2019-07-08, 1 file, -1/+3)

    If the compositor sends a configure event before the surface is initially mapped, resize gets called before the egl_window gets created, resulting in a crash in wl_egl_window_resize.

    This was fixed back in 618361c697, but was reintroduced when the wayland code was rewritten in 68f9ee7e0b.
* video/out/gpu: Add a `storable` flag to ra_format (Philip Langdale, 2019-07-08, 1 file, -0/+3)

    While `ra` supports the concept of a texture as a storage destination, it does not support the concept of a texture format being usable for a storage texture. This can lead to us attempting to create a texture from an incompatible format, with undefined results.

    So, let's introduce an explicit format flag for storage and use it. In `ra_pl` we can simply reflect the `storable` flag. For GL and D3D, we'll need to write some new code to do the compatibility checks. I'm not going to do it here because it's not a regression; we were already implicitly assuming all formats were storable.

    Fixes #6657
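    The idea, reduced to an illustrative sketch (these are not the real ra structs): carry the capability on the format description and check it before a format is used for a storage texture.

        #include <stdbool.h>
        #include <stddef.h>

        struct fmt_desc {
            const char *name;
            bool storable;   /* usable as a shader image-store destination */
        };

        /* Pick the first format that can actually back a storage texture. */
        static const struct fmt_desc *pick_storage_fmt(const struct fmt_desc *fmts, size_t n)
        {
            for (size_t i = 0; i < n; i++) {
                if (fmts[i].storable)
                    return &fmts[i];
            }
            return NULL;     /* caller must handle "no storable format" */
        }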
* drm_common: Add proper help option to drm-mode (Anton Kindestam, 2019-05-04, 1 file, -1/+1)

    This was implemented by using OPT_STRING_VALIDATE for drm-mode, instead of OPT_INT. Using a string here also prepares for future additions to drm-mode that aim to allow specifying a mode by its resolution.
* drm_common: Add option to toggle use of atomic modesetting (Anton Kindestam, 2019-05-04, 1 file, -1/+2)

    It is useful when debugging to be able to force atomic off, or as a workaround if atomic breaks for some user. Legacy modesetting is less likely to break by virtue of being a less complex API.
* vo/gpu: hwdec_cuda: Refactor gpu api specific code into separate files (Philip Langdale, 2019-05-03, 1 file, -749/+0)

    The amount of code now present that's specific to Vulkan or OpenGL has reached the point where we really want to split it out to avoid a mess of #ifdefs. At the same time, I'm moving the code to an api neutral location.
* context_drm_egl: Add support for presentation feedback (Anton Kindestam, 2019-05-03, 1 file, -15/+118)

    This implements presentation feedback for context_drm_egl using the values that get fed to the page flip handler.
* vo_gpu/hwdec_cuda: fixup compilation with vulkan disabled (Jan Ekström, 2019-04-22, 1 file, -0/+2)

    The actual code utilizing this enum was seemingly properly if'd, but not the enum in the struct itself. Fixes compilation.
* vo/gpu: hwdec_cuda: Reorganise backend-specific code (Philip Langdale, 2019-04-21, 1 file, -151/+223)

    This tries to tidy up the GL vs Vulkan code to be a bit cleaner and easier to read.
* vo_gpu: hwdec_cuda: Implement interop for placebo (Philip Langdale, 2019-04-21, 1 file, -145/+224)

    This change updates the vulkan interop code to work with the libplacebo based ra_vk, but also introduces direct VkImage sharing to avoid the use of the intermediate buffer. It is also necessary and desirable to introduce explicit semaphore based synchronisation for operations on the shared images.

    Synchronisation means we can safely reuse the same VkImage for every mapped frame, by ensuring the frame is copied to the VkImage before mapping the next frame.

    This functionality requires a 417.xx or newer nvidia driver, due to bugs in the VkImage interop in the earlier 411 and 415 drivers.

    It's definitely worth the effort, as the raw throughput is about twice that of the implementation using an intermediate buffer.
* vo_gpu: vulkan: use libplacebo instead (Niklas Haas, 2019-04-21, 1 file, -1/+10)

    This commit rips out the entire mpv vulkan implementation in favor of exposing lightweight wrappers on top of libplacebo instead, which provides much of the same except in a more up-to-date and polished form.

    This (finally) unifies the code base between mpv and libplacebo, which is something I've been hoping to do for a long time.

    Note: The ra_pl wrappers are abstract enough from the actual libplacebo device type that we can in theory re-use them for other devices like d3d11 or even opengl in the future, so I moved them to a separate directory for the time being. However, the rest of the code is still vulkan-specific, so I've kept the "vulkan" naming and file paths, rather than introducing a new `--gpu-api` type. (Which would have ended up with significantly more code duplication.)

    Plus, the code and functionality is similar enough that for most users this should just be a straight-up drop-in replacement.

    Note: This commit excludes some changes; specifically, the updates to context_win and hwdec_cuda are deferred to separate commits for authorship reasons.
* vo_gpu: index desc namespaces by ra (Niklas Haas, 2019-04-21, 1 file, -1/+1)

    No reason to require them be constant. This allows them to depend on runtime characteristics of the `ra`.
* Merge branch 'master' into pr6360 (Jan Ekström, 2019-03-11, 2 files, -59/+151)

    Manual changes done:
    * Merged the interface-changes under the already master'd changes.
    * Moved the hwdec-related option changes to video/decode/vd_lavc.c.
* context_drm_egl: implement n-buffering (Anton Kindestam, 2019-02-25, 1 file, -59/+150)

    This allows context_drm_egl to use as many buffers as libgbm or the swapchain_depth setting allows (whichever is smaller).

    On pause and on still images (cover art etc.), to make sure that output does not lag behind user input, the swapchain is drained and reverts to working in a dual buffered (equivalent to swapchain-depth=1) manner.

    When possible (swapchain-depth>=2), the wait on the page flip event is now not done immediately after queueing, but is deferred to the next invocation of swap_buffers. This should give us more CPU time between invocations.

    Although, since gbm_surface_has_free_buffers() can only tell us a boolean value and not how many buffers we have left, we are forced to do this contortionist dance where we first overshoot until gbm_surface_has_free_buffers() reports 0, followed by immediately waiting so we can free a buffer, to be able to get the deferred wait on page flip rolling.

    With this commit we do not rely on the default vsync fences/latency emulation of video/out/opengl/context.c, but supply our own, since the places we create and wait for the fences need to be somewhat different for best performance.

    Minor fixes:
    * According to GBM documentation all BOs gotten with gbm_surface_lock_front_buffer must be released before gbm_surface_destroy is called on the surface.
    * We let the page flip handler function handle the waiting_for_flip flag.
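    A rough sketch of the deferred-wait idea, with illustrative names (the real code also handles atomic commits, fences, and the drain-on-pause path):

        #include <stdbool.h>
        #include <gbm.h>
        #include <xf86drm.h>

        static void swap_step(int drm_fd, struct gbm_surface *surf,
                              drmEventContext *ev, bool *waiting_for_flip)
        {
            /* Deferred wait: drain the flip queued on the previous invocation. */
            while (*waiting_for_flip)
                drmHandleEvent(drm_fd, ev);    /* page_flip_handler clears the flag */

            /* ... queue the next page flip here ... */
            *waiting_for_flip = true;

            /* Overshoot until GBM runs out of free buffers, then wait right away
             * so a buffer can be released before the next frame. */
            if (!gbm_surface_has_free_buffers(surf)) {
                while (*waiting_for_flip)
                    drmHandleEvent(drm_fd, ev);
            }
        }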
* opengl: Support GL_ARB_sync style fences on OpenGL ES 3.0 (Anton Kindestam, 2019-02-25, 1 file, -0/+1)

    OpenGL ES 3.0 and up has support for GL_ARB_sync style fences. Make sure that mpv can use them.
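    For reference, the fence API that is core in ES 3.0, in a minimal usage sketch:

        #include <GLES3/gl3.h>

        static void fence_example(void)
        {
            /* Insert a fence after submitting rendering commands. */
            GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

            /* Later: wait (timeout in nanoseconds) until the GPU has passed it. */
            glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 100000000 /* 100 ms */);
            glDeleteSync(fence);
        }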
* vo, vo_gpu, glx: correct GLX_OML_sync_control usage (wm4, 2018-12-06, 1 file, -67/+95)

    I misunderstood how this extension works. If I understand it correctly now, it's worse than I thought. The key thing is that the (ust, msc, sbc) triple is not for a single swap event. Instead, (ust, msc) run independently from sbc. Assuming a CFR display/compositor, this means you can at best know the vsync phase and frequency, but not the exact time a sbc changed value.

    There is GLX_INTEL_swap_event, which might work as expected, but it has no EGL equivalent (while GLX_OML_sync_control does, in theory).

    Redo the context_glx sync code. Now it's either more correct or less correct. I wanted to add proper skip detection (if a vsync gets skipped due to rendering taking too long and other problems), but it turned out to be too complex, so only some unused fields in vo.h are left of it. The "generic" skip detection has to do. The vsync_duration field is also unused by vo.c.

    Actually this seems to be an improvement. In cases where the flip call timing is off, but the real driver-level timing apparently still works, this will not report vsync skips or higher vsync jitter anymore. I could observe this with screenshots and fullscreen switching. On the other hand, maybe it just introduces an A/V offset or so.

    Why the fuck can't there be a proper API for retrieving these statistics? I'm not even asking for much.
* vo: use a struct for vsync feedback stuff (wm4, 2018-12-06, 3 files, -8/+10)

    So new useless stuff can be easily added.
* vo_gpu: glx: use GLX_OML_sync_control for better vsync reporting (wm4, 2018-12-06, 3 files, -0/+111)

    Use the extension to compute the (hopefully correct) video delay and vsync phase.

    This is very fuzzy, because the latency will suddenly be applied after some frames have already been shown. This means there _will_ be "jumps" in the time accounting, which can lead to strange effects at start of playback (such as making initial "dropped" etc. frames worse). The only reasonable way to fix this would be running a few dummy frame swaps at start of playback until the latency is known. The same happens when unpausing. This only affects display-sync mode.

    Correct function was not confirmed. It only "looks right". I don't have the equipment to make scientifically correct measurements.

    A potentially bad thing is that we trust the timestamps we're receiving. Out of bounds timestamps could wreak havoc. On the other hand, this will probably cause the higher level code to panic and just disable DS.

    As a further caveat, this makes a bunch of assumptions about UST timestamps. If there are delayed frames (i.e. we skipped one or more vsyncs), the latency logic is mostly reset. There is no attempt to make the vo.c skipped vsync logic use this. Also, the latency computation determines a vsync duration, and there's no effort to reconcile or share the vo.c logic for determining vsync duration.
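    The underlying query this relies on looks roughly like the following (a sketch, not the mpv helper): after a swap, (ust, msc, sbc) give a display timestamp in microseconds, a vsync counter, and a swap counter, from which vsync phase and duration can be estimated.

        #include <stdint.h>
        #include <GL/glx.h>
        #include <GL/glxext.h>

        static int query_sync_values(Display *dpy, GLXDrawable draw,
                                     int64_t *ust, int64_t *msc, int64_t *sbc)
        {
            PFNGLXGETSYNCVALUESOMLPROC get_sync_values =
                (PFNGLXGETSYNCVALUESOMLPROC)
                    glXGetProcAddressARB((const GLubyte *)"glXGetSyncValuesOML");
            if (!get_sync_values)
                return -1;                          /* extension not available */
            return get_sync_values(dpy, draw, ust, msc, sbc) ? 0 : -1;
        }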
* drm: rename plane options to better, invariant, names (Anton Kindestam, 2018-12-01, 2 files, -70/+70)

    This commit bumps the libmpv version to 1.102.

    drm-osd-plane -> drm-draw-plane
    drm-video-plane -> drm-drmprime-video-plane
    drm-osd-size -> drm-draw-surface-size

    "draw plane", as in the plane that OpenGL draws to, whether it be video + OSD or just OSD. "drmprime video plane", as in the plane used for hwdec video imported via drmprime. "draw surface size", as in the size of the surface used for the draw plane.

    The new names are invariant whether or not hwdec_drmprime_drm is being used or not. The original naming was very confusing, as when doing regular rendering (swdec or vaapi) the video would be displayed on the "OSD plane", and the "Video plane" would remain unused.
* vo_gpu: hwdec_cuda: Guard GL and Vulkan headers properly (Philip Langdale, 2018-11-18, 1 file, -0/+5)

    We are currently unnecessarily including vulkan headers even when not building with vulkan support. I also guarded the GL header inclusion even though this doesn't appear to break anything today.

    Fixes #6330.
* vo_gpu: opengl: disable compute shaders for old GLSL (Niklas Haas, 2018-11-17, 1 file, -0/+6)

    Fixes #6272.
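    The gate is essentially a GLSL version check; an illustrative form (not the exact mpv condition): compute shaders require GLSL 4.30 on desktop GL and GLSL ES 3.10 on GLES.

        #include <stdbool.h>

        static bool have_compute_shaders(bool gles, int glsl_version)
        {
            return gles ? glsl_version >= 310 : glsl_version >= 430;
        }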
* vo_gpu: hwdec_cuda: Clean up init() error handling (Philip Langdale, 2018-10-31, 1 file, -10/+15)

    Currently, the error paths in init() are a bit confusing, and we can end up trying to pop the current context when there is no context, which leads to distracting error messages.

    I also added an explicit path to return early if the GPU backend is not OpenGL or Vulkan. It's pointless to do any other cuda init after that point. (Of course, someone could write more interops.)

    Fixes #6256.
* hwdec_drmprime_drm: Missing NULL-check on drm_atomic_context video_plane (Anton Kindestam, 2018-10-25, 1 file, -0/+3)

    Since 810acf32d6cf0dfbad66c602689ef1218fc0a6e3 video_plane can be NULL under some circumstances. While there is a check in init, init treats this as an error condition and would call uninit, which in turn calls disable_video_plane, which would then segfault.

    Fix this by including a NULL check inside disable_video_plane, so that it doesn't try to disable what isn't there.
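    The shape of the guard, sketched with illustrative struct contents (not the real mpv definitions):

        struct plane { int id; };
        struct drm_atomic_context { struct plane *video_plane; };   /* illustrative */

        static void disable_video_plane(struct drm_atomic_context *ctx)
        {
            /* The added guard: a missing video plane is simply nothing to disable. */
            if (!ctx->video_plane)
                return;

            /* ... zero out the plane's FB_ID / CRTC_ID properties here ... */
        }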
* vo_gpu: vulkan: hwdec_cuda: Add support for Vulkan interop (Philip Langdale, 2018-10-22, 1 file, -58/+285)

    Despite their place in the tree, hwdecs can be loaded and used just fine by the vulkan GPU backend.

    In this change we add Vulkan interop support to the cuda/nvdec hwdec.

    The overall process is mostly straightforward, so the main observation here is that I had to implement it using an intermediate Vulkan buffer because the direct VkImage usage is blocked by a bug in the nvidia driver. When that gets fixed, I will revisit this. Nevertheless, the intermediate buffer copy is very cheap as it's all device memory from start to finish. Overall CPU utilisation is pretty much the same as with the OpenGL GPU backend.

    Note that we cannot use a single intermediate buffer - rather there is a pool of them. This is done because the cuda memcpys are not explicitly synchronised with the texture uploads. In the basic case, this doesn't matter because the hwdec is not asked to map and copy the next frame until after the previous one is rendered. In the interpolation case, we need extra future frames available immediately, so we'll be asked to map/copy those frames and vulkan will be asked to render them. So far, harmless right? No. All the vulkan rendering, including the upload steps, are batched together and end up running very asynchronously from the CUDA copies.

    The end result is that all the copies happen one after another, and only then do the uploads happen, which means all textures are uploaded the same, final, frame data. Whoops. Unsurprisingly this results in the jerky motion because every 3/4 frames are identical.

    The buffer pool ensures that we do not overwrite a buffer that is still waiting to be uploaded. The ra_buf_pool implementation automatically checks if existing buffers are available for use and only creates a new one if it really has to. It's hard to say for sure what the maximum number of buffers might be, but we believe it won't be so large as to make this strategy unusable. The highest I've seen is 12 when using interpolation with tscale=bicubic.

    A future optimisation here is to synchronise the CUDA copies with respect to the vulkan uploads. This can be done with shared semaphores that would ensure the copy of the second frame only happens after the upload of the first frame, and so on. This isn't trivial to implement as I'd have to first adjust the hwdec code to use asynchronous cuda; without that, there's no way to use the semaphore for synchronisation. This should result in fewer intermediate buffers being required.