path: root/video/out/opengl/context_glx.c
* vo_gpu: context_glx: Add X11 native resource (Philip Langdale, 2019-11-16, 1 file, -0/+2)
  Surprisingly, we've managed to get this far without context_glx ever
  adding the X11 display as a native resource. But with the recent change
  to attempt to enable vdpau when using EGL, the hwdec now requires the
  display to be added. So let's add it.
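  As an illustration, here is a minimal sketch of what registering the
  display can look like in a context's init path. ra_add_native_resource()
  is mpv's helper for exposing backend handles to hwdec interops; the
  function body and surrounding names are simplified assumptions, not the
  literal context_glx.c code.

      /* Sketch: expose the X11 Display to hwdec interop code. Assumes an
       * mpv-style ra_ctx where ctx->vo->x11->display is valid by now. */
      static bool glx_init(struct ra_ctx *ctx)
      {
          /* ... X11 window and GLX context creation, ctx->ra setup ... */

          /* Without this, hwdecs that look up the "x11" native resource
           * (such as vdpau-over-EGL) cannot find the display. */
          ra_add_native_resource(ctx->ra, "x11", ctx->vo->x11->display);
          return true;
      }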
* vo_gpu: x11: remove special vdpau probing, use EGL by default (wm4, 2019-09-15, 1 file, -25/+0)
  Originally, vo_gpu/vo_opengl considered the case of Nvidia proprietary
  drivers, which required vdpau/GLX, and Intel open source drivers, which
  required vaapi/EGL. Since window creation and GPU context creation are
  inseparable in mpv's internal API, it had to pick the correct API very
  early, or hardware decoding wouldn't work.

  "x11probe" was introduced for this reason. It created a GLX context
  (without showing the window yet) and checked whether vdpau was
  available. If yes, it used GLX; if not, it continued probing x11/EGL.
  (Obviously it couldn't always fail on GLX without vdpau, which is why
  it was a separate "probe" backend.)

  Years passed, and now the situation is different. Vdpau is dead. Nvidia
  drivers and libavcodec now provide CUDA interop, which requires EGL and
  fixes some of the vdpau problems. AMD drivers now provide vaapi, which
  generally works better than vdpau. Intel didn't change.

  In particular, vaapi provides working HEVC Main10 support. In theory,
  it should work on vdpau too, with quality reduction (no 10-bit
  surfaces), but I couldn't get it to work.

  So always prefer EGL. And suddenly hardware decoding works. This is
  actually rather important, because HEVC is unfortunately on the rise,
  despite shitty encoders and unoptimized decoders. The latter may mean
  that hardware decoding works better than libavcodec.

  This should have been done a long, long time ago.
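  The removed probe boiled down to a vdpau availability check before
  committing to GLX. A self-contained sketch of that idea (not the
  removed mpv code; vdpau_usable is a hypothetical helper name, and the
  probe device is leaked for brevity):

      #include <stdbool.h>
      #include <X11/Xlib.h>
      #include <vdpau/vdpau_x11.h>

      /* True if a vdpau device can be created on this display, i.e. the
       * old criterion for picking GLX over x11/EGL. */
      static bool vdpau_usable(Display *dpy)
      {
          VdpDevice dev;
          VdpGetProcAddress *get_proc;
          return vdp_device_create_x11(dpy, DefaultScreen(dpy),
                                       &dev, &get_proc) == VDP_STATUS_OK;
      }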
* vo_gpu: glx: move OML sync code to an independent file (wm4, 2019-09-08, 1 file, -96/+5)
  So the next commit can make EGL use it. EGL has a quite similar
  function that works practically the same way. Although it's relatively
  trivial, it's still tricky, and probably shouldn't end up as duplicated
  code.

  There are no functional changes, except initialization, and how failure
  of the glXGetSyncValues call is handled. Also, some comments mention
  the EGL extension.

  Note that there's no intention for this code to handle anything other
  than the very specific OML sync extension (and its EGL equivalent). It
  is just too closely tied to the weird idiosyncrasies of the extension,
  and it makes no sense to extend it to handle anything else (such as
  Wayland or DXGI presentation feedback).
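  A plausible shape for such a shared helper (field and function names
  are illustrative assumptions, not mpv's actual interface): both
  backends query the same kind of (ust, msc, sbc) triple, GLX via
  glXGetSyncValuesOML and EGL via the EGL_CHROMIUM_sync_control
  equivalent, and feed it into one piece of state.

      #include <stdbool.h>
      #include <stdint.h>

      struct oml_sync {
          bool state_ok;         /* whether the last query succeeded */
          int64_t last_ust;      /* timestamp of the last vsync (us) */
          int64_t last_msc;      /* media stream (vsync) counter */
          int64_t last_sbc;      /* swap buffer counter */
          double vsync_duration; /* estimated seconds per vsync */
      };

      /* Called after each swap with freshly queried values; on query
       * failure the caller passes state_ok = false so stale values
       * can't poison the statistics. */
      void oml_sync_swap(struct oml_sync *oml, bool state_ok,
                         int64_t ust, int64_t msc, int64_t sbc);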
* vo, vo_gpu, glx: correct GLX_OML_sync_control usage (wm4, 2018-12-06, 1 file, -67/+95)
  I misunderstood how this extension works. If I understand it correctly
  now, it's worse than I thought. The key thing is that the
  (ust, msc, sbc) triple is not for a single swap event. Instead,
  (ust, msc) run independently from sbc. Assuming a CFR
  display/compositor, this means you can at best know the vsync phase and
  frequency, but not the exact time an sbc changed value.

  There is GLX_INTEL_swap_event, which might work as expected, but it has
  no EGL equivalent (while GLX_OML_sync_control does, in theory).

  Redo the context_glx sync code. Now it's either more correct or less
  correct. I wanted to add proper skip detection (for when a vsync gets
  skipped due to rendering taking too long, among other problems), but it
  turned out to be too complex, so only some unused fields in vo.h are
  left of it. The "generic" skip detection has to do. The vsync_duration
  field is also unused by vo.c.

  Actually this seems to be an improvement. In cases where the flip call
  timing is off, but the real driver-level timing apparently still works,
  this will no longer report vsync skips or higher vsync jitter. I could
  observe this with screenshots and fullscreen switching. On the other
  hand, maybe it just introduces an A/V offset or so.

  Why the fuck can't there be a proper API for retrieving these
  statistics? I'm not even asking for much.
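  For reference, a self-contained sketch of querying the extension; the
  comments restate the corrected understanding. query_sync is a
  hypothetical helper, not mpv code.

      #include <stdint.h>
      #include <GL/glx.h>
      #include <GL/glxext.h>

      static void query_sync(Display *dpy, GLXDrawable win)
      {
          PFNGLXGETSYNCVALUESOMLPROC GetSyncValues =
              (PFNGLXGETSYNCVALUESOMLPROC)glXGetProcAddressARB(
                  (const GLubyte *)"glXGetSyncValuesOML");

          int64_t ust, msc, sbc;
          if (GetSyncValues && GetSyncValues(dpy, win, &ust, &msc, &sbc)) {
              /* ust: timestamp of the most recent vsync (microseconds);
               * msc: counter of that vsync;
               * sbc: number of completed swaps so far. (ust, msc) track
               * the display's vsync clock independently of sbc, so the
               * triple does NOT timestamp any particular swap. */
          }
      }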
* vo: use a struct for vsync feedback stuff (wm4, 2018-12-06, 1 file, -3/+3)
  So new useless stuff can be easily added.
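  Roughly, the assumed shape of the struct (not verbatim vo.h): grouping
  the values means new fields can be added without touching every
  feedback callback signature in the VO backends.

      #include <stdint.h>

      struct vo_vsync_info {
          double vsync_duration;           /* display frame time estimate */
          int64_t skipped_vsyncs;          /* -1 if unknown */
          int64_t last_queue_display_time; /* when the last frame was
                                            * actually shown */
      };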
* vo_gpu: glx: use GLX_OML_sync_control for better vsync reporting (wm4, 2018-12-06, 1 file, -0/+101)
  Use the extension to compute the (hopefully correct) video delay and
  vsync phase.

  This is very fuzzy, because the latency will suddenly be applied after
  some frames have already been shown. This means there _will_ be "jumps"
  in the time accounting, which can lead to strange effects at the start
  of playback (such as making the initial "dropped" etc. frame counts
  worse). The only reasonable way to fix this would be to run a few dummy
  frame swaps at the start of playback until the latency is known. The
  same happens when unpausing.

  This only affects display-sync mode.

  Correct function was not confirmed. It only "looks right". I don't have
  the equipment to make scientifically correct measurements.

  A potentially bad thing is that we trust the timestamps we're
  receiving. Out-of-bounds timestamps could wreak havoc. On the other
  hand, this will probably cause the higher-level code to panic and just
  disable DS.

  As a further caveat, this makes a bunch of assumptions about UST
  timestamps. If there are delayed frames (i.e. we skipped one or more
  vsyncs), the latency logic is mostly reset. There is no attempt to make
  the vo.c skipped-vsync logic use this. Also, the latency computation
  determines a vsync duration, and there's no effort to reconcile or
  share the vo.c logic for determining vsync duration.
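  The arithmetic this relies on is simple once two (ust, msc) samples are
  available; a sketch under the stated assumption that UST timestamps are
  trustworthy (helper names are hypothetical, not mpv's):

      #include <math.h>
      #include <stdint.h>

      /* Estimated microseconds per vsync from two (ust, msc) samples. */
      static double vsync_duration_us(int64_t ust0, int64_t msc0,
                                      int64_t ust1, int64_t msc1)
      {
          if (msc1 <= msc0)
              return 0; /* no progress or counter reset: give up */
          return (double)(ust1 - ust0) / (double)(msc1 - msc0);
      }

      /* Offset of time t into the vsync period containing it; dur must
       * be > 0. This is the "vsync phase" used to place frames. */
      static double vsync_phase_us(int64_t t, int64_t last_ust, double dur)
      {
          double phase = fmod((double)(t - last_ust), dur);
          return phase < 0 ? phase + dur : phase;
      }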
* vo_opengl: refactor into vo_gpu (Niklas Haas, 2017-09-21, 1 file, -0/+376)
  This is done in several steps:

  1. refactor MPGLContext -> struct ra_ctx (a simplified sketch of the
     result follows below)
  2. move GL-specific stuff in vo_opengl into opengl/context.c
  3. generalize context creation to support other APIs, and add --gpu-api
  4. rename all of the --opengl- options that are no longer opengl-specific
  5. move all of the stuff from opengl/* that isn't GL-specific into gpu/
     (note: opengl/gl_utils.h became opengl/utils.h)
  6. rename vo_opengl to vo_gpu
  7. to handle window screenshots, the short-term approach was to just add
     it to ra_swapchain_fns; long term (and for vulkan) this has to be
     moved to ra itself (and vo_gpu altered to compensate), but this was a
     stop-gap measure to prevent this commit from getting too big
  8. move ra->fns->flush to ra_gl_ctx instead
  9. some other minor changes that I've probably already forgotten

  Note: This is one half of a major refactor, the other half of which is
  provided by rossy's following commit. This commit enables support for
  all Linux platforms, while his version enables support for all
  non-Linux platforms.

  Note 2: vo_opengl_cb.c also re-uses ra_gl_ctx, so it benefits from the
  --opengl- options like --opengl-early-flush, --opengl-finish, etc.
  Should be a strict superset of the old functionality.

  Disclaimer: Since I have no way of compiling mpv on all platforms, some
  of these ports were done blindly. Specifically, the blind ports
  included context_mali_fbdev.c and context_rpi.c. Since they're both
  based on egl_helpers, the port should have gone smoothly without any
  major changes required. But if somebody complains about a compile
  error on those platforms (assuming anybody actually uses them), you
  know where to complain.
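  A simplified sketch of the post-refactor shape described in step 1
  (abridged; the exact fields and callbacks in mpv's gpu/context.h
  differ): GL-specific state moves behind an API-neutral ra_ctx, and
  each windowing/API backend supplies a vtable.

      struct ra_ctx {
          struct vo *vo;
          struct ra *ra; /* rendering abstraction: GL now, Vulkan later */
          const struct ra_ctx_fns *fns;
          void *priv;    /* backend state, e.g. GLX/EGL handles */
      };

      struct ra_ctx_fns {
          const char *type; /* e.g. "opengl", selected via --gpu-api */
          const char *name; /* e.g. "glx", "x11egl" */
          bool (*init)(struct ra_ctx *ctx);
          bool (*reconfig)(struct ra_ctx *ctx);
          void (*uninit)(struct ra_ctx *ctx);
      };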