path: root/video/out/opengl/common.c
Commit message (Author, Date, Files, Lines changed)
* vo_opengl: support HDR peak detection (Niklas Haas, 2017-07-24, 1 file, -0/+5)
    This is done via compute shaders. As a consequence, the tone mapping
    algorithms had to be rewritten to compute their known constants in GLSL
    (ahead of time), instead of doing it once. Didn't affect performance.

    Using shmem/SSBO atomics in this way is extremely fast on nvidia, but it
    might be slow on other platforms. Needs testing.

    Unfortunately, setting up the SSBO still requires OpenGL calls, which
    means I can't have it in video_shaders.c, where it belongs. But I'll
    defer worrying about that until the backend refactor, since then I'll be
    breaking up the video/video_shaders structure anyway.
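    The "OpenGL calls" needed for the SSBO are roughly the following; this is
    only a sketch of the general setup (buffer size and binding point are
    arbitrary), not mpv's actual code:

        // Create an SSBO for the peak-detection state and bind it to the
        // binding point the compute shader declares (GL 4.3+).
        GLuint ssbo;
        glGenBuffers(1, &ssbo);
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
        // e.g. a few uint counters/accumulators updated atomically by the shader
        glBufferData(GL_SHADER_STORAGE_BUFFER, 4 * sizeof(GLuint), NULL,
                     GL_DYNAMIC_COPY);
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);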
* vo_opengl: support compute shaders (Niklas Haas, 2017-07-24, 1 file, -1/+18)
    These can either be invoked as dispatch_compute to do a single
    computation, or finish_pass_fbo (after setting compute_size_minimum) to
    render to a new texture using a compute shader.

    To make this stuff all work transparently, we try really, really hard to
    make compute shaders as identical to fragment shaders as possible in
    their behavior.
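    For reference, a compute dispatch in GL boils down to calls like these;
    this is an illustrative fragment (the texture format and 16x16 workgroup
    size are assumptions), not what dispatch_compute/finish_pass_fbo
    literally do:

        glUseProgram(compute_prog);
        // bind the output texture as image unit 0, matching a shader-side
        // "layout(binding = 0) writeonly uniform image2D out_image;"
        glBindImageTexture(0, out_tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA16F);
        // one workgroup per 16x16 tile (the workgroup size is set in the shader)
        glDispatchCompute((width + 15) / 16, (height + 15) / 16, 1);
        // make the image writes visible to later texture fetches
        glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);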
* vo_opengl: add direct rendering support (wm4, 2017-07-24, 1 file, -0/+8)
    Can be enabled via --vd-lavc-dr=yes. See manpage additions for what it
    does.

    This is reminiscent of the MPlayer -dr flag, but the implementation is
    completely different. It's the same basic concept: letting the decoder
    render into a GPU buffer to avoid a copy. Unlike MPlayer, this doesn't
    try to go through filters (libavfilter doesn't support this anyway).
    Unless a filter can work in-place, DR will be silently disabled. MPlayer
    had very complex semantics about buffer types and management (which
    apparently nobody ever understood) and weird restrictions that mostly
    limited it to mpeg2 style codecs. The mpv code does not do any of this,
    and just lets the decoder allocate an arbitrary number of untyped images.
    (No MPlayer code was used.)

    Parts of the code based on work by atomnuker (starting point for the
    generic code) and haasn (some GL definitions, some basic PBO code, and
    correct fencing).
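    One common way to give a decoder a GPU-visible buffer is a persistently
    mapped buffer (GL_ARB_buffer_storage); the sketch below shows only that
    general technique, and is not necessarily how mpv's DR path allocates its
    images:

        // Allocate a buffer the decoder can write into directly.
        GLuint buf;
        GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                           GL_MAP_COHERENT_BIT;
        glGenBuffers(1, &buf);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, buf);
        glBufferStorage(GL_PIXEL_UNPACK_BUFFER, frame_size, NULL, flags);
        void *ptr = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, frame_size, flags);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
        // 'ptr' is handed to the decoder; a GLsync fence guards reuse of the
        // buffer until the GPU has finished reading from it.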
* vo_opengl: use glBufferSubData instead of glMapBufferRange (Niklas Haas, 2017-07-16, 1 file, -0/+1)
    Performance seems pretty much unchanged but I no longer get nasty spikes
    on NUMA systems, probably because glBufferSubData runs in the driver or
    something.

    As a simplification of the code, we also just size the PBO to always have
    the full size, even for cropped textures. This seems slower but not by
    relevant amounts, and only affects e.g. --vf=crop. It also slightly
    increases VRAM usage for textures with big strides.

    This new code path is especially nice because it no longer depends on
    GL_ARB_map_buffer_range, and no longer uses any functions that can
    possibly fail, thus simplifying control flow and seemingly deprecating
    the manpage's claim about possible image corruption.

    In theory we could also reduce NUM_PBO_BUFFERS since it doesn't seem like
    we're streaming uploads anyway, but leave it in there just in case some
    drivers disagree...
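    The two upload styles being compared look roughly like this (illustrative
    fragment; assumes a bound GL_PIXEL_UNPACK_BUFFER and that
    'image_data'/'size' exist):

        // old path: map the buffer and copy into it; mapping can fail and
        // needs GL_ARB_map_buffer_range on older desktop GL
        void *p = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                   GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
        memcpy(p, image_data, size);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

        // new path: let the driver do the copy; this call cannot fail
        glBufferSubData(GL_PIXEL_UNPACK_BUFFER, 0, size, image_data);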
* vo_opengl: use textureGatherOffset for polar filters (Niklas Haas, 2017-07-05, 1 file, -2/+2)
    This is more efficient on my machine (nvidia), but only when applied to
    groups of exactly 4 texels. So we switch to the more efficient
    textureGather for groups of 4. Some notes:

    - textureGatherOffset seems to be faster than textureGather by a
      non-negligible amount, but for some reason, textureOffset is still
      slower than a straight-up texture
    - textureGather* requires GLSL 400; and at least on nvidia, this requires
      actually allocating a GL 4.0 context.
    - the code in opengl/common.c that clamped the GLSL version to 330 is
      deprecated, because the old user shader style has been removed
      completely in the meantime
    - To combat the growing complexity of the polar sampling code, we drop
      the antiringing functionality from EWA shaders completely, since it
      never really worked well for EWA to begin with. (Horrific artifacting)
* vo_opengl: drop TLS usage (wm4, 2017-05-11, 1 file, -0/+10)
    TLS is a headache. We should avoid it if we can.

    The involved mechanism is unfortunately entangled with the unfortunate
    libmpv API for returning pointers to host API objects. This has to be
    kept until we change the API somehow.

    Practically untested out of pure laziness. I'm sure I'll get a bunch of
    reports if it's broken.
* vo_opengl: add option for caching shaders on disk (wm4, 2017-04-08, 1 file, -0/+10)
    Mostly because of ANGLE (sadly).

    The implementation became unpleasantly big, but at least it's relatively
    self-contained.

    I'm not sure to what degree shaders from different drivers are
    compatible, as in whether a driver would randomly misbehave if it's fed a
    binary created by another driver. The useless binaryFormat parameter
    won't help it, as they can probably easily clash. As usual, OpenGL is
    pretty shit here.
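    The GL API underlying such a cache is the program-binary pair of calls
    (GL_ARB_get_program_binary / OES_get_program_binary); a sketch of the
    idea, with the actual cache file handling omitted:

        // Save: query the compiled program's binary blob and its format tag.
        GLint length = 0;
        GLenum binary_format = 0;
        glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);
        void *binary = malloc(length);
        glGetProgramBinary(program, length, NULL, &binary_format, binary);
        // ... write binary_format and the blob to disk, keyed by a hash ...

        // Load (on a later run): feed the blob back instead of recompiling.
        glProgramBinary(program, binary_format, binary, length);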
* vo_opengl: add hw overlay support and use it for RPI (wm4, 2016-09-12, 1 file, -0/+1)
    This overlay support specifically skips the OpenGL rendering chain, and
    uses GL rendering only for OSD/subtitles. This is for devices which don't
    have performant GL support.

    hwdec_rpi.c contains code ported from vo_rpi.c. vo_rpi.c is going to be
    deprecated.

    I left in the code for uploading sw surfaces (as it might be slightly
    more efficient for rendering sw decoded video), although it's dead code
    for now.
* vo_opengl: improve missing function warning (wm4, 2016-06-22, 1 file, -6/+8)
    Mostly a cosmetic change. Handling builtin and extension functions
    separately makes more sense here too.
* vo_opengl: vdpau interop without RGB conversion (wm4, 2016-06-19, 1 file, -0/+1)
    Until now, we've always converted vdpau video surfaces to RGB, and then
    mapped the resulting RGB texture. Change this so that the surface is
    mapped as NV12 plane textures.

    The reason this wasn't done until now is because vdpau surfaces are
    mapped in an "interlaced" way as separate fields, even for progressive
    video. This requires messy reinterleaving. It turns out that even though
    it's an extra processing step, the result can be faster than going
    through the video mixer for RGB conversion.

    Other than some potential speed-gain, doing this has multiple other
    advantages. We can apply our own color conversion, which is important in
    more complex cases. We can correctly apply debanding and potentially
    other processing that requires chroma-specific or in-YUV handling.

    If deinterlacing is enabled, this switches back to the old RGB conversion
    method. Until we have at least a primitive deinterlacer in vo_opengl,
    this will stay this way.

    The d3d11 and vaapi code paths are similar. (Of course these don't
    require any crazy field reinterleaving.)
* vo_opengl: remove uniform buffer object routines (Bin Jin, 2016-06-18, 1 file, -11/+0)
* vo_opengl: use EXT_disjoint_timer_query for timers (James Ross-Gowan, 2016-06-15, 1 file, -0/+16)
    This is the ES equivalent to ARB_timer_query. It enables the performance
    timers on ANGLE. All the added functions should be identical in semantics
    to their desktop GL equivalents.
* vo_opengl: use standard functions to retrieve display depth (wm4, 2016-06-14, 1 file, -0/+1)
    Until now, we've used system-specific API (GLX, EGL, etc.) to retrieve
    the depth of the default framebuffer. (We equate this with the display
    depth and use the determined depth for dithering.)

    We can actually retrieve this value through standard GL API, and it works
    everywhere (except GLES 2 of course). This simplifies everything a great
    deal.

    egl_helpers.c is empty now. But I expect that some EGL boilerplate will
    be moved to it, so don't remove it yet.
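    The standard query that can provide this is the framebuffer attachment
    parameter API (GL 3.0+ / GLES 3.0+); a sketch of reading the default
    framebuffer's color depth, which may differ from mpv's exact code:

        GLint r = 0, g = 0, b = 0;
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        // on GLES the attachment name for the default framebuffer is GL_BACK
        glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, &r);
        glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                GL_FRAMEBUFFER_ATTACHMENT_GREEN_SIZE, &g);
        glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
                GL_FRAMEBUFFER_ATTACHMENT_BLUE_SIZE, &b);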
* vo_opengl: add time queries (Niklas Haas, 2016-06-07, 1 file, -0/+17)
    To avoid blocking the CPU, we use 8 time objects and rotate through them,
    only blocking until the last possible moment (before we need access to
    them on the next iteration through the ring buffer). I tested it out on
    my machine and 4 query objects were enough to guarantee block-free
    querying, but the extra margin shouldn't hurt.

    Frame render times are just output at the end of each frame, via MP_DBG.
    This might be improved in the future. (In particular, I want to expose
    these numbers as properties so that users get some more visible feedback
    about render times)

    Currently, we measure pass_render_frame and pass_draw_to_screen
    separately because the former might be called multiple times due to
    interpolation. Doing it this way gives more faithful numbers. Same goes
    for frame upload times.
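    The rotating-query idea looks roughly like this with ARB_timer_query (on
    ES the same calls carry EXT suffixes); the bookkeeping here is an
    illustration, not mpv's code:

        #define NUM_QUERIES 8
        static GLuint queries[NUM_QUERIES];
        static int query_idx, queries_used;

        void timers_init(void)
        {
            glGenQueries(NUM_QUERIES, queries);
        }

        void timed_pass(void (*render)(void))
        {
            GLuint q = queries[query_idx];
            if (queries_used >= NUM_QUERIES) {
                // this query was issued NUM_QUERIES passes ago, so its result
                // should already be available and fetching it won't stall long
                GLuint64 ns = 0;
                glGetQueryObjectui64v(q, GL_QUERY_RESULT, &ns);
                // ... log/average ns ...
            }
            glBeginQuery(GL_TIME_ELAPSED, q);
            render();
            glEndQuery(GL_TIME_ELAPSED);
            query_idx = (query_idx + 1) % NUM_QUERIES;
            queries_used++;
        }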
* vo_opengl: make ES float texture format checks stricter (wm4, 2016-05-23, 1 file, -6/+1)
    Some of these checks became pointless after dropping ES 2.0 support for
    extended filtering.

    GL_EXT_texture_rg is part of core in ES 3.0, and we already check for
    this version, so testing for the extension is redundant.
    GL_OES_texture_half_float_linear is also always available, at least as
    far as our needs go.

    The functionality we need from GL_EXT_color_buffer_half_float is always
    available in ES 3.2, and we explicitly check for ES 3.2, so reject this
    extension if the ES version is new enough.
* vo_opengl: make PBOs work on GLES 3.x (wm4, 2016-05-23, 1 file, -1/+11)
    For some reason, GLES has no glMapBuffer, only glMapBufferRange. GLES 2
    has no buffer mapping at all, and GL 2.1 does not always have
    glMapBufferRange. On those, PBOs remain unsupported (there's no reason to
    care about GL 2.1 without the extension).

    This doesn't actually work on ANGLE, and I have no idea why. (There are
    artifacts on OSD, as if parts of the OSD data weren't copied.) It works
    on desktop OpenGL and at least 1 other ES 3 implementation. Don't enable
    it on ANGLE, I guess.
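    A PBO upload via glMapBufferRange works the same way on desktop GL and
    GLES 3.x; illustrative fragment only ('pbo', 'data', 'size', 'w', 'h' and
    the RGBA format are assumptions):

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        void *dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                     GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
        memcpy(dst, data, size);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        // with a PBO bound, the data pointer argument is an offset into the buffer
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE,
                        (void *)0);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);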
* vo_opengl: remove unused glDrawBuffer (wm4, 2016-05-23, 1 file, -1/+0)
* vo_opengl: support framebuffer invalidation (wm4, 2016-05-23, 1 file, -0/+8)
    Not sure how much can be gained with this, as we can't use it properly
    yet. For now, this is used only before rendering, which probably does
    overwhelmingly nothing.

    In the future, this should be used after temporary passes, which could
    possibly reduce memory usage and even memory bandwidth usage, depending
    on the drivers.
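    Framebuffer invalidation itself is a single call (GL 4.3 /
    GL_ARB_invalidate_subdata / GLES 3.0); a sketch of telling the driver an
    FBO's color contents are no longer needed ('fbo' is a placeholder):

        const GLenum attachments[] = { GL_COLOR_ATTACHMENT0 };
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glInvalidateFramebuffer(GL_FRAMEBUFFER, 1, attachments);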
* vo_opengl: slightly improve logging of loaded extensions (wm4, 2016-05-23, 1 file, -2/+2)
    Only log when actual extensions are loaded, never log anything about
    builtins.
* vo_opengl: add detection for the ES texture_rg extension (wm4, 2016-05-12, 1 file, -0/+6)
* vo_opengl: reorganize texture format handling (wm4, 2016-05-12, 1 file, -9/+29)
    This merges all knowledge about texture format into a central table.
    Most of the work done here is actually identifying which formats exactly
    are supported by OpenGL(ES) under which circumstances, and keeping this
    information in the format table in a somewhat declarative way. (Although
    only to the extent needed by mpv.) In particular, ES and float formats
    are a horrible mess.

    Again this is a big refactor that might cause regression on "obscure"
    configurations.
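    A declarative format table of this kind typically has one row per format
    with its GL enums and the conditions under which it is usable; the struct
    and field names below are invented for illustration and do not match
    mpv's actual table:

        struct gl_format_entry {
            const char *name;       // e.g. "rgba16f"
            GLint internal_format;  // e.g. GL_RGBA16F
            GLenum format;          // e.g. GL_RGBA
            GLenum type;            // e.g. GL_HALF_FLOAT
            int flags;              // required GL/ES version, extensions, caps
        };

        static const struct gl_format_entry formats[] = {
            {"r8",      GL_R8,      GL_RED,  GL_UNSIGNED_BYTE, 0},
            {"rg8",     GL_RG8,     GL_RG,   GL_UNSIGNED_BYTE, 0},
            {"rgba16f", GL_RGBA16F, GL_RGBA, GL_HALF_FLOAT,    0},
        };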
* vo_opengl: angle: dump translated shaders (wm4, 2016-05-12, 1 file, -0/+7)
    Helpful for debugging and such.
* vo_opengl: support GL_EXT_texture_norm16 on GLES (wm4, 2016-04-27, 1 file, -0/+6)
    This gives us 16 bit fixed-point integer texture formats, including the
    ability to sample from them with linear filtering, and to use them as FBO
    attachments.

    The integer texture format path is still there for the sake of ANGLE,
    which does not support GL_EXT_texture_norm16 yet.

    The change to pass_dither() is needed, because the code path using GL_R16
    for the dither texture relies on glTexImage2D being able to convert from
    GL_FLOAT to GL_R16. GLES does not allow this. This could be trivially
    fixed by doing the conversion ourselves, but I'm too lazy to do this now.
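    The "conversion ourselves" would amount to quantizing the float dither
    matrix to uint16_t on the CPU and uploading with GL_UNSIGNED_SHORT, which
    GLES does accept; a fragment-style sketch ('dither_f' and 'n' are
    assumed):

        uint16_t *tmp = malloc(n * n * sizeof(*tmp));
        for (int i = 0; i < n * n; i++)
            tmp[i] = (uint16_t)(dither_f[i] * 65535.0f + 0.5f);
        // GL_R16 is GL_R16_EXT on GLES with GL_EXT_texture_norm16
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, n, n, 0,
                     GL_RED, GL_UNSIGNED_SHORT, tmp);
        free(tmp);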
* vo_opengl: fix an outdated comment (wm4, 2016-04-16, 1 file, -3/+1)
    This wasn't updated over multiple iterations.
* vo_opengl: log if glGetString(GL_VERSION) returns NULL (wm4, 2016-04-08, 1 file, -1/+3)
    Typically happens with some implementations if no context is current, or
    is otherwise broken. This is particularly relevant to the opengl_cb API,
    because the API user will have no other indication what went wrong.
* Change GPL/LGPL dual-licensed files to LGPL (wm4, 2016-01-19, 1 file, -12/+7)
    Do this to make the license situation less confusing.

    This change should be of no consequence, since LGPL is compatible with
    GPL anyway, and making it LGPL-only does not restrict the use with GPL
    code. Additionally, the wording implies that this is allowed, and that we
    can just remove the GPL part.
* vo_opengl: split backend code from common.c to context.c (wm4, 2015-12-19, 1 file, -195/+1)
    Now common.c only contains the code for the function loader, while
    context.c contains the backend loader/dispatcher. Not calling it
    "backend.c", because the central struct is called MPGLContext.
* vo_opengl: add dxinterop backend (James Ross-Gowan, 2015-12-11, 1 file, -0/+20)
    WGL_NV_DX_interop is widely supported by Nvidia and AMD drivers. It
    allows a texture to be shared between Direct3D and WGL, so that rendering
    can be done with WGL and presentation can be done with Direct3D. This
    should allow us to work around some persistent WGL issues, such as
    dropped frames with some driver/OS combos, drivers that buffer frames to
    increase performance at the cost of latency, and the inability to disable
    exclusive fullscreen mode when using WGL to render to a fullscreen
    window.

    The addition of a DX_interop backend might also enable some cool
    Direct3D-specific enhancements in the future, such as using the
    GetPresentStatistics API to get accurate frame presentation timestamps.

    Note that due to a driver bug, this backend is currently broken on Intel.
    It will appear to work as long as the window is not resized too often,
    but after a few changes of size it will be unable to share the newly
    created renderbuffer with GL. See:
    https://software.intel.com/en-us/forums/graphics-driver-bug-reporting/topic/562051
* vo_opengl: enable NNEDI3 prescaler on OpenGL ES 3.0 (Bin Jin, 2015-12-02, 1 file, -0/+1)
    It turns out that both UBO and intBitsToFloat() are supported in OpenGL
    ES 3.0[1][2], enable them so that NNEDI3 prescaler can be used in a wider
    range of backends.

    Also fixes some implicit int-to-float conversions so that the shader
    actually compiles on GLES.

    Tested on Linux desktop (nvidia 358.16) with "es" sub-option.

    [1]: https://www.khronos.org/opengles/sdk/docs/man3/html/glGetUniformBlockIndex.xhtml
    [2]: https://www.khronos.org/opengles/sdk/docs/manglsl/docbook4/xhtml/intBitsToFloat.xml
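    The C side of a UBO weight upload is the same on GLES 3.0 and desktop GL
    3.1+; a sketch (the block name "NNEDI3_WEIGHTS", binding point 0, and
    'weights'/'weights_size' are invented for illustration):

        GLuint ubo;
        glGenBuffers(1, &ubo);
        glBindBuffer(GL_UNIFORM_BUFFER, ubo);
        glBufferData(GL_UNIFORM_BUFFER, weights_size, weights, GL_STATIC_DRAW);
        // connect the shader's uniform block to binding point 0 and bind the
        // buffer there
        GLuint idx = glGetUniformBlockIndex(program, "NNEDI3_WEIGHTS");
        glUniformBlockBinding(program, idx, 0);
        glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);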
* vo_opengl: use ANGLE by default if available (except for "hq" preset) (wm4, 2015-11-21, 1 file, -5/+5)
    Running mpv with default config will now pick up ANGLE by default. Since
    some think ANGLE is still not good enough for hq features, extend the
    "es" option to reject GLES backends, and add it to the opengl-hq preset.

    One consequence is that mpv will by default use libswscale to convert 10
    bit video to 8 bit, before it reaches the VO.
* vo_opengl: support 3D textures on ANGLE (wm4, 2015-11-19, 1 file, -2/+10)
    Unfortunately, color management can still not work, because no GLES
    version specified so far supports fixed-point 16 bit textures. Maybe we
    could use integer textures, but these don't support filtering. Using
    float textures would be another possibility.
* vo_opengl: better check for float texture support (wm4, 2015-11-19, 1 file, -7/+8)
    We don't only need float textures for advanced scaling - we also need
    them to be filterable with GL_LINEAR. On GLES, this is not supported
    until GLES 3.1, but some implementations expose them with extensions.
* vo_opengl: check shader string before sscanfing it (Kevin Mitchell, 2015-11-19, 1 file, -1/+1)
* vo_opengl: fix ANGLE GLES3 mode (James Ross-Gowan, 2015-11-19, 1 file, -1/+8)
    Turns out glGetTexLevelParameter, which is missing in ANGLE, is a GLES3.1
    function. Removing it from the list of core GLES3 functions makes ANGLE
    work in GLES3 mode.
* vo_opengl: add initial ANGLE support (James Ross-Gowan, 2015-11-18, 1 file, -0/+4)
    ANGLE is a GLES2 implementation for Windows that uses Direct3D 11 for
    rendering, enabling vo_opengl to work on systems with poor OpenGL drivers
    and bypassing some of the problems with native GL, such as VSync in
    fullscreen mode. Unfortunately, using GLES2 means that most of
    vo_opengl's advanced features will not work. However, ANGLE is under
    rapid development and GLES3 support is supposed to be coming soon.
* vo_opengl: attempt to improve GLX vs. EGL backend detection (wm4, 2015-11-16, 1 file, -3/+7)
    For the sake of vaapi interop, we want to use EGL, but on the other hand,
    because driver developers are full of shit, vdpau interop will not work
    on EGL (even if the driver supports EGL). The latter happens with both
    nvidia and AMD Mesa drivers.

    Additionally, EGL vaapi interop support can apparently only be detected
    at runtime by actually using it. While hwdec_vaegl.c already does this,
    it would require initializing libva on _every_ system, which will cause
    libav to print an unpreventable bullshit message to the terminal.

    Try to counter these huge loads of bullshit by adding more fucking
    bullshit.
* vo_opengl: fix backend autoprobing (wm4, 2015-11-16, 1 file, -0/+9)
    We want the following behavior:

    - VO probed, backend probed: only accept non-sw, fail completely
      otherwise
    - VO forced, backend probed: use the first non-sw, or if none is found,
      fall back to the first working sw backend
    - VO probed, backend forced: (I don't care about this case)
    - VO forced, backend forced: just use that backend

    Also, on backend probe failure the vo->probed field was left in its old
    state.
* vo_opengl: use glBlitFramebuffer to draw repeated frames (wm4, 2015-11-15, 1 file, -0/+1)
    In the display-sync, non-interpolation case, and if the display refresh
    rate is higher than the video framerate, we duplicate display frames by
    rendering exactly the same screen again. The redrawing is cached with a
    FBO to speed up the repeat.

    Use glBlitFramebuffer() instead of another shader pass. It should be
    faster.

    For some reason, post-process was run again on each display refresh. Stop
    doing this, which should also be slightly faster. The only disadvantage
    is that temporal dithering will be run only once per video frame, but I
    can live with this.

    One aspect is messy: clearing the background is done at the start on the
    target framebuffer, so to avoid clearing twice and duplicating the code,
    only copy the part of the framebuffer that contains the rendered video.
    (Which also gets slightly messy - needs to compensate for coordinate
    system flipping.)
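    A blit from a cached FBO to the default framebuffer looks roughly like
    this ('cache_fbo', 'w' and 'h' are placeholders); swapping the
    destination Y coordinates is one way to compensate for the flipped
    coordinate system mentioned above:

        glBindFramebuffer(GL_READ_FRAMEBUFFER, cache_fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
        glBlitFramebuffer(0, 0, w, h,        // source rectangle
                          0, h, w, 0,        // destination, vertically flipped
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);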
* vo_opengl: limit GLSL to version 3.3 (wm4, 2015-11-10, 1 file, -0/+2)
    Fixes custom shaders, which define their entrypoint as a sample()
    function.
* vo_opengl: fix extension name (wm4, 2015-11-09, 1 file, -1/+1)
* vo_opengl: simplify GLSL version detection (wm4, 2015-11-09, 1 file, -10/+4)
    Pick the correct GLSL version from the GL_SHADING_LANGUAGE_VERSION
    string. Might be somewhat questionable, as we expect the minor version
    number not to have leading 0s.

    Should help with cases when the reported GLSL version is much higher than
    the equivalent of the reported GL version. This problem was observed in
    combination with GL_ARB_uniform_buffer_object, which can't be used if the
    declared GLSL version is too low.
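    A sketch of parsing GL_SHADING_LANGUAGE_VERSION into the numeric form
    used in "#version" directives (e.g. "4.50 NVIDIA ..." becomes 450); the
    caveat above about leading zeros in the minor number applies:

        const char *s = (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION);
        int major = 0, minor = 0, glsl_version = 0;
        if (s && sscanf(s, "%d.%d", &major, &minor) == 2)
            glsl_version = major * 100 + minor;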
* vo_opengl: add DRM EGL backend (rr-, 2015-11-08, 1 file, -0/+4)
    Notes:

    - Unfortunately the only way to talk to EGL from within DRM I could find
      involves linking with GBM (generic buffer management for Mesa.) Because
      of this, I'm pretty sure it won't work with proprietary NVidia drivers,
      but then again, last time I checked NVidia didn't offer proper screen
      resolution for VT.
    - VT switching doesn't seem to work at all. It's worth mentioning that
      using vo_drm before introduction of VT switcher had an anomaly where
      the user could switch to another VT and input text to it, while video
      played on top of that VT. However, that isn't the case with drm_egl: I
      can't switch to another VT during playback like this. This makes me
      think that it's either a limitation coming from my firmware or from
      EGL/KMS itself rather than a bug with my code. Nonetheless, I still
      left (untestable) VT switching code in place, in case it's useful to
      someone else.
    - The mode_id, connector_id and device_path should be configurable for
      power users and people who wish to watch videos on a nonprimary screen.
      Unfortunately I didn't see anything that would allow OpenGL backends to
      register their own set of options. At the same time, adding them to the
      global namespace is pointless.
    - A few dozen lines could be shared with vo_drm (setting up VT switching,
      most of the code behind page flipping). I don't have any strong opinion
      on this.
    - Sometimes I get minor visual glitches. I'm not sure if there's a race
      condition of some sort, an uninitialized variable (doubtful), or if
      it's a buggy driver. (I'm using integrated Intel HD Graphics 4400 with
      Mesa)
    - .config and .control are very minimal.

    Signed-off-by: wm4 <wm4@nowhere>
* vo_opengl: simplify function loader slightly (wm4, 2015-11-06, 1 file, -6/+0)
    We don't use any functions that have been deprecated in any later GL or
    GLES versions. (This is a leftover of vo_opengl_old support.)
* vo_opengl: glBindBufferBase is not part of GL 2.1/GLES 2.0 (wm4, 2015-11-06, 1 file, -1/+1)
    Commit 27dc834f added it as such.

    Also remove the check for glUniformBlockBinding() - it's part of an
    extension, and the check for glGetUniformBlockIndex() already checks
    whether the extension is fully available.
* vo_opengl: implement NNEDI3 prescaler (Bin Jin, 2015-11-05, 1 file, -2/+13)
    Implement NNEDI3, a neural network based deinterlacer.

    The shader is reimplemented in GLSL and supports both 8x4 and 8x6
    sampling window now. This allows the shader to be licensed under LGPL2.1
    so that it can be used in mpv.

    The current implementation supports uploading the NN weights (up to 51kb
    with placebo setting) in two different ways, via uniform buffer object or
    hard coding into shader source. UBO requires OpenGL 3.1, which only
    guarantees 16kb per block. But I find that 64kb seems to be a default
    setting for recent card/driver (which nnedi3 is targeting), so I think
    we're fine here (with default nnedi3 setting the size of weights is 9kb).
    Hard-coding into shader requires OpenGL 3.3, for the "intBitsToFloat()"
    built-in function. This is necessary to precisely represent these weights
    in GLSL. I tried several human readable floating point number formats
    (with really high precision as for single precision float), but for some
    reason they are not working nicely, bad pixels (with NaN value) could be
    produced with some weights set.

    We could also add support to upload these weights with texture, just for
    compatibility reasons (e.g. upscaling a still image with a low end
    graphics card). But as I tested, it's rather slow even with 1D texture
    (we probably had to use 2D texture due to dimension size limitation).
    Since there is always a better choice to do NNEDI3 upscaling for still
    images (vapoursynth plugin), it's not implemented in this commit. If this
    turns out to be a popular demand from the users, it should be easy to add
    it later.

    For those who want to optimize the performance a bit further, the
    bottleneck seems to be:

    1. overhead to upload and access these weights (in particular, the shader
       code will be regenerated for each frame, it's on CPU though).
    2. "dot()" performance in the main loop.
    3. "exp()" performance in the main loop, there are various fast
       implementations with some bit tricks (probably with the help of the
       intBitsToFloat function).

    The code is tested with nvidia card and driver (355.11), on Linux.

    Closes #2230
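    The bit-exact hard-coding trick is to emit each float weight as an
    intBitsToFloat() call in the generated GLSL, so no decimal round-trip can
    corrupt the value; a small self-contained sketch of that idea (function
    and parameter names are invented):

        #include <inttypes.h>
        #include <stdio.h>
        #include <string.h>

        // Append one weight to the generated shader source as a bit-exact
        // GLSL literal, e.g. "intBitsToFloat(1065353216)" for 1.0f.
        static void append_weight(FILE *shader, float w)
        {
            uint32_t bits;
            memcpy(&bits, &w, sizeof(bits)); // reinterpret the float's bits
            fprintf(shader, "intBitsToFloat(%" PRId32 "), ", (int32_t)bits);
        }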
* vo_opengl: add vsync-fences option (wm4, 2015-10-30, 1 file, -0/+10)