From b196cadf9f9f6ea210db9236c2b26523a9a2719f Mon Sep 17 00:00:00 2001
From: Niklas Haas
Date: Mon, 17 Jul 2017 21:39:06 +0200
Subject: vo_opengl: support HDR peak detection

This is done via compute shaders. As a consequence, the tone mapping
algorithms had to be rewritten to compute their known constants in GLSL
(ahead of time), instead of doing it once. Didn't affect performance.

Using shmem/SSBO atomics in this way is extremely fast on nvidia, but it
might be slow on other platforms. Needs testing.

Unfortunately, setting up the SSBO still requires OpenGL calls, which
means I can't have it in video_shaders.c, where it belongs. But I'll
defer worrying about that until the backend refactor, since then I'll be
breaking up the video/video_shaders structure anyway.
---
 video/out/opengl/gl_headers.h | 5 +++++
 1 file changed, 5 insertions(+)

(limited to 'video/out/opengl/gl_headers.h')

diff --git a/video/out/opengl/gl_headers.h b/video/out/opengl/gl_headers.h
index 8f201bb64c..a55749cbb7 100644
--- a/video/out/opengl/gl_headers.h
+++ b/video/out/opengl/gl_headers.h
@@ -83,6 +83,11 @@

 #define GL_COMPUTE_SHADER 0x91B9

+// -- GL 4.3 or GL_ARB_shader_storage_buffer_object
+
+#define GL_SHADER_STORAGE_BUFFER 0x90D2
+#define GL_SHADER_STORAGE_BARRIER_BIT 0x00002000
+
 // --- GL_NV_vdpau_interop

 #define GLvdpauSurfaceNV GLintptr
--
cgit v1.2.3
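For context, here is a minimal sketch of the kind of SSBO setup the commit
message refers to, using the two constants added above. This is not the
commit's actual code: it calls the raw desktop-GL entry points instead of
mpv's GL function-pointer wrapper, uses libepoxy only so the example is
self-contained, and the buffer layout (a running frame maximum plus a
counter) and binding index 0 are assumptions for illustration.

```c
// Hypothetical sketch of SSBO setup for HDR peak detection.
// Assumes a GL 4.3 context (or GL_ARB_shader_storage_buffer_object);
// <epoxy/gl.h> is used here purely to get declarations for the GL calls.
#include <epoxy/gl.h>

// Assumed GLSL-side counterpart (std430, tightly packed uints):
//   layout(std430, binding = 0) buffer PeakBuf { uint frame_max; uint counter; };
struct peak_ssbo {
    GLuint frame_max;   // running maximum, updated with atomicMax() in GLSL
    GLuint counter;     // e.g. number of work groups that have finished
};

static GLuint create_peak_ssbo(void)
{
    GLuint ssbo = 0;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);

    // Zero-initialize; GL_DYNAMIC_COPY because the GPU both writes the
    // buffer (compute pass) and reads it back (tone-mapping pass).
    struct peak_ssbo zero = {0};
    glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(zero), &zero, GL_DYNAMIC_COPY);

    // Expose the buffer at binding point 0, matching the GLSL declaration.
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
    return ssbo;
}

static void run_peak_detection(GLuint compute_prog, GLuint groups_x, GLuint groups_y)
{
    glUseProgram(compute_prog);
    glDispatchCompute(groups_x, groups_y, 1);

    // Make the atomics written by the compute shader visible to later
    // shader reads of the SSBO before the tone-mapping pass runs.
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
}
```

As the message notes, this setup cannot live in video_shaders.c because it
needs GL calls rather than pure shader generation, which is why the commit
defers the cleanup to the later backend refactor.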