path: root/video/out/opengl/gl_headers.h
author    wm4 <wm4@nowhere>  2017-08-11 20:55:14 +0200
committer wm4 <wm4@nowhere>  2017-08-11 21:29:35 +0200
commit    1d0bf4073b8c0ba3ae67c8d387e92a98072ddc99 (patch)
tree      4ce2839aeaf612d31cd1cf216314e655855391f7 /video/out/opengl/gl_headers.h
parent    e7a9bd693741272ba55d0a6b34d0c32cf1bb64e6 (diff)
download  mpv-1d0bf4073b8c0ba3ae67c8d387e92a98072ddc99.tar.bz2
          mpv-1d0bf4073b8c0ba3ae67c8d387e92a98072ddc99.tar.xz
vo_opengl: handle probing GL texture formats better
Retrieve the depth for each component and internal texture format separately. Only for textures with 8 bits per component do we assume that all bits are used (otherwise we would, in my opinion, create too many probe textures).

Assuming 8 bit components are always fully used also fixes operation in GLES3, where we previously assumed that each component had -1 bits of depth, and thus all UNORM formats were considered unusable.

On GLES, the function to check the real bit depth is not available. Since GLES has no 16 bit UNORM textures at all, except with the MPGL_CAP_EXT16 extension, just drop the special condition for it. (Of course GLES still manages to introduce a funny special case by allowing GL_LUMINANCE, but not defining GL_TEXTURE_LUMINANCE_SIZE.)

Should fix #4749.
Diffstat (limited to 'video/out/opengl/gl_headers.h')
-rw-r--r--  video/out/opengl/gl_headers.h | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/video/out/opengl/gl_headers.h b/video/out/opengl/gl_headers.h
index 9f479dd42f..609cf53ff2 100644
--- a/video/out/opengl/gl_headers.h
+++ b/video/out/opengl/gl_headers.h
@@ -37,6 +37,9 @@
#define GL_RGBA12 0x805A
#define GL_RGBA16 0x805B
#define GL_TEXTURE_RED_SIZE 0x805C
+#define GL_TEXTURE_GREEN_SIZE 0x805D
+#define GL_TEXTURE_BLUE_SIZE 0x805E
+#define GL_TEXTURE_ALPHA_SIZE 0x805F
// --- GL 1.1 (removed from 3.0 core and not in GLES 2/3)