author    wm4 <wm4@nowhere>  2012-12-27 18:07:37 +0100
committer wm4 <wm4@nowhere>  2012-12-28 14:23:29 +0100
commit    d78bde15ab4be5f46a6fb5fc5a35d6acbc6c39cf (patch)
tree      52c0cec2b97bb84a07774edadc9b5bb33b46be23 /video/out/gl_common.h
parent    1e56e68701363f38ae008d2b243dc2476a2f4943 (diff)
vo_opengl_old: reject 9-15 bit formats if textures have less than 16 bit
For 9-15 bit material, cutting off the lower bits leads to significant quality reduction, because these formats leave the most significant bits unused (e.g. 10 bit padded to 16 bit, transferred as 8 bit -> only 2 bits left). 16 bit formats can still be played like this, as cutting the lower bits merely reduces quality in that case.

This problem was encountered with the following GPU/driver combination:

    OpenGL vendor string: Intel Open Source Technology Center
    OpenGL renderer string: Mesa DRI Intel(R) 915GM x86/MMX/SSE2
    OpenGL version string: 1.4 Mesa 9.0.1

It appears 16 bit support is rather common on GPUs, so testing the actual texture depth wasn't needed until now. (There are some other Mesa GPU/driver combinations which support 16 bit only when using RG textures instead of LUMINANCE_ALPHA; this is due to OpenGL driver bugs.)
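The kind of check this enables can be illustrated with a short sketch (not the actual mpv code; the helper name, texture size, and the use of a LUMINANCE16 proxy texture are assumptions for illustration): create a proxy texture with the desired 16 bit internal format and ask the driver, via glGetTexLevelParameteriv, how many bits it would actually store.

/*
 * Sketch only: probe whether the driver really stores 16 bits per
 * component for a 16 bit luminance texture. Requires an active legacy
 * OpenGL context. A proxy texture lets us query the would-be storage
 * without allocating a real texture.
 */
#include <GL/gl.h>
#include <stdbool.h>

static bool texture_has_16bit_depth(void)
{
    GLint size = 0;
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_LUMINANCE16, 64, 64, 0,
                 GL_LUMINANCE, GL_UNSIGNED_SHORT, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                             GL_TEXTURE_LUMINANCE_SIZE, &size);
    /* If fewer than 16 bits are stored, 9-15 bit formats would lose
     * their significant bits and should be rejected. */
    return size >= 16;
}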
Diffstat (limited to 'video/out/gl_common.h')
-rw-r--r--  video/out/gl_common.h  |  2
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/video/out/gl_common.h b/video/out/gl_common.h
index de893966df..4afc192343 100644
--- a/video/out/gl_common.h
+++ b/video/out/gl_common.h
@@ -318,6 +318,8 @@ struct GL {
void (GLAPIENTRY *EnableClientState)(GLenum);
void (GLAPIENTRY *DisableClientState)(GLenum);
GLenum (GLAPIENTRY *GetError)(void);
+ void (GLAPIENTRY *GetTexLevelParameteriv)(GLenum, GLint, GLenum, GLint *);
+
void (GLAPIENTRY *GenBuffers)(GLsizei, GLuint *);
void (GLAPIENTRY *DeleteBuffers)(GLsizei, const GLuint *);
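All GL entry points in mpv are dispatched through these struct GL function pointers rather than called directly. A minimal sketch of how the newly added pointer might be used from the VO code, assuming a struct GL *gl and a currently bound 2D texture (the variable names and the surrounding check are illustrative, not the exact code from this change):

GLint bits = 0;
gl->GetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_LUMINANCE_SIZE, &bits);
if (bits < 16) {
    /* Driver stores fewer than 16 bits per component: 9-15 bit
     * formats would lose their significant bits, so reject them. */
}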