| author | wm4 <wm4@nowhere> | 2012-12-27 18:07:37 +0100 |
|---|---|---|
| committer | wm4 <wm4@nowhere> | 2012-12-28 14:23:29 +0100 |
| commit | d78bde15ab4be5f46a6fb5fc5a35d6acbc6c39cf (patch) | |
| tree | 52c0cec2b97bb84a07774edadc9b5bb33b46be23 /video/filter/vf_scale.c | |
| parent | 1e56e68701363f38ae008d2b243dc2476a2f4943 (diff) | |
| download | mpv-d78bde15ab4be5f46a6fb5fc5a35d6acbc6c39cf.tar.bz2 mpv-d78bde15ab4be5f46a6fb5fc5a35d6acbc6c39cf.tar.xz | |
vo_opengl_old: reject 9-15 bit formats if textures have less than 16 bit
For 9-15 bit material, cutting off the lower bits leads to a significant
quality reduction, because these formats leave the most significant bits
unused (e.g. 10 bit padded to 16 bit, transferred as 8 bit -> only
2 bits left). 16 bit formats can still be played like this, as cutting
the lower bits merely reduces quality in that case.
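As a hypothetical illustration of the arithmetic above (not code from this commit): a 10 bit sample sits in the low bits of a 16 bit word, and uploading it into an 8 bit texture component effectively keeps only the most significant 8 bits of that word, i.e. the top 2 bits of the sample.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t sample10 = 0x03FF;        /* maximum 10 bit value, LSB-aligned */
    uint8_t  texel8   = sample10 >> 8; /* what an 8 bit texture would keep  */
    /* prints: 10 bit 0x3FF -> 8 bit 0x03 (only 2 significant bits survive) */
    printf("10 bit 0x%03X -> 8 bit 0x%02X\n", sample10, texel8);
    return 0;
}
```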
This problem was encountered with the following GPU/driver combination:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) 915GM x86/MMX/SSE2
OpenGL version string: 1.4 Mesa 9.0.1
It appears 16 bit support is rather common on GPUs, so testing the
actual texture depth wasn't needed until now. (There are some other Mesa
GPU/driver combinations which support 16 bit only when using RG textures
instead of LUMINANCE_ALPHA. This is due to OpenGL driver bugs.)
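A minimal sketch of such a texture depth check, assuming a legacy GL context; the helper name, the use of GL_PROXY_TEXTURE_2D, and the GL_LUMINANCE16 probe format are illustrative assumptions, not the code added by this commit:

```c
#include <GL/gl.h>
#include <stdbool.h>

/* Return false if 9-15 bit video should be rejected because the driver
 * stores fewer than 16 bits per luminance component. */
static bool texture_depth_ok(int video_component_bits)
{
    GLint tex_bits = 0;

    /* Ask the driver how many bits a GL_LUMINANCE16 texture would actually
     * get, without allocating a real texture. */
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_LUMINANCE16, 64, 64, 0,
                 GL_LUMINANCE, GL_UNSIGNED_SHORT, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                             GL_TEXTURE_LUMINANCE_SIZE, &tex_bits);

    if (video_component_bits > 8 && video_component_bits < 16)
        return tex_bits >= 16;  /* padding bits must survive the upload */
    return true;  /* 8 bit always works; 16 bit merely loses quality */
}
```

On drivers that only expose 16 bit depth through RG textures, the analogous probe would query GL_TEXTURE_RED_SIZE on a GL_RG16 proxy texture instead.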