author     wm4 <wm4@nowhere>  2015-01-29 19:53:49 +0100
committer  wm4 <wm4@nowhere>  2015-01-29 19:53:49 +0100
commit     c80a1b7aa9936802e64189f9474847fd3fe57e8d (patch)
tree       ba9baf58a55ae281aa5ec7889113991144adcfb8 /video/out/gl_hwdec_vda.c
parent     e0e06f0f0c9d6f00cb02b2c98b7286e231d65794 (diff)
vo_opengl: let hwdec driver report the exact image format
Hardware decoding and display with vo_opengl works by replacing the
normal video textures with textures provided by the hardware decoding
API's OpenGL interop code. This often changes the format (vaglx and
vdpau return RGBA, vda returns packed YUV).
When the format changed, there was a chance (or at least a higher
potential for bugs) that the shader generation code would be confused by
the mismatch of formats and would create incorrect conversions.
Simplify this by requiring the hwdec interop driver to set the format it
will return to us. This affects all fields, not just the image format
itself (previously, only the format was replaced with the value of the
converted_imgfmt field in init_format); in particular, it covers fields
like colorlevels.
Currently, no hwdec interop driver does anything sophisticated, and the
win comes mostly from the mp_image_params_guess_csp() function, which
resets fields like colorlevels to the expected values if RGBA is used.
Diffstat (limited to 'video/out/gl_hwdec_vda.c')
-rw-r--r--  video/out/gl_hwdec_vda.c  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/video/out/gl_hwdec_vda.c b/video/out/gl_hwdec_vda.c
index d90b3419ae..bc18983d3d 100644
--- a/video/out/gl_hwdec_vda.c
+++ b/video/out/gl_hwdec_vda.c
@@ -97,8 +97,9 @@ static int create(struct gl_hwdec *hw)
     return 0;
 }
 
-static int reinit(struct gl_hwdec *hw, const struct mp_image_params *params)
+static int reinit(struct gl_hwdec *hw, struct mp_image_params *params)
 {
+    params->imgfmt = hw->driver->imgfmt;
     return 0;
 }