path: root/video/decode
author    Niklas Haas <git@nand.wakku.to>    2016-06-29 09:28:17 +0200
committer wm4 <wm4@nowhere>                  2016-07-03 19:42:52 +0200
commit    923e3c7b20f0a238062b0ac538a751c6c363a8cb (patch)
tree      d86988c4fe3a603df877e7cf91216ccc20b09a27 /video/decode
parent    d81fb97f4587f73f62a760b99f686139f9b8d966 (diff)
vo_opengl: generalize HDR tone mapping mechanism
This involves multiple changes:

1. Brightness metadata is split into nominal peak and signal peak. For a
   quick and dirty explanation: nominal peak is the brightest value that
   your color space can represent (i.e. the brightness of an encoded 1.0),
   and signal peak is the brightest value that actually occurs in the video
   (i.e. the brightest thing that's displayed).

2. vo_opengl uses new decision logic to figure out the right nom_peak and
   sig_peak for all situations. It also does a better job of picking the
   right target gamut/colorspace to use for the OSD (which still is, and
   still should be, treated as sRGB). This change in logic also fixes #3293
   en passant.

3. Since it was growing rapidly, the logic for auto-guessing / inferring the
   right colorimetry configuration (in pass_colormanage) was split from the
   logic for actually performing the adaptation (now pass_color_map).

Right now, the new logic doesn't do a whole lot, since HDR metadata is still
ignored (but not for long).
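To make the nominal-peak / signal-peak distinction concrete, here is a minimal,
hypothetical C sketch of the idea from point 1. It is not mpv's actual shader
or pass_color_map code; the function name tone_map_clip and the clip-style
mapping are illustrative assumptions only.

    #include <stdio.h>

    // Illustrative only: nom_peak is the brightest value the color space can
    // represent (the brightness of an encoded 1.0, relative to reference
    // white); sig_peak is the brightest value that actually occurs in the
    // video. Tone mapping only has to compress the range up to sig_peak,
    // even if the format could nominally encode brighter values.
    static float tone_map_clip(float v, float nom_peak, float sig_peak)
    {
        v *= nom_peak;            // encoded value -> absolute brightness
        if (v > sig_peak)
            v = sig_peak;         // naive clip-style tone mapping at the signal peak
        return v / sig_peak;      // renormalize so sig_peak hits display 1.0
    }

    int main(void)
    {
        // e.g. content whose format nominally reaches 12x reference white,
        // but whose brightest pixel only reaches 6x reference white
        float nom_peak = 12.0f, sig_peak = 6.0f;
        printf("%.2f\n", tone_map_clip(0.50f, nom_peak, sig_peak)); // 1.00
        printf("%.2f\n", tone_map_clip(0.25f, nom_peak, sig_peak)); // 0.50
        return 0;
    }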
Diffstat (limited to 'video/decode')
-rw-r--r--  video/decode/lavc.h | 3
1 file changed, 3 insertions, 0 deletions
diff --git a/video/decode/lavc.h b/video/decode/lavc.h
index 689222d872..e76dff50bc 100644
--- a/video/decode/lavc.h
+++ b/video/decode/lavc.h
@@ -25,6 +25,9 @@ typedef struct lavc_ctx {
     bool hwdec_failed;
     bool hwdec_notified;

+    // For HDR side-data caching
+    int cached_hdr_peak;
+
     struct mp_image **delay_queue;
     int num_delay_queue;
     int max_delay_queue;
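For context on the new cached_hdr_peak field, below is a hedged sketch of how
such a per-frame cache could be filled from libavutil's mastering display
metadata side data, which carries the content's peak luminance. This is not
the actual mpv decoder code; the helper name and the rounding are assumptions.

    #include <libavutil/frame.h>
    #include <libavutil/mastering_display_metadata.h>
    #include <libavutil/rational.h>

    // Hypothetical helper: return the mastering-display peak luminance
    // (in cd/m^2) found in a decoded frame's side data, or fall back to the
    // previously cached value when the frame carries no such metadata.
    static int hdr_peak_from_frame(AVFrame *frame, int cached_hdr_peak)
    {
        AVFrameSideData *sd = av_frame_get_side_data(frame,
                                  AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
        if (sd) {
            const AVMasteringDisplayMetadata *md =
                (const AVMasteringDisplayMetadata *)sd->data;
            if (md->has_luminance)
                return (int)(av_q2d(md->max_luminance) + 0.5); // round to whole nits
        }
        return cached_hdr_peak; // keep the last known peak
    }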