author    Niklas Haas <git@haasn.xyz>    2018-02-10 22:42:11 +0100
committer Kevin Mitchell <kevmitch@gmail.com>    2018-02-11 16:45:20 -0800
commit    4c2edecd7dc83caaaa37c797d66d9077e105eaee (patch)
tree      04cbd92fce8cf35c91245eec0f2dbc146909fa77 /video/out/gpu/video.c
parent    20df21746add582e703b71b4ec63ce64c0e055d4 (diff)
vo_gpu: refactor HDR peak detection algorithm
The major changes are as follows:

1. Use `uint32_t` instead of `unsigned int` for the SSBO size calculation. This doesn't really matter, since a too-big buffer will still work just fine, but since `uint` is a 32-bit integer by definition, this is the correct way to do it.

2. Pre-divide the frame_sum by the num_wg immediately at the end of a frame. This change was made to prevent overflow: at 4K screen size, this code is already at serious risk of overflow, especially once I started playing with longer averaging sizes. Pre-dividing makes it just about fit into 32 bits even for worst-case PQ content. (It's technically also faster and easier this way, so I should have done it to begin with.) Rename `frame_sum` to `frame_avg` to clearly signal the change in semantics.

3. Implement a scene transition detection algorithm (sketched after this list). This compares the current frame's average brightness against the (averaged) value of the past frames. If it exceeds a threshold, which I configured experimentally, we reset the peak detection SSBO's state immediately, so that it only contains the current frame. This prevents annoying "eye adaptation"-like effects on scene transitions.

4. As a result of the previous change, we can now use a much larger buffer size by default, which results in a more stable and less flickery result. I experimented with values between 20 and 256 and settled on the new value of 64. (I also switched to a power-of-two array size, because I like powers of two.)
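To make points 2 and 3 concrete, here is a minimal C sketch of the per-frame update, assuming PEAK_DETECT_FRAMES is 63 so that the "+1" arrays hold 64 entries as described above. The real logic runs inside the generated compute shader, not in video.c; the function name update_peak(), the num_wg parameter, and the ~20% scene-change threshold are hypothetical and used only for illustration.

#include <stdint.h>
#include <string.h>

#define PEAK_DETECT_FRAMES 63   /* 64-entry history (power of two), per point 4 */

/* Mirrors the SSBO layout created in the diff below; field names follow the
 * new frame_avg semantics. */
struct peak_state {
    uint32_t counter;                            /* work-group sync counter (unused here) */
    uint32_t frame_idx;                          /* ring buffer write position */
    uint32_t frame_num;                          /* number of valid history entries */
    uint32_t frame_max[PEAK_DETECT_FRAMES + 1];  /* per-frame peak brightness */
    uint32_t frame_avg[PEAK_DETECT_FRAMES + 1];  /* per-frame average, pre-divided */
    uint32_t total_max;
    uint32_t total_avg;
};

/* Called once at the end of each frame with the brightness sum and peak
 * accumulated over all work groups. */
static void update_peak(struct peak_state *s, uint32_t frame_sum,
                        uint32_t frame_peak, uint32_t num_wg)
{
    /* Point 2: pre-divide immediately so the running totals stay within 32 bits. */
    uint32_t cur_avg = frame_sum / num_wg;

    /* Point 3: if the new frame deviates strongly from the running average,
     * treat it as a scene change and drop the accumulated history. */
    if (s->frame_num) {
        uint32_t ref = s->total_avg / s->frame_num;
        uint32_t delta = cur_avg > ref ? cur_avg - ref : ref - cur_avg;
        if (delta > ref / 5) {                   /* hypothetical ~20% threshold */
            memset(s->frame_max, 0, sizeof(s->frame_max));
            memset(s->frame_avg, 0, sizeof(s->frame_avg));
            s->total_max = s->total_avg = 0;
            s->frame_idx = s->frame_num = 0;
        }
    }

    /* Update the ring buffer and the running totals over the kept history. */
    uint32_t idx = s->frame_idx;
    s->total_max += frame_peak - s->frame_max[idx];
    s->total_avg += cur_avg - s->frame_avg[idx];
    s->frame_max[idx] = frame_peak;
    s->frame_avg[idx] = cur_avg;
    s->frame_idx = (idx + 1) % (PEAK_DETECT_FRAMES + 1);
    if (s->frame_num < PEAK_DETECT_FRAMES + 1)
        s->frame_num++;
}

Storing pre-divided per-frame averages is what lets the history grow to 64 entries without the 32-bit totals overflowing, even for worst-case PQ content.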
Diffstat (limited to 'video/out/gpu/video.c')
-rw-r--r--  video/out/gpu/video.c  18
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/video/out/gpu/video.c b/video/out/gpu/video.c
index 9bf7baeb77..c27004e63b 100644
--- a/video/out/gpu/video.c
+++ b/video/out/gpu/video.c
@@ -2448,13 +2448,13 @@ static void pass_colormanage(struct gl_video *p, struct mp_colorspace src, bool
bool detect_peak = p->opts.compute_hdr_peak >= 0 && mp_trc_is_hdr(src.gamma);
if (detect_peak && !p->hdr_peak_ssbo) {
struct {
- unsigned int counter;
- unsigned int frame_idx;
- unsigned int frame_num;
- unsigned int frame_max[PEAK_DETECT_FRAMES+1];
- unsigned int frame_sum[PEAK_DETECT_FRAMES+1];
- unsigned int total_max;
- unsigned int total_sum;
+ uint32_t counter;
+ uint32_t frame_idx;
+ uint32_t frame_num;
+ uint32_t frame_max[PEAK_DETECT_FRAMES+1];
+ uint32_t frame_sum[PEAK_DETECT_FRAMES+1];
+ uint32_t total_max;
+ uint32_t total_sum;
} peak_ssbo = {0};

struct ra_buf_params params = {
@@ -2479,9 +2479,9 @@ static void pass_colormanage(struct gl_video *p, struct mp_colorspace src, bool
"uint frame_idx;"
"uint frame_num;"
"uint frame_max[%d];"
- "uint frame_sum[%d];"
+ "uint frame_avg[%d];"
"uint total_max;"
- "uint total_sum;",
+ "uint total_avg;",
PEAK_DETECT_FRAMES + 1,
PEAK_DETECT_FRAMES + 1
);
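For reference, with PEAK_DETECT_FRAMES + 1 equal to 64 (the buffer size chosen in point 4 above; the exact constant depends on the header at build time), the format string in this hunk expands to roughly the following member list in the generated shader's SSBO block:

uint frame_idx;
uint frame_num;
uint frame_max[64];
uint frame_avg[64];
uint total_max;
uint total_avg;

Only the shader-side names change here; the zero-initialized C struct above appears to be used only to size and initialize the buffer, so its member names need not match.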