author | Bin Jin <bjin1990@gmail.com> | 2015-10-28 01:37:55 +0000 |
---|---|---|
committer | wm4 <wm4@nowhere> | 2015-11-05 17:38:20 +0100 |
commit | 27dc834f37cd2427798c8cb582a574409865d1e7 (patch) | |
tree | fcc4fdfb0a4c8b20958ee110d5d8068439779848 /wscript_build.py | |
parent | 3f73d6352306d470821f3ea5078b7b7f8031f0d7 (diff) | |
download | mpv-27dc834f37cd2427798c8cb582a574409865d1e7.tar.bz2 mpv-27dc834f37cd2427798c8cb582a574409865d1e7.tar.xz |
vo_opengl: implement NNEDI3 prescaler
Implement NNEDI3, a neural network based deinterlacer.
The shader is reimplemented in GLSL and now supports both 8x4 and 8x6
sampling windows. This reimplementation allows the shader to be
licensed under LGPL2.1 so that it can be used in mpv.
The current implementation supports uploading the NN weights (up to
51kb with the placebo setting) in two different ways: via a uniform
buffer object, or by hard-coding them into the shader source. UBO
requires OpenGL 3.1, which only guarantees 16kb per block. But I find
that 64kb seems to be a common default for recent cards/drivers (which
nnedi3 is targeting), so I think we're fine here (with the default
nnedi3 setting the weights are 9kb). Hard-coding into the shader
requires OpenGL 3.3, for the "intBitsToFloat()" built-in function.
This is necessary to represent these weights precisely in GLSL. I
tried several human-readable floating point number formats (with
really high precision, as needed for single precision floats), but for
some reason they did not work reliably: bad pixels (with NaN values)
could be produced with some weight sets.
We could also add support for uploading these weights via a texture,
purely for compatibility reasons (e.g. upscaling a still image on a
low-end graphics card). But as I tested, it's rather slow even with a
1D texture (we would probably have to use a 2D texture due to
dimension size limitations). Since there is always a better choice for
NNEDI3 upscaling of still images (the vapoursynth plugin), it's not
implemented in this commit. If this turns out to be in popular demand
from users, it should be easy to add later.
For those who want to optimize the performance a bit further, the
bottlenecks seem to be:
1. the overhead of uploading and accessing these weights (in
particular, the shader code is regenerated for each frame, though
that happens on the CPU);
2. "dot()" performance in the main loop;
3. "exp()" performance in the main loop; there are various fast
implementations using bit tricks (probably with the help of the
intBitsToFloat function).
The code is tested with an nvidia card and driver (355.11), on Linux.
Closes #2230
Diffstat (limited to 'wscript_build.py')
-rw-r--r-- | wscript_build.py | 5 |
1 files changed, 5 insertions, 0 deletions
diff --git a/wscript_build.py b/wscript_build.py
index b6836a7beb..0bb0bc0620 100644
--- a/wscript_build.py
+++ b/wscript_build.py
@@ -57,6 +57,10 @@ def build(ctx):
         source = "sub/osd_font.otf",
         target = "sub/osd_font.h")
 
+    ctx.file2string(
+        source = "video/out/opengl/nnedi3_weights.bin",
+        target = "video/out/opengl/nnedi3_weights.inc")
+
     lua_files = ["defaults.lua", "assdraw.lua", "options.lua", "osc.lua",
                  "ytdl_hook.lua"]
     for fn in lua_files:
@@ -324,6 +328,7 @@ def build(ctx):
         ( "video/out/opengl/hwdec_osx.c",        "videotoolbox-gl" ),
         ( "video/out/opengl/hwdec_vdpau.c",      "vdpau-gl-x11" ),
         ( "video/out/opengl/lcms.c",             "gl" ),
+        ( "video/out/opengl/nnedi3.c",           "gl" ),
         ( "video/out/opengl/osd.c",              "gl" ),
         ( "video/out/opengl/superxbr.c",         "gl" ),
         ( "video/out/opengl/utils.c",            "gl" ),