path: root/video/out/gl_video.h
Commit log (each entry: subject [author, date, files changed, lines -removed/+added]):

* vo_opengl: refactor shader generation (part 1) [wm4, 2015-03-12, 1 file, -2/+2]

The basic idea is to use dynamically generated shaders instead of a single monolithic file plus a ton of ifdefs. Instead of having to set up every aspect of it separately (like compiling shaders, setting uniforms, performing the actual rendering steps, the GLSL parts), we generate the GLSL on the fly and perform the rendering at the same time. The GLSL is regenerated every frame, but the actual compiled OpenGL-level shaders are cached, which makes it fast again. Almost all logic can be in a single place.

The new code is significantly more flexible, which allows us to improve code clarity and performance and to add more features easily.

This commit is incomplete. It drops almost all previous code, and re-adds only the most important things (some of them actually buggy). The next commit will complete it; it's separate to preserve authorship information.

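The caching scheme described above can be illustrated with a minimal sketch (all names and sizes here are hypothetical, not mpv's actual implementation): the GLSL text is regenerated cheaply every frame and used as the cache key, and compilation only happens on a miss.

    /* Hypothetical sketch of caching compiled programs keyed by generated GLSL. */
    #include <stdio.h>
    #include <string.h>
    #include <GL/gl.h>

    #define MAX_CACHED_PROGRAMS 16

    struct cached_program {
        char glsl[8192];   /* generated shader text (the cache key) */
        GLuint program;    /* compiled/linked GL program object */
    };

    static struct cached_program cache[MAX_CACHED_PROGRAMS];
    static int cache_len;

    /* Return a program for the given generated GLSL, compiling only on a miss. */
    static GLuint get_program(const char *glsl, GLuint (*compile)(const char *))
    {
        for (int i = 0; i < cache_len; i++) {
            if (strcmp(cache[i].glsl, glsl) == 0)
                return cache[i].program;
        }
        GLuint prog = compile(glsl);
        if (cache_len < MAX_CACHED_PROGRAMS) {
            snprintf(cache[cache_len].glsl, sizeof(cache[0].glsl), "%s", glsl);
            cache[cache_len].program = prog;
            cache_len++;
        }
        return prog;
    }
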
* vo_opengl: add gamma-auto option [Stefano Pigozzi, 2015-03-04, 1 file, -0/+5]

This automatically sets the gamma option depending on lighting conditions measured from the computer's ambient light sensor.

sRGB – arguably the “sibling” to BT.709 for still images – has a reference viewing environment defined in its specification (IEC 61966-2-1:1999, see http://www.color.org/chardata/rgb/srgb.xalter). According to this data, the assumed ambient illuminance is 64 lux. This is the illuminance at which the gamma that results from ICC color management is correct.

On the other hand, BT.1886 formalizes the gamma level for dim environments as 2.40, and Apple resources (WWDC12 Session 523: Best practices for color management) define the BT.1886 dim environment at 16 lux.

So the logic we apply is:

- >= 64 lux -> gamma 1.961
- <= 16 lux -> gamma 2.400
- 16 lux < x < 64 lux -> logarithmic rescale of lux to gamma

The human perception of illuminance roughly follows a logarithmic scale of lux [1].

[1]: https://msdn.microsoft.com/en-us/library/windows/desktop/dd319008%28v=vs.85%29.aspx

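A minimal sketch of the lux-to-gamma mapping implied by the rules above (the endpoints come from the commit message; the exact interpolation is an assumption, not necessarily mpv's code):

    #include <math.h>

    /* Map ambient illuminance (lux) to a display gamma: clamp at the
     * endpoints, rescale logarithmically in between. */
    static float gamma_for_lux(float lux)
    {
        if (lux >= 64.0f)
            return 1.961f;
        if (lux <= 16.0f)
            return 2.400f;
        float t = (logf(lux) - logf(16.0f)) / (logf(64.0f) - logf(16.0f));
        return 2.400f + t * (1.961f - 2.400f);
    }
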
* vo_opengl: add support for linear scaling without CMS [Niklas Haas, 2015-02-06, 1 file, -0/+1]

This introduces a new option, linear-scaling, which is now implied by srgb, icc-profile and sigmoid-upscaling.

Notably, this means (sigmoidized) linear upscaling is now enabled by default in opengl-hq mode. The impact should be negligible, and there has been no observation of negative side effects of sigmoidized scaling, so it feels safe to do so.

* vo_opengl: get rid of unused field approx_gamma [Niklas Haas, 2015-02-06, 1 file, -1/+0]

This was left over from 61f5a80.

* vo_opengl: redraw when pausing while showing an interpolated frame [wm4, 2015-02-04, 1 file, -0/+1]

If smoothmotion is enabled, and the screen shows an interpolated frame the moment you pause, redraw a non-interpolated frame.

* vo: simplify VOs by adding generic screenshot support [wm4, 2015-01-24, 1 file, -1/+0]

At the time screenshot support was added, images weren't refcounted yet, so screenshots required specialized implementations in the VOs. But now we can handle these things much more simply. Also see commit 5bb24980.

If there are VOs in the future which can't do this (e.g. they need to write to the image passed to vo_driver->draw_image), this could still be disabled on a per-VO basis etc., so we lose no potential performance advantages.

* vo_opengl: add smoothmotion frame blending [Stefano Pigozzi, 2015-01-23, 1 file, -1/+4]

SmoothMotion is a way to time and blend frames, made popular by MadVR. Its intended behaviour is to remove stuttering caused by mismatches between the display refresh rate and the video fps, while preserving the video's original artistic qualities (no soap opera effect). It's supposed to make 24fps video playback on 60hz monitors as close as possible to a 24hz monitor.

Instead of drawing a frame once its pts has passed the vsync time, we redraw at the display refresh rate, and if we detect that the vsync falls between two frames we interpolate them (depending on their position relative to the vsync). We actually interpolate as few frames as possible, to keep the blur effect to a minimum. For example, if we were to play back a 1fps video on a 60hz monitor, we would blend on at most 1 vsync for each frame (while the other 59 vsyncs would be rendered as is).

Frame interpolation is always done before scaling and in linear light when possible (when an ICC profile is used, or :srgb is used).

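The interpolation weight implied by the description can be sketched as follows (a hypothetical helper; the names and clamping are illustrative, not vo_opengl's actual code). The vsync time is placed between the pts values of the two surrounding frames, and the resulting fraction decides how much of each frame ends up on screen.

    /* prev_pts <= vsync_time <= next_pts, all in seconds. */
    static double blend_weight(double prev_pts, double next_pts, double vsync_time)
    {
        if (next_pts <= prev_pts)
            return 0.0;                                   /* degenerate case */
        double w = (vsync_time - prev_pts) / (next_pts - prev_pts);
        return w < 0.0 ? 0.0 : w > 1.0 ? 1.0 : w;
    }

    /* The shader would then mix the two frames in linear light, e.g.
     *     color = mix(prev_frame_color, next_frame_color, w);        */
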
* vo_opengl: remove scale-sep and indirect options [Niklas Haas, 2015-01-22, 1 file, -2/+0]

These are now auto-detected sanely, and enabled whenever there would be a performance or quality gain (which is pretty much everything except bilinear/bilinear scaling).

Perhaps notably, with the absence of scale_sep there's no longer a way to use convolution filters on hardware without FBOs, but I don't think there's hardware in existence that doesn't have FBOs yet is still fast enough to run the fallback (slow) 2D convolution filters, so I don't think it's a net loss.

* vo_opengl: implement naive anti-ringing [Niklas Haas, 2015-01-22, 1 file, -0/+1]

This is not quite the same thing as madVR's antiringing algorithm, but it essentially does something similar. Porting madVR's approach to elliptic coordinates will take some amount of thought.

* vo_opengl: remove cscale-down suboption [wm4, 2015-01-20, 1 file, -1/+1]

For an explanation see the additions to the manpage.

* video: Add sigmoidal upscaling to avoid ringing artifacts [Niklas Haas, 2015-01-09, 1 file, -0/+3]

This avoids issues when upscaling directly in linear light, and is the recommended way to upscale images according to ImageMagick. The default slope of 6.5 offers a reasonable compromise between the ringing artifacts eliminated by sigmoid-upscaling and those introduced by it. The same goes for the default center of 0.75.

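The general idea can be sketched with a logistic curve using the defaults named above (slope 6.5, center 0.75). The normalization so that 0 and 1 map to themselves is an assumption for illustration; mpv's exact formulation may differ.

    #include <math.h>

    #define SIG_SLOPE  6.5
    #define SIG_CENTER 0.75

    static double sigmoid(double x)     /* forward logistic curve */
    {
        return 1.0 / (1.0 + exp(SIG_SLOPE * (SIG_CENTER - x)));
    }

    /* Map linear light into the sigmoidal domain before upscaling ... */
    static double sigmoidize(double v)
    {
        double lo = sigmoid(0.0), hi = sigmoid(1.0);
        double y = lo + v * (hi - lo);
        return SIG_CENTER - log(1.0 / y - 1.0) / SIG_SLOPE;   /* inverse logistic */
    }

    /* ... and back to linear light after the scaler has run. */
    static double desigmoidize(double v)
    {
        double lo = sigmoid(0.0), hi = sigmoid(1.0);
        return (sigmoid(v) - lo) / (hi - lo);
    }
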
* vo_opengl_cb: implement equalizer controls [wm4, 2015-01-06, 1 file, -2/+3]

This makes vo_opengl_cb respond to controls like "gamma" and "brightness". The commit includes an awkward refactor of vo_opengl to make this easier for vo_opengl_cb.

One problem is a logical race condition. The set of supported controls depends on the pixel format, which in turn is set by reconfig(). But the actual reconfig() call (on the renderer) happens asynchronously on the renderer thread. At the time it happens, the player most likely has already tried to set some controls from command line options (see init_vo() in video.c). So setting these command line options will fail most of the time, though it could randomly succeed. This can't be fixed directly, because the player can't wait on the renderer thread, because the renderer thread might already be waiting on the player.

* vo_opengl: remove quadbuffer/anaglyph stereo 3D rendering [wm4, 2014-12-15, 1 file, -1/+0]

Obscure feature, and I've never heard of anyone using it. The anaglyph effects can be reproduced with vf_stereo3d. The only thing that can't be reproduced with it is "quadbuffer", which requires special and expensive hardware.

* vo_opengl: make background color configurable [wm4, 2014-12-09, 1 file, -0/+3]

This mainly affects the black bars that are drawn if the window and video aspect ratios mismatch.

* client API: expose OpenGL renderer [wm4, 2014-12-09, 1 file, -1/+4]

This adds API to libmpv that lets host applications use the mpv OpenGL renderer. This is a more flexible (and possibly more portable) alternative to foreign window embedding (via --wid).

This assumes that methods like context sharing and multithreaded OpenGL rendering are infeasible, and that a way is needed to integrate it with an application that uses a single thread to render everything.

Add an example that does this with QtQuick/qml. The example is relatively lazy, but still shows how relatively simple the integration is. The FBO indirection could probably be avoided, but would require more work (and would probably lead to worse QtQuick integration, because it would have to ignore transformations like rotation).

Because this makes mpv directly use the host application's OpenGL context, there is no platform specific code involved in mpv, except for hw decoding interop.

main.qml is derived from some Qt example.

The following things are still missing:
- a way to do better video timing
- exposing GL renderer options, and allowing them to be changed at runtime
- support for color equalizer controls
- support for screenshots

* vo_opengl: minor changes [wm4, 2014-12-02, 1 file, -1/+1]

Always set the viewport on entry. The way the viewport is tracked is a bit complicated in my opinion, and in fact it doesn't even reduce the number of GL calls. Setting it on entry is actually redundant if the video covers the screen fully, because handle_pass() unconditionally sets it anyway, but avoiding it would complicate the cases where gl->Clear() is actually needed.

Add an fbo argument to gl_video_render_frame(). This allows rendering into an FBO rather than the default framebuffer. It will be useful for providing an API to render on an external GL context. (If that will actually be added.)

* vo_opengl: allow setting different filters for downscaling [wm4, 2014-11-14, 1 file, -0/+1]

* cocoa: reintroduce async resize [Stefano Pigozzi, 2014-10-18, 1 file, -0/+1]

After removing the synchronous libdispatch calls, this looks like it doesn't deadlock anymore. I also experimented with pthread_mutex_trylock like wm4 suggested, but it leads to some annoying black flickering. I will fall back to that only if some new deadlocks are discovered.

* cocoa: move to a simpler threading model [Stefano Pigozzi, 2014-10-04, 1 file, -1/+0]

Unfortunately, using dispatch_sync for synchronization turned out to be really bad for us. It caused a wide array of race conditions, deadlocks, etc. Moving to a very simple mutex. It's not clear to me how to do live resizing with this; for now it just flickers, which is unacceptable (maybe I'll draw black instead).

This should fix all the threading cocoa bugs. Reopen if that's not the case!

Fixes #751
Fixes #1129

* vo_opengl: add radius options for filters [Bin Jin, 2014-08-26, 1 file, -0/+1]

Add two new options, making it possible for the user to set the radius for those filters that have no fixed radius. Also add three new filters that support the new radius parameter.

* vo_opengl: add cparam1 and cparam2 options [Bin Jin, 2014-08-26, 1 file, -1/+1]

Although cscale is rarely used, it's possible that params of cscale are accidentally set to lparam1 and lparam2, which might cause unexpected results.

* vo_opengl: simplify redraw callback OSD handling [wm4, 2014-06-16, 1 file, -1/+0]

OSD used to be not thread-safe at all, so a trick was used to get it redrawn. This mostly reverts commit 6a2a8880, because OSD not being thread-safe was the non-trivial part of it. Mostly untested, because this code path is used on OSX only, and I don't have OSX.

* video/out: change aspects of OSD handling [wm4, 2014-06-15, 1 file, -2/+2]

Let the VOs draw the OSD on their own, instead of making OSD drawing a separate VO driver call. Further, make it the VO's responsibility to request subtitles with the correct PTS. We also basically allow the VO to request OSD/subtitles at any time.

OSX changes untested.

* video/out: remove legacy colorspace stuff [wm4, 2014-03-29, 1 file, -1/+1]

Reduce most dependencies on struct mp_csp_details, which was a bad first attempt at dealing with colorspace stuff. Instead, consistently use mp_image_params. Code which retrieves colorspace matrices from csputils.c still uses this type, though.

* vo_opengl: Simplify and clarify color correction code [Niklas Haas, 2014-03-10, 1 file, -0/+1]

This commit:

- Changes some of the #define and variable names for clarification and adds comments where appropriate.
- Unifies :srgb and :icc-profile, making them fit into the same step of the decoding process and removing the weird interactions between the two.
- Makes :icc-profile take precedence over :srgb (to significantly reduce the number of confusing and useless special cases).
- Moves BT.709 decompanding (approximate or actual) to the shader in all cases, making it happen before upscaling (instead of the old 0.45 gamma function). This is the simpler and more proper way to do it.
- Enables the approx gamma function to work with :srgb as well (since the two now share the gamma expansion code).
- Renames :icc-approx-gamma to :approx-gamma, since it is no longer tied to the ICC options or LittleCMS.
- Uses gamma 2.4 as the input space for the actual 3DLUT. This is now a fairly arbitrary factor; I picked 2.4 mainly because a higher pure power value here seems to produce visually better results with wide gamut profiles than the previous 1.95 or BT.709.
- Adds the input gamma space to the 3DLUT cache header, in case we change it more in the future or even make it user customizable (though I don't see why the latter would really be necessary).
- Fixes the OSD's gamma when using :srgb, which was previously still using the old (0.45) approximation in all cases.
- Updates the documentation on :srgb; it was still describing the behavior from circa a year ago.

This commit should serve both to make the CMS/shader code much more accessible and less confusing/error-prone, and to improve the performance of 3DLUTs with wide gamut color spaces. I would have liked to make it more modular, but almost all of these changes are interdependent, save for the documentation updates.

Note: Right now, the "3DLUT takes precedence over SRGB" logic is just coded into gl_lcms.c's compile_shaders function. Ideally, this should be done earlier, when parsing the options (by overriding the actual opts.srgb flag), with a warning output to the user.

Note: I'm not sure how well this works together with real-world subtitles that may need to be color corrected as well, or whether :approx-gamma needs to apply to subtitles too. I'll need to test this on proper files later.

Note: As of now, linear light scaling is still intrinsically tied to either :srgb or :icc-profile. It would be thinkable to have this as an extra option, :linear-scaling or similar, that could be used with or without the two color management options.

* vo_opengl: change gamma suboption to take a value [wm4, 2014-02-27, 1 file, -1/+1]

The previous version of the gamma suboption was pretty useless. It could be used to disable delayed gamma enabling, which is a mechanism to avoid having to adjust gamma in the shader by default.

Repurpose the suboption and allow setting an exact gamma value with it. You can already override gamma with the --gamma option as well as the gamma input property, but these use a weird curve to create the impression of a linear perceived brightness change when changing the value. This suboption now allows setting an exact gamma value.

* vo_opengl: add support for rectangle textures [wm4, 2013-12-01, 1 file, -0/+1]

This allows vo_opengl to use GL_TEXTURE_RECTANGLE textures, either by enabling it with the 'rectangle-textures' sub-option, or by having a hwdec backend force it. By default it's off.

The _only_ reason we're adding this is because VDA can export rectangle textures only.

* Rename sub.c/.h to osd.c/.h [wm4, 2013-11-24, 1 file, -1/+1]

This was way too misleading. osd.c merely calls the subtitle renderers, instead of actually dealing with subtitles.

* vo_opengl: add infrastructure for hardware decoding OpenGL interop [wm4, 2013-11-04, 1 file, -1/+3]

Most hardware decoding APIs provide some OpenGL interop. This allows using vo_opengl without having to read the video data back from the GPU.

This requires adding a backend for each hardware decoding API. (Each backend is an entry in gl_hwdec_vaglx[].) The backends expose video data as a set of OpenGL textures.

Add infrastructure to support this. The next commit will add support for VA-API.

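As a rough illustration of the backend-per-API idea, such an interop backend could be described by a small vtable like the one below. All names here are hypothetical; the actual interface lives in the gl_hwdec code, not in this sketch.

    #include <GL/gl.h>

    struct mp_image;   /* decoded hardware frame (opaque in this sketch) */

    /* Hypothetical interop backend: one instance per hardware decoding API. */
    struct hwdec_backend {
        const char *name;                                /* e.g. "vaapi-glx" */
        int  (*init)(void *ctx);                         /* set up API/GL interop */
        int  (*map_frame)(void *ctx, struct mp_image *hw_frame,
                          GLuint out_textures[4]);       /* expose frame as textures */
        void (*unmap_frame)(void *ctx);
        void (*uninit)(void *ctx);
    };
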
* m_config: refactor option defaults handling [wm4, 2013-10-24, 1 file, -0/+1]

Keep track of the default values directly, instead of creating a new instance of the option struct just to get the defaults.

Also get rid of the special handling of m_obj_desc.init_options. Instead, handle it purely in the option parser. Originally, I wanted to handle --vo=opengl-hq and --vo=direct3d_shaders with this (by making them aliases to the real VOs with a different preset), but since --vo=opengl-hq=help prints the wrong values (as a consequence of the simplification), I'm not doing that, and instead use something different.

* vo_opengl: blend alpha components by default [wm4, 2013-09-19, 1 file, -1/+1]

Improves display of images and video with an alpha channel, especially if the transparent regions contain (supposed to be invisible) garbage color values.

* video: handle video output levels with mp_image_params [wm4, 2013-08-24, 1 file, -1/+0]

Until now, video output levels (an obscure feature, like using TV screens that require RGB output in limited range, similar to limited-range YUV) still required handling of VOCTRL_SET_YUV_COLORSPACE. Simplify this, and use the new mp_image_params code.

This gets rid of some code. VOCTRL_SET_YUV_COLORSPACE is not needed at all anymore in VOs that use the reconfig callback. The result of VOCTRL_GET_YUV_COLORSPACE is now only used for the colormatrix related properties (basically, for display on the OSD). For other VOs, VOCTRL_SET_YUV_COLORSPACE will be sent only once after config instead of twice.

* video/out: use new mp_msg stuff for vo.c and vo_opengl [wm4, 2013-07-31, 1 file, -1/+1]

The first step; also serves as an example.

* vo_opengl: handle chroma location [wm4, 2013-06-28, 1 file, -0/+1]

Use the video decoder chroma location flags and render chroma locations other than centered. Until now, we've always used the intuitive and obvious centered chroma location, but H.264 uses something else.

FFmpeg provides a small overview in libavcodec/avcodec.h:

-----------
/**
 *  X   X      3 4 X      X are luma samples,
 *             1 2        1-6 are possible chroma positions
 *  X   X      5 6 X      0 is undefined/unknown position
 */
enum AVChromaLocation{
    AVCHROMA_LOC_UNSPECIFIED = 0,
    AVCHROMA_LOC_LEFT        = 1, ///< mpeg2/4, h264 default
    AVCHROMA_LOC_CENTER      = 2, ///< mpeg1, jpeg, h263
    AVCHROMA_LOC_TOPLEFT     = 3, ///< DV
    AVCHROMA_LOC_TOP         = 4,
    AVCHROMA_LOC_BOTTOMLEFT  = 5,
    AVCHROMA_LOC_BOTTOM      = 6,
    AVCHROMA_LOC_NB          ,    ///< Not part of ABI
};
-----------

The visual difference is literally minimal, but since videophiles apparently consider this detail a quality mark of a video renderer, support it anyway. We don't bother with chroma locations other than centered and left, though. Not sure about correctness, but it's probably ok.

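Relative to the centered case, the "left" locations above simply mean the chroma sample is horizontally co-sited with the left luma sample. A hypothetical helper (illustration only, not mpv's code) could translate this into a sub-texel shift applied to the chroma texture coordinates; as the commit says, only the centered and left cases matter here.

    #include <libavcodec/avcodec.h>

    /* Horizontal chroma offset in chroma-sample units, relative to centered:
     * 0.0 = centered (mpeg1/jpeg), -0.5 = left-aligned (mpeg2/4, h264). */
    static float chroma_x_offset(enum AVChromaLocation loc)
    {
        switch (loc) {
        case AVCHROMA_LOC_LEFT:
        case AVCHROMA_LOC_TOPLEFT:
        case AVCHROMA_LOC_BOTTOMLEFT:
            return -0.5f;
        default:
            return 0.0f;
        }
    }
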
* video: add a new method to configure filters and VOs [wm4, 2013-06-28, 1 file, -1/+1]

The filter chain and the video outputs have config() functions. They are strictly limited to transferring the video size and format. Other parameters (like color levels) have to be transferred separately.

Improve upon this by introducing a separate set of reconfig() functions, which use mp_image_params to carry format parameters. This struct contains all image format related parameters from config(), plus additional parameters such as colorspace.

Change vf_rotate to use it, as well as vo_opengl. vf_rotate is just an example/test case, but vo_opengl will need it later.

The intention is also to get rid of VOCTRL_SET_YUV_COLORSPACE. This information is now handed to the VOs via reconfig(). The getter, VOCTRL_GET_YUV_COLORSPACE, will still be needed though.

* gl_video: improve dithering [wm4, 2013-05-26, 1 file, -0/+3]

Use a different algorithm to generate the dithering matrix. This looks much better than the previous ordered dither matrix with its cross-hatch artifacts.

The matrix generation algorithm, as well as its implementation, was contributed by Wessel Dankers aka Fruit. The code in dither.c is his implementation, reformatted and with static global variables removed by me.

The new matrix is uploaded as a float texture - before this commit, it was a normal integer fixed point matrix. This means dithering will be disabled on systems without float textures.

The size of the dithering matrix can be configured, as the matrix is generated at runtime. The generation of the matrix can take rather long, and is already unacceptable with size 8. The default is 6, which takes about 100 ms on a Core 2 Duo system with dither.c compiled at -O2, which I consider just about acceptable.

The old ordered dithering is still available and can be selected by setting the dither=ordered sub-option. The ordered dither matrix generation code was moved to dither.c. This function was originally written by Uoti Urpala.

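Fruit's algorithm itself is not reproduced here, but the older ordered fallback mentioned above can be illustrated with the classic recursive Bayer construction (an illustrative sketch, not the code that was moved into dither.c):

    /* Fill an n x n ordered (Bayer) dither matrix, n a power of two.
     * Entries end up as a permutation of 0 .. n*n-1; dividing by n*n
     * gives the per-pixel thresholds. */
    static void bayer_matrix(int *m, int n)
    {
        m[0] = 0;
        for (int size = 1; size < n; size *= 2) {
            for (int y = 0; y < size; y++) {
                for (int x = 0; x < size; x++) {
                    int v = 4 * m[y * n + x];
                    m[y * n + x]                 = v;
                    m[y * n + x + size]          = v + 2;
                    m[(y + size) * n + x]        = v + 3;
                    m[(y + size) * n + x + size] = v + 1;
                }
            }
        }
    }
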
* gl_video: add scaler-resizes-only sub-option [wm4, 2013-05-26, 1 file, -0/+1]

This option disables the scaler set with lscale if the video image is not resized.

* add a way to resize window contents without VO resize [wm4, 2013-05-12, 1 file, -0/+1]

gl_video_resize_redraw() simply resizes and redraws (but without invoking swapGlBuffers()). The VO is not involved in any way, so this can simply be called from inside the mpgl lock from any thread. Requires a minor refactor of the GL OSD code in order to redraw without an OSD object.

* vo_opengl: add alpha output [wm4, 2013-03-28, 1 file, -0/+1]

Allows playing video with alpha information on X11, as long as the video contains alpha and the window manager does compositing. See vo.rst.

Whether a window can be transparent is decided by the choice of the X Visual used for window creation. Unfortunately, there's no direct way to request such a Visual through the GLX or X APIs, and use of the XRender extension is required to find out whether a Visual implies a framebuffer with alpha used by XRender (see for example [1]). Instead of depending on the XRender wrapper library (which would require annoying configure checks, even though XRender is virtually always supported), use a simple heuristic to find out whether a Visual has alpha. Since getting it wrong just means an optional feature will not work as expected, we consider this ok.

[1] http://stackoverflow.com/questions/4052940/how-to-make-an-opengl-rendering-context-with-transparent-background/9215724#9215724

* vo_opengl: split into multiple files, convert to new option API [wm4, 2013-03-28, 1 file, -0/+71]

gl_video.c contains all rendering code, gl_lcms.c the .icc loader and creation of the 3D LUT (and all LittleCMS specific code). vo_opengl.c is reduced to interfacing between the various parts.