path: root/DOCS/man/vo.rst
Commit history (newest first). Each entry shows: commit message (author, date, files changed, lines -/+).
* vo_opengl: add DRM EGL backend (rr-, 2015-11-08, 1 file, -2/+5)

  Notes:

  - Unfortunately, the only way to talk to EGL from within DRM I could find involves linking with GBM (generic buffer management for Mesa); see the sketch after these notes. Because of this, I'm pretty sure it won't work with proprietary NVidia drivers, but then again, last time I checked NVidia didn't offer proper screen resolution for VT.
  - VT switching doesn't seem to work at all. It's worth mentioning that using vo_drm before the introduction of the VT switcher had an anomaly where the user could switch to another VT and input text to it, while video played on top of that VT. However, that isn't the case with drm_egl: I can't switch to another VT during playback like this. This makes me think that it's either a limitation coming from my firmware or from EGL/KMS itself rather than a bug in my code. Nonetheless, I still left (untestable) VT switching code in place, in case it's useful to someone else.
  - The mode_id, connector_id and device_path should be configurable for power users and people who wish to watch videos on a nonprimary screen. Unfortunately I didn't see anything that would allow OpenGL backends to register their own set of options. At the same time, adding them to the global namespace is pointless.
  - A few dozen lines could be shared with vo_drm (setting up VT switching, most of the code behind page flipping). I don't have any strong opinion on this.
  - Sometimes I get minor visual glitches. I'm not sure if there's a race condition of some sort, an uninitialized variable (doubtful), or if it's a buggy driver. (I'm using integrated Intel HD Graphics 4400 with Mesa.)
  - .config and .control are very minimal.

  Signed-off-by: wm4 <wm4@nowhere>
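  The following is a minimal sketch of how an EGL context can be brought up on top of a DRM device through GBM, as described in the first note above. The gbm_* and egl* calls are the real library APIs; the helper name, the drm_fd parameter and the (omitted) error handling are illustrative only, not taken from the mpv code.

      // Sketch: EGL on bare DRM has to go through GBM (Mesa's generic buffer manager).
      #include <gbm.h>
      #include <EGL/egl.h>

      static EGLSurface egl_on_drm_sketch(int drm_fd, int w, int h,
                                          EGLDisplay *out_dpy, EGLContext *out_ctx)
      {
          // GBM wraps the DRM device so EGL has something to allocate buffers from.
          struct gbm_device *gbm = gbm_create_device(drm_fd);
          struct gbm_surface *gbm_surf =
              gbm_surface_create(gbm, w, h, GBM_FORMAT_XRGB8888,
                                 GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

          EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
          eglInitialize(dpy, NULL, NULL);
          eglBindAPI(EGL_OPENGL_API);

          static const EGLint cfg_attrs[] = {
              EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
              EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
              EGL_NONE,
          };
          EGLConfig cfg;
          EGLint num_cfg;
          eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &num_cfg);

          EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
          EGLSurface surf =
              eglCreateWindowSurface(dpy, cfg, (EGLNativeWindowType)gbm_surf, NULL);
          eglMakeCurrent(dpy, surf, surf, ctx);

          *out_dpy = dpy;
          *out_ctx = ctx;
          // After each eglSwapBuffers(), the new front buffer is fetched with
          // gbm_surface_lock_front_buffer() and scanned out via drmModePageFlip().
          return surf;
      }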
* vo_opengl: rename fancy-downscaling to correct-downscaling (wm4, 2015-11-07, 1 file, -2/+2)

  The old name was stupid. Very stupid.
* vo_opengl: fancy-downscaling: enable also for anamorphic clips (Avi Halachmi (:avih), 2015-11-07, 1 file, -3/+3)
* vo_opengl: implement NNEDI3 prescaler (Bin Jin, 2015-11-05, 1 file, -0/+30)

  Implement NNEDI3, a neural network based deinterlacer. The shader is reimplemented in GLSL and now supports both 8x4 and 8x6 sampling windows. This allows the shader to be licensed under LGPL2.1 so that it can be used in mpv.

  The current implementation supports uploading the NN weights (up to 51kb with the placebo setting) in two different ways: via a uniform buffer object, or hard-coded into the shader source. UBO requires OpenGL 3.1, which only guarantees 16kb per block. But 64kb seems to be a default setting for recent cards/drivers (which nnedi3 is targeting), so I think we're fine here (with the default nnedi3 setting the size of the weights is 9kb). Hard-coding into the shader requires OpenGL 3.3, for the "intBitsToFloat()" built-in function. This is necessary to represent these weights precisely in GLSL; I tried several human-readable floating point formats (with really high precision for single-precision float), but for some reason they were not working nicely - bad pixels (with NaN values) could be produced with some weight sets. (A small encoding sketch follows this entry.)

  We could also add support for uploading these weights with a texture, purely for compatibility reasons (e.g. upscaling a still image with a low-end graphics card). But as I tested, it's rather slow even with a 1D texture (we would probably have to use a 2D texture due to dimension size limitations). Since there is always a better choice for NNEDI3 upscaling of still images (the vapoursynth plugin), it's not implemented in this commit. If this turns out to be a popular demand from users, it should be easy to add later.

  For those who want to optimize the performance a bit further, the bottlenecks seem to be:
  1. the overhead of uploading and accessing the weights (in particular, the shader code is regenerated for each frame, though that happens on the CPU);
  2. "dot()" performance in the main loop;
  3. "exp()" performance in the main loop - there are various fast implementations using bit tricks (probably with the help of intBitsToFloat).

  The code was tested with an NVIDIA card and driver (355.11), on Linux.

  Closes #2230
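  A minimal sketch of the bit-exact weight embedding mentioned above: each float weight is reinterpreted as its 32-bit pattern on the CPU and emitted into the generated GLSL source as an intBitsToFloat() call, which avoids the rounding/NaN problems of printing decimal literals. The function and variable names are illustrative, not the actual mpv shader generator.

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      // Emit one NN weight into generated GLSL source, bit-exactly.
      static void emit_weight(FILE *shader_src, float w)
      {
          uint32_t bits;
          memcpy(&bits, &w, sizeof(bits));  // bit-cast float -> uint32_t
          // GLSL 3.30's intBitsToFloat() reconstructs exactly the same float
          // from this integer bit pattern.
          fprintf(shader_src, "intBitsToFloat(%d), ", (int32_t)bits);
      }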
* vo_opengl: add Super-xBR filter for upscaling (Bin Jin, 2015-11-05, 1 file, -0/+38)

  Add the Super-xBR filter for image doubling, and the prescaling framework to support it.

  The shader code was ported from the MPDN Extensions project, with modifications to process luma only.

  This commit is largely inspired by code from #2266, with `gl_transform_trans()` authored by @haasn taken directly.
* vo_opengl: win32: try to enable DwmFlush by default (wm4, 2015-11-01, 1 file, -2/+6)

  Enable it by default, but not unconditionally. Add an "auto" mode, which disables DwmFlush if the compositor is (probably) inactive. Let's see how this goes. (A sketch of the "auto" logic follows this entry.)

  Since I accidentally enabled DwmFlush always by default (more or less) in a previous commit touching this code, this is probably mostly just cargo-culting, and it's uncertain whether it does anything.

  Note that I still got bad vsync behavior when fullscreening mpv and making another window visible on the same screen. This happens even when forcing DWM.
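  A rough sketch of what the "auto" behavior described above can look like: flush through DWM only when the compositor reports itself as active. DwmIsCompositionEnabled() and DwmFlush() are the real dwmapi calls; the surrounding helper is illustrative.

      #include <windows.h>
      #include <dwmapi.h>  // link against dwmapi

      // "auto" mode: only wait on the desktop compositor if it is actually running;
      // with composition off (or bypassed), DwmFlush() would be pointless.
      static void dwm_flush_auto(void)
      {
          BOOL composited = FALSE;
          if (SUCCEEDED(DwmIsCompositionEnabled(&composited)) && composited)
              DwmFlush();  // blocks until DWM has presented a frame
      }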
* vo_opengl: add vsync-fences option (wm4, 2015-10-30, 1 file, -0/+9)

  Yet another relatively useless option that tries to make OpenGL's sync behavior somewhat sane. The results are not too encouraging. With a value of 1, vsync jitter is gone on nVidia, but there are frame drops (fewer than with glfinish). With 2, I get the usual vsync jitter _and_ frame drops. There's still some hope that it might prevent too-deep queuing with some GPUs, I guess.

  The timeout for the wait call is 1 second. The value is pretty arbitrary; it should not be so high that it freezes the process (if the GPU is un-nice), and not so low that it triggers the timeout in normal cases, even if the GPU load is very high. So I guess 1 second is ok as a timeout.

  The idea to use fences this way to control the queue depth was stolen from RetroArch: https://github.com/libretro/RetroArch/blob/df01279cf318e7ec90ace039d60515296e3de908/gfx/drivers/gl.c#L1856
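  Below is a minimal sketch of the fence-based queue limiting described above, assuming a GL 3.2+/ARB_sync context with the entry points provided by a loader such as GLEW. The queue size, helper name and call site ("right after SwapBuffers") are illustrative, not mpv's actual code.

      #include <GL/glew.h>  // provides glFenceSync() etc. on a GL 3.2+ context

      #define MAX_INFLIGHT 1  // e.g. the behavior of vsync-fences=1
      static GLsync fences[MAX_INFLIGHT + 1];
      static int num_fences;

      // Called once per displayed frame, right after SwapBuffers().
      static void limit_gpu_queue(void)
      {
          // Insert a fence marking "everything submitted for this frame".
          fences[num_fences++] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

          // If too many frames are in flight, block until the oldest one is done.
          if (num_fences > MAX_INFLIGHT) {
              glClientWaitSync(fences[0], GL_SYNC_FLUSH_COMMANDS_BIT,
                               1000000000ull /* 1 second timeout, as above */);
              glDeleteSync(fences[0]);
              num_fences--;
              for (int i = 0; i < num_fences; i++)
                  fences[i] = fences[i + 1];
          }
      }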
* vo_opengl: make the default debanding settings less excessive (Niklas Haas, 2015-10-21, 1 file, -6/+6)

  It's great that the new algorithm supports multiple placebo iterations and all, but it's really not necessary and hurts performance in the general case for the sake of the 0.1% that actually pause the screen and look for minute differences.

  Signed-off-by: wm4 <wm4@nowhere>
* manpage: edit recommended VO remarks (wm4, 2015-10-04, 1 file, -7/+4)
* Revert "vo_x11: remove this video output"wm42015-09-301-0/+6
| | | | | | | | | | | | | | | This reverts commit d11184a256ed709a03fa94a4e3940eed1b76d76f. Unfortunately, there was a lot of unexpected resistance. Do note that this is still extremely slow, crappy, etc. Note that vo_x11.c was further edited. Compared to the removed vo_x11.c, an additional ~200 lines of code was removed in order to simplify it. I tried to strip it down as much as possible. In particular, support for odd non-32 bit formats (24, 16, 15, 8 bit) is dropped. Closes #2300.
* vo_opengl: remove sharpen scalers, add sharpen sub-option (wm4, 2015-09-23, 1 file, -6/+12)

  This turns the old scalers (inherited from MPlayer) into a pre-processing step (after color conversion and before scaling). The code for the "sharpen5" scaler is reused for this.

  The main reason MPlayer implemented this as scalers was perhaps that FBOs were too expensive, and making it a scaler allowed implementing it in one pass. But unsharp masking is not really a scaler, and I would guess the result is more like combining bilinear scaling and unsharp masking.
* vo_opengl: implement debanding (and remove source-shader) (Niklas Haas, 2015-09-09, 1 file, -14/+33)

  The removal of source-shader is a side effect, since this effectively replaces it - and the video-reading code has been significantly restructured to make more sense and be more readable.

  This means users no longer have to constantly download and maintain a separate deband.glsl installation alongside mpv, which was the only real use case for source-shader that we found either way.
* vo_opengl: restore single pass optimization as separate code path (wm4, 2015-09-07, 1 file, -1/+14)

  The single-pass optimization, rendering the video in one shader pass and without FBO indirections, was removed some commits ago. It didn't have a place in this code, and caused considerable complexity and maintenance issues.

  On the other hand, it still has some worth, such as for use with extremely crappy hardware (GLES only, or OpenGL 2.1 without the FBO extension). Ideally, these use cases would be handled by a separate VO (say, vo_gles). While cleaner, this would still cause code duplication and other complexity.

  The third option is making the single-pass optimization a completely separate code path, with most vo_opengl features disabled. While this does duplicate some functionality (such as "unpacking" the video data from textures), it's also relatively unintrusive, and the high quality code path doesn't need to take it into account at all. On another positive note, this "dumb-mode" could be forced in other cases where OpenGL 2.1 is not enough, and where we don't want to care about versions this old.
* vo_opengl: require FBOs and get rid of the single-pass optimization (Niklas Haas, 2015-09-07, 1 file, -9/+8)

  This change makes vo_opengl slightly less compatible (ancient devices without FBOs will no longer work) and decreases performance in the simplest case (vo=opengl), in exchange for significantly reducing code complexity and making everything easier to reason about.
* vo_opengl: enable pbo by default with opengl-hq (wm4, 2015-09-02, 1 file, -1/+1)

  Can significantly help with very large video resolutions on nvidia drivers. It doesn't seem to have negative effects on Intel drivers either. (Although it could have on Intel drivers for older hardware.)

  For now, this is only for --vo=opengl-hq. Maybe --vo=opengl should use it too, but it's still meant to be the crappy, fail-safe default.
* vo_opengl: add tscale-clamp option (Niklas Haas, 2015-08-20, 1 file, -0/+6)

  This significantly reduces the amount of noticeable flashing when using tscale kernels with negative lobes, by cutting them off completely.

  I'm not sure if this has any negative effects. It needs a bit of subjective testing over a period of time, so I just made it an option.

  Fixes #2155.
* vo_rpi: disable background by default (wm4, 2015-08-20, 1 file, -0/+5)

  And add an option to enable it.
* vo_opengl: add temporal-dither-period option (Niklas Haas, 2015-07-20, 1 file, -0/+5)

  This was requested multiple times by users, and it's not hard to implement and/or maintain.
* vo_opengl: reimplement tscale=oversample (Niklas Haas, 2015-07-11, 1 file, -1/+1)

  Closes #2102.
* manpage: fix dwmflush parameter (wm4, 2015-07-03, 1 file, -1/+2)
* vo_opengl: adjust interpolation code for the new video-sync mechanism (Niklas Haas, 2015-07-01, 1 file, -5/+4)

  This should make interpolation work much better in general, although there still might be some side effects for unusual framerates (e.g. 35 Hz or 48 Hz). Most of the common framerates are tested and working fine (24 Hz, 30 Hz, 60 Hz).

  The new code doesn't have support for oversample yet, so it's been removed (and will most likely be reimplemented in a cleaner way if there's enough demand). I would recommend using something like robidoux or mitchell instead of oversample, though - they're much smoother for the common cases.
* vo_x11: remove this video output (wm4, 2015-06-26, 1 file, -6/+0)

  It only causes additional maintenance work. Even if you wanted to have a fallback, it's probably better to use --vo=sdl or so.
* vo_drm: Expose mode ID option to users (Marcin Kurczewski, 2015-05-28, 1 file, -0/+4)
* vo_opengl: CMS no longer implies linear scaling (Niklas Haas, 2015-05-27, 1 file, -10/+6)

  They're completely orthogonal concepts, merged in the past due to convenience and ease of implementing it in the old #ifdef hell renderer.

  Especially after the CMS stuff was generalized by 634b4a, this was a trivial change to implement, and it also means color management will be much higher quality when enabled with vo=opengl (which had quantization issues in the past due to the 8 bit FBO format and upscaling), since it can be done in a single pass now.
* vo_opengl: icc-profile overrides icc-profile-auto (Niklas Haas, 2015-05-27, 1 file, -2/+2)

  Signed-off-by: wm4 <wm4@nowhere>
* vo_opengl: add support for custom shaders (Niklas Haas, 2015-05-27, 1 file, -2/+70)
* vo_null: add framerate emulation (wm4, 2015-05-24, 1 file, -0/+6)
* vo_opengl: remove npot option (wm4, 2015-05-21, 1 file, -4/+0)

  Completely useless.
* vo_xv: make number of buffers configurable (wm4, 2015-05-20, 1 file, -0/+6)
* vo_opengl: change user options for requesting GLES (wm4, 2015-05-14, 1 file, -4/+7)

  Instead of having separate backends, make the use of GLES a flag. This reduces the number of backends and the resulting annoyances.

  Also, nobody cares about using GLES, so there's no backward compatibility either.
* vo_opengl_cb: add a "block" framedrop mode and make it default (wm4, 2015-05-12, 1 file, -2/+4)

  (I have no idea why there are different modes.)

  Instead of risking dropping frames too early, give it some margin. Since there are situations where this could deadlock, wait with a timeout. This can happen if e.g. the API user is refusing to render anything, or if uninitialization is happening.
* vo_opengl: change default FBO format (wm4, 2015-05-05, 1 file, -2/+2)

  Reduces (but likely does not remove) the danger of rounding intermediate values down to 8 bit. This is important for cscale, or any other processing that might store raw YUV values in framebuffers.

  Fixes #1918.
* vo_opengl: gl_lcms: replace icc-cache by icc-cache-dir (Niklas Haas, 2015-05-01, 1 file, -4/+7)

  This now stores caches for multiple ICC profiles, potentially all the user has ever used. The big use case for this is users with multiple monitors. The old logic would mandate recomputing the LUT and discarding the cache whenever dragging mpv from one screen to another.

  This also avoids having to save and check the ICC profile itself, since the file name already uniquely determines it.
* vo_drm: add missing documentation (Marcin Kurczewski, 2015-04-16, 1 file, -0/+13)
* vo_opengl: change dwmflush option values (wm4, 2015-04-14, 1 file, -3/+4)

  Use a choice instead of an integer. This is incompatible, but I'm not adding any compatibility since this option was added recently.
* vo_opengl: unify blend-subtitles-res and blend-subtitles (wm4, 2015-04-11, 1 file, -12/+5)
* vo_opengl: add blend-subtitles-res (Niklas Haas, 2015-04-10, 1 file, -0/+11)

  This can be used to draw the subtitles at the video's native resolution, which can make them look more natural and increases performance.
* opengl: win32 - add option 'dwmflush' to sync in DWM (Avi Halachmi (:avih), 2015-04-09, 1 file, -0/+10)

  This could help in cases where the DWM (Windows desktop compositor) adds another layer of buffering and therefore the SwapBuffers timing could get messed up.

  Signed-off-by: wm4 <wm4@nowhere>
* vo_opengl: make csp options consistent with vf_format (Niklas Haas, 2015-04-04, 1 file, -9/+9)
* csputils: add some missing colorspaces (Niklas Haas, 2015-04-04, 1 file, -0/+14)

  With target-prim and target-trc it makes sense to include some common colorspaces that aren't strictly speaking used for video.
* vo_opengl: make jinc presets resizable (Niklas Haas, 2015-04-04, 1 file, -3/+0)

  No real reason this is disabled with the new configuration API.
* vo_opengl: add scale-wparam option (Niklas Haas, 2015-04-04, 1 file, -0/+13)

  This lets us tune the window parameter.
* vo_opengl: refactor scaler configuration (Niklas Haas, 2015-04-04, 1 file, -12/+11)

  This merges all of the scaler-related options into a single configuration struct, and also cleans up the way they're passed through the code. (For example, the scaler index is no longer threaded through pass_sample, just the scaler configuration itself, and there's no longer duplication of the params etc.)

  In addition, this commit makes scale-down more principled, and turns it into a scaler in its own right - so there's no longer an ugly separation between scale and scale-down in the code.

  Finally, the radius stuff has been made more proper - filters always have a radius now (there's no more radius -1), and get a new .resizable attribute instead for when it's tunable.

  User-visible changes:
  1. scale-down has been renamed dscale and now has its own set of config options (dscale-param1, dscale-radius etc.), instead of reusing scale-param1 (which was arguably a bug).
  2. The default radius is no longer fixed at 3, but instead uses that filter's preferred radius by default. (Scalers with a default radius other than 3 include sinc, gaussian, box and triangle.)
  3. scale-radius etc. now go down to 0.5, rather than 1.0. 0.5 is the smallest radius that theoretically makes sense, and indeed it's used by at least one filter (nearest).

  Apart from that, it should just be internal changes only.

  Note that this sets up for the refactor discussed in #1720, which would be to merge scaler and window configurations (including parameters etc.) into a single, simplified string. In the code, this would now basically just mean getting rid of all the OPT_FLOATRANGE etc. lines related to scalers and replacing them with a single function that parses a string and updates the struct scaler_config as appropriate. (An illustrative sketch of such a struct follows this entry.)
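  A rough illustration of what a merged per-scaler configuration along the lines described above can look like; the field names are guesses for illustration only, not the actual mpv definitions.

      // Illustrative only: one self-contained configuration per scaler
      // (scale, dscale, cscale, tscale), so the rendering passes can take the
      // whole config instead of an index plus scattered option values.
      struct scaler_config_sketch {
          const char *kernel;      // e.g. "spline36", "lanczos", "box"
          const char *window;      // optional window function, e.g. "hanning"
          double radius;           // >= 0.5; 0 = use the filter's preferred radius
          double param1, param2;   // kernel-specific tunables (e.g. kaiser alpha)
          double blur;             // generic blur factor applied to the kernel
      };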
* vo_opengl: separate kernel and window (Niklas Haas, 2015-04-04, 1 file, -26/+33)

  This makes the core much more elegant, reusable, reconfigurable and also allows us to more easily add aliases for specific configurations. Furthermore, this lets us apply a generic blur factor / window function to arbitrary filters, so we can finally "mix and match" in order to fine-tune windowing functions.

  A few notes are in order:
  1. The current system for configuring scalers is ugly and rapidly getting unwieldy. I modified the man page to make it a bit more bearable, but long-term we have to do something about it, especially since...
  2. There's currently no way to affect the blur factor or parameters of the window functions themselves. For example, I can't actually fine-tune the kaiser window's param1, since there's simply no way to do so in the current API - even though filter_kernels.c supports it just fine!
  3. This removes some lesser-used filters (especially those which are purely window functions to begin with). If anybody asks, you can get e.g. the old behavior of scale=hanning by using scale=box:scale-window=hanning:scale-radius=1 (and yes, the result is just as terrible as that sounds - which is why nobody should have been using them in the first place).
  4. This changes the semantics of the "triangle" scaler slightly - it now has an arbitrary radius. This can possibly produce weird results for people who were previously using scale-down=triangle, especially in combination with scale-radius (for the usual upscaling). The correct fix for this is to use scale-down=bilinear_slow instead, which is an alias for triangle at radius 1.

  In regards to the last point, in future I want to make it so that filters have a filter-specific "preferred radius" (for the ones that are arbitrarily tunable), once the configuration system for filters has been redesigned (in particular in a way that will let us separate scale and scale-down cleanly). That way, "triangle" can simply have the preferred radius of 1 by default, while still being tunable (rather than the default radius being hard-coded to 3 always).
* vo_opengl: remove chroma-location suboption (wm4, 2015-04-03, 1 file, -4/+0)

  Terribly obscure, and vf_format can do this for all VOs.
* RPI support (wm4, 2015-03-29, 1 file, -0/+13)

  This requires FFmpeg git master for accelerated hardware decoding. Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav will also work.

  Most things work. Screenshots don't work with accelerated/opaque decoding (except when using full window screenshot mode). Subtitles are very slow - even simple but huge overlays can cause frame drops.

  This always uses fullscreen mode. It uses dispmanx and mmal directly, and there are no window managers or anything on this level.

  vo_opengl also kind of works, but is pretty useless and slow. It can't use opaque hardware decoding (copy-back can be used by forcing the option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend is preferred over the X11 ones in case you're trying on X11; but X11 is even more useless on RPI.

  This doesn't correctly reject extended h264 profiles and thus doesn't fall back to software decoding. The hardware supports only up to the high profile, and will e.g. return garbage for Hi10P video.

  This sets a precedent of enabling hw decoding by default, but only if RPI support is compiled in (which hopefully will be disabled on desktop Linux platforms). While it's more or less required to use hw decoding on the weak RPI, it causes more problems than it solves on real platforms (Linux has the Intel GPU problem, OSX still has some cases with broken decoding). So I can live with this compromise of having different defaults depending on the platform.

  Raspberry Pi 2 is required. This wasn't tested on the original RPI, though at least decoding itself seems to work (but full playback was not tested).
* manpage: update warning on blend-subtitles (Niklas Haas, 2015-03-27, 1 file, -2/+6)
* manpage: vo_opengl: blend-subtitles is broken (wm4, 2015-03-27, 1 file, -0/+3)
* manpage: fix typo (wm4, 2015-03-26, 1 file, -1/+1)
* vo_opengl: draw subtitles directly onto the video (Niklas Haas, 2015-03-26, 1 file, -0/+11)

  This has a number of user-visible changes:
  1. A new flag blend-subtitles (default on for opengl-hq) to control this behavior.
  2. The OSD itself will not be color managed or affected by gamma controls. To get subtitle CMS/gamma, blend-subtitles must be used.
  3. When enabled, this will make subtitles be cleanly interpolated by :interpolation, and also dithered etc. (just like the normal output).

  Signed-off-by: wm4 <wm4@nowhere>
* vo_opengl: set cscale=spline36 as default for opengl-hq (Niklas Haas, 2015-03-25, 1 file, -1/+1)

  Bilinear scaling is not a suitable default for something named "hq"; the whole reason this was done in the past was because cscale used to be obscenely slow. This is no longer the case, with cscale being nearly free.
* manpage: remove "experimental" notice from dxva2 code (wm4, 2015-03-19, 1 file, -1/+1)

  It's relatively stable now. Also fix a typo in an