path: root/video
Commit message (Author, Date, Files, Lines -/+)
* cocoa-cb: fix crash when querying window state (Akemi, 2019-04-02, 1 file, -1/+2)
    Fixes #6489
* cocoa-cb: wakeup vo when new events are available (Akemi, 2019-04-02, 1 file, -0/+2)
    New events were added but not fetched by the vo, because we didn't
    signal the vo that new events were available. Actually wake up the vo
    when new events are available.
* x11: fix cursor hiding initial state (Philip Sequeira, 2019-03-16, 2 files, -2/+4)
    Regression from 8e3308d687c3acdd0d572015b06efd5b492d93ee. Broken cases
    were:
    * --no-cursor-autohide acted like --cursor-autohide=always.
    * --cursor-autohide-fs-only always hid the cursor if starting
      non-fullscreen; entering fullscreen at least once fixed it.
* vo_gpu: increase user shader size limit (Bin Jin, 2019-03-13, 1 file, -1/+1)
    The old size limit was chosen before LUT textures were supported in
    user shaders. At that time, the whole user shader was compiled and run
    on the GPU, which made large user shaders impractical to use. With the
    introduction of LUT textures, the old size limit doesn't make any
    sense. For example, a 1024x1024 rgba16f LUT will cost 32MB of shader
    size. Fix this by increasing the size limit to a value that's unlikely
    to be reached.
* vo_libmpv: fix null pointer dereference (wnoun, 2019-03-11, 1 file, -3/+5)
    Closes: #6507
* Merge branch 'master' into pr6360 (Jan Ekström, 2019-03-11, 15 files, -261/+389)
    Manual changes done:
    * Merged the interface-changes under the already master'd changes.
    * Moved the hwdec-related option changes to video/decode/vd_lavc.c.
| * vo_gpu: add two useful operators to user shader (Bin Jin, 2019-03-09, 2 files, -0/+7)
    The modulo operator can be used to check whether a size is a multiple
    of a certain number. The equal operator can be used to verify that the
    sizes of different textures align.
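    A hedged example of how the modulo operator might appear in a user
    shader condition, assuming the RPN syntax of mpv's //!WHEN directives
    (exact spelling per the user-shader docs; treat as a sketch):

        //!WHEN HOOKED.w 32 % !
        // i.e. "width mod 32, then logical not": run this pass only when
        // the hooked texture's width is a multiple of 32.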
| * vo_gpu: make texture offset available to CHROMA hooks (Bin Jin, 2019-03-09, 1 file, -16/+25)
    Before this commit, the texture offset was set after all source
    textures were finalized, which means CHROMA hooks weren't able to
    align with luma planes. This could be problematic for chroma
    prescalers utilizing information from the luma plane. Fix this by
    finding the reference texture early, and setting the global texture
    offset early.
| * lcms: allow infinite contrast (zc62, 2019-03-09, 1 file, -1/+1)
    Fixes #5980
| * context_drm_egl: implement n-buffering (Anton Kindestam, 2019-02-25, 1 file, -59/+150)
    This allows context_drm_egl to use as many buffers as libgbm or the
    swapchain_depth setting allows (whichever is smaller).
    On pause and on still images (cover art etc.), to make sure that
    output does not lag behind user input, the swapchain is drained and
    reverts to working in a dual buffered (equivalent to
    swapchain-depth=1) manner.
    When possible (swapchain-depth>=2), the wait on the page flip event is
    now not done immediately after queueing, but is deferred to the next
    invocation of swap_buffers, which should give us more CPU time between
    invocations. Although, since gbm_surface_has_free_buffers() can only
    tell us a boolean value and not how many buffers we have left, we are
    forced to do this contortionist dance where we first overshoot until
    gbm_surface_has_free_buffers() reports 0, followed by immediately
    waiting so we can free a buffer, to be able to get the deferred wait
    on page flip rolling.
    With this commit we do not rely on the default vsync fences/latency
    emulation of video/out/opengl/context.c, but supply our own, since the
    places we create and wait for the fences need to be somewhat different
    for best performance.
    Minor fixes:
    * According to GBM documentation all BO:s gotten with
      gbm_surface_lock_front_buffer must be released before
      gbm_surface_destroy is called on the surface.
    * We let the page flip handler function handle the waiting_for_flip
      flag.
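    A rough sketch of the deferred page-flip wait described above, with
    hypothetical helpers (queue_flip, wait_flip) standing in for the real
    context_drm_egl internals:

        // Deferred wait: block on the *previous* flip only when the next
        // swap comes around, leaving CPU time between invocations.
        static void swap_buffers(struct ctx *p)
        {
            if (p->waiting_for_flip)
                wait_flip(p);              // drain last flip event now
            eglSwapBuffers(p->egl_display, p->egl_surface);
            queue_flip(p);                 // schedule flip, don't wait yet
            p->waiting_for_flip = true;
        }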
| * opengl: Support GL_ARB_sync style fences on OpenGL ES 3.0 (Anton Kindestam, 2019-02-25, 1 file, -0/+1)
    OpenGL ES 3.0 and up has support for GL_ARB_sync style fences. Make
    sure that mpv can use them.
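    For reference, a minimal sketch of the GL_ARB_sync style API involved;
    these entry points exist with the same signatures on desktop GL 3.2+
    and OpenGL ES 3.0+:

        GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        // ... submit rendering ...
        glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                         UINT64_C(1000000000));  // wait up to 1 second
        glDeleteSync(fence);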
| * vo_gpu: fix initial seeding of the peak detect ssbo (Niklas Haas, 2019-02-18, 2 files, -5/+7)
    This solves some edge cases when using files with very weird metadata
    (e.g. MaxCLL 10k and so forth). Instead of just blindly seeding it
    with the tagged metadata, forcibly set the initial state from the
    detected values.
| * vo_gpu: use dB units for scene change detection (Niklas Haas, 2019-02-18, 3 files, -11/+12)
    Rather than the linear cd/m^2 units, these (relative) logarithmic
    units lend themselves much better to actually detecting scene changes,
    especially since the scene averaging was changed to also work
    logarithmically.
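    For reference, a hedged sketch of the usual power-ratio definition of
    such relative logarithmic units (not necessarily mpv's exact formula):

        #include <math.h>
        // 10*log10 of the signal level relative to reference white
        static double to_dB(double lin) { return 10.0 * log10(lin); }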
| * vo_gpu: clamp sigmoid function (Niklas Haas, 2019-02-18, 1 file, -0/+2)
    Can explode on some clips otherwise.
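    A sketch of why a clamp helps, assuming the usual inverse-sigmoid
    curve used for sigmoidized scaling (names illustrative, not mpv's
    actual code):

        // log(1/x - 1) explodes as x -> 0 or x -> 1, so clamp the input
        // to the open interval first:
        x = fmin(fmax(x, 1e-4), 1.0 - 1e-4);
        double y = center - log(1.0 / x - 1.0) / slope;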
| * vo_gpu: tone map before gamut mapping (Niklas Haas, 2019-02-18, 1 file, -5/+4)
    Gamut mapping can take very bright out-of-gamut colors into the
    negatives, which completely destroys the color balance (which tone
    mapping tries its best to preserve).
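    The new ordering, sketched in C with illustrative function names:

        // Compress brightness while values are still well-behaved, then
        // convert primaries; the reverse order lets bright out-of-gamut
        // colors go negative before the tone curve sees them.
        color = tone_map(color);
        color = gamut_map(color);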
| * vo_gpu: make --gamut-warning warn on negative colors (Niklas Haas, 2019-02-18, 1 file, -1/+2)
    As is the case for actually out-of-gamut colors (rather than just too
    bright colors).
| * vo_gpu: improve numerical accuracy of PQ OETF constant (Niklas Haas, 2019-02-18, 1 file, -1/+1)
    Not a huge deal, but we can do the division in C, which makes the
    float constant larger (i.e. it carries more significant digits than a
    hand-rounded literal).
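    A hedged sketch using one of the standard SMPTE ST 2084 (PQ)
    constants, assuming this is the style of constant involved:

        // Let the compiler divide at full precision instead of
        // hand-rounding a literal such as 0.1593017578f:
        static const float PQ_M1 = 2610.0 / 16384.0;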
| * vo_gpu: allow color management in dumb mode (Niklas Haas, 2019-02-18, 1 file, -5/+6)
    There's no point in disallowing target-trc/prim in dumb mode, since
    they still work fine.
| * vo_gpu: improve accuracy of HDR brightness estimation (Niklas Haas, 2019-02-18, 2 files, -10/+14)
    This change switches to a logarithmic mean to estimate the average
    signal brightness. This handles dark scenes with isolated highlights
    much more faithfully than the linear mean did, since the log of the
    signal roughly corresponds to the perceptual brightness.
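    A minimal sketch of such a logarithmic (geometric) mean, assuming
    per-pixel signal samples sig[i] and a small epsilon to guard log(0):

        double log_sum = 0.0;
        for (size_t i = 0; i < n; i++)
            log_sum += log(fmax(sig[i], 1e-6));
        double log_avg = exp(log_sum / n);  // tracks perceived brightness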
| * vo_gpu: allow boosting dark scenes when tone mapping (Niklas Haas, 2019-02-18, 3 files, -1/+4)
    In theory our "eye adaptation" algorithm works in both ways, both
    darkening bright scenes and brightening dark scenes. But I've always
    just prevented the latter with a hard clamp, since I wanted to avoid
    blowing up dark scenes into looking funny (and full of noise). But
    allowing a tiny bit of over-exposure might be a good thing. I won't
    change the default just yet (better let users test), but a moderate
    value of 1.2 might be better than the current 1.0 limit. Needs testing
    especially on dark scenes.
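    The limit in question, as a hedged one-liner (names illustrative):

        // max_boost = 1.0 forbids brightening dark scenes entirely;
        // e.g. 1.2 would permit a mild 20% over-exposure.
        exposure = fmin(exposure, max_boost);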
| * vo_gpu: redesign peak detection algorithm (Niklas Haas, 2019-02-18, 3 files, -77/+61)
    The previous approach of using an FIR with tunable hard threshold for
    scene changes had several problems:
    - the FIR involved annoying hard-coded buffer sizes, high VRAM usage,
      and the FIR sum was prone to numerical overflow which limited the
      number of frames we could average over
    - the hard scene change detection was prone to both false positives
      and false negatives, each with their own (annoying) issues
    We also totally redesign the scene change detection: scrap this
    entirely and switch to a dual approach of using a simple single-pole
    IIR low pass filter to smooth out noise, while using a softer scene
    change curve (with tunable low and high thresholds), based on
    `smoothstep`.
    The IIR filter is extremely simple in its implementation and has an
    arbitrarily user-tunable cutoff frequency, while the smoothstep-based
    scene change curve provides a good, tunable tradeoff between
    adaptation speed and stability, without exhibiting either of the
    traditional issues associated with the hard cutoff.
    Another way to think about the new options is that the "low threshold"
    provides a margin of error within which we don't care about small
    fluctuations in the scene (which will therefore be smoothed out by the
    IIR filter).
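    A sketch of the dual approach on per-frame brightness samples (all
    names illustrative, not mpv's actual code):

        static float smoothstepf(float lo, float hi, float x)
        {
            x = fminf(fmaxf((x - lo) / (hi - lo), 0.0f), 1.0f);
            return x * x * (3.0f - 2.0f * x);
        }

        // Single-pole IIR low pass smooths out frame-to-frame noise:
        state += coeff * (cur - state);
        // Soft scene-change curve: the further the jump goes past the low
        // threshold, the harder we snap to the new value:
        float w = smoothstepf(thresh_low, thresh_high, fabsf(cur - state));
        state = (1.0f - w) * state + w * cur;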
| * vo_gpu: improve tone mapping desaturation (Niklas Haas, 2019-02-18, 4 files, -76/+87)
    Instead of desaturating towards luma, we desaturate towards the
    per-channel tone mapped version. This essentially provides a smooth
    roll-off towards the "hollywood"-style (non-chromatic) tone mapping
    algorithm, which works better for bright content, while continuing to
    use the "linear" style (chromatic) tone mapping algorithm for
    primarily in-gamut content.
    We also split up the desaturation algorithm into strength and
    exponent, which allows users to use less aggressive desaturation
    settings without affecting the overall curve.
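    A hedged sketch of the strength/exponent split (names and exact curve
    illustrative only):

        // Blend from the chromatic ("linear") result towards the
        // per-channel result as the pixel gets brighter:
        float overbright = fmaxf(luma - 1.0f, 1e-6f) / fmaxf(luma, 1e-6f);
        float coeff = fminf(strength * powf(overbright, exponent), 1.0f);
        color_r = (1.0f - coeff) * linear_r + coeff * per_channel_r;
        // ... likewise for g and b ...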
| * wayland_common: rename “shell” into “wm_base” (Emmanuel Gil Peyrot, 2019-02-17, 2 files, -11/+11)
    This is the naming xdg-shell stable adopted; it doesn’t make much
    sense to keep using “shell” everywhere with all functions calling it
    “wm_base”. Finishes what 76211609e3c589dafe3ef9a36cacc06e8f56de09
    started.
| * cocoa-cb: remove empty elements from dropped URLs (Akemi, 2019-02-10, 1 file, -1/+2)
    Fixes #6241
| * cocoa-cb: add support for VOCTRL_GET_DISPLAY_NAMES (Akemi, 2019-02-10, 1 file, -0/+14)
| * cocoa-cb: use Swift Extensions for convenience (Akemi, 2019-02-10, 1 file, -7/+4)
    Preparations for the following commit.
| * cocoa-cb: fix side by side Split View again (Akemi, 2019-01-23, 1 file, -3/+2)
    Some safety mechanisms for the async fs animation aren't needed
    anymore, due to improved logic and slightly different behaviour on new
    macOS versions. That safety fallback prevented the Split View because
    it always returned a rectangle of the whole screen, instead of just
    part/half of it.
    Fixes #6443
| * vo_gpu: allow resetting target-peak to the trc default (Kotori Itsuka, 2019-01-23, 1 file, -1/+2)
    Add "auto" to the possible values of target-peak. The default value
    for target_peak is to calculate the target using mp_trc_nom_peak.
    Unfortunately, this default was outside the acceptable range of
    10-10000 nits, which prevented its later reassignment. So add an
    "auto" choice to target-peak which lets clients and scripts go back to
    using the trc default after assigning a value.
| * vd_lavc: increase the possible length of the hwdec name (Akemi, 2019-01-23, 1 file, -1/+1)
    This led to an unexpected videotoolbox-copy hwdec name due to the last
    two chars being cut off. Since selection is also done by that name,
    one had to use "videotoolbox-co" to explicitly use the copy mode of
    videotoolbox.
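    A sketch of the failure mode (the buffer size is illustrative, not the
    actual mpv code):

        char name[16];  // "videotoolbox-copy" needs 18 bytes incl. NUL
        snprintf(name, sizeof(name), "%s", "videotoolbox-copy");
        // name now holds "videotoolbox-co": the last two chars are gone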
* | x11: don't hide cursor if window isn't focused (wm4, 2018-12-06, 2 files, -20/+33)
    I found this sort of annoying. You could argue that the "frontend"
    should maybe contain this logic, but who cares.
* | vo, vo_gpu, glx: correct GLX_OML_sync_control usage (wm4, 2018-12-06, 3 files, -84/+128)
    I misunderstood how this extension works. If I understand it correctly
    now, it's worse than I thought. The key thing is that the (ust, msc,
    sbc) triple is not for a single swap event. Instead, (ust, msc) run
    independently from sbc. Assuming a CFR display/compositor, this means
    you can at best know the vsync phase and frequency, but not the exact
    time a sbc changed value.
    There is GLX_INTEL_swap_event, which might work as expected, but it
    has no EGL equivalent (while GLX_OML_sync_control does, in theory).
    Redo the context_glx sync code. Now it's either more correct or less
    correct. I wanted to add proper skip detection (if a vsync gets
    skipped due to rendering taking too long and other problems), but it
    turned out to be too complex, so only some unused fields in vo.h are
    left of it. The "generic" skip detection has to do. The vsync_duration
    field is also unused by vo.c.
    Actually this seems to be an improvement. In cases where the flip call
    timing is off, but the real driver-level timing apparently still
    works, this will not report vsync skips or higher vsync jitter
    anymore. I could observe this with screenshots and fullscreen
    switching. On the other hand, maybe it just introduces an A/V offset
    or so.
    Why the fuck can't there be a proper API for retrieving these
    statistics? I'm not even asking for much.
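    A sketch of what the extension can still give you, using the real
    GLX_OML_sync_control entry point (error handling omitted; UST units
    are implementation-defined, commonly microseconds):

        int64_t ust0, msc0, sbc0, ust1, msc1, sbc1;
        glXGetSyncValuesOML(dpy, drawable, &ust0, &msc0, &sbc0);
        // ... some frames later ...
        glXGetSyncValuesOML(dpy, drawable, &ust1, &msc1, &sbc1);
        if (msc1 > msc0)  // vsync phase/frequency, not exact sbc times
            vsync_duration = (ust1 - ust0) / (double)(msc1 - msc0);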
* | vo: use a struct for vsync feedback stuff (wm4, 2018-12-06, 7 files, -36/+52)
    So new useless stuff can be easily added.
* | vo_gpu: glx: use GLX_OML_sync_control for better vsync reporting (wm4, 2018-12-06, 7 files, -0/+142)
    Use the extension to compute the (hopefully correct) video delay and
    vsync phase.
    This is very fuzzy, because the latency will suddenly be applied after
    some frames have already been shown. This means there _will_ be
    "jumps" in the time accounting, which can lead to strange effects at
    start of playback (such as making initial "dropped" etc. frames
    worse). The only reasonable way to fix this would be running a few
    dummy frame swaps at start of playback until the latency is known. The
    same happens when unpausing.
    This only affects display-sync mode.
    Correct function was not confirmed. It only "looks right". I don't
    have the equipment to make scientifically correct measurements.
    A potentially bad thing is that we trust the timestamps we're
    receiving. Out of bounds timestamps could wreak havoc. On the other
    hand, this will probably cause the higher level code to panic and just
    disable DS.
    As a further caveat, this makes a bunch of assumptions about UST
    timestamps. If there are delayed frames (i.e. we skipped one or more
    vsyncs), the latency logic is mostly reset. There is no attempt to
    make the vo.c skipped vsync logic use this. Also, the latency
    computation determines a vsync duration, and there's no effort to
    reconcile or share the vo.c logic for determining vsync duration.
* | Merge commit '559a400ac36e75a8d73ba263fd7fa6736df1c2da' into wm4-commits--merge-edition (Anton Kindestam, 2018-12-05, 3 files, -31/+55)
    This bumps libmpv version to 1.103
| * vo: remove bogus #if (wm4, 2018-05-24, 1 file, -2/+0)
    If anyone happened to build with GL disabled, this could lead to
    option changes not always refreshing the screen. Since vo_gpu is
    always enabled now (just not necessarily any backend for it), we can
    drop the #if completely.
    (The way this works is a bit idiotic - the option cache exists only to
    grab the change notification, which will trigger a redraw and make
    vo_gpu update its own second copy of them. But at least it avoids some
    layering issues for now.)
| * player: get rid of mpv_global.opts (wm4, 2018-05-24, 2 files, -8/+14)
    This was always a legacy thing. Remove it by applying an orgy of
    mp_get_config_group() calls, and sometimes m_config_cache_alloc() or
    mp_read_option_raw().
    win32 changes untested.
| * vd_lavc: move hwdec opts to local config, don't use global MPOpts (wm4, 2018-05-24, 1 file, -21/+41)
    The --hwdec* options are a good fit for the vd_lavc local option
    struct. This annoyingly requires manual prefixing of most of these
    options with --vd-lavc (could be avoided by using more sub-struct
    craziness, but let's not).
| * vd_lavc: minor simplification for get_format fallback (wm4, 2018-05-24, 1 file, -7/+1)
    The default get_format does exactly this, so we don't need to
    duplicate it.
    The only potential problem with this is that the logic doesn't
    entirely prevent that the avcodec_default_get_format hw_device_ctx
    path is triggered, which would probably work, but has unknown
    consequences and interactions. But the way the logic currently works
    it can't happen, provided the hwaccel metadata libavcodec provides is
    correct.
| * input: add a define for the number of mouse buttons and use it (wm4, 2018-05-24, 1 file, -0/+4)
    (Why the fuck are there up to 20 mouse buttons?)
* | spirv: remove --spirv-compiler=nvidia (Niklas Haas, 2018-12-01, 2 files, -62/+0)
    This option has been deprecated upstream for a long time, probably
    doesn't even work anymore, and won't work moving forwards as we
    replace the vulkan code by libplacebo wrappers.
    I haven't removed the option completely yet since in theory we could
    still add support for e.g. a native glslang wrapper in the future. But
    most likely the future of this code is deletion.
    As an aside, fix an issue where the man page didn't mention d3d11.
* | drm: rename plane options to better, invariant, names (Anton Kindestam, 2018-12-01, 8 files, -120/+125)
    This commit bumps the libmpv version to 1.102
    drm-osd-plane   -> drm-draw-plane
    drm-video-plane -> drm-drmprime-video-plane
    drm-osd-size    -> drm-draw-surface-size
    "draw plane", as in the plane that OpenGL draws to, whether it be
    video + OSD or just OSD. "drmprime video plane", as in the plane used
    for hwdec video imported via drmprime. "draw surface size", as in the
    size of the surface used for the draw plane.
    The new names are invariant whether or not hwdec_drmprime_drm is being
    used. The original naming was very confusing: when doing regular
    rendering (swdec or vaapi) the video would be displayed on the "OSD
    plane", and the "Video plane" would remain unused.
* | drm_atomic: Add general primary/overlay plane option (Anton Kindestam, 2018-12-01, 3 files, -18/+32)
    Add a general primary/overlay plane option to drm-osd-plane-id and
    drm-video-plane-id, so that the user can just request any usable
    primary or overlay plane for either of these two options. This should
    be somewhat more user-friendly (especially as neither of these two
    options currently has a useful help function), as usually you would
    only be interested in the type of the plane, and not exactly which
    plane gets picked.
* | gpu: prefer wayland context on autodetect (dudemanguy, 2018-11-19, 1 file, -3/+3)
* | vulkan: slightly improve vsync jitter measurements (Niklas Haas, 2018-11-19, 1 file, -0/+19)