path: root/video/out/opengl
Commit message (Author, Date, Files, Lines -/+)
* hwdec_cuda: Use the non-deprecated CUDA-GL interop API (Philip Langdale, 2016-09-10; 1 file, -12/+26)

  The Nvidia examples use the old (as in CUDA 3.x) interop API, which is deprecated, and I think not even functional on recent versions of CUDA for Windows. As I was following the examples, I used this old API. So, let's update to the new API, and hopefully it'll start working on Windows too.

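  For reference, a minimal sketch of what the cuGraphics* replacement for the deprecated cuGLRegisterBufferObject()/cuGLMapBufferObject() path looks like. Variable and function names here are illustrative, not the actual hwdec_cuda code:

      #include <GL/gl.h>
      #include <cuda.h>
      #include <cudaGL.h>

      // Register a GL PBO with CUDA once, at init time.
      static CUresult register_pbo(GLuint pbo, CUgraphicsResource *res)
      {
          return cuGraphicsGLRegisterBuffer(res, pbo,
                                            CU_GRAPHICS_REGISTER_FLAGS_WRITE_DISCARD);
      }

      // Per frame: map the resource, get its device pointer, copy one decoded
      // plane into it, unmap. (Error checking omitted for brevity.)
      static void copy_plane(CUgraphicsResource res, CUdeviceptr src, size_t src_pitch,
                             size_t width_bytes, size_t height)
      {
          cuGraphicsMapResources(1, &res, 0);
          CUdeviceptr dst;
          size_t dst_size;
          cuGraphicsResourceGetMappedPointer(&dst, &dst_size, res);
          CUDA_MEMCPY2D cpy = {
              .srcMemoryType = CU_MEMORYTYPE_DEVICE,
              .srcDevice     = src,          // plane in CUDA device memory
              .srcPitch      = src_pitch,
              .dstMemoryType = CU_MEMORYTYPE_DEVICE,
              .dstDevice     = dst,          // mapped view of the GL PBO
              .dstPitch      = width_bytes,
              .WidthInBytes  = width_bytes,
              .Height        = height,
          };
          cuMemcpy2D(&cpy);
          cuGraphicsUnmapResources(1, &res, 0);
      }
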
* vo_opengl: use dedicated image unref function in config case (wm4, 2016-09-08; 1 file, -1/+1)

  Just another corner-caseish potential issue. Unlike unreffing the image manually, unref_current_image() also takes care of properly unmapping hwdec frames. (The corner-case part of this is that it's probably never mapped at this point, but it's apparently not entirely guaranteed.)

* vo_opengl: simplify a condition (wm4, 2016-09-08; 1 file, -2/+1)

  The " || vimg->mpi" part virtually never seems to trigger, but on the other hand could possibly create unintended corner cases (for example by trying to upload a NULL image, which would then be marked as an error and render a blue screen).

  I guess it's a leftover from older times, where a NULL image meant "redraw the current frame". This is now handled by actually passing along the current frame.

* hwdec/opengl: Add support for CUDA and cuvid/NvDecode (Philip Langdale, 2016-09-08; 2 files, -0/+286)

  Nvidia's "NvDecode" API (up until recently called "cuvid") is a cross-platform, but Nvidia-proprietary API that exposes their hardware video decoding capabilities. It is analogous to their DXVA or VDPAU support on Windows or Linux but without using platform specific API calls.

  As a rule, you'd rather use DXVA or VDPAU as these are more mature and well supported APIs, but on Linux, VDPAU is falling behind the hardware capabilities, and there's no sign that nvidia are making the investments to update it.

  Most concretely, this means that there is no VP8/9 or HEVC Main10 support in VDPAU. On the other hand, NvDecode does export VP8/9 and partial support for HEVC Main10 (more on that below).

  ffmpeg already has support in the form of the "cuvid" family of decoders. Due to the design of the API, it is best exposed as a full decoder rather than an hwaccel. As such, there are decoders like h264_cuvid, hevc_cuvid, etc.

  These decoders support two output paths today - in both cases, NV12 frames are returned, either in CUDA device memory or regular system memory.

  In the case of the system memory path, the decoders can be used as-is in mpv today with a command line like:

      mpv --vd=lavc:h264_cuvid foobar.mp4

  Doing this will take advantage of hardware decoding, but the cost of the memcpy to system memory adds up, especially for high resolution video (4K etc).

  To avoid that, we need an hwdec that takes advantage of CUDA's OpenGL interop to copy from device memory into OpenGL textures. That is what this change implements.

  The process is relatively simple as only basic device context acquisition needs to be done by us - the CUDA buffer pool is managed by the decoder - thankfully.

  The hwdec looks a bit like the vdpau interop one - the hwdec maintains a single set of plane textures and each output frame is repeatedly mapped into these textures to pass on. The frames are always in NV12 format, at least until 10bit output support emerges.

  The only slightly interesting part of the copying process is that CUDA works by associating PBOs, so we need to define these for each of the textures.

  TODO items:
  * I need to add a download_image function for screenshots. This would do the same copy to system memory that the decoder's system memory output does.
  * There are items to investigate on the ffmpeg side. There appears to be a problem with timestamps for some content.

  Final note: I mentioned HEVC Main10. While there is no 10bit output support, NvDecode can return dithered 8bit NV12 so you can take advantage of the hardware acceleration. This particular mode requires compiling ffmpeg with a modified header (or possibly the CUDA 8 RC) and is not upstream in ffmpeg yet.

  Usage: You will need to specify vo=opengl and hwdec=cuda. Note that hwdec=auto will probably not work as it will try to use vdpau first.

      mpv --hwdec=cuda --vo=opengl foobar.mp4

  If you want to use filters that require frames in system memory, just use the decoder directly without the hwdec, as documented above.

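  To make the "single set of plane textures" concrete: NV12 means one full-resolution single-channel luma texture plus one half-resolution two-channel chroma texture, each with a PBO for the CUDA copy to write into. A rough sketch under that assumption (illustrative names, assumes a GL 3.x context and loader; not the actual hwdec code):

      #include <GL/glew.h>   // or any loader that exposes GL 3.x symbols

      static void create_nv12_planes(int w, int h, GLuint tex[2], GLuint pbo[2])
      {
          glGenTextures(2, tex);
          glGenBuffers(2, pbo);

          // Plane 0: luma, full size, one 8-bit channel.
          glBindTexture(GL_TEXTURE_2D, tex[0]);
          glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, w, h, 0,
                       GL_RED, GL_UNSIGNED_BYTE, NULL);
          glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[0]);
          glBufferData(GL_PIXEL_UNPACK_BUFFER, w * h, NULL, GL_DYNAMIC_DRAW);

          // Plane 1: interleaved CbCr chroma, half size, two 8-bit channels.
          glBindTexture(GL_TEXTURE_2D, tex[1]);
          glTexImage2D(GL_TEXTURE_2D, 0, GL_RG8, w / 2, h / 2, 0,
                       GL_RG, GL_UNSIGNED_BYTE, NULL);
          glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[1]);
          glBufferData(GL_PIXEL_UNPACK_BUFFER, (w / 2) * (h / 2) * 2, NULL, GL_DYNAMIC_DRAW);

          glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
      }
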
* vo_opengl: fix incorrect video rendering after vdpau preemption recovery (wm4, 2016-09-07; 1 file, -0/+1)

  This could also trigger in certain other cases, whenever it falls back to dumb mode.

* vo_opengl: fix another potential vdpau preemption issue (wm4, 2016-09-07; 1 file, -1/+2)

  reinit() will change the image params fields, so give it a copy. Will fix potential crashes if preemption happens more than once.

* vo_opengl: simplify option handling (wm4, 2016-09-06; 3 files, -45/+37)

  Instead of copying the options around... just don't. video.c now has full control over when options are updated. (It still gets notified from outside, but it decides when the updated options are copied: when m_config_cache_update() is called.) So there's no need for tricky stuff, and it can be simplified a bit.

  Also change lcms.c. We could do it like video.c, and get the options from the global config store. But it seems simpler to just provide a pointer to an option struct, which is arbitrarily mutated from the outside (from the perspective of lcms.c).

* vo_opengl: fix --icc-profile initial behavior (wm4, 2016-09-06; 1 file, -0/+1)

  Setting --icc-profile had no effect, until a vo_opengl option was changed at runtime. We must initialize the renderer for the initial option state too.

  For some reason, the ICC profile gets loaded twice. The next commit happens to fix this.

* vo_opengl: deprecate sub-options, add them as global options (wm4, 2016-09-02; 2 files, -66/+102)

  vo_opengl sub-options were always rather annoying to handle. It seems better to make them global options instead. This is simpler and easier to use. The only disadvantage we are aware of is that it's not clear that many/all of these new global options work with vo_opengl only. --vo=opengl-hq is also deprecated.

  There is extensive compatibility with the old behavior. One exception is that --vo-defaults will not apply to opengl-hq (though with opengl it still works). vo-cmdline is also dysfunctional and will be removed in a following commit.

  These changes also affect opengl-cb.

  The update mechanism is still rather inefficient: it requires syncing with the VO after each option change, rather than batching updates. There's also no granularity (video.c just updates "everything", and if auto-ICC profiles are enabled, vo_opengl.c will fetch them on each update).

  Most of the manpage changes were done by Niklas Haas <git@haasn.xyz>.

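  In practice this turns sub-option strings into ordinary top-level options. An illustrative before/after (option names picked as an example; the exact deprecation and compatibility rules are in the manpage):

      old: mpv --vo=opengl:scale=ewa_lanczos:deband video.mkv
      new: mpv --vo=opengl --scale=ewa_lanczos --deband video.mkv
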
* vo_opengl: rename 3dlut-size to icc-3dlut-size (wm4, 2016-09-02; 1 file, -1/+2)

  Not documenting this yet, because a later commit will change all the options anyway.

* vo_opengl: minor renderer option access refactor (wm4, 2016-09-02; 2 files, -0/+15)

  Reduce accesses to the renderer opts in vo_opengl.c, and instead add accessors for them to video.c.

  I suppose gamma and maybe icc-auto could be moved to vo_opengl.c options. Also, the output colorspace could probably be adjusted to what is really used, not just the options (although it's possible that this commit changes this, due to video.c mutating its own copy of the options according to actual renderer capabilities). But don't deal with this now.

* vo_opengl: remove pre/post/scale-shaders (Niklas Haas, 2016-09-02; 2 files, -70/+3)

  Deprecated in favor of user-shaders, which are functionally equivalent but superior. (Except in the case of scale-shader, which has no direct replacement, but it turned out to be a very unpopular feature anyway - most custom scalers don't fit into the mpv kernel infrastructure and are therefore implemented as user shaders regardless.)

  Signed-off-by: wm4 <wm4@nowhere>

* vo_opengl: explicitly check for GL errors around framebuffer depth check (wm4, 2016-08-29; 1 file, -0/+4)

  It seems like many GL implementations (including Mesa) choke on this, while others are fine. We still think that this use of the GL API is allowed by the standard (at least in the Mesa case), so to reduce confusion, explicitly check the "controversial" calls, and use an appropriate error message.

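  The pattern amounts to draining the error state, issuing the query, and then checking whether that specific call raised an error. A sketch of the idea (the exact attachment/pname shown here are an assumption about which query is meant, and the names are illustrative):

      #include <GL/glew.h>
      #include <stdio.h>

      static void query_default_fb_depth(void)
      {
          while (glGetError() != GL_NO_ERROR) {}   // discard stale errors

          GLint depth_bits = 0;
          glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_DEPTH,
                  GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, &depth_bits);

          if (glGetError() != GL_NO_ERROR)
              printf("GL implementation rejected the default framebuffer depth query\n");
          else
              printf("default framebuffer depth size: %d bits\n", depth_bits);
      }
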
* vo_opengl: angle: new opengl flag to control DirectComposition (Avi Halachmi (:avih), 2016-08-25; 2 files, -2/+8)

  On some systems DirectComposition might behave poorly. Add an opengl suboption flag 'dcomposition' (default=yes) which can disable it.

* wayland_common: fix fullscreen image switching bug (Rostislav Pehlivanov, 2016-07-30; 1 file, -2/+0)

  The problem was that when in fullscreen, switching between images did not issue a resize event, causing none of the images to be rendered correctly. This fixes the problem by issuing a resize event with the screen width and height.

  This commit also moves the zeroing of the events field to when it gets retrieved by mpv rather than randomly after a resize in the vo/backend code.

* vo_opengl: remove the 3dlut-size npot2 restriction (Niklas Haas, 2016-07-25; 2 files, -1/+3)

  This requires changing the pixel upload alignment because the odd sizes might not be aligned to multiples of 4.

  Anyway, the restriction has no real benefit and the sizes in between 32 and 64 might be worth using, so just drop it.

* vo_opengl: reduce default 3dlut-size to 64x64x64 (Niklas Haas, 2016-07-25; 1 file, -1/+1)

  Following testing after ebe798a, this is a more than sufficient size to cover our use case. The old default was a drop of about 58 dB PSNR using the old code, and this new default is about 65 dB PSNR, so it's actually an improvement despite resulting in a smaller size.

  There was no outlier whatsoever when comparing sizes around the 64 neighbourhood (with every step corresponding to a PSNR drop of about 0.07 dB), so I picked this since it's a power of two and requires no change to the current 3dlut-size parsing logic.

  I also tested smaller sizes such as 32x32x32 which performed almost as well on colorful samples, but this results in noticeable black boost in the dark regions, which is pretty undesirable. Therefore, we should avoid going much further below 64x64x64.

  Either way, this new size is so fast to compute that the 3dlut cache is almost useless on my end. In fact, it might even be slower to load the profile from the cache than to recompute it from scratch. (For caches on a disk. For caches on a tmpfs, it makes no difference.)

* vo_opengl: increase 3DLUT accuracy at lower LUT sizes (Niklas Haas, 2016-07-25; 1 file, -1/+8)

  This code had the exact same texture indexing bug that the original scaler code had before the introduction of the LUT_POS macro to fix it. We can re-use this same macro here, and the performance drop is virtually entirely negligible.

  The benefit is greatly improved LUT accuracy as the 3DLUT size decreases - in particular, the old LUT started introducing more and more black crush the lower the LUT size (because the error was essentially an over-contrast bias, with a magnitude linearly related to the LUT size). The new code improves black stability as the LUT size decreases, and only at very low values (16 and below) do black levels start noticeably getting affected (due to crude linearization of the nonlinear response curve).

  The default value of 3dlut-size is definitely generous enough for this to make no difference out of the box, but it also causes no performance drop at all on my machine so I see no harm in improving the logic. Furthermore, this means we could easily decrease the default 3dlut size in a future commit, perhaps even down to 64x64x64 as a default. (But more testing is warranted here.)

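  The remapping the macro performs is tiny: a coordinate x in [0,1] should land on texel centers rather than on the texture edges, so for an n-entry LUT axis it becomes x*(n-1)/n + 0.5/n. A sketch of the idea (not a verbatim copy of the macro):

      // Map x in [0,1] so that x=0 hits the center of the first texel (0.5/n)
      // and x=1 hits the center of the last texel ((n-0.5)/n) of an n-entry LUT.
      #define LUT_POS(x, n) ((x) * ((n) - 1.0) / (n) + 0.5 / (n))
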
* wayland: port to the new wakeup/wait_events framework (Rostislav Pehlivanov, 2016-07-21; 1 file, -2/+13)

  This fits natively into the vo/backend and allows the polling code to be simplified.

  One new change is the fact that surface_handle_enter flags VO_EVENT_WIN_STATE and VO_EVENT_RESIZE instead of only VO_EVENT_WIN_STATE. Before this, the code hackily relied on the timeout and the loop in the wait_frame function to track and set the scaling factor. Instead, this triggers mpv to run a schedule_resize and adjust the new VO output dimensions immediately.

  This is also more accurate since surface_handle_enter() gets called when a surface is created, moved and resized, which is exactly what the rest of the player might be interested in.

* vo_opengl: add a tscale=linear direct implementation (Niklas Haas, 2016-07-21; 1 file, -3/+10)

  This uses GLSL mix() instead of going through an indirect texture access. Easy to implement and might require less resources on some devices, since the oversample code was already essentially just a special case of this.

  Could be made the new default (as per issue #2685), but that should be done in a separate commit.

* x11: stop using vo.event_fd (wm4, 2016-07-20; 2 files, -0/+26)

  Instead let it do its own event loop wakeup handling.

* vo_opengl: allow backends to provide callbacks for custom event loops (wm4, 2016-07-20; 1 file, -0/+5)

  Until now, this has been either handled over vo.event_fd (which should go away), or by putting event handling on a separate thread. The backends which do the latter do it for a reason and won't need this, but X11 and Wayland will, in order to get rid of event_fd.

* videotoolbox: add yuv420p to --videotoolbox-format (wm4, 2016-07-15; 1 file, -0/+10)

* vo_opengl: hwdec: reset hw_subfmt field (wm4, 2016-07-15; 8 files, -0/+9)

  In theory, an mp_image_params with hw_subfmt set to non-0 while imgfmt is not a hwaccel format is invalid. (It worked fine because nothing checks this yet.)

* video: change hw_subfmt meaning (wm4, 2016-07-15; 2 files, -5/+4)

  The hw_subfmt field roughly corresponds to the field AVHWFramesContext.sw_format in ffmpeg. The ffmpeg one is of the type AVPixelFormat (instead of the underlying hardware format), so it's a good idea to switch to this too for preparation.

  Now the hw_subfmt field is an mp_imgfmt instead of an opaque/API-specific number. VDPAU and Direct3D11 already used mp_imgfmt, but Videotoolbox and VAAPI had to be switched.

  One somewhat user-visible change is that the verbose log will now always show the hw_subfmt as an image format, instead of as a nonsensical number.

  (In the end it would be good if we could switch to AVHWFramesContext completely, but the upstream API is incomplete and doesn't cover Direct3D11 and Videotoolbox.)

* vo_opengl: angle: use WARP if there are no hw adapters (James Ross-Gowan, 2016-07-12; 1 file, -2/+45)

  This should get mpv working on Windows 7 machines without hardware accelerated graphics adapters. It already worked on Windows 8 and up because those systems would silently fall back to WARP if there was no graphics hardware installed.

  The normal MPGL_CAP_SW flag is not set, so unlike other opengl backends, this will choose a software adapter even if opengl:sw is not specified. The reason for this is that, unlike on Linux (where vo_xv and vo_x11 can be used), mpv on Windows does not have any VO to fall back on when hardware acceleration isn't available, so if software adapters are rejected, the user won't see any video output when using the default settings. WARP seems to perform quite well, so it should be used in this case.

* vo_opengl: angle: try D3D9 when D3D11 fails eglInitialize (James Ross-Gowan, 2016-07-11; 1 file, -7/+8)

  This will happen when D3D11 is present on the machine but the supported feature level is too low.

* csp: document deviations from the references where they occur (Niklas Haas, 2016-07-05; 1 file, -2/+22)

  These mostly happen in situations where the correct behavior is relatively new and not found in the wild (therefore not worth implementing) and/or extremely complicated (and thus not worth worrying about the potential edge cases and UI changes).

  Still, it's best to document these where they happen to guide the poor souls maintaining these files in the future.

* vo_opengl: angle: update the swapchain on resize (James Ross-Gowan, 2016-07-04; 1 file, -1/+16)

  This uses eglPostSubBufferNV to trigger ANGLE to check the window size and update the size of the swapchain to match, which is recommended here: https://groups.google.com/d/msg/angleproject/RvyVkjRCQGU/gfKfT64IAgAJ

  With the D3D11 backend, using eglPostSubBufferNV with a 0-sized update region will even skip the Present() call, meaning it won't block for a vsync period.

  Hopefully ANGLE will have a less hacky way of doing this in future. See the relevant ANGLE issue: http://anglebug.com/1438

  Fixes #3301

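  The trick boils down to a single extension call with an empty update region. A sketch (assumes the surface was created with the EGL_NV_post_sub_buffer extension available; variable names are illustrative):

      #include <EGL/egl.h>
      #include <EGL/eglext.h>

      static void poke_swapchain_resize(EGLDisplay dpy, EGLSurface surface)
      {
          // Resolve the EGL_NV_post_sub_buffer entry point at runtime.
          PFNEGLPOSTSUBBUFFERNVPROC PostSubBufferNV =
              (PFNEGLPOSTSUBBUFFERNVPROC)eglGetProcAddress("eglPostSubBufferNV");
          if (PostSubBufferNV) {
              // A 0x0 region makes ANGLE re-check the window size without
              // actually presenting (and thus without blocking on vsync).
              PostSubBufferNV(dpy, surface, 0, 0, 0, 0);
          }
      }
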
* vo_opengl: error out gracefully when trying to use FBOs without FBO API (wm4, 2016-07-04; 1 file, -0/+5)

  This can for example happen with vo_opengl_cb, if it is used with a GL implementation that does not support FBOs. (mpv itself should never attempt to use FBOs if they're not available.) Without this check it would trigger an assert() in our dummy glBindFramebuffer wrapper.

  Suspected cause of #3308, although it's still unlikely.

* vo_opengl: move eval_szexpr to user_shaders.c (Niklas Haas, 2016-07-03; 3 files, -102/+124)

  This moves some of the bulky user-shader specific logic into the file dedicated to it. Rather than expose video.c state, variable lookup is now done via a simulated closure.

* vo_opengl: generalize HDR tone mapping mechanism (Niklas Haas, 2016-07-03; 3 files, -77/+115)

  This involves multiple changes:

  1. Brightness metadata is split into nominal peak and signal peak. For a quick and dirty explanation: nominal peak is the brightest value that your color space can represent (i.e. the brightness of an encoded 1.0), and signal peak is the brightest value that actually occurs in the video (i.e. the brightest thing that's displayed).

  2. vo_opengl uses a new decision logic to figure out the right nom_peak and sig_peak for all situations. It also does a better job of picking the right target gamut/colorspace to use for the OSD. (Which still is and still should be treated as sRGB.) This change in logic also fixes #3293 en passant.

  3. Since it was growing rapidly, the logic for auto-guessing / inferring the right colorimetry configuration (in pass_colormanage) was split from the logic for actually performing the adaptation (now pass_color_map).

  Right now, the new logic doesn't do a whole lot since HDR metadata is still ignored (but not for long).

* mp_image: split colorimetry metadata into its own struct (Niklas Haas, 2016-07-03; 2 files, -16/+16)

  This has two reasons:

  1. I tend to add new fields to this metadata, and every time I've done so I've consistently forgotten to update all of the dozens of places in which this colorimetry metadata might end up getting used. While most usages don't really care about most of the metadata, sometimes the intent was simply to "copy" the colorimetry metadata from one struct to another. With this being inside a substruct, those lines of code can now simply read a.color = b.color without having to care about added or removed fields.

  2. It makes the type definitions nicer for upcoming refactors.

  In going through all of the usages, I also expanded a few where I felt that omitting the "young" fields was a bug.

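  Roughly the shape of the change, as a sketch with assumed/abridged field names (the point is the single-assignment copy, not the exact layout):

      // Colorimetry bundled into one value type...
      struct mp_colorspace {
          int space;       // YUV matrix (e.g. BT.601 / BT.709 / BT.2020)
          int levels;      // limited vs. full range
          int primaries;   // color primaries
          int gamma;       // transfer characteristic
      };

      struct image_params_sketch {       // hypothetical stand-in for mp_image_params
          int imgfmt, w, h;
          struct mp_colorspace color;    // ...so copying colorimetry is just:
      };                                 //     dst.color = src.color;
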
* vo_opengl: don't constantly resize the output FBO (Niklas Haas, 2016-07-03; 1 file, -1/+1)

  Commit 883d3114 seems to have (accidentally?) dropped the FBOTEX_FUZZY from the output_fbo resize, which means that current master will keep resizing and resizing the FBO as you change the window size, introducing severe memory leaking after a while. (Not sure why that would cause memory leaks, but I blame nvidia.)

  Either way, it's bad for performance too, so it's worth fixing.

* vo_opengl: remove caching GL_MAX_TEXTURE_SIZE value (wm4, 2016-07-03; 1 file, -11/+15)

  No real need to cache this, and we need fewer fields in the OSD part struct. Also add logging for when the OSD texture is reallocated.

* vo_opengl: use ringbuffer of PBOs (wm4, 2016-07-03; 2 files, -7/+13)

  This is how PBOs are normally supposed to be used. Unfortunately I can't see any absolute improvement on nVidia binary drivers and playing 4K material. Compared to the "old" PBO path with 1 buffer, the measured GL time decreases significantly, though.

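  The usual shape of the technique, as a simplified sketch (not the actual mpv upload code): keep a small ring of buffers, write this frame's pixels into the next one, and let glTexSubImage2D() source from the bound PBO, so the driver can overlap the copy with rendering that may still read the older buffers.

      #include <GL/glew.h>   // assumes an initialized GL 3.x context and loader
      #include <string.h>

      #define NUM_PBOS 3

      static void pbo_ring_upload(GLuint pbos[NUM_PBOS], int *index, GLuint tex,
                                  const void *pixels, size_t size,
                                  int w, int h, GLenum format, GLenum type)
      {
          glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbos[*index]);
          glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW); // orphan old storage
          void *dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                       GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
          if (dst) {
              memcpy(dst, pixels, size);
              glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
              glBindTexture(GL_TEXTURE_2D, tex);
              // With a PBO bound, the last argument is an offset into the buffer, not a pointer.
              glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, format, type, NULL);
          }
          // Real code would fall back to a plain glTexSubImage2D() upload if mapping fails.

          glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
          *index = (*index + 1) % NUM_PBOS;
      }
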
* vo_opengl: support inconsistent negative strides per plane (wm4, 2016-07-03; 1 file, -8/+10)

  GL generally does not support flipping the image on upload, meaning negative strides are not supported. vo_opengl handles this by flipping rendering if the stride is inverted, and gl_pbo_upload() "ignores" negative strides by uploading without flipping the image.

  If individual planes had strides with different signs, this broke. The flipping affected the entire image, and only the sign of the first plane was respected.

  This is just a crazy corner case that will never happen, but it turns out this is quite simple to support, and actually improves the code somewhat.

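  Normalizing a plane amounts to a few lines of pointer arithmetic; a sketch of the idea (illustrative, not the actual code):

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      // Rebase one plane so its stride is positive. Returns whether the plane
      // must now be rendered vertically flipped.
      static bool normalize_plane(uint8_t **data, ptrdiff_t *stride, int height)
      {
          if (*stride >= 0)
              return false;
          *data += *stride * (height - 1);  // the last row has the lowest address
          *stride = -*stride;
          return true;
      }
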
* vo_opengl: move PBO upload handling to shared code (wm4, 2016-07-03; 4 files, -140/+91)

  This introduces a gl_pbo_upload_tex() function, which works almost like our gl_upload_tex() glTexSubImage2D() wrapper, except it takes a struct which caches the PBO handles. It also takes the full texture size (to make allocating an ideal buffer size easier), and a parameter to disable PBOs (so that the caller doesn't have to duplicate the gl_upload_tex() call if PBOs are disabled or unavailable).

  This also removes warnings and fallbacks on PBO failure. We just silently try using PBOs on every frame, and if that fails at some point, revert to normal texture uploads. Probably doesn't matter.

* vo_opengl: remove OSD bitmap packing (wm4, 2016-07-01; 2 files, -75/+13)

  It's packed in the OSD common layer already.

* d3d: implement screenshots for --hwdec=d3d11va (wm4, 2016-06-28; 1 file, -0/+1)

  No method of taking a screenshot was implemented at all. vo_opengl lacked window screenshotting, because ANGLE doesn't allow reading the frontbuffer. There was no way to read back from a D3D11 texture either.

  Implement reading image data from D3D11 textures. This is a low-quality effort to get basic screenshots done. Eventually there will be a better implementation: once we use AVHWFramesContext natively, the readback implementation will be in libavcodec, and will be able to cache the staging texture correctly. Hopefully. (For now it doesn't even have an AVHWFramesContext for D3D11 yet. But the abstraction is more appropriate for this purpose.)

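  Reading back a D3D11 texture follows the standard staging-texture pattern; a minimal sketch in C-style COM (error handling omitted, names illustrative, not the actual mpv code):

      #define COBJMACROS
      #include <d3d11.h>

      static void read_back(ID3D11Device *dev, ID3D11DeviceContext *ctx, ID3D11Texture2D *src)
      {
          D3D11_TEXTURE2D_DESC desc;
          ID3D11Texture2D_GetDesc(src, &desc);
          desc.Usage = D3D11_USAGE_STAGING;           // CPU-readable copy target
          desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
          desc.BindFlags = 0;
          desc.MiscFlags = 0;

          ID3D11Texture2D *staging = NULL;
          ID3D11Device_CreateTexture2D(dev, &desc, NULL, &staging);
          ID3D11DeviceContext_CopyResource(ctx, (ID3D11Resource *)staging,
                                                (ID3D11Resource *)src);

          D3D11_MAPPED_SUBRESOURCE map;
          ID3D11DeviceContext_Map(ctx, (ID3D11Resource *)staging, 0, D3D11_MAP_READ, 0, &map);
          // map.pData / map.RowPitch now describe the frame in system memory;
          // copy it out into a screenshot buffer here.
          ID3D11DeviceContext_Unmap(ctx, (ID3D11Resource *)staging, 0);
          ID3D11Texture2D_Release(staging);
      }
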
* d3d: merge angle_common.h into d3d.h (wm4, 2016-06-28; 5 files, -36/+9)

  OK, this was dumb. The file didn't have much to do with ANGLE, and the functionality can simply be moved to d3d.c. That file contains helpers for decoding, but can always be present (on Windows) since it doesn't access any D3D specific libavcodec APIs. Thus it doesn't need to be conditionally built like the actual hwaccel wrappers.

* vo_opengl: add output_size uniform to custom shader (Muhammad Faiz, 2016-06-28; 1 file, -0/+3)

  Logically, a scaler should know its input and output size.

  Signed-off-by: wm4 <wm4@nowhere>

* vo_opengl: minor typo and coding style fixes (wm4, 2016-06-28; 1 file, -5/+5)

* vo_opengl: revise the transfer curve logic (Niklas Haas, 2016-06-28; 1 file, -17/+10)

  Instead of hard-coding a big list, move some of the functionality to csputils. Affects both the auto-guess blacklist and the peak estimation. Also update the comments.

* vo_opengl: revise the logic for picking the default color space (Niklas Haas, 2016-06-28; 1 file, -11/+10)

  Too many "exceptions" these days, it's easier to just hard-code a whitelist instead of a blacklist. And besides, it only really makes sense to avoid adaptation for BT.601 specifically, since that's the one we auto-guess based on the resolution.
