path: root/video/out/vulkan/common.h
Commit message (author, date, files changed, lines -/+)
* mac/vulkan: remove old deprecated VK_MVK_macos_surface extension remains (der richter, 2024-02-29, 1 file, -1/+0)
* vo_gpu/vo_gpu_next: add vulkan support for macOS (der richter, 2023-10-14, 1 file, -0/+4)

    Add support for Vulkan through Metal and a translation layer like MoltenVK. Also add the possibility to use different render timing modes for testing. I still consider this experimental at the moment.
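    A hedged sketch of the Metal-backed surface creation this path relies on, via VK_EXT_metal_surface (the replacement for the deprecated VK_MVK_macos_surface whose removal is recorded in the entry above). This is illustrative only, not mpv's code, and assumes the instance was created with the required extensions enabled:

        #define VK_USE_PLATFORM_METAL_EXT
        #include <vulkan/vulkan.h>

        // Minimal hedged sketch (not mpv's code): create a Vulkan surface on
        // macOS on top of a CAMetalLayer, as provided by MoltenVK. `instance`
        // is assumed to have the VK_EXT_metal_surface (and typically
        // VK_KHR_portability_enumeration) instance extensions enabled.
        static VkSurfaceKHR create_metal_surface(VkInstance instance,
                                                 const CAMetalLayer *layer)
        {
            VkMetalSurfaceCreateInfoEXT info = {
                .sType = VK_STRUCTURE_TYPE_METAL_SURFACE_CREATE_INFO_EXT,
                .pLayer = layer,
            };
            VkSurfaceKHR surface = VK_NULL_HANDLE;
            if (vkCreateMetalSurfaceEXT(instance, &info, NULL, &surface) != VK_SUCCESS)
                return VK_NULL_HANDLE;
            return surface;
        }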
* vo_gpu_next: vulkan: libplacebo: unify log prefix (Niklas Haas, 2022-10-09, 1 file, -1/+0)

    The new status quo is simple: all messages coming from libplacebo are marked "vo/gpu{-next}/libplacebo", regardless of the backend API (vulkan vs opengl/d3d11). Messages coming from mpv's internal vulkan code will continue to come from "vo/gpu{-next}/vulkan", and messages coming from the vo module itself will be marked "vo/gpu{-next}".

    This is significantly better than the old status quo of vulkan messages coming from "vo/gpu{-next}/vulkan/libplacebo" whereas opengl/d3d11 messages simply came from "vo/gpu{-next}", even when those messages originated from libplacebo.

    (It's worth noting that the destructor for the log is redundant, because it's attached to the ctx, which is freed on uninit anyway.)
* libplacebo: switch to v4 naming convention (Niklas Haas, 2022-02-03, 1 file, -4/+4)

    All of these const struct pointers got typedefs; clean up the code accordingly.
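    For illustration only, a hedged sketch of the convention in question, using hypothetical names rather than libplacebo's actual headers:

        /* Hedged sketch of the naming convention referred to above; the names
         * here are hypothetical, not copied from libplacebo. */
        struct demo_gpu_t {
            int max_tex_size;
        };

        /* v4 style: the public handle is a typedef for a const struct pointer ... */
        typedef const struct demo_gpu_t *demo_gpu;

        /* ... so call sites write `demo_gpu gpu` instead of
         * `const struct demo_gpu_t *gpu`. */
        static int query_limit(demo_gpu gpu)
        {
            return gpu->max_tex_size;
        }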
* libplacebo: update log helpers (Niklas Haas, 2022-02-03, 1 file, -2/+2)

    Use the pl_log APIs introduced in libplacebo v4, replacing the deprecated pl_context concept.
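    A hedged sketch of the pl_log lifecycle under the v4 API; illustrative usage only, not mpv's code, and the parameter choices here are assumptions:

        #include <libplacebo/log.h>

        // Minimal hedged sketch (not mpv's actual code): create a pl_log with
        // the v4 API, use it wherever a pl_context used to be passed, and
        // destroy it on uninit.
        int main(void)
        {
            pl_log log = pl_log_create(PL_API_VER, &(struct pl_log_params) {
                .log_cb    = pl_log_simple,  // stock helper that prints to stdio
                .log_level = PL_LOG_INFO,
            });
            if (!log)
                return 1;

            // ... create pl_vulkan / pl_gpu / renderer objects against `log` ...

            pl_log_destroy(&log);            // takes &log so it can be zeroed
            return 0;
        }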
* vo_gpu: vulkan: expose swapchain to mpvk_ctx (Niklas Haas, 2021-11-03, 1 file, -0/+1)

    So I can reuse it in vo_gpu_next.
* vo_gpu: vulkan: use libplacebo instead (Niklas Haas, 2019-04-21, 1 file, -51/+7)

    This commit rips out the entire mpv vulkan implementation in favor of exposing lightweight wrappers on top of libplacebo instead, which provides much of the same except in a more up-to-date and polished form.

    This (finally) unifies the code base between mpv and libplacebo, which is something I've been hoping to do for a long time.

    Note: The ra_pl wrappers are abstract enough from the actual libplacebo device type that we can in theory re-use them for other devices like d3d11 or even opengl in the future, so I moved them to a separate directory for the time being. However, the rest of the code is still vulkan-specific, so I've kept the "vulkan" naming and file paths, rather than introducing a new `--gpu-api` type. (That would have ended up with significantly more code duplication.)

    Plus, the code and functionality are similar enough that for most users this should just be a straight-up drop-in replacement.

    Note: This commit excludes some changes; specifically, the updates to context_win and hwdec_cuda are deferred to separate commits for authorship reasons.
* vo_gpu: vulkan: Add support for exporting buffer memory (Philip Langdale, 2018-10-22, 1 file, -0/+4)

    The CUDA/Vulkan interop works on the basis of memory being exported from Vulkan and then imported by CUDA. To enable this, we add a way to declare a buffer as being intended for export, and then add a function to do the export.

    For now, we support the fd and Handle based exports on Linux and Windows respectively. There are others, which we can support when a need arises.

    Also note that this is just for exporting buffers, rather than textures (VkImages). Image import on the CUDA side is supposed to work, but it is currently buggy and waiting for a new driver release.

    Finally, at least with my nvidia hardware and drivers, everything seems to work even if we don't initialise the buffer with the right exportability options. Nevertheless I'm enforcing it so that we're following the spec.
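    A hedged sketch of the Linux (fd) export path using the standard Vulkan external-memory API; this is illustrative, not mpv's implementation, and omits the usual memory-requirement queries and error cleanup. It assumes VK_KHR_external_memory_fd is enabled on the device:

        #include <vulkan/vulkan.h>

        // Hedged sketch (not mpv's code): create an exportable buffer, back it
        // with exportable memory, then export an fd that CUDA can import.
        static int export_buffer_memory_fd(VkDevice dev, VkDeviceSize size,
                                           uint32_t mem_type_idx,
                                           VkBuffer *out_buf, VkDeviceMemory *out_mem)
        {
            // Mark the buffer itself as intended for export.
            VkExternalMemoryBufferCreateInfo ext_buf = {
                .sType = VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_BUFFER_CREATE_INFO,
                .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
            };
            VkBufferCreateInfo buf_info = {
                .sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
                .pNext = &ext_buf,
                .size = size,
                .usage = VK_BUFFER_USAGE_TRANSFER_DST_BIT,
                .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
            };
            if (vkCreateBuffer(dev, &buf_info, NULL, out_buf) != VK_SUCCESS)
                return -1;

            // Mark the backing allocation as exportable as well.
            VkExportMemoryAllocateInfo export_info = {
                .sType = VK_STRUCTURE_TYPE_EXPORT_MEMORY_ALLOCATE_INFO,
                .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
            };
            VkMemoryAllocateInfo alloc_info = {
                .sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO,
                .pNext = &export_info,
                .allocationSize = size,
                .memoryTypeIndex = mem_type_idx,
            };
            if (vkAllocateMemory(dev, &alloc_info, NULL, out_mem) != VK_SUCCESS)
                return -1;
            vkBindBufferMemory(dev, *out_buf, *out_mem, 0);

            // Do the actual export; the fd can then be imported on the CUDA side.
            PFN_vkGetMemoryFdKHR get_fd =
                (PFN_vkGetMemoryFdKHR)vkGetDeviceProcAddr(dev, "vkGetMemoryFdKHR");
            VkMemoryGetFdInfoKHR fd_info = {
                .sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR,
                .memory = *out_mem,
                .handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
            };
            int fd = -1;
            if (!get_fd || get_fd(dev, &fd_info, &fd) != VK_SUCCESS)
                return -1;
            return fd;
        }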
* vo_gpu: vulkan: try enabling required features (Niklas Haas, 2018-02-05, 1 file, -0/+1)

    Instead of enabling every feature under the sun, make an effort to just whitelist the ones we actually might use. Turns out the extended storage format support is needed for some of the storage formats we use, in particular rgba16.
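    A hedged sketch of the whitelisting approach with the standard Vulkan API; the specific features picked here are illustrative, not mpv's exact list:

        #include <vulkan/vulkan.h>

        // Hedged sketch (not mpv's code): query what the physical device
        // offers, then enable only the features we actually rely on, e.g.
        // extended storage image formats, which formats like rgba16 need when
        // used as shader storage images.
        static VkPhysicalDeviceFeatures pick_features(VkPhysicalDevice physd)
        {
            VkPhysicalDeviceFeatures have;
            vkGetPhysicalDeviceFeatures(physd, &have);

            VkPhysicalDeviceFeatures want = {0}; // everything off by default
            if (have.shaderStorageImageExtendedFormats)
                want.shaderStorageImageExtendedFormats = VK_TRUE;
            if (have.shaderImageGatherExtended)
                want.shaderImageGatherExtended = VK_TRUE;

            return want; // pass via VkDeviceCreateInfo.pEnabledFeatures
        }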
* vo_gpu: vulkan: support split command pools (Niklas Haas, 2017-12-25, 1 file, -3/+16)

    Instead of using a single primary queue, we generate multiple vk_cmdpools and pick the right one dynamically based on the intent. This has a number of immediate benefits:

    1. We can use async texture uploads
    2. We can use the DMA engine for buffer updates
    3. We can benefit from async compute on AMD GPUs

    Unfortunately, the major downside is that due to the lack of QF ownership tracking, we need to use CONCURRENT sharing for all resources (buffers *and* images!). In theory, we could try figuring out a way to get rid of the concurrent sharing for buffers (which is only needed for compute shader UBOs), but even so, the concurrent sharing mode doesn't really seem to have a significant impact over here (nvidia). It's possible that other platforms may disagree.

    Our deadlock-avoidance strategy is stupidly simple: Just flush the command every time we need to switch queues, and make sure all submission and callbacks happen in FIFO order. This required lifting the cmds_pending and cmds_queued out from vk_cmdpool to mpvk_ctx, and some functions died/got moved as a result, but that's a relatively minor change.

    On my hardware this is a fairly significant performance boost, mainly due to async transfers. (Nvidia doesn't expose separate compute queues anyway.) On AMD, this should be a performance boost as well due to async compute.
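    A hedged sketch of what CONCURRENT sharing looks like at resource-creation time, using the standard Vulkan API rather than mpv's actual vk_cmdpool code:

        #include <vulkan/vulkan.h>

        // Hedged sketch (not mpv's code): with multiple queue families in use
        // and no queue-family ownership tracking, resources are created with
        // VK_SHARING_MODE_CONCURRENT across all families that may touch them.
        static VkResult create_shared_buffer(VkDevice dev, VkDeviceSize size,
                                             const uint32_t *qf_indices,
                                             uint32_t num_qfs, VkBuffer *out)
        {
            VkBufferCreateInfo info = {
                .sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO,
                .size = size,
                .usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT |
                         VK_BUFFER_USAGE_TRANSFER_DST_BIT,
                // EXCLUSIVE would be cheaper, but requires explicit ownership
                // transfers between graphics/compute/transfer queue families.
                .sharingMode = num_qfs > 1 ? VK_SHARING_MODE_CONCURRENT
                                           : VK_SHARING_MODE_EXCLUSIVE,
                .queueFamilyIndexCount = num_qfs > 1 ? num_qfs : 0,
                .pQueueFamilyIndices = num_qfs > 1 ? qf_indices : NULL,
            };
            return vkCreateBuffer(dev, &info, NULL, out);
        }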
* vo_gpu: vulkan: add a vk_signal abstraction (Niklas Haas, 2017-12-25, 1 file, -0/+4)

    This combines VkSemaphores and VkEvents into a common umbrella abstraction which can resolve to either. We aggressively try to prefer VkEvents over VkSemaphores whenever the conditions are met (1. we can unsignal the semaphore, i.e. it comes from the same frame; and 2. it comes from the same queue).
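    The struct and helper below are a hedged sketch of that idea; the field names are illustrative, not mpv's actual vk_signal definition:

        #include <stdbool.h>
        #include <stdint.h>
        #include <vulkan/vulkan.h>

        // Hedged sketch (not mpv's code): a signal resolves to either a
        // VkSemaphore (cross-queue / cross-frame) or a VkEvent (same queue,
        // same frame), whichever the conditions allow.
        struct vk_signal {
            VkSemaphore semaphore;  // fallback when same-queue/same-frame fails
            VkEvent     event;      // preferred when we can unsignal it ourselves
            uint32_t    source_qf;  // queue family that signals this object
        };

        // Decide whether the cheaper VkEvent path applies, per the rules above.
        static bool signal_can_use_event(const struct vk_signal *sig,
                                         uint32_t waiting_qf, bool same_frame)
        {
            return sig->event && same_frame && sig->source_qf == waiting_qf;
        }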
* vo_gpu: vulkan: refactor vk_cmdpool (Niklas Haas, 2017-12-25, 1 file, -1/+1)

    1. No more static arrays (deps / callbacks / queues / cmds)
    2. Allows safely recording multiple commands at the same time
    3. Uses resources optimally by never over-allocating commands
* vo_gpu: vulkan: add support for Windows (James Ross-Gowan, 2017-09-28, 1 file, -0/+3)
* vo_gpu: vulkan: add support for wayland (Rostislav Pehlivanov, 2017-09-26, 1 file, -0/+3)
* vo_gpu: vulkan: generalize SPIR-V compiler (Niklas Haas, 2017-09-26, 1 file, -0/+1)

    In addition to the built-in nvidia compiler, we now also support a backend based on libshaderc. shaderc is sort of like glslang except it has a C API and is available as a dynamic library.

    The generated SPIR-V is now cached alongside the VkPipeline in the cached_program. We use a special cache header to ensure validity of this cache before passing it blindly to the vulkan implementation, since passing invalid SPIR-V can cause all sorts of nasty things. It's also designed to self-invalidate if the compiler gets better, by offering a catch-all `int compiler_version` that implementations can use as a cache invalidation marker.
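    A hedged sketch of compiling GLSL to SPIR-V through shaderc's C API; this is illustrative, not mpv's actual spirv backend:

        #include <shaderc/shaderc.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        // Hedged sketch (not mpv's code): compile a GLSL fragment shader to a
        // SPIR-V blob via shaderc's C API, which is what makes it usable from
        // C without pulling in glslang's C++ interface.
        static void *compile_glsl(const char *glsl, size_t *out_size)
        {
            shaderc_compiler_t compiler = shaderc_compiler_initialize();
            shaderc_compilation_result_t res = shaderc_compile_into_spv(
                compiler, glsl, strlen(glsl),
                shaderc_glsl_fragment_shader, "inline.frag", "main", NULL);

            void *spirv = NULL;
            if (shaderc_result_get_compilation_status(res) ==
                    shaderc_compilation_status_success) {
                *out_size = shaderc_result_get_length(res);
                spirv = malloc(*out_size);
                memcpy(spirv, shaderc_result_get_bytes(res), *out_size);
            } else {
                fprintf(stderr, "%s\n", shaderc_result_get_error_message(res));
            }

            shaderc_result_release(res);
            shaderc_compiler_release(compiler);
            return spirv; // caller frees; NULL on failure
        }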
* vo_gpu: vulkan: initial implementation (Niklas Haas, 2017-09-26, 1 file, -0/+51)

    This time based on ra/vo_gpu. 2017 is the year of the vulkan desktop!

    Current problems / limitations / improvement opportunities:

    1. The swapchain/flipping code violates the vulkan spec, by assuming that the presentation queue will be bounded (in cases where rendering is significantly faster than vsync). But apparently, there's simply no better way to do this right now, to the point where even the stupid cube.c examples from LunarG etc. do it wrong. (cf. https://github.com/KhronosGroup/Vulkan-Docs/issues/370)

    2. The memory allocator could be improved. (This is a universal constant)

    3. Could explore using push descriptors instead of descriptor sets, especially since we expect to switch descriptors semi-often for some passes (like interpolation). Probably won't make a difference, but the synchronization overhead might be a factor. Who knows.

    4. Parallelism across frames / async transfer is not well-defined; we either need to use a better semaphore / command buffer strategy or a resource pooling layer to safely handle cross-frame parallelism. (That said, I gave resource pooling a try and was not happy with the result at all, so I'm still exploring the semaphore strategy.)

    5. We aggressively use pipeline barriers where events would offer a much more fine-grained synchronization mechanism. As a result of this, we might be suffering from GPU bubbles due to too-short dependencies on objects. (That said, I'm also exploring the use of semaphores as an ordering tactic, which would allow cross-frame time slicing in theory.)

    Some minor changes to the vo_gpu and infrastructure, but nothing consequential.

    NOTE: For safety, all use of asynchronous commands / multiple command pools is currently disabled completely. There are some left-over relics of this in the code (e.g. the distinction between dev_poll and pool_poll), but that is kept in place mostly because this will be re-extended in the future (vulkan rev 2). The queue count is also currently capped to 1, because the lack of cross-frame semaphores means we need the implicit synchronization from the same-queue semantics to guarantee a correct result.