path: root/video/fmt-conversion.c
Commit log for video/fmt-conversion.c, newest first. Each entry shows the commit message, author, date, files changed, and lines removed/added.
* video: don't define IMGFMT_VULKAN conditionally (sfan5, 2024-02-26, 1 file, -2/+0)

  We generally try to avoid that due to the #ifdef mess. The equivalent format is defined in ffmpeg 4.4, while our interop code requires a higher version, but that doesn't cause any problems.
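  To illustrate, a hypothetical before/after of the table entry (HAVE_VULKAN_INTEROP is an assumed guard name, not necessarily the real one):

      /* Before: the IMGFMT <-> AV_PIX_FMT pair existed only behind a guard. */
      #if HAVE_VULKAN_INTEROP
          {IMGFMT_VULKAN, AV_PIX_FMT_VULKAN},
      #endif

      /* After: AV_PIX_FMT_VULKAN exists in every supported ffmpeg version
       * (>= 4.4), so the entry can be unconditional. */
      {IMGFMT_VULKAN, AV_PIX_FMT_VULKAN},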
* hwdec_vulkan: add Vulkan HW Interop (Philip Langdale, 2023-05-28, 1 file, -0/+3)

  Vulkan Video Decoding has finally become a reality, as it's now showing up in shipping drivers, and the ffmpeg support has been merged. With that in mind, this change introduces HW interop support for ffmpeg Vulkan frames.

  The implementation is functionally complete - it can display frames produced by hardware decoding, and it can work with ffmpeg vulkan filters. There are still various caveats due to gaps and bugs in drivers, so YMMV, as always. Primary testing has been done on Intel, AMD, and nvidia hardware on Linux, with basic Windows testing on nvidia.

  Notable caveats:

  * Due to driver bugs, video decoding on nvidia does not work right now, unless you use the Vulkan Beta driver. It can be worked around, but requires ffmpeg changes that are not considered acceptable to merge.
  * Even if those work-arounds are applied, Vulkan filters will not work on video that was decoded by Vulkan, due to additional bugs in the nvidia drivers. The filters do work correctly on content decoded some other way and then uploaded to Vulkan (eg: decode with nvdec, upload with --vf=format=vulkan).
  * Vulkan filters can only be used with drivers that support VK_EXT_descriptor_buffer, which doesn't include Intel ANV as yet. There is an MR outstanding for this.
  * When dealing with 1080p content, there may be some visual distortion in the bottom lines of frames due to chroma scaling incorporating the extra hidden lines at the bottom of the frame (1080p content is actually stored as 1088 lines), depending on the hardware/driver combination and the scaling algorithm. This cannot be easily addressed, as the mechanical fix for it violates the Vulkan spec and probably requires a spec change to resolve properly.

  All of these caveats will be fixed in either drivers or ffmpeg, and so will not require mpv changes (unless something unexpected happens).

  If you want to run on nvidia with the non-beta drivers, you can use this ffmpeg tree with the work-around patches:

  * https://github.com/philipl/FFmpeg/tree/vulkan-nvidia-workarounds
* various: drop unused #include "config.h" (Thomas Weißschuh, 2023-02-20, 1 file, -1/+0)

  Most sources don't need config.h. The inclusion only leads to lots of unneeded recompilation if the configuration is changed.
* filters/f_hwtransfer: remove VAAPI <-> Vulkan mapping for now (Philip Langdale, 2022-10-29, 1 file, -1/+0)

  This mapping isn't actually relevant until we have the Vulkan interop merged, and it requires a newer version of libavutil than our minimum requirement. So I'm going to remove it from master and put it in the interop PR.

  Fixes #10813
* f_hwtransfer: mp_image_pool: support HW -> HW mapping (Philip Langdale, 2022-09-21, 1 file, -0/+1)

  Certain combinations of hardware formats require the use of hwmap to transfer frames between the formats, rather than hwupload, which will fail if attempted. To keep the usage of vf_format for HW -> HW transfers as intuitive as possible, we should detect these cases and do the map operation instead of uploading.

  For now, the relevant cases are moving between VAAPI and Vulkan, and VAAPI and DRM Prime, in both directions.

  I have introduced the IMGFMT entry for Vulkan here so that I can put in the complete mapping table. It's actually not useless, as you can map to Vulkan, use a Vulkan filter and then map back to VAAPI for display output.
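  As a rough sketch (not mpv's actual code; error handling elided), the decision comes down to choosing between the two transfer primitives of libavutil's generic hwcontext API:

      #include <libavutil/hwcontext.h>

      /* Move a frame between two hardware formats. Some pairs (e.g.
       * VAAPI <-> Vulkan) only support mapping; uploading would fail. */
      static int hw_to_hw(AVFrame *dst, const AVFrame *src, int needs_map)
      {
          if (needs_map) {
              /* Zero-copy: wrap the source surface in the target API. */
              return av_hwframe_map(dst, src, AV_HWFRAME_MAP_READ);
          }
          /* Normal copy/upload path. */
          return av_hwframe_transfer_data(dst, src, 0);
      }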
* video: alias IMGFMT_RGB30 to AV_PIX_FMT_X2RGB10 (wm4, 2020-06-17, 1 file, -0/+4)

  IMGFMT_RGB30 was added first; FFmpeg added AV_PIX_FMT_X2RGB10 later. This is exactly the same, so treat them as such. For some reason, libswscale still seems to output incompatible data - not sure what this is about, but I'm not going to debug it.
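  In fmt-conversion.c terms, an alias is just one more pair in the lookup table (illustrative excerpt):

      {IMGFMT_RGB30, AV_PIX_FMT_X2RGB10},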
* video: drop NV24 alias (wm4, 2020-02-18, 1 file, -1/+0)

  Caused build failures with still supported FFmpeg versions. It's unreferenced, so it's not needed.

  Fixes: #7471
* Remove remains of Libav compatibility (wm4, 2020-02-16, 1 file, -19/+0)

  Libav seems rather dead: no release for 2 years, no new git commits in master for almost a year (with one exception ~6 months ago). From what I can tell, some developers resigned themselves to the horrifying idea to post patches to ffmpeg-devel instead, while the rest of the developers went on to greener pastures.

  Libav was a better project than FFmpeg. Unfortunately, FFmpeg won, because it managed to keep the name and website. Libav was pushed more and more into obscurity: while there was initially a big push for Libav, FFmpeg just remained "in place" and visible for most people. FFmpeg was slowly draining all manpower and energy from Libav. A big part of this was that FFmpeg stole code from Libav (regular merges of the entire Libav git tree), making it some sort of Frankenstein mirror of Libav, think decaying zombie with additional legs ("features") nailed to it. "Stealing" surely is the wrong word; I'm just aping the language that some of the FFmpeg members used to use.

  All that is in the past now, I'm probably the only person left who is annoyed by this, and with this commit I'm putting this decade long problem finally to an end. I just thought I'd express my annoyance about this fucking shitshow one last time.

  The most intrusive change in this commit is the resample filter, which originally used libavresample. The FFmpeg developers refused to enable libavresample by default for drama reasons, and its API was slightly different, so the filter used some big preprocessor mess to stay compatible with libswresample. All that falls away now. The simplification to the build system is also significant.
* img_format: add alias for ffmpeg pal8 format (wm4, 2020-02-10, 1 file, -0/+1)

  For the next commit.
* test: add dumping of img_format metadata (wm4, 2019-11-08, 1 file, -2/+2)

  This is fragile enough that it warrants getting "monitored". This takes the commented test program code from img_format.c, makes it output to a text file, and then compares it to a "ref" file stored in git.

  Originally, I wanted to do the comparison etc. in a shell or Python script. But why not do it in C. So mpv calls /usr/bin/diff as a sub-process now.

  This test will start producing different output if FFmpeg adds new pixel formats or pixel format flags, or if mpv adds new IMGFMT (either aliases to FFmpeg formats or own formats). That is unavoidable, and requires manual inspection of the results, and then updating the ref file.

  The changes in the non-test code are to guarantee that the format ID conversion functions only translate between valid IDs.
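  The shape of the test is roughly this (illustrative only; dump_img_formats and the file paths are made up):

      FILE *f = fopen("out/img_formats.txt", "w");
      dump_img_formats(f);  /* write one line per IMGFMT with its metadata */
      fclose(f);
      /* Compare against the reference stored in git; a non-zero exit
       * status means the output changed and the ref needs inspection. */
      if (system("/usr/bin/diff -u test/ref/img_formats.txt out/img_formats.txt"))
          abort();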
* build: lower required FFmpeg version (wm4, 2019-10-20, 1 file, -0/+2)

  The FFmpeg version was last bumped a long time ago, except in commit 1638fa7b4663e4ad46ccd9750, where it was used for some obscure pixel format. This is pretty annoying, so make it optional.
* vo/gpu: hwdec_vdpau: Support direct mode for 4:4:4 content (Philip Langdale, 2019-07-08, 1 file, -0/+1)

  New releases of VDPAU support decoding 4:4:4 content, and that comes back as NV24 when using 'direct mode' in OpenGL Interop. That means we need to be a little bit smarter about how we set up the OpenGL textures.
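  The gist (illustrative; create_texture is a made-up helper): NV24 keeps chroma at full resolution, so the second interop texture can no longer be allocated at half size:

      int cw = is_nv24 ? w : w / 2, ch = is_nv24 ? h : h / 2;
      create_texture(GL_R8,  w,  h);   /* Y plane               */
      create_texture(GL_RG8, cw, ch);  /* interleaved UV plane  */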
* vo_gpu: make screenshots use the GL renderer (wm4, 2018-02-11, 1 file, -0/+2)

  Using the GL renderer for color conversion will make sure screenshots will use the same conversion as normal video rendering. It can do this for all types of screenshots.

  The logic for when to write 16 bit PNGs changes. To approximate the old behavior, we decide by looking at whether the source video format has more than 8 bits per component. We apply this logic even for window screenshots. Also, 16 bit PNGs now always include an unused alpha channel. The reason is that FFmpeg has RGB48 and RGBA64 formats, but no RGB064. RGB48 is 3 bytes and usually not supported by GPUs for rendering, so we have to use RGBA64, which forces an alpha channel.

  Will break for users who use --target-trc and similar options.

  I considered creating a new gl_video context, but it could double GPU memory use, so I didn't.

  This uses FBOs instead of glGetTexImage(), because that increases the chance it could work on GLES (e.g. ANGLE). Untested. No support for the Vulkan and D3D11 backends yet.

  Fixes #5498. Also fixes #5240, because the code for reading back is not used with the new code path.
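  The readback idea as a bare OpenGL sketch (assumes a complete GL context; fbo, tex, and pixels are set up elsewhere):

      /* Render through the usual GL video path into an FBO-attached
       * texture, then read back; unlike glGetTexImage(), glReadPixels()
       * is also available on GLES. */
      glBindFramebuffer(GL_FRAMEBUFFER, fbo);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                             GL_TEXTURE_2D, tex, 0);
      /* ... run the renderer into this FBO ... */
      glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_SHORT, pixels);
      glBindFramebuffer(GL_FRAMEBUFFER, 0);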
* video: rewrite filtering glue code (wm4, 2018-01-30, 1 file, -2/+1)

  Get rid of the old vf.c code. Replace it with a generic filtering framework, which can potentially handle more than just --vf. At least reimplementing --af with this code is planned. This changes some --vf semantics (including runtime behavior and the "vf" command). The most important ones are listed in interface-changes.

  vf_convert.c is renamed to f_swscale.c. It is now an internal filter that can not be inserted by the user manually.

  f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed once --lavfi-complex is reimplemented on top of f_lavfi.c. (which is conceptually easy, but a big mess due to the data flow changes).

  The existing filters are all changed heavily. The data flow of the new filter framework is different. Especially EOF handling changes - EOF is now a "frame" rather than a state, and must be passed through exactly once. Another major thing is that all filters must support dynamic format changes. The filter reconfig() function goes away. (This sounds complex, but since all filters need to handle EOF draining anyway, they can use the same code, and it removes the mess with reconfig() having to predict the output format, which completely breaks with libavfilter anyway.)

  In addition, there is no automatic format negotiation or conversion. libavfilter's primitive and insufficient API simply doesn't allow us to do this in a reasonable way. Instead, filters can use f_autoconvert as sub-filter, and tell it which formats they support. This filter will in turn add actual conversion filters, such as f_swscale, to perform necessary format changes.

  vf_vapoursynth.c uses the same basic principle of operation as before, but with worryingly different details in data flow. Still appears to work.

  The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are heavily changed. Fortunately, they all used refqueue.c, which is for sharing the data flow logic (especially for managing future/past surfaces and such). It turns out it can be used to factor out most of the data flow.

  Some of these filters accepted software input. Instead of having ad-hoc upload code in each filter, surface upload is now delegated to f_autoconvert, which can use f_hwupload to perform this.

  Exporting VO capabilities is still a big mess (mp_stream_info stuff).

  The D3D11 code drops the redundant image formats, and all code uses the hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a big mess for now.

  f_async_queue is unused.
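  One way to picture the "EOF is a frame" change, with illustrative names (not mpv's actual declarations):

      enum frame_type {
          FRAME_TYPE_NONE,   /* no data available right now            */
          FRAME_TYPE_VIDEO,  /* carries an image                       */
          FRAME_TYPE_EOF,    /* end of stream; forwarded exactly once  */
      };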
* Add DRM_PRIME Format Handling and Display for RockChip MPP decoders (Lionel CHAZALLON, 2017-10-23, 1 file, -0/+3)

  This commit allows using the newly introduced AV_PIX_FMT_DRM_PRIME format in ffmpeg, which allows decoders to provide an AVDRMFrameDescriptor struct. That struct holds dmabuf fds and information allowing zerocopy rendering using KMS / DRM Atomic.

  This has been tested on a RockChip ROCK64 device.
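  For reference, a minimal consumer sketch (import_dmabuf is hypothetical): with AV_PIX_FMT_DRM_PRIME, the descriptor travels in the frame's first data pointer:

      #include <libavutil/hwcontext_drm.h>

      const AVDRMFrameDescriptor *desc =
          (const AVDRMFrameDescriptor *)frame->data[0];
      /* Each object is a dmabuf fd usable for KMS/DRM atomic zerocopy. */
      for (int i = 0; i < desc->nb_objects; i++)
          import_dmabuf(desc->objects[i].fd);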
* hwdec: add mediacodec hardware decoder for IMGFMT_MEDIACODEC frames (Aman Gupta, 2017-10-09, 1 file, -0/+3)
* vaapi: use newer libavutil vaapi pixfmt name (wm4, 2017-09-29, 1 file, -1/+1)

  AV_PIX_FMT_VAAPI_VLD is clearly deprecated - it just doesn't raise any compiler warnings.
* video: drop old D3D11/DXVA2 support (wm4, 2017-09-26, 1 file, -4/+0)

  Now you need FFmpeg git, or something.

  This also gets rid of the last real use of gpu_memcpy(). libavutil does that itself. (vaapi.c still used it, but it was essentially unused, because the code path isn't really in use anymore. It wasn't even included due to the d3d-hwaccel dependency in wscript.)
* vo_direct3d: remove non-working nv12 shader support (wm4, 2017-06-30, 1 file, -2/+0)

  It never worked. It relied on some obscure texture format to provide the equivalent of GL_RG or GL_LUMINANCE_ALPHA, but no hardware ever seemed to report support for it. No idea what's the correct way to do this. On D3D11 it exists, of course.

  (Actually I'd like to remove the whole VO.)
* video: get rid of swapped packed YUV (wm4, 2017-06-30, 1 file, -1/+0)

  Another legacy annoyance. The only place where packed YUV is still important is slightly older Apple hardware or drivers, which require it for efficient hardware decoding.
* video: drop some more IMGFMT aliases (wm4, 2017-06-29, 1 file, -19/+0)

  For vo_opengl and vo_direct3d, these are supported in a generic way. For vf_vapoursynth, we could probably map its VSFormat struct in a generic way, but for now do some bullshit.

  vf_eq.c actually loses support for these formats. We could add generic support too (anything that has 8 bit planes will work), but why bother. The filter is deprecated anyway.
* video: drop some unused IMGFMT aliases (wm4, 2017-06-29, 1 file, -10/+0)

  These formats are supported in a generic way. To get rid of IMGFMT_NV21, remove support from vo_direct3d.c completely.
* video/fmt-conversion, img_format: change license to LGPL (wm4, 2017-06-18, 1 file, -7/+7)

  The problem with fmt-conversion.h is that "lucabe", who disagreed with LGPL, originally wrote it. But it was actually rewritten by "reimar" later. The original switch statement was replaced with a lookup table. No code other than the imgfmt2pixfmt() function signature survives. Neither the format pairs (PIXFMT<->IMGFMT), nor the concept of mapping them, can be copyrighted.

  So changing the license should be fine, because reimar and all other authors involved with the new code agreed to LGPL.

  We also don't consider format pairs added later as copyrightable. (The direct-mapping idea mentioned in the "Copyright" file seems attractive, and I might implement it later anyway.)

  Likewise, there might be some format names added to img_format.h which are not covered by relicensing agreements. These all affect "later" additions, and they follow either the FFmpeg PIXFMT naming or some other pre-existing logic, so this should be fine.
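  The structure that rewrite ended up with is essentially a pair table plus a linear search, roughly like this (a sketch, not the file verbatim; MP_ARRAY_SIZE is mpv's array-length helper):

      static const struct {
          int imgfmt;                 /* mpv IMGFMT_* id    */
          enum AVPixelFormat pixfmt;  /* FFmpeg counterpart */
      } conversion_map[] = {
          {IMGFMT_420P, AV_PIX_FMT_YUV420P},
          /* ... */
      };

      enum AVPixelFormat imgfmt2pixfmt(int fmt)
      {
          for (int i = 0; i < MP_ARRAY_SIZE(conversion_map); i++) {
              if (conversion_map[i].imgfmt == fmt)
                  return conversion_map[i].pixfmt;
          }
          return AV_PIX_FMT_NONE;
      }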
* img_format: drop unused aliases (wm4, 2017-06-18, 1 file, -4/+0)

  vo_opengl uses those automatically via pixdesc.
* d3d: add support for new libavcodec hwaccel API (wm4, 2017-06-08, 1 file, -0/+5)

  Unfortunately quite a mess, in particular due to the need to have some compatibility with the old API. (The old API will be supported only in the short term.)
* Remove compatibility things (wm4, 2016-12-07, 1 file, -8/+0)

  Possible with bumped FFmpeg/Libav. These are just the simple cases.
* vo_opengl: hwdec_cuda: Support P016 output surfaces (Philip Langdale, 2016-11-22, 1 file, -0/+3)

  The latest 375.xx nvidia drivers add support for P016 output surfaces. In combination with an ffmpeg change to return those surfaces, we can display them.

  The bulk of the work is related to knowing which format you're dealing with at the right time. Once you know, it's straightforward.
* img_format: remove some unneeded format definitions (wm4, 2016-09-28, 1 file, -9/+0)

  They're still supported, just that they have no IMGFMT_ alias.
* hwdec_cuda: Rename config variable to be more consistent (Philip Langdale, 2016-09-16, 1 file, -1/+1)

  'cuda-gl' isn't right - you can turn this on without any GL and get some non-zero benefit (with the cuda-copy hwaccel). So 'cuda-hwaccel' seems more consistent with everything else.
* hwdec/opengl: Add support for CUDA and cuvid/NvDecode (Philip Langdale, 2016-09-08, 1 file, -1/+3)

  Nvidia's "NvDecode" API (up until recently called "cuvid") is a cross-platform, but nvidia-proprietary, API that exposes their hardware video decoding capabilities. It is analogous to their DXVA or VDPAU support on Windows or Linux, but without using platform specific API calls.

  As a rule, you'd rather use DXVA or VDPAU as these are more mature and well supported APIs, but on Linux, VDPAU is falling behind the hardware capabilities, and there's no sign that nvidia are making the investments to update it. Most concretely, this means that there is no VP8/9 or HEVC Main10 support in VDPAU. On the other hand, NvDecode does export vp8/9 and partial support for HEVC Main10 (more on that below).

  ffmpeg already has support in the form of the "cuvid" family of decoders. Due to the design of the API, it is best exposed as a full decoder rather than an hwaccel. As such, there are decoders like h264_cuvid, hevc_cuvid, etc.

  These decoders support two output paths today - in both cases, NV12 frames are returned, either in CUDA device memory or regular system memory. In the case of the system memory path, the decoders can be used as-is in mpv today with a command line like:

    mpv --vd=lavc:h264_cuvid foobar.mp4

  Doing this will take advantage of hardware decoding, but the cost of the memcpy to system memory adds up, especially for high resolution video (4K etc).

  To avoid that, we need an hwdec that takes advantage of CUDA's OpenGL interop to copy from device memory into OpenGL textures. That is what this change implements.

  The process is relatively simple, as only basic device context acquisition needs to be done by us - the CUDA buffer pool is managed by the decoder - thankfully.

  The hwdec looks a bit like the vdpau interop one - the hwdec maintains a single set of plane textures, and each output frame is repeatedly mapped into these textures to pass on. The frames are always in NV12 format, at least until 10bit output support emerges.

  The only slightly interesting part of the copying process is that CUDA works by associating PBOs, so we need to define these for each of the textures.

  TODO items:

  * I need to add a download_image function for screenshots. This would do the same copy to system memory that the decoder's system memory output does.
  * There are items to investigate on the ffmpeg side. There appears to be a problem with timestamps for some content.

  Final note: I mentioned HEVC Main10. While there is no 10bit output support, NvDecode can return dithered 8bit NV12, so you can take advantage of the hardware acceleration. This particular mode requires compiling ffmpeg with a modified header (or possibly the CUDA 8 RC) and is not upstream in ffmpeg yet.

  Usage: You will need to specify vo=opengl and hwdec=cuda. Note that hwdec=auto will probably not work as it will try to use vdpau first.

    mpv --hwdec=cuda --vo=opengl foobar.mp4

  If you want to use filters that require frames in system memory, just use the decoder directly without the hwdec, as documented above.
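  For orientation, the per-frame copy looks something like this with the CUDA driver API (a sketch of the texture/array route rather than the PBO association the commit describes; registration happens once, everything else per frame, error handling omitted):

      CUgraphicsResource res;
      cuGraphicsGLRegisterImage(&res, gl_tex, GL_TEXTURE_2D,
                                CU_GRAPHICS_REGISTER_FLAGS_WRITE_DISCARD);
      cuGraphicsMapResources(1, &res, 0);
      CUarray dst;
      cuGraphicsSubResourceGetMappedArray(&dst, res, 0, 0);
      CUDA_MEMCPY2D cpy = {
          .srcMemoryType = CU_MEMORYTYPE_DEVICE,
          .srcDevice     = plane_ptr,    /* decoder's NV12 plane */
          .srcPitch      = plane_pitch,
          .dstMemoryType = CU_MEMORYTYPE_ARRAY,
          .dstArray      = dst,
          .WidthInBytes  = width_bytes,
          .Height        = height,
      };
      cuMemcpy2D(&cpy);                  /* device memory -> GL texture */
      cuGraphicsUnmapResources(1, &res, 0);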
* build: merge d3d11va and dxva2 hwaccel checks (wm4, 2016-05-11, 1 file, -1/+1)

  We don't have any reason to disable either. Both are loaded dynamically at runtime anyway. There is also no reason why dxva2 would disappear from libavcodec any time soon.
* video: add IMGFMT_P010 alias (wm4, 2016-04-29, 1 file, -0/+4)

  Gets rid of some silliness, and might be useful in the future.
* vd_lavc: add d3d11va hwdec (Kevin Mitchell, 2016-03-30, 1 file, -0/+3)

  This commit adds the d3d11va-copy hwdec mode using the ffmpeg d3d11va api. Functions in common with dxva2 are handled in a separate decode/d3d.c file. A future commit will rewrite decode/dxva2.c to share this code.
* video: remove some useless old RGB formats (wm4, 2016-01-25, 1 file, -13/+0)

  Some VOs had support for these - remove them. Typically, these formats will have only some use in cases where using RGB software conversion with libswscale is faster than letting the VO/GPU do it (i.e. almost never).

  For the sake of testing this case, keep IMGFMT_RGB565. This is the least messy format, because it has no padding/alpha bits with unknown semantics.

  Note that decoding to these formats still works. We'll let libswscale repack the data to whatever the VO in use can take.
* sub: find GBRP format automatically when rendering to RGB (wm4, 2015-12-24, 1 file, -2/+0)

  This removes the need to define IMGFMT_GBRAP, which fixes compilation with the current Libav release.

  This also makes it automatically pick up a GBRP format with the same bit width. (Unfortunately, it seems libswscale does not support conversion to AV_PIX_FMT_GBRAP16, so our code falls back to 8 bit, removing precision for video covered by subtitles in cases this code is used.)

  Also, when the source video is e.g. 10 bit YUV, upsample to 16 bit. Whether this is good or bad, it fixes behavior with alpha. Although I'm not sure if the alpha range is really correct ([0,2^16-1] vs. [0,255*256]). Keep in mind that libswscale doesn't even agree with the way we do it.
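  A sketch of the automatic lookup, using libavutil's pixel format descriptors (not the actual mpv code):

      #include <libavutil/pixdesc.h>

      /* Find a planar RGB ("GBRP-style") format with the given depth. */
      static enum AVPixelFormat find_gbrp(int depth)
      {
          const AVPixFmtDescriptor *d = NULL;
          while ((d = av_pix_fmt_desc_next(d))) {
              if ((d->flags & AV_PIX_FMT_FLAG_PLANAR) &&
                  (d->flags & AV_PIX_FMT_FLAG_RGB) &&
                  d->nb_components == 3 && d->comp[0].depth == depth)
                  return av_pix_fmt_desc_get_id(d);
          }
          return AV_PIX_FMT_NONE;
      }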
* sub: better alpha blending when rendering to alpha surfaces (wm4, 2015-12-24, 1 file, -0/+1)

  This actually treats destination alpha correctly, and gives much better results than before. I don't know if this is perfectly correct yet, though. A slight difference with vo_opengl behavior suggests it might not be.

  Note that this does not affect VOs with true alpha support. vo_opengl does not use this code at all, and does the alpha calculations in OpenGL instead.
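  The standard "over" operator for compositing onto a destination that itself has alpha (premultiplied, values in [0,1]) is presumably what "treats destination alpha correctly" refers to:

      /* src over dst, both premultiplied by their alpha: */
      out_rgb = src_rgb + dst_rgb * (1.0 - src_a);
      out_a   = src_a   + dst_a   * (1.0 - src_a);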
* video: remove VDA support (wm4, 2015-09-28, 1 file, -3/+0)

  VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no reason to support VDA anymore. In fact, we had a bug that made VDA not useable with older FFmpeg versions in some newer mpv releases. VideoToolbox is supported even on slightly older OSX versions, and if not, you can still run mpv without hw decoding.
* hwdec: add VideoToolbox support (Sebastien Zwickert, 2015-08-05, 1 file, -0/+3)

  VDA is being deprecated in OS X 10.11 so this is needed to keep hwdec working. The code needs libavcodec support which was added recently (to FFmpeg git, libav doesn't support it).

  Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
* Update license headers (Marcin Kurczewski, 2015-04-13, 1 file, -5/+4)

  Signed-off-by: wm4 <wm4@nowhere>
* RPI support (wm4, 2015-03-29, 1 file, -0/+3)

  This requires FFmpeg git master for accelerated hardware decoding. Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav will also work.

  Most things work. Screenshots don't work with accelerated/opaque decoding (except using full window screenshot mode). Subtitles are very slow - even simple but huge overlays can cause frame drops.

  This always uses fullscreen mode. It uses dispmanx and mmal directly, and there are no window managers or anything on this level.

  vo_opengl also kind of works, but is pretty useless and slow. It can't use opaque hardware decoding (copy back can be used by forcing the option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend is preferred over the X11 ones in case you're trying on X11; but X11 is even more useless on RPI.

  This doesn't correctly reject extended h264 profiles and thus doesn't fall back to software decoding. The hw supports only up to the high profile, and will e.g. return garbage for Hi10P video.

  This sets a precedent of enabling hw decoding by default, but only if RPI support is compiled (which hopefully means it stays disabled on desktop Linux platforms). While it's more or less required to use hw decoding on the weak RPI, it causes more problems than it solves on real platforms (Linux has the Intel GPU problem, OSX still has some cases with broken decoding). So I can live with this compromise of having different defaults depending on the platform.

  Raspberry Pi 2 is required. This wasn't tested on the original RPI, though at least decoding itself seems to work (but full playback was not tested).
* video: work around libswscale for PNG pixel formats (wm4, 2015-02-06, 1 file, -4/+5)

  The intention is that we can test vo_opengl with high bit depth PNGs better. This throws libswscale completely out of the loop, which before was needed in order to convert from big endian to little endian.

  Also apply a minimal cleanup to fmt-conversion.c (unrelated).
* vo_opengl: handle grayscale input better, add YA16 support (wm4, 2015-01-21, 1 file, -0/+4)

  Simply clamp off the U/V components in the colormatrix, instead of doing something special in the shader.

  Also, since YA8/YA16 gave a plane_bits value of 16/32, and a colormatrix calculation overflowed with 32, add a component_bits field to the image format descriptor, which for YA8/YA16 returns 8/16 (the wrong value had no bad consequences otherwise).
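  The clamping trick amounts to fixing the chroma input at its neutral value inside the matrix multiply (illustrative layout, assuming out = m * (y, u, v) + c with normalized components):

      /* Grayscale input: pin u = v = 0.5 (neutral chroma) by folding the
       * U/V coefficients into the constant term and zeroing them out. */
      for (int r = 0; r < 3; r++) {
          c[r] += 0.5f * (m[r][1] + m[r][2]);
          m[r][1] = m[r][2] = 0;
      }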
* video: remove swapped-endian image format aliases (wm4, 2014-11-05, 1 file, -53/+27)