path: root/video/img_format.h

* video: don't define IMGFMT_VULKAN conditionally (sfan5, 2024-02-26; 1 file, -2/+0)
  We generally try to avoid that due to the #ifdef mess. The equivalent
  format is defined in ffmpeg 4.4 while our interop code requires a higher
  version, but that doesn't cause any problems.

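  As an aside, a minimal sketch of the pattern this commit avoids, with
  made-up macro names and values (not mpv's actual definitions):

      /* Before: the constant only exists when a feature test passes,
       * so every user of the symbol needs a matching #ifdef. */
      #if HAVE_VULKAN_INTEROP              /* hypothetical config macro */
      #define IMGFMT_EXAMPLE_VULKAN 1064   /* hypothetical value */
      #endif

      /* After: always defined; only the interop code itself stays
       * conditional on the FFmpeg version / build feature. */
      #define IMGFMT_EXAMPLE_VULKAN 1064   /* hypothetical value */
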
* csputils: replace mp_colorspace with pl_color_space (Kacper Michajłow, 2024-01-22; 1 file, -3/+3)

* img_format: remove duplicated macros (NRK, 2023-10-23; 1 file, -6/+0)
  these are defined in osdep/endian.h already

* hwdec_vulkan: add Vulkan HW Interop (Philip Langdale, 2023-05-28; 1 file, -0/+4)
  Vulkan Video Decoding has finally become a reality, as it's now showing
  up in shipping drivers, and the ffmpeg support has been merged. With
  that in mind, this change introduces HW interop support for ffmpeg
  Vulkan frames.

  The implementation is functionally complete: it can display frames
  produced by hardware decoding, and it can work with ffmpeg vulkan
  filters. There are still various caveats due to gaps and bugs in
  drivers, so YMMV, as always. Primary testing has been done on Intel,
  AMD, and nvidia hardware on Linux, with basic Windows testing on nvidia.

  Notable caveats:
  * Due to driver bugs, video decoding on nvidia does not work right now,
    unless you use the Vulkan Beta driver. It can be worked around, but
    that requires ffmpeg changes that are not considered acceptable to
    merge.
  * Even if those work-arounds are applied, Vulkan filters will not work
    on video that was decoded by Vulkan, due to additional bugs in the
    nvidia drivers. The filters do work correctly on content decoded some
    other way and then uploaded to Vulkan (e.g. decode with nvdec, upload
    with --vf=format=vulkan).
  * Vulkan filters can only be used with drivers that support
    VK_EXT_descriptor_buffer, which doesn't include Intel ANV as yet.
    There is an MR outstanding for this.
  * When dealing with 1080p content, there may be some visual distortion
    in the bottom lines of frames due to chroma scaling incorporating the
    extra hidden lines at the bottom of the frame (1080p content is
    actually stored as 1088 lines), depending on the hardware/driver
    combination and the scaling algorithm. This cannot be easily
    addressed, as the mechanical fix for it violates the Vulkan spec and
    probably requires a spec change to resolve properly.

  All of these caveats will be fixed in either drivers or ffmpeg, and so
  will not require mpv changes (unless something unexpected happens).

  If you want to run on nvidia with the non-beta drivers, you can use this
  ffmpeg tree with the work-around patches:
  * https://github.com/philipl/FFmpeg/tree/vulkan-nvidia-workarounds

* filters/f_hwtransfer: remove VAAPI <-> Vulkan mapping for now (Philip Langdale, 2022-10-29; 1 file, -1/+0)
  This mapping isn't actually relevant until we have the Vulkan interop
  merged, and it requires a newer version of libavutil than our minimum
  requirement. So I'm going to remove it from master and put it in the
  interop PR.

  Fixes #10813

* mp_imgfmt: move DRMPRIME format to end of enum (Aaron Boxer, 2022-10-26; 1 file, -1/+1)
  The dmabuf-wayland vo supports both DRMPRIME and VAAPI formats. If an
  upload filter is needed, we want to try VAAPI hwdec first, since most
  users will be using VAAPI, and there is a several second delay when
  DRMPRIME hwdec is tried and fails.

* f_hwtransfer: mp_image_pool: support HW -> HW mapping (Philip Langdale, 2022-09-21; 1 file, -0/+1)
  Certain combinations of hardware formats require the use of hwmap to
  transfer frames between the formats, rather than hwupload, which will
  fail if attempted.

  To keep the usage of vf_format for HW -> HW transfers as intuitive as
  possible, we should detect these cases and do the map operation instead
  of uploading.

  For now, the relevant cases are moving between VAAPI and Vulkan, and
  VAAPI and DRM Prime, in both directions.

  I have introduced the IMGFMT entry for Vulkan here so that I can put in
  the complete mapping table. It's actually not useless, as you can map to
  Vulkan, use a Vulkan filter and then map back to VAAPI for display
  output.

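  To illustrate the idea, the detection described above amounts to a small
  pair lookup. This is a hedged sketch with stand-in format constants, not
  mpv's actual table or API:

      #include <stdbool.h>
      #include <stddef.h>

      enum hw_fmt { HW_VAAPI, HW_VULKAN, HW_DRMPRIME };  /* stand-ins */

      /* Does going from one hardware format to another need a map
       * operation instead of an upload? Pairs taken from the commit
       * message: VAAPI <-> Vulkan and VAAPI <-> DRM Prime, both ways. */
      static bool needs_hw_map(enum hw_fmt from, enum hw_fmt to)
      {
          static const enum hw_fmt pairs[][2] = {
              {HW_VAAPI, HW_VULKAN},   {HW_VULKAN, HW_VAAPI},
              {HW_VAAPI, HW_DRMPRIME}, {HW_DRMPRIME, HW_VAAPI},
          };
          for (size_t i = 0; i < sizeof(pairs) / sizeof(pairs[0]); i++) {
              if (pairs[i][0] == from && pairs[i][1] == to)
                  return true;
          }
          return false;
      }
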
* img_format: expose another helper (wm4, 2020-05-21; 1 file, -0/+3)

* video: remove useless mp_imgfmt_desc.avformat field (wm4, 2020-05-20; 1 file, -1/+0)
  Had 1 user; easily replaced.

* video: clean up pixel metadata stuff some more (wm4, 2020-05-20; 1 file, -32/+57)
  A repeat of the previous useless commits.

  Pondered whether to use separate fields or just a flags integer for
  color and component types; the latter won for now. Functions like
  mp_imgfmt_get_component_type() are now discouraged, and
  mp_imgfmt_desc.flags is back for defining all information. Some days ago
  I felt like the opposite would be the better design. Fortunately, it
  doesn't matter.

  With this, I think all image format properties that mpv needs are
  exhaustively defined all in one place.

* video: shuffle imgfmt metadata code around (wm4, 2020-05-20; 1 file, -59/+49)
  I guess I decided to stuff it all into mp_imgfmt_desc (the "old"
  struct). This is probably a mistake. At first I was afraid that this
  struct would get too fat (probably justified, and hereby happened), but
  on the other hand mp_imgfmt_get_desc() (which builds the struct) calls
  the former mp_imgfmt_get_layout(), and the separation doesn't make too
  much sense anyway. Just merge them.

  Still, try to keep out the extra info for packed YUV bullshit. I think
  the result is OK, and there's as much information as there was before.

  The test output changes a little. There's no independent bits[] array
  anymore, so formats which previously did not have this set now show it.
  (These formats are mpv-only and are still missing the metadata; to be
  added later.) Also, the output for the cursed packed formats changes.

* video: add pixel component location metadata (wm4, 2020-05-18; 1 file, -0/+57)
  I thought I'd probably want something like this, so the hardcoded stuff
  in repack.c can be removed eventually. Of course this has no purpose at
  all, and will not have any. (For now, this provides only metadata, and
  nothing uses it, apart from the "test" that dumps it as text.)

  This adds full support for AV_PIX_FMT_UYYVYY411 (probably out of spite,
  because the format is 100% useless). Support for some mpv-only formats
  is missing, ironically.

  The code goes through _lengths_ to try to make sense out of the FFmpeg
  AVPixFmtDescriptor data. Which makes it even more amazing that the new
  metadata basically mirrors pixdesc, and just adds to it. Considering
  code complexity and speed issues (it takes time to crunch through all
  this shit all the time), and especially the fact that pixdesc is very
  _incomplete_, it would probably be better to have our own table of all
  formats. But then we'd have to scramble every time FFmpeg adds a new
  format, which would be annoying. On the other hand, by using pixdesc, we
  get the excitement to see whether this code will work, or break
  everything in catastrophic ways.

  The data structure still sucks a lot. Maybe I'll redo it again. The text
  dump is weirdly differently formatted than the C struct, because I'm not
  happy with the representation. Maybe I'll redo it all over again.

  In summary: this commit does nothing.

* video: clean up some imgfmt related stuff (wm4, 2020-05-18; 1 file, -13/+5)
  Remove the vaguely defined plane_bits and component_bits fields from
  struct mp_imgfmt_desc. Add weird replacements for existing uses. Remove
  the bytes[] field, replace uses with bpp[]. Fix some potential alignment
  issues in existing code.

  As a compromise, split mp_image_pixel_ptr() into 2 functions, because I
  think it's a bad idea to implicitly round, but for some callers being
  slightly less strict is convenient.

  This shouldn't really change anything. In fact, it's a 100% useless
  change. I'm just cleaning up what I started almost 8 years ago (see
  commit 00653a3eb052). With this I've decided to keep mp_imgfmt_desc,
  just removing the weird parts, and keeping the saner parts.

* video: remove RGB32/BGR32 aliases (wm4, 2020-05-11; 1 file, -9/+0)
  They are sort of confusing, and they hide the fact that they have an
  alpha component. Using the actual formats directly is no problem, since
  these were useful only for big endian systems, something we can't test
  anyway.

* video: add yuv float formats (wm4, 2020-05-09; 1 file, -0/+16)
  Adding all these so I can use them for obscure processing purposes (see
  later draw_bmp commit). There isn't really a reason why they should
  exist. On the other hand, they're just labels for formats that can be
  handled in a generic way, and this commit adds support for them in the
  zimg wrapper and vo_gpu just by making the formats exist. (Well, vo_gpu
  had to be fixed in the previous commit.)

* video: fix rgb30 component order (wm4, 2020-05-09; 1 file, -1/+1)
  Was broken with a zimg wrapper refucktor before the previous commit. In
  addition, it seems this didn't match the vo_drm format, or the format
  naming convention. So the order actually changes, and the format is
  redefined. (The img_format.h comment was probably wrong.)

  Change vo_gpu to the new format as well, so we can still test it.

* video: separate repacking code from zimg and make it independent (wm4, 2020-05-09; 1 file, -1/+2)
  For whatever purpose. If anything, this makes the zimg wrapper cleaner.

  The added tests are not particularly exhaustive, but nice to have. This
  also makes the scale_zimg.c test pretty useless, because it only tests
  repacking (going through the zimg wrapper). In theory, the repack_tests
  things could also be used on scalers, but I guess it doesn't matter.

  Some things are added over the previous zimg wrapper code. For example,
  some fringe formats can now be expanded to 8 bit per component for
  convenience.

* img_format: make component order in comment more logical (wm4, 2020-05-09; 1 file, -1/+1)

* img_format: add some mpv-only helper formats (wm4, 2020-04-23; 1 file, -0/+9)
  Utterly useless, but the intention is to make dealing with corner case
  pixel formats (forced upon us by FFmpeg, very rarely) less of a pain.
  The zimg wrapper will use them. (It already supports these formats
  automatically, but it will help with its internals.)

  Y1 is considered RGB, even though gray formats are generally treated as
  YUV for various reasons. mpv will default all YUV formats to limited
  range internally, which makes no sense for a 1 bit format, so this is a
  problem. I wanted to avoid having mp_image_params_guess_csp() (which
  applies the default) explicitly check for an image format, so although a
  bit janky, this seems to be a good solution, especially because I really
  don't give a shit about these formats, other than having to handle them.
  It's notable that AV_PIX_FMT_MONOBLACK (also 1 bit gray, just packed)
  already explicitly marked itself as RGB.

* video: change chroma_w/chroma_h fields to use shift instead of size (wm4, 2020-04-23; 1 file, -2/+4)
  When I added mp_regular_imgfmt, I made the chroma subsampling use the
  actual chroma division factor, instead of a shift (log2 of the actual
  value). I had some ideas about how this was (probably?) more intuitive
  and general. But nothing ever uses non-power of 2 subsampling (except
  jpeg in rare cases apparently, because the world is a bad place).

  Change the fields back to use shifts and rename them to avoid mistakes.

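  For illustration, the two representations mentioned above, a division
  factor versus a shift (variable names are made up, not mpv's fields):

      #include <stdio.h>

      int main(void)
      {
          int luma_w = 1920, luma_h = 1080;

          /* 4:2:0 expressed as division factors ... */
          int chroma_div_x = 2, chroma_div_y = 2;
          /* ... and as shifts, i.e. log2 of the factor. */
          int chroma_shift_x = 1, chroma_shift_y = 1;

          /* Both print 960x540 for power-of-2 subsampling. */
          printf("%dx%d\n", luma_w / chroma_div_x, luma_h / chroma_div_y);
          printf("%dx%d\n", luma_w >> chroma_shift_x, luma_h >> chroma_shift_y);
          return 0;
      }
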
* img_format: add format description table for mpv-only formats (wm4, 2020-04-23; 1 file, -10/+16)
  Make this slightly less ad-hoc. Also correct the missing alpha flag for
  yap8/yap16.

  Despite reduced redundancy, the LOC is going up anyway... whatever.

* zimg: add support for big endian input and output (wm4, 2020-04-13; 1 file, -0/+4)
  One of the extremely annoying dumb things in ffmpeg is that most pixel
  formats are available as little endian and big endian variants. (The
  sane way would be having native endian formats only.) Usually, most of
  the real codecs use native formats only, while non-native formats are
  used by fringe raw codecs only. But the PNG encoders and decoders
  unfortunately use big endian formats, and since PNG is such a popular
  format, this causes problems for us. In particular, the current zimg
  wrapper will refuse to work (and mpv will fall back to sws) when writing
  non-8 bit PNGs.

  So add non-native endian support to zimg. This is done in a fairly
  "generic" way (which means lots of potential for bugs). If input is a
  "regular" format (and just byte-swapped), the rest happens
  automatically, which happens to cover all interesting formats.

  Some things could be more efficient; for example, endian swapping is
  done on the data before it's passed to the unpacker. You could make
  endian swapping part of the actual unpacking process, which might be
  slightly faster. You could avoid copying twice in some cases (such as
  when there's no actual repacker, or if alignment needs to be corrected).
  But I don't really care. It's reasonably fast for the normal case.

  Not entirely sure whether this is correct. Some (but not many) formats
  are covered by the tests, some I tested manually. Some I can't even
  test, because libswscale doesn't support them (like nv20*).

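  A minimal sketch of the "swap to native endian first, then treat it as a
  regular format" idea for 16-bit samples (illustrative only, not the
  actual zimg wrapper code):

      #include <stdint.h>
      #include <stddef.h>

      /* Byte-swap one row of 16-bit samples so the rest of the pipeline
       * can pretend the input was native endian all along. */
      static void swap_row_16(uint16_t *dst, const uint16_t *src, size_t n)
      {
          for (size_t i = 0; i < n; i++)
              dst[i] = (uint16_t)((src[i] >> 8) | (src[i] << 8));
      }
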
* video: drop NV24 alias (wm4, 2020-02-18; 1 file, -3/+0)
  Caused build failures with still supported FFmpeg versions. It's
  unreferenced, so it's not needed.

  Fixes: #7471

* img_format: add alias for ffmpeg pal8 format (wm4, 2020-02-10; 1 file, -0/+3)
  For the next commit.

* img_format: add gray/alpha planar formats (wm4, 2020-02-10; 1 file, -0/+4)
  The zimg wrapper "needs" these formats as an intermediary when repacking
  the normal gray/alpha packed format. The packed format is used by the
  png decoder and encoder, and is thus interesting.

  Unfortunately, mpv-only formats are a mess right now, because all the
  existing code is focused around using the FFmpeg metadata for pixel
  formats. This should be improved, but not now, so make the mess worse.

  This commit doesn't add support for it to the zimg wrapper yet.

* img_format: remove some unneeded alpha flag handling (wm4, 2019-11-08; 1 file, -3/+0)
  Don't know what this was for, but the result doesn't change.

* img_format: remove some unused format flags (wm4, 2019-11-03; 1 file, -16/+1)
  They were used at some point, but then fell into disuse. In general,
  these old flags are all a bit fuzzy, so it's a good idea to remove them
  as much as possible.

  The comment about MP_IMGFLAG_PAL isn't true anymore. The old meaning was
  deprecated at some point, and the flag was removed from "pseudo
  paletted" formats. I think mpv at one point changed its own flag from
  AV_PIX_FMT_FLAG_PSEUDOPAL to AV_PIX_FMT_FLAG_PAL, when the former was
  deprecated, and it became unnecessary to allocate a palette for
  non-paletted formats. (The one who deprecated it in FFmpeg was me, if
  you wonder.)

  MP_IMGFLAG_PLANAR was used in command.c; use a relatively similar flag
  as replacement.

* img_format: add function to find image format by layout (wm4, 2019-11-02; 1 file, -0/+1)
  This is similar to mp_imgfmt_find(), but probably a bit saner. Used by
  the next commit. The previous commit is required to map this
  unambiguously between all formats.

* img_format: add mp_regular_imgfmt.forced_csp field (wm4, 2019-11-02; 1 file, -0/+5)
  As the code comment says, this is needed to disambiguate FFmpeg formats.
  This struct only describes the "physical" layout of a format, while
  FFmpeg also attaches part of the colorspace information to the format.

* img_format: add more explanations to component_pad field (wm4, 2019-11-02; 1 file, -0/+5)
  Weird shit. I thought this was a clever way to elegantly handle two
  cases at once, but maybe it's just confusing.

* img_format: add RGB30 format (wm4, 2019-10-20; 1 file, -0/+3)
  FFmpeg does not support this from what I can see. This makes supporting
  it a bit awkward. Later commits use this format.

* img_format: document a minor guarantee for certain imgfmt metadata (wm4, 2019-10-20; 1 file, -0/+1)

* vo/gpu: hwdec_vdpau: Support direct mode for 4:4:4 content (Philip Langdale, 2019-07-08; 1 file, -0/+3)
  New releases of VDPAU support decoding 4:4:4 content, and that comes
  back as NV24 when using 'direct mode' in OpenGL Interop. That means we
  need to be a little bit smarter about how we set up the OpenGL textures.

* img_format.h: cosmetics: fix whitespace (wm4, 2018-03-15; 1 file, -1/+1)

* vo_gpu: make screenshots use the GL renderer (wm4, 2018-02-11; 1 file, -0/+3)
  Using the GL renderer for color conversion will make sure screenshots
  use the same conversion as normal video rendering. It can do this for
  all types of screenshots.

  The logic for when to write 16 bit PNGs changes. To approximate the old
  behavior, we decide by looking at whether the source video format has
  more than 8 bits per component. We apply this logic even for window
  screenshots. Also, 16 bit PNGs now always include an unused alpha
  channel. The reason is that FFmpeg has RGB48 and RGBA64 formats, but no
  RGB064. RGB48 is 3 bytes and usually not supported by GPUs for
  rendering, so we have to use RGBA64, which forces an alpha channel.

  Will break for users who use --target-trc and similar options.

  I considered creating a new gl_video context, but it could double GPU
  memory use, so I didn't.

  This uses FBOs instead of glGetTexImage(), because that increases the
  chance it could work on GLES (e.g. ANGLE). Untested. No support for the
  Vulkan and D3D11 backends yet.

  Fixes #5498. Also fixes #5240, because the code for reading back is not
  used with the new code path.

* video: rewrite filtering glue code (wm4, 2018-01-30; 1 file, -8/+1)
  Get rid of the old vf.c code. Replace it with a generic filtering
  framework, which can potentially handle more than just --vf. At least
  reimplementing --af with this code is planned. This changes some --vf
  semantics (including runtime behavior and the "vf" command). The most
  important ones are listed in interface-changes.

  vf_convert.c is renamed to f_swscale.c. It is now an internal filter
  that can not be inserted by the user manually.

  f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed
  once --lavfi-complex is reimplemented on top of f_lavfi.c (which is
  conceptually easy, but a big mess due to the data flow changes).

  The existing filters are all changed heavily. The data flow of the new
  filter framework is different. Especially EOF handling changes - EOF is
  now a "frame" rather than a state, and must be passed through exactly
  once.

  Another major thing is that all filters must support dynamic format
  changes. The filter reconfig() function goes away. (This sounds complex,
  but since all filters need to handle EOF draining anyway, they can use
  the same code, and it removes the mess with reconfig() having to predict
  the output format, which completely breaks with libavfilter anyway.)

  In addition, there is no automatic format negotiation or conversion.
  libavfilter's primitive and insufficient API simply doesn't allow us to
  do this in a reasonable way. Instead, filters can use f_autoconvert as
  sub-filter, and tell it which formats they support. This filter will in
  turn add actual conversion filters, such as f_swscale, to perform
  necessary format changes.

  vf_vapoursynth.c uses the same basic principle of operation as before,
  but with worryingly different details in data flow. Still appears to
  work.

  The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are
  heavily changed. Fortunately, they all used refqueue.c, which is for
  sharing the data flow logic (especially for managing future/past
  surfaces and such). It turns out it can be used to factor out most of
  the data flow. Some of these filters accepted software input. Instead of
  having ad-hoc upload code in each filter, surface upload is now
  delegated to f_autoconvert, which can use f_hwupload to perform this.

  Exporting VO capabilities is still a big mess (mp_stream_info stuff).

  The D3D11 code drops the redundant image formats, and all code uses the
  hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a
  big mess for now.

  f_async_queue is unused.

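  A hedged sketch of the "EOF is a frame, not a state" idea from the
  message above; the types here are invented for illustration and are not
  mpv's actual f_* API:

      /* A filter forwards exactly one FRAME_EOF through the chain after
       * draining its queued output, instead of consulting a separate
       * "we hit EOF" state flag. */
      enum frame_kind { FRAME_NONE, FRAME_VIDEO, FRAME_EOF };

      struct frame {
          enum frame_kind kind;
          void *data;            /* image payload when kind == FRAME_VIDEO */
      };
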
* video: make IMGFMT_IS_HWACCEL() return 0 or 1 (wm4, 2018-01-18; 1 file, -1/+1)
  Sometimes helps avoid usage mistakes.

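  A generic illustration of the kind of mistake this avoids, using a
  made-up flag macro rather than mpv's actual one:

      #define EXAMPLE_FLAG_HW 0x40                        /* made-up flag bit */
      #define IS_HW_RAW(f)    ((f) & EXAMPLE_FLAG_HW)     /* yields 0 or 0x40 */
      #define IS_HW_NORM(f)   (!!((f) & EXAMPLE_FLAG_HW)) /* yields 0 or 1 */

      /* IS_HW_RAW(f) == 1 is never true even when the flag is set, and the
       * raw result can't be stored in a 1-bit field; the !! form can. */
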
* video: add utility function to pick conversion image format from a list (wm4, 2018-01-18; 1 file, -0/+1)

* Add DRM_PRIME Format Handling and Display for RockChip MPP decoders (Lionel CHAZALLON, 2017-10-23; 1 file, -0/+1)
  This commit allows using the newly introduced AV_PIX_FMT_DRM_PRIME
  format in ffmpeg, which allows decoders to provide an
  AVDRMFrameDescriptor struct. That struct holds dmabuf fds and
  information allowing zerocopy rendering using KMS / DRM Atomic.

  This has been tested on a RockChip ROCK64 device.

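  For context, a hedged sketch of how a consumer reads the dmabuf fds out
  of such a frame (error handling omitted; assumes the descriptor is
  carried in the frame's data[0], as documented for AV_PIX_FMT_DRM_PRIME):

      #include <libavutil/frame.h>
      #include <libavutil/hwcontext_drm.h>

      static void list_dmabuf_fds(const AVFrame *frame)
      {
          const AVDRMFrameDescriptor *desc =
              (const AVDRMFrameDescriptor *)frame->data[0];
          for (int i = 0; i < desc->nb_objects; i++) {
              int fd = desc->objects[i].fd;  /* dmabuf fd for zerocopy import */
              (void)fd;                      /* hand this to KMS/EGL/etc. */
          }
      }
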
* hwdec: add mediacodec hardware decoder for IMGFMT_MEDIACODEC frames (Aman Gupta, 2017-10-09; 1 file, -0/+1)

* vo_opengl: support float pixel formats (wm4, 2017-08-15; 1 file, -0/+11)
  Like AV_PIX_FMT_GBRPF32LE.

* img_format: fix a comment (wm4, 2017-07-15; 1 file, -3/+2)
  This was changed a while ago. Part of it might still apply to the old
  D3D hwaccel glue code, though.

* img_format: drop some unused things (wm4, 2017-06-30; 1 file, -5/+0)

* vo_direct3d: remove non-working nv12 shader support (wm4, 2017-06-30; 1 file, -3/+0)
  It never worked. It relied on some obscure texture format to provide the
  equivalent of GL_RG or GL_LUMINANCE_ALPHA, but no hardware seemed to
  report support for it ever. No idea what's the correct way to do this.
  On D3D11 it exists, of course.

  (Actually I'd like to remove the whole VO.)

* video: get rid of swapped packed YUV (wm4, 2017-06-30; 1 file, -1/+0)
  Another legacy annoyance. The only place where packed YUV is still
  important is slightly older Apple hardware or drivers, which require it
  for efficient hardware decoding.

* video: drop some more IMGFMT aliases (wm4, 2017-06-29; 1 file, -21/+0)
  For vo_opengl and vo_direct3d, these are supported in a generic way. For
  vf_vapoursynth, we could probably map its VSFormat struct in a generic
  way, but for now do some bullshit.

  vf_eq.c actually loses support for these formats. We could add generic
  support too (anything that has 8 bit planes will work), but why bother.
  The filter is deprecated anyway.

* video: drop some unused IMGFMT aliases (wm4, 2017-06-29; 1 file, -14/+1)
  These formats are supported in a generic way. To get rid of IMGFMT_NV21,
  remove support from vo_direct3d.c completely.

* vo_opengl: rely on FFmpeg pixdesc a bit more (wm4, 2017-06-29; 1 file, -0/+33)
  Add something that allows us to extract the component order from various
  RGBA formats. In fact, also handle YUV, GBRP, and XYZ formats with this.

  It introduces a new struct mp_regular_imgfmt, that hopefully will
  eventually replace struct mp_imgfmt_desc. The latter is still needed by
  a lot of code though, especially generic code. Also vo_opengl still uses
  the old one, so this commit is sort of incomplete.

  Due to its genericness, it's also possible that this commit introduces
  rendering bugs, or accepts formats it shouldn't accept.

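  A hedged sketch of the concept behind such a "regular format"
  description, i.e. planes of equally sized components plus their order;
  the field names below are invented for illustration, not the real
  mp_regular_imgfmt layout:

      #include <stdint.h>

      struct example_plane {
          int num_components;
          uint8_t components[4];   /* e.g. {1, 2, 3, 4} meaning R, G, B, A */
      };

      struct example_regular_imgfmt {
          int component_size;              /* bytes per component */
          int num_planes;
          struct example_plane planes[4];  /* packed RGBA: 1 plane, order R,G,B,A */
      };
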
* video/fmt-conversion, img_format: change license to LGPL (wm4, 2017-06-18; 1 file, -7/+7)
  The problem with fmt-conversion.h is that "lucabe", who disagreed with
  LGPL, originally wrote it. But it was actually rewritten by "reimar"
  later. The original switch statement was replaced with a lookup table.
  No code other than the imgfmt2pixfmt() function signature survives.
  Neither the format pairs (PIXFMT<->IMGFMT), nor the concept of mapping
  them, can be copyrighted.

  So changing the license should be fine, because reimar and all other
  authors involved with the new code agreed to LGPL.

  We also don't consider format pairs added later as copyrightable. (The
  direct-mapping idea mentioned in the "Copyright" file seems attractive,
  and I might implement it later anyway.)

  Likewise, there might be some format names added to img_format.h which
  are not covered by relicensing agreements. These all affect "later"
  additions, and they follow either the FFmpeg PIXFMT naming or some other
  pre-existing logic, so this should be fine.

* img_format: drop unused aliases (wm4, 2017-06-18; 1 file, -5/