path: root/video/img_format.c
* img_format: remove some unneeded alpha flag handling (wm4, 2019-11-08, 1 file, -3/+0)
  Don't know what this was for, but the result doesn't change.
* test: add dumping of img_format metadata (wm4, 2019-11-08, 1 file, -95/+0)
  This is fragile enough that it warrants getting "monitored".

  This takes the commented test program code from img_format.c, makes it output to a text file, and then compares it to a "ref" file stored in git.

  Originally, I wanted to do the comparison etc. in a shell or Python script. But why not do it in C. So mpv calls /usr/bin/diff as a sub-process now.

  This test will start producing different output if FFmpeg adds new pixel formats or pixel format flags, or if mpv adds new IMGFMT (either aliases to FFmpeg formats or own formats). That is unavoidable, and requires manual inspection of the results, and then updating the ref file.

  The changes in the non-test code are to guarantee that the format ID conversion functions only translate between valid IDs.
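A minimal sketch of the approach described above, i.e. dumping the format table to a text file and comparing it against a ref file checked into git by running diff as a sub-process. File names and the dump contents are placeholders, not mpv's actual test harness:

    #include <stdio.h>
    #include <stdlib.h>

    // Run diff as a sub-process; exit status 0 means the files are identical.
    static int matches_ref(const char *out_path, const char *ref_path)
    {
        char cmd[1024];
        snprintf(cmd, sizeof(cmd), "/usr/bin/diff -u -- '%s' '%s'", ref_path, out_path);
        return system(cmd) == 0;
    }

    int main(void)
    {
        FILE *f = fopen("img_formats.txt", "w");
        if (!f)
            return 1;
        fprintf(f, "...dump of all known image formats and their flags...\n"); // placeholder dump
        fclose(f);
        if (!matches_ref("img_formats.txt", "ref/img_formats.txt")) {
            fprintf(stderr, "format dump changed; inspect the diff and update the ref file\n");
            return 1;
        }
        return 0;
    }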
* img_format: remove some unused format flags (wm4, 2019-11-03, 1 file, -9/+1)
  They were used at some point, but then fell into disuse. In general, these old flags are all a bit fuzzy, so it's a good idea to remove them as much as possible.

  The comment about MP_IMGFLAG_PAL isn't true anymore. The old meaning was deprecated at some point, and the flag was removed from "pseudo paletted" formats. I think mpv at one point changed its own flag from AV_PIX_FMT_FLAG_PSEUDOPAL to AV_PIX_FMT_FLAG_PAL, when the former was deprecated, and it became unnecessary to allocate a palette for non-paletted formats. (The one who deprecated it in FFmpeg was me, if you wonder.)

  MP_IMGFLAG_PLANAR was used in command.c; use a relatively similar flag as replacement.
* img_format: add function to find image format by layout (wm4, 2019-11-02, 1 file, -0/+35)
  This is similar to mp_imgfmt_find(), but probably a bit saner. Used by the next commit. The previous commit is required to map this unambiguously between all formats.
* img_format: add mp_regular_imgfmt.forced_csp field (wm4, 2019-11-02, 1 file, -0/+2)
  As the code comment says, this is needed to disambiguate FFmpeg formats. This struct only describes the "physical" layout of a format, while FFmpeg also attaches part of the colorspace information to the format.
* img_format: add RGB30 format (wm4, 2019-10-20, 1 file, -1/+21)
  FFmpeg does not support this from what I can see. This makes supporting it a bit awkward. Later commits use this format.
* img_format: update test program (wm4, 2019-10-20, 1 file, -8/+16)
  The plane pointer checking assert() triggered at least on gray8, because that has a "pseudo palette" in ffmpeg, which mpv refuses to allocate.

  Remove a strange duplicated printf().

  Log the component type where available.

  (Why is this even here? I hate it when there are commented test programs in source files.)
* video: generally try to align image data on 64 bytes (wm4, 2019-09-19, 1 file, -1/+1)
  Generally, using x86 SIMD efficiently (or crash-free) requires aligning all data on boundaries of 16, 32, or 64 (depending on instruction set used). 64 bytes are needed for AVX-512, 32 for old AVX, 16 for SSE. Both FFmpeg and zimg usually require aligned data for this reason.

  FFmpeg is very unclear about alignment. Yes, it requires you to align data pointers and strides. No, it doesn't tell you how much, except sometimes (libavcodec has a legacy-looking avcodec_align_dimensions2() API function, that requires a heavy-weight AVCodecContext as argument).

  Sometimes, FFmpeg will take a shit on YOUR and ITS OWN alignment. For example, vf_crop will randomly reduce alignment of data pointers, depending on the crop parameters. On the other hand, some libavfilter filters or libavcodec encoders may randomly crash if they get the wrong alignment. I have no idea how this thing works at all.

  FFmpeg usually doesn't seem to signal alignment internally anywhere, and usually leaves it to av_malloc() etc. to allocate with proper alignment. libavutil/mem.c currently has an ALIGN define, which is set to 64 if FFmpeg is built with AVX-512 support, or as low as 16 if built without any AVX support. The really funny thing is that a normal FFmpeg build will e.g. align tiny string allocations to 64 bytes, even if the machine does not support AVX at all.

  For zimg use (in a later commit), we also want guaranteed alignment. Modern x86 should actually not be much slower at unaligned accesses, but that doesn't help. zimg's dumb intrinsic code apparently randomly chooses between aligned or unaligned accesses (depending on compiler, I guess), and on some CPUs these can even cause crashes. So just treat the requirement to align as a fact of life.

  All this means that we should probably make sure our own allocations are 64-byte aligned. This still doesn't guarantee alignment in all cases, but it's slightly better than before.

  This also makes me wonder whether we should always override libavcodec's buffer pool, just so we have a guaranteed alignment. Currently, we only do that if --vd-lavc-dr is used (and if that actually works). On the other hand, it always uses DR on my machine, so who cares.
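A rough sketch of 64-byte-aligned plane allocation, assuming POSIX posix_memalign() and illustrative helper names (this is not mpv's allocator): both the data pointer and the stride are rounded up to 64, which satisfies SSE, AVX, and AVX-512 at once.

    #include <stdint.h>
    #include <stdlib.h>

    #define IMG_ALIGN 64  // power of two; covers SSE (16), AVX (32), AVX-512 (64)

    static size_t align_up(size_t v, size_t a)
    {
        return (v + a - 1) & ~(a - 1);
    }

    // Returns a 64-byte-aligned plane whose stride is also a multiple of 64.
    static uint8_t *alloc_plane(size_t width_bytes, size_t height, size_t *out_stride)
    {
        size_t stride = align_up(width_bytes, IMG_ALIGN);
        void *p = NULL;
        if (posix_memalign(&p, IMG_ALIGN, stride * height) != 0)
            return NULL;
        *out_stride = stride;
        return p;
    }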
* video: remove libavutil PSEUDOPAL stuff (wm4, 2018-04-03, 1 file, -3/+1)
  Not needed anymore with newest libavutil.
* video: rewrite filtering glue code (wm4, 2018-01-30, 1 file, -10/+0)
  Get rid of the old vf.c code. Replace it with a generic filtering framework, which can potentially handle more than just --vf. At least reimplementing --af with this code is planned. This changes some --vf semantics (including runtime behavior and the "vf" command). The most important ones are listed in interface-changes.

  vf_convert.c is renamed to f_swscale.c. It is now an internal filter that can not be inserted by the user manually.

  f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed once --lavfi-complex is reimplemented on top of f_lavfi.c. (which is conceptually easy, but a big mess due to the data flow changes).

  The existing filters are all changed heavily. The data flow of the new filter framework is different. Especially EOF handling changes - EOF is now a "frame" rather than a state, and must be passed through exactly once.

  Another major thing is that all filters must support dynamic format changes. The filter reconfig() function goes away. (This sounds complex, but since all filters need to handle EOF draining anyway, they can use the same code, and it removes the mess with reconfig() having to predict the output format, which completely breaks with libavfilter anyway.)

  In addition, there is no automatic format negotiation or conversion. libavfilter's primitive and insufficient API simply doesn't allow us to do this in a reasonable way. Instead, filters can use f_autoconvert as sub-filter, and tell it which formats they support. This filter will in turn add actual conversion filters, such as f_swscale, to perform necessary format changes.

  vf_vapoursynth.c uses the same basic principle of operation as before, but with worryingly different details in data flow. Still appears to work.

  The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are heavily changed. Fortunately, they all used refqueue.c, which is for sharing the data flow logic (especially for managing future/past surfaces and such). It turns out it can be used to factor out most of the data flow. Some of these filters accepted software input. Instead of having ad-hoc upload code in each filter, surface upload is now delegated to f_autoconvert, which can use f_hwupload to perform this.

  Exporting VO capabilities is still a big mess (mp_stream_info stuff).

  The D3D11 code drops the redundant image formats, and all code uses the hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a big mess for now.

  f_async_queue is unused.
* video: add utility function to pick conversion image format from a list (wm4, 2018-01-18, 1 file, -0/+9)
* img_format: remove some guards against old ffmpeg API (wm4, 2017-11-06, 1 file, -5/+2)
  These are always present in ffmpeg-mpv, and never in Libav.
* img_format: AV_PIX_FMT_PAL8 is RGB (wm4, 2017-10-09, 1 file, -2/+5)
  This particular format is not marked as AV_PIX_FMT_FLAG_RGB in FFmpeg's pixdesc table, so mpv assumed it's a YUV format. This is a regression, since the old code in mp_imgfmt_get_desc() also treated this format specially to avoid this problem. Another format which was special-cased in the old code was AV_PIX_FMT_MONOBLACK, so make an exception for it as well.

  Maybe this problem could be avoided by mp_image_params_guess_csp() not forcing certain colorimetric parameters by the implied colorspace, but certainly that would cause other problems. At least there are mistagged files out there that would break. (Do we actually care?)

  Fixes #4965.
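The special-casing described here can be expressed roughly as follows (an illustrative check against libavutil's pixdesc API, not the exact mpv code):

    #include <libavutil/pixdesc.h>

    // Treat a format as RGB if pixdesc says so, but special-case PAL8 and
    // MONOBLACK, which carry no AV_PIX_FMT_FLAG_RGB in FFmpeg's table.
    static int is_rgb_like(enum AVPixelFormat fmt)
    {
        if (fmt == AV_PIX_FMT_PAL8 || fmt == AV_PIX_FMT_MONOBLACK)
            return 1;
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        return desc && (desc->flags & AV_PIX_FMT_FLAG_RGB);
    }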
* video: fix green shit (wm4, 2017-09-30, 1 file, -1/+1)
* img_format: #if -> #ifdef (wm4, 2017-08-16, 1 file, -1/+1)
  Oops.
* img_format: better exclusion of bayer formats (wm4, 2017-08-15, 1 file, -0/+5)
  These are useless and shouldn't be confused with normal RGB formats. Replace the earlier hack checking the format name with a proper check. (Not sure when this flag was added. Libav won't have it anyway, but also no bayer formats.)
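A sketch of such a "proper check" against the pixdesc flag, with an #ifdef guard for libavutil versions that predate AV_PIX_FMT_FLAG_BAYER (illustrative only):

    #include <libavutil/pixdesc.h>

    static int is_bayer(const AVPixFmtDescriptor *desc)
    {
    #ifdef AV_PIX_FMT_FLAG_BAYER
        return desc && (desc->flags & AV_PIX_FMT_FLAG_BAYER);
    #else
        return 0;  // old libavutil/Libav: no flag, but also no bayer formats
    #endif
    }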
* vo_opengl: support float pixel formats (wm4, 2017-08-15, 1 file, -1/+24)
  Like AV_PIX_FMT_GBRPF32LE.
* img_format: drop some unused things (wm4, 2017-06-30, 1 file, -11/+2)
* vo_opengl: rely on FFmpeg pixdesc a bit more (wm4, 2017-06-29, 1 file, -0/+171)
  Add something that allows us to extract the component order from various RGBA formats. In fact, also handle YUV, GBRP, and XYZ formats with this.

  It introduces a new struct mp_regular_imgfmt, that hopefully will eventually replace struct mp_imgfmt_desc. The latter is still needed by a lot of code though, especially generic code. Also vo_opengl still uses the old one, so this commit is sort of incomplete.

  Due to its genericness, it's also possible that this commit introduces rendering bugs, or accepts formats it shouldn't accept.
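A very rough sketch of the idea behind such a layout descriptor; the field names below are illustrative and not the actual definition of struct mp_regular_imgfmt in img_format.h:

    #include <stdint.h>

    struct regular_imgfmt_plane {
        int num_components;
        // Maps each packed component to a semantic channel:
        // 1=R/Y, 2=G/U, 3=B/V, 4=alpha, 0=padding.
        uint8_t components[4];
    };

    // Describes only the "physical" layout: component size, plane count,
    // component order per plane, and chroma subsampling.
    struct regular_imgfmt {
        int component_size;              // bytes per component (1 or 2)
        int num_planes;
        struct regular_imgfmt_plane planes[4];
        int chroma_w, chroma_h;          // chroma subsampling factors
    };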
* video/fmt-conversion, img_format: change license to LGPL (wm4, 2017-06-18, 1 file, -7/+7)
  The problem with fmt-conversion.h is that "lucabe", who disagreed with LGPL, originally wrote it. But it was actually rewritten by "reimar" later. The original switch statement was replaced with a lookup table. No code other than the imgfmt2pixfmt() function signature survives. Neither the format pairs (PIXFMT<->IMGFMT), nor the concept of mapping them, can be copyrighted.

  So changing the license should be fine, because reimar and all other authors involved with the new code agreed to LGPL. We also don't consider format pairs added later as copyrightable. (The direct-mapping idea mentioned in the "Copyright" file seems attractive, and I might implement it later anyway.)

  Likewise, there might be some format names added to img_format.h, which are not covered by relicensing agreements. These all affect "later" additions, and they follow either the FFmpeg PIXFMT naming or some other pre-existing logic, so this should be fine.
* img_format: minor simplification (wm4, 2017-06-18, 1 file, -3/+1)
* img_format: drop legacy name mappings (wm4, 2017-06-18, 1 file, -18/+1)
  Not needed anymore.
* img_format: stop setting some fields to dummy values for hwaccel formats (wm4, 2017-02-21, 1 file, -6/+7)
  Flags like MP_IMGFLAG_YUV were meaningless for hwaccel formats, and setting fields like component_bits made even less sense.
* Remove compatibility things (wm4, 2016-12-07, 1 file, -12/+5)
  Possible with bumped FFmpeg/Libav. These are just the simple cases.
* video: remove d3d11 video processor use from OpenGL interop (wm4, 2016-05-29, 1 file, -0/+10)
  We now have a video filter that uses the d3d11 video processor, so it makes no sense to have one in the VO interop code. The VO uses it for formats not directly supported by ANGLE (so the video data is converted to a RGB texture, which ANGLE can take in).

  Change this so that the video filter is automatically inserted if needed. Move the code that maps RGB surfaces to its own interop backend. Add a bunch of new image formats, which are used to enforce the new constraints, and to automatically insert the filter only when needed.

  The added vf mechanism to auto-insert the d3d11vpp filter is very dumb and primitive, and will work only for this specific purpose. The format negotiation mechanism in the filter chain is generally not very pretty, and mostly broken as well. (libavfilter has a different mechanism, and these mechanisms don't match well, so vf_lavfi uses some sort of hack. It only works because hwaccel and non-hwaccel formats are strictly separated.)

  The RGB interop is now only used with older ANGLE versions. The only reason I'm keeping it is because it's relatively isolated (uses only existing mechanisms and adds no new concepts), and because I want to be able to compare the behavior of the old code with the new one for testing. It will be removed eventually.

  If ANGLE has NV12 interop, P010 is now handled by converting to NV12 with the video processor, instead of converting it to RGB and using the old mechanism to import that as a texture.
* vo_opengl: refactor pass_read_video and texture binding (Niklas Haas, 2016-03-05, 1 file, -0/+1)
  This is a pretty major rewrite of the internal texture binding mechanic, which makes it more flexible.

  In general, the difference between the old and current approaches is that now, all texture description is held in a struct img_tex and only explicitly bound with pass_bind. (Once bound, a texture unit is assumed to be set in stone and no longer tied to the img_tex)

  This approach makes the code inside pass_read_video significantly more flexible and cuts down on the number of weird special cases and spaghetti logic.

  It also has some improvements, e.g. cutting down greatly on the number of unnecessary conversion passes inside pass_read_video (which was previously mostly done to cope with the fact that the alternative would have resulted in a combinatorial explosion of code complexity).

  Some other notable changes (and potential improvements):

  - texture expansion is now *always* handled in pass_read_video, and the colormatrix never does this anymore. (Which means the code could probably be removed from the colormatrix generation logic, modulo some other VOs)

  - struct fbo_tex now stores both its "physical" and "logical" (configured) size, which cuts down on the amount of width/height baggage on some function calls

  - vo_opengl can now technically support textures with different bit depths (e.g. 10 bit luma, 8 bit chroma) - but the APIs it queries inside img_format.c don't export this (nor does ffmpeg support it, really), so the status quo of using the same tex_mul for all planes is kept.

  - dumb_mode is now only needed because of the indirect_fbo being in the main rendering pipeline. If we reintroduce p->use_indirect and thread a transform through the entire program this could be skipped where unnecessary, allowing for the removal of dumb_mode. But I'm not sure how to do this in a clean way. (Which is part of why it got introduced to begin with)

  - It would be trivial to resurrect source-shader now (it would just be one extra 'if' inside pass_read_video).
* img_format: fix padding calculation with P010 (wm4, 2016-01-08, 1 file, -1/+1)
  This was broken during refactoring commit e2d90b38 before pushing it.
* img_format: fix compilation on older libavutil releases (wm4, 2016-01-07, 1 file, -1/+1)
  AVComponentDescriptor.offset was introduced relatively recently. On older releases, you have to use AVComponentDescriptor.offset_plus1, which is now deprecated.

  Instead of adding ifdeffery, assume AV_PIX_FMT_NV21 is the only format for which this applies (and will remain the only case), which is probably true enough.
* img_format: add a generic flag for semi-planar formats (wm4, 2016-01-07, 1 file, -4/+26)
* img_format: take care of pixfmts that declare padding (wm4, 2016-01-07, 1 file, -2/+9)
  A format could declare that some or all LSBs in a component are padding bits by setting a non-0 AVComponentDescriptor.shift value. This means we would interpret it incorrectly, because until now we always assumed all regular formats have the padding in the MSBs.

  Not a single format that does this actually exists, though. But a NV12 variant will be added later in FFmpeg.
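For illustration, the declared component layout can be read like this (using current libavutil field names): a non-0 shift means that many padding LSBs sit below the value, i.e. the value occupies bits [shift, shift + depth - 1] of its container.

    #include <stdio.h>
    #include <libavutil/pixdesc.h>

    static void print_component_layout(enum AVPixelFormat fmt)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        if (!desc)
            return;
        for (int i = 0; i < desc->nb_components; i++) {
            const AVComponentDescriptor *c = &desc->comp[i];
            printf("component %d: depth=%d shift=%d offset=%d\n",
                   i, c->depth, c->shift, c->offset);
        }
    }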
* sub: find GBRP format automatically when rendering to RGB (wm4, 2015-12-24, 1 file, -3/+3)
  This removes the need to define IMGFMT_GBRAP, which fixes compilation with the current Libav release.

  This also makes it automatically pick up a GBRP format with the same bit width. (Unfortunately, it seems libswscale does not support conversion to AV_PIX_FMT_GBRAP16, so our code falls back to 8 bit, removing precision for video covered by subtitles in cases this code is used.)

  Also, when the source video is e.g. 10 bit YUV, upsample to 16 bit. Whether this is good or bad, it fixes behavior with alpha. Although I'm not sure if the alpha range is really correct ([0,2^16-1] vs. [0,255*256]). Keep in mind that libswscale doesn't even agree with the way we do it.
* vo_opengl: fix issues with some obscure pixel formats (wm4, 2015-12-07, 1 file, -1/+5)
  The computation of the tex_mul variable was broken in multiple ways. This variable is used e.g. by debanding for moving expansion of 10 bit fixed-point input to normalized range to another stage of processing.

  One obvious bug was that the rgb555 pixel format was broken. This format has component_bits=5, but obviously it's already sampled in normalized range, and does not need expansion. The tex_mul-free code path avoids this by not using the colormatrix. (The code was originally designed to work around dealing with the generally complicated pixel formats by only using the colormatrix in the YUV case.)

  Another possible bug was with 10 bit input. It expanded the input by bringing the [0,2^10) range to [0,1], and then treating the expanded input as 16 bit input. I didn't bother to check what this actually computed, but it's somewhat likely it was wrong anyway. Now it uses mp_get_csp_mul(), and disables expansion when computing the YUV matrix.
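As a hedged illustration (not necessarily what mp_get_csp_mul() computes): the usual expansion factor for fixed-point data in a normalized texture is (2^b - 1) / (2^c - 1), since a c-bit value stored in a b-bit texture channel is divided by 2^b - 1 on sampling.

    // Scale factor that brings c-bit fixed-point data stored in a b-bit
    // normalized texture channel back to the [0,1] range.
    static float fixed_point_tex_mul(int component_bits, int texture_bits)
    {
        return (float)((1 << texture_bits) - 1) / (float)((1 << component_bits) - 1);
    }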
* video: fix playback of pal8 (wm4, 2015-11-01, 1 file, -1/+2)
  PAL8 is the only format that is RGB, has only 1 component, and is byte-aligned. It was accidentally detected by the GBRP case as planar RGB. (It would have been ok if it were gray; what ruins it is that it's actually paletted, and the color values do not correspond to colors, but palette entries.) Pseudo-pal formats are ok; in fact AV_PIX_FMT_GRAY is rightfully marked as MP_IMGFLAG_YUV_P.
* vo_opengl: support all kinds of GBRP formats (wm4, 2015-10-18, 1 file, -3/+9)
  Adds support for AV_PIX_FMT_GBRP9, AV_PIX_FMT_GBRP10, AV_PIX_FMT_GBRP12, AV_PIX_FMT_GBRP14, AV_PIX_FMT_GBRP16, AV_PIX_FMT_GBRAP, and AV_PIX_FMT_GBRAP16. (Not that it matters, because nobody uses these anyway.)
* video: remove VDA support (wm4, 2015-09-28, 1 file, -1/+0)
  VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no reason to support VDA anymore. In fact, we had a bug that made VDA not useable with older FFmpeg versions in some newer mpv releases. VideoToolbox is supported even on slightly older OSX versions, and if not, you still can run mpv without hw decoding.
* video: do not use deprecated libavutil pixdesc fields (wm4, 2015-09-10, 1 file, -5/+15)
  These were normalized and are saner now. We want to use the new fields, and also get rid of the deprecation warnings, so use them. There's no release yet which uses these, so some ifdeffery is unfortunately needed.
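A sketch of the kind of ifdeffery this refers to; the OLD_PIXDESC_FIELDS macro is a placeholder, since the exact libavutil version cutoff isn't stated here:

    #include <libavutil/pixdesc.h>

    static int component_depth(const AVPixFmtDescriptor *desc, int i)
    {
    #if OLD_PIXDESC_FIELDS  /* placeholder: true only for pre-rename libavutil */
        return desc->comp[i].depth_minus1 + 1;
    #else
        return desc->comp[i].depth;  // the normalized field name
    #endif
    }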
* hwdec: add VideoToolbox support (Sebastien Zwickert, 2015-08-05, 1 file, -0/+1)
  VDA is being deprecated in OS X 10.11 so this is needed to keep hwdec working. The code needs libavcodec support which was added recently (to FFmpeg git, libav doesn't support it).

  Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
* Update license headers (Marcin Kurczewski, 2015-04-13, 1 file, -5/+4)
  Signed-off-by: wm4 <wm4@nowhere>
* vo_opengl: move minor helper to common code (wm4, 2015-03-09, 1 file, -0/+3)
  The generic image format code should carry most of the "knowledge" about image formats.
* video: try to keep implied alpha when using conversion filters (wm4, 2015-01-21, 1 file, -1/+1)
  Don't just discard alpha. This probably does the right thing, in the rare situations when alpha matters at all.
* vo_opengl: handle grayscale input better, add YA16 support (wm4, 2015-01-21, 1 file, -2/+6)
  Simply clamp off the U/V components in the colormatrix, instead of doing something special in the shader.

  Also, since YA8/YA16 gave a plane_bits value of 16/32, and a colormatrix calculation overflowed with 32, add a component_bits field to the image format descriptor, which for YA8/YA16 returns 8/16 (the wrong value had no bad consequences otherwise).
* vf_scale: replace ancient fallback image format selection (wm4, 2015-01-21, 1 file, -0/+18)
  If video output and VO don't support the same format, a conversion filter needs to be inserted. Since a VO can support multiple formats, and the filter chain also can deal with multiple formats, you basically have to pick from a huge matrix of possible conversions.

  The old MPlayer code had a quite naive algorithm: it first checked whether any conversion from the list of preferred conversions matched, and if not, it was falling back on checking a hardcoded list of output formats (more or less sorted by quality). This had some unintended side-effects, like not using obvious "replacement" formats, selecting the wrong colorspace, selecting a bit depth that is too high or too low, and more.

  Use avcodec_find_best_pix_fmt_of_list() provided by FFmpeg instead. This function was made for this purpose, and should select the "best" format. Libav provides a similar function, but with a different name - there is a function with the same name in FFmpeg, but it has different semantics (I'm not sure if Libav or FFmpeg fucked up here).

  This also removes handling of VFCAP_CSP_SUPPORTED vs. VFCAP_CSP_SUPPORTED_BY_HW, which has no meaning anymore, except possibly for filter chains with multiple scale filters.

  Fixes #1494.
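A usage sketch of avcodec_find_best_pix_fmt_of_list(); the candidate list is an arbitrary example, and must be terminated with AV_PIX_FMT_NONE:

    #include <libavcodec/avcodec.h>

    // Pick the candidate that loses the least when converting from src.
    static enum AVPixelFormat pick_output_format(enum AVPixelFormat src, int has_alpha)
    {
        static const enum AVPixelFormat supported[] = {
            AV_PIX_FMT_YUV420P, AV_PIX_FMT_NV12, AV_PIX_FMT_RGBA,
            AV_PIX_FMT_NONE,
        };
        int loss = 0;
        return avcodec_find_best_pix_fmt_of_list(supported, src, has_alpha, &loss);
    }

On return, loss describes what the chosen conversion would sacrifice (e.g. chroma resolution, depth, or alpha), which is handy for logging.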
* command: change properties added in previous commit (wm4, 2015-01-10, 1 file, -1/+3)
  Make their meaning more exact, and don't pretend that there's a reasonable definition for "bits-per-pixel". Also make unset fields unavailable.

  average_depth still might be inconsistent: for example, 10 bit 4:2:0 is identified as 24 bits, but RGB 4:4:4 as 12 bits. So YUV formats seemingly drop the per-component padding, while RGB formats do not. Internally it's consistent though: 10 bit YUV components are read as 16 bit, and the padding must be 0 (it's basically like an odd fixed-point representation, rather than a bitfield).
* video: remove swapped-endian image format aliases (wm4, 2014-11-05, 1 file, -1/+1)
  Like the previous commit, this removes names only, not actual support for these formats.
* video: add image format test program (wm4, 2014-11-05, 1 file, -0/+63)
* video: passthrough unknown AVPixelFormats (wm4, 2014-11-05, 1 file, -1/+2)
  This is a rather radical change: instead of maintaining a whitelist of FFmpeg formats we support, we automatically support all formats. In general, a format which doesn't have an explicit IMGFMT_* name will be converted to a known format through libswscale, or will be handled by code which can treat pixel formats in a generic way using the pixel format description, like vo_opengl.

  AV_PIX_FMT_UYYVYY411 is a special-case. It's packed YUV with chroma subsampling by 4 in both directions. Its component order is documented as "Cb Y0 Y1 Cr Y2 Y3", meaning there's one UV sample for 4 Y samples. This means each pixel uses 1.5 bytes (4 pixels have 1 UV sample, so 4 bytes + 2 bytes). FFmpeg can actually handle this format with its generic mechanism in an extremely awkward way, but it doesn't work for us. Blacklist it, and hope no similar formats will be added in the future.

  Currently, the AV_PIX_FMT_*s allowed are limited to a numeric value of 500. More is not allowed, and there are some fixed size arrays that need to contain any possible format (look for IMGFMT_END dependencies). We could have this simpler by repl