path: root/video/img_format.c
* img_format: drop some unused things (wm4, 2017-06-30, 1 file, -11/+2)
* vo_opengl: rely on FFmpeg pixdesc a bit more (wm4, 2017-06-29, 1 file, -0/+171)
  Add something that allows us to extract the component order from various
  RGBA formats. In fact, also handle YUV, GBRP, and XYZ formats with this.

  It introduces a new struct mp_regular_imgfmt, that hopefully will eventually
  replace struct mp_imgfmt_desc. The latter is still needed by a lot of code
  though, especially generic code. Also vo_opengl still uses the old one, so
  this commit is sort of incomplete.

  Due to its genericness, it's also possible that this commit introduces
  rendering bugs, or accepts formats it shouldn't accept.
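A rough C sketch of the kind of descriptor this commit describes. The type and
field names below are illustrative only and are not claimed to match mpv's
actual struct mp_regular_imgfmt:

    // Illustrative sketch, not mpv's real definition. The idea: describe
    // "regular" formats (packed RGBA, planar YUV/GBRP, XYZ, ...) purely by
    // component size and per-plane component order, so generic code can
    // derive swizzles and conversions without a per-format whitelist.
    struct regular_imgfmt_plane {
        int num_components;
        // Component references in memory order, e.g. {3, 1, 2} for a packed
        // GBR plane (1=R/Y, 2=G/U, 3=B/V, 4=A); 0 would mean padding.
        int components[4];
    };

    struct regular_imgfmt {
        int component_size;       // bytes per component (1 or 2)
        int component_pad;        // unused low bits, e.g. 6 for 10-in-16 bit
        int num_planes;
        struct regular_imgfmt_plane planes[4];
        int chroma_xs, chroma_ys; // log2 chroma subsampling shifts
    };
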
* video/fmt-conversion, img_format: change license to LGPL (wm4, 2017-06-18, 1 file, -7/+7)
  The problem with fmt-conversion.h is that "lucabe", who disagreed with LGPL,
  originally wrote it. But it was actually rewritten by "reimar" later. The
  original switch statement was replaced with a lookup table. No code other
  than the imgfmt2pixfmt() function signature survives. Neither the format
  pairs (PIXFMT<->IMGFMT), nor the concept of mapping them, can be
  copyrighted.

  So changing the license should be fine, because reimar and all other authors
  involved with the new code agreed to LGPL.

  We also don't consider format pairs added later as copyrightable. (The
  direct-mapping idea mentioned in the "Copyright" file seems attractive, and
  I might implement it later anyway.)

  Likewise, there might be some format names added to img_format.h, which are
  not covered by relicensing agreements. These all affect "later" additions,
  and they follow either the FFmpeg PIXFMT naming or some other pre-existing
  logic, so this should be fine.
* img_format: minor simplification (wm4, 2017-06-18, 1 file, -3/+1)
* img_format: drop legacy name mappings (wm4, 2017-06-18, 1 file, -18/+1)
  Not needed anymore.
* img_format: stop setting some fields to dummy values for hwaccel formats (wm4, 2017-02-21, 1 file, -6/+7)
  Flags like MP_IMGFLAG_YUV were meaningless for hwaccel formats, and setting
  fields like component_bits made even less sense.
* Remove compatibility things (wm4, 2016-12-07, 1 file, -12/+5)
  Possible with bumped FFmpeg/Libav. These are just the simple cases.
* video: remove d3d11 video processor use from OpenGL interop (wm4, 2016-05-29, 1 file, -0/+10)
  We now have a video filter that uses the d3d11 video processor, so it makes
  no sense to have one in the VO interop code. The VO uses it for formats not
  directly supported by ANGLE (so the video data is converted to a RGB
  texture, which ANGLE can take in).

  Change this so that the video filter is automatically inserted if needed.
  Move the code that maps RGB surfaces to its own interop backend. Add a bunch
  of new image formats, which are used to enforce the new constraints, and to
  automatically insert the filter only when needed.

  The added vf mechanism to auto-insert the d3d11vpp filter is very dumb and
  primitive, and will work only for this specific purpose. The format
  negotiation mechanism in the filter chain is generally not very pretty, and
  mostly broken as well. (libavfilter has a different mechanism, and these
  mechanisms don't match well, so vf_lavfi uses some sort of hack. It only
  works because hwaccel and non-hwaccel formats are strictly separated.)

  The RGB interop is now only used with older ANGLE versions. The only reason
  I'm keeping it is because it's relatively isolated (uses only existing
  mechanisms and adds no new concepts), and because I want to be able to
  compare the behavior of the old code with the new one for testing. It will
  be removed eventually.

  If ANGLE has NV12 interop, P010 is now handled by converting to NV12 with
  the video processor, instead of converting it to RGB and using the old
  mechanism to import that as a texture.
* vo_opengl: refactor pass_read_video and texture binding (Niklas Haas, 2016-03-05, 1 file, -0/+1)
  This is a pretty major rewrite of the internal texture binding mechanic,
  which makes it more flexible.

  In general, the difference between the old and current approaches is that
  now, all texture description is held in a struct img_tex and only explicitly
  bound with pass_bind. (Once bound, a texture unit is assumed to be set in
  stone and no longer tied to the img_tex)

  This approach makes the code inside pass_read_video significantly more
  flexible and cuts down on the number of weird special cases and spaghetti
  logic.

  It also has some improvements, e.g. cutting down greatly on the number of
  unnecessary conversion passes inside pass_read_video (which was previously
  mostly done to cope with the fact that the alternative would have resulted
  in a combinatorial explosion of code complexity).

  Some other notable changes (and potential improvements):

  - texture expansion is now *always* handled in pass_read_video, and the
    colormatrix never does this anymore. (Which means the code could probably
    be removed from the colormatrix generation logic, modulo some other VOs)

  - struct fbo_tex now stores both its "physical" and "logical" (configured)
    size, which cuts down on the amount of width/height baggage on some
    function calls

  - vo_opengl can now technically support textures with different bit depths
    (e.g. 10 bit luma, 8 bit chroma) - but the APIs it queries inside
    img_format.c don't export this (nor does ffmpeg support it, really), so
    the status quo of using the same tex_mul for all planes is kept.

  - dumb_mode is now only needed because of the indirect_fbo being in the main
    rendering pipeline. If we reintroduce p->use_indirect and thread a
    transform through the entire program this could be skipped where
    unnecessary, allowing for the removal of dumb_mode. But I'm not sure how
    to do this in a clean way. (Which is part of why it got introduced to
    begin with)

  - It would be trivial to resurrect source-shader now (it would just be one
    extra 'if' inside pass_read_video).
* img_format: fix padding calculation with P010 (wm4, 2016-01-08, 1 file, -1/+1)
  This was broken during refactoring commit e2d90b38 before pushing it.
* img_format: fix compilation on older libavutil releases (wm4, 2016-01-07, 1 file, -1/+1)
  AVComponentDescriptor.offset was introduced relatively recently. On older
  releases, you have to use AVComponentDescriptor.offset_plus1, which is now
  deprecated.

  Instead of adding ifdeffery, assume AV_PIX_FMT_NV21 is the only format for
  which this applies (and will remain the only case), which is probably true
  enough.
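For reference, the kind of compatibility shim this commit deliberately avoids
might look roughly like the sketch below; HAVE_AV_COMP_OFFSET is a
hypothetical build-system check, not a real FFmpeg macro:

    #include <libavutil/pixdesc.h>

    // Hypothetical compat shim (the commit chose not to add this): older
    // libavutil only has offset_plus1 (the offset plus one, 0 meaning
    // unset), newer releases expose offset directly.
    #if HAVE_AV_COMP_OFFSET
    #define MP_COMP_OFFSET(comp) ((comp).offset)
    #else
    #define MP_COMP_OFFSET(comp) ((comp).offset_plus1 - 1)
    #endif

    static int first_component_offset(enum AVPixelFormat fmt)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        return desc ? MP_COMP_OFFSET(desc->comp[0]) : -1;
    }
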
* img_format: add a generic flag for semi-planar formats (wm4, 2016-01-07, 1 file, -4/+26)
* img_format: take care of pixfmts that declare padding (wm4, 2016-01-07, 1 file, -2/+9)
  A format could declare that some or all LSBs in a component are padding bits
  by setting a non-0 AVComponentDescriptor.shift value. This means we would
  interpret it incorrectly, because until now we always assumed all regular
  formats have the padding in the MSBs.

  Not a single format that does this actually exists, though. But a NV12
  variant will be added later in FFmpeg.
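A minimal sketch of the check described above, using current libavutil field
names:

    #include <stdbool.h>
    #include <libavutil/pixdesc.h>

    // For byte-aligned components, a non-zero .shift means the .depth valid
    // bits sit above that many padding LSBs (e.g. P010: depth 10, shift 6),
    // instead of the padding-in-the-MSBs layout assumed so far.
    static bool component_has_lsb_padding(enum AVPixelFormat fmt, int comp)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        return desc && comp < desc->nb_components && desc->comp[comp].shift > 0;
    }
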
* sub: find GBRP format automatically when rendering to RGB (wm4, 2015-12-24, 1 file, -3/+3)
  This removes the need to define IMGFMT_GBRAP, which fixes compilation with
  the current Libav release.

  This also makes it automatically pick up a GBRP format with the same bit
  width. (Unfortunately, it seems libswscale does not support conversion to
  AV_PIX_FMT_GBRAP16, so our code falls back to 8 bit, removing precision for
  video covered by subtitles in cases where this code is used.)

  Also, when the source video is e.g. 10 bit YUV, upsample to 16 bit. Whether
  this is good or bad, it fixes behavior with alpha. Although I'm not sure if
  the alpha range is really correct ([0,2^16-1] vs. [0,255*256]). Keep in mind
  that libswscale doesn't even agree with the way we do it.
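One way to pick a planar RGB (GBR) format with a given component depth is to
scan libavutil's descriptors, as sketched below; mpv's actual lookup goes
through its own imgfmt tables, so treat this as illustrative only:

    #include <libavutil/pixdesc.h>

    // Return a little-endian planar RGB format with the requested per-
    // component depth, or AV_PIX_FMT_NONE if there is none.
    static enum AVPixelFormat find_gbrp_with_depth(int depth)
    {
        const AVPixFmtDescriptor *desc = NULL;
        while ((desc = av_pix_fmt_desc_next(desc))) {
            if ((desc->flags & AV_PIX_FMT_FLAG_RGB) &&
                (desc->flags & AV_PIX_FMT_FLAG_PLANAR) &&
                !(desc->flags & AV_PIX_FMT_FLAG_BE) &&
                desc->nb_components == 3 &&
                desc->comp[0].depth == depth)
                return av_pix_fmt_desc_get_id(desc);
        }
        return AV_PIX_FMT_NONE;
    }
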
* vo_opengl: fix issues with some obscure pixel formats (wm4, 2015-12-07, 1 file, -1/+5)
  The computation of the tex_mul variable was broken in multiple ways. This
  variable is used e.g. by debanding for moving expansion of 10 bit
  fixed-point input to normalized range to another stage of processing.

  One obvious bug was that the rgb555 pixel format was broken. This format has
  component_bits=5, but obviously it's already sampled in normalized range,
  and does not need expansion. The tex_mul-free code path avoids this by not
  using the colormatrix. (The code was originally designed to work around
  dealing with the generally complicated pixel formats by only using the
  colormatrix in the YUV case.)

  Another possible bug was with 10 bit input. It expanded the input by
  bringing the [0,2^10) range to [0,1], and then treating the expanded input
  as 16 bit input. I didn't bother to check what this actually computed, but
  it's somewhat likely it was wrong anyway.

  Now it uses mp_get_csp_mul(), and disables expansion when computing the YUV
  matrix.
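For context, the expansion factor itself is simple; the sketch below is
illustrative (it assumes LSB-aligned storage and is not mpv's exact code,
which goes through mp_get_csp_mul()):

    // An N-bit sample stored LSB-aligned in a 16-bit texture and sampled as
    // normalized comes back as x / (2^16 - 1); multiplying by this factor
    // maps the maximum N-bit value back to 1.0. Formats like rgb555 are
    // already normalized by the sampler and must skip this expansion.
    static float tex_expand_mul(int component_bits, int texture_bits)
    {
        return (float)((1 << texture_bits) - 1) /
               (float)((1 << component_bits) - 1);
    }
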
* video: fix playback of pal8 (wm4, 2015-11-01, 1 file, -1/+2)
  PAL8 is the only format that is RGB, has only 1 component, and is
  byte-aligned. It was accidentally detected by the GBRP case as planar RGB.
  (It would have been ok if it were gray; what ruins it is that it's actually
  paletted, and the color values do not correspond to colors, but to palette
  entries.)

  Pseudo-pal formats are ok; in fact, AV_PIX_FMT_GRAY8 is rightfully marked as
  MP_IMGFLAG_YUV_P.
* vo_opengl: support all kinds of GBRP formats (wm4, 2015-10-18, 1 file, -3/+9)
  Adds support for AV_PIX_FMT_GBRP9, AV_PIX_FMT_GBRP10, AV_PIX_FMT_GBRP12,
  AV_PIX_FMT_GBRP14, AV_PIX_FMT_GBRP16, AV_PIX_FMT_GBRAP, and
  AV_PIX_FMT_GBRAP16.

  (Not that it matters, because nobody uses these anyway.)
* video: remove VDA support (wm4, 2015-09-28, 1 file, -1/+0)
  VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no reason
  to support VDA anymore. In fact, we had a bug that made VDA not useable with
  older FFmpeg versions in some newer mpv releases. VideoToolbox is supported
  even on slightly older OSX versions, and if not, you still can run mpv
  without hw decoding.
* video: do not use deprecated libavutil pixdesc fields (wm4, 2015-09-10, 1 file, -5/+15)
  These were normalized and are saner now. We want to use the new fields, and
  also get rid of the deprecation warnings, so use them.

  There's no release yet which uses these, so some ifdeffery is unfortunately
  needed.
* hwdec: add VideoToolbox support (Sebastien Zwickert, 2015-08-05, 1 file, -0/+1)
  VDA is being deprecated in OS X 10.11 so this is needed to keep hwdec
  working. The code needs libavcodec support which was added recently (to
  FFmpeg git, libav doesn't support it).

  Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
* Update license headers (Marcin Kurczewski, 2015-04-13, 1 file, -5/+4)
  Signed-off-by: wm4 <wm4@nowhere>
* vo_opengl: move minor helper to common code (wm4, 2015-03-09, 1 file, -0/+3)
  The generic image format code should carry most of the "knowledge" about
  image formats.
* video: try to keep implied alpha when using conversion filters (wm4, 2015-01-21, 1 file, -1/+1)
  Don't just discard alpha. This probably does the right thing, in the rare
  situations when alpha matters at all.
* vo_opengl: handle grayscale input better, add YA16 support (wm4, 2015-01-21, 1 file, -2/+6)
  Simply clamp off the U/V components in the colormatrix, instead of doing
  something special in the shader.

  Also, since YA8/YA16 gave a plane_bits value of 16/32, and a colormatrix
  calculation overflowed with 32, add a component_bits field to the image
  format descriptor, which for YA8/YA16 returns 8/16 (the wrong value had no
  bad consequences otherwise).
* vf_scale: replace ancient fallback image format selection (wm4, 2015-01-21, 1 file, -0/+18)
  If the video output and the VO don't support the same format, a conversion
  filter needs to be inserted. Since a VO can support multiple formats, and
  the filter chain also can deal with multiple formats, you basically have to
  pick from a huge matrix of possible conversions.

  The old MPlayer code had a quite naive algorithm: it first checked whether
  any conversion from the list of preferred conversions matched, and if not,
  it was falling back on checking a hardcoded list of output formats (more or
  less sorted by quality). This had some unintended side-effects, like not
  using obvious "replacement" formats, selecting the wrong colorspace,
  selecting a bit depth that is too high or too low, and more.

  Use avcodec_find_best_pix_fmt_of_list() provided by FFmpeg instead. This
  function was made for this purpose, and should select the "best" format.
  Libav provides a similar function, but with a different name - there is a
  function with the same name in FFmpeg, but it has different semantics (I'm
  not sure if Libav or FFmpeg fucked up here).

  This also removes handling of VFCAP_CSP_SUPPORTED vs.
  VFCAP_CSP_SUPPORTED_BY_HW, which has no meaning anymore, except possibly for
  filter chains with multiple scale filters.

  Fixes #1494.
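A minimal usage sketch of the FFmpeg helper mentioned above; the candidate
list here is made up for illustration, while in the real filter chain it would
be the formats the VO reports as supported:

    #include <libavcodec/avcodec.h>

    static enum AVPixelFormat pick_conversion_target(enum AVPixelFormat src,
                                                     int src_has_alpha)
    {
        static const enum AVPixelFormat vo_formats[] = {
            AV_PIX_FMT_YUV420P, AV_PIX_FMT_NV12, AV_PIX_FMT_RGBA,
            AV_PIX_FMT_NONE, // the list must be terminated with NONE
        };
        int loss = 0; // receives a bitmask of FF_LOSS_* flags
        return avcodec_find_best_pix_fmt_of_list(vo_formats, src,
                                                 src_has_alpha, &loss);
    }
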
* command: change properties added in previous commit (wm4, 2015-01-10, 1 file, -1/+3)
  Make their meaning more exact, and don't pretend that there's a reasonable
  definition for "bits-per-pixel". Also make unset fields unavailable.

  average_depth still might be inconsistent: for example, 10 bit 4:2:0 is
  identified as 24 bits, but RGB 4:4:4 as 12 bits. So YUV formats seemingly
  drop the per-component padding, while RGB formats do not. Internally it's
  consistent though: 10 bit YUV components are read as 16 bit, and the padding
  must be 0 (it's basically like an odd fixed-point representation, rather
  than a bitfield).
* video: remove swapped-endian image format aliases (wm4, 2014-11-05, 1 file, -1/+1)
  Like the previous commit, this removes names only, not actual support for
  these formats.
* video: add image format test program (wm4, 2014-11-05, 1 file, -0/+63)
* video: passthrough unknown AVPixelFormats (wm4, 2014-11-05, 1 file, -1/+2)
  This is a rather radical change: instead of maintaining a whitelist of
  FFmpeg formats we support, we automatically support all formats.

  In general, a format which doesn't have an explicit IMGFMT_* name will be
  converted to a known format through libswscale, or will be handled by code
  which can treat pixel formats in a generic way using the pixel format
  description, like vo_opengl.

  AV_PIX_FMT_UYYVYY411 is a special-case. It's packed YUV with chroma
  subsampling by 4 in both directions. Its component order is documented as
  "Cb Y0 Y1 Cr Y2 Y3", meaning there's one UV sample for 4 Y samples. This
  means each pixel uses 1.5 bytes (4 pixels have 1 UV sample, so 4 bytes + 2
  bytes). FFmpeg can actually handle this format with its generic mechanism in
  an extremely awkward way, but it doesn't work for us. Blacklist it, and hope
  no similar formats will be added in the future.

  Currently, the AV_PIX_FMT_*s allowed are limited to a numeric value of 500.
  More is not allowed, and there are some fixed size arrays that need to
  contain any possible format (look for IMGFMT_END dependencies).

  We could have this simpler by replacing IMGFMT_* with AV_PIX_FMT_* through
  the whole codebase. But for now, this is better, because we can compensate
  for formats missing in Libav or older FFmpeg versions, like AV_PIX_FMT_RGB0
  and others.
* video: handle endian detection in a more generic way (wm4, 2014-11-05, 1 file, -7/+21)
  FFmpeg has only a AV_PIX_FMT_FLAG_BE flag, not a LE one, which causes
  problems for us: we want to have the LE flag too, so code can actually
  detect whether a format is non-native endian. Basically, we want to
  reconstruct the LE/BE suffix all AV_PIX_FMT_*s have.

  Doing this is hard due to the (messed up) way AVPixFmtDescriptor works. The
  worst is AV_PIX_FMT_RGB444: this group of formats describes an
  endian-independent access (since no component actually spans 2 bytes, you
  only need byte accesses with a fixed offset), so we have to go through some
  pain.
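One way to reconstruct the missing "little endian" information is sketched
below. This is not necessarily how img_format.c does it, and it still trips
over the AV_PIX_FMT_RGB444 family mentioned above, whose access pattern is
endian-independent despite having LE/BE variants:

    #include <stdbool.h>
    #include <libavutil/pixdesc.h>

    // A format counts as endian-specific if swapping its endianness yields a
    // different valid format; it is LE if endian-specific but not flagged BE.
    static bool pixfmt_is_le(enum AVPixelFormat fmt)
    {
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        enum AVPixelFormat swapped = av_pix_fmt_swap_endianness(fmt);
        return desc && swapped != AV_PIX_FMT_NONE && swapped != fmt &&
               !(desc->flags & AV_PIX_FMT_FLAG_BE);
    }
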
* video: get hwaccel flag from pixdesc (wm4, 2014-11-05, 1 file, -1/+5)
* Move compat/ and bstr/ directory contents somewhere else (wm4, 2014-08-29, 1 file, -2/+0)
  bstr.c doesn't really deserve its own directory, and compat had just a few
  files, most of which may as well be in osdep. There isn't really any
  justification for these extra directories, so get rid of them.

  The compat/libav.h was empty - just delete it. We changed our approach to
  API compatibility, and will likely not need it anymore.
* video: cosmetics: reformat image format names table (wm4, 2014-06-14, 1 file, -25/+17)
* video: synchronize mpv rgb pixel format names with ffmpeg names (wm4, 2014-06-14, 1 file, -15/+1)
  This affects packed RGB formats up to 16 bits per pixel. The old mplayer
  names used LSB-to-MSB order, while FFmpeg (and some other libraries) use
  MSB-to-LSB.

  Nothing should change with this commit, i.e. no bit order or endian bugs
  should be added or fixed. In some cases, the name stays the same, even
  though the byte order changes, e.g. RGB8->BGR8 and BGR8->RGB8, and this
  affects the user-visible names too; this might cause confusion.
* video: automatically strip "le" and "be" suffix from pixel format names (wm4, 2014-06-14, 1 file, -16/+22)
  These suffixes are annoying when they're redundant, so strip them
  automatically. On little endian machines, always strip the "le" suffix, and
  on big endian machines vice versa (although I don't think anyone ever tried
  to run mpv on a big endian machine).

  Since pixel format strings are returned by a certain function and we can't
  just change static strings, use a trick to pass a stack buffer
  transparently. But this also means the string can't be permanently stored by
  the caller, so vf_dlopen.c has to be updated. There seems to be no other
  case where this is done, though.
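A sketch of the renaming rule (not the actual mpv code): copy the format name
into a caller-provided buffer and drop a trailing "le"/"be" suffix when it
matches the machine's native endianness. Passing in the buffer mirrors the
stack-buffer trick the commit describes, so no static string has to be
modified:

    #include <stdio.h>
    #include <string.h>
    #include <libavutil/avconfig.h> // AV_HAVE_BIGENDIAN

    static const char *strip_native_endian_suffix(char *buf, size_t size,
                                                  const char *name)
    {
        const char *native = AV_HAVE_BIGENDIAN ? "be" : "le";
        size_t len = strlen(name);
        snprintf(buf, size, "%s", name);
        if (len >= 3 && len < size && strcmp(name + len - 2, native) == 0)
            buf[len - 2] = '\0'; // e.g. "yuv420p10le" -> "yuv420p10"
        return buf;
    }
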
* vdpau: move RGB surface management out of the VO (wm4, 2014-05-22, 1 file, -1/+17)
  Integrate it with the existing surface allocator in vdpau.c. The changes are
  a bit violent, because the vdpau API is so non-orthogonal: compared to video
  surfaces, output surfaces use a different ID type, different format types,
  and different API functions.

  Also, introduce IMGFMT_VDPAU_OUTPUT for VdpOutputSurfaces wrapped in
  mp_image, rather than hacking it. This is a bit cleaner.
* video: change image format names, prefer mostly FFmpeg names (wm4, 2014-04-14, 1 file, -81/+58)
  The most user visible change is that "420p" is now displayed as "yuv420p".
  This is what FFmpeg uses (almost), and is also less confusing since "420p"
  is often confused with "420 pixels vertical resolution".

  In general, we return the FFmpeg pixel format name. We still use our own old
  mechanism to keep a list of exceptions to provide compatibility for a while.

  Also, never return NULL for image format names. If the format is unset
  (0/IMGFMT_NONE), return "none". If the format has no name (probably never
  happens, FFmpeg seems to guarantee that a name is set), return "unknown".
* vdpau: remove legacy pixel formats (wm4, 2014-03-17, 1 file, -6/+0)
  They were used by ancient libavcodec versions. This also removes the need to
  distinguish vdpau image formats at all (since there is only one), and some
  code can be simplified.
* video: change image format from unsigned int to int in some places (wm4, 2014-03-17, 1 file, -2/+2)
  Image formats used to be FourCCs, so unsigned int was better. But now it's
  annoying and the only difference is that unsigned int is more to type than
  int.
* img_format: AV_PIX_FMT_FLAG_ALPHA is always available (wm4, 2014-03-17, 1 file, -5/+0)
  We no longer support ancient libavutil versions.
* img_format: drop message about unknown pixel formats (wm4, 2013-12-21, 1 file, -7/+1)
  Too bad.
* compat: add compatibility kludge for Libav 9 (wm4, 2013-12-08, 1 file, -8/+14)
  Libav 9 still uses the unprefixed PIX_FMT_... symbols, but they will
  probably be removed some time in the future.

  There are some other deprecations we have yet to take care of, but there are
  no clear replacements yet.
* mp_image: deal with FFmpeg PSEUDOPAL braindeath (wm4, 2013-12-01, 1 file, -0/+5)
  We got a crash in libavutil when encoding with Y8 (GRAY8). The reason was
  that libavutil was copying an Y8 image allocated by us, and expected a
  palette. This is because GRAY8 is a PSEUDOPAL format. It's not clear what
  PSEUDOPAL means, and it makes literally no sense at all. However, it does
  expect a palette allocated for some formats that are not paletted, and
  libavutil crashed when trying to access the non-existent palette.
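A sketch of the kind of workaround this implies (illustrative, not mpv's
actual allocation code): "pseudo-paletted" formats are not really paletted,
but libavutil's copy routines still expect a 256-entry palette in the second
plane, so a dummy one has to be allocated. The flag is gone from newer FFmpeg
releases, hence the guard:

    #include <stdint.h>
    #include <libavutil/mem.h>
    #include <libavutil/pixdesc.h>

    static uint8_t *alloc_dummy_palette(enum AVPixelFormat fmt)
    {
    #ifdef AV_PIX_FMT_FLAG_PSEUDOPAL
        const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
        if (desc && (desc->flags & AV_PIX_FMT_FLAG_PSEUDOPAL))
            return av_mallocz(AVPALETTE_SIZE); // 256 entries * 4 bytes
    #endif
        return NULL;
    }
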
* Take care of some libavutil deprecations, drop support for FFmpeg 1.0 (wm4, 2013-11-29, 1 file, -6/+4)
  PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
  enum PixelFormat -> enum AVPixelFormat
  Loosen some version checks in certain newer pixel formats.
  av_pix_fmt_descriptors -> av_pix_fmt_desc_get

  This removes support for FFmpeg 1.0.x, which is even older than Libav 9.x.
  Support for it probably was already broken, and its libswresample was
  rejected by our build system anyway because it's broken.

  Mostly untested; it does compile with Libav 9.9.
* vaapi: remove unused hw image formats, simplify (wm4, 2013-11-29, 1 file, -2/+0)
  PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not sure
  why they were even added, and they sound like they are just for
  compatibility with XvMC-style decoding, which sucks anyway.

  Now that there's only a single vaapi format, remove the IMGFMT_IS_VAAPI()
  macro. Also get rid of IMGFMT_IS_VDA(), which was unused.
* video: make IMGFMT_RGB0 etc. exist even if libavutil doesn't support it (wm4, 2013-11-05, 1 file, -17/+15)
  These formats are helpful for distinguishing surfaces with and without
  alpha. Unfortunately, Libav and older versions of FFmpeg don't support them,
  so code will break. Fix this by treating these formats specially on the mpv
  side, mapping them to RGBA on Libav, and unsetting the alpha bit in the
  mp_imgfmt_desc struct.
* video: add vda decode support (with hwaccel) and direct rendering (Stefano Pigozzi, 2013-08-22, 1 file, -0/+1)
  Decoding H264 using Video Decode Acceleration used the custom 'vda_h264_dec'
  decoder in FFmpeg.

  The Good: This new implementation has some advantages over the previous one:

  - It works with Libav: vda_h264_dec never got into Libav since they prefer
    client applications to use the hwaccel API.

  - It is way more efficient: in my tests this implementation yields a
    reduction of CPU usage of roughly ~50% compared to using `vda_h264_dec`
    and ~65-75% compared to h264 software decoding. This is mainly because
    `vo_corevideo` was adapted to perform direct rendering of the
    `CVPixelBufferRefs` created by the Video Decode Acceleration API
    Framework.

  The Bad:

  - `vo_corevideo` is required to use VDA decoding acceleration.

  - only works with versions of ffmpeg/libav new enough (needs reference
    refcounting). That is FFmpeg 2.0+ and Libav's git master currently.

  The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded video
  texture. On one end this makes the code simple since Apple's OpenGL
  implementation actually supports this out of the box. It would be nice to
  support other output image formats and choose the best format depending on
  the input, or at least making it configurable. My tests indicate that CPU
  usage actually increases with a 420p IMGFMT output which is not what I would
  have expected.

  NOTE: There is a small memory leak with old versions of FFmpeg and with
  Libav since the CVPixelBufferRef is not automatically released when the
  AVFrame is deallocated. This can cause leaks inside libavcodec for decoded
  frames that are discarded before mpv wraps them inside a refcounted mp_image
  (this only happens on seeks). For frames that enter mpv's refcounting
  facilities, this is not a problem since we rewrap the CVPixelBufferRef in
  our mp_image that properly forwards CVPixelBufferRetain/CVPixelBufferRelease
  calls to the underlying CVPixelBufferRef.

  So, for FFmpeg use something more recent than `b3d63995`; for Libav the
  patch was posted to the dev ML in July and in review since, apparently, the
  proposed fix is rather hacky.
* video: add vaapi decode and output support (wm4, 2013-08-12, 1 file, -0/+3)