path: root/video/img_format.c
Commit message | Author | Date | Files | Lines
* vo_opengl: move minor helper to common code | wm4 | 2015-03-09 | 1 | -0/+3
    The generic image format code should carry most of the "knowledge" about image formats.
* video: try to keep implied alpha when using conversion filters | wm4 | 2015-01-21 | 1 | -1/+1
    Don't just discard alpha. This probably does the right thing, in the rare situations when alpha matters at all.
* vo_opengl: handle grayscale input better, add YA16 support | wm4 | 2015-01-21 | 1 | -2/+6
    Simply clamp off the U/V components in the colormatrix, instead of doing something special in the shader.

    Also, since YA8/YA16 gave a plane_bits value of 16/32, and a colormatrix calculation overflowed with 32, add a component_bits field to the image format descriptor, which for YA8/YA16 returns 8/16 (the wrong value had no bad consequences otherwise).
* vf_scale: replace ancient fallback image format selection | wm4 | 2015-01-21 | 1 | -0/+18
    If video output and VO don't support the same format, a conversion filter needs to be inserted. Since a VO can support multiple formats, and the filter chain also can deal with multiple formats, you basically have to pick from a huge matrix of possible conversions.

    The old MPlayer code had a quite naive algorithm: it first checked whether any conversion from the list of preferred conversions matched, and if not, it fell back on checking a hardcoded list of output formats (more or less sorted by quality). This had some unintended side-effects, like not using obvious "replacement" formats, selecting the wrong colorspace, selecting a bit depth that is too high or too low, and more.

    Use avcodec_find_best_pix_fmt_of_list() provided by FFmpeg instead. This function was made for this purpose, and should select the "best" format. Libav provides a similar function, but with a different name - there is a function with the same name in FFmpeg, but it has different semantics (I'm not sure if Libav or FFmpeg fucked up here).

    This also removes handling of VFCAP_CSP_SUPPORTED vs. VFCAP_CSP_SUPPORTED_BY_HW, which has no meaning anymore, except possibly for filter chains with multiple scale filters.

    Fixes #1494.
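For illustration, the selection described above boils down to a single libavcodec call. A minimal sketch (not mpv's actual code; the list of supported output formats here is invented, and must be terminated with AV_PIX_FMT_NONE):

```c
#include <libavcodec/avcodec.h>
#include <libavutil/pixfmt.h>

// Pick the "best" target format for a conversion filter, given the formats
// the downstream filter/VO accepts (hypothetical example list).
static enum AVPixelFormat pick_conversion_target(enum AVPixelFormat src)
{
    static enum AVPixelFormat supported[] = {
        AV_PIX_FMT_YUV420P, AV_PIX_FMT_NV12, AV_PIX_FMT_RGBA,
        AV_PIX_FMT_NONE,                        // terminator
    };
    int loss = 0;   // receives a bitmask of what the conversion would lose
    return avcodec_find_best_pix_fmt_of_list(supported, src, 0 /*has_alpha*/, &loss);
}
```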
* command: change properties added in previous commit | wm4 | 2015-01-10 | 1 | -1/+3
    Make their meaning more exact, and don't pretend that there's a reasonable definition for "bits-per-pixel". Also make unset fields unavailable.

    average_depth still might be inconsistent: for example, 10 bit 4:2:0 is identified as 24 bits, but RGB 4:4:4 as 12 bits. So YUV formats seemingly drop the per-component padding, while RGB formats do not. Internally it's consistent though: 10 bit YUV components are read as 16 bit, and the padding must be 0 (it's basically like an odd fixed-point representation, rather than a bitfield).
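The padded-vs-unpadded distinction can be reproduced directly with libavutil's helpers. A small stand-alone sketch (unrelated to mpv's property code, assuming a libavutil recent enough to have av_get_padded_bits_per_pixel()) that prints both counts for the two examples above:

```c
#include <stdio.h>
#include <libavutil/pixdesc.h>

int main(void)
{
    // yuv420p10: 15 bits of real data per pixel, 24 with 16-bit padding.
    // rgb444le:  12 bits of real data per pixel, 16 with padding.
    const enum AVPixelFormat fmts[] = { AV_PIX_FMT_YUV420P10, AV_PIX_FMT_RGB444LE };
    for (int i = 0; i < 2; i++) {
        const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmts[i]);
        printf("%-12s %2d bits/pixel, %2d padded\n", d->name,
               av_get_bits_per_pixel(d), av_get_padded_bits_per_pixel(d));
    }
    return 0;
}
```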
* video: remove swapped-endian image format aliases | wm4 | 2014-11-05 | 1 | -1/+1
    Like the previous commit, this removes names only, not actual support for these formats.
* video: add image format test program | wm4 | 2014-11-05 | 1 | -0/+63
* video: passthrough unknown AVPixelFormats | wm4 | 2014-11-05 | 1 | -1/+2
    This is a rather radical change: instead of maintaining a whitelist of FFmpeg formats we support, we automatically support all formats. In general, a format which doesn't have an explicit IMGFMT_* name will be converted to a known format through libswscale, or will be handled by code which can treat pixel formats in a generic way using the pixel format description, like vo_opengl.

    AV_PIX_FMT_UYYVYY411 is a special case. It's packed YUV with chroma subsampling by 4 in both directions. Its component order is documented as "Cb Y0 Y1 Cr Y2 Y3", meaning there's one UV sample for 4 Y samples. This means each pixel uses 1.5 bytes (4 pixels have 1 UV sample, so 4 bytes + 2 bytes). FFmpeg can actually handle this format with its generic mechanism in an extremely awkward way, but it doesn't work for us. Blacklist it, and hope no similar formats will be added in the future.

    Currently, the AV_PIX_FMT_*s allowed are limited to a numeric value of 500. More is not allowed, and there are some fixed size arrays that need to contain any possible format (look for IMGFMT_END dependencies).

    We could make this simpler by replacing IMGFMT_* with AV_PIX_FMT_* through the whole codebase. But for now, this is better, because we can compensate for formats missing in Libav or older FFmpeg versions, like AV_PIX_FMT_RGB0 and others.
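Conceptually the passthrough is just an offset mapping with a blacklist and a range cap. A rough sketch (the IMGFMT_* constants here are invented placeholders, not mpv's real values):

```c
#include <libavutil/pixfmt.h>

// Hypothetical placeholders; mpv's real constants live in video/img_format.h.
#define IMGFMT_AVPIXFMT_START 0x1000
#define IMGFMT_AVPIXFMT_MAX   500     // some fixed-size tables depend on this cap

static int imgfmt_from_avpixfmt(enum AVPixelFormat fmt)
{
    if (fmt == AV_PIX_FMT_UYYVYY411)            // blacklisted, see above
        return 0;
    if (fmt < 0 || fmt >= IMGFMT_AVPIXFMT_MAX)  // outside the covered range
        return 0;
    return IMGFMT_AVPIXFMT_START + fmt;         // pass everything else through
}
```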
* video: handle endian detection in a more generic way | wm4 | 2014-11-05 | 1 | -7/+21
    FFmpeg has only an AV_PIX_FMT_FLAG_BE flag, not an LE one, which causes problems for us: we want to have the LE flag too, so code can actually detect whether a format is non-native endian. Basically, we want to reconstruct the LE/BE suffix all AV_PIX_FMT_*s have.

    Doing this is hard due to the (messed up) way AVPixFmtDescriptor works. The worst is AV_PIX_FMT_RGB444: this group of formats describes endian-independent access (since no component actually spans 2 bytes, you only need byte accesses with a fixed offset), so we have to go through some pain.
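A sketch of the reconstruction (not mpv's implementation): a format only gets an LE/BE classification if libavutil knows a swapped-endian counterpart, and the BE flag then decides which side it is on.

```c
#include <libavutil/avconfig.h>   // AV_HAVE_BIGENDIAN
#include <libavutil/pixdesc.h>

// Returns 1 if the format's layout is the opposite of the host endianness.
static int is_foreign_endian(enum AVPixelFormat fmt)
{
    const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt);
    if (!d)
        return 0;
    // No LE/BE counterpart exists -> the format is endian-neutral.
    if (av_pix_fmt_swap_endianness(fmt) == AV_PIX_FMT_NONE)
        return 0;
    int is_be = !!(d->flags & AV_PIX_FMT_FLAG_BE);
    return is_be != AV_HAVE_BIGENDIAN;
}
// Note that the AV_PIX_FMT_RGB444 group still comes out as endian-specific
// here even though no component spans two bytes - exactly the special case
// the commit message above complains about.
```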
* video: get hwaccel flag from pixdesc | wm4 | 2014-11-05 | 1 | -1/+5
* Move compat/ and bstr/ directory contents somewhere else | wm4 | 2014-08-29 | 1 | -2/+0
    bstr.c doesn't really deserve its own directory, and compat had just a few files, most of which may as well be in osdep. There isn't really any justification for these extra directories, so get rid of them.

    The compat/libav.h was empty - just delete it. We changed our approach to API compatibility, and will likely not need it anymore.
* video: cosmetics: reformat image format names table | wm4 | 2014-06-14 | 1 | -25/+17
* video: synchronize mpv rgb pixel format names with ffmpeg names | wm4 | 2014-06-14 | 1 | -15/+1
    This affects packed RGB formats up to 16 bits per pixel. The old mplayer names used LSB-to-MSB order, while FFmpeg (and some other libraries) use MSB-to-LSB.

    Nothing should change with this commit, i.e. no bit order or endian bugs should be added or fixed. In some cases a name stays in use, but the byte order it refers to changes, e.g. RGB8->BGR8 and BGR8->RGB8, and this affects the user-visible names too; this might cause confusion.
* video: automatically strip "le" and "be" suffix from pixel format names | wm4 | 2014-06-14 | 1 | -16/+22
    These suffixes are annoying when they're redundant, so strip them automatically. On little endian machines, always strip the "le" suffix, and on big endian machines vice versa (although I don't think anyone ever tried to run mpv on a big endian machine).

    Since pixel format strings are returned by a certain function and we can't just change static strings, use a trick to pass a stack buffer transparently. But this also means the string can't be permanently stored by the caller, so vf_dlopen.c has to be updated. There seems to be no other case where this is done, though.
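The "trick" is presumably along the lines of a C99 compound-literal buffer supplied by a macro. A sketch with invented names (mpv's real helper is more careful about what it strips):

```c
#include <stdio.h>
#include <string.h>
#include <libavutil/avconfig.h>   // AV_HAVE_BIGENDIAN

// Copy the name into a caller-provided buffer and drop a redundant
// native-endian "le"/"be" suffix (sketch only, no validation).
static char *fmt_name_buf(char *buf, size_t size, const char *name)
{
    snprintf(buf, size, "%s", name);
    size_t len = strlen(buf);
    const char *native = AV_HAVE_BIGENDIAN ? "be" : "le";
    if (len > 2 && strcmp(buf + len - 2, native) == 0)
        buf[len - 2] = '\0';
    return buf;
}

// The stack-buffer trick: each call site gets its own temporary buffer, so
// the returned pointer must not be stored permanently by the caller.
#define fmt_name(name) fmt_name_buf((char[32]){0}, 32, (name))
```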
* vdpau: move RGB surface management out of the VO | wm4 | 2014-05-22 | 1 | -1/+17
    Integrate it with the existing surface allocator in vdpau.c. The changes are a bit violent, because the vdpau API is so non-orthogonal: compared to video surfaces, output surfaces use a different ID type, different format types, and different API functions.

    Also, introduce IMGFMT_VDPAU_OUTPUT for VdpOutputSurfaces wrapped in mp_image, rather than hacking it. This is a bit cleaner.
* video: change image format names, prefer mostly FFmpeg names | wm4 | 2014-04-14 | 1 | -81/+58
    The most user visible change is that "420p" is now displayed as "yuv420p". This is what FFmpeg uses (almost), and is also less confusing since "420p" is often confused with "420 pixels vertical resolution".

    In general, we return the FFmpeg pixel format name. We still use our own old mechanism to keep a list of exceptions to provide compatibility for a while.

    Also, never return NULL for image format names. If the format is unset (0/IMGFMT_NONE), return "none". If the format has no name (probably never happens, FFmpeg seems to guarantee that a name is set), return "unknown".
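The naming policy amounts to something like the following sketch (using libavutil directly; mpv's real lookup goes through its own format IDs and an exception table):

```c
#include <libavutil/pixdesc.h>

// Never return NULL: "none" for the unset format, "unknown" otherwise.
static const char *imgfmt_name(enum AVPixelFormat fmt)
{
    if (fmt == AV_PIX_FMT_NONE)
        return "none";
    const char *name = av_get_pix_fmt_name(fmt);
    return name ? name : "unknown";
}
```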
* vdpau: remove legacy pixel formats | wm4 | 2014-03-17 | 1 | -6/+0
    They were used by ancient libavcodec versions. This also removes the need to distinguish vdpau image formats at all (since there is only one), and some code can be simplified.
* video: change image format from unsigned int to int in some places | wm4 | 2014-03-17 | 1 | -2/+2
    Image formats used to be FourCCs, so unsigned int was better. But now it's annoying and the only difference is that unsigned int is more to type than int.
* img_format: AV_PIX_FMT_FLAG_ALPHA is always available | wm4 | 2014-03-17 | 1 | -5/+0
    We no longer support ancient libavutil versions.
* img_format: drop message about unknown pixel formats | wm4 | 2013-12-21 | 1 | -7/+1
    Too bad.
* compat: add compatibility kludge for Libav 9 | wm4 | 2013-12-08 | 1 | -8/+14
    Libav 9 still uses the unprefixed PIX_FMT_... symbols, but they will probably be removed some time in the future.

    There are some other deprecations we have yet to take care of, but there are no clear replacements yet.
* mp_image: deal with FFmpeg PSEUDOPAL braindeath | wm4 | 2013-12-01 | 1 | -0/+5
    We got a crash in libavutil when encoding with Y8 (GRAY8). The reason was that libavutil was copying a Y8 image allocated by us, and expected a palette. This is because GRAY8 is a PSEUDOPAL format. It's not clear what PSEUDOPAL means, and it makes literally no sense at all. However, it does require a palette to be allocated for some formats that are not paletted, and libavutil crashed when trying to access the non-existent palette.
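Against libavutil of that era, the workaround amounts to attaching a dummy palette to pseudo-paletted frames. A hedged sketch, not mpv's code (note the PSEUDOPAL flag has since been deprecated and removed, so this only compiles against old FFmpeg versions):

```c
#include <string.h>
#include <libavutil/error.h>
#include <libavutil/frame.h>
#include <libavutil/pixdesc.h>

// Formats flagged PSEUDOPAL (such as GRAY8 back then) expect a 256-entry
// palette in data[1] even though they are not actually paletted.
static int attach_dummy_palette(AVFrame *frame)
{
    const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(frame->format);
    if (!d || !(d->flags & AV_PIX_FMT_FLAG_PSEUDOPAL) || frame->data[1])
        return 0;
    frame->buf[1] = av_buffer_alloc(AVPALETTE_SIZE);
    if (!frame->buf[1])
        return AVERROR(ENOMEM);
    memset(frame->buf[1]->data, 0, AVPALETTE_SIZE);
    frame->data[1] = frame->buf[1]->data;
    return 0;
}
```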
* Take care of some libavutil deprecations, drop support for FFmpeg 1.0 | wm4 | 2013-11-29 | 1 | -6/+4
    PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
    enum PixelFormat -> enum AVPixelFormat
    Loosen some version checks in certain newer pixel formats.
    av_pix_fmt_descriptors -> av_pix_fmt_desc_get

    This removes support for FFmpeg 1.0.x, which is even older than Libav 9.x. Support for it probably was already broken, and its libswresample was rejected by our build system anyway because it's broken.

    Mostly untested; it does compile with Libav 9.9.
* vaapi: remove unused hw image formats, simplify | wm4 | 2013-11-29 | 1 | -2/+0
    PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not sure why they were even added, and they sound like they are just for compatibility with XvMC-style decoding, which sucks anyway.

    Now that there's only a single vaapi format, remove the IMGFMT_IS_VAAPI() macro. Also get rid of IMGFMT_IS_VDA(), which was unused.
* video: make IMGFMT_RGB0 etc. exist even if libavutil doesn't support it | wm4 | 2013-11-05 | 1 | -17/+15
    These formats are helpful for distinguishing surfaces with and without alpha. Unfortunately, Libav and older versions of FFmpeg don't support them, so code will break. Fix this by treating these formats specially on the mpv side, mapping them to RGBA on Libav, and unsetting the alpha bit in the mp_imgfmt_desc struct.
* video: add vda decode support (with hwaccel) and direct rendering | Stefano Pigozzi | 2013-08-22 | 1 | -0/+1
    Decoding H264 using Video Decode Acceleration used the custom 'vda_h264_dec' decoder in FFmpeg.

    The Good: This new implementation has some advantages over the previous one:
    - It works with Libav: vda_h264_dec never got into Libav since they prefer client applications to use the hwaccel API.
    - It is way more efficient: in my tests this implementation yields a reduction of CPU usage of roughly ~50% compared to using `vda_h264_dec` and ~65-75% compared to h264 software decoding. This is mainly because `vo_corevideo` was adapted to perform direct rendering of the `CVPixelBufferRefs` created by the Video Decode Acceleration API Framework.

    The Bad:
    - `vo_corevideo` is required to use VDA decoding acceleration.
    - only works with versions of ffmpeg/libav new enough (needs reference refcounting). That is FFmpeg 2.0+ and Libav's git master currently.

    The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded video texture. On one end this makes the code simple since Apple's OpenGL implementation actually supports this out of the box. It would be nice to support other output image formats and choose the best format depending on the input, or at least making it configurable. My tests indicate that CPU usage actually increases with a 420p IMGFMT output which is not what I would have expected.

    NOTE: There is a small memory leak with old versions of FFmpeg and with Libav since the CVPixelBufferRef is not automatically released when the AVFrame is deallocated. This can cause leaks inside libavcodec for decoded frames that are discarded before mpv wraps them inside a refcounted mp_image (this only happens on seeks). For frames that enter mpv's refcounting facilities, this is not a problem since we rewrap the CVPixelBufferRef in our mp_image that properly forwards CVPixelBufferRetain/CVPixelBufferRelease calls to the underlying CVPixelBufferRef.

    So, for FFmpeg use something more recent than `b3d63995`; for Libav the patch was posted to the dev ML in July and has been in review since then, apparently because the proposed fix is rather hacky.
* video: add vaapi decode and output support | wm4 | 2013-08-12 | 1 | -0/+3
    This is based on the MPlayer VA API patches. To be exact it's based on a very stripped down version of commit f1ad459a263f8537f6c from git://gitorious.org/vaapi/mplayer.git. This doesn't contain useless things like benchmarking hacks and the demo code for GLX interop. Also, unlike in the original patch, decoding and video output are split into separate source files (the separation between decoding and display also makes pixel format hacks unnecessary).

    On the other hand, some features not present in the original patch were added, like screenshot support.

    VA API is rather bad for actual video output. Dealing with older libva versions or the completely broken vdpau backend doesn't help. OSD is low quality and should be rather slow. In some cases, only either OSD or subtitles can be shown at the same time (because OSD is drawn first, OSD is preferred).

    Also, libva can't decide whether it accepts straight or premultiplied alpha for OSD sub-pictures: the vdpau backend seems to assume premultiplied, while a native vaapi driver uses straight. So I picked straight alpha. It doesn't matter much, because the blending code for straight alpha I added to img_convert.c is probably buggy, and ASS subtitles might be blended incorrectly.

    Really good video output with VA API would probably use OpenGL and the GL interop features, but at this point you might just use vo_opengl. (Patches for making HW decoding work with vo_opengl have a chance of being accepted.)

    Despite these issues, decoding seems to work ok. I still got tearing on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also tested with the vdpau vaapi wrapper on an nvidia system; however this was rather broken. (Fortunately, there is no reason to use mpv's VAAPI support over native VDPAU.)
* vdpau: split off decoder parts, use "new" libavcodec vdpau hwaccel API | wm4 | 2013-07-28 | 1 | -0/+1
    Move the decoder parts from vo_vdpau.c to a new file vdpau_old.c. This file is named so because it's written against the "old" libavcodec vdpau pseudo-decoder (e.g. "h264_vdpau").

    Add support for the "new" libavcodec vdpau support. This was recently added and replaces the "old" vdpau parts. (In fact, Libav is about to deprecate and remove the "old" API without deprecation grace period, so we have to support it now. Moreover, there will probably be no Libav release which supports both, so the transition is even less smooth than we could hope, and we have to support both the old and new API.)

    Whether the old or new API is used is checked by a configure test: if the new API is found, it is used, otherwise the old API is assumed.

    Some details might be handled differently. Especially display preemption is a bit problematic with the "new" libavcodec vdpau support: it wants to keep a pointer to a specific vdpau API function (which can be driver specific, because preemption might switch drivers). Also, surface IDs are now directly stored in AVFrames (and mp_images), so they can't be forced to VDP_INVALID_HANDLE on preemption. (This changes even with older libavcodec versions, because mp_image always uses the newer representation to make vo_vdpau.c simpler.)

    Decoder initialization in the new code tries to deal with codec profiles, while the old code always uses the highest profile per codec.

    Surface allocation changes. Since the decoder won't call config() in vo_vdpau.c on video size change anymore, we allow allocating surfaces of arbitrary size instead of locking it to what the VO was configured. The non-hwdec code also has slightly different allocation behavior now.

    Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau doesn't work anymore (a warning suggesting the --hwdec option is printed instead).
* img_format: fix broken condition | wm4 | 2013-05-06 | 1 | -1/+1
    This caused all formats with fewer than 8 bits per component to be marked as little endian. (Normally, only some messed up packed RGB formats are endian-specific in this case.)
* video: fix setting XYZ flag | wm4 | 2013-05-04 | 1 | -4/+4
    Commit 9e0b68a didn't really do this correctly, failure at basic logic.
* video: add XYZ support | wm4 | 2013-05-01 | 1 | -0/+3
    Needed for the ffmpeg j2k decoder.
* fix clang compiler warnings | Stefano Pigozzi | 2013-03-03 | 1 | -1/+1
* img_format: add pixel format name for IMGFMT_MONO_W | wm4 | 2013-02-26 | 1 | -0/+1
    This should have been in commit 90efe7c.
* options: also accept ffmpeg pixel format names | wm4 | 2013-01-17 | 1 | -6/+14
    Options that take pixel format names now also accept ffmpeg names. mpv internal names are preferred. We leave this undocumented intentionally, and it may be removed once libswscale stops printing ffmpeg pixel format names to the terminal (or if we stop passing the SWS_PRINT_INFO flag to it, which makes it print these).

    (We insist on keeping the mpv specific names instead of dropping them in favor of ffmpeg's names due to NIH, and also because ffmpeg always appends the endian suffixes "le" and "be".)
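The lookup order is then simply "mpv name first, FFmpeg name second". A sketch where mp_imgfmt_from_name_internal() and mp_imgfmt_from_avformat() are hypothetical stand-ins for mpv's own helpers:

```c
#include <libavutil/pixdesc.h>

// Hypothetical stand-ins, not real mpv signatures.
int mp_imgfmt_from_name_internal(const char *name);
int mp_imgfmt_from_avformat(enum AVPixelFormat fmt);

static int parse_pixfmt_option(const char *s)
{
    int fmt = mp_imgfmt_from_name_internal(s);  // mpv names take precedence
    if (fmt)
        return fmt;
    enum AVPixelFormat av = av_get_pix_fmt(s);  // then try FFmpeg's table
    return av == AV_PIX_FMT_NONE ? 0 : mp_imgfmt_from_avformat(av);
}
```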
* img_format: do not mark hwaccel formats as planar yuv formats | wm4 | 2013-01-14 | 1 | -7/+8
* mp_image: remove mp_image.bpp | wm4 | 2013-01-13 | 1 | -8/+0
    This field contained the "average" bit depth per pixel. It serves no purpose anymore; remove it. Only vo_opengl_old still used it, in order to allocate a buffer that is shared between all planes.
* imgfmt: add more ffmpeg pixel formats | wm4 | 2013-01-13 | 1 | -0/+22
    Most of these probably don't have much actual use, but at least allow images of these formats to be handed to swscale, should any decoder output them.
* img_format: change meaning of MP_IMGFLAG_PLANAR | wm4 | 2013-01-13 | 1 | -1/+1
    This used to mean that there is more than one plane. This is not very useful: IMGFMT_Y8 was not considered planar, but it's just a Y plane, and could be treated the same as actual planar formats. On the other hand, IMGFMT_NV12 is partially packed, and usually requires special handling, but was considered planar.

    Change its meaning. Now the flag is set if the format has a separate plane for each component. IMGFMT_Y8 is now planar, IMGFMT_NV12 is not. As an odd special case, IMGFMT_MONO (1 bit per pixel) is like a planar RGB format with a single plane.
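In terms of libavutil's descriptors, the new rule can be expressed roughly like this (a sketch, not mpv's code, which works on its own mp_imgfmt_desc):

```c
#include <libavutil/pixdesc.h>

// "Planar" in the new sense: every component has its own plane.
// gray8 -> 1 component on 1 plane: planar. nv12 -> 3 components on 2 planes: not.
static int has_plane_per_component(enum AVPixelFormat fmt)
{
    const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt);
    if (!d || d->nb_components == 0)
        return 0;
    return av_pix_fmt_count_planes(fmt) == d->nb_components;
}
```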
* mp_image: add mp_image_crop() | wm4 | 2013-01-13 | 1 | -0/+6
    Actually stolen from draw_bmp.c.
* video: remove img_format compat hacks | wm4 | 2013-01-13 | 1 | -88/+3
    Remove the strange things the old mp_image_setfmt() code did to the image format parameters. This includes setting chroma shift to 31 for gray (Y8) formats and more.

    Y8 + vo_opengl_old didn't actually work for unknown reasons (regression in this branch). Fix this. The difference is that Y8 is now interpreted as gray RGB (LUMINANCE texture) instead of involving YUV (and levels) conversion.

    Get rid of distinguishing RGB and BGR. There wasn't really any good reason for this.

    Remove mp_get_chroma_shift() and IMGFMT_IS_YUVP16*(). mp_imgfmt_desc gives the same information and more.
* draw_bmp: better way to find 444 format | wm4 | 2013-01-13 | 1 | -0/+15
    Even though #ifdef ACCURATE is removed, the result should be about the same. The fallback is only used by packed YUV formats (YUYV, NV12), and doing 16 bit for them instead of 8 bit is not useful.

    A side effect is that Y8 (gray) is not converted when drawing subs, and for alpha formats, the alpha plane is not removed. This means the number of planes after upsampling can be 1-4 (1: gray, 2: gray+alpha, 3: planar, 4: planar+alpha). The code has to be adjusted accordingly to work on the color planes only.

    Also remove the workaround for the chroma shift 31 hack.
* video: decouple internal pixel formats from FourCCs | wm4 | 2013-01-13 | 1 | -131/+86
    mplayer's video chain traditionally used FourCCs for pixel formats. For example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined to the string 'YV12' interpreted as unsigned int. Additionally, it used to encode information into the numeric values of some formats. The RGB formats had their bit depth and endian encoded into the least significant byte. Extended planar formats (420P10 etc.) had chroma shift, endian, and component bit depth encoded. (This has been removed in recent commits.)

    Replace the FourCC mess with a simple enum. Remove all the redundant formats like YV12/I420/IYUV. Replace some image format names by something more intuitive, most importantly IMGFMT_YV12 -> IMGFMT_420P.

    Add img_fourcc.h, which contains the old IDs for code that actually uses FourCCs. Change the way demuxers that output raw video identify the video format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to request the rawvideo decoder, and sh_video->imgfmt specifies the pixel format. Like the previous hack, this is supposed to avoid the need for a complete codecs.cfg entry per format, or other lookup tables. (Note that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for NUT raw video, but this is still considered better than adding a raw video decoder - even if trivial, it would be full of annoying lookup tables.)

    The TV code has not been tested.

    Some corrective changes regarding endian and other image format flags creep in.
* mp_image: change how palette is handled | wm4 | 2013-01-13 | 1 | -0/+1
    According to DOCS/OUTDATED-tech/colorspaces.txt, the following formats are supposed to be palettized:

        IMGFMT_BGR8
        IMGFMT_RGB8
        IMGFMT_BGR4_CHAR
        IMGFMT_RGB4_CHAR
        IMGFMT_BGR4
        IMGFMT_RGB4

    Of these, only BGR8 and RGB8 are actually treated as palettized in some way. ffmpeg has only one palettized format (AV_PIX_FMT_PAL8), and IMGFMT_BGR8 was inconsistently mapped to packed non-palettized RGB formats too (AV_PIX_FMT_BGR8). Moreover, vf_scale.c contained messy hacks to generate a palette when AV_PIX_FMT_BGR8 is output. (libswscale does not support AV_PIX_FMT_PAL8 output in the first place.)

    Get rid of all of this, and introduce IMGFMT_PAL8, which directly maps to AV_PIX_FMT_PAL8. Remove the palette creation code from vf_scale.c. IMGFMT_BGR8 maps to AV_PIX_FMT_RGB8 (don't ask me why it's swapped), without any palette use. Enabling it in vo_x11 or using it as vf_scale input seems to give correct results.
* video: use libavutil pixel format descriptors | wm4 | 2013-01-13 | 1 | -111/+201
    Replace the internal pixel format stuff with code that queries the libavutil list of pixel format descriptors. Trying to map IMGFMT_IS_RGB() etc. turned out extremely hacky.
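As a rough illustration of what the descriptor query replaces, this stand-alone sketch (modern flag names, not mpv's mp_imgfmt_desc code) pulls out the kind of information img_format.c derives:

```c
#include <stdio.h>
#include <libavutil/pixdesc.h>

static void dump_fmt(enum AVPixelFormat fmt)
{
    const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt);
    if (!d)
        return;
    printf("%s: %d components, chroma shift %d/%d%s%s%s\n",
           d->name, d->nb_components, d->log2_chroma_w, d->log2_chroma_h,
           (d->flags & AV_PIX_FMT_FLAG_RGB) ? ", rgb" : "",
           (d->flags & AV_PIX_FMT_FLAG_ALPHA) ? ", alpha" : "",
           (d->flags & AV_PIX_FMT_FLAG_HWACCEL) ? ", hwaccel" : "");
}
```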
* video: add support for 12 and 14 bit YUV pixel formats | Stephen Hutchinson | 2012-12-03 | 1 | -0/+24
    Based on a patch by qyot27. Add the missing parts in mp_get_chroma_shift(), which allow allocation of such images, and which make vo_opengl automatically accept the new formats.

    Change the IMGFMT_IS_YUVP16_LE/BE macros to properly report IMGFMT_444P14 as supported: this pixel format has the highest numerical bit width identifier (0x55), which is not covered by the mask ~0xfc. Remove 1 bit from the mask (makes it 0xf8) so that IMGFMT_IS_YUVP16(IMGFMT_444P14) is 1. This is slightly risky, as the organization of the image format IDs (actually FourCCs + mplayer internal IDs) is messy at best, but it should be ok.
* video: add IMGFMT_Y16/PIX_FMT_GRAY16 | wm4 | 2012-11-14 | 1 | -0/+9
    This pixel format is sometimes used with yuv4mpeg. vo_direct3d used its own IMGFMT_Y16 internally for some reason. vo