path: root/video/mp_image.c
Commit message | Author | Age | Files | Lines

* talloc: change talloc destructor signature  (wm4, 2013-10-13, 1 file, -4/+2)

  Change talloc destructors so that they can never signal failure, and don't
  return a status code. This makes our talloc copy even more incompatible to
  upstream talloc, but on the other hand this is preparation for getting rid
  of talloc entirely.

  (The talloc replacement in the next commit won't allow the talloc_free
  equivalent to fail, and the destructor return value would be useless. But I
  don't want to change any mpv code either; the idea is that the talloc
  replacement commit can be reverted for some time in order to test whether
  the talloc replacement introduced a regression.)

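  The change is essentially about the destructor's return type. A minimal
  sketch of the before/after shape (following upstream talloc naming; mpv's
  internal copy may differ in detail):

  -----------
  #include <stdio.h>

  /* Before: destructors returned int, and returning -1 could abort
   * talloc_free(). */
  int old_style_destructor(void *ptr)
  {
      printf("cleaning up %p\n", ptr);
      return 0; /* 0 = success, -1 = refuse to be freed */
  }

  /* After: destructors cannot signal failure, so they return void. */
  void new_style_destructor(void *ptr)
  {
      printf("cleaning up %p\n", ptr);
  }
  -----------
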
* gl_video: handle non-mod-2 4:2:0 YUV video correctly  (wm4, 2013-08-06, 1 file, -0/+4)

  Allocate textures big enough to include the bottom/right borders (so the
  chroma texture sizes are rounded up instead of down). Make the texture
  large enough to include the additional luma border.

  Conceptually, we pretend that the video frame is fully aligned, and then
  crop away the unwanted borders. Filtering (even just bilinear) will access
  the borders anyway, so it's possible that we might need to switch to
  "harder" cropping instead, but at least pixels not close to the border
  should be displayed correctly now.

  Add a comment to mp_image.c about this luma border. These semantics are
  kind of subtle, and the image allocation code handles this in a subtle way
  too, so it's better to document this explicitly. The libavutil image
  allocation code does similar things.

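  A minimal sketch of the rounding described above (illustrative, not the
  actual gl_video code): for a 4:2:0 frame with odd dimensions, the chroma
  texture sizes are rounded up so the right/bottom border is covered.

  -----------
  /* e.g. an 853x481 luma plane gets 427x241 chroma textures, not 426x240 */
  void chroma_tex_size(int luma_w, int luma_h, int *chroma_w, int *chroma_h)
  {
      *chroma_w = (luma_w + 1) / 2;
      *chroma_h = (luma_h + 1) / 2;
  }
  -----------
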
* mp_image: make reference counting thread-safe  (wm4, 2013-07-28, 1 file, -3/+27)

  This hasn't been done yet, because pthreads is still an optional
  dependency, so this is a bit annoying. Now doing it anyway, because maybe
  we will need this capability in the future.

  We keep it as simple as possible. We (probably) don't need anything more
  sophisticated, and keeping it simple avoids introducing weird bugs. So, no
  atomic instructions, no fine-grained locks, no cleverness.

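  An illustrative sketch of the approach described above (plain mutex, no
  atomics); the struct and function names here are made up, not mpv's actual
  ones.

  -----------
  #include <pthread.h>
  #include <stdbool.h>

  struct refcount {
      pthread_mutex_t lock;
      int count;
  };

  void ref_inc(struct refcount *r)
  {
      pthread_mutex_lock(&r->lock);
      r->count++;
      pthread_mutex_unlock(&r->lock);
  }

  /* Returns true if this was the last reference; the caller then frees the
   * underlying image data. */
  bool ref_dec(struct refcount *r)
  {
      pthread_mutex_lock(&r->lock);
      bool last = --r->count == 0;
      pthread_mutex_unlock(&r->lock);
      return last;
  }
  -----------
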
* mp_image: pass through colorspace info to libavfilter  (wm4, 2013-07-28, 1 file, -0/+8)

  This change affects vf_lavfi.

  Until recently, libavfilter was not colorspace aware at all. This changed
  with the addition of colorspace fields to AVFrame. libavfilter's vf_scale
  picks them up (as of recent ffmpeg git). Since this support is still kind
  of wonky and not part of the normal format negotiation, this won't set the
  correct output colorspace, though.

  Not adding a separate test for HAVE_AVFRAME_COLORSPACE. This is slightly
  unclean, but on the other hand adding an explicit test seems like a waste
  of effort.

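  For reference, a hedged sketch of what "colorspace fields on AVFrame"
  means; this is not mpv's conversion code, just the libavutil fields
  involved.

  -----------
  #include <libavutil/frame.h>
  #include <libavutil/pixfmt.h>

  void tag_frame_colorspace(AVFrame *frame)
  {
      frame->colorspace  = AVCOL_SPC_BT709;   /* matrix coefficients */
      frame->color_range = AVCOL_RANGE_MPEG;  /* limited ("TV") range */
  }
  -----------
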
* video: support setting libswscale chroma position  (wm4, 2013-07-25, 1 file, -1/+3)

* mp_image: create AVBuffers for all planes when converting to AVFrame  (wm4, 2013-07-24, 1 file, -1/+9)

  It appears the API requires you to cover all plane data with AVBuffers
  (that is, one AVBuffer per plane in the most general case), because certain
  code can make certain assumptions about this. (Insert rant about how this
  is barely useful and increases complexity and potential bugs.)

  I don't know any cases where the current code actually fails, but we want
  to follow the API, so do it anyway.

  Note that we don't really know whether or not planes are from a single
  memory allocation, so we have to assume the most general case and create an
  AVBuffer for each plane. We simply assume that the data is padded to the
  full stride in the last image line. All these extra dummy references are
  stupid, but the code might become much simpler once we only support
  libavcodec versions with refcounting and can use AVFrame directly.

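  A sketch of the idea (illustrative, not the literal mpv code): wrap each
  plane's memory in its own AVBufferRef so that all of frame->data[] is
  covered. The no-op free callback reflects that the memory is owned
  elsewhere.

  -----------
  #include <stdint.h>
  #include <libavutil/buffer.h>
  #include <libavutil/frame.h>

  static void noop_free(void *opaque, uint8_t *data)
  {
      /* The underlying image reference is released elsewhere. */
  }

  int wrap_planes(AVFrame *frame, uint8_t *planes[], int plane_sizes[],
                  int num_planes)
  {
      for (int n = 0; n < num_planes; n++) {
          frame->buf[n] = av_buffer_create(planes[n], plane_sizes[n],
                                           noop_free, NULL,
                                           AV_BUFFER_FLAG_READONLY);
          if (!frame->buf[n])
              return -1;
      }
      return 0;
  }
  -----------
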
* img_format: add a mask for color class  (wm4, 2013-07-18, 1 file, -2/+1)

  Using the term "color class" to avoid confusion with the other colorspace
  related concepts. Also get rid of MP_IMGFLAG_FMT_MASK, since it was unused.

* mp_image: one utility function to set image parameters  (wm4, 2013-07-18, 1 file, -0/+11)

* sws_utils: refactor swscale wrapper code  (wm4, 2013-07-18, 1 file, -13/+9)

  This splits the monolithic mp_image_swscale() function into a bunch of
  functions and a context struct. This means it's possible to set arbitrary
  parameters (e.g. even obscure ones, without getting in the way), and you
  don't have to create the context on every call.

  This code is preparation for removing duplicated libswscale API usage from
  other parts of the code.

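  A hypothetical sketch of the "context struct plus small functions" shape
  this describes; the names and fields are illustrative, not the real
  sws_utils.c API.

  -----------
  #include <libswscale/swscale.h>

  struct scaler_ctx {
      struct SwsContext *sws;
      int flags;                  /* e.g. SWS_LANCZOS | SWS_ACCURATE_RND */
      int src_w, src_h, dst_w, dst_h;
      enum AVPixelFormat src_fmt, dst_fmt;
  };

  /* Re-create the libswscale context from whatever parameters are currently
   * set in the wrapper context; callers can tweak any field before calling
   * this, and then reuse the context across frames. */
  int scaler_reinit(struct scaler_ctx *ctx)
  {
      if (ctx->sws)
          sws_freeContext(ctx->sws);
      ctx->sws = sws_getContext(ctx->src_w, ctx->src_h, ctx->src_fmt,
                                ctx->dst_w, ctx->dst_h, ctx->dst_fmt,
                                ctx->flags, NULL, NULL, NULL);
      return ctx->sws ? 0 : -1;
  }
  -----------
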
* video: redo how colorspaces are handled  (wm4, 2013-07-16, 1 file, -0/+2)

  Instead of handling colorspaces with VFCTRLs/VOCTRLs, make them part of the
  normal video format negotiation. The colorspace is passed down like other
  video params with config/reconfig calls.

  Forcing colorspaces (via the --colormatrix options and properties) is
  handled differently too: if it's changed, completely reinit the video
  chain. This is slower and requires a precise seek to the same position to
  perform an update, but it's simpler and less bug-prone. Considering that
  switching the colorspace at runtime by user interaction is a rather obscure
  feature, this is a good change.

  The colorspace VFCTRLs and VOCTRLs are still kept. The VOs rely on them,
  and would have to be changed to get rid of them. We'll do that later, and
  convert them incrementally instead of in one go.

  Note that controlling the output range now always works on VO level.
  Basically, this means you can't get vf_scale to output full-range YUV for
  whatever reason. If that is really wanted, it should be a vf_scale option.
  The previous behavior didn't make too much sense anyway.

  This commit fixes a few bugs (such as playing RGB video and converting that
  to YUV with vf_scale - a recent commit broke this and forced the VO to
  display YUV as RGB if possible), and might introduce some new ones.

* mp_image: explicitly forbid using RGB colorspace with YUV formats  (wm4, 2013-07-15, 1 file, -0/+9)

  This probably has more potential for breakage than it would be of use.

* mp_image: refactor colorspace guessing/fallback  (wm4, 2013-07-15, 1 file, -9/+44)

  This actually handles XYZ too.

* csputils.h: don't recursively include libavcodec header  (wm4, 2013-06-28, 1 file, -0/+1)

  Some functions (avcol_spc_to_mp_csp() etc.) used libavcodec enum types as
  parameters. Remove these in order to get rid of the avcodec.h include
  statement. This prevents avcodec.h from being recursively included by
  dozens of files.

  Fix mp_image.c, which used the header without explicitly including
  avcodec.h.

* vo_opengl: handle chroma location  (wm4, 2013-06-28, 1 file, -0/+1)

  Use the video decoder chroma location flags and render chroma locations
  other than centered. Until now, we've always used the intuitive and obvious
  centered chroma location, but H.264 uses something else.

  FFmpeg provides a small overview in libavcodec/avcodec.h:

  -----------
  /**
   *  X   X      3 4 X      X are luma samples,
   *             1 2        1-6 are possible chroma positions
   *  X   X      5 6 X      0 is undefined/unknown position
   */
  enum AVChromaLocation{
      AVCHROMA_LOC_UNSPECIFIED = 0,
      AVCHROMA_LOC_LEFT        = 1, ///< mpeg2/4, h264 default
      AVCHROMA_LOC_CENTER      = 2, ///< mpeg1, jpeg, h263
      AVCHROMA_LOC_TOPLEFT     = 3, ///< DV
      AVCHROMA_LOC_TOP         = 4,
      AVCHROMA_LOC_BOTTOMLEFT  = 5,
      AVCHROMA_LOC_BOTTOM      = 6,
      AVCHROMA_LOC_NB          , ///< Not part of ABI
  };
  -----------

  The visual difference is literally minimal, but since videophiles
  apparently consider this detail as quality mark of a video renderer,
  support it anyway. We don't bother with chroma locations other than
  centered and left, though.

  Not sure about correctness, but it's probably ok.

* video: add a new method to configure filters and VOs  (wm4, 2013-06-28, 1 file, -3/+2)

  The filter chain and the video outputs have config() functions. They are
  strictly limited to transferring the video size and format. Other
  parameters (like color levels) have to be transferred separately.

  Improve upon this by introducing a separate set of reconfig() functions,
  which use mp_image_params to carry format parameters. This struct contains
  all image format related parameters from config(), plus additional
  parameters such as colorspace.

  Change vf_rotate to use it, as well as vo_opengl. vf_rotate is just an
  example/test case, but vo_opengl will need it later.

  The intention is also to get rid of VOCTRL_SET_YUV_COLORSPACE. This
  information is now handed to the VOs via reconfig(). The getter,
  VOCTRL_GET_YUV_COLORSPACE, will still be needed though.

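  A hedged sketch of the idea: one params struct carries everything the old
  config() call passed, plus the colorspace-related bits. The field names
  below are illustrative, not necessarily the real mp_image_params layout.

  -----------
  struct image_params_sketch {
      int imgfmt;       /* pixel format */
      int w, h;         /* coded size */
      int d_w, d_h;     /* display size (after aspect correction) */
      int colorspace;   /* e.g. BT.601 vs. BT.709 matrix */
      int colorlevels;  /* limited vs. full range */
  };

  /* A filter or VO then gets a single call along the lines of
   *     int (*reconfig)(void *instance, struct image_params_sketch *p);
   * instead of config(w, h, d_w, d_h, flags, fmt) plus separate
   * colorspace VFCTRLs/VOCTRLs. */
  -----------
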
* mp_image: copy palette only if allocated  (wm4, 2013-06-28, 1 file, -1/+2)

  Normally, we assume that IMGFMT_PAL8 always has a palette allocated in
  plane 1. But there may be corner cases in ffmpeg where it doesn't (namely
  pseudo-pal stuff).

* mp_image: align image allocation height  (wm4, 2013-05-17, 1 file, -1/+2)

  vo_vdpau actually reads past the image allocation when displaying a
  non-mod-2 420p image. The vdpau API specifies that
  VdpVideoSurfacePutBitsYCbCr() requires a height that is a multiple of 4,
  and surface allocations are automatically rounded. So allocate video images
  with rounded height.

  libavutil does the same, so images coming directly from the decoder or from
  libavfilter are no problem. (libavutil does this alignment explicitly, not
  just because the decoded image size is aligned to macroblocks.)

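  A minimal sketch of the fix (assuming the multiple-of-4 requirement quoted
  above; not the literal allocator code):

  -----------
  /* Round the allocated height up to a multiple of 4, so e.g. 1080 stays
   * 1080 but 1082 becomes 1084; readers like VdpVideoSurfacePutBitsYCbCr()
   * then never touch memory past the allocation. */
  int aligned_alloc_height(int h)
  {
      return (h + 3) & ~3;
  }
  -----------
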
* mp_image: provide function to convert mp_image to AVFrame  (wm4, 2013-04-21, 1 file, -0/+50)

  Note that this does not pass through QP information (qscale field). The
  only filter for which this matters is vf_pp, and we have this natively.

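  The core of such a conversion is copying plane pointers and strides; a
  sketch follows (with a stand-in image struct, not the real mp_image or the
  real conversion function).

  -----------
  #include <stdint.h>
  #include <libavutil/frame.h>

  struct fake_image {        /* stand-in for struct mp_image */
      uint8_t *planes[4];
      int stride[4];
      int num_planes;
      int w, h;
  };

  void fill_av_frame(AVFrame *frame, struct fake_image *img)
  {
      frame->width = img->w;
      frame->height = img->h;
      for (int n = 0; n < img->num_planes; n++) {
          frame->data[n] = img->planes[n];
          frame->linesize[n] = img->stride[n];
      }
      /* QP/qscale data is deliberately not carried over, as noted above. */
  }
  -----------
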
* video: use new method to get QP table  (wm4, 2013-03-15, 1 file, -5/+9)

  This only matters for those who want to use vf_pp. The old API is marked as
  deprecated, and doesn't work on Libav. It was broken on FFmpeg, but has
  recently started working again - the fields in question were not
  un-deprecated though. Instead you're supposed to use a new API, which does
  exactly the same thing (what...?).

  Also don't pass the QP table with mp_image_copy_attributes() - it probably
  does more harm than it's useful.

  By the way, with -vf=dlopen=TOOLS/vf_dlopen/showqscale.so, it appears the
  table as output by recent FFmpeg is offset by 1 macroblock in X direction
  and 2 macroblocks in Y direction, which most likely will interfere with
  normal vf_pp operation. However, this is not my problem.

  The only real reason for this commit is that we can finally get rid of all
  libav* related deprecation warnings. (Though they are constantly
  deprecating APIs, so this will not last long.)

* video: make use of libavcodec refcounting  (wm4, 2013-03-13, 1 file, -0/+28)

  Now lavc_dr1.c is not used anymore if libavcodec is recent enough.

* video: prepare for libavcodec refcounting  (wm4, 2013-03-13, 1 file, -15/+38)

  Some minor refactoring and moving code around. There should be no
  functional changes.

* vf_flip: move flipping code to mp_image.c  (wm4, 2013-03-01, 1 file, -0/+8)

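  The usual in-place flip trick (an assumed sketch of how such a helper
  works, not necessarily the exact moved code): point each plane at its last
  line and negate the stride, so no pixel data is copied.

  -----------
  #include <stdint.h>

  /* height must be the plane's own height (i.e. the chroma height for
   * subsampled chroma planes). */
  void flip_plane(uint8_t **plane, int *stride, int height)
  {
      *plane += (int64_t)*stride * (height - 1);
      *stride = -*stride;
  }
  -----------
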
* mp_image: remove mp_image.bpp  (wm4, 2013-01-13, 1 file, -1/+0)

  This field contained the "average" bit depth per pixel. It serves no
  purpose anymore. Remove it.

  Only vo_opengl_old still uses this in order to allocate a buffer that is
  shared between all planes.

* vf_expand: support more image formats  (wm4, 2013-01-13, 1 file, -40/+38)

  This did random things with some image formats, including 10 bit formats.
  Fixes the mp_image_clear() function too.

  This still has some caveats:
  - doesn't clear alpha to opaque (too hard with packed RGB, and is rarely
    needed)
  - sets luma to 0 for mpeg-range YUV, instead of the lowest possible value
    (e.g. 16 for 8 bit)

* mp_image: add mp_image_crop()  (wm4, 2013-01-13, 1 file, -0/+22)

  Actually stolen from draw_bmp.c.

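  An illustrative sketch of how such a crop typically works (offset the plane
  pointers, keep the strides); not the literal mp_image_crop() body.

  -----------
  #include <stdint.h>

  void crop_plane(uint8_t **plane, int stride, int bytes_per_pixel,
                  int x0, int y0)
  {
      *plane += (int64_t)y0 * stride + (int64_t)x0 * bytes_per_pixel;
  }

  /* For subsampled chroma planes, x0/y0 must first be shifted right by the
   * chroma shift, which is why crop offsets are usually required to be
   * aligned to the chroma subsampling. */
  -----------
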
* video: remove img_format compat hacks  (wm4, 2013-01-13, 1 file, -11/+7)

  Remove the strange things the old mp_image_setfmt() code did to the image
  format parameters. This includes setting chroma shift to 31 for gray (Y8)
  formats and more.

  Y8 + vo_opengl_old didn't actually work for unknown reasons (regression in
  this branch). Fix this. The difference is that Y8 is now interpreted as
  gray RGB (LUMINANCE texture) instead of involving YUV (and levels)
  conversion.

  Get rid of distinguishing RGB and BGR. There wasn't really any good reason
  for this.

  Remove mp_get_chroma_shift() and IMGFMT_IS_YUVP16*(). mp_imgfmt_desc gives
  the same information and more.

* video: cleanup: move and rename vf_mpi_clear and vf_clone_attributes  (wm4, 2013-01-13, 1 file, -6/+70)

  These functions weren't specific to video filters and were misplaced in
  vf.c. Move them to mp_image.c.

  Fix the big endian test in vf_mpi_clear/mp_image_clear, which has been
  messed up in 74df1d.

* mp_image: change how palette is handled  (wm4, 2013-01-13, 1 file, -3/+3)

  According to DOCS/OUTDATED-tech/colorspaces.txt, the following formats are
  supposed to be palettized:

    IMGFMT_BGR8
    IMGFMT_RGB8
    IMGFMT_BGR4_CHAR
    IMGFMT_RGB4_CHAR
    IMGFMT_BGR4
    IMGFMT_RGB4

  Of these, only BGR8 and RGB8 are actually treated as palettized in some
  way. ffmpeg has only one palettized format (AV_PIX_FMT_PAL8), and
  IMGFMT_BGR8 was inconsistently mapped to packed non-palettized RGB formats
  too (AV_PIX_FMT_BGR8). Moreover, vf_scale.c contained messy hacks to
  generate a palette when AV_PIX_FMT_BGR8 is output. (libswscale does not
  support AV_PIX_FMT_PAL8 output in the first place.)

  Get rid of all of this, and introduce IMGFMT_PAL8, which directly maps to
  AV_PIX_FMT_PAL8. Remove the palette creation code from vf_scale.c.
  IMGFMT_BGR8 maps to AV_PIX_FMT_RGB8 (don't ask me why it's swapped),
  without any palette use. Enabling it in vo_x11 or using it as vf_scale
  input seems to give correct results.

* mp_image: simplify image allocation  (wm4, 2013-01-13, 1 file, -85/+42)

  mp_image_alloc_planes() allocated images with minimal stride, even if the
  resulting stride was unaligned. It was the responsibility of vf_get_image()
  to set an image's width to something larger than required to get an aligned
  stride, and then crop it. Always allocate with aligned strides instead.

  Get rid of IMGFMT_IF09 special handling. This format is not used anymore.
  (IF09 has 4x4 chroma sub-sampling, and that is what it was mainly used for
  - this is still supported.)

  Get rid of swapped chroma plane allocation. This is not used anywhere, and
  VOs like vo_xv, vo_direct3d and vo_sdl do their own swapping.

  Always round chroma width/height up instead of down. Consider 4:2:0 and an
  uneven image size. For luma, the size was left uneven, and the chroma size
  was rounded down. This doesn't make sense, because chroma would be missing
  for the bottom/right border.

  Remove mp_image_new_empty() and mp_image_alloc_planes(), they were not used
  anymore, except in draw_bmp.c. (It's still allowed to setup mp_images
  manually, you just can't allocate image data with them anymore - this is
  also done in draw_bmp.c.)

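  A sketch of the two allocation rules described above (illustrative, not the
  literal allocator): chroma sizes round up, and strides are padded to an
  alignment boundary. The 32-byte alignment here is an assumption for the
  example.

  -----------
  #define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

  /* xs = horizontal chroma shift of this plane (0 for luma). */
  int plane_stride(int image_w, int xs, int bytes_per_pixel)
  {
      int plane_w = (image_w + (1 << xs) - 1) >> xs;  /* round up */
      return ALIGN_UP(plane_w * bytes_per_pixel, 32);
  }
  -----------
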
* video: use libavutil pixel format descriptors  (wm4, 2013-01-13, 1 file, -128/+21)

  Replace the internal pixel format stuff with code that queries the
  libavutil list of pixel format descriptors. Trying to map IMGFMT_IS_RGB()
  etc. turned out extremely hacky.

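  An example of the libavutil API this switches to (the mapping back into
  mpv's own mp_imgfmt_desc is not shown here):

  -----------
  #include <libavutil/pixdesc.h>

  void describe(enum AVPixelFormat fmt)
  {
      const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt);
      if (!d)
          return;
      int components = d->nb_components;
      int chroma_xs  = d->log2_chroma_w;  /* horizontal chroma shift */
      int chroma_ys  = d->log2_chroma_h;  /* vertical chroma shift */
      int is_rgb     = !!(d->flags & AV_PIX_FMT_FLAG_RGB);
      (void)components; (void)chroma_xs; (void)chroma_ys; (void)is_rgb;
  }
  -----------
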
* video: remove things related to old DR code  (wm4, 2013-01-13, 1 file, -19/+19)

  Remove mp_image.width/height. The w/h members are the ones to use.
  width/height were used internally by vf_get_image(), and sometimes for
  other purposes.

  Remove some image flags, most of which are now useless or completely
  unused. This includes VFCAP_ACCEPT_STRIDE: the vf_expand insertion in vf.c
  does nothing.

  Remove some other unused mp_image fields.

  Some rather messy changes in vo_opengl[_old] to get rid of legacy mp_image
  flags and fields. This is left from when vo_gl supported DR.

* mp_image: require using mp_image_set_size() for setting w/h  (wm4, 2013-01-13, 1 file, -2/+17)

  Setting the size of an mp_image must be done with mp_image_set_size() now.
  Do this to guarantee that the redundant fields (like chroma_width) are
  updated consistently. Replacing the redundant fields by function calls
  would probably be better, but there are too many uses of them, and it is a
  bit less convenient.

  Most code actually called mp_image_setfmt(), which did this as well. This
  commit just makes things a bit more explicit.

  Warning: the video filter chain still sets up mp_images manually, and
  vf_get_image() is not updated.

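  A hedged sketch of why a single setter helps: the redundant fields have to
  be recomputed whenever w/h change. The struct and rounding below are
  illustrative; the real mp_image and mp_image_set_size() may differ.

  -----------
  struct image_sketch {
      int w, h;
      int chroma_x_shift, chroma_y_shift;
      int chroma_width, chroma_height;   /* redundant, derived from w/h */
  };

  void set_size(struct image_sketch *img, int w, int h)
  {
      img->w = w;
      img->h = h;
      img->chroma_width  = w >> img->chroma_x_shift;
      img->chroma_height = h >> img->chroma_y_shift;
  }
  -----------
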
* mp_image: refcounting helpers  (wm4, 2013-01-13, 1 file, -18/+220)

* video: add support for 12 and 14 bit YUV pixel formats  (Stephen Hutchinson, 2012-12-03, 1 file, -0/+12)

  Based on a patch by qyot27. Add the missing parts in mp_get_chroma_shift(),
  which allow allocation of such images, and which make vo_opengl
  automatically accept the new formats.

  Change the IMGFMT_IS_YUVP16_LE/BE macros to properly report IMGFMT_444P14
  as supported: this pixel format has the highest numerical bit width
  identifier (0x55), which is not covered by the mask ~0xfc. Remove 1 bit
  from the mask (makes it 0xf8) so that IMGFMT_IS_YUVP16(IMGFMT_444P14) is 1.
  This is slightly risky, as the organization of the image format IDs
  (actually FourCCs + mplayer internal IDs) is messy at best, but it should
  be ok.

* mp_image: make alloc_mpi() always allocate with aligned stride  (wm4, 2012-11-22, 1 file, -17/+3)

  By "design", mplayer normally allocates aligned images only inside the
  filter chain, via the vf_get_image() function. This function pads the width
  of the requested image if a stride is allowed, and sets that new width
  before calling mp_image_alloc_planes(). However, newer code wants aligned
  images as well (basically to satisfy libswscale). This affects all uses of
  alloc_mpi().

  To get aligned strides, simply change alloc_mpi() to request an aligned
  width. Remove the old hack in mp_image_alloc_planes(), which special-cases
  some image formats to be allocated with aligned strides.

  This is a temporary hack until mp_image_alloc_planes() is revised.

* draw_bmp: add RGB rendering to fix image quality issues  (wm4, 2012-11-22, 1 file, -0/+4)

  As pointed out in commit ed01df, the quality loss due to frequent
  conversion between RGB and YUV is too much when drawing OSD and subtitles.
  Fix this by staying in the same colorspace when drawing subtitles. Render
  directly to RGB, without converting to YUV first.

  The bad thing about packed RGB is that there are many pixel formats, which
  would all require special code for blending. It's also completely
  incompatible with planar YUV. Use planar RGB instead, which allows us to
  reuse all code originally written for planar YUV. The only thing that needs
  to be changed is the color conversion in the libass case. (In exchange for
  simpler code, the image has to be copied, but this is still much better
  than converting to YUV.)

  Unfortunately, libswscale doesn't support planar RGB output. Add a hack to
  sws_utils.c to handle conversion to planar RGB. In the common case, when
  converting 32 bit per pixel RGB, calling swscale can be avoided entirely.

  The change in mp_image.c is needed to allocate GBRP images correctly. (The
  issue with vo_x11 could be easily solved by always backing up the same
  bounding box as the bitmap drawing RGB<->YUV conversion does, but this
  commit is probably the better fix.)

* video: add IMGFMT_Y16/PIX_FMT_GRAY16  (wm4, 2012-11-14, 1 file, -0/+2)

  This pixel format is sometimes used with yuv4mpeg. vo_direct3d used its own
  IMGFMT_Y16 internally for some reason. vo_opengl, vo_opengl_old, and
  vo_direct3d should be able to display this pixel format natively.

* Rename directories, move files (step 2 of 2)  (wm4, 2012-11-12, 1 file, -4/+4)

  Finish renaming directories and moving files. Adjust all include statements
  to make the previous commit compile.

  The two commits are separate, because git is bad at tracking renames and
  content changes at the same time.

  Also take this as an opportunity to remove the separation between "common"
  and "mplayer" sources in the Makefile. ("common" used to be shared between
  mplayer and mencoder.)

* Rename directories, move files (step 1 of 2) (does not compile)  (wm4, 2012-11-12, 1 file, -0/+280)

  This drops the silly lib prefixes, and attempts to organize the tree in a
  more logical way. Make the top-level directory less cluttered as well.

  Renames the following directories:
    libaf -> audio/filter
    libao2 -> audio/out
    libvo -> video/out
    libmpdemux -> demux

  Split libmpcodecs:
    vf* -> video/filter
    vd*, dec_video.* -> video/decode
    mp_image*, img_format*, ... -> video/
    ad*, dec_audio.* -> audio/decode

  libaf/format.* is moved to audio/ - this is similar to how mp_image.* is
  located in video/.

  Move most top-level .c/.h files to core. (talloc.c/.h is left on top-level,
  because it's external.)

  Park some of the more annoying files in compat/. Some of these are relicts
  from the time mplayer used ffmpeg internals.

  sub/ is not split, because it's too much of a mess (subtitle code is mixed
  with OSD display and rendering).

  Maybe the organization of core is not ideal: it mixes playback core (like
  mplayer.c) and utility helpers (like bstr.c/h). Should the need arise, the
  playback core will be moved somewhere else, while core contains all helper
  and common code.
