path: root/video/filter/vf_scale.c
Commit message (author, date, files changed, lines removed/added)
* vf_scale: fix warning (wm4, 2013-01-27, 1 file, -1/+0)
* video: move handling of -x/-y/-xy options to VO (wm4, 2013-01-23, 1 file, -4/+0)
  Now the calculations of the final display size are done after the filter chain. This makes the difference between display aspect ratio and window size a bit more clear, especially in the -xy case.

  With an empty filter chain, the behavior of the options should be the same, except that they don't affect vo_image and vo_lavc anymore.
* video: decouple internal pixel formats from FourCCs (wm4, 2013-01-13, 1 file, -23/+16)
  mplayer's video chain traditionally used FourCCs for pixel formats. For example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined to the string 'YV12' interpreted as unsigned int. Additionally, it used to encode information into the numeric values of some formats. The RGB formats had their bit depth and endianness encoded into the least significant byte. Extended planar formats (420P10 etc.) had chroma shift, endianness, and component bit depth encoded. (This has been removed in recent commits.)

  Replace the FourCC mess with a simple enum. Remove all the redundant formats like YV12/I420/IYUV. Replace some image format names by something more intuitive, most importantly IMGFMT_YV12 -> IMGFMT_420P. Add img_fourcc.h, which contains the old IDs for code that actually uses FourCCs.

  Change the way demuxers that output raw video identify the video format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to request the rawvideo decoder, and sh_video->imgfmt specifies the pixel format. Like the previous hack, this is supposed to avoid the need for a complete codecs.cfg entry per format, or other lookup tables. (Note that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for NUT raw video, but this is still considered better than adding a raw video decoder - even if trivial, it would be full of annoying lookup tables.)

  The TV code has not been tested. Some corrective changes regarding endianness and other image format flags creep in.
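
  For illustration, a minimal sketch of how a four-character tag like 'YV12' ends up as a single unsigned int. The MP_FOURCC name mirrors the macro style of img_fourcc.h mentioned above, but the exact definition shown here is an assumption made for this example:

      #include <stdint.h>
      #include <stdio.h>

      /* Sketch only: pack four characters into a 32-bit tag, the way 'YV12'
       * becomes an unsigned int.  The real macro lives in img_fourcc.h; this
       * definition is illustrative, not copied from the source tree. */
      #define MP_FOURCC(a, b, c, d) \
          ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
           ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

      int main(void)
      {
          printf("YV12 = 0x%08x\n", MP_FOURCC('Y', 'V', '1', '2'));
          return 0;
      }
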
* mp_image: change how palette is handled (wm4, 2013-01-13, 1 file, -52/+1)
  According to DOCS/OUTDATED-tech/colorspaces.txt, the following formats are supposed to be palettized: IMGFMT_BGR8, IMGFMT_RGB8, IMGFMT_BGR4_CHAR, IMGFMT_RGB4_CHAR, IMGFMT_BGR4, IMGFMT_RGB4.

  Of these, only BGR8 and RGB8 are actually treated as palettized in some way. ffmpeg has only one palettized format (AV_PIX_FMT_PAL8), and IMGFMT_BGR8 was inconsistently mapped to packed non-palettized RGB formats too (AV_PIX_FMT_BGR8). Moreover, vf_scale.c contained messy hacks to generate a palette when AV_PIX_FMT_BGR8 is output. (libswscale does not support AV_PIX_FMT_PAL8 output in the first place.)

  Get rid of all of this, and introduce IMGFMT_PAL8, which directly maps to AV_PIX_FMT_PAL8. Remove the palette creation code from vf_scale.c. IMGFMT_BGR8 maps to AV_PIX_FMT_RGB8 (don't ask me why it's swapped), without any palette use. Enabling it in vo_x11 or using it as vf_scale input seems to give correct results.
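
  As a rough illustration of the kind of palette-generation hack that was removed: for a packed 3:3:2 8-bit RGB output, a palette entry for index i can be synthesized from the bit fields of i. This is a plausible reconstruction of the idea, not the removed vf_scale.c code, and the assumed bit layout (r: bits 5-7, g: bits 2-4, b: bits 0-1) may not match the real format:

      #include <stdint.h>

      /* Sketch: build a 256-entry ARGB palette for a packed 3:3:2 8-bit RGB
       * format, the sort of thing vf_scale.c used to do by hand when
       * libswscale produced 8-bit packed RGB output.  Layout is assumed. */
      static void make_rgb8_palette(uint32_t palette[256])
      {
          for (int i = 0; i < 256; i++) {
              uint8_t r = ((i >> 5) & 7) * 255 / 7;   /* 3 bits of red   */
              uint8_t g = ((i >> 2) & 7) * 255 / 7;   /* 3 bits of green */
              uint8_t b = (i & 3) * 255 / 3;          /* 2 bits of blue  */
              palette[i] = 0xFF000000u | (r << 16) | (g << 8) | b;
          }
      }
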
* video/filter: change filter API, use refcounting, remove filter DR (wm4, 2013-01-13, 1 file, -24/+8)
  Change the entire filter API to use reference counted images instead of vf_get_image().

  Remove filter "direct rendering". This was useful for vf_expand and (in rare cases) vf_sub: DR allowed these filters to pass a cropped image to the filters before them. Then, on filtering, the image was "uncropped", so that black bars could be added around the image without copying. This means that in some cases, vf_expand will be slower (-vf gradfun,expand for example).

  Note that another form of DR used for in-place filters has been replaced by simpler logic. Instead of trying to do DR, filters can check if the image is writeable (with mp_image_is_writeable()), and do true in-place if that's the case. This affects filters like vf_gradfun and vf_sub.

  Everything has to support strides now. If something doesn't, making a copy of the image data is required.
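
  A minimal sketch of the "check writeability, then filter in place" pattern described above. mp_image_is_writeable() is the check named in the commit message (its signature is approximated here); copy_image() and filter_pixels_inplace() are hypothetical placeholders for this illustration:

      #include <stdbool.h>

      struct mp_image;                                    /* opaque here */
      bool mp_image_is_writeable(struct mp_image *img);   /* approximated */
      struct mp_image *copy_image(struct mp_image *img);  /* hypothetical */
      void filter_pixels_inplace(struct mp_image *img);   /* hypothetical */

      static struct mp_image *filter(struct mp_image *in)
      {
          struct mp_image *out = in;
          if (!mp_image_is_writeable(in))
              out = copy_image(in);   /* refcounted image is shared: copy first */
          filter_pixels_inplace(out); /* safe: we own 'out' exclusively now */
          return out;
      }
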
* video: remove slice based filtering and video output (wm4, 2013-01-13, 1 file, -41/+1)
  Slices allowed filtering or drawing video in horizontal bands or blocks. This allowed working on the video in smaller units. In theory, this could bring a performance win by lowering cache pressure, as you didn't have to keep the whole video frame in cache while filtering, only the slice.

  In practice, the slice code path was barely used, for the following reasons:

  - Multithreaded decoding with ffmpeg didn't use slices. The ffmpeg slice callback was disabled, because it can be called from another thread, and the mplayer video chain is not thread-safe.
  - There was nothing that would turn "full" images into appropriate slices, so slices were rarely used.
  - Most filters didn't actually support slices.

  On the other hand, supporting slices led to code duplication and more complex code in general. I made some experiments and didn't find any actual measurable performance improvements when using slices. Even ffmpeg removed slice based filtering from libavfilter in favor of simpler code.

  The most broken thing about the slice code path is that slices can't be queued, like it is done for images in vo.c.
* vf_scale: prefer 420P10 -> YV12 instead of 444P (wm4, 2012-12-28, 1 file, -0/+1)
  Strictly speaking, 444P is higher quality than YV12, but converting 420P10 to 444P is not very useful when playing 10 bit video with -vo opengl-old on a GPU that doesn't support 16 bit textures.
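
  A sketch of what such a preference amounts to: when the VO can't take a format directly, vf_scale picks the first acceptable entry from an ordered candidate list. The array below is illustrative, not the actual vf_scale.c table, and the constants are stand-ins for the real IMGFMT_* values:

      /* Illustrative only: stand-in values for the real IMGFMT_* constants. */
      enum { IMGFMT_YV12 = 1, IMGFMT_444P = 2 };

      /* Ordered fallback candidates for 420P10 when the VO can't take it
       * directly: listing YV12 first expresses the preference in this commit. */
      static const int fallback_for_420p10[] = {
          IMGFMT_YV12,   /* preferred: same chroma subsampling as the source */
          IMGFMT_444P,   /* nominally higher quality, but pointless on 8-bit paths */
          0,             /* end marker */
      };
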
* vf_scale: support more pixel formats (Rudolf Polzer, 2012-12-28, 1 file, -0/+4)
  The RGBA, ARGB, BGRA, and ABGR formats were previously not always supported, depending on system endianness.
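
  To illustrate why endianness matters here: a packed 32-bit pixel stored as one native-endian integer changes its in-memory byte order between little- and big-endian hosts, so byte-ordered names like RGBA/BGRA need per-endian mappings. A small self-contained check, with nothing mpv-specific in it:

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* Store one "ARGB" pixel as a native 32-bit word and inspect the byte
       * order in memory: little-endian machines give B,G,R,A, big-endian
       * machines give A,R,G,B. */
      int main(void)
      {
          uint32_t argb = 0xAA112233u;  /* A=0xAA R=0x11 G=0x22 B=0x33 */
          uint8_t bytes[4];
          memcpy(bytes, &argb, 4);
          printf("in memory: %02x %02x %02x %02x\n",
                 bytes[0], bytes[1], bytes[2], bytes[3]);
          return 0;
      }
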
* video: add support for 12 and 14 bit YUV pixel formats (Stephen Hutchinson, 2012-12-03, 1 file, -0/+12)
  Based on a patch by qyot27. Add the missing parts in mp_get_chroma_shift(), which allow allocation of such images, and which make vo_opengl automatically accept the new formats.

  Change the IMGFMT_IS_YUVP16_LE/BE macros to properly report IMGFMT_444P14 as supported: this pixel format has the highest numerical bit width identifier (0x55), which is not covered by the mask ~0xfc. Remove 1 bit from the mask (makes it 0xf8) so that IMGFMT_IS_YUVP16(IMGFMT_444P14) is 1. This is slightly risky, as the organization of the image format IDs (actually FourCCs + mplayer internal IDs) is messy at best, but it should be ok.
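
  For context on what the chroma shifts buy you during allocation: the shift values determine the chroma plane dimensions and, together with the per-component bit depth, the amount of memory per plane. This sketch does not reproduce mp_get_chroma_shift()'s actual interface; the names and layout are assumptions:

      #include <stddef.h>

      /* Sketch: how chroma shifts and bit depth translate into plane sizes
       * for a planar YUV image such as 420P12 or 444P14.  Illustrative only. */
      struct yuv_layout { int chroma_xs, chroma_ys, bit_depth; };

      static size_t plane_bytes(struct yuv_layout f, int w, int h, int plane)
      {
          int bytes_per_sample = (f.bit_depth + 7) / 8;  /* 12/14 bit -> 2 bytes */
          if (plane > 0) {                               /* chroma planes are subsampled */
              w = (w + (1 << f.chroma_xs) - 1) >> f.chroma_xs;
              h = (h + (1 << f.chroma_ys) - 1) >> f.chroma_ys;
          }
          return (size_t)w * h * bytes_per_sample;
      }
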
* sws_utils: remove unused helper (wm4, 2012-11-24, 1 file, -1/+5)
  sws_getContextFromCmdLine_hq() was used by the screenshot code, which now uses mp_image_swscale(). Also move the mp_sws_set_colorspace() declaration from sws_utils.h to vf_scale.c.
* options, vo_x11: remove -zoom option, make it default (wm4, 2012-11-16, 1 file, -27/+1)
  The -zoom option enabled scaling with vo_x11. Remove the -zoom option, and make its behavior default. Since vo_x11 has to use libswscale for colorspace conversion anyway, which doesn't do actual extra scaling when vo_x11 is run in windowed mode, there should be no speed difference with this change.

  The code removed from vf_scale attempted to scale the video to d_width/d_height, which matters for anamorphic video and the --xy option only. vo_x11 can handle these natively. The only case for which the removed vf_scale code could matter is encoding with vo_lavc, but since that didn't set VOFLAG_SWSCALE, nothing actually changes.
* Rename directories, move files (step 2 of 2) (wm4, 2012-11-12, 1 file, -11/+11)
  Finish renaming directories and moving files. Adjust all include statements to make the previous commit compile.

  The two commits are separate, because git is bad at tracking renames and content changes at the same time.

  Also take this as an opportunity to remove the separation between "common" and "mplayer" sources in the Makefile. ("common" used to be shared between mplayer and mencoder.)
* Rename directories, move files (step 1 of 2) (does not compile) (wm4, 2012-11-12, 1 file, -0/+718)
  This drops the silly lib prefixes, and attempts to organize the tree in a more logical way. Make the top-level directory less cluttered as well.

  Renames the following directories:

  - libaf -> audio/filter
  - libao2 -> audio/out
  - libvo -> video/out
  - libmpdemux -> demux

  Split libmpcodecs:

  - vf* -> video/filter
  - vd*, dec_video.* -> video/decode
  - mp_image*, img_format*, ... -> video/
  - ad*, dec_audio.* -> audio/decode

  libaf/format.* is moved to audio/ - this is similar to how mp_image.* is located in video/.

  Move most top-level .c/.h files to core. (talloc.c/.h is left on top-level, because it's external.) Park some of the more annoying files in compat/. Some of these are relics from the time mplayer used ffmpeg internals.

  sub/ is not split, because it's too much of a mess (subtitle code is mixed with OSD display and rendering).

  Maybe the organization of core is not ideal: it mixes playback core (like mplayer.c) and utility helpers (like bstr.c/h). Should the need arise, the playback core will be moved somewhere else, while core contains all helper and common code.