path: root/video/decode/vd_lavc.c
Each entry below gives the commit subject (author, date, files changed, lines -removed/+added), followed by the commit message.
* cleanup: replace OPT_FLAG_ON and OPT_MAKE_FLAGS with OPT_FLAG (wm4, 2013-02-09, 1 file, -1/+1)
OPT_MAKE_FLAGS() used to emit two options (one with "no" prefixed), but that was removed long ago by special-casing flag options in the option parser. OPT_FLAG_ON() used to imply that there was no "no-" prefixed option, but this hasn't been the case for a while either. (Conceptually, it has been replaced by OPT_FLAG_STORE().) Remove OPT_FLAG_OFF, which was unused.
* vd_lavc: add stupid hack to fix decoding of some files with Libav 0.8.x (wm4, 2013-01-24, 1 file, -5/+8)
The decoder returns frames with AVFrame.format not set correctly for some h264 files (strangely, only some). We have to access AVCodecContext.pix_fmt instead. On newer libavcodec versions it's the other way around: AVCodecContext.pix_fmt may be set incorrectly on pixel format changes, and you are supposed to use AVFrame.format. The same problem probably exists on older ffmpeg versions too.
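A minimal sketch of the version-dependent choice this commit describes; the helper name and the exact version cutoff are illustrative assumptions, not the literal vd_lavc.c code:

    #include <libavcodec/avcodec.h>

    /* Pick the pixel format the rest of the player should trust.
     * Returns a value from the libavcodec pixel format enum. */
    static int effective_pix_fmt(AVCodecContext *avctx, AVFrame *frame)
    {
    #if LIBAVCODEC_VERSION_MAJOR < 54    /* cutoff is an assumption for Libav 0.8.x */
        /* Old Libav: frame->format can be stale, trust the context. */
        return avctx->pix_fmt;
    #else
        /* Newer libavcodec: the context may lag behind on mid-stream
         * format changes, so prefer the per-frame value. */
        return frame->format;
    #endif
    }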
* Silence two compiler warnings (wm4, 2013-01-16, 1 file, -1/+0)
Both should be harmless.
* vd_lavc: remove -lavdopts vstats suboption (wm4, 2013-01-13, 1 file, -94/+0)
This printed per-frame statistics, like bitrate or frame type, into a file. It was not very useful and accessed obscure AVCodecContext fields (with the attendant risk of deprecation/breakage), so get rid of it.
* vd_lavc: remove lowres decoding (wm4, 2013-01-13, 1 file, -11/+2)
This was a "broken misfeature" according to Libav developers. It wasn't implemented for modern codecs (like h264), and it was removed from Libav a while ago (the AVCodecContext field has been marked as deprecated and its value is ignored). FFmpeg still supports it, but it isn't very useful for the aforementioned reasons. Remove the code to enable it.
* vd_lavc: prefer AVFrame over AVCodecContext fields (wm4, 2013-01-13, 1 file, -18/+17)
This is more correct. Not all frame-specific fields are available in AVFrame, though: colorspace and color_range, for example, are not, and these are still queried through the decoder context.
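An illustrative sketch of that split (not the verbatim vd_lavc.c code): geometry and pixel format come from the AVFrame, while fields that only existed on the decoder context at the time are still read from there. The struct name is made up for the example.

    #include <libavcodec/avcodec.h>

    struct decoded_props {
        int width, height;
        int pix_fmt;                        /* libavcodec pixel format value */
        enum AVColorSpace colorspace;
        enum AVColorRange color_range;
    };

    static void read_props(AVCodecContext *avctx, AVFrame *frame,
                           struct decoded_props *p)
    {
        p->width       = frame->width;      /* per-frame, may change mid-stream */
        p->height      = frame->height;
        p->pix_fmt     = frame->format;
        p->colorspace  = avctx->colorspace; /* not available per-frame back then */
        p->color_range = avctx->color_range;
    }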
* video: decouple internal pixel formats from FourCCs (wm4, 2013-01-13, 1 file, -10/+9)
mplayer's video chain traditionally used FourCCs for pixel formats. For example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined to the string 'YV12' interpreted as an unsigned int. Additionally, it used to encode information into the numeric values of some formats. The RGB formats had their bit depth and endianness encoded into the least significant byte. Extended planar formats (420P10 etc.) had chroma shift, endianness, and component bit depth encoded. (This has been removed in recent commits.)

Replace the FourCC mess with a simple enum. Remove all the redundant formats like YV12/I420/IYUV. Replace some image format names with something more intuitive, most importantly IMGFMT_YV12 -> IMGFMT_420P. Add img_fourcc.h, which contains the old IDs for code that actually uses FourCCs.

Change the way demuxers that output raw video identify the video format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to request the rawvideo decoder, and sh_video->imgfmt specifies the pixel format. Like the previous hack, this is supposed to avoid the need for a complete codecs.cfg entry per format, or other lookup tables. (Note that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for NUT raw video, but this is still considered better than adding a raw video decoder - even if trivial, it would be full of annoying lookup tables.)

The TV code has not been tested. Some corrective changes regarding endianness and other image format flags creep in.
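A rough illustration of the two representations; the macro and the suffixed enum names are simplified examples for this log, not necessarily the exact definitions in mpv:

    #include <stdint.h>

    /* Old style: the pixel format *is* a FourCC, i.e. four characters
     * packed into an unsigned int. */
    #define MP_FOURCC_EX(a, b, c, d) \
        ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
         ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))
    #define IMGFMT_YV12_OLD MP_FOURCC_EX('Y', 'V', '1', '2')

    /* New style: an opaque enum with one entry per distinct format;
     * FourCCs only survive in img_fourcc.h for code that really deals
     * with container-level FourCCs (e.g. raw video). */
    enum mp_imgfmt_ex {
        IMGFMT_NONE_EX = 0,
        IMGFMT_420P_EX,     /* planar 4:2:0 YUV, was IMGFMT_YV12 */
        IMGFMT_PAL8_EX,     /* 8 bpp paletted */
    };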
* mp_image: change how palette is handled (wm4, 2013-01-13, 1 file, -14/+0)
According to DOCS/OUTDATED-tech/colorspaces.txt, the following formats are supposed to be palettized: IMGFMT_BGR8, IMGFMT_RGB8, IMGFMT_BGR4_CHAR, IMGFMT_RGB4_CHAR, IMGFMT_BGR4, IMGFMT_RGB4. Of these, only BGR8 and RGB8 are actually treated as palettized in some way. ffmpeg has only one palettized format (AV_PIX_FMT_PAL8), and IMGFMT_BGR8 was inconsistently mapped to packed non-palettized RGB formats too (AV_PIX_FMT_BGR8). Moreover, vf_scale.c contained messy hacks to generate a palette when AV_PIX_FMT_BGR8 is output. (libswscale does not support AV_PIX_FMT_PAL8 output in the first place.)

Get rid of all of this, and introduce IMGFMT_PAL8, which directly maps to AV_PIX_FMT_PAL8. Remove the palette creation code from vf_scale.c. IMGFMT_BGR8 maps to AV_PIX_FMT_RGB8 (don't ask me why it's swapped), without any palette use. Enabling it in vo_x11 or using it as vf_scale input seems to give correct results.
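A small sketch of what a direct 1:1 mapping table for these formats could look like; the mp-side constants here are placeholders, only the AV_PIX_FMT_* names are real libavutil formats:

    #include <libavutil/pixfmt.h>

    /* Placeholder mp-side format IDs for the example. */
    enum { EX_IMGFMT_PAL8 = 1, EX_IMGFMT_BGR8 = 2 };

    struct fmt_map { int mp_fmt; enum AVPixelFormat av_fmt; };

    static const struct fmt_map example_map[] = {
        {EX_IMGFMT_PAL8, AV_PIX_FMT_PAL8},  /* palette in plane 1, 256 32-bit entries */
        {EX_IMGFMT_BGR8, AV_PIX_FMT_RGB8},  /* packed 3:3:2 RGB, no palette (note the swap) */
        {0,              AV_PIX_FMT_NONE},  /* end marker */
    };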
* mp_image: simplify image allocation (wm4, 2013-01-13, 1 file, -2/+2)
mp_image_alloc_planes() allocated images with minimal stride, even if the resulting stride was unaligned. It was the responsibility of vf_get_image() to set an image's width to something larger than required to get an aligned stride, and then crop it. Always allocate with aligned strides instead.

Get rid of the IMGFMT_IF09 special handling. This format is not used anymore. (IF09 has 4x4 chroma sub-sampling, and that is what it was mainly used for - this is still supported.)

Get rid of swapped chroma plane allocation. This is not used anywhere, and VOs like vo_xv, vo_direct3d and vo_sdl do their own swapping.

Always round chroma width/height up instead of down. Consider 4:2:0 and an uneven image size. For luma, the size was left uneven, and the chroma size was rounded down. This doesn't make sense, because chroma would be missing for the bottom/right border.

Remove mp_image_new_empty() and mp_image_alloc_planes(); they were not used anymore, except in draw_bmp.c. (It's still allowed to set up mp_images manually, you just can't allocate image data with them anymore - this is also done in draw_bmp.c.)
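A simplified sketch of the allocation policy described above (aligned strides, chroma rounded up); the alignment constant is an illustrative assumption, and the real mp_image code handles arbitrary plane counts and bit depths:

    #define STRIDE_ALIGN 32   /* illustrative; mpv uses its own constant */

    static int align_up(int x, int a) { return (x + a - 1) / a * a; }

    /* Plan the per-plane stride and height for a 4:2:0 image. */
    static void plan_420p(int w, int h, int stride[3], int height[3])
    {
        /* Luma: stride aligned up, never just w. */
        stride[0] = align_up(w, STRIDE_ALIGN);
        height[0] = h;

        /* Chroma: round the sub-sampled size *up*, so odd image sizes
         * still get chroma samples for the bottom/right border. */
        int cw = (w + 1) / 2, ch = (h + 1) / 2;
        stride[1] = stride[2] = align_up(cw, STRIDE_ALIGN);
        height[1] = height[2] = ch;
    }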
* video/filter: change filter API, use refcounting, remove filter DR (wm4, 2013-01-13, 1 file, -40/+19)
Change the entire filter API to use reference counted images instead of vf_get_image().

Remove filter "direct rendering". This was useful for vf_expand and (in rare cases) vf_sub: DR allowed these filters to pass a cropped image to the filters before them. Then, on filtering, the image was "uncropped", so that black bars could be added around the image without copying. This means that in some cases, vf_expand will be slower (-vf gradfun,expand for example).

Note that another form of DR used for in-place filters has been replaced by simpler logic. Instead of trying to do DR, filters can check if the image is writeable (with mp_image_is_writeable()), and do true in-place if that's the case. This affects filters like vf_gradfun and vf_sub.

Everything has to support strides now. If something doesn't, making a copy of the image data is required.
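A hedged sketch of that in-place pattern. mp_image_is_writeable() is named in the commit message; the other helpers and the filter body are placeholders standing in for mpv's actual mp_image API:

    #include <stdbool.h>

    struct mp_image;                                            /* opaque here */
    bool mp_image_is_writeable(struct mp_image *img);           /* helper named above */
    struct mp_image *example_copy_image(struct mp_image *img);  /* placeholder */
    void example_modify_pixels(struct mp_image *img);           /* placeholder */

    /* In-place filtering without DR: write directly into the input image
     * only if nothing else still references its buffer. */
    static struct mp_image *filter_inplace(struct mp_image *in)
    {
        struct mp_image *work = in;
        if (!mp_image_is_writeable(in))
            work = example_copy_image(in);  /* someone else holds a ref: copy first */
        example_modify_pixels(work);
        return work;
    }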
* mp_image: require using mp_image_set_size() for setting w/h (wm4, 2013-01-13, 1 file, -2/+1)
Setting the size of an mp_image must be done with mp_image_set_size() now. Do this to guarantee that the redundant fields (like chroma_width) are updated consistently. Replacing the redundant fields with function calls would probably be better, but there are too many uses of them, and it would be a bit less convenient.

Most code actually called mp_image_setfmt(), which did this as well. This commit just makes things a bit more explicit.

Warning: the video filter chain still sets up mp_images manually, and vf_get_image() is not updated.
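Illustration only: the field and function names follow the commit message, but this struct is a stripped-down stand-in, not the literal mp_image definition:

    /* Stripped-down stand-in for mp_image, just enough to show why a
     * setter is needed: chroma_width/chroma_height are derived from w/h. */
    struct mp_image_ex {
        int w, h;
        int chroma_x_shift, chroma_y_shift;  /* e.g. 1/1 for 4:2:0 */
        int chroma_width, chroma_height;     /* redundant, derived fields */
    };

    static void mp_image_set_size_ex(struct mp_image_ex *img, int w, int h)
    {
        img->w = w;
        img->h = h;
        /* Keep the derived fields in sync; assigning w/h directly would
         * leave these stale. (Chroma sizes rounded up, as in the image
         * allocation commit above.) */
        img->chroma_width  = (w + (1 << img->chroma_x_shift) - 1) >> img->chroma_x_shift;
        img->chroma_height = (h + (1 << img->chroma_y_shift) - 1) >> img->chroma_y_shift;
    }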
* vd_lavc: use refcounting (wm4, 2013-01-13, 1 file, -14/+48)
Note that if the codec doesn't support DR1, the image has to be copied. There is no other way to guarantee that the image will still be valid after decoding the next image.

The only important codec that doesn't support DR1 yet is rawvideo. It's likely that ffmpeg/Libav will fix this at some point. For now, this decoder uses an evil hack and puts pointers to the packet data into the returned frame. This means the image actually becomes invalid as soon as the corresponding video packet is freed, so copying the image is the only reasonable thing to do anyway.
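A hedged sketch of that rule; the helper names are stand-ins for mpv's mp_image wrappers, not the exact functions in vd_lavc.c:

    #include <stdbool.h>

    struct mp_image;
    struct mp_image *example_ref_image(struct mp_image *img);   /* placeholder: new reference */
    struct mp_image *example_copy_image(struct mp_image *img);  /* placeholder: deep copy */

    /* Decide what to hand to the rest of the video chain after decoding. */
    static struct mp_image *export_decoded(struct mp_image *decoded, bool from_dr1_pool)
    {
        if (from_dr1_pool) {
            /* Buffer comes from the refcounted DR1 allocator: a new
             * reference keeps it alive as long as needed. */
            return example_ref_image(decoded);
        }
        /* Non-DR1 (e.g. rawvideo): the data may be invalidated by the
         * next decode call or when the packet is freed, so copy now. */
        return example_copy_image(decoded);
    }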
* vd_lavc: add DR1 support (wm4, 2013-01-13, 1 file, -21/+5)
Replace libavcodec's native buffer allocation with code taken from ffplay/ffmpeg's libavfilter support. The code in lavc_dr1.c is directly copied from cmdutils.c. Note that this is quite arcane code, which contains some workarounds for decoder bugs and the like. This is not really a maintenance burden, since fixes from ffmpeg can be directly applied to the code in lavc_dr1.c.

It's unknown why libavcodec doesn't provide such a function directly. avcodec_default_get_buffer() can't be reused for various reasons. There's some hope that the work known as The Evil Plan [1] will make custom get_buffer implementations unneeded.

The DR1 support as of this commit does nothing. A future commit will use it to implement ref-counting for mp_image (similar to how AVFrame will be ref-counted with The Evil Plan).

[1] http://lists.libav.org/pipermail/libav-devel/2012-December/039781.html
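For orientation, a minimal sketch of how a custom allocator is hooked into the old (pre-refcounting, 2013-era) libavcodec callback API that this commit builds on; the my_* functions are placeholders for what lavc_dr1.c actually implements:

    #include <libavcodec/avcodec.h>

    /* Placeholders for the pool-backed allocator in lavc_dr1.c. */
    int my_get_buffer(AVCodecContext *avctx, AVFrame *pic);
    void my_release_buffer(AVCodecContext *avctx, AVFrame *pic);

    static void enable_dr1(AVCodecContext *avctx)
    {
        /* Only codecs that advertise DR1 can use caller-provided buffers. */
        if (avctx->codec->capabilities & CODEC_CAP_DR1) {
            avctx->get_buffer     = my_get_buffer;
            avctx->release_buffer = my_release_buffer;
        }
    }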
* video: different way to enable hardware decoding, add software fallback (wm4, 2013-01-13, 1 file, -66/+204)
Deprecate the hardware specific video codec entries (like ffh264vdpau). Replace them with the --hwdec switch, which requests that a specific hardware decoding API should be used. The codecs.conf entries will be removed at a later time, but for now they are useful for testing and compatibility.

Instead of --vc=ffh264vdpau, --hwdec=vdpau should be used.

Add a fallback if hardware decoding fails. Most hardware decoders (including vdpau) support only a subset of h264, and having such a fallback is supposed to enable a better user experience.
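A hedged sketch of what such a fallback flow can look like at initialization time; all names here are placeholders, and the real vd_lavc.c also has to handle failures that only show up during decoding:

    #include <stdbool.h>
    #include <stdio.h>

    struct dec_ctx;                                            /* opaque placeholder */
    int example_init_avctx(struct dec_ctx *ctx, bool hwdec);   /* placeholder, <0 on error */

    static int init_with_fallback(struct dec_ctx *ctx, bool want_hwdec)
    {
        if (want_hwdec) {
            if (example_init_avctx(ctx, true) >= 0)
                return 0;
            fprintf(stderr, "Hardware decoding failed, falling back to software.\n");
        }
        /* Software path, also the plain non-hwdec case. */
        return example_init_avctx(ctx, false);
    }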
* vd_lavc: remove codec DR (wm4, 2013-01-13, 1 file, -189/+14)
This was buggy and didn't even work in the simplest cases. It was disabled when multithreading was used, and always disabled for h264. A better alternative (reference counting) will be added later.

Hardware decoding still uses the ffmpeg DR mechanism, but has been decoupled from mpv's DR in the previous commit.
* video: make vdpau hardware decoding not use DR code path (wm4, 2013-01-13, 1 file, -39/+107)
vdpau hardware decoding used the DR (direct rendering) path to let the decoder query a surface from the VO. Special-case the HW decoding path instead, to make it separate from DR.
* vd_lavc: do not mutate global threads option (wm4, 2013-01-13, 1 file, -10/+9)
This mutated the variable for the thread count option (lavc_param->threads) on decoder initialization. This didn't have any practical relevance, unless formats supporting hardware video decoding and other formats were played in the same mpv instance. In this case, hardware decoding would set threads to 1, and all files played after that would use only one thread as well, even with software decoding.

Remove an XvMC leftover (CODEC_CAP_HWACCEL).
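A minimal illustration of the fix, assuming a lavc_param-style options struct; the names are simplified and this is not the literal patch:

    #include <stdbool.h>

    struct lavc_param_ex { int threads; };   /* simplified options struct */

    /* Compute the effective thread count into a local value instead of
     * writing back into the shared option struct (which would leak the
     * hwdec special case into every later file). */
    static int effective_threads(const struct lavc_param_ex *opts, bool hwdec)
    {
        int threads = opts->threads;
        if (hwdec)
            threads = 1;     /* hardware decoding does not want decoder threads */
        return threads;
    }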
* vd_lavc: cosmetics: move debugging code out of the way (wm4, 2013-01-13, 1 file, -75/+82)
Relatively useless statistics printing code was stuffed directly into the middle of the central decode() function. Move it away.
* video: simplify decoder pixel format handling (wm4, 2013-01-13, 1 file, -31/+3)
Simplify the decoder pixel format handling by making it handle only the case vd_lavc needs: a video stream always decodes to a single pixel format.

Remove the handling for multiple pixel formats, and remove the codecs.conf pixel format declarations that are left.

Remove the handling of "ambiguous" pixel formats like YV12 vs. I420 (via VDCTRL_QUERY_FORMAT etc.). This is only a problem if the video chain supports I420, but not YV12, which doesn't seem to be the case anywhere, and in fact would not have any advantage.

Make the "flip" flag a global per-codec flag, rather than a pixel format specific flag. (Some ffmpeg decoders still return a flipped image, so this has to be done manually.) Also fix handling of the flip operation: do not overwrite the global flip option, and make the --flip option invert the codec flip option rather than overriding it.
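The flip fix described in the last paragraph amounts to combining the two flags instead of letting one clobber the other; a tiny sketch, with illustrative names:

    #include <stdbool.h>

    /* --flip inverts the per-codec default instead of overriding it:
     *   codec flips, user flips       -> no flip (cancels out)
     *   codec flips, user doesn't     -> flip
     *   codec doesn't, user flips     -> flip */
    static bool effective_flip(bool codec_flips, bool user_flip_option)
    {
        return codec_flips ^ user_flip_option;
    }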
* video: remove slice based filtering and video output (wm4, 2013-01-13, 1 file, -52/+1)
Slices allowed filtering or drawing video in horizontal bands or blocks. This allowed working on the video in smaller units. In theory, this could bring a performance win by lowering cache pressure, as you didn't have to keep the whole video frame in cache while filtering, only the slice.

In practice, the slice code path was barely used, for the following reasons:
- Multithreaded decoding with ffmpeg didn't use slices. The ffmpeg slice callback was disabled, because it can be called from another thread, and the mplayer video chain is not thread-safe.
- There was nothing that would turn "full" images into appropriate slices, so slices were rarely used.
- Most filters didn't actually support slices.

On the other hand, supporting slices led to code duplication and more complex code in general. I made some experiments and didn't find any actual measurable performance improvements when using slices. Even ffmpeg removed slice-based filtering from libavfilter in favor of simpler code.

The most broken thing about the slices code path is that slices can't be queued, like it is done for images in vo.c.
* video: make vdpau hardware decoding not use slices code path (wm4, 2013-01-13, 1 file, -2/+15)
For some reason, libavcodec abuses the slices rendering code path for hardware decoding: in that case, the only purpose of the draw callback is to pass a vdpau video surface object to video output. (It is unclear to me why this had to use the slices code, instead of just returning an AVFrame with the required vdpau state.)

Make this code separate within mpv, so that the internal slices code path is not used for hardware decoding. Pass the vdpau state with VOCTRL_HWDEC_DECODER_RENDER instead.

Remove the mencoder specific VOCTRLs.
* Add missing compat/libav.h includes (wm4, 2012-11-12, 1 file, -0/+2)
For avcodec_free_frame().
* Rename directories, move files (step 2 of 2) (wm4, 2012-11-12, 1 file, -11/+11)
Finish renaming directories and moving files. Adjust all include statements to make the previous commit compile.

The two commits are separate, because git is bad at tracking renames and content changes at the same time.

Also take this as an opportunity to remove the separation between "common" and "mplayer" sources in the Makefile. ("common" used to be shared between mplayer and mencoder.)
* Rename directories, move files (step 1 of 2) (does not compile) (wm4, 2012-11-12, 1 file, -0/+857)
This drops the silly lib prefixes, and attempts to organize the tree in a more logical way. Make the top-level directory less cluttered as well.

Renames the following directories:
libaf -> audio/filter
libao2 -> audio/out
libvo -> video/out
libmpdemux -> demux

Split libmpcodecs:
vf* -> video/filter
vd*, dec_video.* -> video/decode
mp_image*, img_format*, ... -> video/
ad*, dec_audio.* -> audio/decode

libaf/format.* is moved to audio/ - this is similar to how mp_image.* is located in video/.

Move most top-level .c/.h files to core. (talloc.c/.h is left on top-level, because it's external.) Park some of the more annoying files in compat/. Some of these are relics from the time mplayer used ffmpeg internals.

sub/ is not split, because it's too much of a mess (subtitle code is mixed with OSD display and rendering).

Maybe the organization of core is not ideal: it mixes playback core (like mplayer.c) and utility helpers (like bstr.c/h). Should the need arise, the playback core will be moved somewhere else, while core contains all helper and common code.