path: root/video/decode/lavc.h
Commit history (subject, author, date; files changed, lines -/+):
* video: initial dxva2 support (wm4, 2014-10-25; 1 file, -0/+1)

  Shamelessly stolen from ffmpeg. It probably doesn't work - you can
  debug it yourself.
* vaapi: try dealing with Intel's braindamaged shit drivers (wm4, 2014-08-21; 1 file, -0/+3)

  Talking to a certain Intel dev, it sounded like modern VA-API drivers
  are reasonably thread-safe. But apparently that is not the case. Not
  at all. So add approximate locking around all vaapi API calls.

  The problem appeared once we moved decoding and display to different
  threads. That means the "vaapi-copy" mode was unaffected, but decoding
  with vo_vaapi or vo_opengl led to random crashes.

  Untested on real Intel hardware. With the vdpau emulation, it seems to
  work fine - but actually it worked fine even before this commit,
  because vdpau was written and designed not by morons, but competent
  people (vdpau is guaranteed to be fully thread-safe).

  There is some probability that this commit doesn't fix things
  entirely. One problem is that locking might not be complete. For one,
  libavcodec _also_ accesses vaapi, so we have to rely on our own
  guesses how and when lavc uses vaapi (since we disable multithreading
  when doing hw decoding, our guess should be relatively good, but it's
  still a lavc implementation detail). One other reason that this commit
  might not help is Intel's amazing potential to fuck up anything that
  is good and holy.
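  A minimal sketch of the locking approach described above, assuming a
  single global mutex; the lock and wrapper names are illustrative, not
  mpv's actual symbols:

      #include <pthread.h>
      #include <va/va.h>

      /* Hypothetical global lock serializing all VA-API calls. */
      static pthread_mutex_t va_lock = PTHREAD_MUTEX_INITIALIZER;

      /* Every libva entry point gets a wrapper like this one. */
      static VAStatus locked_vaSyncSurface(VADisplay dpy, VASurfaceID surface)
      {
          pthread_mutex_lock(&va_lock);
          VAStatus res = vaSyncSurface(dpy, surface);
          pthread_mutex_unlock(&va_lock);
          return res;
      }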
* video: warn if an emulated hwdec API is used (wm4, 2014-05-28; 1 file, -0/+1)

  mpv supports two hardware decoding APIs on Linux: vdpau and vaapi.
  Each of these has emulation wrappers. The wrappers are usually slower
  and have fewer features than their native counterparts. In particular,
  the libva vdpau driver is practically unmaintained.

  Check the vendor string and print a warning if emulation is detected.
  Checking vendor strings is a very stupid thing to do, but I find the
  thought of people using an emulated API for no reason worse.

  Also, make --hwdec=auto never use an API that is detected as emulated.
  This doesn't work quite right yet, because once one API is loaded,
  vo_opengl doesn't unload it, so no hardware decoding will be used if
  the first probed API (usually vdpau) is rejected. But good enough.
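  A sketch of the vendor-string check for the vaapi side; the substring
  matched here is an assumption for illustration:

      #include <string.h>
      #include <va/va.h>

      /* Guess whether the vaapi driver is the vdpau emulation wrapper
       * by inspecting the vendor string. */
      static int is_emulated_vaapi(VADisplay dpy)
      {
          const char *vendor = vaQueryVendorString(dpy);
          return vendor && strstr(vendor, "VDPAU backend") != NULL;
      }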
* video: add a "hwdec" property to enable or disable hw decoding at runtimewm42014-04-231-0/+1
|
* vd_lavc: reinit hwdec on profile changes (wm4, 2014-03-17; 1 file, -0/+1)

  Needed in theory. I don't know if there are even any real-world files
  which change the profile mid-stream.
* vd_lavc: remove unused field (wm4, 2014-03-16; 1 file, -1/+0)
* vd_lavc: remove compatibility crap (wm4, 2014-03-16; 1 file, -25/+3)

  All this code was needed for compatibility with very old libavcodec
  versions only (such as Libav 9). Includes some now-possible
  simplifications too.
* vd_lavc: ridiculous workaround for Libav 9 compatibility (wm4, 2014-03-16; 1 file, -0/+1)

  This "sometimes" crashed when seeking. The fault apparently lies in
  libavcodec: the decoder returns an unreferenced frame! This is
  completely insane, but somehow I'm apparently still expected to work
  around it. As a reaction, I will drop Libav 9 support in the next
  commit. (While this commit will go into release/0.3.)
* video: initialize hw decoder in get_format (wm4, 2014-03-10; 1 file, -0/+7)

  Apparently the "right" place to initialize the hardware decoder is in
  the libavcodec get_format callback.

  This doesn't change vda.c and vdpau_old.c, because I don't have OSX,
  and vdpau_old.c is probably going to be removed soon (if Libav ever
  manages to release Libav 10). So for now the init_decoder callback
  added with this commit is optional.

  This also means vdpau.c and vaapi.c don't have to manage and check the
  image parameters anymore.

  This change is probably needed for when libavcodec's VDA support gets
  a new iteration of its API.
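  A minimal sketch of hardware decoder setup hooked into get_format;
  init_hw_decoder() is a hypothetical helper standing in for the real
  initialization:

      #include <libavcodec/avcodec.h>

      /* Hypothetical: create the hw decoder; return < 0 on failure. */
      static int init_hw_decoder(AVCodecContext *avctx)
      {
          (void)avctx;
          return -1;  /* real code would set up vaapi/vdpau here */
      }

      static enum AVPixelFormat get_format_cb(AVCodecContext *avctx,
                                              const enum AVPixelFormat *fmt)
      {
          for (int i = 0; fmt[i] != AV_PIX_FMT_NONE; i++) {
              if (fmt[i] == AV_PIX_FMT_VAAPI && init_hw_decoder(avctx) >= 0)
                  return fmt[i];  /* hw pixel format accepted */
          }
          /* No usable hw format: let libavcodec pick a sw format. */
          return avcodec_default_get_format(avctx, fmt);
      }

      /* Installed before avcodec_open2():
       *     avctx->get_format = get_format_cb;  */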
* video/decode: mp_msg conversions (wm4, 2013-12-21; 1 file, -0/+1)

  Doesn't cover vdpau/vaapi parts yet, because these are a bit messier.
* Reduce recursive config.h inclusions in headers (wm4, 2013-12-18; 1 file, -2/+0)

  In my opinion, config.h inclusions should be kept to a minimum.
  MPlayer code really liked including config.h everywhere, though, even
  in often-used header files. Try to reduce this.
* video: move video filter chain initialization from decoder to player (wm4, 2013-12-10; 1 file, -2/+0)

  This should help fix some issues (like not draining video frames
  correctly on reinit), as well as decoupling the decoder, filter chain,
  and VO code.

  I also wanted to make the hardware video decoding fallback work
  properly if software-only video filters are inserted. This currently
  has the issue that the fallback is too violent, and throws away a
  bunch of demuxer packets needed to restart software decoding properly.
  But keeping "backup" packets turned out to be too hacky, so I'm not
  doing this, at least not yet.
* Take care of some libavutil deprecations, drop support for FFmpeg 1.0 (wm4, 2013-11-29; 1 file, -1/+1)

  PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
  enum PixelFormat -> enum AVPixelFormat
  av_pix_fmt_descriptors -> av_pix_fmt_desc_get

  Loosen some version checks in certain newer pixel formats.

  This removes support for FFmpeg 1.0.x, which is even older than Libav
  9.x. Support for it probably was already broken, and its libswresample
  was rejected by our build system anyway because it's broken.

  Mostly untested; it does compile with Libav 9.9.
* video: move handling of container vs. stream AR out of vd_lavc.c (wm4, 2013-11-23; 1 file, -1/+0)

  Now the actual decoder doesn't need to care about this anymore, and
  it's handled in generic code instead. This simplifies vd_lavc.c, and
  in particular we don't need to detect format changes in the old way
  anymore.
* video: move struct mp_hwdec_info into its own header file (wm4, 2013-11-23; 1 file, -0/+1)

  This means most code accessing this struct must now include hwdec.h
  instead of dec_video.h. I just put it into dec_video.h at first
  because I thought a separate file would be a waste, but it's more
  proper to do it this way, as there are too many files which include
  dec_video.h only to get the mp_hwdec_info definition.
* vf_vavpp: make it work with vo_opengl and software decoding (wm4, 2013-11-22; 1 file, -2/+0)

  vo_opengl always loads the hwdec backend lazily, so
  hwdec_request_api() has to be called to possibly load it. This makes
  vf_vavpp work with software decoding. (Hardware decoding loads the
  backend before the filter is initialized, so this case is different.)

  Also, the VFCTRL_GET_HWDEC_INFO call doesn't need to be checked. If it
  fails, the info will be left blank.
* vo_opengl: add infrastructure for hardware decoding OpenGL interop (wm4, 2013-11-04; 1 file, -0/+2)

  Most hardware decoding APIs provide some OpenGL interop. This allows
  using vo_opengl, without having to read the video data back from the
  GPU.

  This requires adding a backend for each hardware decoding API. (Each
  backend is an entry in gl_hwdec_vaglx[].) The backends expose video
  data as a set of OpenGL textures.

  Add infrastructure to support this. The next commit will add support
  for VA-API.
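  A sketch of what such a backend entry could look like; the struct and
  callback names are illustrative, not mpv's exact gl_hwdec API:

      struct gl_hwdec;   /* per-backend state (opaque here) */
      struct mp_image;   /* decoded frame carrying a hw surface */

      struct gl_hwdec_driver {
          const char *api_name;  /* e.g. "vaapi-glx" */
          /* Create GL textures and interop objects. */
          int (*create)(struct gl_hwdec *hw);
          /* Bind the hw surface of a frame to those textures. */
          int (*map_image)(struct gl_hwdec *hw, struct mp_image *mpi);
          void (*destroy)(struct gl_hwdec *hw);
      };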
* video: check profiles with hardware decoding (wm4, 2013-11-01; 1 file, -0/+13)

  We had some code for checking profiles earlier, which was removed in
  commits 2508f38 and adfb71b. These commits mentioned that (working) hw
  decoding was sometimes prevented due to profile checking, but I can't
  find the samples anymore that showed this behavior. Also, I changed my
  opinion, and I think checking the profiles is something that should be
  done for better fallback to software decoding behavior.

  The checks roughly follow VLC's vdpau profile checks, although we do
  not check codec levels. (VLC's profile checks aren't necessarily
  completely correct, but they're a welcome help anyway.)

  Add a --vd-lavc-check-hw-profile option, which skips the profile
  check.
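  A minimal sketch of such a whitelist check; the table below is an
  illustrative subset, not the actual list of supported profiles:

      #include <stdbool.h>
      #include <stddef.h>
      #include <libavcodec/avcodec.h>

      static const struct { enum AVCodecID id; int profile; } allowed[] = {
          { AV_CODEC_ID_H264,       FF_PROFILE_H264_MAIN },
          { AV_CODEC_ID_H264,       FF_PROFILE_H264_HIGH },
          { AV_CODEC_ID_MPEG2VIDEO, FF_PROFILE_MPEG2_MAIN },
      };

      /* Allow hw decoding only for known-good codec/profile pairs. */
      static bool hw_profile_ok(enum AVCodecID id, int profile)
      {
          for (size_t i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++) {
              if (allowed[i].id == id && allowed[i].profile == profile)
                  return true;
          }
          return false;
      }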
* vaapi: allow GPU read-back with --hwdec=vaapi-copy (wm4, 2013-09-25; 1 file, -0/+1)

  This code is actually quite inefficient: it reuses the (slow, simple)
  screenshot code. It uses an inefficient method to read the image
  (vaGetImage() instead of vaDeriveImage()), allocates new memory for
  each frame that is read, and it tries all image formats again each
  time.

  Also, in my tests it always picked NV12 as image format, which is not
  ideal if you actually want to filter the video, and vo_xv can't handle
  this format without conversion either.

  However, a user confirmed that it worked for him, so everything is
  fine.
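  The slow path boils down to copying a decoded surface into a
  client-memory VAImage. A sketch, with error handling omitted and the
  VAImage assumed to have been created beforehand (e.g. via
  vaCreateImage() with an NV12 format):

      #include <va/va.h>

      static VAStatus read_back_surface(VADisplay dpy, VASurfaceID surface,
                                        VAImage *img, int w, int h)
      {
          vaSyncSurface(dpy, surface);  /* wait until decoding finished */
          /* Copies the surface contents into img's buffer. */
          return vaGetImage(dpy, surface, 0, 0, w, h, img->image_id);
      }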
* vd_lavc: allow process_image to change image format (wm4, 2013-09-25; 1 file, -1/+1)

  This will allow GPU read-back with process_image.

  We have to restructure how init_vo() works. Instead of initializing
  the VO before process_image is called, rename init_vo() to
  update_image_params(), and let it update the params only. Then we
  really initialize the VO after process_image.

  As a consequence of these changes, already decoded hw frames are
  correctly unreferenced if creation of the filter chain fails. This
  could trigger assertions on VO uninitialization, because it's not
  allowed to reference hw frames past VO lifetime.
* video/decode: change fix_image callback (wm4, 2013-08-15; 1 file, -1/+2)

  This might make it slightly easier when trying to implement surface
  read-back for hardware decoding.
* video/decode: pass parameters directly to hwdec allocate_image callback (wm4, 2013-08-15; 1 file, -1/+2)

  Instead of passing AVFrame. This also moves the mysterious logic about
  the size of the allocated image to common code, instead of duplicating
  it everywhere.
* video: add vaapi decode and output support (wm4, 2013-08-12; 1 file, -0/+1)

  This is based on the MPlayer VA API patches. To be exact it's based on
  a very stripped down version of commit f1ad459a263f8537f6c from
  git://gitorious.org/vaapi/mplayer.git.

  This doesn't contain useless things like benchmarking hacks and the
  demo code for GLX interop. Also, unlike in the original patch,
  decoding and video output are split into separate source files (the
  separation between decoding and display also makes pixel format hacks
  unnecessary).

  On the other hand, some features not present in the original patch
  were added, like screenshot support.

  VA API is rather bad for actual video output. Dealing with older libva
  versions or the completely broken vdpau backend doesn't help. OSD is
  low quality and should be rather slow. In some cases, only either OSD
  or subtitles can be shown at the same time (because OSD is drawn
  first, OSD is preferred).

  Also, libva can't decide whether it accepts straight or premultiplied
  alpha for OSD sub-pictures: the vdpau backend seems to assume
  premultiplied, while a native vaapi driver uses straight. So I picked
  straight alpha. It doesn't matter much, because the blending code for
  straight alpha I added to img_convert.c is probably buggy, and ASS
  subtitles might be blended incorrectly.

  Really good video output with VA API would probably use OpenGL and the
  GL interop features, but at this point you might just use vo_opengl.
  (Patches for making HW decoding work with vo_opengl have a chance of
  being accepted.)

  Despite these issues, decoding seems to work ok. I still got tearing
  on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also
  tested with the vdpau vaapi wrapper on a nvidia system; however this
  was rather broken. (Fortunately, there is no reason to use mpv's VAAPI
  support over native VDPAU.)
* video: redo hw decoding initialization, add --hwdec=auto (wm4, 2013-08-11; 1 file, -4/+28)

  Change how the HW decoding stuff is organized, the way it's
  initialized in particular. Instead of duplicating the list of
  supported codecs for hwaccel decoders, add a probe function which
  allows each decoder to report whether it supports a given codec.

  Add an "auto" choice to the --hwdec option, which automatically
  enables hardware decoding if libavcodec and/or the VO supports it.

  What mpv prints on the terminal changes a bit. Now it will just print
  a single line whether hw decoding is used or not (and nothing at all
  if no hw decoding at all was requested). The pretty violent fallback
  from hw decoding to software decoding is still quite verbose and
  evil-looking though.
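  A simplified illustration of the probe-based selection; the struct and
  the probe signature are sketches, not mpv's exact vd_lavc internals:

      struct vd_lavc_hwdec {
          const char *api_name;  /* e.g. "vdpau", "vaapi" */
          /* Return >= 0 if this hwaccel can handle the given decoder. */
          int (*probe)(struct vd_lavc_hwdec *hwdec, const char *decoder);
      };

      /* --hwdec=auto: take the first hwaccel that accepts the codec,
       * or fall back to software decoding. */
      static struct vd_lavc_hwdec *probe_hwdec(struct vd_lavc_hwdec **all,
                                               const char *decoder)
      {
          for (int i = 0; all[i]; i++) {
              if (all[i]->probe(all[i], decoder) >= 0)
                  return all[i];
          }
          return NULL;  /* software decoding */
      }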
* vdpau: split off decoder parts, use "new" libavcodec vdpau hwaccel API (wm4, 2013-07-28; 1 file, -1/+18)

  Move the decoder parts from vo_vdpau.c to a new file vdpau_old.c. This
  file is named so because it's written against the "old" libavcodec
  vdpau pseudo-decoder (e.g. "h264_vdpau").

  Add support for the "new" libavcodec vdpau support. This was recently
  added and replaces the "old" vdpau parts. (In fact, Libav is about to
  deprecate and remove the "old" API without deprecation grace period,
  so we have to support it now. Moreover, there will probably be no
  Libav release which supports both, so the transition is even less
  smooth than we could hope, and we have to support both the old and new
  API.)

  Whether the old or new API is used is checked by a configure test: if
  the new API is found, it is used, otherwise the old API is assumed.

  Some details might be handled differently. Especially display
  preemption is a bit problematic with the "new" libavcodec vdpau
  support: it wants to keep a pointer to a specific vdpau API function
  (which can be driver specific, because preemption might switch
  drivers). Also, surface IDs are now directly stored in AVFrames (and
  mp_images), so they can't be forced to VDP_INVALID_HANDLE on
  preemption. (This changes even with older libavcodec versions, because
  mp_image always uses the newer representation to make vo_vdpau.c
  simpler.)

  Decoder initialization in the new code tries to deal with codec
  profiles, while the old code always uses the highest profile per
  codec.

  Surface allocation changes. Since the decoder won't call config() in
  vo_vdpau.c on video size change anymore, we allow allocating surfaces
  of arbitrary size instead of locking it to what the VO was configured
  for. The non-hwdec code also has slightly different allocation
  behavior now.

  Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau
  doesn't work anymore (a warning suggesting the --hwdec option is
  printed instead).
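  With the "new" hwaccel, the decoder is wired up through
  AVVDPAUContext. A minimal sketch, assuming the VdpDecoder and the
  driver's render function were obtained elsewhere:

      #include <libavcodec/vdpau.h>

      static int setup_vdpau_hwaccel(AVCodecContext *avctx,
                                     VdpDecoder decoder,
                                     VdpDecoderRender *render)
      {
          AVVDPAUContext *vdctx = av_alloc_vdpaucontext();
          if (!vdctx)
              return -1;
          vdctx->decoder = decoder;  /* from VdpDecoderCreate() */
          vdctx->render  = render;   /* driver's VdpDecoderRender */
          avctx->hwaccel_context = vdctx;
          return 0;
      }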
* video: add a new method to configure filters and VOs (wm4, 2013-06-28; 1 file, -0/+1)

  The filter chain and the video outputs have config() functions. They
  are strictly limited to transferring the video size and format. Other
  parameters (like color levels) have to be transferred separately.

  Improve upon this by introducing a separate set of reconfig()
  functions, which use mp_image_params to carry format parameters. This
  struct contains all image format related parameters from config(),
  plus additional parameters such as colorspace.

  Change vf_rotate to use it, as well as vo_opengl. vf_rotate is just an
  example/test case, but vo_opengl will need it later. The intention is
  also to get rid of VOCTRL_SET_YUV_COLORSPACE. This information is now
  handed to the VOs via reconfig(). The getter,
  VOCTRL_GET_YUV_COLORSPACE, will still be needed though.
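  A simplified illustration of the idea: one struct carries everything
  needed to (re)configure a VO or filter. Field names loosely follow
  mpv, but this is a sketch, not the actual definition:

      struct mp_image_params {
          int imgfmt;        /* pixel format (IMGFMT_*) */
          int w, h;          /* coded image size */
          int d_w, d_h;      /* display size after aspect correction */
          int colorspace;    /* e.g. BT.601 vs. BT.709 */
          int colorlevels;   /* limited vs. full range */
      };

      struct vo;
      /* reconfig() replaces config(): the VO receives all format
       * parameters in one call instead of via separate VOCTRLs. */
      int (*reconfig)(struct vo *vo, struct mp_image_params *p, int flags);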
* video: make use of libavcodec refcounting (wm4, 2013-03-13; 1 file, -4/+6)

  Now lavc_dr1.c is not used anymore if libavcodec is recent enough.
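  The gist of libavcodec refcounting, as a sketch (with libavcodec
  versions of that era this also required avctx->refcounted_frames = 1):

      #include <libavutil/frame.h>

      /* Keep a decoded frame alive past the next decode call by taking
       * a new reference instead of copying the pixel data. */
      AVFrame *take_frame_ref(AVFrame *decoded)
      {
          return av_frame_clone(decoded);  /* shares the same buffers */
      }

      void drop_frame_ref(AVFrame **frame)
      {
          av_frame_free(frame);  /* unrefs buffers, frees the AVFrame */
      }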
* core: redo how codecs are mapped, remove codecs.conf (wm4, 2013-02-10; 1 file, -1/+1)

  Use codec names instead of FourCCs to identify codecs. Rewrite how
  codecs are selected and initialized. Now each decoder exports a list
  of decoders (and the codec it supports) via add_decoders(). The order
  matters, and the first decoder for a given codec is preferred over the
  other decoders. E.g. all ad_mpg123 decoders are preferred over
  ad_lavc, because it comes first in the mpcodecs_ad_drivers array.
  Likewise, decoders within ad_lavc that are enumerated first by
  libavcodec (using av_codec_next()) are preferred. (This is actually
  critical to select h264 software decoding by default instead of vdpau.
  libavcodec and ffmpeg/avconv use the same method to select decoders by
  default, so we hope this is sane.)

  The codec names follow libavcodec's codec names as defined by
  AVCodecDescriptor.name (see libavcodec/codec_desc.c). Some decoders
  have names different from the canonical codec name. The
  AVCodecDescriptor API is relatively new, so we need a compatibility
  layer for older libavcodec versions for codec names that are
  referenced internally, and which are different from the decoder name.
  (Add a configure check for that, because checking versions is getting
  way too messy.)

  demux/codec_tags.c is generated from the former codecs.conf (minus
  "special" decoders like vdpau, and excluding the mappings that are the
  same as the mappings in libavformat's exported RIFF tables). It
  contains all the mappings from FourCCs to codec name. This is needed
  for demux_mkv, demux_mpg, demux_avi and demux_asf. demux_lavf will set
  the codec as determined by libavformat, while the other demuxers have
  to do this on their own, using the mp_set_audio/video_codec_from_tag()
  functions. Note that the sh_audio/video->format members don't uniquely
  identify the codec anymore, and sh->codec takes over this role.

  Replace the --ac/--vc/--afm/--vfm options with new --vd/--ad options,
  which cover the functionality of the removed switches.

  Note: there's no CODECS_FLAG_FLIP flag anymore. This means some
  obscure container/video combinations (e.g. the sample
  Film_200_zygo_pro.mov) are played flipped. ffplay/avplay doesn't
  handle this properly either, so we don't care and blame ffmpeg/libav
  instead.
* vd_lavc: remove -lavdopts vstats suboption (wm4, 2013-01-13; 1 file, -3/+0)

  This printed per-frame statistics into a file, like bitrate or frame
  type. Not very useful and accesses obscure AVCodecContext fields
  (danger of deprecation/breakage), so get rid of it.
* video: decouple internal pixel formats from FourCCs (wm4, 2013-01-13; 1 file, -1/+0)

  mplayer's video chain traditionally used FourCCs for pixel formats.
  For example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined to
  the string 'YV12' interpreted as unsigned int. Additionally, it used
  to encode information into the numeric values of some formats. The RGB
  formats had their bit depth and endian encoded into the least
  significant byte. Extended planar formats (420P10 etc.) had chroma
  shift, endian, and component bit depth encoded. (This has been removed
  in recent commits.)

  Replace the FourCC mess with a simple enum. Remove all the redundant
  formats like YV12/I420/IYUV. Replace some image format names by
  something more intuitive, most importantly IMGFMT_YV12 -> IMGFMT_420P.

  Add img_fourcc.h, which contains the old IDs for code that actually
  uses FourCCs.

  Change the way demuxers, that output raw video, identify the video
  format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to
  request the rawvideo decoder, and sh_video->imgfmt specifies the pixel
  format. Like the previous hack, this is supposed to avoid the need for
  a complete codecs.conf entry per format, or other lookup tables. (Note
  that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for
  NUT raw video, but this is still considered better than adding a raw
  video decoder - even if trivial, it would be full of annoying lookup
  tables.)

  The TV code has not been tested.

  Some corrective changes regarding endian and other image format flags
  creep in.
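  A sketch of the resulting split: an internal enum for pixel formats,
  plus a FourCC macro kept only for code that genuinely needs FourCCs
  (in the spirit of img_fourcc.h; the values shown are illustrative):

      #define MP_FOURCC(a, b, c, d) \
          ((unsigned)(a) | ((unsigned)(b) << 8) | \
           ((unsigned)(c) << 16) | ((unsigned)(d) << 24))

      enum mp_imgfmt {
          IMGFMT_NONE = 0,
          IMGFMT_420P,   /* was IMGFMT_YV12 == MP_FOURCC('Y','V','1','2') */
          IMGFMT_NV12,
          IMGFMT_RGB24,
      };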
* video/filter: change filter API, use refcounting, remove filter DR (wm4, 2013-01-13; 1 file, -4/+1)

  Change the entire filter API to use reference counted images instead
  of vf_get_image().

  Remove filter "direct rendering". This was useful for vf_expand and
  (in rare cases) vf_sub: DR allowed these filters to pass a cropped
  image to the filters before them. Then, on filtering, the image was
  "uncropped", so that black bars could be added around the image
  without copying. This means that in some cases, vf_expand will be
  slower (-vf gradfun,expand for example).

  Note that another form of DR used for in-place filters has been
  replaced by simpler logic. Instead of trying to do DR, filters can
  check if the image is writeable (with mp_image_is_writeable()), and do
  true in-place if that's the case. This affects filters like vf_gradfun
  and vf_sub.

  Everything has to support strides now. If something doesn't, making a
  copy of the image data is required.
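  A sketch of the new in-place pattern, assuming mpv's refcounted
  mp_image API:

      #include <stdbool.h>

      struct mp_image;
      /* Declarations as in mpv's video/mp_image.h: */
      bool mp_image_is_writeable(struct mp_image *img);
      void mp_image_make_writeable(struct mp_image *img);

      static struct mp_image *filter_in_place(struct mp_image *mpi)
      {
          if (!mp_image_is_writeable(mpi))
              mp_image_make_writeable(mpi);  /* copies only when shared */
          /* ... modify mpi's pixel data in place here ... */
          return mpi;
      }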
* vd_lavc: use refcounting (wm4, 2013-01-13; 1 file, -2/+5)

  Note that if the codec doesn't support DR1, the image has to be
  copied. There is no other way to guarantee that the image will be
  valid after decoding the next image.

  The only important codec that doesn't support DR1 yet is rawvideo.
  It's likely that ffmpeg/Libav will fix this at some time. For now,
  this decoder uses an evil hack and puts pointers to the packet data
  into the returned frame. This means the image will actually become
  invalid as soon as the corresponding video packet is free'd, so
  copying the image is the only reasonable thing anyway.
* vd_lavc: add DR1 support (wm4, 2013-01-13; 1 file, -0/+41)

  Replace libavcodec's native buffer allocation with code taken from
  ffplay/ffmpeg's libavfilter support. The code in lavc_dr1.c is
  directly copied from cmdutils.c. Note that this is quite arcane code,
  which contains some workarounds for decoder bugs and the like. This is
  not really a maintenance burden, since fixes from ffmpeg can be
  directly applied to the code in lavc_dr1.c.

  It's unknown why libavcodec doesn't provide such a function directly.
  avcodec_default_get_buffer() can't be reused for various reasons.
  There's some hope that the work known as The Evil Plan [1] will make
  custom get_buffer implementations unneeded.

  The DR1 support as of this commit does nothing. A future commit will
  use it to implement ref-counting for mp_image (similar to how AVFrame
  will be ref-counted with The Evil Plan.)

  [1] http://lists.libav.org/pipermail/libav-devel/2012-December/039781.html