path: root/video/decode/vd_lavc.c
Commit message (author, date, files changed, lines -/+)
* vd_lavc: remove explicit crystalhd support (wm4, 2013-11-06, 1 file, -14/+0)

    This removes "--hwdec=crystalhd". I doubt anyone even tried to use
    this. But even if someone wants to use it, the decoders can still be
    explicitly invoked with e.g.:

        --vd=lavc:h264_crystalhd

    The only advantage our special code provided was fallback to software
    decoding. (But I'm not sure how the ffmpeg crystalhd pseudo-decoder
    actually behaves.)

    Removing this will allow some simplifications as soon as we don't
    need vdpau_old.c anymore.
*   Merge branch 'master' into have_configure (wm4, 2013-11-04, 1 file, -0/+7)
|\
    Conflicts:
        configure
| * vo_opengl: add infrastructure for hardware decoding OpenGL interop (wm4, 2013-11-04, 1 file, -0/+7)

    Most hardware decoding APIs provide some OpenGL interop. This allows
    using vo_opengl without having to read the video data back from the
    GPU.

    This requires adding a backend for each hardware decoding API. (Each
    backend is an entry in gl_hwdec_vaglx[].) The backends expose video
    data as a set of OpenGL textures.

    Add infrastructure to support this. The next commit will add support
    for VA-API.
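    A rough sketch of the shape such a backend entry could take; the
    struct and member names below are hypothetical, not mpv's actual
    gl_hwdec interface:

    -----------
    /* Hypothetical backend interface; all names are illustrative only. */
    struct gl_hwdec;
    struct mp_image;

    struct gl_hwdec_driver {
        const char *api_name;                  /* e.g. "vaapi-glx" */
        /* set up API-specific interop state and the output textures */
        int (*create)(struct gl_hwdec *hw);
        /* expose a decoded hw frame as a set of OpenGL textures */
        int (*map_image)(struct gl_hwdec *hw, struct mp_image *hw_image);
        void (*destroy)(struct gl_hwdec *hw);
    };
    -----------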
* | configure: uniform the defines to #define HAVE_xxx (0|1) (Stefano Pigozzi, 2013-11-03, 1 file, -6/+5)
|/
    The configure followed 5 different conventions of defines because the
    next guy always wanted to introduce a new, better way to uniform
    it[1]. For a hypothetical feature 'hurr' you could have had:

      * #define HAVE_HURR 1 / #undef HAVE_DURR
      * #define HAVE_HURR / #undef HAVE_DURR
      * #define CONFIG_HURR 1 / #undef CONFIG_DURR
      * #define HAVE_HURR 1 / #define HAVE_DURR 0
      * #define CONFIG_HURR 1 / #define CONFIG_DURR 0

    All is now uniform and uses:

      * #define HAVE_HURR 1
      * #define HAVE_DURR 0

    We like defining to 0 as opposed to `undef` because it can help spot
    typos and is very helpful when doing big reorganizations in the code.

    [1]: http://xkcd.com/927/ related
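    One way the define-to-0 convention helps spot typos: an undefined
    macro in an #if silently evaluates to 0, but once every feature macro
    is guaranteed to be 0 or 1, compilers can flag stray names with
    -Wundef. A small illustration, reusing the commit's placeholder
    feature names:

    -----------
    #define HAVE_HURR 1
    #define HAVE_DURR 0

    #if HAVE_HURR          /* feature compiled in */
    int hurr_init(void);
    #endif

    #if HAVE_DURR          /* cleanly compiled out */
    int durr_init(void);
    #endif

    #if HAVE_DUR           /* typo: undefined macro; -Wundef reports it,
                              while an #ifdef-style check would just be
                              silently false */
    #endif
    -----------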
* demux: rename Windows symbols (wm4, 2013-11-02, 1 file, -10/+10)

    There are some Microsoft Windows symbols which are traditionally used
    by the mplayer core, because it used to be convenient (avi was the
    big format, using binary windows decoders made sense...). So these
    symbols have the exact same definition as the Windows ones, and if
    mplayer is compiled on Windows, the symbols from windows.h are used.

    This broke recently just because some files were shuffled around, and
    the symbols defined in ms_hdr.h collided with the windows.h ones.
    Since we don't have windows binary decoders anymore, there's not the
    slightest reason our symbols should have the same names. Rename them
    to reduce the risk of collision, and to fix the recent regression.

    Drop WAVEFORMATEXTENSIBLE, because it's mostly unused. ao_dsound
    defines its own version if the windows headers don't define it, and
    ao_wasapi is not available on systems where this symbol is missing.

    Also reindent ms_hdr.h.
* video: check profiles with hardware decoding (wm4, 2013-11-01, 1 file, -0/+43)

    We had some code for checking profiles earlier, which was removed in
    commits 2508f38 and adfb71b. These commits mentioned that (working)
    hw decoding was sometimes prevented due to profile checking, but I
    can't find the samples anymore that showed this behavior. Also, I
    changed my opinion, and I think checking the profiles is something
    that should be done for better fallback to software decoding
    behavior.

    The checks roughly follow VLC's vdpau profile checks, although we do
    not check codec levels. (VLC's profile checks aren't necessarily
    completely correct, but they're a welcome help anyway.)

    Add a --vd-lavc-check-hw-profile option, which skips the profile
    check.
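    A sketch of the kind of check this describes (the mapping is
    illustrative, not mpv's actual table; the constants are from
    libavcodec's avcodec.h):

    -----------
    #include <stdbool.h>
    #include <libavcodec/avcodec.h>

    /* Sketch: report whether the hw decoder is expected to handle this
       codec/profile pair; the mapping is illustrative. */
    static bool hw_profile_supported(enum AVCodecID codec, int profile)
    {
        switch (codec) {
        case AV_CODEC_ID_H264:
            switch (profile & ~FF_PROFILE_H264_INTRA) {
            case FF_PROFILE_H264_BASELINE:
            case FF_PROFILE_H264_CONSTRAINED_BASELINE:
            case FF_PROFILE_H264_MAIN:
            case FF_PROFILE_H264_HIGH:
                return true;
            }
            return false;
        default:
            return false; /* unknown: fall back to software decoding */
        }
    }
    -----------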
* vd_lavc: add more ifdeffery and ffmpeg cargo culting for correctness (wm4, 2013-10-31, 1 file, -7/+13)

    We mixed the "old" AVFrame management functions (avcodec_alloc_frame,
    avcodec_free_frame) with reference counting. This doesn't work
    correctly; you must use av_frame_alloc and av_frame_free. Of course
    ffmpeg doesn't warn us about the bad usage, but will just mess up
    things silently. (Thanks a lot...)

    While the alloc function seems to be 100% compatible, the free
    function will do bad things, such as freeing memory that might still
    be referenced by another frame. I didn't experience any actual bugs,
    but maybe that was pure luck.
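    For reference, the correct pairing with refcounted frames, as a
    minimal sketch against the libavutil API:

    -----------
    #include <libavutil/frame.h>

    static void frame_lifecycle(void)
    {
        AVFrame *frame = av_frame_alloc();  /* NOT avcodec_alloc_frame() */
        if (!frame)
            return;

        /* ... decoder fills the frame; other code may av_frame_ref() it */

        av_frame_unref(frame); /* drop this reference to the data buffers */
        av_frame_free(&frame); /* free the AVFrame struct itself */

        /* avcodec_free_frame(&frame) here would bypass the reference
           counting and could free buffers another frame still uses */
    }
    -----------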
* core: add --force-window (wm4, 2013-10-02, 1 file, -1/+0)

    This commit adds the --force-window option, which will cause mpv
    always to create a window when started. This can be useful when
    pretending that mpv is a GUI application (which it isn't, but users
    pretend anyway), and playing audio files would run mpv in the
    background without giving a window to control it.

    This doesn't actually create the window immediately: it only does so
    after initializing playback and when it is clear that there won't be
    any actual video. This could be a problem when starting slow or
    completely stuck network streams (mpv would remain frozen in the
    background), or if video initialization somehow is stuck forever in
    an in-between state (like when the decoder doesn't output a video
    frame, but doesn't return an error either). Well, we can pretend only
    so much that mpv is a GUI application.
* video: let sh_video->aspect always be container aspect ratio (wm4, 2013-09-26, 1 file, -6/+8)

    Now writing -1 to the 'aspect' property resets the video to the auto
    aspect ratio. Returning the aspect from the property becomes a bit
    more complicated, because we still try to return the container aspect
    ratio if no frame has been decoded yet.
* vaapi: allow GPU read-back with --hwdec=vaapi-copy (wm4, 2013-09-25, 1 file, -0/+2)

    This code is actually quite inefficient: it reuses the (slow, simple)
    screenshot code. It uses an inefficient method to read the image
    (vaGetImage() instead of vaDeriveImage()), allocates new memory for
    each frame that is read, and it tries all image formats again each
    time.

    Also, in my tests it always picked NV12 as image format, which is not
    ideal if you actually want to filter the video, and vo_xv can't
    handle this format without conversion either.

    However, a user confirmed that it worked for him, so everything is
    fine.
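    For comparison, a sketch of the faster vaDeriveImage() read-back path
    mentioned above (error handling trimmed; illustrative only):

    -----------
    #include <va/va.h>

    /* vaDeriveImage() maps the surface's own backing store instead of
       copying through vaGetImage(). */
    static VAStatus read_back(VADisplay dpy, VASurfaceID surface)
    {
        VAImage image;
        void *data;
        VAStatus st = vaDeriveImage(dpy, surface, &image);
        if (st != VA_STATUS_SUCCESS)
            return st; /* driver can refuse; fall back to vaGetImage() */
        st = vaMapBuffer(dpy, image.buf, &data);
        if (st == VA_STATUS_SUCCESS) {
            /* ... copy planes via image.offsets[] / image.pitches[] ... */
            vaUnmapBuffer(dpy, image.buf);
        }
        vaDestroyImage(dpy, image.image_id);
        return st;
    }
    -----------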
* vd_lavc: allow process_image to change image format (wm4, 2013-09-25, 1 file, -17/+20)

    This will allow GPU read-back with process_image.

    We have to restructure how init_vo() works. Instead of initializing
    the VO before process_image is called, rename init_vo() to
    update_image_params(), and let it update the params only. Then we
    really initialize the VO after process_image.

    As a consequence of these changes, already decoded hw frames are
    correctly unreferenced if creation of the filter chain fails. This
    could trigger assertions on VO uninitialization, because it's not
    allowed to reference hw frames past VO lifetime.
* vd_lavc: reset last_sample_aspect_ratio in uninit_avctx() (xylosper, 2013-09-13, 1 file, -0/+1)

    In init_vo(), if sh->aspect is 0 or last_sample_aspect_ratio is set,
    sh->aspect is overwritten. With the software decoding fallback
    behaviour, this caused the aspect ratio from the container to be
    ignored, since last_sample_aspect_ratio was already set by the first
    try with hardware decoding.
* video/out: don't require VOs to handle screenshot aspect specially (wm4, 2013-08-24, 1 file, -3/+1)

    This affects VOs which just reuse the mp_image from draw_image() to
    return screenshots. The aspect of these images is never different
    from the aspect the screenshots should be, so there's no reason to
    adjust the aspect in these cases.

    Other VOs still need it in order to restore the original image
    attributes.

    This requires some changes to the video filter code to make sure that
    the aspect in the passed mp_images is consistent.

    The changes in mplayer.c and vd_lavc.c are (probably) not strictly
    needed for this commit, but contribute to consistency.
* video: add vda decode support (with hwaccel) and direct rendering (Stefano Pigozzi, 2013-08-22, 1 file, -6/+4)

    Decoding H264 using Video Decode Acceleration used the custom
    'vda_h264_dec' decoder in FFmpeg.

    The Good: This new implementation has some advantages over the
    previous one:

     - It works with Libav: vda_h264_dec never got into Libav since they
       prefer client applications to use the hwaccel API.

     - It is way more efficient: in my tests this implementation yields a
       reduction of CPU usage of roughly ~50% compared to using
       `vda_h264_dec` and ~65-75% compared to h264 software decoding.
       This is mainly because `vo_corevideo` was adapted to perform
       direct rendering of the `CVPixelBufferRefs` created by the Video
       Decode Acceleration API Framework.

    The Bad:

     - `vo_corevideo` is required to use VDA decoding acceleration.
     - Only works with versions of ffmpeg/libav new enough (needs
       reference refcounting). That is FFmpeg 2.0+ and Libav's git master
       currently.

    The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded video
    texture. On one end this makes the code simple since Apple's OpenGL
    implementation actually supports this out of the box. It would be
    nice to support other output image formats and choose the best format
    depending on the input, or at least make it configurable. My tests
    indicate that CPU usage actually increases with a 420p IMGFMT output,
    which is not what I would have expected.

    NOTE: There is a small memory leak with old versions of FFmpeg and
    with Libav since the CVPixelBufferRef is not automatically released
    when the AVFrame is deallocated. This can cause leaks inside
    libavcodec for decoded frames that are discarded before mpv wraps
    them inside a refcounted mp_image (this only happens on seeks). For
    frames that enter mpv's refcounting facilities, this is not a problem
    since we rewrap the CVPixelBufferRef in our mp_image that properly
    forwards CVPixelBufferRetain/CVPixelBufferRelease calls to the
    underlying CVPixelBufferRef.

    So, for FFmpeg use something more recent than `b3d63995`; for Libav
    the patch was posted to the dev ML in July and has been in review
    since; apparently, the proposed fix is rather hacky.
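    A minimal sketch of the retain/release forwarding described in the
    NOTE; the wrapper struct is hypothetical and the mp_image integration
    is omitted:

    -----------
    #include <stdlib.h>
    #include <CoreVideo/CoreVideo.h>

    struct wrapped_frame {
        CVPixelBufferRef pbuf;
    };

    static struct wrapped_frame *wrap_pixel_buffer(CVPixelBufferRef pbuf)
    {
        struct wrapped_frame *w = calloc(1, sizeof(*w));
        if (!w)
            return NULL;
        w->pbuf = CVPixelBufferRetain(pbuf); /* returns its argument */
        return w;
    }

    static void unwrap_pixel_buffer(struct wrapped_frame *w)
    {
        CVPixelBufferRelease(w->pbuf); /* balances the retain above */
        free(w);
    }
    -----------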
* video/decode: change fix_image callback (wm4, 2013-08-15, 1 file, -2/+2)

    This might make it slightly easier when trying to implement surface
    read-back for hardware decoding.
* vd_lavc: fix comment, document hwdec video frame size trickiness (wm4, 2013-08-15, 1 file, -1/+4)

    About this issue, it would be better if the surfaces could be
    allocated with the real size, and the vdpau video mixer could be
    created with that size as well. That would be a bit hard, because the
    real surface size would have to be communicated to vdpau. So I'm
    going with this solution. vaapi seems to be fine with either surface
    size, so there's hopefully no problem.
* video/decode: pass parameters directly to hwdec allocate_image callback (wm4, 2013-08-15, 1 file, -1/+5)

    Instead of passing AVFrame. This also moves the mysterious logic
    about the size of the allocated image to common code, instead of
    duplicating it everywhere.
* video: add vaapi decode and output support (wm4, 2013-08-12, 1 file, -0/+4)

    This is based on the MPlayer VA API patches. To be exact it's based
    on a very stripped down version of commit f1ad459a263f8537f6c from
    git://gitorious.org/vaapi/mplayer.git.

    This doesn't contain useless things like benchmarking hacks and the
    demo code for GLX interop. Also, unlike in the original patch,
    decoding and video output are split into separate source files (the
    separation between decoding and display also makes pixel format hacks
    unnecessary).

    On the other hand, some features not present in the original patch
    were added, like screenshot support.

    VA API is rather bad for actual video output. Dealing with older
    libva versions or the completely broken vdpau backend doesn't help.
    OSD is low quality and should be rather slow. In some cases, only
    either OSD or subtitles can be shown at the same time (because OSD is
    drawn first, OSD is preferred).

    Also, libva can't decide whether it accepts straight or premultiplied
    alpha for OSD sub-pictures: the vdpau backend seems to assume
    premultiplied, while a native vaapi driver uses straight. So I picked
    straight alpha. It doesn't matter much, because the blending code for
    straight alpha I added to img_convert.c is probably buggy, and ASS
    subtitles might be blended incorrectly.

    Really good video output with VA API would probably use OpenGL and
    the GL interop features, but at this point you might just use
    vo_opengl. (Patches for making HW decoding work with vo_opengl have a
    chance of being accepted.)

    Despite these issues, decoding seems to work ok. I still got tearing
    on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was
    also tested with the vdpau vaapi wrapper on a nvidia system; however
    this was rather broken. (Fortunately, there is no reason to use mpv's
    VAAPI support over native VDPAU.)
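    The difference between the two alpha conventions, per channel and
    with alpha in [0,1], as a small illustration:

    -----------
    /* Straight alpha: the source channel is stored unmultiplied. */
    static float blend_straight(float src, float dst, float a)
    {
        return src * a + dst * (1.0f - a);
    }

    /* Premultiplied alpha: the source channel already contains src * a,
       so it is added directly. */
    static float blend_premultiplied(float src_premul, float dst, float a)
    {
        return src_premul + dst * (1.0f - a);
    }
    -----------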
* video: redo hw decoding initialization, add --hwdec=auto (wm4, 2013-08-11, 1 file, -76/+118)

    Change how the HW decoding stuff is organized, the way it's
    initialized in particular. Instead of duplicating the list of
    supported codecs for hwaccel decoders, add a probe function which
    allows each decoder to report whether it supports a given codec.

    Add an "auto" choice to the --hwdec option, which automatically
    enables hardware decoding if libavcodec and/or the VO supports it.

    What mpv prints on the terminal changes a bit. Now it will just print
    a single line whether hw decoding is used or not (and nothing at all
    if no hw decoding at all was requested). The pretty violent fallback
    from hw decoding to software decoding is still quite verbose and
    evil-looking though.
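    A sketch of what such a probe hook could look like; the types, names,
    and codec list below are illustrative, not mpv's actual code:

    -----------
    #include <string.h>

    struct vd_lavc_hwdec;

    struct hwdec_functions {
        const char *api_name;
        /* return 0 if this hwdec can handle the given codec, <0 if not */
        int (*probe)(struct vd_lavc_hwdec *hwdec, const char *codec);
    };

    static int vdpau_probe(struct vd_lavc_hwdec *hwdec, const char *codec)
    {
        static const char *const supported[] =
            {"h264", "mpeg2video", "vc1", "wmv3", NULL};
        for (int n = 0; supported[n]; n++) {
            if (strcmp(codec, supported[n]) == 0)
                return 0;
        }
        return -1; /* --hwdec=auto then tries the next API or software */
    }
    -----------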
* core: move contents to mpvcore (2/2) (Stefano Pigozzi, 2013-08-06, 1 file, -7/+7)

    Follow-up commit. Fixes all the file references.
* vd_lavc: print warning if hardware decoding API is not available (wm4, 2013-07-30, 1 file, -0/+3)

    At least currently, this pretty much happens only when vdpau is
    requested but not compiled in.
* vd_lavc: fix CONFIG_VDPAU check (wm4, 2013-07-30, 1 file, -1/+1)

    CONFIG_VDPAU was just defined to 0 instead of undefined when vdpau
    was unavailable. I blame the old mplayer code, which apparently can't
    have consistent conventions.
* vdpau: split off decoder parts, use "new" libavcodec vdpau hwaccel API (wm4, 2013-07-28, 1 file, -71/+67)

    Move the decoder parts from vo_vdpau.c to a new file vdpau_old.c.
    This file is named so because it's written against the "old"
    libavcodec vdpau pseudo-decoder (e.g. "h264_vdpau").

    Add support for the "new" libavcodec vdpau support. This was recently
    added and replaces the "old" vdpau parts. (In fact, Libav is about to
    deprecate and remove the "old" API without a deprecation grace
    period, so we have to support it now. Moreover, there will probably
    be no Libav release which supports both, so the transition is even
    less smooth than we could hope, and we have to support both the old
    and new API.)

    Whether the old or new API is used is checked by a configure test: if
    the new API is found, it is used, otherwise the old API is assumed.

    Some details might be handled differently. Especially display
    preemption is a bit problematic with the "new" libavcodec vdpau
    support: it wants to keep a pointer to a specific vdpau API function
    (which can be driver specific, because preemption might switch
    drivers). Also, surface IDs are now directly stored in AVFrames (and
    mp_images), so they can't be forced to VDP_INVALID_HANDLE on
    preemption. (This changes even with older libavcodec versions,
    because mp_image always uses the newer representation to make
    vo_vdpau.c simpler.)

    Decoder initialization in the new code tries to deal with codec
    profiles, while the old code always uses the highest profile per
    codec.

    Surface allocation changes. Since the decoder won't call config() in
    vo_vdpau.c on video size change anymore, we allow allocating surfaces
    of arbitrary size instead of locking it to what the VO was
    configured. The non-hwdec code also has slightly different allocation
    behavior now.

    Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau
    doesn't work anymore (a warning suggesting the --hwdec option is
    printed instead).
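    A minimal sketch of wiring up the "new" hwaccel API, based on
    libavcodec's public vdpau.h (decoder creation and error handling
    omitted; illustrative only):

    -----------
    #include <errno.h>
    #include <libavcodec/avcodec.h>
    #include <libavcodec/vdpau.h>
    #include <libavutil/mem.h>

    static int setup_vdpau_hwaccel(AVCodecContext *avctx,
                                   VdpDecoder decoder,
                                   VdpDecoderRender *render)
    {
        AVVDPAUContext *vdpau = av_mallocz(sizeof(*vdpau));
        if (!vdpau)
            return AVERROR(ENOMEM);
        vdpau->decoder = decoder;  /* created with VdpDecoderCreate() */
        vdpau->render  = render;   /* the driver function pointer that
                                      makes display preemption awkward,
                                      as noted above */
        avctx->hwaccel_context = vdpau;
        return 0;
    }
    -----------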
* Fix some -Wshadow warnings (wm4, 2013-07-23, 1 file, -4/+4)

    In general, this warning can hint at actual bugs. We don't enable it
    yet, because it would conflict with some unmerged code, and we should
    check with clang too (this commit was done by testing with gcc).
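    A minimal example of the kind of bug this warning can catch:

    -----------
    #include <stdio.h>

    int main(void)
    {
        int count = 0;
        for (int i = 0; i < 4; i++) {
            int count = i * 2;  /* -Wshadow: shadows the outer 'count'; */
            (void)count;        /* the outer variable is never updated  */
        }
        printf("%d\n", count);  /* prints 0, probably not what was meant */
        return 0;
    }
    -----------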
* vd: add VDCTRL_GET_PARAMS (wm4, 2013-07-15, 1 file, -0/+3)

    This is probably going to be unused, but might help with debugging
    and such. It returns the image parameters as determined by the video
    decoder.
* Remove old demuxers (wm4, 2013-07-07, 1 file, -4/+1)

    Delete demux_avi, demux_asf, demux_mpg, demux_ts. libavformat does
    better than them (except in rare corner cases), and the demuxers have
    a bad influence on the rest of the code. Often they don't output
    proper packets, and require additional audio and video parsing. Most
    work only in --no-correct-pts mode. Remove them to facilitate further
    cleanups.
* vo_opengl: handle chroma location (wm4, 2013-06-28, 1 file, -0/+3)

    Use the video decoder chroma location flags and render chroma
    locations other than centered. Until now, we've always used the
    intuitive and obvious centered chroma location, but H.264 uses
    something else.

    FFmpeg provides a small overview in libavcodec/avcodec.h:

    -----------
    /**
     *  X   X      3 4 X      X are luma samples,
     *             1 2        1-6 are possible chroma positions
     *  X   X      5 6 X      0 is undefined/unknown position
     */
    enum AVChromaLocation{
        AVCHROMA_LOC_UNSPECIFIED = 0,
        AVCHROMA_LOC_LEFT        = 1, ///< mpeg2/4, h264 default
        AVCHROMA_LOC_CENTER      = 2, ///< mpeg1, jpeg, h263
        AVCHROMA_LOC_TOPLEFT     = 3, ///< DV
        AVCHROMA_LOC_TOP         = 4,
        AVCHROMA_LOC_BOTTOMLEFT  = 5,
        AVCHROMA_LOC_BOTTOM      = 6,
        AVCHROMA_LOC_NB          ,    ///< Not part of ABI
    };
    -----------

    The visual difference is literally minimal, but since videophiles
    apparently consider this detail a quality mark of a video renderer,
    support it anyway. We don't bother with chroma locations other than
    centered and left, though. Not sure about correctness, but it's
    probably ok.
* video: add a new method to configure filters and VOs (wm4, 2013-06-28, 1 file, -6/+16)

    The filter chain and the video outputs have config() functions. They
    are strictly limited to transferring the video size and format. Other
    parameters (like color levels) have to be transferred separately.

    Improve upon this by introducing a separate set of reconfig()
    functions, which use mp_image_params to carry format parameters. This
    struct contains all image format related parameters from config(),
    plus additional parameters such as colorspace.

    Change vf_rotate to use it, as well as vo_opengl. vf_rotate is just
    an example/test case, but vo_opengl will need it later.

    The intention is also to get rid of VOCTRL_SET_YUV_COLORSPACE. This
    information is now handed to the VOs via reconfig(). The getter,
    VOCTRL_GET_YUV_COLORSPACE, will still be needed though.
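    A sketch of the two entry points side by side; mpv's actual
    signatures and mp_image_params fields may differ:

    -----------
    #include <stdint.h>

    struct vo;

    struct mp_image_params {
        int imgfmt;        /* image format */
        int w, h;          /* coded size */
        int d_w, d_h;      /* display size (carries the aspect ratio) */
        int colorspace;    /* what VOCTRL_SET_YUV_COLORSPACE carried */
        int colorlevels;
    };

    struct vo_driver_sketch {
        /* old: size/format only; everything else via separate VOCTRLs */
        int (*config)(struct vo *vo, uint32_t w, uint32_t h, uint32_t d_w,
                      uint32_t d_h, uint32_t flags, uint32_t format);
        /* new: one struct carries all format-related parameters */
        int (*reconfig)(struct vo *vo, struct mp_image_params *params,
                        int flags);
    };
    -----------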
* options: remove -lavdopts debug suboption (wm4, 2013-06-28, 1 file, -4/+0)

    This can be set as an avopt instead.
* core: add common function to initialize AVPacket (wm4, 2013-06-03, 1 file, -11/+2)

    Audio and video had their own (very similar) functions to initialize
    an AVPacket (ffmpeg's packet struct) from a demux_packet (mplayer's
    packet struct). Add a common function for these.

    Also use this function for sd_lavc_conv. This is actually a
    functional change, as some libavformat subtitle demuxers add weird
    out-of-band stuff as side-data.
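    A sketch of what such a shared helper can look like; the demux_packet
    fields shown are illustrative, not mpv's exact struct:

    -----------
    #include <stddef.h>
    #include <libavcodec/avcodec.h>

    struct demux_packet {
        unsigned char *buffer;
        size_t len;
    };

    static void set_av_packet(AVPacket *dst, struct demux_packet *mpkt)
    {
        av_init_packet(dst);
        dst->data = mpkt ? mpkt->buffer : NULL;
        dst->size = mpkt ? (int)mpkt->len : 0;
        /* timestamp conversion to the codec time base omitted */
    }
    -----------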
* options: remove some questionable -lavdopts suboptions (wm4, 2013-05-29, 1 file, -15/+0)

    Most of these are rather questionable, the rest you rarely need to
    set manually. You still can set all of them with -lavdopts-o (because
    libavcodec has AVOptions for them).
* vd_lavc: change VDCTRL_REINIT_VO behavior (wm4, 2013-05-18, 1 file, -3/+1)

    This tried to use ctx->pic (the last decoded AVFrame) for the frame
    bounds. However, if av_frame_unref() is called on the AVFrame, the
    function will reset _all_ AVFrame fields, even those which are not
    involved with memory management. As a result, mpcodecs_config_vo()
    was called with 0 width/height, which made the function exit early,
    instead of reconfiguring the filter chain.

    Go back to using mpcodecs_config_vo() directly. (That's what it did
    before this VDCTRL was originally introduced; the original reason for
    it disappeared.)
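    The libavutil behavior this describes, as a small demonstration:

    -----------
    #include <assert.h>
    #include <libavutil/frame.h>

    /* av_frame_unref() resets every field, not just the buffer refs. */
    static void unref_clears_metadata(AVFrame *pic)
    {
        /* assume pic->width/height were valid after decoding */
        av_frame_unref(pic);
        assert(pic->width == 0 && pic->height == 0);
        /* code that later reads pic->width for reconfiguration sees 0 */
    }
    -----------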
* video: rename VDCTRL_RESET_ASPECT to VDCTRL_REINIT_VO (wm4, 2013-05-18, 1 file, -1/+1)

    Same thing, and VDCTRL_REINIT_VO implies more generic use.
* vd_lavc: hack-fix vdpau decoding with non mod 16 video (wm4, 2013-05-14, 1 file, -1/+10)

    This changes the code so that it does the same as MPlayer, mplayer2,
    and mpv before ref-counted AVFrames. The problem is that get_buffer2
    is called with aligned frame dimensions, while get_buffer was not.
    This breaks mpv's video frame size change detection.
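    An illustration of the mismatch (not mpv's actual fix): inside
    get_buffer2, the frame dimensions may be padded for the decoder,
    while avctx->width/height keep the display size.

    -----------
    #include <libavcodec/avcodec.h>

    static int get_buffer2_sketch(AVCodecContext *avctx, AVFrame *pic,
                                  int flags)
    {
        /* e.g. 1920x1080 h264 can arrive here as pic->height == 1088;
           size change detection must compare avctx->width/avctx->height,
           not the aligned pic->width/pic->height */
        (void)avctx->width;
        (void)avctx->height;
        return avcodec_default_get_buffer2(avctx, pic, flags);
    }
    -----------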
* video: add --hwdec-codecs option to whitelist codecs for hw decoding (wm4, 2013-05-04, 1 file, -3/+16)
* vd_lavc: allow explicitly selecting vdpau hw decoders (wm4, 2013-05-04, 1 file, -1/+11)

    This allows using the vdpau decoders with -vd without having to use
    the -hwdec switch (basically like in mplayer).

    Note that this way of selecting the hardware decoder is still
    deprecated. libavcodec went away from adding special decoder entries
    for hardware decoding, and instead makes use of the "hwaccel"
    architecture, where hardware decoders use the same decoder names as
    the software decoders. The old vdpau special decoders will probably
    be deprecated and removed in the future.
* vd_lavc: fix decoder init failure path (wm4, 2013-04-27, 1 file, -14/+18)

    libavcodec decoder initialization failure caused a segfault, because
    it wasn't properly reported back in init().

    Also remove the return value from init_avctx(), which actually makes
    things simpler. Instead, ctx->avctx can be checked to see whether
    initialization was ok.
* core: always pass data via packet fields to video decoders (wm4, 2013-03-28, 1 file, -9/+8)

    Makes the code a bit simpler to follow, at least in the "modern"
    decoding path. (update_video_nocorrect_pts() is used with old
    demuxers, which don't return proper packets and need further parsing,
    so this code looks less simple now.)
* video: make use of libavcodec refcounting (wm4, 2013-03-13, 1 file, -6/+71)

    Now lavc_dr1.c is not used anymore if libavcodec is recent enough.
* video: prepare for libavcodec refcounting (wm4, 2013-03-13, 1 file, -46/+40)

    Some minor refactoring and moving code around. There should be no
    functional changes.
* m_option: don't define OPT_BASE_STRUCT by default (wm4, 2013-03-01, 1 file, -0/+2)

    OPT_BASE_STRUCT defines which struct the OPT_ macros (like OPT_INT
    etc.) reference implicitly, since these macros take struct member
    names but no struct type. Normally, only cfg-mplayer.h should need
    this, and other places shouldn't be bothered with having to #undef
    it.

    (Some files, like demux_lavf.c, still store their options in MPOpts.
    In the long term, this should be removed, and handled like e.g. with
    VO suboptions instead.)
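    A sketch of the pattern; the option macro's exact parameters and the
    option struct are illustrative:

    -----------
    #include "core/m_option.h" /* struct m_option and the OPT_ macros */

    struct demux_lavf_opts {
        int probesize;
    };

    #define OPT_BASE_STRUCT struct demux_lavf_opts
    static const struct m_option opts[] = {
        /* "probesize" is looked up as a member of OPT_BASE_STRUCT */
        OPT_INT("probesize", probesize, 0),
        {0}
    };
    #undef OPT_BASE_STRUCT /* don't leak the definition to other files */
    -----------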
* dec_video: remove weird offset for VDCTRL_QUERY_UNSEEN_FRAMES (wm4, 2013-02-26, 1 file, -1/+3)

    The return value of get_current_video_decoder_lag() should be the
    same before and after this change in all cases.
* vd_lavc: better warning message for software decoding fallback (wm4, 2013-02-26, 1 file, -1/+2)

    At least I hope it's better.
* vd_lavc: fix software decoding fallback (wm4, 2013-02-21, 1 file, -1/+1)

    The string is deallocated by the callee after initialization, so
    fallback at runtime passes a deallocated string to libavcodec, which
    results in random crashes. Regression introduced by commit 4d016a9.
* demux_lavf, ad_lavc, vd_lavc: pass codec header data directly (wm4, 2013-02-10, 1 file, -6/+11)