path: root/video/decode/vd_lavc.c
Each entry: commit message — author, date (files changed, lines -/+)
* video/decode: mp_msg conversions — wm4, 2013-12-21 (1 file, -26/+32)
    Doesn't cover vdpau/vaapi parts yet, because these are a bit messier.
* Split mpvcore/ into common/, misc/, bstr/ — wm4, 2013-12-17 (1 file, -5/+5)
* Move options/config related files from mpvcore/ to options/ — wm4, 2013-12-17 (1 file, -2/+2)
    Since m_option.h and options.h are extremely often included, a lot of
    files have to be changed. Moving path.c/h to options/ is a bit
    questionable, but since this is mainly about access to config files
    (which are also handled in options/), it's probably ok.
* Replace mp_tmsg, mp_dbg -> mp_msg, remove mp_gtext(), remove set_osd_tmsg — wm4, 2013-12-16 (1 file, -11/+11)
    The tmsg stuff was for the internal gettext()-based translation
    system, which nobody ever attempted to use and which was thus
    removed. mp_gtext() and set_osd_tmsg() were also for this.

    mp_dbg was once enabled in debug mode only, but since we have a log
    level for enabling debug messages, it seems utterly useless.
* video: move video filter chain initialization from decoder to player — wm4, 2013-12-10 (1 file, -43/+27)
    This should help with fixing some issues (like not draining video
    frames correctly on reinit), as well as decoupling the decoder,
    filter chain, and VO code.

    I also wanted to make the hardware video decoding fallback work
    properly if software-only video filters are inserted. This currently
    has the issue that the fallback is too violent, and throws away a
    bunch of demuxer packets needed to restart software decoding
    properly. But keeping "backup" packets turned out to be too hacky,
    so I'm not doing this, at least not yet.
* video: create a separate context for video filter chain — wm4, 2013-12-07 (1 file, -2/+2)
    This adds vf_chain, which unlike vf_instance refers to the filter
    chain as a whole. This makes the filter API less awkward, and will
    allow handling format negotiation better.
* vd_lavc: factor out libavcodec thread setup — wm4, 2013-12-04 (1 file, -15/+1)
* vd_lavc: don't check required hwdec fields — wm4, 2013-12-04 (1 file, -4/+3)
* av_common: add timebase parameter to mp_set_av_packet() — wm4, 2013-12-04 (1 file, -2/+3)
    If the timebase is set, it's used for converting the packet
    timestamps. Otherwise, the previous method of reinterpret-casting
    the mpv-style double timestamps to libavcodec-style int64_t
    timestamps is used.

    Also replace the kind of awkward mp_get_av_frame_pkt_ts() function
    with mp_pts_from_av(), which simply converts timestamps the way the
    old function did. (Plus it takes a timebase parameter, similar to
    the addition to mp_set_av_packet().)

    Note that this should not change anything yet. The code in ad_lavc.c
    and vd_lavc.c passes NULL for the timebase parameters. We could set
    AVCodecContext.pkt_timebase and use that if we want to give
    libavcodec "proper" timestamps. This could be important for
    ad_lavc.c: some codecs (Opus, probably mp3 and AAC too) have weird
    requirements about doing decoding preroll on the container level,
    and thus require adjusting the audio start timestamps in some cases.
    libavcodec doesn't tell us how much was skipped, so we either get
    shifted timestamps (by the length of the skipped data), or we give
    it proper timestamps. (Note: libavcodec interprets or changes
    timestamps only if pkt_timebase is set, which by default it is not.)
    This would require selecting a timebase though, so I feel
    uncomfortable with the idea. At least this change paves the way, and
    will allow some testing.
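    For illustration, a minimal sketch of the two conversion directions
    described above; the helper names follow the commit, but the bodies
    are guesses, and MP_NOPTS_VALUE is a stand-in for mpv's real
    definition:

        #include <math.h>
        #include <string.h>
        #include <libavutil/avutil.h>
        #include <libavutil/rational.h>

        #define MP_NOPTS_VALUE (-1e300)  // placeholder, not mpv's actual value

        // mpv double (seconds) -> libavcodec int64_t in the given timebase;
        // with tb == NULL, fall back to the old bit-cast trick.
        static int64_t mp_pts_to_av(double pts, const AVRational *tb)
        {
            if (tb) {
                if (pts == MP_NOPTS_VALUE)
                    return AV_NOPTS_VALUE;
                return llrint(pts / av_q2d(*tb));
            }
            int64_t res;
            memcpy(&res, &pts, sizeof(res));  // smuggle the double's bit pattern
            return res;
        }

        // libavcodec int64_t -> mpv double; inverse of the above
        static double mp_pts_from_av(int64_t av_pts, const AVRational *tb)
        {
            if (tb) {
                if (av_pts == AV_NOPTS_VALUE)
                    return MP_NOPTS_VALUE;
                return av_pts * av_q2d(*tb);
            }
            double res;
            memcpy(&res, &av_pts, sizeof(res));
            return res;
        }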
* Take care of some libavutil deprecations, drop support for FFmpeg 1.0 — wm4, 2013-11-29 (1 file, -8/+8)
    - PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
    - enum PixelFormat -> enum AVPixelFormat
    - av_pix_fmt_descriptors -> av_pix_fmt_desc_get
    - Loosen some version checks for certain newer pixel formats.

    This removes support for FFmpeg 1.0.x, which is even older than
    Libav 9.x. Support for it probably was already broken, and its
    libswresample was rejected by our build system anyway because it's
    broken. Mostly untested; it does compile with Libav 9.9.
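    The renames amount to mechanical changes along these lines (an
    illustrative sketch, not the actual diff):

        #include <libavutil/pixfmt.h>
        #include <libavutil/pixdesc.h>

        static void pixfmt_example(void)
        {
            enum AVPixelFormat fmt = AV_PIX_FMT_YUV420P;  // was: PIX_FMT_YUV420P
            const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt);
            // was: &av_pix_fmt_descriptors[fmt]
            (void)d;
        }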
* video: refactor PTS code, add fallback heuristic to DTS — wm4, 2013-11-27 (1 file, -2/+1)
    Refactor the PTS handling code to make it cleaner, and to separate
    the bits that use PTS sorting.

    Add a heuristic to fall back to DTS if the PTS is non-monotonic.
    This code is based on what FFmpeg/Libav use for ffplay/avplay and
    also best_effort_timestamp (which is only in FFmpeg). Basically,
    this 1. just uses the DTS if PTS is unset, and 2. ignores PTS
    entirely if PTS is non-monotonic, but DTS is sorted.

    The code is pretty much the same as in Libav [1]. I'm not sure if
    all of it is really needed, or if it does more than what the
    paragraph above mentions. But maybe it's fine to cargo-cult this.

    This heuristic fixes playback of mpeg4 in ogm, which returns packets
    with PTS==DTS, even though the PTS timestamps should follow codec
    reordering. This is probably a libavformat demuxer bug, but good
    luck trying to fix it.

    The way vd_lavc.c returns the frame PTS and DTS to dec_video.c is a
    bit inelegant, but maybe better than trying to mess the PTS back
    into the decoder callback again.

    [1] https://git.libav.org/?p=libav.git;a=blob;f=cmdutils.c;h=3f1c667075724c5cde69d840ed5ed7d992898334;hb=fa515c2088e1d082d45741bbd5c05e13b0500804#l1431
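    A compact sketch of the heuristic, modeled on the Libav
    guess_correct_pts() code linked in [1]; the struct and function
    names here are illustrative:

        #include <libavutil/avutil.h>

        struct pts_guess {
            int64_t last_pts, last_dts;       // initialize both to INT64_MIN
            int num_faulty_pts, num_faulty_dts;
        };

        static int64_t guess_pts(struct pts_guess *c, int64_t pts, int64_t dts)
        {
            if (dts != AV_NOPTS_VALUE) {
                c->num_faulty_dts += dts <= c->last_dts;  // non-monotonic DTS
                c->last_dts = dts;
            }
            if (pts != AV_NOPTS_VALUE) {
                c->num_faulty_pts += pts <= c->last_pts;  // non-monotonic PTS
                c->last_pts = pts;
            }
            // fall back to DTS when PTS misbehaves more often, or is unset
            if ((c->num_faulty_pts > c->num_faulty_dts || pts == AV_NOPTS_VALUE)
                && dts != AV_NOPTS_VALUE)
                return dts;
            return pts;
        }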
* cosmetics: rename video/audio reset functions — wm4, 2013-11-27 (1 file, -1/+1)
    These used the suffix _resync_stream, which is a bit misleading.
    Nothing gets "resynchronized"; they really just reset state. (Some
    audio decoders actually used to "resync" by reading packets to
    resume playback, but that's not the case anymore.)

    Also move the function in dec_video.c to the top of the file.
* demux: export dts from demux_lavf, use it for avi — wm4, 2013-11-25 (1 file, -4/+1)
    Having the DTS directly can be useful for restoring PTS values. The
    avi file format doesn't actually store PTS values, just DTS. An
    older hack explicitly exported the DTS as PTS (ignoring the [I
    assume] genpts-generated nonsense PTS), which is not necessary
    anymore due to this change.
* video: pass PTS as part of demux_packet/AVPacket and mp_image/AVFrame — wm4, 2013-11-25 (1 file, -11/+11)
    Instead of passing the PTS as a separate field, pass it as part of
    the usual data structures. Basically, this removes strange artifacts
    from the API. (It's not finished, though: the final decoded PTS goes
    through strange paths, and filter_video() finally overwrites the
    decoded mp_image's pts field with it.)

    We also stop using libavcodec's reordered_opaque fields, and use
    AVPacket.pts and AVFrame.pkt_pts instead. This is slightly
    unorthodox, because these pts fields are not "really" opaque
    anymore, yet we treat them as such. But the end result should be the
    same, and reordered_opaque is marked as partially deprecated (it's
    not clear whether it's really deprecated).
* vd_lavc: improve a comment — wm4, 2013-11-24 (1 file, -1/+2)
* vd_lavc: when falling back to software, revert filter error status — wm4, 2013-11-23 (1 file, -0/+2)
    When mpv is started with some video filters set (--vf is used), and
    hardware decoding is requested, and hardware decoding would be
    possible, but is prevented due to video filters that accept software
    formats only, the fallback didn't always work properly.

    This fallback works rather violently: it tries to initialize the
    filter chain, and if that fails, it throws away the frame decoded
    using the hardware and retries with software. The case that didn't
    work was when decoding the current packet didn't immediately lead to
    a new frame. Then the filter chain wouldn't be reinitialized, and
    the playloop would stop playback as soon as it encountered the error
    flag.

    Fix this by resetting the filter error flag (back to
    "uninitialized"), which is a rather violent, but somewhat working
    solution.

    The fallback in general should perhaps be cleaned up later.
* video: move handling of container vs. stream AR out of vd_lavc.c — wm4, 2013-11-23 (1 file, -36/+17)
    Now the actual decoder doesn't need to care about this anymore, and
    it's handled in generic code instead. This simplifies vd_lavc.c, and
    in particular we don't need to detect format changes in the old way
    anymore.
* dec_video: make vf_input and hwdec_info statically allocated — wm4, 2013-11-23 (1 file, -2/+2)
    The only reason these structs were dynamically allocated was to
    avoid recursive includes in stheader.h, which is (or was) a very
    central file included by almost all other files. (If a struct is
    referenced via a pointer type only, it can be forward-declared, and
    the definition of the struct is not needed.) Now that they're out of
    stheader.h, this difference doesn't matter anymore, and the code can
    be simplified.

    Also sneak in some sanity checks.
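    To illustrate the parenthetical about forward declarations (the
    names below are stand-ins, not the actual stheader.h contents):

        /* A pointer member needs only a forward declaration, so the header
         * can avoid the include entirely: */
        struct mp_hwdec_info;                    /* incomplete type is enough */
        struct sh_video_demo { struct mp_hwdec_info *hwdec_info; };

        /* Embedding by value requires the complete type, which is what used
         * to force the recursive includes: */
        struct mp_hwdec_info { void *vdpau_ctx; };   /* stand-in definition */
        struct dec_video_demo { struct mp_hwdec_info hwdec_info; };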
* demux: remove gsh field from sh_audio/sh_video/sh_sub — wm4, 2013-11-23 (1 file, -3/+3)
    This used to be needed to access the generic stream header from the
    specific headers, which in turn was needed because the decoders had
    access only to the specific headers. This is not the case anymore,
    so this can finally be removed again.

    Also move the "format" field from the specific headers to sh_stream.
* video: move decoder context from sh_video into new struct — wm4, 2013-11-23 (1 file, -72/+71)
    This is similar to the sh_audio commit. It is mostly cosmetic in
    nature, except that it also adds automatic freeing of the decoder
    driver's state struct (which was in sh_video->context, now in
    dec_video->priv).

    Also remove all the stheader.h fields that are not needed anymore.
* demux: rename demux_packet.h to packet.h — wm4, 2013-11-18 (1 file, -1/+1)
    This always bothered me.
* vd_lavc: select correct hw decoder profile for constrained baseline h264 — wm4, 2013-11-14 (1 file, -2/+4)
    The existing code tried to remove the "extra" profile flags for
    h264. FF_PROFILE_H264_INTRA doesn't matter for us at all, because
    it's set only for profiles the vdpau/vaapi APIs don't support.

    The FF_PROFILE_H264_CONSTRAINED flag, on the other hand, is added to
    H264_BASELINE, except that it makes the file a real subset of
    H264_MAIN and H264_HIGH. Removing that flag would select the
    BASELINE profile, which appears to be rarely supported by hardware
    decoders. This means we accidentally rejected perfectly
    hardware-decodable files. Use MAIN for it instead.

    (vaapi has explicit support for CONSTRAINED_BASELINE, but it seems
    to be a new thing, and is not reported as supported where I tried.
    So don't bother to check it, and do the same as on vdpau.)

    See github issue #204.
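    Sketched in code, the fix amounts to something like this; the
    FF_PROFILE_* constants are libavcodec's real ones, the function
    itself is illustrative:

        #include <libavcodec/avcodec.h>

        static int normalize_h264_profile(int profile)
        {
            // Constrained baseline is a true subset of MAIN/HIGH, while
            // plain BASELINE is rarely supported by hardware decoders,
            // so map it to MAIN instead of stripping the flag.
            if (profile == FF_PROFILE_H264_CONSTRAINED_BASELINE)
                return FF_PROFILE_H264_MAIN;
            // INTRA-only variants aren't supported by vdpau/vaapi anyway,
            // so dropping the flag changes nothing for those APIs.
            return profile & ~FF_PROFILE_H264_INTRA;
        }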
* vd_lavc: remove explicit crystalhd support — wm4, 2013-11-06 (1 file, -14/+0)
    This removes "--hwdec=crystalhd". I doubt anyone even tried to use
    this. But even if someone wants to use it, the decoders can still be
    explicitly invoked with e.g.:

        --vd=lavc:h264_crystalhd

    The only advantage our special code provided was fallback to
    software decoding. (But I'm not sure how the ffmpeg crystalhd
    pseudo-decoder actually behaves.)

    Removing this will allow some simplifications as soon as we don't
    need vdpau_old.c anymore.
* Merge branch 'master' into have_configure — wm4, 2013-11-04 (1 file, -0/+7)
    Conflicts:
        configure
* vo_opengl: add infrastructure for hardware decoding OpenGL interop — wm4, 2013-11-04 (1 file, -0/+7)
    Most hardware decoding APIs provide some OpenGL interop. This allows
    using vo_opengl without having to read the video data back from the
    GPU.

    This requires adding a backend for each hardware decoding API. (Each
    backend is an entry in gl_hwdec_vaglx[].) The backends expose video
    data as a set of OpenGL textures.

    Add infrastructure to support this. The next commit will add support
    for VA-API.
* configure: uniform the defines to #define HAVE_xxx (0|1) — Stefano Pigozzi, 2013-11-03 (1 file, -6/+5)
    The configure followed 5 different conventions for defines, because
    the next guy always wanted to introduce a new, better way to uniform
    it [1]. For a hypothetical feature 'hurr' you could have had:

      * #define HAVE_HURR 1   / #undef HAVE_DURR
      * #define HAVE_HURR     / #undef HAVE_DURR
      * #define CONFIG_HURR 1 / #undef CONFIG_DURR
      * #define HAVE_HURR 1   / #define HAVE_DURR 0
      * #define CONFIG_HURR 1 / #define CONFIG_DURR 0

    All is now uniform and uses:

      * #define HAVE_HURR 1
      * #define HAVE_DURR 0

    We like defining to 0 as opposed to `undef` because it can help spot
    typos and is very helpful when doing big reorganizations in the
    code.

    [1]: http://xkcd.com/927/ (related)
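    A small example of why defining to 0 helps, using the hypothetical
    feature from the message above:

        /* With every macro defined to 0 or 1, feature checks use #if: */
        #if HAVE_HURR
            #include "hurr.h"   /* hypothetical feature code, compiled in */
        #endif

        /* A misspelling such as HAVE_HURP evaluates to 0 and can be
         * flagged by -Wundef; with the #ifdef style, a typo silently
         * compiles the feature out with no diagnostic at all. */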
* demux: rename Windows symbols — wm4, 2013-11-02 (1 file, -10/+10)
    There are some Microsoft Windows symbols which are traditionally
    used by the mplayer core, because it used to be convenient (avi was
    the big format, using binary Windows decoders made sense...). So
    these symbols have the exact same definition as the Windows ones,
    and if mplayer is compiled on Windows, the symbols from windows.h
    are used.

    This broke recently just because some files were shuffled around,
    and the symbols defined in ms_hdr.h collided with the windows.h
    ones. Since we don't have Windows binary decoders anymore, there's
    not the slightest reason our symbols should have the same names.
    Rename them to reduce the risk of collision, and to fix the recent
    regression.

    Drop WAVEFORMATEXTENSIBLE, because it's mostly unused. ao_dsound
    defines its own version if the Windows headers don't define it, and
    ao_wasapi is not available on systems where this symbol is missing.

    Also reindent ms_hdr.h.
* video: check profiles with hardware decoding — wm4, 2013-11-01 (1 file, -0/+43)
    We had some code for checking profiles earlier, which was removed in
    commits 2508f38 and adfb71b. These commits mentioned that (working)
    hw decoding was sometimes prevented due to profile checking, but I
    can't find the samples anymore that showed this behavior. Also, I
    changed my opinion, and I think checking the profiles should be done
    so that the fallback to software decoding behaves better.

    The checks roughly follow VLC's vdpau profile checks, although we do
    not check codec levels. (VLC's profile checks aren't necessarily
    completely correct, but they're a welcome help anyway.)

    Add a --vd-lavc-check-hw-profile option, which skips the profile
    check.
* vd_lavc: add more ifdeffery and ffmpeg cargo culting for correctness — wm4, 2013-10-31 (1 file, -7/+13)
    We mixed the "old" AVFrame management functions
    (avcodec_alloc_frame, avcodec_free_frame) with reference counting.
    This doesn't work correctly; you must use av_frame_alloc and
    av_frame_free. Of course ffmpeg doesn't warn us about the bad usage;
    it just messes things up silently. (Thanks a lot...) While the alloc
    function seems to be 100% compatible, the free function will do bad
    things, such as freeing memory that might still be referenced by
    another frame. I didn't experience any actual bugs, but maybe that
    was pure luck.
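    A minimal sketch of the correct pairing with refcounted frames (the
    decode call is the era-appropriate libavcodec API; the surrounding
    code is illustrative):

        #include <libavcodec/avcodec.h>

        static int decode_one(AVCodecContext *avctx, AVPacket *pkt)
        {
            AVFrame *frame = av_frame_alloc();    // not avcodec_alloc_frame()
            if (!frame)
                return AVERROR(ENOMEM);
            int got_frame = 0;
            int ret = avcodec_decode_video2(avctx, frame, &got_frame, pkt);
            if (got_frame) {
                // consumers take their own reference, e.g. via av_frame_ref()
            }
            av_frame_free(&frame);  // drops only our reference;
                                    // avcodec_free_frame() would free buffers
                                    // other frames may still reference
            return ret;
        }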
* core: add --force-window — wm4, 2013-10-02 (1 file, -1/+0)
    This commit adds the --force-window option, which causes mpv to
    always create a window when started. This can be useful when
    pretending that mpv is a GUI application (which it isn't, but users
    pretend anyway), and playing audio files would otherwise run mpv in
    the background without giving a window to control it.

    This doesn't actually create the window immediately: it does so only
    after initializing playback and when it is clear that there won't be
    any actual video. This could be a problem when starting slow or
    completely stuck network streams (mpv would remain frozen in the
    background), or if video initialization somehow is stuck forever in
    an in-between state (like when the decoder doesn't output a video
    frame, but doesn't return an error either). Well, we can pretend
    only so much that mpv is a GUI application.
* video: let sh_video->aspect always be container aspect ratio — wm4, 2013-09-26 (1 file, -6/+8)
    Now writing -1 to the 'aspect' property resets the video to the auto
    aspect ratio. Returning the aspect from the property becomes a bit
    more complicated, because we still try to return the container
    aspect ratio if no frame has been decoded yet.
* vaapi: allow GPU read-back with --hwdec=vaapi-copy — wm4, 2013-09-25 (1 file, -0/+2)
    This code is actually quite inefficient: it reuses the (slow,
    simple) screenshot code. It uses an inefficient method to read the
    image (vaGetImage() instead of vaDeriveImage()), allocates new
    memory for each frame that is read, and tries all image formats
    again each time.

    Also, in my tests it always picked NV12 as the image format, which
    is not ideal if you actually want to filter the video, and vo_xv
    can't handle this format without conversion either.

    However, a user confirmed that it worked for him, so everything is
    fine.
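    For contrast, a minimal sketch of the faster vaDeriveImage()
    read-back path the message alludes to (error handling trimmed; real
    code still needs a vaGetImage() fallback for drivers that can't
    derive):

        #include <va/va.h>

        static void *map_surface(VADisplay dpy, VASurfaceID surface, VAImage *img)
        {
            void *data = NULL;
            // maps the surface's own storage instead of copying it out
            if (vaDeriveImage(dpy, surface, img) != VA_STATUS_SUCCESS)
                return NULL;  // driver can't derive; fall back to vaGetImage()
            if (vaMapBuffer(dpy, img->buf, &data) != VA_STATUS_SUCCESS) {
                vaDestroyImage(dpy, img->image_id);
                return NULL;
            }
            return data;  // caller: vaUnmapBuffer(dpy, img->buf), then
                          // vaDestroyImage(dpy, img->image_id)
        }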
* vd_lavc: allow process_image to change image format — wm4, 2013-09-25 (1 file, -17/+20)
    This will allow GPU read-back with process_image.

    We have to restructure how init_vo() works. Instead of initializing
    the VO before process_image is called, rename init_vo() to
    update_image_params(), and let it update the params only. Then we
    really initialize the VO after process_image.

    As a consequence of these changes, already decoded hw frames are
    correctly unreferenced if creation of the filter chain fails. This
    could previously trigger assertions on VO uninitialization, because
    it's not allowed to reference hw frames past the VO lifetime.
* vd_lavc: reset last_sample_aspect_ratio in uninit_avctx() — xylosper, 2013-09-13 (1 file, -0/+1)
    In init_vo(), if sh->aspect is 0 or last_sample_aspect_ratio is set,
    sh->aspect is overwritten. With the software-decoding fallback
    behaviour, this caused the aspect ratio from the container to be
    ignored, since last_sample_aspect_ratio was already set by the first
    try with hardware decoding.
* video/out: don't require VOs to handle screenshot aspect specially — wm4, 2013-08-24 (1 file, -3/+1)
    This affects VOs which just reuse the mp_image from draw_image() to
    return screenshots. The aspect of these images is never different
    from the aspect the screenshots should have, so there's no reason to
    adjust the aspect in these cases.

    Other VOs still need it in order to restore the original image
    attributes.

    This requires some changes to the video filter code to make sure
    that the aspect in the passed mp_images is consistent.

    The changes in mplayer.c and vd_lavc.c are (probably) not strictly
    needed for this commit, but contribute to consistency.
* video: add vda decode support (with hwaccel) and direct rendering — Stefano Pigozzi, 2013-08-22 (1 file, -6/+4)
    Decoding H264 using Video Decode Acceleration used to use the custom
    `vda_h264_dec` decoder in FFmpeg.

    The Good: this new implementation has some advantages over the
    previous one:

      - It works with Libav: `vda_h264_dec` never got into Libav since
        they prefer client applications to use the hwaccel API.
      - It is way more efficient: in my tests this implementation yields
        a reduction of CPU usage of roughly ~50% compared to using
        `vda_h264_dec` and ~65-75% compared to h264 software decoding.
        This is mainly because `vo_corevideo` was adapted to perform
        direct rendering of the `CVPixelBufferRefs` created by the Video
        Decode Acceleration API Framework.

    The Bad:
      - `vo_corevideo` is required to use VDA decoding acceleration.
      - It only works with new enough versions of ffmpeg/libav (needs
        reference refcounting). That is FFmpeg 2.0+ and Libav's git
        master currently.

    The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded
    video texture. On one hand this makes the code simple, since Apple's
    OpenGL implementation actually supports this out of the box. It
    would be nice to support other output image formats and choose the
    best format depending on the input, or at least make it
    configurable. My tests indicate that CPU usage actually increases
    with a 420p IMGFMT output, which is not what I would have expected.

    NOTE: There is a small memory leak with old versions of FFmpeg and
    with Libav, since the `CVPixelBufferRef` is not automatically
    released when the `AVFrame` is deallocated. This can cause leaks
    inside libavcodec for decoded frames that are discarded before mpv
    wraps them inside a refcounted mp_image (this only happens on
    seeks). For frames that enter mpv's refcounting facilities, this is
    not a problem, since we rewrap the `CVPixelBufferRef` in our
    mp_image, which properly forwards CVPixelBufferRetain/
    CVPixelBufferRelease calls to the underlying `CVPixelBufferRef`.

    So, for FFmpeg use something more recent than `b3d63995`; for Libav
    the patch was posted to the dev ML in July and has been in review
    since then, because, apparently, the proposed fix is rather hacky.
* video/decode: change fix_image callback — wm4, 2013-08-15 (1 file, -2/+2)
    This might make it slightly easier when trying to implement surface
    read-back for hardware decoding.
* vd_lavc: fix comment, document hwdec video frame size trickiness — wm4, 2013-08-15 (1 file, -1/+4)
    About this issue: it would be better if the surfaces could be
    allocated with the real size, and the vdpau video mixer could be
    created with that size as well. But that would be a bit hard,
    because the real surface size would have to be communicated to
    vdpau. So I'm going with this solution. vaapi seems to be fine with
    either surface size, so there's hopefully no problem.
* video/decode: pass parameters directly to hwdec allocate_image callback — wm4, 2013-08-15 (1 file, -1/+5)
    Instead of passing an AVFrame. This also moves the mysterious logic
    about the size of the allocated image to common code, instead of
    duplicating it everywhere.
* video: add vaapi decode and output support — wm4, 2013-08-12 (1 file, -0/+4)
    This is based on the MPlayer VA API patches. To be exact, it's based
    on a very stripped down version of commit f1ad459a263f8537f6c from
    git://gitorious.org/vaapi/mplayer.git. This doesn't contain useless
    things like benchmarking hacks and the demo code for GLX interop.
    Also, unlike in the original patch, decoding and video output are
    split into separate source files (the separation between decoding
    and display also makes pixel format hacks unnecessary).

    On the other hand, some features not present in the original patch
    were added, like screenshot support.

    VA API is rather bad for actual video output. Dealing with older
    libva versions or the completely broken vdpau backend doesn't help.
    OSD is low quality and should be rather slow. In some cases, only
    either OSD or subtitles can be shown at the same time (because OSD
    is drawn first, OSD is preferred). Also, libva can't decide whether
    it accepts straight or premultiplied alpha for OSD sub-pictures: the
    vdpau backend seems to assume premultiplied, while a native vaapi
    driver uses straight. So I picked straight alpha. It doesn't matter
    much, because the blending code for straight alpha I added to
    img_convert.c is probably buggy, and ASS subtitles might be blended
    incorrectly.

    Really good video output with VA API would probably use OpenGL and
    the GL interop features, but at this point you might just use
    vo_opengl. (Patches for making HW decoding work with vo_opengl have
    a chance of being accepted.)

    Despite these issues, decoding seems to work ok. I still got tearing
    on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was
    also tested with the vdpau vaapi wrapper on an nvidia system;
    however, this was rather broken. (Fortunately, there is no reason to
    use mpv's VAAPI support over native VDPAU.)
* video: redo hw decoding initialization, add --hwdec=auto — wm4, 2013-08-11 (1 file, -76/+118)
    Change how the HW decoding stuff is organized, in particular the way
    it's initialized. Instead of duplicating the list of supported
    codecs for hwaccel decoders, add a probe function which allows each
    decoder to report whether it supports a given codec.

    Add an "auto" choice to the --hwdec option, which automatically
    enables hardware decoding if libavcodec and/or the VO supports it.

    What mpv prints on the terminal changes a bit. Now it will just
    print a single line saying whether hw decoding is used or not (and
    nothing at all if no hw decoding was requested). The pretty violent
    fallback from hw decoding to software decoding is still quite
    verbose and evil-looking though.
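    A rough sketch of probe-based selection as described above; the
    types and names are illustrative, not vd_lavc.c's actual
    definitions:

        #include <stddef.h>

        struct vd_lavc_hwdec {
            const char *name;                     // e.g. "vdpau"
            // returns >= 0 if this backend can handle the given decoder
            int (*probe)(const struct vd_lavc_hwdec *hwdec, const char *decoder);
        };

        static const struct vd_lavc_hwdec *select_hwdec(
            const struct vd_lavc_hwdec *const *list, const char *decoder)
        {
            for (int i = 0; list[i]; i++) {
                if (list[i]->probe(list[i], decoder) >= 0)
                    return list[i];               // first usable backend wins
            }
            return NULL;  // --hwdec=auto then falls back to software decoding
        }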
* core: move contents to mpvcore (2/2) — Stefano Pigozzi, 2013-08-06 (1 file, -7/+7)
    Follow-up commit. Fixes all the file references.
* vd_lavc: print warning if hardware decoding API is not available — wm4, 2013-07-30 (1 file, -0/+3)
    At least currently, this case pretty much only happens when vdpau is
    requested but not compiled in.
* vd_lavc: fix CONFIG_VDPAU check — wm4, 2013-07-30 (1 file, -1/+1)