path: root/video/decode
* demux: merge sh_video/sh_audio/sh_sub (wm4, 2016-01-12, 2 files, -20/+21)
  This is mainly a refactor. I'm hoping it will make some things easier in the future due to cleanly separating codec metadata and stream metadata.

  Also, declare that the "codec" field cannot be NULL anymore. demux.c will set it to "" if it's NULL when added. This gets rid of a corner case everything had to handle, but which rarely happened.
* mpv_talloc.h: rename from talloc.h (Dmitrij D. Czarkoff, 2016-01-11, 1 file, -1/+1)
  This change helps avoid conflicts with talloc.h from libtalloc.
* dxva2: log more debug info (wm4, 2016-01-11, 1 file, -11/+41)
  Dump the complete list of decoders and image formats. If it's a decoder we know, add a stringified name.
* Fix build on older libavcodec versions (wm4, 2016-01-08, 1 file, -0/+2)
  avcodec_profile_name() was added only a week ago or so.
* vd_lavc: log codec profile when attempting hardware decoding (wm4, 2016-01-08, 1 file, -0/+4)
  Should be useful.
* vaapi: add VP9 profile entries (BtbN, 2015-12-20, 1 file, -0/+7)
* video: switch from using display aspect to sample aspect (wm4, 2015-12-19, 2 files, -19/+19)
  MPlayer traditionally always used the display aspect ratio, e.g. 16:9, while FFmpeg uses the sample (aka pixel) aspect ratio. Both have a bunch of advantages and disadvantages. Actually, it seems using sample aspect ratio is generally nicer. The main reason for the change is making mpv closer to how FFmpeg works in order to make life easier. It's also nice that everything uses integer fractions instead of floats now (except --video-aspect option/property).

  Note that there is at least 1 user-visible change: vf_dsize now does not set the display size, only the display aspect ratio. This is because the image_params d_w/d_h fields did not just set the display aspect, but also the size (except in encoding mode).
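  As a side note, the relation between the two conventions is simply DAR = SAR * width / height. A minimal illustrative helper (not mpv code; dar_to_sar is a hypothetical name, using libavutil's rational helpers):

      #include <libavutil/rational.h>

      /* Derive the sample (pixel) aspect ratio from a display aspect ratio
       * and the coded picture size: DAR = SAR * w / h  =>  SAR = DAR * h / w. */
      static AVRational dar_to_sar(AVRational dar, int width, int height)
      {
          return av_div_q(dar, av_make_q(width, height));
      }

      /* Example: DAR 16:9 at 720x576 yields SAR 64:45. */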
* vd_lavc: fix avctx NULL checks (wm4, 2015-12-05, 1 file, -1/+4)
  If reinit after a fallback from hardware fails, this field can be NULL. The check in control() was broken due to a typo (found by Coverity), and decode() lacked the check entirely.
* video: readd codec delay estimation (wm4, 2015-12-02, 3 files, -0/+18)
  Approximately reverts commit 3ccac74d. This failed with some avi files, which do pseudo-VFR by sending packets with empty frames (or repeat frames, depending on point of view). Specifically, these packets are not 0 bytes, so they don't get skipped by libavformat, as with the usual VFR avi hack. Instead, the packet contains a VOP with vop_coded=0, so libavcodec will just return no frame.

  We could probably distinguish such skipped frames and delayed frames by explicitly measuring the codec delay (by counting how long it takes to get the very first frame, and then treating skips as explicit drops), but we may as well simply reinstate the old code.

  To appease at least one semi-broken case, do not enable this logic on the RPI, as the FFmpeg MMAL wrapper has arbitrary buffering (and MMAL itself is asynchronous).
* dxva2: reject 10 bit HEVC (wm4, 2015-11-23, 1 file, -0/+4)
  10 bit HEVC would require DXVA2_ModeHEVC_VLD_Main10, and most likely a different surface type (judging by the lavfsplitter source code, both P010 and P016 would work). Since I'm unable to test this stuff, exclude 10 bit for now. See #2516.
* videotoolbox: make decoder format customizable (wm4, 2015-11-17, 1 file, -2/+2)
  Because apparently there's no ideal universally working format.

  The weird OpenGL texture format for kCVPixelFormatType_32BGRA is from:
  http://stackoverflow.com/questions/22077544/draw-an-iosurface-to-an-opengl-context
  (which apparently got it from the linked Apple example code).
* vd_lavc: be more careful with flushing the decoder (wm4, 2015-11-10, 2 files, -5/+22)
  Until now, we've relied on the following things:
  - you can send flush packets to the decoder even if it's fully flushed,
  - you can send new packets to a flushed decoder,
  - you can send new packets to a partially flushed decoder.

  ("flushing" refers to sending flush packets to the decoder until the decoder does not return new pictures, not avcodec_flush_buffers().)

  All of these are questionable. The libavcodec API probably doesn't guarantee that these work well or at all, even though most decoders have no issue with these. But especially with hardware decoding wrappers (like MMAL), real problems can be expected. Isolate us from these corner cases by handling them explicitly.
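  For reference, a minimal sketch of what "flushing" means here, written against the decode API libavcodec had at the time (avcodec_decode_video2); this is an illustration, not mpv's actual code:

      #include <libavcodec/avcodec.h>

      /* Keep feeding empty ("flush") packets until the decoder stops
       * returning pictures, draining frames buffered inside the codec
       * without calling avcodec_flush_buffers(). */
      static void drain_decoder(AVCodecContext *avctx, AVFrame *frame)
      {
          AVPacket pkt;
          av_init_packet(&pkt);
          pkt.data = NULL;    /* a NULL/0 packet signals flushing */
          pkt.size = 0;
          int got_picture = 1;
          while (got_picture) {
              if (avcodec_decode_video2(avctx, frame, &got_picture, &pkt) < 0)
                  break;
              if (got_picture)
                  av_frame_unref(frame);  /* a real caller would use the frame */
          }
      }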
* video: increase avi pts buffer size (wm4, 2015-11-06, 1 file, -1/+1)
  When decoding on RPI/MMAL, the buffering between decoder input and output can be quite excessive.
* rpi: add support for codecs other than h264 (wm4, 2015-11-05, 3 files, -7/+20)
  FFmpeg now supports h264 and mpeg2. At least vc-1 will probably follow.
* vd_lavc: make hwdec fallback more tolerant (wm4, 2015-11-03, 2 files, -6/+14)
  A hw decoder might fail to decode a frame for multiple reasons, and not always just because decoding is impossible. We can't generally distinguish these reasons well.

  Make it more tolerant by accepting failures of 3 frames, but not more. The threshold can be adjusted by the repurposed --vd-lavc-software-fallback option.

  (This behavior was suggested much earlier in some PR, but at the time the "proper" hwdec fallback was indistinguishable from a decoding error. With the current situation, "proper" fallback is still instantaneous.)
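  An illustrative sketch of such a tolerance counter (the function and parameter names are invented, not mpv's actual ones):

      #include <stdbool.h>

      /* Request a software fallback only once more than max_fails frames
       * (e.g. 3) have failed to decode in hardware; successful frames
       * reset the counter in this sketch. */
      static bool should_fallback(int *fail_count, bool frame_failed, int max_fails)
      {
          if (!frame_failed) {
              *fail_count = 0;
              return false;
          }
          *fail_count += 1;
          return *fail_count > max_fails;
      }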
* vdpau: fix uninit when init fails (wm4, 2015-11-01, 2 files, -5/+2)
  The uninit() function was called twice if the init() function failed (once by init(), once by vd_lavc.c code), which caused crashes due to a double-free. (This failure is a corner case, and all other hwdec backends appear to handle this case gracefully.)

  I do not think this code should have to deal with uninit() being called more than once. Guarantee that it's called exactly once.
* vd_lavc: fix declarations (wm4, 2015-10-30, 1 file, -6/+6)
  Fixes a linker failure. How did this ever work? Apparently it did most of the time, but we just got the first case where it didn't.

  Fixes #2433.
* vd_lavc: make software decoding fallback an option (Rodger Combs, 2015-10-25, 1 file, -1/+5)
* vd_lavc: attempt to fallback from hwdec before anything is decoded (wm4, 2015-10-19, 1 file, -4/+4)
  The previous commit moved the av_frame_unref() after the got_picture check. This accidentally also deferred the software fallback reinitialization until a software picture was decoded (instead of the exact time of the fallback), which is not ideal.

  Just rely on the fact that calling av_frame_unref() on a frame is OK even if nothing was decoded.
* vd_lavc: continue decoding properly after decoding failure (wm4, 2015-10-19, 1 file, -3/+7)
  Commit 12cd48a8 started setting the hwdec_failed field even if hwdec was not active, and because it also checked this field even if hwdec was not active, it broke decoding forever. Fix this, and also avoid a memory leak or API misuse by releasing the decoded picture. As far as I know, passing an unreleased frame to the decoder has no defined effects.
* vd_lavc: work around libavcodec nonsense causing hwdec init failure (wm4, 2015-10-12, 1 file, -0/+3)
  The libavcodec h264 decoder contains some idiotic code with unknown purpose (no sample or explanation known that necessitates its existence) that causes the AVCodecContext.get_format callback to be invoked at a time when hwaccels can't be initialized. By definition, the get_format callback is supposed to initialize hwaccels (another idiotic thing that is now part of the API, but that's a different story).

  This causes hwdec initialization sometimes to fail (WolfensteinTwitch.mp4): the first get_format callback will mark it as failed, so the second get_format (the "proper" normal one) will not bother restoring the state, and hwdec init fails.

  While this should be fixed in libavcodec (good luck with that), it's quite easy to work around.
* vd_lavc: refuse to initialize vaapi with unknown profiles (wm4, 2015-10-11, 1 file, -3/+1)
  Bad idea, although I'm not sure how harmful it actually was. Although this is common code, only the vaapi hwaccel still uses it.
* video: fix base for --no-correct-pts (wm4, 2015-10-06, 2 files, -9/+10)
  Use the first encountered packet PTS/DTS as the base, instead of the last one. This does not add the number of frames buffered in the codec to the PTS offset, and thus is better.

  Also, don't add the frame time if there was no decoded frame yet. The first frame should obviously have the timestamp of the first packet (going by this heuristic).
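  A rough sketch of that heuristic (invented names, not mpv's actual code):

      /* In --no-correct-pts mode, timestamps are synthesized: anchor them
       * to the first packet's PTS/DTS and step by the nominal frame
       * duration only once frames have actually been decoded. */
      static double fake_pts(double base_pts, int decoded_frames, double frame_time)
      {
          if (decoded_frames == 0)
              return base_pts;  /* first frame gets the first packet's timestamp */
          return base_pts + decoded_frames * frame_time;
      }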
* video: increase maximum number of buffered AVI pts samples (wm4, 2015-10-06, 1 file, -1/+1)
  While b-frame reordering limits the maximum required number to around 16, the number of additionally buffered frames can be much higher. Guess when this actually matters? (For the libavcodec MMAL wrapper.)
* video: don't sort AVI pts samples (wm4, 2015-10-06, 1 file, -14/+10)
  It's obviously not needed, and only an artifact of the old PTS determination code.
* video: remove user-controllable PTS sorting (--pts-association-mode) (wm4, 2015-10-06, 2 files, -57/+5)
  Useless. Sometimes it might be useful to make some extremely broken files work, but on the other hand --no-correct-pts is sufficient for these cases. While we still need some of the code for AVI, the "auto" mode in particular inflated the size of the code.
* video: disable framedrop if avi-style timestamps are used (wm4, 2015-10-06, 1 file, -0/+3)
  This can't be handled correctly at all. Other cases when the decoder might drop a frame (such as completely failing to decode a frame) will shift timestamps by a frame, and it can't be avoided. While we could maybe find a better way to handle this with libavcodec's main decoders, this seems to be much harder if it should work with certain HW decoders, which don't pass through the DTS field (such as MMAL). Another problem is .avi files with b-frames. So just leave it as it is.
* video: remove codec delay estimation (wm4, 2015-10-03, 3 files, -15/+2)
  This was used only by the timestamp sorting code, which is a fallback for avi files (as well as avi-muxed mkv files). This was supposed to prevent accumulating timestamps in case the decoder consumes more packets than it outputs frames (i.e. frames are dropped). This didn't work very well (timestamps could be off by a large amount), the estimation of the delay was fragile, and the interdependencies with the decoder were annoying, so kill it.
* video: cosmetics: remove trailing whitespace (wm4, 2015-10-03, 1 file, -1/+1)
* Revert "vd_lavc: do not abort hardware decoding on errors" (wm4, 2015-09-28, 1 file, -0/+1)
  This essentially reverts commit 009dfbe3. FFmpeg VideoToolbox support is being wacky, and can cause major issues, such as not being able to decode a single frame. (E.g. by playing a .ts file. This should be fixed in FFmpeg eventually.)

  This is not a straight revert of the commit; just a functional one. We keep the slightly simpler code structure.
* video: remove VDA support (wm4, 2015-09-28, 2 files, -122/+0)
  VideoToolbox is preferred. Now that FFmpeg has released 2.8, there's no reason to support VDA anymore. In fact, we had a bug that made VDA not usable with older FFmpeg versions in some newer mpv releases. VideoToolbox is supported even on slightly older OSX versions, and if not, you can still run mpv without hw decoding.
* vd_lavc: remove some ancient cargo-culting (wm4, 2015-09-28, 1 file, -1/+0)
  Definitely not needed anymore, and fixes a crash in some weird corner cases.

  The extradata freeing is apparently still needed, though. (Because a codec context can be opened again, which makes no sense, but ok.)
* vaapi: remove dependency on X11 (wm4, 2015-09-27, 1 file, -13/+58)
  There are at least 2 ways of using VAAPI without X11 (Wayland, DRM). Remove the X11 requirement from the decoder part and the EGL interop. This will be used by a following commit, which adds Wayland support.

  The worst part of this is the decoder code, which includes a bad hack for using the decoder without any VO interop (also known as "vaapi-copy" mode). Separate the X11 parts so that they're self-contained.

  For the EGL interop code we do something similar (it's kept slightly simpler, because it essentially only has to translate between our silly MPGetNativeDisplay abstraction and the vaGetDisplay...() call).
* video: refactor GPU memcpy usage (wm4, 2015-09-25, 2 files, -193/+5)
  Make the GPU memcpy from the dxva2 code generally useful to other parts of the player.

  We need to check at configure time whether SSE intrinsics work at all. (At least in this form, they won't work on clang, for example. They also won't work on non-x86.)

  Introduce mp_image_copy_gpu(), and make the dxva2 code use it. Do some awkward stuff to share the existing code used by mp_image_copy(). I'm hoping that FFmpeg will sooner or later provide a function like this, so we can remove most of this again. (There is a patch, but it's stuck in limbo since forever.)

  All this is used by the following commit.
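  For context, a minimal sketch of the technique such a "GPU memcpy" is built on: SSE4.1 streaming loads (MOVNTDQA), which are much faster than a plain memcpy when reading from uncached/write-combined memory such as GPU-mapped surfaces. This is not mpv's actual implementation; it assumes 16-byte-aligned pointers and a size that is a multiple of 64, and a real version needs runtime CPU feature checks plus unaligned head/tail handling:

      #include <smmintrin.h>  /* SSE4.1: _mm_stream_load_si128 */
      #include <stddef.h>

      static void gpu_memcpy_sse4(void *dst, const void *src, size_t size)
      {
          __m128i *d = dst;
          const __m128i *s = src;
          for (size_t i = 0; i < size / 16; i += 4) {
              /* streaming loads from write-combined (GPU) memory */
              __m128i a = _mm_stream_load_si128((__m128i *)(s + i + 0));
              __m128i b = _mm_stream_load_si128((__m128i *)(s + i + 1));
              __m128i c = _mm_stream_load_si128((__m128i *)(s + i + 2));
              __m128i e = _mm_stream_load_si128((__m128i *)(s + i + 3));
              _mm_store_si128(d + i + 0, a);
              _mm_store_si128(d + i + 1, b);
              _mm_store_si128(d + i + 2, c);
              _mm_store_si128(d + i + 3, e);
          }
      }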
* vd_lavc: Fix recovery from vdpau preemption (Philip Langdale, 2015-09-25, 1 file, -3/+3)
  Flushing buffers, and thereby triggering decoder reinitialisation, needs to happen before attempting, and failing, to decode.
* vd_lavc: do not abort hardware decoding on errors (wm4, 2015-09-23, 1 file, -8/+7)
  Usually, libavcodec ignores errors reported by the hardware decoding API, so it's not like we can actually escape if the hardware is somehow acting up. For normal fallback purposes, or if parts of the hw decoding API which we actually check fail, we do this by setting and checking the hwdec_failed flag anyway.
* vd_lavc: minor cleanup to hwdec fallback code (wm4, 2015-09-23, 1 file, -15/+8)
  The comment was largely outdated, and described the old situation when we used a "violent" fallback by making get_buffer2 fail completely. Also, for the case when the hw decoder initialization succeeded (in get_format), but get_buffer2 for some reason requests something unexpected, we can also fall back more gracefully and in the same way.
* video: make --field-dominance set interlaced flag (Kevin Mitchell, 2015-09-10, 1 file, -4/+6)
  fixes #2289
* vd_lavc: better hwdec log output (wm4, 2015-09-02, 2 files, -4/+17)
  Often, we don't know whether hardware decoding will work until we've tried. (This used to be different, but API changes and improvements in libavcodec led to this situation.) We will often output that we're going to use hardware decoding, and then print a fallback warning.

  Instead, print the status once we have decoded a frame.

  Some of the old messages are turned into verbose messages, which should be helpful for debugging. Also add some new ones.
* vd_lavc: factor all hwdec fallbacks into the same function (wm4, 2015-09-02, 1 file, -24/+19)
  The fallback at initialization time was basically duplicated, maybe for the sake of showing a different error message. This doesn't matter anymore; not much can fail at initialization anymore. Most meaningful and common errors happen either at probing or in get_format (when the actual hw decoder is initialized).
* video: make container vs. bitstream aspect ratio configurable (wm4, 2015-08-30, 2 files, -17/+38)
  Utterly idiotic bullshit. Fixes #2259.
* vd_lavc: bump number of allocated surfaces for hwdec with HEVC (wm4, 2015-08-24, 1 file, -1/+4)
* vaapi: add HEVC profile entries (wm4, 2015-08-24, 1 file, -0/+10)
  libavcodec does not support HEVC via VAAPI yet, so this won't work. However, there is ongoing work to add HEVC support to VAAPI, and this change might help with testing. (Or maybe not - but there is no harm in this change.)
* vd_lavc: remove unneeded hwdec parameters (wm4, 2015-08-19, 8 files, -25/+16)
  All hwdec backends now use a single pixel format, and the format is always checked. Also, the init_decoder callback is now mandatory.
* video: fix VideoToolbox/VDA autodetection (wm4, 2015-08-17, 1 file, -2/+12)
  This affects vo_opengl_cb in particular: it'll most likely auto-load VDA, and then the VideoToolbox decoder won't work. And everything fails.

  This is mainly caused by FFmpeg using separate pixfmts for the _same_ thing (CVPixelBuffers), simply because libavcodec's architecture demands that hwaccel backends are selected by pixfmts. (Which makes no sense, but now we have the mess.)

  So instead of duplicating FFmpeg's misdesign, just change the format to our own canonical one on the image output by the decoder. Now the GL interop code is exactly the same for VDA and VT, and we use the VT name only.
* video: remove old vdpau hwaccel API usage (wm4, 2015-08-10, 1 file, -224/+0)
  While the "old" libavcodec vdpau API is not deprecated (only the very-old API is), it's still relatively complicated code that badly duplicates the much simpler newer vdpau code. It exists only for the sake of older FFmpeg releases; get rid of it.
* hwdec: add VideoToolbox support (Sebastien Zwickert, 2015-08-05, 2 files, -0/+119)
  VDA is being deprecated in OS X 10.11 so this is needed to keep hwdec working. The code needs libavcodec support which was added recently (to FFmpeg git, libav doesn't support it).

  Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
* win32: revert wchar_t changes (wm4, 2015-08-01, 1 file, -2/+2)
  Revert "win32: more wchar_t -> WCHAR replacements"
  Revert "win32: replace wchar_t with WCHAR"

  Doing a "partial" port of this makes no sense anymore from my perspective. Revert the changes, as they're confusing without context, maintenance, and progress. These changes were a bit premature anyway, and might actually cause other issues (locale neutrality etc. as it was pointed out).
* win32: more wchar_t -> WCHAR replacements (wm4, 2015-07-30, 1 file, -2/+2)
  This was essentially missing from commit 0b52ac8a.

  Since L"..." string literals have the type wchar_t[], we can't use them for UTF-16 strings. Use C11 u"..." string literals instead. These have the type char16_t[], but we simply assume char16_t is the same underlying type as WCHAR. In practice, they're both unsigned short.

  For this reason use -std=c11 on Windows. Since Windows is a "special" environment (we require either MinGW or Cygwin), we don't need to worry too much about compiler compatibility.
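  A small illustration of the assumption described above (a sketch, not code from the commit): C11 u"..." literals have type char16_t[], and the scheme relies on char16_t having the same representation as the Win32 WCHAR.

      #include <uchar.h>    /* char16_t (C11) */
      #include <windows.h>  /* WCHAR */
      #include <assert.h>

      /* The whole scheme rests on this layout assumption. */
      static_assert(sizeof(char16_t) == sizeof(WCHAR),
                    "char16_t and WCHAR must have the same size");

      /* UTF-16 string literal with type char16_t[]; a cast is needed where
       * an API expects a WCHAR pointer. */
      static const char16_t hello[] = u"hello";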
* video: don't restrict --vd-lavc-threads to a maximum of 16 (wm4, 2015-07-23, 1 file, -1/+1)
  Only do it when the number of threads is autodetected, as more than 16 threads are still considered not recommended. (libavcodec prints a warning.)
* vaapi: allow allocating additional surfaces during decoding (wm4, 2015-07-15, 1 file, -3/+2)
  Fixes problems with --vo=opengl:interpolation. The issue here is that vo_opengl retains more surfaces than what was preallocated for the decoder. Until now, we just explicitly failed to decode frames for which no additional surfaces are available.

  Since modern drivers usually are fine with not "registering" surfaces before the decoder is created, just allow allocating additional surfaces if needed.

  (We also could probably recreate the HW decoder, since the HW decoder should be stateless. But let's try to avoid raising the overall complexity of the code.)
* vaapi: increase number of additional surfaces (wm4, 2015-07-08, 1 file, -6/+2)
  Sometime recently, hardware decoding started to fail if h264 with full reference frames was decoded, and --vo=vaapi was used. VAAPI requires registering all surfaces that the decoder will ever use in advance, so if the playback chain uses more surfaces than originally allocated, we fail and drop back to software decoding.

  I'm not really sure why or when this started happening. Commit 7b9d7265 for one is not the cause - it can be reproduced with earlier commits. It also seems to be timing dependent. Possibly it has to do with the way vo.c retains previous surfaces, and the way they can be queued/unqueued asynchronously.

  Increasing the number of reserved additional surfaces by 1 fixes it. (Though I have no idea where exactly all these surfaces are being used. Or rather, _when_.)
* dxva2: fix handling of cropped video (wm4, 2015-07-06, 1 file, -1/+5)
  Basically, we need to make sure to allocate enough data for the pretty dumb copy_nv12 function. (It could be avoided by making the function less dumb, but this fix is simpler.)
* video: replace our own refcounting with libavutil's (wm4, 2015-07-05, 1 file, -8/+5)
  mpv had refcounted frames before libav*, so we were not using libavutil's facilities. Change this and drop our own code.

  Since AVFrames are not actually refcounted, and only the image data they reference is, the semantics change a bit. This affects mainly mp_image_pool, which was operating on whole images instead of buffers. While we could work on AVBufferRefs instead (and use AVBufferPool), this doesn't work for use with hardware decoding, which doesn't map cleanly to FFmpeg's reference counting. But it worked out.

  One weird consequence is that we still need our custom image data allocation function (for normal image data), because AVFrames use multiple buffers.

  There also seems to be a timing-dependent problem with vaapi (the pool appears to be "leaking" surfaces). I don't know if this is a new problem, or whether the code changes just happened to cause it more often. Raising the number of reserved surfaces seemed to fix it, but since it appears to be timing dependent, and I couldn't find anything wrong with the code, I'm just going to assume it's not a new bug.
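  For reference, a minimal sketch of the libavutil refcounting model being adopted here (not mpv code): the pixel data lives in AVBufferRefs owned by the frame, so "copying" a frame just takes new references to those buffers.

      #include <libavutil/frame.h>

      /* Create a new reference to src's underlying buffers without copying
       * the pixel data; this is essentially what av_frame_clone() does. */
      static AVFrame *new_reference(const AVFrame *src)
      {
          AVFrame *dst = av_frame_alloc();
          if (!dst)
              return NULL;
          if (av_frame_ref(dst, src) < 0) {  /* refs the AVBufferRefs, no data copy */
              av_frame_free(&dst);
              return NULL;
          }
          return dst;
      }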
* client API, dxva2: add a workaround for OpenGL fullscreen issues (wm4, 2015-07-03, 1 file, -0/+1)
  This is basically a hack for drivers which prevent the mpv DXVA2 decoder glue from working if OpenGL is in fullscreen mode.

  Since it doesn't add any "hard" new API to the client API, some of the code would be required for a true zero-copy hw decoding pipeline, and since it isn't too much code after all, this is probably acceptable.
* vo_direct3d, dxva2: use the same D3D device (wm4, 2015-07-03, 1 file, -0/+10)
  Since we still read-back (and don't have hard plans