Commit log for video/decode/vda.c
Each entry lists the commit subject, author, date, files changed, and line diff (-removed/+added).
* video: remove VDA support  (wm4, 2015-09-28; 1 file, -118/+0)
  VideoToolbox is preferred. Now that FFmpeg has released 2.8, there's no
  reason to support VDA anymore. In fact, we had a bug that made VDA not
  usable with older FFmpeg versions in some newer mpv releases. VideoToolbox
  is supported even on slightly older OSX versions, and if not, you can still
  run mpv without hw decoding.
* vd_lavc: remove unneeded hwdec parameters  (wm4, 2015-08-19; 1 file, -1/+1)
  All hwdec backends now use a single pixel format, and the format is always
  checked. Also, the init_decoder callback is now mandatory.
* video: fix VideoToolbox/VDA autodetection  (wm4, 2015-08-17; 1 file, -2/+12)
  This affects vo_opengl_cb in particular: it'll most likely auto-load VDA,
  and then the VideoToolbox decoder won't work. And everything fails. This is
  mainly caused by FFmpeg using separate pixfmts for the _same_ thing
  (CVPixelBuffers), simply because libavcodec's architecture demands that
  hwaccel backends are selected by pixfmts. (Which makes no sense, but now we
  have the mess.) So instead of duplicating FFmpeg's misdesign, just change
  the format to our own canonical one on the image output by the decoder. Now
  the GL interop code is exactly the same for VDA and VT, and we use the VT
  name only.
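  As an illustration of the "canonical format" idea above, a minimal sketch,
  assuming FFmpeg 2.8-era pixel format names (both AV_PIX_FMT_VDA and
  AV_PIX_FMT_VIDEOTOOLBOX carry a CVPixelBufferRef); the helper name is
  hypothetical and this is not mpv's actual code:

      #include <libavutil/pixfmt.h>

      /* Map the two CVPixelBuffer-backed pixfmts to one canonical value so
       * the GL interop path never has to distinguish VDA from VideoToolbox.
       * Format names are those of FFmpeg 2.8-era headers. */
      static enum AVPixelFormat canonicalize_cv_pixfmt(enum AVPixelFormat fmt)
      {
          if (fmt == AV_PIX_FMT_VDA || fmt == AV_PIX_FMT_VIDEOTOOLBOX)
              return AV_PIX_FMT_VIDEOTOOLBOX;
          return fmt;
      }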
* vda: add support for nv12 image formats  (Stefano Pigozzi, 2015-05-13; 1 file, -1/+9)
  The hardware always decodes to nv12, so using this image format causes less
  CPU usage than uyvy (which we are currently using, since Apple examples and
  other free software use that). The reduction in CPU usage can add up to
  quite a bit, especially for 4k or high-fps video. This needs an
  accompanying commit in libavcodec.
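  For reference, a small hedged sketch of how distinguishing the two surface
  formats could look; the function name is hypothetical, but the CoreVideo
  call and constants ('420v' for nv12, '2vuy' for uyvy) are standard
  CVPixelBuffer API:

      #include <CoreVideo/CoreVideo.h>

      /* Report which image format a decoded CVPixelBuffer uses. nv12 maps to
       * the bi-planar 4:2:0 constant, uyvy to '2vuy'. */
      static const char *cv_pixbuf_format_name(CVPixelBufferRef buf)
      {
          switch (CVPixelBufferGetPixelFormatType(buf)) {
          case kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange: return "nv12";
          case kCVPixelFormatType_422YpCbCr8:                   return "uyvy";
          default:                                              return "unknown";
          }
      }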
* vda: only support the new hwaccel 1.2 API (remove old code)  (Stefano Pigozzi, 2014-08-01; 1 file, -145/+13)
  Since the new hwaccel API is now merged into FFmpeg's stable release, we
  can finally remove support for the old API. I pretty much kept lu_zero's
  new code unchanged and just added some error printing (which we had with
  the old glue code) to make the life of our users less miserable.
* vda: Hwaccel 1.2 support  (Luca Barbato, 2014-05-12; 1 file, -34/+69)
  Use the new context and the default functions provided.
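  For context, the "default functions" are the helpers FFmpeg's
  libavcodec/vda.h provided at the time (the header has since been removed
  from FFmpeg); a minimal hedged sketch of how a caller would use them:

      #include <libavcodec/avcodec.h>
      #include <libavcodec/vda.h>

      /* Hypothetical init/uninit pair using the hwaccel 1.2 helpers:
       * av_vda_default_init() installs an AVVDAContext on
       * avctx->hwaccel_context, av_vda_default_free() releases it. */
      static int vda_init(AVCodecContext *avctx)
      {
          int err = av_vda_default_init(avctx);
          if (err < 0)
              return err;   /* VDA not available or stream not supported */
          return 0;
      }

      static void vda_uninit(AVCodecContext *avctx)
      {
          av_vda_default_free(avctx);
      }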
* vda: Simplify codec selection  (Luca Barbato, 2014-05-12; 1 file, -28/+2)
  VDA supports h264 only.
* vd_lavc: remove compatibility crap  (wm4, 2014-03-16; 1 file, -1/+1)
  All this code was needed for compatibility with very old libavcodec
  versions only (such as Libav 9). Includes some now-possible simplifications
  too.
* vda: attempt to fix build (2)  (wm4, 2013-12-22; 1 file, -2/+2)
  Still no OSX here.
* msg: rename mp_msg_log -> mp_msg  (wm4, 2013-12-21; 1 file, -2/+2)
  Same for companion functions.
* video/decode: mp_msg conversions  (wm4, 2013-12-21; 1 file, -5/+6)
  Doesn't cover vdpau/vaapi parts yet, because these are a bit messier.
* Split mpvcore/ into common/, misc/, bstr/  (wm4, 2013-12-17; 1 file, -2/+2)
* vo_opengl: support for vda hardware decoding  (Stefano Pigozzi, 2013-12-02; 1 file, -0/+3)
  The harder work was done in the previous commits. After that, this feature
  comes out almost for free. The only problem is that I can't get the
  textures created with CGLTexImageIOSurface2D to download properly, so the
  code performs the download using some CoreVideo APIs instead. If someone
  knows why downloading textures created with CGLTexImageIOSurface2D doesn't
  work, please contact me :)
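  For illustration, a hedged sketch of the zero-copy upload path described
  above: bind the CVPixelBuffer's backing IOSurface to a rectangle texture
  with CGLTexImageIOSurface2D. The format/type pair shown is Apple's usual
  combination for '2vuy' (UYVY) surfaces; error handling is omitted and the
  function name is made up:

      #include <OpenGL/OpenGL.h>
      #include <OpenGL/gl.h>
      #include <OpenGL/CGLIOSurface.h>
      #include <CoreVideo/CoreVideo.h>
      #include <IOSurface/IOSurface.h>

      /* Create a rectangle texture backed by the IOSurface of a UYVY
       * CVPixelBuffer, without copying the pixel data. */
      static GLuint bind_uyvy_pixbuf(CGLContextObj cgl_ctx, CVPixelBufferRef buf)
      {
          IOSurfaceRef surface = CVPixelBufferGetIOSurface(buf);
          GLuint tex = 0;
          glGenTextures(1, &tex);
          glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
          CGLTexImageIOSurface2D(cgl_ctx, GL_TEXTURE_RECTANGLE_ARB, GL_RGB,
                                 (GLsizei)IOSurfaceGetWidth(surface),
                                 (GLsizei)IOSurfaceGetHeight(surface),
                                 GL_YCBCR_422_APPLE,
                                 GL_UNSIGNED_SHORT_8_8_APPLE,
                                 surface, 0);
          return tex;
      }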
* video: add vda decode support (with hwaccel) and direct rendering  (Stefano Pigozzi, 2013-08-22; 1 file, -0/+219)
  Until now, decoding H264 using Video Decode Acceleration used the custom
  'vda_h264_dec' decoder in FFmpeg.

  The Good: this new implementation has some advantages over the previous one:
  - It works with Libav: vda_h264_dec never got into Libav, since they prefer
    client applications to use the hwaccel API.
  - It is way more efficient: in my tests this implementation yields a
    reduction in CPU usage of roughly ~50% compared to using `vda_h264_dec`
    and ~65-75% compared to h264 software decoding. This is mainly because
    `vo_corevideo` was adapted to perform direct rendering of the
    `CVPixelBufferRefs` created by the Video Decode Acceleration API framework.

  The Bad:
  - `vo_corevideo` is required to use VDA decoding acceleration.
  - It only works with new enough versions of FFmpeg/Libav (frame reference
    counting is needed), that is FFmpeg 2.0+ and Libav's git master currently.

  The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded video
  texture. On one hand this makes the code simple, since Apple's OpenGL
  implementation actually supports this out of the box. It would be nice to
  support other output image formats and choose the best format depending on
  the input, or at least to make it configurable. My tests indicate that CPU
  usage actually increases with a 420p IMGFMT output, which is not what I
  would have expected.

  NOTE: there is a small memory leak with old versions of FFmpeg and with
  Libav, since the CVPixelBufferRef is not automatically released when the
  AVFrame is deallocated. This can cause leaks inside libavcodec for decoded
  frames that are discarded before mpv wraps them inside a refcounted
  mp_image (this only happens on seeks). For frames that enter mpv's
  refcounting facilities this is not a problem, since we rewrap the
  CVPixelBufferRef in an mp_image that properly forwards CVPixelBufferRetain/
  CVPixelBufferRelease calls to the underlying CVPixelBufferRef. So, for
  FFmpeg, use something more recent than `b3d63995`; for Libav, the patch was
  posted to the dev ML in July and has been in review since, apparently
  because the proposed fix is rather hacky.
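  For illustration, a hedged sketch of the retain/release forwarding
  described in the NOTE above; the wrapper struct and function names are
  hypothetical and do not match mpv's mp_image API, but the CoreVideo calls
  are the real ones:

      #include <CoreVideo/CoreVideo.h>
      #include <stdlib.h>

      /* Minimal refcount-forwarding wrapper around a decoded CVPixelBuffer:
       * retain the buffer while the wrapper lives, release it on free. */
      struct cv_image {
          CVPixelBufferRef pixbuf;
          int width, height;
      };

      static struct cv_image *cv_image_wrap(CVPixelBufferRef pixbuf)
      {
          struct cv_image *img = calloc(1, sizeof(*img));
          if (!img)
              return NULL;
          img->pixbuf = CVPixelBufferRetain(pixbuf);   /* +1 ref */
          img->width  = (int)CVPixelBufferGetWidth(pixbuf);
          img->height = (int)CVPixelBufferGetHeight(pixbuf);
          return img;
      }

      static void cv_image_free(struct cv_image *img)
      {
          if (!img)
              return;
          CVPixelBufferRelease(img->pixbuf);           /* -1 ref */
          free(img);
      }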