path: root/video/decode
Commit log (newest first). Each entry shows the commit subject, followed by author, date, files changed, and line changes (-removed/+added), then the commit message.
* vdpau_old: restore hardware decoding with old API (wm4, 2013-10-20, 1 file, -0/+1)
  --hwdec=vdpau did nothing with older ffmpeg/libav versions. Oops.
* cosmetics: replace "CTRL" defines by enums (wm4, 2013-10-02, 1 file, -4/+6)
  Because why not.
* core: add --force-window (wm4, 2013-10-02, 1 file, -1/+0)
  This commit adds the --force-window option, which will cause mpv to always create a window when started. This can be useful when pretending that mpv is a GUI application (which it isn't, but users pretend anyway), and playing audio files would otherwise run mpv in the background without giving a window to control it.

  This doesn't actually create the window immediately: it does so only after initializing playback and when it is clear that there won't be any actual video. This could be a problem when starting slow or completely stuck network streams (mpv would remain frozen in the background), or if video initialization somehow is stuck forever in an in-between state (like when the decoder doesn't output a video frame, but doesn't return an error either). Well, we can pretend only so much that mpv is a GUI application.
* vaapi: remove non-VLD entrypoints (wm4, 2013-09-29, 1 file, -6/+2)
  These probably don't work. libavcodec doesn't seem to support them, and neither did the original mplayer-vaapi patch.
* vaapi: fix non-sense condition (wm4, 2013-09-29, 1 file, -1/+1)
  Attempting signed comparison on unsigned value.
* vaapi: potentially make reading surfaces back to system RAM faster (wm4, 2013-09-27, 1 file, -1/+4)
  Don't allocate a VAImage and a mp_image every time. VAImages are cached in the surfaces themselves, and for mp_image an explicit pool is created. The retry loop runs only once for each surface now.

  This also makes use of vaDeriveImage() if possible.
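  For illustration (not part of the commit, and not mpv's actual code), the read-back pattern described above looks roughly like this; the libva calls (vaDeriveImage, vaGetImage, vaMapBuffer, vaUnmapBuffer, vaDestroyImage) are real API, while read_surface() and the per-surface cached image are made-up names:

      #include <stdbool.h>
      #include <va/va.h>

      /* Prefer vaDeriveImage() for direct access to the surface data; fall back
       * to copying the surface into a pre-created (cached) VAImage otherwise. */
      static VAStatus read_surface(VADisplay dpy, VASurfaceID surface,
                                   VAImage *cached_image, int w, int h)
      {
          VAImage image;
          VAStatus status = vaDeriveImage(dpy, surface, &image);
          bool derived = (status == VA_STATUS_SUCCESS);
          if (!derived) {
              /* Slower path: read the surface into the cached VAImage. */
              image = *cached_image;
              status = vaGetImage(dpy, surface, 0, 0, w, h, image.image_id);
              if (status != VA_STATUS_SUCCESS)
                  return status;
          }

          void *data = NULL;
          status = vaMapBuffer(dpy, image.buf, &data);
          if (status == VA_STATUS_SUCCESS) {
              /* ... copy planes from 'data' into an mp_image from the pool ... */
              vaUnmapBuffer(dpy, image.buf);
          }

          if (derived)
              vaDestroyImage(dpy, image.image_id);  /* cached image stays alive */
          return status;
      }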
* video: let sh_video->disp_w/h always be container size (wm4, 2013-09-26, 1 file, -4/+1)
  Nothing really accesses it. Subtitle initialization actually does in a somewhat meaningful way, but there the container size is probably fine, as subtitles were always initialized before the first video frame was decoded.
* video: let sh_video->aspect always be container aspect ratio (wm4, 2013-09-26, 2 files, -11/+14)
  Now writing -1 to the 'aspect' property resets the video to the auto aspect ratio. Returning the aspect from the property becomes a bit more complicated, because we still try to return the container aspect ratio if no frame has been decoded yet.
* vd: move aspect calculation to a function (wm4, 2013-09-26, 1 file, -24/+9)
  This function would probably be useful in other places too.

  I'm not sure why vd.c doesn't apply the aspect if it changes size by less than 4 pixels. Maybe it's supposed to avoid ugly results with bad scalers if the difference is too small to be noticed normally.
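  For context, the usual aspect calculation looks roughly like the sketch below. This is illustrative only, not the helper added by the commit; the function and parameter names are made up:

      /* Scale the coded width/height so the displayed image matches the
       * container's display aspect ratio. */
      static void apply_container_aspect(double aspect, int w, int h,
                                         int *out_dw, int *out_dh)
      {
          *out_dw = w;
          *out_dh = h;
          if (aspect <= 0)
              return;                              /* unknown aspect: keep coded size */
          if (aspect > (double)w / h)
              *out_dw = (int)(h * aspect + 0.5);   /* wider than coded: stretch width */
          else
              *out_dh = (int)(w / aspect + 0.5);   /* taller than coded: stretch height */
      }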
* vd: remove some dead code (wm4, 2013-09-26, 1 file, -14/+4)
  As of now, this function is called only after decoding a full, valid video frame, so the image parameters should be reliable.
* vaapi: allow GPU read-back with --hwdec=vaapi-copy (wm4, 2013-09-25, 3 files, -3/+108)
  This code is actually quite inefficient: it reuses the (slow, simple) screenshot code. It uses an inefficient method to read the image (vaGetImage() instead of vaDeriveImage()), allocates new memory for each frame that is read, and it tries all image formats again each time. Also, in my tests it always picked NV12 as image format, which is not ideal if you actually want to filter the video, and vo_xv can't handle this format without conversion either.

  However, a user confirmed that it worked for him, so everything is fine.
* vd_lavc: allow process_image to change image format (wm4, 2013-09-25, 2 files, -18/+21)
  This will allow GPU read-back with process_image.

  We have to restructure how init_vo() works. Instead of initializing the VO before process_image is called, rename init_vo() to update_image_params(), and let it update the params only. Then we really initialize the VO after process_image.

  As a consequence of these changes, already decoded hw frames are correctly unreferenced if creation of the filter chain fails. This could trigger assertions on VO uninitialization, because it's not allowed to reference hw frames past VO lifetime.
* vaapi: add vf_vavpp and use it for deinterlacing (xylosper, 2013-09-25, 1 file, -36/+29)
  Merged from pull request #246 by xylosper. Minor cosmetic changes, some adjustments (compatibility with older libva versions), and manpage additions by wm4.

  Signed-off-by: wm4 <wm4@nowhere>
* vd_lavc: reset last_sample_aspect_ratio in uninit_avctx() (xylosper, 2013-09-13, 1 file, -0/+1)
  In init_vo(), if sh->aspect is 0 or last_sample_aspect_ratio is set, sh->aspect is overwritten. With the software decoding fallback behaviour, this causes the aspect ratio from the container to be ignored, since last_sample_aspect_ratio was already set during the first try with hardware decoding.
* core: add --deinterlace option, restore it with resume functionality (wm4, 2013-09-13, 1 file, -0/+2)
  The --deinterlace option does on playback start what the "deinterlace" property normally does at runtime. You could do this before by using the --vf option or by messing with the vo_vdpau default options, but this new option is supposed to be a "foolproof" way. The main motivation for adding this is so that the deinterlace property can be restored when using the video resume functionality (quit_watch_later command).

  Implementation-wise, this is a bit messy. The video chain is rebuilt in mpcodecs_reconfig_vo(), where we don't have access to MPContext, so the usual mechanism for enabling deinterlacing can't be used. Further, mpcodecs_reconfig_vo() is called by the video decoder, which doesn't have access to MPContext either. Moving this call to mplayer.c isn't currently possible either (see below). So we just do this before frames are filtered, which potentially means setting the deinterlacing every frame. Fortunately, setting deinterlacing is stable and idempotent, so this is hopefully not a problem. We also add a counter that is incremented on each reconfig to reduce the amount of additional work per frame to nearly zero.

  The reason we can't move mpcodecs_reconfig_vo() to mplayer.c is because of hardware decoding: we need to check whether the video chain works before we decide that we can use hardware decoding. Changing it so that this can be decided in advance without building a filter chain sounds like a good idea and should be done, but we aren't there yet.
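  A minimal sketch of the counter trick described above (names are made up, this is not mpv's actual code): applying deinterlacing is idempotent, so calling it per frame is safe, and the counter keeps the per-frame overhead near zero by only re-applying after a filter-chain reconfig:

      #include <stdbool.h>

      struct deint_state {
          int applied_for_reconfig;            /* reconfig counter value we last handled */
      };

      static void maybe_apply_deinterlace(struct deint_state *st, int reconfig_count,
                                          bool opt_deinterlace,
                                          void (*set_deint)(bool on))
      {
          if (st->applied_for_reconfig == reconfig_count)
              return;                          /* nothing changed since the last frame */
          st->applied_for_reconfig = reconfig_count;
          if (opt_deinterlace)
              set_deint(true);                 /* idempotent, so repeating is harmless */
      }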
* video: handle video output levels with mp_image_params (wm4, 2013-08-24, 3 files, -33/+1)
  Until now, video output levels (an obscure feature, e.g. for TV screens that require RGB output in limited range, similar to YUV) still required handling of VOCTRL_SET_YUV_COLORSPACE. Simplify this, and use the new mp_image_params code.

  This gets rid of some code. VOCTRL_SET_YUV_COLORSPACE is not needed at all anymore in VOs that use the reconfig callback. The result of VOCTRL_GET_YUV_COLORSPACE is now used only for the colormatrix related properties (basically, for display on OSD). For other VOs, VOCTRL_SET_YUV_COLORSPACE will be sent only once after config instead of twice.
* video/out: don't require VOs to handle screenshot aspect specially (wm4, 2013-08-24, 1 file, -3/+1)
  This affects VOs which just reuse the mp_image from draw_image() to return screenshots. The aspect of these images is never different from the aspect the screenshots should have, so there's no reason to adjust the aspect in these cases. Other VOs still need it in order to restore the original image attributes.

  This requires some changes to the video filter code to make sure that the aspect in the passed mp_images is consistent. The changes in mplayer.c and vd_lavc.c are (probably) not strictly needed for this commit, but contribute to consistency.
* video: add vda decode support (with hwaccel) and direct rendering (Stefano Pigozzi, 2013-08-22, 2 files, -6/+223)
  Decoding H264 using Video Decode Acceleration used the custom 'vda_h264_dec' decoder in FFmpeg.

  The Good: This new implementation has some advantages over the previous one:
  - It works with Libav: vda_h264_dec never got into Libav since they prefer client applications to use the hwaccel API.
  - It is way more efficient: in my tests this implementation yields a reduction of CPU usage of roughly ~50% compared to using `vda_h264_dec` and ~65-75% compared to h264 software decoding. This is mainly because `vo_corevideo` was adapted to perform direct rendering of the `CVPixelBufferRefs` created by the Video Decode Acceleration API Framework.

  The Bad:
  - `vo_corevideo` is required to use VDA decoding acceleration.
  - It only works with versions of ffmpeg/libav new enough (needs reference refcounting). That is FFmpeg 2.0+ and Libav's git master currently.

  The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded video texture. On one end this makes the code simple since Apple's OpenGL implementation actually supports this out of the box. It would be nice to support other output image formats and choose the best format depending on the input, or at least make it configurable. My tests indicate that CPU usage actually increases with a 420p IMGFMT output, which is not what I would have expected.

  NOTE: There is a small memory leak with old versions of FFmpeg and with Libav, since the CVPixelBufferRef is not automatically released when the AVFrame is deallocated. This can cause leaks inside libavcodec for decoded frames that are discarded before mpv wraps them inside a refcounted mp_image (this only happens on seeks). For frames that enter mpv's refcounting facilities, this is not a problem, since we rewrap the CVPixelBufferRef in our mp_image, which properly forwards CVPixelBufferRetain/CVPixelBufferRelease calls to the underlying CVPixelBufferRef. So, for FFmpeg use something more recent than `b3d63995`; for Libav the patch was posted to the dev ML in July and has been in review since then, because, apparently, the proposed fix is rather hacky.
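  A hedged sketch of the wrapping idea mentioned in the NOTE above: keep the CVPixelBufferRef alive for as long as our own image wrapper is referenced. The mp_image details are simplified away; my_image and its functions are hypothetical, only the CoreVideo calls are real API:

      #include <stdlib.h>
      #include <CoreVideo/CoreVideo.h>

      struct my_image {
          CVPixelBufferRef pixbuf;    /* the hardware frame we wrap */
          int refcount;               /* simplified refcounting (not thread-safe) */
      };

      static struct my_image *my_image_wrap(CVPixelBufferRef pixbuf)
      {
          struct my_image *img = calloc(1, sizeof(*img));
          img->pixbuf = CVPixelBufferRetain(pixbuf);  /* forward a retain to CoreVideo */
          img->refcount = 1;
          return img;
      }

      static void my_image_unref(struct my_image *img)
      {
          if (--img->refcount == 0) {
              CVPixelBufferRelease(img->pixbuf);      /* forward the final release */
              free(img);
          }
      }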
* vaapi: use highest available profile, instead of mapping it exactly (wm4, 2013-08-19, 1 file, -41/+38)
  Now the code does the same as the original MPlayer VAAPI patch, instead of trying to map the profiles exactly. See the previous commit for justification and discussion.
* vdpau: don't try to match codec profiles (wm4, 2013-08-19, 1 file, -27/+11)
  Instead, do what MPlayer did all these years. It worked for them, so there's probably no reason to change this. Apparently fixes playback with some files, where the VDPAU decoder does not formally support a profile, but decoding works with a more powerful profile anyway.

  Though note that MPlayer did this because it couldn't do it in a better way (no decoder-reported profiles were available when creating the VDPAU decoder), so it's not entirely clear whether this is a good idea. An alternative implementation might try to map the profiles exactly, and do some fallbacks if the exact profile is not available. But this hack-solution works too.
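  For illustration, the "just use the most powerful profile per codec" approach referred to above boils down to a small mapping table, roughly like the following sketch (not mpv's actual table; the VDPAU profile constants and libavcodec codec IDs are real):

      #include <libavcodec/avcodec.h>
      #include <vdpau/vdpau.h>

      /* Map a codec to the most capable VDPAU profile for that codec. */
      static int codec_to_highest_vdpau_profile(enum AVCodecID id, VdpDecoderProfile *out)
      {
          switch (id) {
          case AV_CODEC_ID_H264:       *out = VDP_DECODER_PROFILE_H264_HIGH;       return 0;
          case AV_CODEC_ID_MPEG1VIDEO: *out = VDP_DECODER_PROFILE_MPEG1;           return 0;
          case AV_CODEC_ID_MPEG2VIDEO: *out = VDP_DECODER_PROFILE_MPEG2_MAIN;      return 0;
          case AV_CODEC_ID_MPEG4:      *out = VDP_DECODER_PROFILE_MPEG4_PART2_ASP; return 0;
          case AV_CODEC_ID_VC1:
          case AV_CODEC_ID_WMV3:       *out = VDP_DECODER_PROFILE_VC1_ADVANCED;    return 0;
          default:                     return -1;   /* no VDPAU support for this codec */
          }
      }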
* vdpau_old: add forgotten probe callback (wm4, 2013-08-15, 1 file, -0/+1)
  Could make it crash if the VO didn't support vdpau decoding.
* vdpau: fix compilation on Libav (wm4, 2013-08-15, 1 file, -0/+1)
  Libav's <libavcodec/vdpau.h> header uses some libavcodec symbols without forward-declaring them and without including the headers declaring them. FFmpeg's header for this is fine.
* video/decode: change fix_image callback (wm4, 2013-08-15, 3 files, -5/+7)
  This might make it slightly easier when trying to implement surface read-back for hardware decoding.
* vd_lavc: fix comment, document hwdec video frame size trickiness (wm4, 2013-08-15, 1 file, -1/+4)
  About this issue, it would be better if the surfaces could be allocated with the real size, and the vdpau video mixer could be created with that size as well. That would be a bit hard, because the real surface size would have to be communicated to vdpau. So I'm going with this solution. vaapi seems to be fine with either surface size, so there's hopefully no problem.
* video/decode: pass parameters directly to hwdec allocate_image callback (wm4, 2013-08-15, 5 files, -20/+14)
  Instead of passing AVFrame. This also moves the mysterious logic about the size of the allocated image to common code, instead of duplicating it everywhere.
* video: add vaapi decode and output support (wm4, 2013-08-12, 4 files, -0/+420)
  This is based on the MPlayer VA API patches. To be exact it's based on a very stripped down version of commit f1ad459a263f8537f6c from git://gitorious.org/vaapi/mplayer.git.

  This doesn't contain useless things like benchmarking hacks and the demo code for GLX interop. Also, unlike in the original patch, decoding and video output are split into separate source files (the separation between decoding and display also makes pixel format hacks unnecessary). On the other hand, some features not present in the original patch were added, like screenshot support.

  VA API is rather bad for actual video output. Dealing with older libva versions or the completely broken vdpau backend doesn't help. OSD is low quality and should be rather slow. In some cases, only either OSD or subtitles can be shown at the same time (because OSD is drawn first, OSD is preferred). Also, libva can't decide whether it accepts straight or premultiplied alpha for OSD sub-pictures: the vdpau backend seems to assume premultiplied, while a native vaapi driver uses straight. So I picked straight alpha. It doesn't matter much, because the blending code for straight alpha I added to img_convert.c is probably buggy, and ASS subtitles might be blended incorrectly.

  Really good video output with VA API would probably use OpenGL and the GL interop features, but at this point you might just use vo_opengl. (Patches for making HW decoding work with vo_opengl have a chance of being accepted.)

  Despite these issues, decoding seems to work ok. I still got tearing on the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also tested with the vdpau vaapi wrapper on a nvidia system; however this was rather broken. (Fortunately, there is no reason to use mpv's VAAPI support over native VDPAU.)
* video: redo hw decoding initialization, add --hwdec=auto (wm4, 2013-08-11, 4 files, -99/+195)
  Change how the HW decoding stuff is organized, the way it's initialized in particular. Instead of duplicating the list of supported codecs for hwaccel decoders, add a probe function which allows each decoder to report whether it supports a given codec.

  Add an "auto" choice to the --hwdec option, which automatically enables hardware decoding if libavcodec and/or the VO supports it.

  What mpv prints on the terminal changes a bit. Now it will just print a single line whether hw decoding is used or not (and nothing at all if no hw decoding at all was requested). The pretty violent fallback from hw decoding to software decoding is still quite verbose and evil-looking though.
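  A hedged sketch of the probe pattern described above; the struct and functions are hypothetical, not mpv's real interface. Each hwdec backend reports whether it can handle a given codec, and "auto" simply takes the first one that succeeds:

      #include <stddef.h>
      #include <libavcodec/avcodec.h>

      struct hwdec_backend {
          const char *name;
          /* Returns 0 (or positive) if this backend can decode the codec, negative otherwise. */
          int (*probe)(enum AVCodecID codec);
      };

      static const struct hwdec_backend *
      probe_hwdec_auto(const struct hwdec_backend **backends, int count,
                       enum AVCodecID codec)
      {
          for (int n = 0; n < count; n++) {
              if (backends[n]->probe(codec) >= 0)
                  return backends[n];     /* first working backend wins */
          }
          return NULL;                    /* fall back to software decoding */
      }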
* core: move contents to mpvcore (2/2) (Stefano Pigozzi, 2013-08-06, 4 files, -13/+13)
  Follow-up commit. Fixes all the file references.
* vd_lavc: print warning if hardware decoding API is not available (wm4, 2013-07-30, 1 file, -0/+3)
  At least currently, this case pretty much happens only when vdpau is requested, but not compiled in.
* vd_lavc: fix CONFIG_VDPAU check (wm4, 2013-07-30, 1 file, -1/+1)
  CONFIG_VDPAU was just defined to 0, instead of undefined when vdpau was unavailable. I blame the old mplayer code, which apparently can't have consistent conventions.
* vdpau: split off decoder parts, use "new" libavcodec vdpau hwaccel API (wm4, 2013-07-28, 6 files, -73/+587)
  Move the decoder parts from vo_vdpau.c to a new file vdpau_old.c. This file is named so because it's written against the "old" libavcodec vdpau pseudo-decoder (e.g. "h264_vdpau").

  Add support for the "new" libavcodec vdpau support. This was recently added and replaces the "old" vdpau parts. (In fact, Libav is about to deprecate and remove the "old" API without a deprecation grace period, so we have to support it now. Moreover, there will probably be no Libav release which supports both, so the transition is even less smooth than we could hope, and we have to support both the old and new API.)

  Whether the old or new API is used is checked by a configure test: if the new API is found, it is used, otherwise the old API is assumed.

  Some details might be handled differently. Especially display preemption is a bit problematic with the "new" libavcodec vdpau support: it wants to keep a pointer to a specific vdpau API function (which can be driver specific, because preemption might switch drivers). Also, surface IDs are now directly stored in AVFrames (and mp_images), so they can't be forced to VDP_INVALID_HANDLE on preemption. (This changes even with older libavcodec versions, because mp_image always uses the newer representation to make vo_vdpau.c simpler.)

  Decoder initialization in the new code tries to deal with codec profiles, while the old code always uses the highest profile per codec.

  Surface allocation changes. Since the decoder won't call config() in vo_vdpau.c on video size change anymore, we allow allocating surfaces of arbitrary size instead of locking it to what the VO was configured for. The non-hwdec code also has slightly different allocation behavior now.

  Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau doesn't work anymore (a warning suggesting the --hwdec option is printed instead).
* lavc_dr1: make reference counting thread-safe (wm4, 2013-07-28, 1 file, -4/+39)
  See previous commits. This time, the lock is kept for rather long times (e.g. for the duration of a big image memory allocation), but this (probably) still doesn't matter at all.

  This also affects legacy code only (pre-refcounting libavcodec).
* Fix some -Wshadow warnings (wm4, 2013-07-23, 1 file, -4/+4)
  In general, this warning can hint at actual bugs. We don't enable it yet, because it would conflict with some unmerged code, and we should check with clang too (this commit was done by testing with gcc).
* video: remove fullscreen flags chaos (wm4, 2013-07-18, 1 file, -2/+1)
  There was a MPOpts fullscreen field, a mp_vo_opts.fs field, and VOFLAG_FULLSCREEN. Remove all these and introduce a mp_vo_opts.fullscreen flag instead.

  When VOs receive VOCTRL_FULLSCREEN, they are supposed to set the current fullscreen mode to the state in mp_vo_opts.fullscreen. They also should do this implicitly on config().

  VOs which are capable of doing so can update mp_vo_opts.fullscreen if the actual fullscreen mode changes (e.g. if the user uses the window manager controls). If fullscreen mode switching fails, they can also set mp_vo_opts.fullscreen to the actual state.

  Note that the X11 backend does almost none of this, and it has a private fs flag to store the fullscreen flag, instead of getting it from the WM. (Possibly because it has to deal with broken WMs.)

  The fullscreen option has to be checked on config() to deal with the -fs option, especially with something like:

      mpv --fs file1.mkv --{ --no-fs file2.mkv --}

  (It should start in fullscreen mode, but go to windowed mode when playing file2.mkv.)

  Wayland changes by: Alexander Preisinger <alexander.preisinger@gmail.com>
  Cocoa changes by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
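  A hedged sketch of the control-handler pattern described above; the struct layouts and function are made up for illustration, only the idea (read the desired state from a shared fullscreen flag, and write the real state back on failure) follows the commit message:

      #include <stdbool.h>

      struct vo_opts_sketch  { bool fullscreen; };
      struct vo_state_sketch { struct vo_opts_sketch *opts; bool is_fullscreen; };

      static void handle_fullscreen_request(struct vo_state_sketch *vo,
                                            bool (*set_fullscreen)(bool on))
      {
          bool want = vo->opts->fullscreen;       /* desired state set by the player core */
          if (want == vo->is_fullscreen)
              return;
          if (set_fullscreen(want))
              vo->is_fullscreen = want;
          else
              vo->opts->fullscreen = vo->is_fullscreen;  /* report failure back to the core */
      }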
* video: redo how colorspaces are handled (wm4, 2013-07-16, 3 files, -45/+18)
  Instead of handling colorspaces with VFCTRLs/VOCTRLs, make them part of the normal video format negotiation. The colorspace is passed down like other video params with config/reconfig calls.

  Forcing colorspaces (via the --colormatrix options and properties) is handled differently too: if it's changed, completely reinit the video chain. This is slower and requires a precise seek to the same position to perform an update, but it's simpler and less bug-prone. Considering that switching the colorspace at runtime by user interaction is a rather obscure feature, this is a good change.

  The colorspace VFCTRLs and VOCTRLs are still kept. The VOs rely on them, and would have to be changed to get rid of them. We'll do that later, and convert them incrementally instead of in one go.

  Note that controlling the output range now always works on the VO level. Basically, this means you can't get vf_scale to output full-range YUV for whatever reason. If that is really wanted, it should be a vf_scale option. The previous behavior didn't make too much sense anyway.

  This commit fixes a few bugs (such as playing RGB video and converting that to YUV with vf_scale - a recent commit broke this and forced the VO to display YUV as RGB if possible), and might introduce some new ones.
* Fix build on Libav stable (dammit) (wm4, 2013-07-15, 1 file, -0/+8)
  The previous commit fixed Libav git, but it was still broken on Libav 9.8. Also, while we're at it, add a note to lavc_dr1.c about its status.
* vd: add VDCTRL_GET_PARAMS (wm4, 2013-07-15, 2 files, -0/+4)
  This is probably going to be unused, but might help with debugging and such. It returns the image parameters as determined by the video decoder.
* video: unify colorspace setup (wm4, 2013-07-15, 3 files, -21/+36)
  Guess the colorspace directly in mpcodecs_reconfig_vo(), instead of in set_video_colorspace(). The difference is that the latter function just makes the video filter chain (and VOs) force the detected colorspace, and then throws it away, while the former is a bit more general and central. Not really a big difference, and it doesn't matter much in practice, but it guarantees that there is no internal disagreement about the colorspace.
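  For context, a typical colorspace guess for untagged YUV video looks like the sketch below: SD-sized frames are assumed to be BT.601, HD-sized frames BT.709. This mirrors a common heuristic and is only illustrative; mpv's actual logic may differ in details:

      enum csp_guess_sketch { CSP_BT_601, CSP_BT_709 };

      static enum csp_guess_sketch guess_colorspace(int width, int height)
      {
          /* Anything at roughly 720p dimensions or above is treated as HD content. */
          if (width >= 1280 || height > 576)
              return CSP_BT_709;
          return CSP_BT_601;
      }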
* vf: add vf_control wrapper (wm4, 2013-07-15, 1 file, -6/+6)
  Slightly cleaner, although rather redundant. But still, why wasn't this added 10 years ago?
* dec_video: add vd_control wrapper (wm4, 2013-07-15, 2 files, -7/+12)
  Slightly cleaner.
* Cleanup some include statements (wm4, 2013-07-12, 2 files, -4/+1)
* Remove old demuxers (wm4, 2013-07-07, 1 file, -4/+1)
  Delete demux_avi, demux_asf, demux_mpg, demux_ts. libavformat does better than them (except in rare corner cases), and the demuxers have a bad influence on the rest of the code. Often they don't output proper packets, and require additional audio and video parsing. Most work only in --no-correct-pts mode. Remove them to facilitate further cleanups.
* vo_opengl: handle chroma location (wm4, 2013-06-28, 1 file, -0/+3)
  Use the video decoder chroma location flags and render chroma locations other than centered. Until now, we've always used the intuitive and obvious centered chroma location, but H.264 uses something else.

  FFmpeg provides a small overview in libavcodec/avcodec.h:

  -----------
  /**
   *  X   X      3 4 X      X are luma samples,
   *             1 2        1-6 are possible chroma positions
   *  X   X      5 6 X      0 is undefined/unknown position
   */
  enum AVChromaLocation{
      AVCHROMA_LOC_UNSPECIFIED = 0,
      AVCHROMA_LOC_LEFT        = 1, ///< mpeg2/4, h264 default
      AVCHROMA_LOC_CENTER      = 2, ///< mpeg1, jpeg, h263
      AVCHROMA_LOC_TOPLEFT     = 3, ///< DV
      AVCHROMA_LOC_TOP         = 4,
      AVCHROMA_LOC_BOTTOMLEFT  = 5,
      AVCHROMA_LOC_BOTTOM      = 6,
      AVCHROMA_LOC_NB          ,    ///< Not part of ABI
  };
  -----------

  The visual difference is literally minimal, but since videophiles apparently consider this detail as quality mark of a video renderer, support it anyway. We don't bother with chroma locations other than centered and left, though. Not sure about correctness, but it's probably ok.
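  For a renderer, "handling the chroma location" essentially means shifting the chroma sampling position relative to the centered default. The sketch below is illustrative only, not vo_opengl's actual code: for "left" siting the chroma sample sits on the left luma column, i.e. 0.5 luma pixels (a quarter of a chroma texel at 4:2:0 subsampling) to the left of the centered position; all other cases are treated as centered here:

      #include <libavcodec/avcodec.h>

      /* Horizontal chroma sampling offset, expressed in luma pixels. */
      static float chroma_x_offset_in_luma_pixels(enum AVChromaLocation loc)
      {
          return loc == AVCHROMA_LOC_LEFT ? -0.5f : 0.0f;
      }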
* video: add a new method to configure filters and VOs (wm4, 2013-06-28, 4 files, -28/+47)
  The filter chain and the video outputs have config() functions. They are strictly limited to transferring the video size and format. Other parameters (like color levels) have to be transferred separately.

  Improve upon this by introducing a separate set of reconfig() functions, which use mp_image_params to carry format parameters. This struct contains all image format related parameters from config(), plus additional parameters such as colorspace.

  Change vf_rotate to use it, as well as vo_opengl. vf_rotate is just an example/test case, but vo_opengl will need it later.

  The intention is also to get rid of VOCTRL_SET_YUV_COLORSPACE. This information is now handed to the VOs via reconfig(). The getter, VOCTRL_GET_YUV_COLORSPACE, will still be needed though.
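  A hedged illustration of the idea (the field set and names below are made up; mpv's real mp_image_params differs): bundle every format-related parameter into one struct, so a single reconfig(params) call can replace config() plus the colorspace VOCTRLs, and a simple comparison tells whether a reconfig is needed at all:

      #include <stdbool.h>

      struct image_params_sketch {
          int imgfmt;                 /* pixel format identifier */
          int w, h;                   /* coded size */
          int d_w, d_h;               /* display size (after aspect correction) */
          int colorspace;             /* e.g. BT.601 vs BT.709 */
          int colorlevels;            /* limited vs full range */
      };

      /* Reconfigure the filter/VO only when something actually changed. */
      static bool params_equal(const struct image_params_sketch *a,
                               const struct image_params_sketch *b)
      {
          return a->imgfmt == b->imgfmt && a->w == b->w && a->h == b->h &&
                 a->d_w == b->d_w && a->d_h == b->d_h &&
                 a->colorspace == b->colorspace && a->colorlevels == b->colorlevels;
      }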
* options: remove -lavdopts debug suboption (wm4, 2013-06-28, 1 file, -4/+0)
  This can be set as avopt instead.
* osdep: remove shmem wrapper (wm4, 2013-06-18, 1 file, -1/+0)