path: root/video/decode/lavc.h
Commit message (Author, Date, Files, Lines changed)

* vd_lavc: expose mastering display side data reference peak (Niklas Haas, 2016-07-03, 1 file, -1/+1)

  This greatly improves the result when decoding typical (ST.2084) HDR
  content, since the job of tone mapping gets significantly easier when
  you're only mapping from 1000 to 250, rather than 10000 to 250. The
  difference is so drastic that we can now even reasonably use
  `hdr-tone-mapping=linear` and get a very perceptually uniform result that
  is only slightly darker than normal. (To compensate for the extra dynamic
  range.)

  Due to weird implementation details, this only seems to be present on
  keyframes (or something like that), so we have to cache the last seen
  value for the frames in between.

  Also, in some files the metadata is just completely broken / nonsensical,
  so I decided to apply a simple heuristic to detect completely broken
  metadata.
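  A minimal sketch of the caching plus plausibility-check idea described
  above (the helper name, the wiring, and the exact sanity bounds are
  assumptions, not mpv's actual code):

      #include <libavutil/frame.h>
      #include <libavutil/mastering_display_metadata.h>

      // Cache the last seen mastering-display peak, since the side data may
      // only be attached to keyframes; reject obviously broken metadata with
      // a simple plausibility check. The bounds below are illustrative.
      static double update_sig_peak(const AVFrame *frame, double cached_peak)
      {
          AVFrameSideData *sd = av_frame_get_side_data(frame,
              AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
          if (sd) {
              AVMasteringDisplayMetadata *mdm = (void *)sd->data;
              if (mdm->has_luminance) {
                  double peak = av_q2d(mdm->max_luminance);
                  if (peak >= 10 && peak <= 100000) // cd/m2; sanity range
                      cached_peak = peak;
              }
          }
          return cached_peak; // frames without side data reuse the last value
      }
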
* vo_opengl: generalize HDR tone mapping mechanism (Niklas Haas, 2016-07-03, 1 file, -0/+3)

  This involves multiple changes:

  1. Brightness metadata is split into nominal peak and signal peak. For a
     quick and dirty explanation: nominal peak is the brightest value that
     your color space can represent (i.e. the brightness of an encoded
     1.0), and signal peak is the brightest value that actually occurs in
     the video (i.e. the brightest thing that's displayed).

  2. vo_opengl uses a new decision logic to figure out the right nom_peak
     and sig_peak for all situations. It also does a better job of picking
     the right target gamut/colorspace to use for the OSD. (Which still is
     and still should be treated as sRGB.) This change in logic also fixes
     #3293 en passant.

  3. Since it was growing rapidly, the logic for auto-guessing / inferring
     the right colorimetry configuration (in pass_colormanage) was split
     from the logic for actually performing the adaptation (now
     pass_color_map).

  Right now, the new logic doesn't do a whole lot since HDR metadata is
  still ignored (but not for long).
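  A rough illustration of the nominal-peak/signal-peak split (hypothetical
  struct and units, not vo_opengl's actual code):

      // Both peaks in multiples of SDR reference white; names illustrative.
      struct mp_brightness {
          double nom_peak; // brightest encodable value (an encoded 1.0)
          double sig_peak; // brightest value actually occurring in the video
      };

      // Tone mapping only has to compress sig_peak into the display range,
      // a much easier job than compressing the full nominal range.
      static double tone_map_linear(double v, struct mp_brightness b,
                                    double dst_peak)
      {
          return v * dst_peak / b.sig_peak;
      }
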
* video: add --hwdec=auto-copy mode (wm4, 2016-05-11, 1 file, -0/+2)

  This uses the normal autoprobing rules like "auto", but rejects anything
  that isn't flagged as copying data back to system memory.

  The chunk in command.c was dead code, so remove it instead of updating
  it.
* video: refactor how VO exports hwdec device handles (wm4, 2016-05-09, 1 file, -2/+2)

  The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
  renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
  documented where not), which makes the whole thing saner and cleaner. In
  particular, thread-safety rules become less subtle and more obvious.

  The new internal API makes it easier to support multiple OpenGL interop
  backends. (Although this is not done yet, and it's not clear whether it
  ever will be.)

  This also removes all the API-specific fields from mp_hwdec_ctx and
  replaces them with a "ctx" field. For d3d in particular, we drop the
  mp_d3d_ctx struct completely, and pass the interfaces directly.

  Remove the emulation checks from vaapi.c and vdpau.c; they are pointless,
  and the checks that matter are done on the VO layer.

  The d3d hardware decoders might slightly change behavior: dxva2-copy will
  not use the VO device anymore if the VO supports proper interop. This
  pretty much assumes that in such cases the VO will not use any form of
  exclusive mode, which makes using the VO device in copy mode unnecessary.

  This is a big refactor. Some things may be untested and could be broken.
* vd_lavc: better hwdec wrapper decoder selection (wm4, 2016-04-25, 1 file, -0/+6)

  This is intended for cases when --hwdec needs to override the decoder
  implementation in use, like for example on the RPI.

  It does two things:

  1. Allow the hwdec to indicate a decoder suffix. libavcodec by convention
     adds a suffix to all wrapper decoders, and here we start relying on
     it. While not necessarily the best idea, it's the only thing we got.
     libavcodec's hwaccel list is useless, because it only has the codec
     ID, not the associated decoder's name.

  2. Make --hwdec=auto work properly. It shouldn't fail anymore, and hwdec
     probing should reliably work, even if a different decoder is selected
     with --vd. The semantics of --hwdec should dictate that it overrides
     the default decoder.
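  A sketch of the suffix mechanism; the "_mmal" naming convention is real
  libavcodec practice (e.g. "h264_mmal"), but the helper itself is
  illustrative:

      #include <stdio.h>
      #include <libavcodec/avcodec.h>

      // Build the wrapper decoder name from the codec name plus the suffix
      // announced by the hwdec, e.g. "h264" + "_mmal" -> "h264_mmal".
      static const AVCodec *find_wrapper_decoder(const char *codec_name,
                                                 const char *suffix)
      {
          char name[64];
          snprintf(name, sizeof(name), "%s%s", codec_name, suffix);
          return avcodec_find_decoder_by_name(name); // NULL if not built in
      }
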
* vd_lavc: let hardware decoder request delaying frames explicitly (wm4, 2016-04-07, 1 file, -0/+5)

  Until now, the presence of the process_image() callback was used to set a
  delay queue with a hardcoded size. Change this to a vd_lavc_hwdec field
  instead, so the decoder can explicitly set this if it's really needed.

  Do this so process_image() can be used in the VideoToolbox glue code for
  something entirely unrelated.
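  The shape of the field this describes, roughly (member names are
  illustrative, not mpv's exact lavc.h):

      struct lavc_ctx;  // decoder state (opaque here)
      struct mp_image;  // mpv's image type (opaque here)

      struct vd_lavc_hwdec {
          struct mp_image *(*process_image)(struct lavc_ctx *ctx,
                                            struct mp_image *img);
          // Explicit delay-queue size, instead of inferring one from the
          // mere presence of process_image; 0 disables the queue.
          int delay_queue;
      };
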
* vd_lavc: fix codec vs. decoder confusion (wm4, 2016-04-07, 1 file, -2/+2)

  Some functions which expected a codec name (i.e. the name of the video
  format itself) were passed a decoder name. Most "native" libavcodec
  decoders have the same name as the codec, so this was never an issue.

  This should mean that e.g. using "--vd=lavc:h264_mmal --hwdec=mmal"
  should now actually enable native surface mode (instead of doing
  copy-back).
* vd_lavc: remove redundant best_csp field (wm4, 2016-02-03, 1 file, -1/+0)

  And some other simplifications.
* vd_lavc: force microsecond timestamps on RPI (wm4, 2016-02-03, 1 file, -0/+1)

  Avoids "problems". In particular, it makes MMAL output a NOPTS timestamp
  if the input timestamp was NOPTS.

  Don't do it for other decoders. Ideally, we will at some point in the
  future switch to integer fractions for timestamps, at least up until the
  filter layer. But this would be a larger change, and for now I'd prefer
  keeping the not-rounded demuxer timestamps (if we have them).
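  A minimal sketch of the rounding with NOPTS preserved (MP_NOPTS_VALUE is
  mpv's real sentinel, redefined here for self-containment; the helper
  names are illustrative):

      #include <math.h>
      #include <stdint.h>

      #define MP_NOPTS_VALUE (-0x1p+63) // mpv's "no timestamp" sentinel

      // Round a double timestamp (seconds) to whole microseconds, keeping
      // NOPTS as NOPTS so MMAL can propagate it.
      static int64_t pts_to_us(double pts)
      {
          if (pts == MP_NOPTS_VALUE)
              return INT64_MIN;
          return (int64_t)llrint(pts * 1e6);
      }

      static double us_to_pts(int64_t us)
      {
          return us == INT64_MIN ? MP_NOPTS_VALUE : us / 1e6;
      }
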
* vd_lavc: simplify an aspect of hwdec fallback (wm4, 2016-01-29, 1 file, -1/+1)

  Don't give the "software_fallback_decoder" field special meaning. Always
  set it, and rename it to "decoder". Whether hw decoding is used is
  determined by the "hwdec" field already.
* vd_lavc: delay images before reading them back (wm4, 2016-01-25, 1 file, -0/+6)

  Facilitates hardware pipelining, in particular with nvidia/dxva.
* vd_lavc: be more careful with flushing the decoder (wm4, 2015-11-10, 1 file, -0/+1)

  Until now, we've relied on the following things:

  - you can send flush packets to the decoder even if it's fully flushed,
  - you can send new packets to a flushed decoder,
  - you can send new packets to a partially flushed decoder.

  ("Flushing" refers to sending flush packets to the decoder until the
  decoder does not return new pictures, not avcodec_flush_buffers().)

  All of these are questionable. The libavcodec API probably doesn't
  guarantee that these work well or at all, even though most decoders have
  no issue with them. But especially with hardware decoding wrappers (like
  MMAL), real problems can be expected. Isolate us from these corner cases
  by handling them explicitly.
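  A sketch of tracking the flush state explicitly instead of assuming
  libavcodec tolerates packets after a full flush (state and helper names
  are illustrative):

      #include <stdbool.h>

      enum flush_state { DEC_ACTIVE, DEC_FLUSHING, DEC_FLUSHED };

      struct dec_state {
          enum flush_state flush;
      };

      // Callers must reset the decoder (avcodec_flush_buffers() plus
      // flush = DEC_ACTIVE) before feeding packets again.
      static bool can_send_packet(const struct dec_state *st)
      {
          return st->flush == DEC_ACTIVE;
      }
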
* rpi: add support for codecs other than h264 (wm4, 2015-11-05, 1 file, -1/+1)

  FFmpeg now supports h264 and mpeg2. At least vc-1 will probably follow.
* vd_lavc: make hwdec fallback more tolerant (wm4, 2015-11-03, 1 file, -0/+1)

  A hw decoder might fail to decode a frame for multiple reasons, and not
  always just because decoding is impossible. We can't generally
  distinguish these reasons well.

  Make it more tolerant by accepting failures of 3 frames, but not more.
  The threshold can be adjusted by the repurposed
  --vd-lavc-software-fallback option.

  (This behavior was suggested much earlier in some PR, but at the time the
  "proper" hwdec fallback was indistinguishable from a decoding error. With
  the current situation, "proper" fallback is still instantaneous.)
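  The tolerance logic, roughly (field names are assumptions; the default of
  3 and the option name are from the commit):

      #include <stdbool.h>

      struct hwdec_fallback {
          int fail_count;
          int threshold; // repurposed --vd-lavc-software-fallback value
      };

      static bool should_fall_back(struct hwdec_fallback *fb, bool decode_ok)
      {
          if (decode_ok) {
              fb->fail_count = 0; // assumption: a good frame resets the count
              return false;
          }
          return ++fb->fail_count > fb->threshold;
      }
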
* vd_lavc: better hwdec log output (wm4, 2015-09-02, 1 file, -0/+1)

  Often, we don't know whether hardware decoding will work until we've
  tried. (This used to be different, but API changes and improvements in
  libavcodec led to this situation.) We will often output that we're going
  to use hardware decoding, and then print a fallback warning.

  Instead, print the status once we have decoded a frame.

  Some of the old messages are turned into verbose messages, which should
  be helpful for debugging. Also add some new ones.
* vd_lavc: remove unneeded hwdec parameters (wm4, 2015-08-19, 1 file, -3/+2)

  All hwdec backends now use a single pixel format, and the format is
  always checked.

  Also, the init_decoder callback is now mandatory.
* vdpau: add support for the "new" libavcodec vdpau API (wm4, 2015-05-28, 1 file, -0/+2)

  Yet another of these dozens of hwaccel changes. This time, libavcodec
  provides utility functions which initialize the vdpau decoder and map
  codec profiles, so a lot of the work the API user had to do falls away.
  This also will give us support for high bit depth profiles, and possibly
  HEVC once libavcodec supports it.
* vd_lavc: make hardware decoding fallback less violent (wm4, 2015-05-28, 1 file, -0/+1)

  Most of hardware decoding is initialized lazily. When the first packet is
  parsed, libavcodec will call get_format() to check whether hw or sw
  decoding is wanted. Until now, we've returned AV_PIX_FMT_NONE from
  get_format() if hw decoder initialization failed. This caused the
  avcodec_decode_video2() call to fail, which in turn let us trigger the
  fallback. We didn't return a sw format from get_format(), because we
  didn't want to continue decoding at all. (The reason being that full
  reinitialization is more robust when continuing sw decoding.)

  This has some disadvantages. libavcodec vomited some unwanted error
  messages. Sometimes the failures were more severe, as happened with HEVC.
  In that case, the error code path simply acted up in a way that was
  extremely inconvenient (and had to be fixed by myself). In general,
  libavcodec is not designed to fall back this way.

  Make it a bit less violent from the API usage point of view. Return a sw
  format if hw decoder initialization fails. In this case, we let
  get_buffer2() call avcodec_default_get_buffer2() as well. libavcodec is
  allowed to perform its own sw fallback. But once the decode function
  returns, we do the full reinitialization we wanted to do.

  The result is that the fallback is more robust, and doesn't trigger any
  decoder error codepaths or messages either. Change our own fallback
  message to a warning, since there are no other messages with error
  severity anymore.
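  A hypothetical get_format() shaped like the commit describes: on hw init
  failure, hand libavcodec a software format instead of AV_PIX_FMT_NONE.
  is_hw_format() and init_hw() are stubs standing in for mpv's internals:

      #include <stdbool.h>
      #include <libavcodec/avcodec.h>

      // Stubs; mpv's real checks and init live elsewhere.
      static bool is_hw_format(enum AVPixelFormat fmt)
      {
          return fmt == AV_PIX_FMT_VDPAU || fmt == AV_PIX_FMT_VAAPI;
      }
      static int init_hw(AVCodecContext *avctx, enum AVPixelFormat fmt)
      {
          (void)avctx; (void)fmt;
          return -1; // pretend hw init failed, forcing the sw path
      }

      static enum AVPixelFormat get_format_cb(AVCodecContext *avctx,
                                              const enum AVPixelFormat *fmts)
      {
          for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++) {
              if (is_hw_format(*p) && init_hw(avctx, *p) >= 0)
                  return *p; // hw decoder initialized successfully
          }
          // Hw init failed or no hw format offered: return a sw format so
          // libavcodec can keep decoding; the full reinit happens after the
          // decode call returns.
          for (const enum AVPixelFormat *p = fmts; *p != AV_PIX_FMT_NONE; p++) {
              if (!is_hw_format(*p))
                  return *p;
          }
          return AV_PIX_FMT_NONE;
      }
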
* vd_lavc: report actually used hwdec API (wm4, 2015-05-25, 1 file, -1/+0)

  Instead of the requested one, which can be just "auto", and which is
  rather useless.
* RPI support (wm4, 2015-03-29, 1 file, -0/+2)

  This requires FFmpeg git master for accelerated hardware decoding. Keep
  in mind that FFmpeg must be compiled with --enable-mmal. Libav will also
  work.

  Most things work. Screenshots don't work with accelerated/opaque decoding
  (except using full window screenshot mode). Subtitles are very slow -
  even simple but huge overlays can cause frame drops.

  This always uses fullscreen mode. It uses dispmanx and mmal directly, and
  there are no window managers or anything on this level.

  vo_opengl also kind of works, but is pretty useless and slow. It can't
  use opaque hardware decoding (copy-back can be used by forcing the option
  --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend is preferred
  over the X11 ones in case you're trying on X11; but X11 is even more
  useless on RPI.

  This doesn't correctly reject extended h264 profiles and thus doesn't
  fall back to software decoding. The hw supports only up to the high
  profile, and will e.g. return garbage for Hi10P video.

  This sets a precedent of enabling hw decoding by default, but only if RPI
  support is compiled in (which hopefully it will not be on desktop Linux
  platforms). While it's more or less required to use hw decoding on the
  weak RPI, it causes more problems than it solves on real platforms (Linux
  has the Intel GPU problem, OSX still has some cases with broken
  decoding). So I can live with this compromise of having different
  defaults depending on the platform.

  Raspberry Pi 2 is required. This wasn't tested on the original RPI,
  though at least decoding itself seems to work (but full playback was not
  tested).
* command: add property returning detected hwdec API (wm4, 2015-02-02, 1 file, -12/+0)

  This is somewhat imperfect, because detection of hw decoding APIs is
  mostly done on demand, and often avoided if not necessary. (For example,
  we know very well that there are no hw decoders for certain codecs.)

  This also requires every hwdec backend to identify itself (see hwdec.h
  changes).
* video: initial dxva2 support (wm4, 2014-10-25, 1 file, -0/+1)

  Shamelessly stolen from ffmpeg. It probably doesn't work - you can debug
  it yourself.
* vaapi: try dealing with Intel's braindamaged shit drivers (wm4, 2014-08-21, 1 file, -0/+3)

  So, talking to a certain Intel dev, it sounded like modern VA-API drivers
  are reasonably thread-safe. But apparently that is not the case. Not at
  all. So add approximate locking around all vaapi API calls.

  The problem appeared once we moved decoding and display to different
  threads. That means the "vaapi-copy" mode was unaffected, but decoding
  with vo_vaapi or vo_opengl led to random crashes.

  Untested on real Intel hardware. With the vdpau emulation, it seems to
  work fine - but actually it worked fine even before this commit, because
  vdpau was written and designed not by morons, but competent people (vdpau
  is guaranteed to be fully thread-safe).

  There is some probability that this commit doesn't fix things entirely.
  One problem is that the locking might not be complete. For one,
  libavcodec _also_ accesses vaapi, so we have to rely on our own guesses
  about how and when lavc uses vaapi (since we disable multithreading when
  doing hw decoding, our guess should be relatively good, but it's still a
  lavc implementation detail). One other reason this commit might not help
  is Intel's amazing potential to fuck up anything that is good and holy.
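  The "approximate locking" amounts to a global lock taken around every
  libva entry point. A minimal sketch (the wrapper name is illustrative;
  vaSyncSurface() is real libva API):

      #include <pthread.h>
      #include <va/va.h>

      static pthread_mutex_t va_lock = PTHREAD_MUTEX_INITIALIZER;

      static VAStatus locked_vaSyncSurface(VADisplay dpy, VASurfaceID surface)
      {
          pthread_mutex_lock(&va_lock);
          VAStatus res = vaSyncSurface(dpy, surface); // serialized driver call
          pthread_mutex_unlock(&va_lock);
          return res;
      }
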
* video: warn if an emulated hwdec API is used (wm4, 2014-05-28, 1 file, -0/+1)

  mpv supports two hardware decoding APIs on Linux: vdpau and vaapi. Each
  of these has emulation wrappers. The wrappers are usually slower and have
  fewer features than their native opposites. In particular, the libva
  vdpau driver is practically unmaintained.

  Check the vendor string and print a warning if emulation is detected.
  Checking vendor strings is a very stupid thing to do, but I find the
  thought of people using an emulated API for no reason worse.

  Also, make --hwdec=auto never use an API that is detected as emulated.
  This doesn't work quite right yet, because once one API is loaded,
  vo_opengl doesn't unload it, so no hardware decoding will be used if the
  first probed API (usually vdpau) is rejected. But good enough.
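  A sketch of the vendor-string check; vaQueryVendorString() is real libva
  API, but the substring matched here is an assumption about how the
  vdpau-backed libva driver identifies itself:

      #include <stdbool.h>
      #include <string.h>
      #include <va/va.h>

      static bool va_is_emulated(VADisplay dpy)
      {
          const char *vendor = vaQueryVendorString(dpy);
          // Assumed marker: the libva-vdpau wrapper mentions VDPAU in its
          // vendor string.
          return vendor && strstr(vendor, "VDPAU backend");
      }
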
* video: add a "hwdec" property to enable or disable hw decoding at runtimewm42014-04-231-0/+1
* vd_lavc: reinit hwdec on profile changes (wm4, 2014-03-17, 1 file, -0/+1)

  Needed in theory. I don't know if there are even any real-world files
  which change the profile mid-stream.
* vd_lavc: remove unused field (wm4, 2014-03-16, 1 file, -1/+0)
* vd_lavc: remove compatibility crap (wm4, 2014-03-16, 1 file, -25/+3)

  All this code was needed for compatibility with very old libavcodec
  versions only (such as Libav 9). Includes some now-possible
  simplifications too.
* vd_lavc: ridiculous workaround for Libav 9 compatibility (wm4, 2014-03-16, 1 file, -0/+1)

  This "sometimes" crashed when seeking. The fault apparently lies in
  libavcodec: the decoder returns an unreferenced frame! This is completely
  insane, but somehow I'm apparently still expected to work around it. As a
  reaction, I will drop Libav 9 support in the next commit. (While this
  commit will go into release/0.3.)
* video: initialize hw decoder in get_format (wm4, 2014-03-10, 1 file, -0/+7)

  Apparently the "right" place to initialize the hardware decoder is in the
  libavcodec get_format callback.

  This doesn't change vda.c and vdpau_old.c, because I don't have OSX, and
  vdpau_old.c is probably going to be removed soon (if Libav ever manages
  to release Libav 10). So for now, the init_decoder callback added with
  this commit is optional.

  This also means vdpau.c and vaapi.c don't have to manage and check the
  image parameters anymore.

  This change is probably needed for when libavcodec's VDA support gets a
  new iteration of its API.
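  Roughly what the optional init_decoder hook looks like when driven from
  get_format() (all names illustrative, not mpv's exact header):

      #include <libavcodec/avcodec.h>

      struct lavc_ctx;

      struct vd_lavc_hwdec {
          enum AVPixelFormat image_format;
          // Optional (per this commit): set up the hw decoder once the
          // coded size is known; return <0 on failure.
          int (*init_decoder)(struct lavc_ctx *ctx, int w, int h);
      };

      struct lavc_ctx {
          const struct vd_lavc_hwdec *hwdec;
      };

      static enum AVPixelFormat get_format_hwdec(AVCodecContext *avctx)
      {
          struct lavc_ctx *ctx = avctx->opaque;
          if (ctx->hwdec->init_decoder &&
              ctx->hwdec->init_decoder(ctx, avctx->width, avctx->height) < 0)
              return AV_PIX_FMT_NONE;      // init failed -> no hw decoding
          return ctx->hwdec->image_format; // e.g. AV_PIX_FMT_VDPAU
      }
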
* video/decode: mp_msg conversions (wm4, 2013-12-21, 1 file, -0/+1)

  Doesn't cover vdpau/vaapi parts yet, because these are a bit messier.
* Reduce recursive config.h inclusions in headers (wm4, 2013-12-18, 1 file, -2/+0)

  In my opinion, config.h inclusions should be kept to a minimum. MPlayer
  code really liked including config.h everywhere, though, even in often
  used header files. Try to reduce this.
* video: move video filter chain initialization from decoder to player (wm4, 2013-12-10, 1 file, -2/+0)

  This should help fix some issues (like not draining video frames
  correctly on reinit), as well as decoupling the decoder, filter chain,
  and VO code.

  I also wanted to make the hardware video decoding fallback work properly
  if software-only video filters are inserted. This currently has the issue
  that the fallback is too violent, and throws away a bunch of demuxer
  packets needed to restart software decoding properly. But keeping
  "backup" packets turned out to be too hacky, so I'm not doing it, at
  least not yet.
* Take care of some libavutil deprecations, drop support for FFmpeg 1.0 (wm4, 2013-11-29, 1 file, -1/+1)

  PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
  enum PixelFormat -> enum AVPixelFormat

  Loosen some version checks for certain newer pixel formats.

  av_pix_fmt_descriptors -> av_pix_fmt_desc_get

  This removes support for FFmpeg 1.0.x, which is even older than Libav
  9.x. Support for it probably was already broken, and its libswresample
  was rejected by our build system anyway because it's broken.

  Mostly untested; it does compile with Libav 9.9.
* video: move handling of container vs. stream AR out of vd_lavc.c (wm4, 2013-11-23, 1 file, -1/+0)

  Now the actual decoder doesn't need to care about this anymore, and it's
  handled in generic code instead. This simplifies vd_lavc.c, and in
  particular we don't need to detect format changes in the old way anymore.
* video: move struct mp_hwdec_info into its own header file (wm4, 2013-11-23, 1 file, -0/+1)

  This means most code accessing this struct must now include hwdec.h
  instead of dec_video.h. I just put it into dec_video.h at first because I
  thought a separate file would be a waste, but it's more proper to do it
  this way, as there are too many files which include dec_video.h only to
  get the mp_hwdec_info definition.
* vf_vavpp: make it work with vo_opengl and software decoding (wm4, 2013-11-22, 1 file, -2/+0)

  vo_opengl always loads the hwdec backend lazily, so hwdec_request_api()
  has to be called to possibly load it. This makes vf_vavpp work with
  software decoding. (Hardware decoding loads the backend before the filter
  is initialized, so this case is different.)

  Also, the VFCTRL_GET_HWDEC_INFO call doesn't need to be checked. If it
  fails, the info will be left blank.
* vo_opengl: add infrastructure for hardware decoding OpenGL interop (wm4, 2013-11-04, 1 file, -0/+2)

  Most hardware decoding APIs provide some OpenGL interop. This allows
  using vo_opengl without having to read the video data back from the GPU.

  This requires adding a backend for each hardware decoding API. (Each
  backend is an entry in gl_hwdec_vaglx[].) The backends expose video data
  as a set of OpenGL textures.

  Add infrastructure to support this. The next commit will add support for
  VA-API.
* video: check profiles with hardware decoding (wm4, 2013-11-01, 1 file, -0/+13)

  We had some code for checking profiles earlier, which was removed in
  commits 2508f38 and adfb71b. These commits mentioned that (working) hw
  decoding was sometimes prevented due to profile checking, but I can't
  find the samples anymore that showed this behavior. Also, I changed my
  opinion, and I think checking the profiles is something that should be
  done for better fallback to software decoding behavior.

  The checks roughly follow VLC's vdpau profile checks, although we do not
  check codec levels. (VLC's profile checks aren't necessarily completely
  correct, but they're a welcome help anyway.)

  Add a --vd-lavc-check-hw-profile option, which skips the profile check.
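  The check boils down to a per-codec profile whitelist. A hypothetical
  sketch (the exact set of accepted profiles is an assumption, not VLC's or
  mpv's actual table):

      #include <stdbool.h>
      #include <libavcodec/avcodec.h>

      static bool hw_profile_supported(enum AVCodecID codec, int profile)
      {
          switch (codec) {
          case AV_CODEC_ID_H264:
              return profile == FF_PROFILE_H264_BASELINE ||
                     profile == FF_PROFILE_H264_MAIN ||
                     profile == FF_PROFILE_H264_HIGH; // no Hi10P etc.
          case AV_CODEC_ID_MPEG2VIDEO:
              return profile == FF_PROFILE_MPEG2_SIMPLE ||
                     profile == FF_PROFILE_MPEG2_MAIN;
          default:
              return false; // unknown: fall back to software decoding
          }
      }
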
* vaapi: allow GPU read-back with --hwdec=vaapi-copy (wm4, 2013-09-25, 1 file, -0/+1)

  This code is actually quite inefficient: it reuses the (slow, simple)
  screenshot code. It uses an inefficient method to read the image
  (vaGetImage() instead of vaDeriveImage()), allocates new memory for each
  frame that is read, and it tries all image formats again each time.

  Also, in my tests it always picked NV12 as image format, which is not
  ideal if you actually want to filter the video, and vo_xv can't handle
  this format without conversion either.

  However, a user confirmed that it worked for him, so everything is fine.
* vd_lavc: allow process_image to change image format (wm4, 2013-09-25, 1 file, -1/+1)

  This will allow GPU read-back with process_image.

  We have to restructure how init_vo() works. Instead of initializing the
  VO before process_image is called, rename init_vo() to
  update_image_params(), and let it update the params only. Then we really
  initialize the VO after process_image.

  As a consequence of these changes, already decoded hw frames are
  correctly unreferenced if creation of the filter chain fails. This could
  previously trigger assertions on VO uninitialization, because it's not
  allowed to reference hw frames past the VO lifetime.
* video/decode: change fix_image callback (wm4, 2013-08-15, 1 file, -1/+2)

  This might make it slightly easier when trying to implement surface
  read-back for hardware decoding.
* video/decode: pass parameters directly to hwdec allocate_image callback (wm4, 2013-08-15, 1 file, -1/+2)

  Instead of passing AVFrame. This also moves the mysterious logic about
  the size of the allocated image to common code, instead of duplicating it
  everywhere.
* video: add vaapi decode and output support (wm4, 2013-08-12, 1 file, -0/+1)

  This is based on the MPlayer VA API patches. To be exact, it's based on a
  very stripped down version of commit f1ad459a263f8537f6c from
  git://gitorious.org/vaapi/mplayer.git.

  This doesn't contain useless things like benchmarking hacks and the demo
  code for GLX interop. Also, unlike in the original patch, decoding and
  video output are split into separate source files (the separation between
  decoding and display also makes pixel format hacks unnecessary).

  On the other hand, some features not present in the original patch were
  added, like screenshot support.

  VA API is rather bad for actual video output. Dealing with older libva
  versions or the completely broken vdpau backend doesn't help. OSD is low
  quality and should be rather slow. In some cases, only either OSD or
  subtitles can be shown at the same time (because OSD is drawn first, OSD
  is preferred).

  Also, libva can't decide whether it accepts straight or premultiplied
  alpha for OSD sub-pictures: the vdpau backend seems to assume
  premultiplied, while a native vaapi driver uses straight. So I picked
  straight alpha. It doesn't matter much, because the blending code for
  straight alpha I added to img_convert.c is probably buggy, and ASS
  subtitles might be blended incorrectly.

  Really good video output with VA API would probably use OpenGL and the GL
  interop features, but at this point you might just use vo_opengl.
  (Patches for making HW decoding work with vo_opengl have a chance of
  being accepted.)

  Despite these issues, decoding seems to work ok. I still got tearing on
  the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also
  tested with the vdpau vaapi wrapper on an nvidia system; however, this
  was rather broken. (Fortunately, there is no reason to use mpv's VAAPI
  support over native VDPAU.)
* video: redo hw decoding initialization, add --hwdec=auto (wm4, 2013-08-11, 1 file, -4/+28)

  Change how the HW decoding stuff is organized, the way it's initialized
  in particular. Instead of duplicating the list of supported codecs for
  hwaccel decoders, add a probe function which allows each decoder to
  report whether it supports a given codec.

  Add an "auto" choice to the --hwdec option, which automatically enables
  hardware decoding if libavcodec and/or the VO supports it.

  What mpv prints on the terminal changes a bit. Now it will just print a
  single line whether hw decoding is used or not (and nothing at all if no
  hw decoding at all was requested). The pretty violent fallback from h