path: root/video/decode/lavc.h
Commit message | Author | Date | Files | Lines
* vd_lavc: fix device leak with copy-mode hwaccels | wm4 | 2017-08-09 | 1 | -0/+1
  Apparently this was broken by the "ctx->hwdec" check in the if condition guarding the destroy call, and "ctx->hwdec = NULL;" was moved up earlier, making this always dead code. This should probably be refcounted or so, although that could make it worse as well. For now, add a flag indicating whether the device should be destroyed. Fixes #4735.
* vo_opengl: add direct rendering support | wm4 | 2017-07-24 | 1 | -0/+7
  Can be enabled via --vd-lavc-dr=yes. See manpage additions for what it does.
  This is reminiscent of the MPlayer -dr flag, but the implementation is completely different. It's the same basic concept: letting the decoder render into a GPU buffer to avoid a copy. Unlike MPlayer, this doesn't try to go through filters (libavfilter doesn't support this anyway). Unless a filter can work in-place, DR will be silently disabled. MPlayer had very complex semantics about buffer types and management (which apparently nobody ever understood) and weird restrictions that mostly limited it to mpeg2-style codecs. The mpv code does not do any of this, and just lets the decoder allocate an arbitrary number of untyped images. (No MPlayer code was used.)
  Parts of the code are based on work by atomnuker (starting point for the generic code) and haasn (some GL definitions, some basic PBO code, and correct fencing).
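  A very reduced sketch of the concept (not mpv's actual implementation; try_gpu_buffer() is a hypothetical helper): DR works by overriding libavcodec's get_buffer2 callback so decoder output lands in memory the VO chose, e.g. a persistently mapped PBO.

      #include <libavcodec/avcodec.h>

      // Hypothetical helper: fills frame->buf[0], frame->data[] and
      // frame->linesize[] from a GPU-visible allocation that satisfies the
      // decoder's size/alignment requirements. Returns 0 on success.
      int try_gpu_buffer(AVCodecContext *avctx, AVFrame *frame);

      static int dr_get_buffer2(AVCodecContext *avctx, AVFrame *frame, int flags)
      {
          if (try_gpu_buffer(avctx, frame) == 0)
              return 0;
          // If direct rendering isn't possible (e.g. a filter needs its own
          // buffers), fall back silently to normal allocation.
          return avcodec_default_get_buffer2(avctx, frame, flags);
      }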
* vd_lavc: remove unused hwaccel support code | wm4 | 2017-07-04 | 1 | -6/+0
  Was used by old hwaccel implementations.
* vd: use ST.2086 / HDR10 MaxCLL in addition to mastering metadata | Niklas Haas | 2017-06-18 | 1 | -1/+1
  MaxCLL is the more authoritative source for the metadata we are interested in. The use of mastering metadata is sort of a hack anyway, since there's no clearly defined relationship between the mastering peak brightness and the actual content. (Unlike MaxCLL, which is an explicit relationship.)
  Also move the parameter fixing to `fix_image_params`. I don't know if the avutil check is strictly necessary, but I've included it anyway to be on the safe side.
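  A minimal sketch of the side data lookup this describes (not the actual mpv code): prefer MaxCLL from the content light level metadata and fall back to the mastering display peak.

      #include <libavutil/frame.h>
      #include <libavutil/mastering_display_metadata.h>

      // Best-effort signal peak in cd/m^2, or 0 if no usable metadata is present.
      static double get_sig_peak(const AVFrame *frame)
      {
          AVFrameSideData *sd =
              av_frame_get_side_data(frame, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
          if (sd) {
              const AVContentLightMetadata *clm = (const void *)sd->data;
              if (clm->MaxCLL > 0)
                  return clm->MaxCLL;                 // explicit content peak
          }
          sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
          if (sd) {
              const AVMasteringDisplayMetadata *mdm = (const void *)sd->data;
              if (mdm->has_luminance)
                  return av_q2d(mdm->max_luminance);  // mastering peak as fallback
          }
          return 0;
      }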
* d3d: add support for new libavcodec hwaccel API | wm4 | 2017-06-08 | 1 | -0/+2
  Unfortunately quite a mess, in particular due to the need to have some compatibility with the old API. (The old API will be supported only in the short term.)
* cuda: add new way to set cuda context on cuvid codecs | wm4 | 2017-05-05 | 1 | -0/+5
  See FFmpeg commit c0f17a905f3588bf61ba6d86a83c6835d431ed3d.
* vd_lavc: add support for decoders which use AVCodecContext.hw_device_ctx | wm4 | 2017-05-03 | 1 | -3/+4
  These decoders can select the decoding device with hw_device_ctx, but don't use hw_frames_ctx (at least not in a meaningful way). Currently unused, but intended to be used for cuvid, as soon as it hits ffmpeg git master.
  Also make the vdpau and vaapi hwaccel definition structs static, as we have removed the old code which would have had clashing external declarations.
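  Roughly, such a decoder only needs a device reference attached before it is opened; a sketch of the libavutil hwcontext usage this refers to (CUDA is just an example device type here, not necessarily what mpv passes):

      #include <libavcodec/avcodec.h>
      #include <libavutil/hwcontext.h>

      // Create a hw device and hand the reference to the decoder context.
      // The codec context owns (and later unrefs) the reference assigned here.
      static int attach_hw_device(AVCodecContext *avctx)
      {
          AVBufferRef *dev = NULL;
          int ret = av_hwdevice_ctx_create(&dev, AV_HWDEVICE_TYPE_CUDA, NULL, NULL, 0);
          if (ret < 0)
              return ret;
          avctx->hw_device_ctx = dev;   // must be set before avcodec_open2()
          return 0;
      }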
* decode: fix extra surface count | wm4 | 2017-02-27 | 1 | -2/+1
  FFmpeg could crash with vaapi (new) and --vo=opengl + interpolation. It seems the actual surface count the old vaapi code uses (and which usually never exceeded the preallocated amount) was higher than what was used for the new vaapi code, so just correct that. The d3d helpers also had weird code that bumped the real pool size, fix them as well.
  Why this would result in an assertion failure instead of a proper error, who knows.
* vd_lavc, vaapi: move hw device creation to generic code | wm4 | 2017-02-20 | 1 | -3/+7
  hw_vaapi.c didn't do much interesting anymore. Other than the function to create a device for decoding with vaapi-copy, everything can be done by generic code. Other libavcodec hwaccels are planned to provide the same API as vaapi. It will be possible to drop the other hw_ files in the future. They will use this generic code instead.
* vd_lavc: move most vaapi hwaccel setup code to generic code | wm4 | 2017-02-16 | 1 | -0/+13
  Now hw_vaapi.c contains only the device setup, which could probably also be abstracted.
* vd_lavc: remove some leftover vaapi locking infrastructure | wm4 | 2017-02-16 | 1 | -3/+0
* vaapi: move AVHWFramesContext setup code to common code | wm4 | 2017-01-17 | 1 | -0/+4
  In a way it can be reused. For now, sw_format and initial_pool_size determination are still vaapi-specific. I'm hoping this can be eventually moved to libavcodec in some way.
  Checking the supported_formats array is not really vaapi-specific, and could be moved to the generic code path too, but for now it would make things more complex.
  hw_cuda.c can't use this, but hw_vdpau.c will in the following commit.
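  For reference, a minimal sketch of what AVHWFramesContext setup looks like with the libavutil API (illustrative values; the sw_format and pool size are exactly the vaapi-specific parts mentioned above):

      #include <libavutil/hwcontext.h>

      static AVBufferRef *create_frames_ctx(AVBufferRef *device_ref,
                                            int w, int h, int extra_surfaces)
      {
          AVBufferRef *ref = av_hwframe_ctx_alloc(device_ref);
          if (!ref)
              return NULL;
          AVHWFramesContext *fctx = (AVHWFramesContext *)ref->data;
          fctx->format    = AV_PIX_FMT_VAAPI;   // the opaque hw pixel format
          fctx->sw_format = AV_PIX_FMT_NV12;    // underlying surface format
          fctx->width     = w;
          fctx->height    = h;
          fctx->initial_pool_size = 4 + extra_surfaces;  // decoder refs + extra
          if (av_hwframe_ctx_init(ref) < 0) {
              av_buffer_unref(&ref);
              return NULL;
          }
          return ref;
      }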
* vaapi: handle image copying for vaapi-copy in common code | wm4 | 2017-01-12 | 1 | -0/+3
  Other hwdecs will also be able to use this (as soon as they are switched to use AVHWFramesContext). As an additional feature, failing to copy back the frame counts as hardware decoding failure and can trigger fallback. This can be done easily now, because it needs no way to communicate this from the hwaccel glue code to the common code.
  The old code is still required for the old decode API, until we either drop or rewrite it. vo_vaapi.c's OSD code (fuck...) also uses these surface functions to a higher degree.
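  The copy-back step itself boils down to an av_hwframe_transfer_data() call; a sketch under that assumption (the failure path is what feeds the fallback mentioned above):

      #include <libavutil/frame.h>
      #include <libavutil/hwcontext.h>

      // Download a hw surface into a new system-memory frame, or NULL on failure.
      static AVFrame *copy_back(AVFrame *hw_frame)
      {
          AVFrame *sw_frame = av_frame_alloc();
          if (!sw_frame)
              return NULL;
          if (av_hwframe_transfer_data(sw_frame, hw_frame, 0) < 0 ||
              av_frame_copy_props(sw_frame, hw_frame) < 0) {
              av_frame_free(&sw_frame);
              return NULL;   // treated like a hw decoding failure
          }
          return sw_frame;
      }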
* vaapi: support new libavcodec vaapi API | wm4 | 2017-01-11 | 1 | -0/+2
  The old API is deprecated, and libavcodec prints a warning at runtime. The new API is a bit nicer and does many things for you, such as managing the underlying hwaccel decoder. libavutil also provides code for managing surfaces (we use their surface pool).
  The new code does not contain any code from the original MPlayer VAAPI patch (that was used as base for some of the vaapi code in mpv). Thus the new code is LGPL.
  The new API actually does not add any visible symbols, so the only way to detect it is a version check. Of course, the versions overlap between FFmpeg and Libav, which requires additional care. The new API did not get merged into FFmpeg yet, so there's no check for FFmpeg.
* video: share hwdec extra surface count between backends | wm4 | 2017-01-11 | 1 | -0/+7
  It's a quite important and fragile magic number.
* vd_lavc: complicated improved fallback behavior for --hwdec=cuda | wm4 | 2017-01-10 | 1 | -1/+6
  The ffmpeg cuda wrappers need more than 1 packet for determining whether hw decoding will work. So do something complicated and keep up to 32 packets when trying to do hw decoding, and replay the packets on the software decoder if it doesn't work.
  This code was written in a delirious state; testing for regressions and determining whether this is worth the trouble will follow later. All mpv git users are alpha testers as of this moment.
  Fixes #3914.
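  A conceptual sketch of the replay queue described here (made-up names, not mpv's actual data structures): buffer the packets fed to the probing hw decoder so they can be re-fed to the software decoder on failure.

      #include <stdbool.h>
      #include <libavcodec/avcodec.h>

      #define MAX_REPLAY_PACKETS 32

      struct replay_queue {
          AVPacket *pkts[MAX_REPLAY_PACKETS];
          int num;
      };

      // Returns false once the queue is full; at that point replay (and thus
      // a clean fallback to software decoding) is no longer possible.
      static bool replay_queue_push(struct replay_queue *q, const AVPacket *pkt)
      {
          if (q->num >= MAX_REPLAY_PACKETS)
              return false;
          AVPacket *copy = av_packet_clone(pkt);
          if (!copy)
              return false;
          q->pkts[q->num++] = copy;
          return true;
      }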
* video: restructure decode loop | wm4 | 2017-01-10 | 1 | -0/+4
  Basically change everything. Why does the code get larger? No idea.
* vd_lavc: expose mastering display side data reference peak | Niklas Haas | 2016-07-03 | 1 | -1/+1
  This greatly improves the result when decoding typical (ST.2084) HDR content, since the job of tone mapping gets significantly easier when you're only mapping from 1000 to 250, rather than 10000 to 250. The difference is so drastic that we can now even reasonably use `hdr-tone-mapping=linear` and get a very perceptually uniform result that is only slightly darker than normal. (To compensate for the extra dynamic range.)
  Due to weird implementation details, this only seems to be present on keyframes (or something like that), so we have to cache the last seen value for the frames in between.
  Also, in some files the metadata is just completely broken / nonsensical, so I decided to apply a simple heuristic to detect completely broken metadata.
* vo_opengl: generalize HDR tone mapping mechanism | Niklas Haas | 2016-07-03 | 1 | -0/+3
  This involves multiple changes:
  1. Brightness metadata is split into nominal peak and signal peak. For a quick and dirty explanation: nominal peak is the brightest value that your color space can represent (i.e. the brightness of an encoded 1.0), and signal peak is the brightest value that actually occurs in the video (i.e. the brightest thing that's displayed).
  2. vo_opengl uses a new decision logic to figure out the right nom_peak and sig_peak for all situations. It also does a better job of picking the right target gamut/colorspace to use for the OSD. (Which still is and still should be treated as sRGB.) This change in logic also fixes #3293 en passant.
  3. Since it was growing rapidly, the logic for auto-guessing / inferring the right colorimetry configuration (in pass_colormanage) was split from the logic for actually performing the adaptation (now pass_color_map).
  Right now, the new logic doesn't do a whole lot since HDR metadata is still ignored (but not for long).
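  To illustrate the nominal-peak vs. signal-peak distinction with a rough worked example (not the actual vo_opengl shader code): with linear tone mapping, scaling by the signal peak maps e.g. 1000-nit content onto a 250-nit display, instead of wasting range on the 10000-nit nominal peak of ST.2084.

      // Luminance values normalized so 1.0 = SDR reference white.
      static float tone_map_linear(float v, float sig_peak, float dst_peak)
      {
          // Scale so the brightest value actually present in the video
          // (sig_peak) lands on the display peak, rather than dividing by
          // the far larger nominal peak of the transfer function.
          return v * (dst_peak / sig_peak);
      }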
* video: add --hwdec=auto-copy mode | wm4 | 2016-05-11 | 1 | -0/+2
  This uses the normal autoprobing rules like "auto", but rejects anything that isn't flagged as copying data back to system memory.
  The chunk in command.c was dead code, so remove it instead of updating it.
* video: refactor how VO exports hwdec device handles | wm4 | 2016-05-09 | 1 | -2/+2
  The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or documented where not), which makes the whole thing saner and cleaner. In particular, thread-safety rules become less subtle and more obvious.
  The new internal API makes it easier to support multiple OpenGL interop backends. (Although this is not done yet, and it's not clear whether it ever will.)
  This also removes all the API-specific fields from mp_hwdec_ctx and replaces them with a "ctx" field. For d3d in particular, we drop the mp_d3d_ctx struct completely, and pass the interfaces directly.
  Remove the emulation checks from vaapi.c and vdpau.c; they are pointless, and the checks that matter are done on the VO layer.
  The d3d hardware decoders might slightly change behavior: dxva2-copy will not use the VO device anymore if the VO supports proper interop. This pretty much assumes that in such cases the VO will not use any form of exclusive mode, which makes using the VO device in copy mode unnecessary.
  This is a big refactor. Some things may be untested and could be broken.
* vd_lavc: better hwdec wrapper decoder selection | wm4 | 2016-04-25 | 1 | -0/+6
  This is intended for cases when --hwdec needs to override the decoder implementation in use, like for example on the RPI. It does two things:
  1. Allow the hwdec to indicate a decoder suffix. libavcodec by convention adds a suffix to all wrapper decoders, and here we start relying on it. While not necessarily the best idea, it's the only thing we got. libavcodec's hwaccel list is useless, because it only has the codec ID, not the associated decoder's name.
  2. Make --hwdec=auto work properly. It shouldn't fail anymore, and hwdec probing should reliably work, even if a different decoder is selected with --vd. The semantics of --hwdec should dictate that it overrides the default decoder.
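  The suffix convention means the wrapper decoder name is just the codec name plus the hwdec's suffix; a minimal sketch of that lookup (assumed helper name, not the mpv code):

      #include <stdio.h>
      #include <libavcodec/avcodec.h>

      // E.g. find_wrapper_decoder("h264", "mmal") looks up "h264_mmal".
      static const AVCodec *find_wrapper_decoder(const char *codec, const char *suffix)
      {
          char name[128];
          snprintf(name, sizeof(name), "%s_%s", codec, suffix);
          return avcodec_find_decoder_by_name(name);   // NULL if not compiled in
      }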
* vd_lavc: let hardware decoder request delaying frames explicitly | wm4 | 2016-04-07 | 1 | -0/+5
  Until now, the presence of the process_image() callback was used to set a delay queue with a hardcoded size. Change this to a vd_lavc_hwdec field instead, so the decoder can explicitly set this if it's really needed. Do this so process_image() can be used in the VideoToolbox glue code for something entirely unrelated.
* vd_lavc: fix codec vs. decoder confusion | wm4 | 2016-04-07 | 1 | -2/+2
  Some functions which expected a codec name (i.e. the name of the video format itself) were passed a decoder name. Most "native" libavcodec decoders have the same name as the codec, so this was never an issue.
  This should mean that e.g. using "--vd=lavc:h264_mmal --hwdec=mmal" should now actually enable native surface mode (instead of doing copy-back).
* vd_lavc: remove redundant best_csp field | wm4 | 2016-02-03 | 1 | -1/+0
  And some other simplifications.
* vd_lavc: force microsecond timestamps on RPI | wm4 | 2016-02-03 | 1 | -0/+1
  Avoids "problems". In particular, it makes MMAL output a NOPTS timestamp if the input timestamp was NOPTS. Don't do it for other decoders. Ideally, we will at some point in the future switch to integer fractions for timestamps at least up until the filter layer. But this would be a larger change, and for now I'd prefer keeping the not-rounded demuxer timestamps (if we have them).
* vd_lavc: simplify an aspect of hwdec fallback | wm4 | 2016-01-29 | 1 | -1/+1
  Don't give the "software_fallback_decoder" field special meaning. Always set it, and rename it to "decoder". Whether hw decoding is used is determined by the "hwdec" field already.
* vd_lavc: delay images before reading them back | wm4 | 2016-01-25 | 1 | -0/+6
  Facilitates hardware pipelining in particular with nvidia/dxva.
* vd_lavc: be more careful with flushing the decoder | wm4 | 2015-11-10 | 1 | -0/+1
  Until now, we've relied on the following things:
  - you can send flush packets to the decoder even if it's fully flushed,
  - you can send new packets to a flushed decoder,
  - you can send new packets to a partially flushed decoder.
  ("flushing" refers to sending flush packets to the decoder until the decoder does not return new pictures, not avcodec_flush_buffers().)
  All of these are questionable. The libavcodec API probably doesn't guarantee that these work well or at all, even though most decoders have no issue with these. But especially with hardware decoding wrappers (like MMAL), real problems can be expected. Isolate us from these corner cases by handling them explicitly.
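  "Flushing" in the sense used above, sketched with the decode API of that era (avcodec_decode_video2, since removed from FFmpeg): keep feeding empty packets until the decoder stops returning pictures.

      #include <libavcodec/avcodec.h>

      static void flush_decoder(AVCodecContext *avctx, AVFrame *frame)
      {
          AVPacket pkt;
          av_init_packet(&pkt);
          pkt.data = NULL;          // an empty packet acts as a flush packet
          pkt.size = 0;
          int got_frame = 1;
          while (got_frame) {
              if (avcodec_decode_video2(avctx, frame, &got_frame, &pkt) < 0)
                  break;
              if (got_frame)
                  av_frame_unref(frame);   // a real caller would use the frame here
          }
      }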
* rpi: add support for codecs other than h264 | wm4 | 2015-11-05 | 1 | -1/+1
  FFmpeg now supports h264 and mpeg2. At least vc-1 will probably follow.
* vd_lavc: make hwdec fallback more tolerant | wm4 | 2015-11-03 | 1 | -0/+1
  A hw decoder might fail to decode a frame for multiple reasons, and not always just because decoding is impossible. We can't generally distinguish these reasons well. Make it more tolerant by accepting failures of 3 frames, but not more. The threshold can be adjusted by the repurposed --vd-lavc-software-fallback option.
  (This behavior was suggested much earlier in some PR, but at the time the "proper" hwdec fallback was indistinguishable from decoding errors. With the current situation, the "proper" fallback is still instantaneous.)
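  The tolerance logic amounts to a simple failure counter compared against the option value (a sketch with made-up names, not mpv's actual fields):

      #include <stdbool.h>

      // Call once per decoded (or failed) hw frame; returns true when the
      // software fallback should be triggered.
      static bool register_hw_result(int *fail_count, bool failed, int threshold)
      {
          if (failed)
              (*fail_count)++;
          return *fail_count >= threshold;   // e.g. threshold = 3 by default
      }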
* vd_lavc: better hwdec log output | wm4 | 2015-09-02 | 1 | -0/+1
  Often, we don't know whether hardware decoding will work until we've tried. (This used to be different, but API changes and improvements in libavcodec led to this situation.) We will often output that we're going to use hardware decoding, and then print a fallback warning. Instead, print the status once we have decoded a frame.
  Some of the old messages are turned into verbose messages, which should be helpful for debugging. Also add some new ones.
* vd_lavc: remove unneeded hwdec parameters | wm4 | 2015-08-19 | 1 | -3/+2
  All hwdec backends now use a single pixel format, and the format is always checked. Also, the init_decoder callback is now mandatory.
* vdpau: add support for the "new" libavcodec vdpau API | wm4 | 2015-05-28 | 1 | -0/+2
  Yet another of these dozens of hwaccel changes. This time, libavcodec provides utility functions, which initialize the vdpau decoder and map codec profiles. So a lot of work the API user had to do falls away. This also will give us support for high bit depth profiles, and possibly HEVC once libavcodec supports it.
* vd_lavc: make hardware decoding fallback less violent | wm4 | 2015-05-28 | 1 | -0/+1
  Most of hardware decoding is initialized lazily. When the first packet is parsed, libavcodec will call get_format() to check whether hw or sw decoding is wanted. Until now, we've returned AV_PIX_FMT_NONE from get_format() if hw decoder initialization failed. This caused the avcodec_decode_video2() call to fail, which in turn let us trigger the fallback. We didn't return a sw format from get_format(), because we didn't want to continue decoding at all. (The reason being that full reinitialization is more robust when continuing sw decoding.)
  This has some disadvantages. libavcodec vomited some unwanted error messages. Sometimes the failures are more severe, like it happened with HEVC. In this case, the error code path simply acted up in a way that was extremely inconvenient (and had to be fixed by myself). In general, libavcodec is not designed to fall back this way.
  Make it a bit less violent from the API usage point of view. Return a sw format if hw decoder initialization fails. In this case, we let get_buffer2() call avcodec_default_get_buffer2() as well. libavcodec is allowed to perform its own sw fallback. But once the decode function returns, we do the full reinitialization we wanted to do.
  The result is that the fallback is more robust, and doesn't trigger any decoder error codepaths or messages either. Change our own fallback message to a warning, since there are no other messages with error severity anymore.
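  A simplified sketch of the get_format() behavior described here (init_hw_decoder() is a hypothetical helper; vaapi is just an example hw format): on hw init failure, fall through to a software format instead of returning AV_PIX_FMT_NONE.

      #include <libavcodec/avcodec.h>

      int init_hw_decoder(AVCodecContext *avctx);   // hypothetical helper

      static enum AVPixelFormat my_get_format(AVCodecContext *avctx,
                                              const enum AVPixelFormat *fmts)
      {
          for (int i = 0; fmts[i] != AV_PIX_FMT_NONE; i++) {
              if (fmts[i] == AV_PIX_FMT_VAAPI) {
                  if (init_hw_decoder(avctx) >= 0)
                      return fmts[i];       // hw decoding can proceed
                  break;                    // hw init failed: pick a sw format
              }
          }
          return avcodec_default_get_format(avctx, fmts);
      }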
* vd_lavc: report actually used hwdec API | wm4 | 2015-05-25 | 1 | -1/+0
  Instead of the requested one, which can be just "auto", and which is rather useless.
* RPI support | wm4 | 2015-03-29 | 1 | -0/+2
  This requires FFmpeg git master for accelerated hardware decoding. Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav will also work.
  Most things work. Screenshots don't work with accelerated/opaque decoding (except using full window screenshot mode). Subtitles are very slow - even simple but huge overlays can cause frame drops.
  This always uses fullscreen mode. It uses dispmanx and mmal directly, and there are no window managers or anything on this level.
  vo_opengl also kind of works, but is pretty useless and slow. It can't use opaque hardware decoding (copy back can be used by forcing the option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend is preferred over the X11 ones in case you're trying on X11; but X11 is even more useless on RPI.
  This doesn't correctly reject extended h264 profiles and thus doesn't fall back to software decoding. The hw supports only up to the high profile, and will e.g. return garbage for Hi10P video.
  This sets a precedent of enabling hw decoding by default, but only if RPI support is compiled (and hopefully it will stay disabled on desktop Linux platforms). While it's more or less required to use hw decoding on the weak RPI, it causes more problems than it solves on real platforms (Linux has the Intel GPU problem, OSX still has some cases with broken decoding). So I can live with this compromise of having different defaults depending on the platform.
  Raspberry Pi 2 is required. This wasn't tested on the original RPI, though at least decoding itself seems to work (but full playback was not tested).
* command: add property returning detected hwdec API | wm4 | 2015-02-02 | 1 | -12/+0
  This is somewhat imperfect, because detection of hw decoding APIs is mostly done on demand, and often avoided if not necessary. (For example, we know very well that there are no hw decoders for certain codecs.)
  This also requires every hwdec backend to identify itself (see hwdec.h changes).
* video: initial dxva2 support | wm4 | 2014-10-25 | 1 | -0/+1
  Shamelessly stolen from ffmpeg. It probably doesn't work - you can debug it yourself.
* vaapi: try dealing with Intel's braindamaged shit drivers | wm4 | 2014-08-21 | 1 | -0/+3
  So talking to a certain Intel dev, it sounded like modern VA-API drivers are reasonably thread-safe. But apparently that is not the case. Not at all. So add approximate locking around all vaapi API calls.
  The problem appeared once we moved decoding and display to different threads. That means the "vaapi-copy" mode was unaffected, but decoding with vo_vaapi or vo_opengl led to random crashes.
  Untested on real Intel hardware. With the vdpau emulation, it seems to work fine - but actually it worked fine even before this commit, because vdpau was written and designed not by morons, but competent people (vdpau is guaranteed to be fully thread-safe).
  There is some probability that this commit doesn't fix things entirely. One problem is that locking might not be complete. For one, libavcodec _also_ accesses vaapi, so we have to rely on our own guesses how and when lavc uses vaapi (since we disable multithreading when doing hw decoding, our guess should be relatively good, but it's still a lavc implementation detail). One other reason that this commit might not help is Intel's amazing potential to fuckup anything that is good and holy.
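  "Approximate locking around all vaapi API calls" basically means wrapping every libva entry point the player calls in one global mutex; a minimal sketch of that idea (wrapper name is made up):

      #include <pthread.h>
      #include <va/va.h>

      static pthread_mutex_t va_lock = PTHREAD_MUTEX_INITIALIZER;

      // Every vaFoo() call gets a locked wrapper along these lines.
      static VAStatus locked_vaSyncSurface(VADisplay dpy, VASurfaceID surface)
      {
          pthread_mutex_lock(&va_lock);
          VAStatus res = vaSyncSurface(dpy, surface);
          pthread_mutex_unlock(&va_lock);
          return res;
      }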
* video: warn if an emulated hwdec API is used | wm4 | 2014-05-28 | 1 | -0/+1
  mpv supports two hardware decoding APIs on Linux: vdpau and vaapi. Each of these has emulation wrappers. The wrappers are usually slower and have fewer features than their native opposites. In particular the libva vdpau driver is practically unmaintained.
  Check the vendor string and print a warning if emulation is detected. Checking vendor strings is a very stupid thing to do, but I find the thought of people using an emulated API for no reason worse.
  Also, make --hwdec=auto never use an API that is detected as emulated. This doesn't work quite right yet, because once one API is loaded, vo_opengl doesn't unload it, so no hardware decoding will be used if the first probed API (usually vdpau) is rejected. But good enough.
* video: add a "hwdec" property to enable or disable hw decoding at runtime | wm4 | 2014-04-23 | 1 | -0/+1
* vd_lavc: reinit hwdec on profile changes | wm4 | 2014-03-17 | 1 | -0/+1
  Needed in theory. I don't know if there are even any real-world files which change the profile mid-stream.
* vd_lavc: remove unused field | wm4 | 2014-03-16 | 1 | -1/+0
* vd_lavc: remove compatibility crap | wm4 | 2014-03-16 | 1 | -25/+3
  All this code was needed for compatibility with very old libavcodec versions only (such as Libav 9). Includes some now-possible simplifications too.
* vd_lavc: ridiculous workaround for Libav 9 compatibility | wm4 | 2014-03-16 | 1 | -0/+1
  This "sometimes" crashed when seeking. The fault apparently lies in libavcodec: the decoder returns an unreferenced frame! This is completely insane, but somehow I'm apparently still expected to work around it. As a reaction, I will drop Libav 9 support in the next commit. (While this commit will go into release/0.3.)
* video: initialize hw decoder in get_format | wm4 | 2014-03-10 | 1 | -0/+7
  Apparently the "right" place to initialize the hardware decoder is in the libavcodec get_format callback.
  This doesn't change vda.c and vdpau_old.c, because I don't have OSX, and vdpau_old.c is probably going to be removed soon (if Libav ever manages to release Libav 10). So for now the init_decoder callback added with this commit is optional.
  This also means vdpau.c and vaapi.c don't have to manage and check the image parameters anymore.
  This change is probably needed for when libavcodec VDA support gets a new iteration of its API.
* video/decode: mp_msg conversions | wm4 | 2013-12-21 | 1 | -0/+1
  Doesn't cover vdpau/vaapi parts yet, because these are a bit messier.
* Reduce recursive config.h inclusions in headers | wm4 | 2013-12-18 | 1 | -2/+0