path: root/video/decode
Commit log (newest first):
* video: use demuxer-signaled duration for last video frame
  wm4, 2016-12-21 (1 file, +5/-0)
  Helps with gif, probably does unwanted things with other formats. This
  doesn't handle --end quite correctly, but this could be added later.
  Fixes #3924.
* ad_lavc, vd_lavc: don't set AVCodecContext.refcounted_frames
  wm4, 2016-12-18 (1 file, +0/-1)
  This field is (or should be) deprecated, and there's no need to set it
  with the new API.
* Remove compatibility things
  wm4, 2016-12-07 (1 file, +1/-8)
  Possible with bumped FFmpeg/Libav. These are just the simple cases.
* vdpau: fix vaapi probing if libvdpau-va-gl1 is present
  wm4, 2016-12-02 (1 file, +7/-5)
  Needs explicit logic. Fixes a pretty bad regression which prefers
  vdpau-copy over native vaapi with direct rendering (with --hwdec=auto)
  if libvdpau-va-gl1 is present. The reason is that vdpau-copy is above
  vaapi, simply because all vdpau hwdecs are grouped and happened to be
  listed before vaapi. Although this is not that bad for copy-mode
  (unlike the case described above), it's still a good idea to use our
  native vaapi code instead.
* vo_opengl: hwdec_cuda: Use dynamic loading for cuda functions
  Philip Langdale, 2016-11-23 (1 file, +6/-24)
  This change applies the pattern used in ffmpeg to dynamically load
  cuda, to avoid requiring the CUDA SDK at build time.
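
  A minimal sketch of the dynamic-loading pattern this describes,
  assuming a dlopen-based loader; the library name and the single
  resolved symbol are illustrative (the real loader resolves many more
  entry points and handles errors per symbol):

      #include <dlfcn.h>

      typedef int (*cuInit_fn)(unsigned int flags);
      static cuInit_fn p_cuInit;

      /* Resolve a CUDA driver entry point at runtime instead of
       * linking against the CUDA SDK at build time. */
      static int load_cuda_symbols(void)
      {
          void *lib = dlopen("libcuda.so.1", RTLD_NOW);
          if (!lib)
              return -1;
          p_cuInit = (cuInit_fn)dlsym(lib, "cuInit");
          return p_cuInit ? 0 : -1;
      }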
* vo_opengl: hwdec_cuda: Support P016 output surfaces
  Philip Langdale, 2016-11-22 (1 file, +2/-1)
  The latest 375.xx nvidia drivers add support for P016 output surfaces.
  In combination with an ffmpeg change to return those surfaces, we can
  display them. The bulk of the work is related to knowing which format
  you're dealing with at the right time. Once you know, it's
  straightforward.
* d3d11va: unconditionally load D3D DLLs
  James Ross-Gowan, 2016-11-23 (1 file, +5/-1)
  At least with Nvidia drivers, some thread tries to access D3D11
  objects after ANGLE unloads d3d11.dll. Fix this by holding a reference
  to d3d11.dll ourselves. Might fix the crash in #3348. (I wish I knew
  why though.)
* vdpau: fix hwdec uninit
  wm4, 2016-11-10 (2 files, +2/-1)
  This is a bit unintuitive, but it appears hwdec backends have to unset
  hwdec_priv manually in their uninit function. Normally with this idiom
  you'd expect the common code to do this (and maybe even freeing the
  priv struct). Since other hwdec backends do this quite consistently,
  just fix vdpau for now. Also add an assert to detect similar bugs
  sooner. Fixes #3788.
* dec_video: don't spam missing PTS warnings
  wm4, 2016-11-09 (2 files, +11/-2)
  Only print at most 2. Just because with some decoders, we will always
  hit this code path, such as playing avi or vfw-muxed mkv on RPI.
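
  A rough sketch of the cap described here (MP_WARN is mpv's logging
  macro; the counter field and message text are illustrative):

      /* Warn only the first two times a frame arrives without a PTS. */
      if (d_video->num_missing_pts_warnings++ < 2)
          MP_WARN(d_video, "No video PTS from decoder.\n");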
* dec_video, dec_audio: avoid full reinit on switches to the same segment
  wm4, 2016-11-09 (1 file, +9/-6)
  Same deal as with the previous commit. (Unfortunately, this code is
  still duplicated.)
* demux: expose demuxer colorimetry metadata to player
  Niklas Haas, 2016-11-08 (1 file, +1/-0)
  Implementation-wise, the values from the demuxer/codec header are
  merged with the values from the decoder such that the former are used
  only where the latter are unknown (0/auto).
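
  A hedged sketch of that merge rule, using the mp_colorspace fields mpv
  had at the time (the helper name is illustrative): container values
  only fill fields the decoder left at their auto values.

      static void merge_container_color(struct mp_colorspace *dec,
                                        const struct mp_colorspace *hdr)
      {
          /* Only fill in what the decoder reported as unknown/auto. */
          if (dec->space == MP_CSP_AUTO)
              dec->space = hdr->space;
          if (dec->levels == MP_CSP_LEVELS_AUTO)
              dec->levels = hdr->levels;
          if (dec->primaries == MP_CSP_PRIM_AUTO)
              dec->primaries = hdr->primaries;
          if (dec->gamma == MP_CSP_TRC_AUTO)
              dec->gamma = hdr->gamma;
      }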
* video: add --hwdec=vdpau-copy mode
  wm4, 2016-10-20 (2 files, +76/-1)
  At this point, all other hwaccels provide -copy modes, and vdpau is
  the exception with not having one. Although there is vf_vdpaurb, it's
  less convenient in certain situations, and exposes some issues with
  the filter chain code as well.
* ad_lavc, vd_lavc: fix a recent libavcodec deprecation warning
  wm4, 2016-10-17 (1 file, +1/-1)
  Both AVFrame.pts and AVFrame.pkt_pts have existed for a long time.
  Until now, decoders always returned the pts via the pkt_pts field,
  while the pts field was used for encoding and libavfilter only.
  Recently, pkt_pts was deprecated, and pts was switched to always carry
  the pts.

  This means we have to be careful not to accidentally use the wrong
  field, depending on the libavcodec version. We have to explicitly
  check the version numbers. Of course the version numbers are
  completely idiotic, because idiotically the pkg-config and library
  names are the same for FFmpeg and Libav, so we have to deal with this
  explicitly as well.
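
  The version-check dance looks roughly like this; the micro version
  distinguishes FFmpeg (>= 100) from Libav, but the exact version
  cutoffs below are assumptions for illustration, not the precise ones
  mpv uses:

      #include <libavcodec/avcodec.h>

      /* FFmpeg sets the micro version to >= 100; Libav does not. */
      #if LIBAVCODEC_VERSION_MICRO >= 100
      #define HAVE_NEW_PTS \
          (LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(57, 61, 100))
      #else
      #define HAVE_NEW_PTS \
          (LIBAVCODEC_VERSION_INT >= AV_VERSION_INT(57, 24, 0))
      #endif

      static int64_t decoded_frame_pts(const AVFrame *frame)
      {
      #if HAVE_NEW_PTS
          return frame->pts;      /* new API: decoders fill pts directly */
      #else
          return frame->pkt_pts;  /* old API: decoded pts is in pkt_pts */
      #endif
      }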
* vd_lavc: Add hwdec wrapper for crystalhd
  Philip Langdale, 2016-10-15 (1 file, +7/-0)
  This hardware decodes to system memory so it only requires a wrapper.
* vaapi: support drm devices when running in vaapi-copy mode
  Bernhard Frauendienst, 2016-10-02 (1 file, +53/-0)
  When the vaapi decoder is used in copy mode, it creates a dummy
  display to render to. In theory, this should support hardware decoding
  on a separate GPU that is not actually connected to any output (like
  an iGPU which supports more formats than the external GPU to which the
  monitor is connected).

  However, before this change, only X11 displays were supported as dummy
  displays. This caused some graphics drivers (namely intel-driver) to
  core dump when they were not actually used as X11 module. This change
  introduces support for drm libva displays, which allows vaapi-copy to
  run on such cards which are not actually rendering the X11 output.
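
  A minimal sketch of opening such a DRM-backed VADisplay (the render
  node path is illustrative; real code would probe candidate nodes and
  check every return value):

      #include <fcntl.h>
      #include <va/va_drm.h>

      /* Bring up a libva display on a render node -- no X11 involved. */
      int fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
      VADisplay display = vaGetDisplayDRM(fd);
      int major, minor;
      if (vaInitialize(display, &major, &minor) != VA_STATUS_SUCCESS) {
          /* fall back to an X11 dummy display */
      }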
* vd_lavc: log if hw decoding selects a different underlying decoder
  wm4, 2016-09-30 (1 file, +3/-0)
  Less confusing to see what's going on. I think there was more than one
  user who got tricked by this, including myself.
* rpi: add --hwdec=rpi-copy
  wm4, 2016-09-30 (1 file, +6/-0)
  This means it can be used with normal video filters. Might help out
  with #3604.
* cuda: initialize hwframes format
  Philip Langdale, 2016-09-28 (1 file, +4/-0)
  In retrospect, this seems obvious, but ffmpeg didn't complain until a
  recent update.
* win32: build with -DINITGUID
  James Ross-Gowan, 2016-09-28 (3 files, +2/-4)
  We always want to use __declspec(selectany) to declare GUIDs, but
  manually including <initguid.h> in every file that used GUIDs was
  error-prone. Since all <initguid.h> does is define INITGUID and
  include <guiddef.h>, we can remove all references to <initguid.h> and
  just compile with -DINITGUID to get the same effect.

  Also, this partially reverts 622bcb0 by re-adding libuuid.a to the
  build, since apparently some GUIDs (such as GUID_NULL) are not
  declared in the source file, even when INITGUID is set.
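
  To illustrate the mechanism (the interface name and GUID value below
  are made up): with INITGUID defined project-wide, DEFINE_GUID expands
  to an actual definition rather than an extern declaration.

      /* Compiled with -DINITGUID, DEFINE_GUID emits the GUID's storage
       * (as __declspec(selectany) on MSVC-style toolchains) instead of
       * an extern declaration, so <initguid.h> is never needed. */
      #include <guiddef.h>

      DEFINE_GUID(IID_ExampleInterface, /* hypothetical GUID */
                  0x12345678, 0x9abc, 0xdef0,
                  0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0);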
* command: add a video-dec-params property
  wm4, 2016-09-20 (2 files, +8/-1)
  This is the actual decoder output, with no overrides applied. (Maybe
  video-params shouldn't contain the overrides in the first place, but
  damage done.)
* video: handle override video parameters in a better place
  wm4, 2016-09-20 (2 files, +7/-8)
  This really shouldn't be in vd_lavc.c - move it to dec_video.c, where
  it also applies aspect overrides. This puts all overrides in one
  place. The previous commit contains some required changes for
  resetting the image parameters change detection (i.e. it's not done
  only on video aspect override changes).
* command: change update handling of some video-related properties
  wm4, 2016-09-20 (2 files, +2/-2)
  Use the new mechanism, instead of wrapped properties. As usual, extend
  the update handling to some options that were forgotten/neglected
  before. Rename video_reset_aspect() to video_reset_params() to make it
  more "general" (and we can amazingly include write access to
  video-aspect as well in this).
* hwdec_cuda: Rename config variable to be more consistent
  Philip Langdale, 2016-09-16 (1 file, +2/-2)
  'cuda-gl' isn't right - you can turn this on without any GL and get
  some non-zero benefit (with the cuda-copy hwaccel). So 'cuda-hwaccel'
  seems more consistent with everything else.
* hwdec_cuda: Add trivial cuda-copy wrapper
  Philip Langdale, 2016-09-11 (1 file, +9/-0)
  The cuvid decoder already knows how to copy back to system memory if
  NV12 frames are requested, and this will happen if the decoder is used
  without the hwdec. For convenience, let's add a wrapper hwdec so
  people don't have to explicitly pick the cuvid decoder if they want
  this behaviour.
* hwdec/opengl: Add support for CUDA and cuvid/NvDecode
  Philip Langdale, 2016-09-08 (2 files, +130/-0)
  Nvidia's "NvDecode" API (up until recently called "cuvid") is a
  cross-platform, but nvidia-proprietary API that exposes their hardware
  video decoding capabilities. It is analogous to their DXVA or VDPAU
  support on Windows or Linux but without using platform specific API
  calls.

  As a rule, you'd rather use DXVA or VDPAU as these are more mature and
  well supported APIs, but on Linux, VDPAU is falling behind the
  hardware capabilities, and there's no sign that nvidia are making the
  investments to update it. Most concretely, this means that there is no
  VP8/9 or HEVC Main10 support in VDPAU. On the other hand, NvDecode
  does export vp8/9 and partial support for HEVC Main10 (more on that
  below).

  ffmpeg already has support in the form of the "cuvid" family of
  decoders. Due to the design of the API, it is best exposed as a full
  decoder rather than an hwaccel. As such, there are decoders like
  h264_cuvid, hevc_cuvid, etc.

  These decoders support two output paths today - in both cases, NV12
  frames are returned, either in CUDA device memory or regular system
  memory. In the case of the system memory path, the decoders can be
  used as-is in mpv today with a command line like:

      mpv --vd=lavc:h264_cuvid foobar.mp4

  Doing this will take advantage of hardware decoding, but the cost of
  the memcpy to system memory adds up, especially for high resolution
  video (4K etc).

  To avoid that, we need an hwdec that takes advantage of CUDA's OpenGL
  interop to copy from device memory into OpenGL textures. That is what
  this change implements.

  The process is relatively simple as only basic device context
  acquisition needs to be done by us - the CUDA buffer pool is managed
  by the decoder - thankfully.

  The hwdec looks a bit like the vdpau interop one - the hwdec maintains
  a single set of plane textures and each output frame is repeatedly
  mapped into these textures to pass on. The frames are always in NV12
  format, at least until 10-bit output support emerges.

  The only slightly interesting part of the copying process is that CUDA
  works by associating PBOs, so we need to define these for each of the
  textures.

  TODO Items:
  * I need to add a download_image function for screenshots. This would
    do the same copy to system memory that the decoder's system memory
    output does.
  * There are items to investigate on the ffmpeg side. There appears to
    be a problem with timestamps for some content.

  Final note: I mentioned HEVC Main10. While there is no 10-bit output
  support, NvDecode can return dithered 8-bit NV12 so you can take
  advantage of the hardware acceleration. This particular mode requires
  compiling ffmpeg with a modified header (or possibly the CUDA 8 RC)
  and is not upstream in ffmpeg yet.

  Usage: You will need to specify vo=opengl and hwdec=cuda. Note that
  hwdec=auto will probably not work as it will try to use vdpau first.

      mpv --hwdec=cuda --vo=opengl foobar.mp4

  If you want to use filters that require frames in system memory, just
  use the decoder directly without the hwdec, as documented above.
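
  A condensed sketch of the PBO association the message mentions, using
  the CUDA driver API's graphics-interop calls; variable names and the
  surrounding glue are illustrative, and error handling is omitted:

      /* Register one plane's GL pixel buffer object with CUDA, map it,
       * and copy the decoder's device-memory plane into it. */
      CUgraphicsResource res;
      CUdeviceptr dst;
      size_t dst_size;

      cuGraphicsGLRegisterBuffer(&res, plane_pbo,
                                 CU_GRAPHICS_REGISTER_FLAGS_WRITE_DISCARD);
      cuGraphicsMapResources(1, &res, 0);
      cuGraphicsResourceGetMappedPointer(&dst, &dst_size, res);
      /* cuMemcpy2D() from the frame's device plane into dst here. */
      cuGraphicsUnmapResources(1, &res, 0);
      /* Afterwards the PBO contents are uploaded to the plane texture. */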
* vd_lavc: always force milliseconds for MMAL
  wm4, 2016-08-29 (1 file, +5/-0)
  This libavcodec wrapper should rescale the API timestamps to whatever
  it internally needs, but it doesn't yet. So restore this code.
* vd_lavc, ad_lavc: set pkt_timebase, not time_base
  wm4, 2016-08-29 (1 file, +4/-1)
  These are different AVCodecContext fields. pkt_timebase is the correct
  one for identifying the unit of packet/frame timestamps when decoding,
  while time_base is for encoding. Some decoders also overwrite the
  time_base field with some unrelated codec metadata.

  pkt_timebase does not exist in Libav, so an #if is required.
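
  Roughly, with the usual FFmpeg-vs-Libav micro-version check (the
  timebase value shown is illustrative):

      /* pkt_timebase tells the decoder the unit of incoming packet
       * timestamps; time_base is only meaningful for encoding. */
      #if LIBAVCODEC_VERSION_MICRO >= 100 /* FFmpeg only */
          avctx->pkt_timebase = (AVRational){ .num = 1, .den = 1000000 };
      #endif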
* vd_lavc: minor simplification
  wm4, 2016-08-23 (1 file, +1/-3)
  The timebase is now always valid.
* vd_lavc: remove unnecessary initialization
  wm4, 2016-08-19 (1 file, +0/-1)
  This is already the default value.
* video/audio: always provide "proper" timestamps to libavcodec
  wm4, 2016-08-19 (1 file, +1/-4)
  Instead of passing through double float timestamps opaquely, pass real
  timestamps. Do so by always setting a valid timebase on the
  AVCodecContext for audio and video decoding.

  Specifically try not to round timestamps to a too coarse timebase,
  which could round off small adjustments to timestamps (such as for
  start time rebasing or demux_timeline). If the timebase is considered
  too coarse, make it finer.

  This gets rid of the need to do this specifically for some hardware
  decoding wrapper.

  The old method of passing through double timestamps was also a bit
  questionable. While libavcodec is not supposed to interpret timestamps
  at all if no timebase is provided, it was needlessly tricky. Also, it
  actually does compare them with AV_NOPTS_VALUE. This change will
  probably also reduce confusion in the future.
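
  A sketch of the "make it finer" idea under stated assumptions: the
  threshold and doubling strategy below are illustrative, not mpv's
  exact heuristic.

      #include <limits.h>
      #include <libavutil/rational.h>

      /* Refine a container timebase until one tick is small enough that
       * sub-frame pts adjustments survive double -> integer conversion. */
      static AVRational refine_timebase(AVRational tb)
      {
          if (tb.num <= 0 || tb.den <= 0)
              return (AVRational){1, 1000000}; /* fallback: microseconds */
          while (av_q2d(tb) > 1e-5 && tb.den < INT_MAX / 2)
              tb.den *= 2;
          return tb;
      }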
* video: change hw_subfmt meaning
  wm4, 2016-07-15 (1 file, +15/-17)
  The hw_subfmt field roughly corresponds to the field
  AVHWFramesContext.sw_format in ffmpeg. The ffmpeg one is of the type
  AVPixelFormat (instead of the underlying hardware format), so it's a
  good idea to switch to this too for preparation. Now the hw_subfmt
  field is an mp_imgfmt instead of an opaque/API-specific number. VDPAU
  and Direct3D11 already used mp_imgfmt, but Videotoolbox and VAAPI had
  to be switched.

  One somewhat user-visible change is that the verbose log will now
  always show the hw_subfmt as image format, instead of as a
  nonsensical number.

  (In the end it would be good if we could switch to AVHWFramesContext
  completely, but the upstream API is incomplete and doesn't cover
  Direct3D11 and Videotoolbox.)
* videotoolbox: add --hwdec=videotoolbox-copy for h/w accelerated
  decoding with video filters
  Aman Gupta, 2016-07-15 (2 files, +115/-9)
* vd_lavc: expose mastering display side data reference peak
  Niklas Haas, 2016-07-03 (2 files, +26/-1)
  This greatly improves the result when decoding typical (ST.2084) HDR
  content, since the job of tone mapping gets significantly easier when
  you're only mapping from 1000 to 250, rather than 10000 to 250.

  The difference is so drastic that we can now even reasonably use
  `hdr-tone-mapping=linear` and get a very perceptually uniform result
  that is only slightly darker than normal. (To compensate for the extra
  dynamic range.)

  Due to weird implementation details, this only seems to be present on
  keyframes (or something like that), so we have to cache the last seen
  value for the frames in between.

  Also, in some files the metadata is just completely broken /
  nonsensical, so I decided to apply a simple heuristic to detect
  completely broken metadata.
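
  Reading that side data off a decoded frame looks roughly like this;
  the cached field and the sanity bounds are illustrative:

      #include <libavutil/frame.h>
      #include <libavutil/mastering_display_metadata.h>

      AVFrameSideData *sd = av_frame_get_side_data(
          frame, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
      if (sd) {
          AVMasteringDisplayMetadata *mdm = (void *)sd->data;
          double peak = mdm->has_luminance ? av_q2d(mdm->max_luminance) : 0;
          /* Cache for non-keyframes; reject obviously broken values. */
          if (peak >= 10 && peak <= 100000)
              ctx->cached_sig_peak = peak;
      }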
* vo_opengl: generalize HDR tone mapping mechanism
  Niklas Haas, 2016-07-03 (1 file, +3/-0)
  This involves multiple changes:

  1. Brightness metadata is split into nominal peak and signal peak. For
     a quick and dirty explanation: nominal peak is the brightest value
     that your color space can represent (i.e. the brightness of an
     encoded 1.0), and signal peak is the brightest value that actually
     occurs in the video (i.e. the brightest thing that's displayed).

  2. vo_opengl uses a new decision logic to figure out the right
     nom_peak and sig_peak for all situations. It also does a better job
     of picking the right target gamut/colorspace to use for the OSD.
     (Which still is and still should be treated as sRGB.) This change
     in logic also fixes #3293 en passant.

  3. Since it was growing rapidly, the logic for auto-guessing /
     inferring the right colorimetry configuration (in pass_colormanage)
     was split from the logic for actually performing the adaptation
     (now pass_color_map).

  Right now, the new logic doesn't do a whole lot since HDR metadata is
  still ignored (but not for long).
* mp_image: split colorimetry metadata into its own struct
  Niklas Haas, 2016-07-03 (1 file, +6/-4)
  This has two reasons:

  1. I tend to add new fields to this metadata, and every time I've done
     so I've consistently forgotten to update all of the dozens of
     places in which this colorimetry metadata might end up getting
     used. While most usages don't really care about most of the
     metadata, sometimes the intent was simply to “copy” the colorimetry
     metadata from one struct to another. With this being inside a
     substruct, those lines of code can now simply read
     a.color = b.color without having to care about added or removed
     fields.

  2. It makes the type definitions nicer for upcoming refactors.

  In going through all of the usages, I also expanded a few where I felt
  that omitting the “young” fields was a bug.
* vd_lavc: hide structs behind platform flags
  Ben Boeckel, 2016-07-01 (1 file, +4/-0)
  Otherwise, warnings about them being unused appear.
* d3d: implement screenshots for --hwdec=d3d11va
  wm4, 2016-06-28 (2 files, +85/-0)
  No method of taking a screenshot was implemented at all. vo_opengl
  lacked window screenshotting, because ANGLE doesn't allow reading the
  frontbuffer. There was no way to read back from a D3D11 texture
  either.

  Implement reading image data from D3D11 textures. This is a
  low-quality effort to get basic screenshots done. Eventually there
  will be a better implementation: once we use AVHWFramesContext
  natively, the readback implementation will be in libavcodec, and will
  be able to cache the staging texture correctly. Hopefully. (For now it
  doesn't even have an AVHWFramesContext for D3D11 yet. But the
  abstraction is more appropriate for this purpose.)
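
  The usual D3D11 GPU-to-CPU readback pattern this refers to, as a
  hedged sketch (COBJMACROS-style C calls; descriptor setup abbreviated
  and error handling omitted):

      /* Copy the video texture into a CPU-readable staging texture. */
      D3D11_TEXTURE2D_DESC desc;
      ID3D11Texture2D_GetDesc(src_tex, &desc);
      desc.Usage = D3D11_USAGE_STAGING;
      desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
      desc.BindFlags = 0;
      desc.MiscFlags = 0;

      ID3D11Texture2D *staging;
      ID3D11Device_CreateTexture2D(device, &desc, NULL, &staging);
      ID3D11DeviceContext_CopyResource(ctx, (ID3D11Resource *)staging,
                                            (ID3D11Resource *)src_tex);

      /* Map it and copy rows out (stride: map.RowPitch). */
      D3D11_MAPPED_SUBRESOURCE map;
      ID3D11DeviceContext_Map(ctx, (ID3D11Resource *)staging, 0,
                              D3D11_MAP_READ, 0, &map);
      /* ...copy map.pData into an mp_image row by row... */
      ID3D11DeviceContext_Unmap(ctx, (ID3D11Resource *)staging, 0);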
* d3d: merge angle_common.h into d3d.h
  wm4, 2016-06-28 (2 files, +17/-0)
  OK, this was dumb. The file didn't have much to do with ANGLE, and the
  functionality can simply be moved to d3d.c. That file contains helpers
  for decoding, but can always be present (on Windows) since it doesn't
  access any D3D specific libavcodec APIs. Thus it doesn't need to be
  conditionally built like the actual hwaccel wrappers.
* Fix misspellings
  stepshal, 2016-06-26 (1 file, +1/-1)
* vo_opengl: vdpau interop without RGB conversion
  wm4, 2016-06-19 (1 file, +12/-0)
  Until now, we've always converted vdpau video surfaces to RGB, and
  then mapped the resulting RGB texture. Change this so that the surface
  is mapped as NV12 plane textures.

  The reason this wasn't done until now is because vdpau surfaces are
  mapped in an "interlaced" way as separate fields, even for progressive
  video. This requires messy reinterleaving. It turns out that even
  though it's an extra processing step, the result can be faster than
  going through the video mixer for RGB conversion.

  Other than some potential speed-gain, doing this has multiple other
  advantages. We can apply our own color conversion, which is important
  in more complex cases. We can correctly apply debanding and
  potentially other processing that requires chroma-specific or in-YUV
  handling.

  If deinterlacing is enabled, this switches back to the old RGB
  conversion method. Until we have at least a primitive deinterlacer in
  vo_opengl, this will stay this way. The d3d11 and vaapi code paths are
  similar. (Of course these don't require any crazy field
  reinterleaving.)
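
  The field-based mapping it describes comes from GL_NV_vdpau_interop,
  roughly like this (texture setup and error checks omitted; variable
  names illustrative):

      /* A vdpau video surface maps as 4 textures: 2 planes (Y, CbCr)
       * x 2 fields, which must be woven back into progressive frames. */
      GLuint textures[4];
      GLvdpauSurfaceNV gl_surface = glVDPAURegisterVideoSurfaceNV(
          (void *)(uintptr_t)vdp_surface, GL_TEXTURE_2D, 4, textures);
      glVDPAUSurfaceAccessNV(gl_surface, GL_READ_ONLY);
      glVDPAUMapSurfacesNV(1, &gl_surface);
      /* ...sample the field textures, reinterleaving odd/even lines... */
      glVDPAUUnmapSurfacesNV(1, &gl_surface);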
* d3d11va: remove unused d3d11va_surface.subindex field
  wm4, 2016-06-16 (1 file, +1/-3)
  This is now stored within the AVFrame/mp_image.
* dxva2: remove dead code in failure case
  James Ross-Gowan, 2016-06-07 (1 file, +0/-3)
  This was made unnecessary by a02d77b, since the only way the function
  could fail was by failing to add a reference to the DirectX DLLs.
* video: remove d3d11 video processor use from OpenGL interop
  wm4, 2016-05-29 (1 file, +3/-0)
  We now have a video filter that uses the d3d11 video processor, so it
  makes no sense to have one in the VO interop code. The VO uses it for
  formats not directly supported by ANGLE (so the video data is
  converted to an RGB texture, which ANGLE can take in).

  Change this so that the video filter is automatically inserted if
  needed. Move the code that maps RGB surfaces to its own interop
  backend. Add a bunch of new image formats, which are used to enforce
  the new constraints, and to automatically insert the filter only when
  needed.

  The added vf mechanism to auto-insert the d3d11vpp filter is very dumb
  and primitive, and will work only for this specific purpose. The
  format negotiation mechanism in the filter chain is generally not very
  pretty, and mostly broken as well. (libavfilter has a different
  mechanism, and these mechanisms don't match well, so vf_lavfi uses
  some sort of hack. It only works because hwaccel and non-hwaccel
  formats are strictly separated.)

  The RGB interop is now only used with older ANGLE versions. The only
  reason I'm keeping it is because it's relatively isolated (uses only
  existing mechanisms and adds no new concepts), and because I want to
  be able to compare the behavior of the old code with the new one for
  testing. It will be removed eventually.

  If ANGLE has NV12 interop, P010 is now handled by converting to NV12
  with the video processor, instead of converting it to RGB and using
  the old mechanism to import that as a texture.
* d3d: simplify DLL loading
  wm4, 2016-05-17 (4 files, +34/-49)
  For some reason, the d3d9/dxva2/d3d11 DLLs are still optional. But we
  don't need to try so hard to keep exact references. In fact, there's
  no reason to unload them at all. So load them once in a central place.

  For simplicity, the d3d9/d3d11 backends both load all DLLs. (They will
  error out only if the required DLLs could not be loaded.)

  In theory, we could just call LoadLibrary multiple times (without
  calling FreeLibrary), but I'm slightly worried that this could be
  detected as a "bug", or that the reference count could even have a low
  static limit that could be hit soon.
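
  A minimal sketch of "load once, never free" (function and handle names
  are illustrative):

      #include <windows.h>

      /* Load each DLL exactly once for the process lifetime. Never
       * calling FreeLibrary is deliberate: the handles stay valid
       * until process exit. */
      static HMODULE hD3D9, hDXVA2, hD3D11;

      static void d3d_load_dlls(void)
      {
          if (!hD3D9)  hD3D9  = LoadLibraryW(L"d3d9.dll");
          if (!hDXVA2) hDXVA2 = LoadLibraryW(L"dxva2.dll");
          if (!hD3D11) hD3D11 = LoadLibraryW(L"d3d11.dll");
      }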
* video: merge dxva2 source files
  wm4, 2016-05-17 (1 file, +62/-2)
  video/dxva2.c exported only 2 functions, both used only by
  video/decode/dxva2.c. The same was already done for d3d11.
* vaapi: avoid forward declaration of variable
  wm4, 2016-05-15 (1 file, +7/-9)
  Why is everything so horrible.
* video: add --hwdec=auto-copy mode
  wm4, 2016-05-11 (5 files, +14/-3)
  This uses the normal autoprobing rules like "auto", but rejects
  anything that isn't flagged as copying data back to system memory.
  The chunk in command.c was dead code, so remove it instead of
  updating it.
* build: merge d3d11va and dxva2 hwaccel checks
  wm4, 2016-05-11 (1 file, +1/-5)
  We don't have any reason to disable either. Both are loaded
  dynamically at runtime anyway. There is also no reason why dxva2 would
  disappear from libavcodec any time soon.
* vo_opengl: d3d11egl: native NV12 sampling support
  wm4, 2016-05-10 (1 file, +25/-1)
  This uses EGL_ANGLE_stream_producer_d3d_texture_nv12 and related
  extensions to map the D3D textures coming from the hardware decoder
  directly in GL.

  In theory this would be trivial to achieve, but unfortunately ANGLE
  does not have a mechanism to "import" D3D textures as GL textures.
  Instead, an awkward mechanism via EGL_KHR_stream was implemented,
  which involves at least 5 extensions and a lot of glue code. (Even
  worse than VAAPI EGL interop, and very far from the simplicity you get
  on OSX.)

  The ANGLE mechanism so far supports only the NV12 texture format,
  which means 10 bit won't work. It also does not work in ES3 mode yet.
  For these reasons, the "old" ID3D11VideoProcessor code is kept and
  used as a fallback.
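
  Heavily condensed, the EGL_KHR_stream dance looks something like the
  sketch below; attribute lists and error checks are omitted, variable
  names are illustrative, and the entry points (the ANGLE/NV stream
  extensions named above) are resolved via eglGetProcAddress:

      /* Producer: the decoder's D3D11 NV12 texture. Consumer: two GL
       * textures (luma + chroma) bound as external textures. */
      EGLStreamKHR stream = eglCreateStreamKHR(egl_display, NULL);
      eglStreamConsumerGLTextureExternalAttribsNV(egl_display, stream,
                                                  consumer_attribs);
      eglCreateStreamProducerD3DTextureNV12ANGLE(egl_display, stream, NULL);

      /* Per frame: post the decoder texture, acquire it on the GL side. */
      eglStreamPostD3DTextureNV12ANGLE(egl_display, stream,
                                       (void *)decoder_texture,
                                       frame_attribs);
      eglStreamConsumerAcquireKHR(egl_display, stream);
      /* ...render... */
      eglStreamConsumerReleaseKHR(egl_display, stream);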
* video: refacto