path: root/video/decode/d3d11va.c
Commit message | Author | Date | Files | Lines
* vo_opengl: d3d11egl: native NV12 sampling support | wm4 | 2016-05-10 | 1 | -1/+25
This uses EGL_ANGLE_stream_producer_d3d_texture_nv12 and related extensions to map the D3D textures coming from the hardware decoder directly in GL.

In theory this would be trivial to achieve, but unfortunately ANGLE does not have a mechanism to "import" D3D textures as GL textures. Instead, an awkward mechanism via EGL_KHR_stream was implemented, which involves at least 5 extensions and a lot of glue code. (Even worse than VAAPI EGL interop, and very far from the simplicity you get on OSX.)

The ANGLE mechanism so far supports only the NV12 texture format, which means 10 bit won't work. It also does not work in ES3 mode yet. For these reasons, the "old" ID3D11VideoProcessor code is kept and used as a fallback.
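As a rough illustration of the EGL_KHR_stream plumbing this commit refers to, the sketch below shows how a decoder-produced NV12 ID3D11Texture2D slice could be pushed through a stream and picked up as two GL_TEXTURE_EXTERNAL_OES textures (Y and UV planes). This is a hedged sketch only, not mpv's actual interop code: it assumes the extension prototypes are available (EGL_EGLEXT_PROTOTYPES plus ANGLE's eglext_angle.h), error handling is dropped, and egl_display, decoder_tex and subresource are placeholders.

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>   /* plus ANGLE's eglext_angle.h for the *ANGLE tokens */

/* Assumed to be set up elsewhere. */
extern EGLDisplay egl_display;
extern void *decoder_tex;       /* ID3D11Texture2D* from the hardware decoder */
extern EGLAttrib subresource;   /* array slice within the decoder texture */

static void post_nv12_frame(void)
{
    EGLStreamKHR stream = eglCreateStreamKHR(egl_display, NULL);

    /* Consumer side: two external GL textures (Y and UV), bound on units 0/1. */
    EGLAttrib consumer_attrs[] = {
        EGL_COLOR_BUFFER_TYPE,          EGL_YUV_BUFFER_EXT,
        EGL_YUV_NUMBER_OF_PLANES_EXT,   2,
        EGL_YUV_PLANE0_TEXTURE_UNIT_NV, 0,
        EGL_YUV_PLANE1_TEXTURE_UNIT_NV, 1,
        EGL_NONE,
    };
    eglStreamConsumerGLTextureExternalAttribsNV(egl_display, stream, consumer_attrs);

    /* Producer side: D3D11 NV12 textures posted by the decoder. */
    eglCreateStreamProducerD3DTextureNV12ANGLE(egl_display, stream, NULL);

    /* Per frame: post the texture slice, acquire it for GL sampling, and
     * release it again before the next frame is posted. */
    EGLAttrib frame_attrs[] = {
        EGL_D3D_TEXTURE_SUBRESOURCE_ID_ANGLE, subresource,
        EGL_NONE,
    };
    eglStreamPostD3DTextureNV12ANGLE(egl_display, stream, decoder_tex, frame_attrs);
    eglStreamConsumerAcquireKHR(egl_display, stream);
    /* ... render, sampling the two external textures ... */
    eglStreamConsumerReleaseKHR(egl_display, stream);
}
```

In practice the stream/consumer/producer setup would happen once at init; only the post/acquire/release calls run per decoded frame.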
* video: refactor how VO exports hwdec device handles | wm4 | 2016-05-09 | 1 | -9/+3
The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or documented where not), which makes the whole thing saner and cleaner. In particular, thread-safety rules become less subtle and more obvious.

The new internal API makes it easier to support multiple OpenGL interop backends. (Although this is not done yet, and it's not clear whether it ever will be.)

This also removes all the API-specific fields from mp_hwdec_ctx and replaces them with a "ctx" field. For d3d in particular, we drop the mp_d3d_ctx struct completely, and pass the interfaces directly.

Remove the emulation checks from vaapi.c and vdpau.c; they are pointless, and the checks that matter are done on the VO layer.

The d3d hardware decoders might slightly change behavior: dxva2-copy will not use the VO device anymore if the VO supports proper interop. This pretty much assumes that in such cases the VO will not use any form of exclusive mode, which makes using the VO device in copy mode unnecessary.

This is a big refactor. Some things may be untested and could be broken.
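A minimal sketch of the "one opaque ctx pointer instead of per-API fields" idea described above (the struct below is abridged and its exact layout is an assumption; the authoritative definitions live in video/hwdec.h):

```c
#include <d3d11.h>

/* Abridged, illustrative version of the device handle a VO exports.
 * Field names approximate the idea, not the exact mpv header. */
struct mp_hwdec_ctx {
    const char *driver_name;  /* e.g. "d3d11va" */
    void *ctx;                /* API-specific handle, e.g. an ID3D11Device* */
};

/* Decoder side: instead of poking API-specific fields, cast the single
 * opaque ctx back to the concrete interface for the chosen hwdec type. */
static ID3D11Device *get_d3d11_device(struct mp_hwdec_ctx *hwctx)
{
    return hwctx ? (ID3D11Device *)hwctx->ctx : NULL;
}
```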
* win32: replace libuuid.a usage with initguid.h | James Ross-Gowan | 2016-05-01 | 1 | -0/+1
Including initguid.h at the top of a file that uses references to GUIDs causes the GUIDs to be declared globally with __declspec(selectany). The 'selectany' attribute tells the linker to consolidate multiple definitions of each GUID, which would be great except that, in Cygwin and MinGW GCC 6.1, this method of linking makes the GUIDs conflict with the ones declared in libuuid.a.

Since initguid.h obsoletes libuuid.a in modern compilers that support __declspec(selectany), add initguid.h to all files that use GUIDs and remove libuuid.a from the build.

Fixes #3097
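A minimal example of the include order this relies on (assuming a MinGW-w64 or Cygwin toolchain; the specific GUID referenced is just for illustration):

```c
/* initguid.h must come before any header that declares GUIDs via DEFINE_GUID,
 * so those declarations become __declspec(selectany) definitions in this TU. */
#include <initguid.h>
#include <d3d11.h>

/* With initguid.h in effect, referencing a GUID links without libuuid.a;
 * the linker merges the duplicate definitions emitted by each object file. */
static const GUID *const example_iid = &IID_ID3D11VideoDevice;
```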
* d3d11va: fix invalid deref on decoder init failure | Kevin Mitchell | 2016-04-29 | 1 | -1/+1
Fixes #3092
* d3d11va, dxva2: return the format struct directly | wm4 | 2016-04-29 | 1 | -4/+4
Slight simplification, IMHO.
* d3d11va, dxva2: simplify decoder selection | wm4 | 2016-04-29 | 1 | -31/+11
In particular, this moves the depth test to common code. It should be functionally equivalent, except that for DXVA2, the IDirectXVideoDecoderService_GetDecoderRenderTargets API is potentially called more often.
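For reference, a hedged sketch of the DXVA2-side query named above; service and profile_guid are placeholders, and error handling is minimal:

```c
#define COBJMACROS
#include <d3d9.h>
#include <dxva2api.h>
#include <objbase.h>   /* CoTaskMemFree */

/* Enumerate the render-target formats a decoder profile supports, e.g. to
 * check whether NV12 (or a deeper format) is available for that profile. */
static int profile_supports_format(IDirectXVideoDecoderService *service,
                                   const GUID *profile_guid, D3DFORMAT wanted)
{
    D3DFORMAT *formats = NULL;
    UINT count = 0;
    int found = 0;
    if (SUCCEEDED(IDirectXVideoDecoderService_GetDecoderRenderTargets(
            service, profile_guid, &count, &formats))) {
        for (UINT i = 0; i < count; i++)
            found |= formats[i] == wanted;
        CoTaskMemFree(formats);
    }
    return found;
}
```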
* d3d11va: store texture/subindex in IMGFMT_D3D11VA plane pointers | wm4 | 2016-04-27 | 1 | -7/+68
Basically this gets rid of the need for the accessors in d3d11va.h, and the code can be cleaned up a little bit.

Note that libavcodec only defines an ID3D11VideoDecoderOutputView pointer in the last plane pointers, but it tolerates/passes through the other plane pointers we set.
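An illustrative sketch of the plane-pointer convention described above (the exact slot assignment here is an assumption; the authoritative mapping is whatever d3d11va.h documents):

```c
#include <stdint.h>
#include <d3d11.h>

/* Hypothetical helper: stash the decoder objects directly in the plane
 * pointers of an IMGFMT_D3D11VA mp_image (planes[] is uint8_t* in mpv),
 * so no accessor functions are needed. libavcodec itself only fills the
 * ID3D11VideoDecoderOutputView slot. */
static void wrap_d3d11_frame(uint8_t **planes, ID3D11Texture2D *tex,
                             UINT subindex, ID3D11VideoDecoderOutputView *view)
{
    planes[0] = (uint8_t *)tex;                  /* texture array */
    planes[1] = (uint8_t *)(intptr_t)subindex;   /* slice within the array */
    planes[3] = (uint8_t *)view;                 /* what libavcodec passes */
}
```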
* vo_opengl: D3D11VA + ANGLE interop | wm4 | 2016-04-27 | 1 | -1/+32
This uses ID3D11VideoProcessor to convert the video to an RGBA surface, which is then bound to ANGLE. Currently ANGLE does not provide any way to bind NV12 surfaces directly, so this will have to do.

ID3D11VideoContext1 would give us slightly more control over the colorspace conversion, though it's still not good, and not available in MinGW headers yet.

The video processor is created lazily, because we need to have the coded frame size, of which AVFrame and mp_image have no concept. Doing the creation lazily is less of a pain than somehow hacking the coded frame size into mp_image.

I'm not really sure how ID3D11VideoProcessorInputView is supposed to work. We recreate it on every frame, which is simple and hopefully doesn't affect performance.
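A hedged sketch of the per-frame conversion step this describes (assumes the video device/context, processor, enumerator, and the RGBA output view were created elsewhere once the coded size was known; names are placeholders and error handling is omitted):

```c
#define COBJMACROS
#include <d3d11.h>

static void convert_frame(ID3D11VideoDevice *video_dev,
                          ID3D11VideoContext *video_ctx,
                          ID3D11VideoProcessor *processor,
                          ID3D11VideoProcessorEnumerator *enumerator,
                          ID3D11VideoProcessorOutputView *out_view,
                          ID3D11Texture2D *decoder_tex, UINT subindex)
{
    /* Input view over one slice of the decoder's texture array; recreated
     * per frame, as the commit message notes. */
    D3D11_VIDEO_PROCESSOR_INPUT_VIEW_DESC in_desc = {
        .FourCC = 0,  /* 0: use the resource's own (decode) format */
        .ViewDimension = D3D11_VPIV_DIMENSION_TEXTURE2D,
        .Texture2D = { .ArraySlice = subindex },
    };
    ID3D11VideoProcessorInputView *in_view = NULL;
    ID3D11VideoDevice_CreateVideoProcessorInputView(video_dev,
            (ID3D11Resource *)decoder_tex, enumerator, &in_desc, &in_view);

    D3D11_VIDEO_PROCESSOR_STREAM stream = {
        .Enable = TRUE,
        .pInputSurface = in_view,
    };
    /* Blit/convert the NV12 input into the RGBA output view bound to ANGLE. */
    ID3D11VideoContext_VideoProcessorBlt(video_ctx, processor, out_view,
                                         0, 1, &stream);
    ID3D11VideoProcessorInputView_Release(in_view);
}
```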
* d3dva: include selected decoder and format in verbose output | Kevin Mitchell | 2016-04-17 | 1 | -0/+4
* vd_lavc: let hardware decoder request delaying frames explicitly | wm4 | 2016-04-07 | 1 | -0/+1
Until now, the presence of the process_image() callback was used to set a delay queue with a hardcoded size. Change this to a vd_lavc_hwdec field instead, so the decoder can explicitly set this if it's really needed.

Do this so process_image() can be used in the VideoToolbox glue code for something entirely unrelated.
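Roughly, the interface change could look like this (abridged; the exact field name and callback signature in vd_lavc are assumptions for illustration):

```c
struct lavc_ctx;
struct mp_image;

/* Abridged sketch of a hwdec description entry. Instead of inferring a
 * fixed-size delay queue from process_image being non-NULL, the backend
 * states the queue depth it actually needs. */
struct vd_lavc_hwdec {
    /* ... other fields ... */
    int delay_queue;  /* number of frames to buffer before output; 0 = none */
    struct mp_image *(*process_image)(struct lavc_ctx *ctx, struct mp_image *img);
};
```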
* vd_lavc: fix codec vs. decoder confusion | wm4 | 2016-04-07 | 1 | -2/+2
Some functions which expected a codec name (i.e. the name of the video format itself) were passed a decoder name. Most "native" libavcodec decoders have the same name as the codec, so this was never an issue.

This should mean that e.g. using "--vd=lavc:h264_mmal --hwdec=mmal" should now actually enable native surface mode (instead of doing copy-back).
* vd_lavc: add d3d11va hwdec | Kevin Mitchell | 2016-03-30 | 1 | -0/+498
This commit adds the d3d11va-copy hwdec mode using the ffmpeg d3d11va api. Functions in common with dxva2 are handled in a separate decode/d3d.c file. A future commit will rewrite decode/dxva2.c to share this code.
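For the copy-back ("d3d11va-copy") variant, the decoded surface has to be read back to system memory. A hedged sketch of that step, assuming a staging texture of the same NV12 format was created with D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_READ (names are placeholders, error handling trimmed):

```c
#define COBJMACROS
#include <d3d11.h>

static void copy_back_nv12(ID3D11DeviceContext *device_ctx,
                           ID3D11Texture2D *staging_tex,
                           ID3D11Texture2D *decoder_tex,
                           UINT subindex, UINT height)
{
    /* Copy one array slice of the decoder texture into the staging texture. */
    ID3D11DeviceContext_CopySubresourceRegion(device_ctx,
            (ID3D11Resource *)staging_tex, 0, 0, 0, 0,
            (ID3D11Resource *)decoder_tex, subindex, NULL);

    D3D11_MAPPED_SUBRESOURCE map;
    if (SUCCEEDED(ID3D11DeviceContext_Map(device_ctx,
            (ID3D11Resource *)staging_tex, 0, D3D11_MAP_READ, 0, &map))) {
        /* For NV12, the luma plane starts at map.pData and the interleaved
         * chroma plane follows at map.pData + map.RowPitch * height. */
        ID3D11DeviceContext_Unmap(device_ctx, (ID3D11Resource *)staging_tex, 0);
    }
}
```

At runtime, this mode is what the --hwdec=d3d11va-copy option selects.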