path: root/video/d3d.h
author    wm4 <wm4@nowhere>  2016-04-27 13:49:47 +0200
committer wm4 <wm4@nowhere>  2016-04-27 13:49:47 +0200
commit    3706918311ef4cc57b1241e87dcc43d699e960f9 (patch)
tree      d887022bd92e40274ab956b705a808f02104a705 /video/d3d.h
parent    cf9b415173b57befb410ecbe92c298cfe36f0451 (diff)
download  mpv-3706918311ef4cc57b1241e87dcc43d699e960f9.tar.bz2
          mpv-3706918311ef4cc57b1241e87dcc43d699e960f9.tar.xz
vo_opengl: D3D11VA + ANGLE interop
This uses ID3D11VideoProcessor to convert the video to an RGBA surface, which is then bound to ANGLE. Currently ANGLE does not provide any way to bind NV12 surfaces directly, so this will have to do.

ID3D11VideoContext1 would give us slightly more control over the colorspace conversion, though it's still not good, and it is not available in MinGW headers yet.

The video processor is created lazily, because we need to have the coded frame size, of which AVFrame and mp_image have no concept. Doing the creation lazily is less of a pain than somehow hacking the coded frame size into mp_image.

I'm not really sure how ID3D11VideoProcessorInputView is supposed to work. We recreate it on every frame, which is simple and hopefully doesn't affect performance.
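The flow the message describes can be sketched as follows. This is an illustrative outline, not mpv's actual hwdec code: the `struct vp_state`, `init_video_processor`, and `process_frame` names are invented for this sketch, and error handling is reduced to bare HRESULT checks. It does use the real D3D11 video API entry points (CreateVideoProcessorEnumerator, CreateVideoProcessor, CreateVideoProcessorInputView, VideoProcessorBlt), and it mirrors the two points made above: the processor is created lazily once the coded frame size is known, and the input view is recreated for every frame.

```c
// Sketch only: lazy ID3D11VideoProcessor creation plus a per-frame
// input view + blit to the RGBA output view that ANGLE binds.
// All struct/function names here are hypothetical, not mpv's.
#define COBJMACROS
#include <d3d11.h>

struct vp_state {
    ID3D11VideoDevice *video_dev;
    ID3D11VideoContext *video_ctx;
    ID3D11VideoProcessorEnumerator *vp_enum;
    ID3D11VideoProcessor *vp;
};

// Created lazily: the coded width/height come from the decoder surface,
// since AVFrame/mp_image carry only the display size.
static HRESULT init_video_processor(struct vp_state *s, UINT coded_w,
                                    UINT coded_h, UINT out_w, UINT out_h)
{
    if (s->vp)
        return S_OK;
    D3D11_VIDEO_PROCESSOR_CONTENT_DESC desc = {
        .InputFrameFormat = D3D11_VIDEO_FRAME_FORMAT_PROGRESSIVE,
        .InputWidth = coded_w,  .InputHeight = coded_h,
        .OutputWidth = out_w,   .OutputHeight = out_h,
        .Usage = D3D11_VIDEO_USAGE_PLAYBACK_NORMAL,
    };
    HRESULT hr = ID3D11VideoDevice_CreateVideoProcessorEnumerator(
        s->video_dev, &desc, &s->vp_enum);
    if (FAILED(hr))
        return hr;
    return ID3D11VideoDevice_CreateVideoProcessor(s->video_dev, s->vp_enum,
                                                  0, &s->vp);
}

// Per frame: build a fresh input view over the decoder texture array slice,
// blit it into the RGBA output view, then release the view again.
static HRESULT process_frame(struct vp_state *s, ID3D11Texture2D *decoder_tex,
                             UINT subresource,
                             ID3D11VideoProcessorOutputView *out_view)
{
    D3D11_VIDEO_PROCESSOR_INPUT_VIEW_DESC ivdesc = {
        .ViewDimension = D3D11_VPIV_DIMENSION_TEXTURE2D,
        .Texture2D = { .ArraySlice = subresource },
    };
    ID3D11VideoProcessorInputView *in_view = NULL;
    HRESULT hr = ID3D11VideoDevice_CreateVideoProcessorInputView(
        s->video_dev, (ID3D11Resource *)decoder_tex, s->vp_enum,
        &ivdesc, &in_view);
    if (FAILED(hr))
        return hr;
    D3D11_VIDEO_PROCESSOR_STREAM stream = {
        .Enable = TRUE,
        .pInputSurface = in_view,
    };
    hr = ID3D11VideoContext_VideoProcessorBlt(s->video_ctx, s->vp, out_view,
                                              0, 1, &stream);
    ID3D11VideoProcessorInputView_Release(in_view);
    return hr;
}
```

The output view would be created once over the RGBA texture shared with ANGLE; only the input view churns per frame, which is what the last paragraph above is hedging about.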
Diffstat (limited to 'video/d3d.h')
-rw-r--r--  video/d3d.h  |  2 ++
1 file changed, 2 insertions(+), 0 deletions(-)
diff --git a/video/d3d.h b/video/d3d.h
index 30bee49adc..b5cf365f7f 100644
--- a/video/d3d.h
+++ b/video/d3d.h
@@ -2,12 +2,14 @@
 #define MP_D3D_H_
 
 #include <d3d9.h>
+#include <d3d11.h>
 
 #include "hwdec.h"
 
 struct mp_d3d_ctx {
     struct mp_hwdec_ctx hwctx;
     IDirect3DDevice9 *d3d9_device;
+    ID3D11Device *d3d11_device;
 };
 
 #endif