| author | wm4 <wm4@nowhere> | 2017-01-13 13:36:02 +0100 |
|---|---|---|
| committer | wm4 <wm4@nowhere> | 2017-01-13 18:43:35 +0100 |
| commit | 812128bab7d14e92d778d97fd34b9f97bd07acc5 (patch) | |
| tree | a2870d87990920ed05c25276d7d721b55e215d91 /video/decode | |
| parent | d9376fc86ffec26f134e89293a4342c4b99f0e5b (diff) | |
vo_opengl, vaapi: properly probe 10 bit rendering support
There are going to be users who have a Mesa installation which does not
support 10 bit, but a GPU which can decode to 10 bit. So it's probably
better not to hardcode whether it is supported.
Introduce a more general way to signal supported formats from renderer
to decoder. Obviously this is imperfect, because it still isn't part of
proper format negotiation (for example, what if there's a vavpp filter,
which accepts anything?). Still, it's slightly better than before.
I don't know any way to probe for vaapi dmabuf/EGL dmabuf support
properly (in particular testing specific formats, not just general
availability). So we stay with the current approach and try to create
and map dummy surfaces on init to probe for support. Overdo it and check
all formats that AVHWFramesConstraints reports, instead of only NV12 and
P010 surfaces.
Since we can support unknown formats now, add explicit checks to the
EGL/dmabuf mapper code to reject unsupported formats. I also noticed
that libavutil signals support for RGB0/BGR0, but couldn't get it to
work. Remove the DRM formats that are unused or didn't work the way I
tried to use them.
With this, 10 bit decoding + rendering should work, provided you have
a capable CPU and a patched Mesa. The required Mesa patch adds support
for the R16 and GR32 formats. It was sent by a Kodi developer to the
Mesa developer mailing list, but has not been accepted yet.
Diffstat (limited to 'video/decode')
| -rw-r--r-- | video/decode/vaapi.c | 29 |
1 file changed, 20 insertions(+), 9 deletions(-)
```diff
diff --git a/video/decode/vaapi.c b/video/decode/vaapi.c
index 8a7331fc26..13e38f2258 100644
--- a/video/decode/vaapi.c
+++ b/video/decode/vaapi.c
@@ -39,7 +39,7 @@ struct priv {
     struct mp_log *log;
     struct mp_vaapi_ctx *ctx;
-    bool own_ctx;
+    struct mp_hwdec_ctx *hwdev;
 
     AVBufferRef *frames_ref;
 };
 
@@ -54,11 +54,22 @@ static int init_decoder(struct lavc_ctx *ctx, int w, int h)
 
     assert(!ctx->avctx->hw_frames_ctx);
 
-    // If we use direct rendering, disallow 10 bit - it's probably not
-    // implemented yet, and our downstream components can't deal with it.
-    if (!p->own_ctx && required_sw_format != AV_PIX_FMT_NV12) {
-        MP_WARN(ctx, "10 bit surfaces are currently unsupported.\n");
-        return -1;
+    // The video output might not support all formats.
+    // Note that supported_formats==NULL means any are accepted.
+    if (p->hwdev && p->hwdev->supported_formats) {
+        int mp_format = pixfmt2imgfmt(required_sw_format);
+        bool found = false;
+        for (int n = 0; p->hwdev->supported_formats[n]; n++) {
+            if (p->hwdev->supported_formats[n] == mp_format) {
+                found = true;
+                break;
+            }
+        }
+        if (!found) {
+            MP_WARN(ctx, "Surface format %s not supported for direct rendering.\n",
+                    mp_imgfmt_to_name(mp_format));
+            return -1;
+        }
     }
 
     if (p->frames_ref) {
@@ -114,7 +125,7 @@ static void uninit(struct lavc_ctx *ctx)
 
     av_buffer_unref(&p->frames_ref);
 
-    if (p->own_ctx)
+    if (!p->hwdev)
         va_destroy(p->ctx);
 
     talloc_free(p);
@@ -129,14 +140,14 @@ static int init(struct lavc_ctx *ctx, bool direct)
     };
 
     if (direct) {
-        p->ctx = hwdec_devices_get(ctx->hwdec_devs, HWDEC_VAAPI)->ctx;
+        p->hwdev = hwdec_devices_get(ctx->hwdec_devs, HWDEC_VAAPI);
+        p->ctx = p->hwdev->ctx;
     } else {
         p->ctx = va_create_standalone(ctx->log, false);
         if (!p->ctx) {
             talloc_free(p);
             return -1;
         }
-        p->own_ctx = true;
     }
 
     ctx->hwdec_priv = p;
```