MPlayer traditionally had completely separate sh_ structs for
audio/video/subs, without a good way to share fields. This meant that
fields shared across all these headers had to be duplicated. This
commit essentially deduplicates the last remaining duplicated fields.
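
To illustrate the idea, here is a minimal sketch of the resulting
layout (the field names are illustrative, not necessarily the ones mpv
actually uses):

    struct sh_stream {
        enum stream_type type;   // audio/video/sub
        int demuxer_id;          // shared field: was duplicated per header
        const char *codec;       // shared field: codec name
        struct sh_audio *audio;  // type-specific parts, NULL if unused
        struct sh_video *video;
        struct sh_sub *sub;
    };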
|
Why not. "format" sounds too misleading for the actual importance and
meaning of this field.
|
When using --hwdec=auto, about half of all systems will print:
"[vdpau] Error when calling vdp_device_create_x11: 1"
This happens because mpv will usually be linked against both vdpau and
vaapi libs, but the drivers are not necessarily available. Trying to
load a missing driver then fails. This is a normal part of probing, but
the error messages were printed anyway. Silence them by explicitly
distinguishing probing from explicit requests.
This pretty much goes through all the layers. We always consider
loading hw backends for vo_opengl "auto probed", even if a hw backend
is explicitly requested. In that case, vd_lavc will print a warning
message anyway (this message is also adjusted a bit).
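
A sketch of the mechanism (the names are illustrative): the probing
state is passed down, and failures are logged at verbose instead of
error level.

    static int try_load_hwdec(struct lavc_ctx *ctx, bool autoprobe)
    {
        if (init_hw_device(ctx) < 0) {
            // when autoprobing, a missing driver is expected, not an error
            int level = autoprobe ? MSGL_V : MSGL_ERR;
            mp_msg(ctx->log, level, "Could not create hw decoder device.\n");
            return -1;
        }
        return 0;
    }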
|
When showing cover art, the decoding logic pretends that the source has
an infinite number of frames. This slightly simplifies dealing with
filter data flow. It was done by feeding the same packet repeatedly to
the decoder (each decode run produces new output).
Change this by decoding once at the video initialization. This is easier
to follow, and increases robustness in case of broken images. Usually,
we try to tolerate decoding errors, so decoding normally continues, but
in this case it would just burn the CPU for no reason.
Fixes #2056.
|
This must have been some nonsense in the original vaapi mplayer patch.
While I still have no good idea what this "direct mapping" business is
about, it appears to be pretty much pointless. Nothing can hold
additional "real" surface references (due to how the API and mpv/lavc
refcounting work), so removing the additional surfaces won't break
anything. It could still be that this was for achieving additional
buffering (not reusing surfaces as soon), but we buffer some additional
data anyway. Plus, the original intention of the vaapi mplayer code was
probably to increase the surface count just by 1 or 2, not to actually
double it, and/or it was a "trick" to get to the maximum count of 21
when h264 is in use.
gstreamer-vaapi uses "ref_frames + SCRATCH_SURFACES_COUNT" here, with
SCRATCH_SURFACES_COUNT defined to 4. It doesn't appear to check the
overlay attributes in the decoder at all.
In any case, remove this nonsense.
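
The resulting surface-count logic then boils down to something like
this (SCRATCH_SURFACES_COUNT is gstreamer-vaapi's name; mpv's constant
may differ):

    #define SCRATCH_SURFACES_COUNT 4

    static int get_surface_count(int ref_frames)
    {
        // reference surfaces the codec needs, plus a few scratch surfaces
        // for buffering - no doubling, no "direct mapping" special case
        return ref_frames + SCRATCH_SURFACES_COUNT;
    }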
|
On hw decoder reinit failure we did not actually always return a sw
format, because the first format (fmt[0]) is not always a sw format.
This broke some cases of fallback. We must go through the trouble to
determine the first actual sw format.
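
Finding the first actual sw format is plain libavutil API usage,
roughly:

    #include <libavutil/pixdesc.h>

    static enum AVPixelFormat first_sw_format(const enum AVPixelFormat *fmt)
    {
        for (int i = 0; fmt[i] != AV_PIX_FMT_NONE; i++) {
            const AVPixFmtDescriptor *d = av_pix_fmt_desc_get(fmt[i]);
            if (d && !(d->flags & AV_PIX_FMT_FLAG_HWACCEL))
                return fmt[i];  // first entry that is not a hw format
        }
        return AV_PIX_FMT_NONE;
    }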
|
Yet another of these dozens of hwaccel changes. This time, libavcodec
provides utility functions, which initialize the vdpau decoder and map
codec profiles. So a lot of work the API user had to do falls away.
This will also give us support for high bit depth profiles, and
possibly HEVC once libavcodec supports it.
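
The core of the new API usage is a single call (av_vdpau_bind_context()
is the real libavcodec function; avctx, device and get_proc_address
come from the surrounding vdpau setup, and error handling is omitted):

    #include <libavcodec/vdpau.h>

    // libavcodec allocates and binds the AVVDPAUContext itself; the
    // caller only provides the VDPAU device and its proc address resolver.
    int err = av_vdpau_bind_context(avctx, device, get_proc_address, 0);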
|
vdpau.c will be used for new code.
|
...instead of relying on the hw decoding API to align it for us. The old
method could in theory have gone wrong if the video is cropped by an
amount large enough to step over several blocks.
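
Doing the alignment ourselves is a one-liner with FFmpeg's FFALIGN
macro (the alignment of 16 is illustrative; the required value depends
on the codec):

    int surface_w = FFALIGN(w, 16);
    int surface_h = FFALIGN(h, 16);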
|
There's not much of a reason to keep get_surface_hwdec() and
get_buffer2_hwdec() separate. Actually, the way the mpi->AVFrame
referencing is done makes this confusing. The separation is probably
an artifact of the pre-libavcodec-refcounting compatibility glue.
|
Most of hardware decoding is initialized lazily. When the first packet
is parsed, libavcodec will call get_format() to check whether hw or sw
decoding is wanted. Until now, we've returned AV_PIX_FMT_NONE from
get_format() if hw decoder initialization failed. This caused the
avcodec_decode_video2() call to fail, which in turn let us trigger the
fallback. We didn't return a sw format from get_format(), because we
didn't want to continue decoding at all. (The reason being that full
reinitialization is more robust when continuing sw decoding.)
This has some disadvantages. libavcodec vomited some unwanted error
messages. Sometimes the failures were more severe, as happened with
HEVC. In that case, the error code path simply acted up in a way that
was extremely inconvenient (and had to be fixed by myself). In general,
libavcodec is not designed to fall back this way.
Make it a bit less violent from the API usage point of view. Return a sw
format if hw decoder initialization fails. In this case, we let
get_buffer2() call avcodec_default_get_buffer2() as well. libavcodec is
allowed to perform its own sw fallback. But once the decode function
returns, we do the full reinitialization we wanted to do.
The result is that the fallback is more robust, and doesn't trigger any
decoder error codepaths or messages either. Change our own fallback
message to a warning, since there are no other messages with error
severity anymore.
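
A simplified sketch of the resulting get_format() logic (the names are
illustrative, and first_sw_format() would scan the list for the first
non-hwaccel entry):

    static enum AVPixelFormat get_format_hwdec(AVCodecContext *avctx,
                                               const enum AVPixelFormat *fmt)
    {
        struct lavc_ctx *ctx = avctx->opaque;
        if (init_hw_decoder(ctx, fmt) >= 0)
            return ctx->hw_pix_fmt;  // hw init ok: decode with the hw format
        // hw init failed: let libavcodec continue with its own sw fallback,
        // and do the full sw reinitialization once the decode call returns
        ctx->hwdec_failed = true;
        return first_sw_format(fmt);
    }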
|
Instead of the requested one, which can be just "auto", and which is
rather useless.
|
This is pretty much copy&pasted from Libav commit
a7e0380497306d9723dec8440a4c52e8bf0263cf.
Note that if FFmpeg was not compiled with HEVC DXVA2 support, or your
video drivers do not support HEVC, the player will not fall back and
will just fail to decode any video. This is because libavcodec appears not
to return an error in this case. The situation is made worse by the
fact that MSYS2 is on an ancient MinGW-w64 release, which does not
have the required headers for HEVC DXVA2 support.
|
The hardware always decodes to nv12, so using this image format causes
less CPU usage than uyvy (which we were using until now, since Apple
examples and other free software use it). The reduction in CPU usage
can add up to quite a bit, especially for 4k or high fps video.
This needs an accompanying commit in libavcodec.
|
gpu_memcpy should be called from code which targets SSE
Signed-off-by: wm4 <wm4@nowhere>
|
These are basically MPlayer leftovers, and barely useful due to being
redundant with other messages. The FPS message is used somewhere else.
|
Remove the old implementation for these properties. It was never very
good, often returned very inaccurate values or just 0, and was static
even if the source was variable bitrate. Replace it with the
implementation of "packet-video-bitrate". Mark the "packet-..."
properties as deprecated. (The effective difference is different
formatting, and returning the raw value in bits instead of kilobits.)
Also extend the documentation a little.
It appears at least some decoders (sipr?) need the
AVCodecContext.bit_rate field set, so this one is still passed through.
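
For illustration, the packet-based implementation amounts to summing
packet sizes over a time window (a sketch, not mpv's exact code):

    #include <stdint.h>

    static double packet_bitrate(uint64_t bytes_in_window, double window_secs)
    {
        if (window_secs <= 0)
            return 0;
        // raw value in bits per second, as the new properties return it
        return bytes_in_window * 8 / window_secs;
    }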
|
Signed-off-by: wm4 <wm4@nowhere>
|
Signed-off-by: wm4 <wm4@nowhere>
|
Remove the colorspace-related top-level options, add them to vf_format.
They are rather obscure and not needed often, so it's better to get them
out of the way. In particular, this gets rid of the semi-complicated
logic in command.c (most of which was needed for OSD display and the
direct feedback from the VO). It removes the duplicated color-related
name mappings.
This removes the ability to write the colormatrix and related
properties. Since filters can be changed at runtime, there's no loss of
functionality, except that you can't automatically cycle through the
color constants anymore (but who needs to do that?).
This also changes the type of mp_csp_names and related variables, so
they can be used directly with OPT_CHOICE. This probably ended up a bit
awkward, for the sake of not adding a new option type which would have
used the previous format.
|
This requires FFmpeg git master for accelerated hardware decoding.
Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
will also work.
Most things work. Screenshots don't work with accelerated/opaque
decoding (except using full window screenshot mode). Subtitles are
very slow - even simple but huge overlays can cause frame drops.
This always uses fullscreen mode. It uses dispmanx and mmal directly,
and there are no window managers or anything on this level.
vo_opengl also kind of works, but is pretty useless and slow. It can't
use opaque hardware decoding (copy back can be used by forcing the
option --vd=lavc:h264_mmal). Keep in mind that the dispmanx backend
is preferred over the X11 ones in case you're trying on X11; but X11
is even more useless on RPI.
This doesn't correctly reject extended h264 profiles and thus doesn't
fall back to software decoding. The hw supports only up to the high
profile, and will e.g. return garbage for Hi10P video.
This sets a precedent of enabling hw decoding by default, but only if
RPI support is compiled in (which hopefully means it stays disabled on
desktop Linux platforms). While it's more or less required to use hw
decoding on the weak RPI, it causes more problems than it solves on
real platforms (Linux has the Intel GPU problem, OSX still has some
cases of broken decoding). So I can live with this compromise of
having different defaults depending on the platform.
Raspberry Pi 2 is required. This wasn't tested on the original RPI,
though at least decoding itself seems to work (but full playback was
not tested).
|
Codecs for hardware acceleration are not blacklisted, but whitelisted.
Also, if this message is printed, the codec might not have any hardware
acceleration support in the first place.
|
Trying to handle such video is almost worthless, but it was requested by
at least 2 users.
If there are no timestamps, enable byte seeking by setting
ts_resets_possible. Use the video FPS (wherever it comes from) and the
audio samplerate for timing. The latter was already done by making the
first packet emit DTS=0; remove this again and do it "properly" at a
higher level.
|
For some reason there were two points in the code where it warned
against non-monotonic video PTS. The one in video.c triggered on PTS
going backwards or making large jumps forwards, while dec_video.c
triggered on PTS going backwards or PTS not changing. Merge them into a
single check, which warns against all cases.
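
The merged check covers all cases in one condition (a sketch with
illustrative names; MP_NOPTS_VALUE is mpv's real "no timestamp"
sentinel, and the jump threshold is made up):

    if (pts != MP_NOPTS_VALUE && last_pts != MP_NOPTS_VALUE &&
        (pts <= last_pts ||                // going backwards or not changing
         pts >= last_pts + MAX_PTS_JUMP))  // large forward jump
    {
        MP_WARN(dec, "Non-monotonic or jumping video PTS: %f -> %f\n",
                last_pts, pts);
    }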
|
This was requested. Apparently some find the old message confusing.
|
This played e.g. a 1264x722 file as 1264x720. There was some code which
dropped the aspect ratio if the video (in original resolution) wasn't
scaled by more than 4 pixels. Commit 5f3c3f8c introduced this (although
I'm not really sure what the code it replaced did).
Just remove this "feature".
|
Instead of "vaapi", simply by changing the probe order.
"vaapi" uses the GLX GL interop, which has caused us more problems than
it solved.
Unfortunately, this also leads to copying if "--hwdec=auto --vo=vaapi"
is used, even though GLX is not involved in this case - but I don't
care enough to make the probe logic cleverer just for this. You can
still get the zero-copy path with --hwdec=vaapi.
|
All of these are now in the supported FFmpeg and Libav versions.
The 3 remaining API checks are for FFmpeg-only things.
|
Omitted a simple, but devastating check. Fixed the relevant commits
now.
This reverts commit 8d24e9d9b8ad1b5d82139980eca148dc0f4a1eab.
diff --git a/video/out/gl_video.c b/video/out/gl_video.c
index 9c8a643..f1ea03e 100644
--- a/video/out/gl_video.c
+++ b/video/out/gl_video.c
@@ -1034,9 +1034,9 @@ static void compile_shaders(struct gl_video *p)
shader_def_opt(&header_conv, "USE_CONV_GAMMA", use_conv_gamma);
shader_def_opt(&header_conv, "USE_CONST_LUMA", use_const_luma);
shader_def_opt(&header_conv, "USE_LINEAR_LIGHT_BT1886",
- gamma_fun == MP_CSP_TRC_BT_1886);
+ use_linear_light && gamma_fun == MP_CSP_TRC_BT_1886);
shader_def_opt(&header_conv, "USE_LINEAR_LIGHT_SRGB",
- gamma_fun == MP_CSP_TRC_SRGB);
+ use_linear_light && gamma_fun == MP_CSP_TRC_SRGB);
shader_def_opt(&header_conv, "USE_SIGMOID", use_sigmoid);
if (p->opts.alpha_mode > 0 && p->has_alpha && p->plane_count > 3)
shader_def(&header_conv, "USE_ALPHA_PLANE", "3");
|
Breaks vo_opengl by default. I'm not able to fix this myself, because I
have no clue about the overcomplicated color management logic. Also,
while this is apparently caused by commit fbacd5, the following commits
all depend on it, so revert them too.
This reverts the following commits:
e141caa97dade07f4d7e0d6c208bcd3493e712ed
653b0dd5295453d9661f673b4ebd02c5ceacf645
729c8b3f641e633474be612e66388c131a1b5c92
fbacd5de31de964f7cd562304ab1c9b4a0d76015
Fixes #1636.
|
We now actually use the TRC tagging information lavc provides us with,
instead of always manually guessing.
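
In libavcodec terms, this means reading the decoder's tag instead of
guessing (color_trc is the real AVCodecContext field; the fallback
helper is illustrative):

    enum AVColorTransferCharacteristic trc = avctx->color_trc;
    if (trc == AVCOL_TRC_UNSPECIFIED)
        trc = guess_trc(avctx);  // only guess if the decoder didn't tag it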
|
Remove coded_width and coded_height. This was originally added in commit
fd7dde40, when BITMAPINFOHEADER was killed. The separate fields became
redundant in commit e68f4be1. Remove them (nothing passed to the
decoders actually changes with _this_ commit).
|
Maybe I don't know what I'm doing. I'm fairly certain though that Intel
does not know what they're doing.
|
A recent behavior change in libavcodec's h264 decoder keeps at least 1
surface even after avcodec_flush_buffers() has been called. We used to
flush the decoder in order to make sure all surfaces are freed, so that
the hw decoder could be safely uninitialized. This doesn't work anymore.
Fix it by closing the AVCodecContext before the hw decoder is
uninitialized. This is actually simpler and more robust. It seems to be
well-supported too.
Fixes invalid read accesses with vaapi-copy and dxva2-copy. These
destroyed the hwdec API fully on uninit, and could not deal with
surfaces surviving the decoder.
Probably fixes #1587.
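
The new uninit order, as a sketch (avcodec_close() is the real
libavcodec API of that era; the surrounding names are illustrative):

    static void uninit_video_decoder(struct lavc_ctx *ctx)
    {
        avcodec_close(ctx->avctx);  // frees every surface the decoder holds
        ctx->hwdec->uninit(ctx);    // now safe: no surfaces survive the decoder
    }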
|
The intention is to allow better testing of vo_opengl with high bit
depth PNGs. This takes libswscale completely out of the loop, which was
previously needed to convert from big endian to little endian.
Also apply a minimal, unrelated cleanup to fmt-conversion.c.
|
This is somewhat imperfect, because detection of hw decoding APIs is
mostly done on demand, and often avoided if not necessary. (For example,
we know very well that there are no hw decoders for certain codecs.)
This also requires every hwdec backend to identify itself (see hwdec.h
changes).
|
Before this commit, each hw backend had their own specific struct types
for context, and some, like VDA, had none at all. Add a context struct
(mp_hwdec_ctx) that provides a somewhat generic way to pass the hwdec
context around. Some things get slightly better, some slightly more
verbose.
mp_hwdec_info is still around; it's still needed, but is reduced to its
role of handling delayed loading of the hwdec backend.
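
A minimal sketch of such a context struct (mp_hwdec_ctx is the name
from this commit, but the fields shown here are illustrative):

    struct mp_hwdec_ctx {
        enum hwdec_type type;  // which backend this is (vdpau, vaapi, vda, ...)
        void *priv;            // backend-specific data, e.g. a VADisplay
    };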
|
Put the Vista+ (_WIN32_WINNT) and the COM C (COBJMACROS) defines into
the build system, instead of defining them over and over in the code.
|
Also, don't use av_log() for mpv output.
|
The ctx->pic check must uninitialize the decoder.
|
Fixes #1337.
|
This is "better", although nobody seems to know how this API is supposed
to work at all.
|
Fixes #1307.
|
This way, no surfaces are in use when uninitializing the hw decoders,
which might help with -copy hw decoders (normal hw decoding is not
affected).
|
Like memcpy_pic, this checks if the strides match first.
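
The idea mirrors memcpy_pic (a sketch with illustrative names): if both
strides equal the row size, the image is contiguous and a single big
memcpy suffices; otherwise, copy line by line.

    #include <string.h>

    static void copy_image(void *dst, int dst_stride, const void *src,
                           int src_stride, int bytes_per_row, int height)
    {
        if (dst_stride == src_stride && src_stride == bytes_per_row) {
            memcpy(dst, src, (size_t)bytes_per_row * height);
        } else {
            for (int y = 0; y < height; y++)
                memcpy((char *)dst + (size_t)y * dst_stride,
                       (const char *)src + (size_t)y * src_stride,
                       bytes_per_row);
        }
    }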
|
If dxva2_init() fails, dxva2_uninit() will be called twice.
|
At least on my machine, reading back the frame with system memcpy is
slower than just using software rendering. Use the optimized gpu_memcpy
from LAV to speed things up.
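
The gist of such a copy is using SSE4.1 streaming loads, which are fast
on write-combined (GPU) memory. A minimal sketch, assuming 16-byte
aligned pointers and a size that is a multiple of 64 (the real
gpu_memcpy also handles unaligned tails):

    #include <smmintrin.h>  // SSE4.1

    static void gpu_memcpy_sse41(void *dst, const void *src, size_t size)
    {
        __m128i *d = dst;
        __m128i *s = (__m128i *)src;  // _mm_stream_load_si128 wants non-const
        for (size_t i = 0; i < size / 16; i += 4) {
            __m128i x0 = _mm_stream_load_si128(s + i + 0);
            __m128i x1 = _mm_stream_load_si128(s + i + 1);
            __m128i x2 = _mm_stream_load_si128(s + i + 2);
            __m128i x3 = _mm_stream_load_si128(s + i + 3);
            _mm_store_si128(d + i + 0, x0);
            _mm_store_si128(d + i + 1, x1);
            _mm_store_si128(d + i + 2, x2);
            _mm_store_si128(d + i + 3, x3);
        }
    }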
|
Shamelessly stolen from ffmpeg. It probably doesn't work - you can debug
it yourself.
|
The private context wasn't freed when codec init failed. Restructure
the code so that this can't happen.
|
This was once central, but now it's almost unused. Only vf_divtc still
uses it for extremely weird and incomprehensible reasons. The use in
stream.c is trivial. Replace these, and remove mpbswap.h.
|
MPlayer traditionally did this because it made sense: the most important
formats (avi, asf/wmv) used Microsoft formats, and many important
decoders (win32 binary codecs) also did. But the world has changed, and
I've always wanted to get rid of this thing from the codebase.
demux_mkv.c internally still uses it, because, guess what, Matroska has
a VfW muxing mode, which uses these data structures natively.
|
The oldest supported FFmpeg release doesn't provide
av_vdpau_alloc_context(). With these versions, the application has no
other choice than to hard code the size of AVVDPAUContext. (On the other
hand, there's av_alloc_vdpaucontext(), which does the same thing, but is
FFmpeg specific - not sure if it was available early enough, so I'm not
touching it.)
Newer FFmpeg and Libav releases require you to call this function, for
ABI compatibility reasons. It's the typical lack of foresight that
makes FFmpeg APIs terrible. mpv successfully pretended that this crap
didn't exist (ABI compatibility is near impossible to achieve anyway) -
but it appears newer developments in Libav change the function from
initializing the struct with all-zeros to something else, and mpv vdpau
decoding would stop working as soon as this new work is released.
So, add a configure test (sigh).
CC: @mpv-player/stable
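
The essence of the test is a compile check along these lines (a sketch;
mpv's actual waf test is expressed differently):

    #include <libavcodec/vdpau.h>
    int main(void) { return av_vdpau_alloc_context() == NULL; }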