Get it out of the way in the common code. MPGLContext.dwm_flush_opt can
be removed as well, as soon as the option system gets overhauled.
This was preventing compilation on systems without X11 headers.
This was previously trying to use the video_output_vaapi symbol despite
vo_vaapi.c being guarded by the vaapi-x11 option.
Make handling of metadata slightly more generic, and add reading of the
"PERFORMER" fields. There are some more fields, but for now let's leave
it at this.
TRACK-specific PERFORMER fields have to be read from the per-chapter
metadata (somewhat obscure).
Fixes #2328.
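
A minimal sketch of reading that per-chapter field through the libmpv
client API, assuming the tag is exported under the generic
"chapter-metadata/by-key/<KEY>" property path (the path scheme is real;
whether PERFORMER appears under exactly this key is an assumption):

    #include <stdio.h>
    #include <mpv/client.h>

    /* Print the PERFORMER tag of the currently playing chapter, if any. */
    void print_chapter_performer(mpv_handle *mpv)
    {
        char *s = mpv_get_property_string(mpv,
                      "chapter-metadata/by-key/PERFORMER");
        if (s) {
            printf("PERFORMER: %s\n", s);
            mpv_free(s);
        }
    }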
A minor, but apparently common feature request. Fixes #2360.
You needed to select a GL backend with the backend suboption. This was
confusing.
Fixes #2361.
We have to disable it, or shader compilation will fail.
Fixes #2362.
This gets rid of an old hack, VOFLAG_HIDDEN. Although handling of it has
been sane for a while, it used to cause much pain, and is still
unintuitive and weird even today.
The main reason for this hack is that OpenGL selects an X11 Visual for
you, and you're supposed to use this Visual when creating the X window
for the OpenGL context. Which means the X window can't be created early
in the common X11 init code, but the OpenGL code needs to do something
before that. API-wise you need separate functions for X11 init and X11
window creation. The VOFLAG_HIDDEN hack conflated window creation and
the entrypoint for resizing on video resolution change into one
function, vo_x11_config_vo_window(). This required all platform backends
to handle this flag, even if they didn't need this mechanism.
Wayland still uses this for minor reasons (alpha support?), so the
wayland backend must be changed before the flag can be entirely removed.
If interpolation is enabled, then this causes heavy artifacts if done
while unpaused. It's preferable to allow a latency of a few frames for
the change to take full effect instead. If this is done paused, the
frame is fully redrawn anyway.
This reverts commit d11184a256ed709a03fa94a4e3940eed1b76d76f.
Unfortunately, there was a lot of unexpected resistance.
Do note that this is still extremely slow, crappy, etc.
Note that vo_x11.c was further edited. Compared to the removed vo_x11.c,
an additional ~200 lines of code were removed in order to simplify it. I
tried to strip it down as much as possible. In particular, support for
odd non-32 bit formats (24, 16, 15, 8 bit) is dropped.
Closes #2300.
Don't print the URL being opened twice; stream.c and stream_lavf.c
each printed it once. Remove the logging from stream_lavf.c, and move
the log call to a more interesting point.
The vf_format suboption is replaced with --video-output-levels (a global
option and property). In particular, the parameter is removed from
mp_image_params. The mechanism is moved to the "video equalizer", which
also handles common video output customization like brightness and
contrast controls.
The new code is slightly cleaner, and the top-level option is slightly
more user-friendly than the vf_format sub-option was.
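
A minimal sketch of driving the new option from a libmpv client (the
property name is from this change; error handling elided):

    #include <mpv/client.h>

    /* Force limited-range (TV) output levels; "full" and "auto" are the
     * other usual values. */
    int force_limited_levels(mpv_handle *mpv)
    {
        return mpv_set_property_string(mpv, "video-output-levels", "limited");
    }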
This AVPacket field was a hack against the fact that the duration field
was merely an int (too small for things like subtitle durations). Newer
libavcodec drops this field and makes duration 64 bit.
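
A minimal sketch of the compatibility shim this implies; the exact
version cutoff (major 57 here) is an assumption:

    #include <libavcodec/avcodec.h>

    /* Return a usable packet duration across libavcodec versions. */
    static int64_t packet_duration(const AVPacket *pkt)
    {
    #if LIBAVCODEC_VERSION_MAJOR < 57
        /* Old lavc: int duration; long durations live in the hack field. */
        if (pkt->convergence_duration > 0)
            return pkt->convergence_duration;
    #endif
        return pkt->duration; /* 64-bit in newer libavcodec */
    }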
This essentially reverts commit 009dfbe3. FFmpeg VideoToolbox support
is wacky, and can cause major issues, such as not being able
to decode a single frame. (E.g. by playing a .ts file. This should be
fixed in FFmpeg eventually.)
This is not a straight revert of the commit; just a functional one. We
keep the slightly simpler code structure.
Get rid of the VDA specifics like naming or ancient pixel formats.
It doesn't deal with VDA at all anymore. Rename it to hwdec_osx.c. Not
using hwdec_videotoolbox.c, because that would give it the longest
source path in this project yet. (Also, this code isn't even
VideoToolbox-specific, other than the name of the pixel format used.)
VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no
reason to support VDA anymore. In fact, we had a bug that made VDA not
usable with older FFmpeg versions in some newer mpv releases.
VideoToolbox is supported even on slightly older OSX versions, and if
not, you can still run mpv without hw decoding.
Definitely not needed anymore, and fixes a crash in some weird corner-
cases.
The extradata freeing is apparently still needed, though. (Because a
codec context can be opened again, which makes no sense, but ok.)
This can happen with USB audio. There was already code for this, but
something in mpv and ALSA changed - and now the old code is not
necessarily triggered anymore. It probably depends on the exact
situation.
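
A minimal sketch of detecting the vanished device in the write path,
assuming the usual ALSA signals (-ENODEV from I/O calls, or the
SND_PCM_STATE_DISCONNECTED state); the recovery policy is up to the AO:

    #include <alsa/asoundlib.h>

    int write_frames(snd_pcm_t *pcm, const void *buf, snd_pcm_uframes_t n)
    {
        snd_pcm_sframes_t r = snd_pcm_writei(pcm, buf, n);
        if (r == -ENODEV || snd_pcm_state(pcm) == SND_PCM_STATE_DISCONNECTED)
            return -1; /* device is gone (e.g. USB unplug): reload the AO */
        if (r < 0)
            r = snd_pcm_recover(pcm, r, 1); /* normal XRUN/suspend handling */
        return r < 0 ? -1 : 0;
    }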
This could sometimes cause crashes in hotplug events. (Apparently in
cases when CoreAudio changes its state asynchronously, or such.)
CA_GET_STR() does not set the string if there was an error, so errors
have to be strictly checked before using it.
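
A minimal sketch of the strict-checking pattern (CA_GET_STR is an mpv
helper macro; shown here via the underlying CoreAudio query):

    #include <stdbool.h>
    #include <CoreAudio/CoreAudio.h>

    static bool get_device_name(AudioObjectID dev, CFStringRef *out)
    {
        AudioObjectPropertyAddress addr = {
            .mSelector = kAudioObjectPropertyName,
            .mScope    = kAudioObjectPropertyScopeGlobal,
            .mElement  = kAudioObjectPropertyElementMaster,
        };
        CFStringRef name = NULL;
        UInt32 size = sizeof(name);
        OSStatus err = AudioObjectGetPropertyData(dev, &addr, 0, NULL,
                                                  &size, &name);
        if (err != noErr)
            return false; /* 'name' was never set; do not use it */
        *out = name;
        return true;
    }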
Pretty trivial with the new EGL interop.
Fixes #478.
Move the ugliness from x11egl.c to common.c, so that the ugliness
doesn't have to be duplicated in wayland.c.
There are at least 2 ways of using VAAPI without X11 (Wayland, DRM).
Remove the X11 requirement from the decoder part and the EGL interop.
This will be used by a following commit, which adds Wayland support.
The worst of it is the decoder part, which includes a bad hack for
using the decoder without any VO interop (also known as "vaapi-copy"
mode). Separate the X11 parts so that they're self-contained. For the
EGL interop code we do something similar (it's kept slightly simpler,
because it essentially only has to translate between our silly
MPGetNativeDisplay abstraction and the vaGetDisplay...() call).
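
A minimal sketch of what the translation to the vaGetDisplay...() calls
looks like; each variant comes from its own libva backend header:

    #include <va/va_x11.h>
    #include <va/va_wayland.h>
    #include <va/va_drm.h>

    /* Pick the VADisplay constructor matching whatever native display
     * the windowing backend actually has. */
    VADisplay open_va_display(Display *x11, struct wl_display *wl, int drm_fd)
    {
        if (x11)
            return vaGetDisplay(x11);
        if (wl)
            return vaGetDisplayWl(wl);
        return vaGetDisplayDRM(drm_fd);
    }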
Fixes #2353.
We could also just check at build time, but since it's not much, just
redefine them inline if not present.
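
The pattern, sketched with the EGL_EXT_image_dma_buf_import constants
(whether these are the exact defines the commit means is an assumption;
the values match the extension spec):

    #ifndef EGL_LINUX_DMA_BUF_EXT
    #define EGL_LINUX_DMA_BUF_EXT         0x3270
    #define EGL_LINUX_DRM_FOURCC_EXT      0x3271
    #define EGL_DMA_BUF_PLANE0_FD_EXT     0x3272
    #define EGL_DMA_BUF_PLANE0_OFFSET_EXT 0x3273
    #define EGL_DMA_BUF_PLANE0_PITCH_EXT  0x3274
    #endif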
It looks like my hope that we can unconditionally include EGL headers in
the OpenGL code is not coming true, because OSX does not support EGL at
all. So I prefer loading the VAAPI EGL/GL specific extensions manually,
because it's less of a mess. Partially reverts commit d47dff3f.
While EGL 1.4 seemed a bit ambiguous about this to me, a later
paragraph actually says quite clearly that core functions are not
supported with eglGetProcAddress().
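
A minimal sketch of the resulting loader policy: core entrypoints via
dlsym() on the GL library, eglGetProcAddress() for extensions only:

    #include <dlfcn.h>
    #include <EGL/egl.h>

    void *get_gl_fn(void *libgl_handle, const char *name, int is_extension)
    {
        if (!is_extension)
            return dlsym(libgl_handle, name);       /* core function */
        return (void *)eglGetProcAddress(name);     /* extensions only */
    }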
Normally, we prefer GLX on X11. But for the VAAPI EGL interop, we
obviously want EGL. Since nvidia does not provide EGL with desktop GL
yet, we can leave it to the autoprobing. Just make sure some failure
messages don't unnecessarily show up in the nvidia case.
This breaks VAAPI GLX interop by default, but I don't care much. If
you use --hwdec=auto (which you should if you want hw decoding), this
should fall back to vaapi-copy instead.
Get rid of the config_window_x11_egl() indirection.
Probe the surface format, and check whether it's really something we
support. This also does a complete check whether the EGL interop works
at all (the only way to find this out is actually running this code).
Also, support YV12. Under some circumstances, vaapi (with Intel
drivers) can be made to use this format.
Unfortunately, the Intel drivers show some very weird behavior, which
is hopefully a bug. insane_hack() provides a very evil workaround (see
comments). A proper solution might be passing the hw format as part of
mp_image_params, but as long as hw surfaces appear to be able to change
the format on the fly, attempting this is probably not worth the extra
complexity and likely fragility. The hack allows us to pretend that
there is sane behavior for now.
Fixes #2344.
This makes it much faster if the surface is really mapped from GPU
memory. It's slightly slower than system memcpy if used on system
memory. We don't reliably know which type of memory it's in, so we use
the GPU memcpy in all cases.
Fixes #2317.
Make the GPU memcpy from the dxva2 code generally useful to other parts
of the player.
We need to check at configure time whether SSE intrinsics work at all.
(At least in this form, they won't work on clang, for example. They also
won't work on non-x86.)
Introduce a mp_image_copy_gpu(), and make the dxva2 code use it. Do some
awkward stuff to share the existing code used by mp_image_copy(). I'm
hoping that FFmpeg will sooner or later provide a function like this, so
we can remove most of this again. (There is a patch, but it's been stuck
in limbo forever.)
All this is used by the following commit.
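
A minimal sketch of the technique (not mpv's actual function): SSE4.1
streaming loads (MOVNTDQA) read far faster from write-combined GPU
memory than plain loads. This assumes 16-byte-aligned pointers and a
size that is a multiple of 64; real code needs tail handling and the
configure check mentioned above:

    #include <stddef.h>
    #include <smmintrin.h>

    void gpu_memcpy(void *restrict dst, const void *restrict src, size_t size)
    {
        __m128i *d = dst;
        const __m128i *s = src;
        for (size_t i = 0; i < size / 16; i += 4) {
            /* Streaming loads pull whole cache lines out of USWC memory. */
            __m128i a = _mm_stream_load_si128((__m128i *)(s + i + 0));
            __m128i b = _mm_stream_load_si128((__m128i *)(s + i + 1));
            __m128i c = _mm_stream_load_si128((__m128i *)(s + i + 2));
            __m128i e = _mm_stream_load_si128((__m128i *)(s + i + 3));
            _mm_store_si128(d + i + 0, a);
            _mm_store_si128(d + i + 1, b);
            _mm_store_si128(d + i + 2, c);
            _mm_store_si128(d + i + 3, e);
        }
    }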
Flushing buffers, and thereby triggering decoder reinitialisation,
needs to happen before attempting, and failing, to decode.
Broken by commit d47dff3f. If something is going to include EGL.h,
header_fixes.h has to know. This definitely affected vo_rpi, and
probably affects wayland builds (with x11egl disabled) as well.
Checking and resetting the VAImage.buf field is nonsense, even if it
happened to work out in the normal case. buf is actually freed when
vaDestroyImage() is called (not quite intuitive), and we need an extra
field to know whether vaReleaseBufferHandle() has to be called.
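
A minimal sketch of the fix's shape: an explicit flag for the buffer
handle instead of poking at VAImage.buf (struct and names illustrative):

    #include <stdbool.h>
    #include <va/va.h>

    struct mapped_image {
        VAImage image;
        bool handle_acquired; /* set after vaAcquireBufferHandle() succeeds */
    };

    void unmap_image(VADisplay dpy, struct mapped_image *m)
    {
        if (m->handle_acquired) {
            vaReleaseBufferHandle(dpy, m->image.buf);
            m->handle_acquired = false;
        }
        if (m->image.image_id != VA_INVALID_ID) {
            vaDestroyImage(dpy, m->image.image_id); /* frees image.buf too */
            m->image.image_id = VA_INVALID_ID;
        }
    }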
Printing "Using vaDeriveImage()" every frame is too verbose, so raise
the log level.
mp_image strides are in int and not unsigned int; fix this. It's not
like it actually matters, though.
Finish a comment.
This specific FourCC has its planes swapped compared to FFmpeg yuv420p.
Add the upstream symbolic names as comments. Normally, these should be
defined in libdrm's drm_fourcc.h header. But DRM_FORMAT_R8 and
DRM_FORMAT_GR88 are not defined anywhere, except in the kernel userland
headers of Linux 4.3 (!). We don't want mpv to depend on bleeding-edge
Linux kernel headers, so this will have to do.
Also, just for completeness, add fourccs for the 3 and 4 channel
formats. I didn't manage to test them, though.
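
The fallback defines, sketched; the values follow the kernel's
drm_fourcc.h:

    #include <stdint.h>

    #ifndef fourcc_code
    #define fourcc_code(a, b, c, d) ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
                                     ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))
    #endif
    #ifndef DRM_FORMAT_R8
    #define DRM_FORMAT_R8   fourcc_code('R', '8', ' ', ' ') /* 8 bpp red */
    #endif
    #ifndef DRM_FORMAT_GR88
    #define DRM_FORMAT_GR88 fourcc_code('G', 'R', '8', '8') /* 16 bpp GR */
    #endif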
Reduces the amount of hardcoded assumptions about the layout
drastically. (Now adding yuv420 support would be just adjusting an if,
if you ignore the other problems, such as determining the hw format at
all early enough.)
Not sure what I was thinking.
Don't call eglDestroyImageKHR() on the same ID possibly more than once.
Clear the image reference on termination, or we would leak up to 1 image
per VO recreation.
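
The guard pattern this boils down to, sketched (the function pointer is
the loaded KHR entrypoint):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    void destroy_egl_image(EGLDisplay dpy, EGLImageKHR *img,
                           PFNEGLDESTROYIMAGEKHRPROC DestroyImageKHR)
    {
        if (*img != EGL_NO_IMAGE_KHR) {
            DestroyImageKHR(dpy, *img);
            *img = EGL_NO_IMAGE_KHR; /* never destroy the same ID twice */
        }
    }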
Should work much better than the old GLX interop code. Requires Mesa 11,
and explicitly selecting the X11 EGL backend with:
--vo=opengl:backend=x11egl
Should it turn out that the new interop works well, we will try to
autodetect EGL by default.
This code still uses some bad assumptions, like expecting surfaces to be
in NV12. (This is probably ok, because virtually all HW will use this
format. But we should at least check this on init or so, instead of
failing to render an image if our assumption doesn't hold up.)
This repo was a lot of help: https://github.com/gbeauchesne/ffvademo
The kodi code was also helpful (the magic FourCCs it uses for
EGL_LINUX_DRM_FOURCC_EXT are documented nowhere, and
EGL_IMAGE_INTERNAL_FORMAT_EXT as used in ffvademo does
not actually exist).
(This is the 3rd VAAPI GL interop that was implemented in this player.)
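
A minimal sketch of the core of such an interop: importing one plane of
an NV12 surface as an EGLImage via EGL_EXT_image_dma_buf_import, with
DRM_FORMAT_R8 as the single-channel FourCC for luma (fd/offset/pitch
would come from vaAcquireBufferHandle() and the VAImage layout):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <drm_fourcc.h> /* libdrm; DRM_FORMAT_R8 (or the fallback above) */

    EGLImageKHR import_luma_plane(EGLDisplay dpy,
                                  PFNEGLCREATEIMAGEKHRPROC CreateImageKHR,
                                  int fd, int w, int h, int offset, int pitch)
    {
        EGLint attribs[] = {
            EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_R8,
            EGL_WIDTH,                     w,
            EGL_HEIGHT,                    h,
            EGL_DMA_BUF_PLANE0_FD_EXT,     fd,
            EGL_DMA_BUF_PLANE0_OFFSET_EXT, offset,
            EGL_DMA_BUF_PLANE0_PITCH_EXT,  pitch,
            EGL_NONE,
        };
        /* The client-buffer argument is NULL for the dma-buf target. */
        return CreateImageKHR(dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
                              NULL, attribs);
    }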
The VAAPI EGL interop code will need access to the X11 Display. While
GLX could return it from the current GLX context, EGL has no such
mechanism. (At least no standard one supported by all implementations.)
So mpv makes up such a mechanism.
For internal purposes, this is a rather awkward solution, but it's
needed for libmpv anyway.
These extensions use a bunch of EGL types, so we need to include the EGL
headers in common.h to use our GL function loader with this.
In the future, we should probably require presence of the EGL headers to
reduce the hacks. This might not be so simple, at least on OSX, so for
now this has to do.
Originally, this was disabled, because it was useless and I was afraid
it could possibly break autodetection and proper fallback.
Surfaces used by hardware decoding formats can be mapped exactly like a
specific software pixel format, e.g. RGBA or NV12. p->image_params is
supposed to be set to this format, but it wasn't.
(How did this ever work?)
Also, setting params->imgfmt in the hwdec interop drivers is pointless
and redundant. (Change them to asserts, because why not.)
This is a pseudo-OpenGL extension for letting libmpv query native
windowing system handles from the API user. (It uses the OpenGL
extension mechanism because I'm lazy. In theory it would be nicer to let
the user pass them with mpv_opengl_cb_init_gl(), but this would require
a more intrusive API change to extend its argument list.)
The naming of the extension and associated function was unnecessarily
Windows specific (using "D3D"), even though it would work just fine for
other platforms. So deprecate the old names and introduce new ones. The
old ones still work.
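
A minimal sketch of the API-user side, assuming the names documented in
opengl_cb.h ("glMPGetNativeDisplay", handle name "x11"); the GL loader
call at the end is a hypothetical placeholder:

    #include <string.h>

    static void *g_x11_display; /* set to the X11 Display* during init */

    static void *native_display(const char *name)
    {
        return strcmp(name, "x11") == 0 ? g_x11_display : NULL;
    }

    /* Passed to mpv_opengl_cb_init_gl() as the get_proc_address callback. */
    static void *my_get_proc_address(void *ctx, const char *name)
    {
        if (strcmp(name, "glMPGetNativeDisplay") == 0)
            return (void *)native_display;
        return platform_gl_get_proc_address(name); /* hypothetical loader */
    }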
This turns the old scalers (inherited from MPlayer) into a pre-
processing step (after color conversion and before scaling). The code
for the "sharpen5" scaler is reused for this.
The main reason MPlayer implemented this as scalers was perhaps because
FBOs were too expensive, and making it a scaler allowed implementing
this in one pass. But unsharp masking is not really a scaler, and I would
guess the result is more like combining bilinear scaling and unsharp
masking.
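
The per-pixel step, sketched for reference (blurred is the low-pass
result of the pre-processing pass; strength is the user parameter):

    /* Unsharp masking: add back a scaled difference between the source
     * and a blurred copy of it. */
    static inline float unsharp(float src, float blurred, float strength)
    {
        return src + strength * (src - blurred);
    }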
At least one thing the current option code can do right.
This was redundant with forcing the value via vf_format, so the vo_opengl
sub-option was removed. This field is just a leftover.
It's just about loading and caching small files, and does not
necessarily have anything to do with shaders. Move it to video.c, where
it's used.
Will this shit ever actually work?
Fixes #2339.
Usually, libavcodec ignores errors reported by the hardware decoding
API, so it's not like we can actually escape if the hardware is somehow
acting up.
For normal fallback purposes, or if the parts of the hw decoding API
that we actually check fail, we do this by setting and checking the
hwdec_failed flag anyway.
This can happen if the hw decoder allocates padded surfaces (e.g.
mod16), but the VPP output surface was allocated with the exact size.
Apparently VPP requires matching input and output sizes, or it will add
artifacts. In this case, it added mirrored pixels to the bottom few
pixels.
Note that the previous commit should have fixed this. But it didn't
work, while this commit does.
Fixes #2320.
If not set, VPP will use the whole surface. This is a problem if the
surfaces are padded, and especially if the surfaces are padded by
different amounts.
This is an attempt to fix #2320, but it appears to do nothing at all.
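
A minimal sketch of what setting the regions looks like (field names
are from libva's va_vpp.h; surrounding pipeline setup elided):

    #include <va/va.h>
    #include <va/va_vpp.h>

    void set_vpp_regions(VAProcPipelineParameterBuffer *param,
                         VARectangle *in, VARectangle *out, int w, int h)
    {
        *in  = (VARectangle){ .x = 0, .y = 0, .width = w, .height = h };
        *out = *in;
        param->surface_region = in;  /* source rect inside the padded surface */
        param->output_region  = out; /* matching rect in the output surface */
    }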
The comment was largely outdated, and described the old situation when
we used a "violent" fallback by making get_buffer2 fail completely.
Also, for the c