| Commit message |
| |
The X11 error handler is global, not per-display. If another Xlib
user exists in the same process, the two can conflict. In theory, it
might happen that e.g. another library sets its own error handler
(overwriting mpv's), and then, some time after mpv has closed its
display, restores mpv's handler. To mitigate this, check whether the
error log instance is actually set, instead of possibly crashing.
The change in vo_x11_uninit() is mostly cosmetic.
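Roughly what the guard amounts to (a minimal sketch; the error_log global is a stand-in for mpv's log instance, and the handler would be installed with XSetErrorHandler()):

```c
#include <stdio.h>
#include <X11/Xlib.h>

/* Stand-in for mpv's error log instance (illustrative only). */
static FILE *error_log;

static int x11_error_handler(Display *display, XErrorEvent *event)
{
    /* The handler is process-global: it can fire after our display was
     * closed if another Xlib user restored a stale pointer to this
     * function. Check the log instance instead of crashing. */
    if (!error_log)
        return 0;
    char msg[128];
    XGetErrorText(display, event->error_code, msg, sizeof(msg));
    fprintf(error_log, "X11 error: %s\n", msg);
    return 0;
}
```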
| |
Also add missing documentation for fs-only, and correct the default.
| |
In order to honor the differences between OpenGL and Direct3D coordinate
systems, ANGLE uses a full FBO copy merely to flip the final frame
vertically. This can be avoided with the EGL_ANGLE_surface_orientation
extension.
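Roughly how the extension is used (a sketch based on the extension spec; the fallback constants and function name are illustrative, error handling omitted):

```c
#include <EGL/egl.h>

#ifndef EGL_OPTIMAL_SURFACE_ORIENTATION_ANGLE
#define EGL_OPTIMAL_SURFACE_ORIENTATION_ANGLE  0x33A7
#define EGL_SURFACE_ORIENTATION_ANGLE          0x33A8
#define EGL_SURFACE_ORIENTATION_INVERT_Y_ANGLE 0x0002
#endif

// If the driver prefers an inverted-Y surface, request one; we can then
// render "upside down" directly and skip the FBO flip pass.
static EGLSurface create_oriented_surface(EGLDisplay dpy, EGLConfig cfg,
                                          EGLNativeWindowType win,
                                          int *flipped)
{
    EGLint orient = 0;
    eglGetConfigAttrib(dpy, cfg, EGL_OPTIMAL_SURFACE_ORIENTATION_ANGLE,
                       &orient);
    *flipped = orient == EGL_SURFACE_ORIENTATION_INVERT_Y_ANGLE;

    static const EGLint attrs[] = {
        EGL_SURFACE_ORIENTATION_ANGLE, EGL_SURFACE_ORIENTATION_INVERT_Y_ANGLE,
        EGL_NONE,
    };
    return eglCreateWindowSurface(dpy, cfg, win, *flipped ? attrs : NULL);
}
```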
| |
I hope this does what we expect it to do: destroy the EGLDisplay
specific to our HDC. (Some implementations will terminate all EGL
contexts in the whole process.)
eglReleaseThread() merely calls eglMakeCurrent(0, 0, 0, 0), which is
not enough.
This commit also fixes the problem addressed by the previous commit,
but I think both changes are needed to make our API usage clean.
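The teardown order this implies, roughly (sketch; error handling omitted):

```c
#include <EGL/egl.h>

static void egl_shutdown(EGLDisplay display, EGLContext context,
                         EGLSurface surface)
{
    // Unbind first; this is all eglReleaseThread() would do for us.
    eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglDestroySurface(display, surface);
    eglDestroyContext(display, context);
    eglTerminate(display);   // destroy the EGLDisplay's resources
    eglReleaseThread();      // then drop per-thread EGL state
}
```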
| |
If ANGLE was probed before (but rejected), the ANGLE API can remain
"initialized", and eglGetCurrentDisplay() will return a non-NULL
EGLDisplay. If a native GL context is then used, the ANGLE/EGL API will
(apparently) keep working alongside the native OpenGL API. Since GL
objects are just numbers, they'll simply fail to interact, and OpenGL
will get invalid textures. For some reason this results in black
textures.
With VAAPI-EGL, something similar could happen in theory, but didn't in
practice.
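A sketch of the kind of cleanup that avoids this (probe_angle_usable() is a hypothetical check standing in for the actual probing):

```c
#include <EGL/egl.h>

static int try_angle(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, NULL, NULL))
        return -1;
    if (!probe_angle_usable(dpy)) {   // hypothetical probe
        eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
        eglTerminate(dpy);   // eglGetCurrentDisplay() won't report us later
        return -1;
    }
    return 0;
}
```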
| |
Introduce hwdec-current and hwdec-interop properties.
Deprecate hwdec-detected, which never made a lot of sense, and which is
replaced by the new properties. hwdec-active also becomes useless, as
hwdec-current is a superset, so it's deprecated too (for now).
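Reading the new properties through the client API might look like this (sketch):

```c
#include <stdio.h>
#include <mpv/client.h>

static void print_hwdec_state(mpv_handle *mpv)
{
    char *current = mpv_get_property_string(mpv, "hwdec-current");
    char *interop = mpv_get_property_string(mpv, "hwdec-interop");
    printf("hwdec-current=%s hwdec-interop=%s\n",
           current ? current : "(unavailable)",
           interop ? interop : "(unavailable)");
    mpv_free(current);
    mpv_free(interop);
}
```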
| |
Cache misses are a normal and expected part of the operation of a cache.
It doesn't really make sense to show a user-visible warning for them.
To work around this, just skip trying to open the cache if it doesn't
exist yet.
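A minimal sketch of the idea (cache_path is hypothetical):

```c
#include <stdio.h>
#include <sys/stat.h>

static FILE *open_cache(const char *cache_path)
{
    struct stat st;
    if (stat(cache_path, &st) != 0)
        return NULL;    // expected first-run miss: stay silent
    // Failures past this point are real errors worth warning about.
    return fopen(cache_path, "rb");
}
```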
| |
First of all, black point compensation is now on by default. This is
really rather harmless and only improves the result (where "improvement"
means "less black clipping").
Second, this adds an option to limit the ICC profile's contrast, which
helps for untagged matrix profiles that are implicitly black-scaled even
in colorimetric intent. (Note that this relies on BPC being enabled to
work properly, which is why the two changes are tied together.)
Third, this uses the LittleCMS built-in black point estimator instead of
relying on the presence of accurate A2B tables. This also checks tags
and does some amount of noise elimination.
If the option is unspecified and the profile is missing black point
information, print a warning instructing the user to set the option, and
fall back to a contrast of 1000.
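In lcms2 terms, this amounts to something like the following (sketch; "display.icc" is a hypothetical profile path, error handling omitted):

```c
#include <lcms2.h>

static cmsHTRANSFORM make_icc_transform(void)
{
    cmsHPROFILE src = cmsCreate_sRGBProfile();
    cmsHPROFILE dst = cmsOpenProfileFromFile("display.icc", "r");

    // The built-in estimator mentioned above.
    cmsCIEXYZ black;
    if (!cmsDetectDestinationBlackPoint(&black, dst,
                                        INTENT_RELATIVE_COLORIMETRIC, 0)) {
        // No usable black point info: warn and assume a contrast of 1000.
    }

    return cmsCreateTransform(src, TYPE_RGB_16, dst, TYPE_RGB_16,
                              INTENT_RELATIVE_COLORIMETRIC,
                              cmsFLAGS_BLACKPOINTCOMPENSATION);
}
```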
| |
Fixes hardware decoding of most mpeg2 things.
| |
The vdpau_mixer could fail to be recreated properly if preemption
occurred at some point before playback initialization (like when using
--hwdec-preload and the opengl-cb API).
Normally, the vdpau_mixer was supposed to be marked invalid by the
components using it when they detect a preemption, e.g. in
hwdec_vdpau.c. That code, however, didn't mark the vdpau_mixer as
invalid if preemption was detected in reinit(), only in map_image().
It's cleaner to detect preemption directly in the vdpau_mixer, which
ensures it's always recreated correctly.
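The shape of the fix, with hypothetical helper names (the real code goes through mpv's vdpau context):

```c
#include <stdint.h>

static int mixer_render(struct vdpau_mixer *mx)
{
    uint64_t pc = get_preemption_counter(mx->ctx);  // hypothetical
    if (pc != mx->preemption_counter) {
        destroy_mixer_objects(mx);       // old handles are dead anyway
        if (recreate_mixer_objects(mx) < 0)
            return -1;
        mx->preemption_counter = pc;     // recovered; safe to render
    }
    return render_frame(mx);             // hypothetical
}
```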
| |
The "fs-only" choice sets the _NET_WM_BYPASS_COMPOSITOR to 1 if the
window is fullscreened, and 0 otherwise. (0 is specified to be the
implicit default - i.e. no change is requested in windowed mode.)
In particular, change the default to "fs-only".
Fixes #2582.
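Setting the property boils down to this (sketch):

```c
#include <X11/Xlib.h>
#include <X11/Xatom.h>

static void set_bypass_compositor(Display *dpy, Window w, int fullscreen)
{
    long value = fullscreen ? 1 : 0;  // 0 = "no preference" per the spec
    Atom atom = XInternAtom(dpy, "_NET_WM_BYPASS_COMPOSITOR", False);
    XChangeProperty(dpy, w, atom, XA_CARDINAL, 32, PropModeReplace,
                    (unsigned char *)&value, 1);
}
```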
| |
Including initguid.h at the top of a file that uses references to GUIDs
causes the GUIDs to be declared globally with __declspec(selectany). The
'selectany' attribute tells the linker to consolidate multiple
definitions of each GUID, which would be great except that, in Cygwin
and MinGW GCC 6.1, this method of linking makes the GUIDs conflict with
the ones declared in libuuid.a.
Since initguid.h obsoletes libuuid.a in modern compilers that support
__declspec(selectany), add initguid.h to all files that use GUIDs and
remove libuuid.a from the build.
Fixes #3097
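The include order this implies, for any file referencing GUIDs (sketch; the DirectX headers are just examples):

```c
// initguid.h must come first: it turns the GUIDs declared by the
// following headers into __declspec(selectany) definitions in this
// object file, so no separate GUID library is needed at link time.
#include <initguid.h>
#include <d3d9.h>
#include <dxva2api.h>
```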
| |
Should fix #3076 (partially).
Signed-off-by: wm4 <wm4@nowhere>
| |
Fit the whole window or just the client area, according to the value of
the --fit-border option.
Fixes #2935.
| |
Subtraction of 1 is not necessary, because the .right and .bottom values
of the RECT struct describe a point one pixel outside of the rectangle.
Fixes #2935. (2.b)
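In code terms (sketch):

```c
#include <windows.h>

static void client_size(HWND hwnd, int *w, int *h)
{
    RECT rc;
    GetClientRect(hwnd, &rc);
    // .right/.bottom lie one pixel outside the rectangle, so the
    // difference is already the size - no "- 1" needed.
    *w = rc.right - rc.left;
    *h = rc.bottom - rc.top;
}
```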
| |
fixes #3092
| |
Slight simplification, IMHO.
| |
In particular, this moves the depth test to common code.
Should be functionally equivalent, except that for DXVA2, the
IDirectXVideoDecoderService_GetDecoderRenderTargets API is potentially
called more often.
| |
Gets rid of some silliness, and might be useful in the future.
| |
Legacy desktop-GL-only symbols; broken by the previous commit.
| |
This gives us 16-bit fixed-point texture formats, including the ability
to sample from them with linear filtering and to use them as FBO
attachments.
The integer texture format path is still there for the sake of ANGLE,
which does not support GL_EXT_texture_norm16 yet.
The change to pass_dither() is needed because the code path using
GL_R16 for the dither texture relies on glTexImage2D being able to
convert from GL_FLOAT to GL_R16. GLES does not allow this. It could be
trivially fixed by doing the conversion ourselves, but I'm too lazy to
do this now.
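A sketch of what the extension enables on GLES (the GL_R16_EXT define is from the extension; note the GL_UNSIGNED_SHORT upload, since GLES won't convert from GL_FLOAT):

```c
#include <stdint.h>
#include <GLES3/gl3.h>

#ifndef GL_R16_EXT
#define GL_R16_EXT 0x822A   // from GL_EXT_texture_norm16
#endif

static GLuint create_r16_tex(int w, int h, const uint16_t *data)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R16_EXT, w, h, 0,
                 GL_RED, GL_UNSIGNED_SHORT, data);
    // The extension makes this format linearly filterable and
    // color-renderable (usable as an FBO attachment).
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```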
| |
This shouldn't make much of a difference, but should make the following
commit simpler.
| |
This should be ok. eglBindTexImage() just associates the texture, and
does not make a copy (not even a conceptual one).
| |
Not much we can do about it. If there are many complaints, a mechanism
to automatically disable interop in such cases could be added.
| |
Basically this gets rid of the need for the accessors in d3d11va.h, and
the code can be cleaned up a little bit.
Note that libavcodec defines only an ID3D11VideoDecoderOutputView
pointer in the last plane pointer (data[3]), but it tolerates/passes
through the other plane pointers we set.
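The convention in code form (sketch; COBJMACROS style, error handling omitted):

```c
#define COBJMACROS
#include <d3d11.h>
#include <libavutil/frame.h>

static UINT frame_array_slice(const AVFrame *frame)
{
    // data[3] is defined by libavcodec to hold the decoder view.
    ID3D11VideoDecoderOutputView *view =
        (ID3D11VideoDecoderOutputView *)frame->data[3];
    D3D11_VIDEO_DECODER_OUTPUT_VIEW_DESC desc;
    ID3D11VideoDecoderOutputView_GetDesc(view, &desc);
    return desc.Texture2D.ArraySlice;  // slice within the texture array
}
```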
| |
We want to prefer d3d11va over anything dxva2. But since dxva2 copyback
is currently more efficient than d3d11va's, d3d11va-copy should come
last.
| |
This uses ID3D11VideoProcessor to convert the video to an RGBA surface,
which is then bound to ANGLE. Currently ANGLE does not provide any way
to bind NV12 surfaces directly, so this will have to do.
ID3D11VideoContext1 would give us slightly more control over the
colorspace conversion, though it's still not good, and not available
in MinGW headers yet.
The video processor is created lazily, because we need to have the coded
frame size, of which AVFrame and mp_image have no concept. Doing the
creation lazily is less of a pain than somehow hacking the coded frame
size into mp_image.
I'm not really sure how ID3D11VideoProcessorInputView is supposed to
work. We recreate it on every frame, which is simple and hopefully
doesn't affect performance.
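The lazy creation step, roughly (sketch; error handling mostly omitted):

```c
#define COBJMACROS
#include <d3d11.h>

static HRESULT create_video_processor(ID3D11VideoDevice *vdev,
                                      UINT coded_w, UINT coded_h,
                                      ID3D11VideoProcessorEnumerator **enm,
                                      ID3D11VideoProcessor **proc)
{
    D3D11_VIDEO_PROCESSOR_CONTENT_DESC cd = {
        .InputFrameFormat = D3D11_VIDEO_FRAME_FORMAT_PROGRESSIVE,
        .InputWidth   = coded_w,   // coded size, not display size
        .InputHeight  = coded_h,
        .OutputWidth  = coded_w,
        .OutputHeight = coded_h,
        .Usage = D3D11_VIDEO_USAGE_PLAYBACK_NORMAL,
    };
    HRESULT hr = ID3D11VideoDevice_CreateVideoProcessorEnumerator(vdev, &cd, enm);
    if (FAILED(hr))
        return hr;
    return ID3D11VideoDevice_CreateVideoProcessor(vdev, *enm, 0, proc);
}
```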
| |
They don't actually mean anything.
Just HWDEC_NONE should remain 0, because it's the default init value for
structs etc.
| |
I guess this won't ever be fixed properly in FFmpeg. Too hairy, and the
alternative (using VideoToolbox as "full decoder") is too attractive.
| |
They're now used for the TV callback too, not just vsync.
| |
Sucks, but better than freezing forever given the (to me) unpredictable
RPI behavior. This will be good enough to drop out of vsync timing mode,
or to abort playback.
| |
Recreate all dispmanx objects after mode changes signalled by the TV
callback. This is needed since dispmanx objects are marked as invalid
and cease working.
One important point is that the vsync callbacks will stop coming when
this happens, so restoring the callback is important.
Note that the MMAL renderer itself does not get trashed by the firmware
on such events, but we completely reconfigure it anyway when it happens.
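The recovery sequence, roughly (sketch; vsync_cb/ctx are the existing callback and its state):

```c
#include <bcm_host.h>

static DISPMANX_DISPLAY_HANDLE_T recover_display(
    DISPMANX_DISPLAY_HANDLE_T display,
    DISPMANX_CALLBACK_FUNC_T vsync_cb, void *ctx)
{
    vc_dispmanx_vsync_callback(display, NULL, NULL);  // detach from dead handle
    vc_dispmanx_display_close(display);
    display = vc_dispmanx_display_open(0);            // screen 0
    vc_dispmanx_vsync_callback(display, vsync_cb, ctx);  // callbacks flow again
    return display;
}
```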
| |
This also moves the p->background check into the top if (the code
effectively didn't do anything when this was false).
| |
These were for ancient libavcodec versions.
| |
For Mediacodec in particular we don't care about the format. It can just
decode to whatever it wants. The only case we would care about is it not
returning an opaque format if we don't have proper interop, but
libavcodec always returns non-opaque formats by default.
| |
Use the recently added lavc_suffix mechanism to select the wrapper
decoder.
With all hwdec callbacks being optional, and RPI/Mediacodec having only
dummy callbacks, all the callbacks can be removed as well.
The result is that the vd_lavc_hwdec struct for both of them is tiny.
It's better to move them to vd_lavc.c directly, because they are so
trivial and small.
| |
This is a bit sketchy, as there isn't a truly standard way to
communicate the timebase.
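For reference, the closest thing to a convention (sketch; at the time this went through the av_codec_set_pkt_timebase() accessor, the field is public today):

```c
#include <libavcodec/avcodec.h>

static void set_decoder_timebase(AVCodecContext *avctx, AVRational tb)
{
    avctx->pkt_timebase = tb;   // e.g. the demuxer stream's time_base
}
```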
| |
This is intended for cases when --hwdec needs to override the decoder
implementation in use, like for example on the RPI.
It does two things:
1. Allow the hwdec to indicate a decoder suffix. libavcodec by
convention adds a suffix to all wrapper decoders, and here we start
relying on it. While not necessarily the best idea, it's the only
thing we got. libavcodec's hwaccel list is useless, because it only
has the codec ID, not the associated decoder's name.
2. Make --hwdec=auto work properly. It shouldn't fail anymore, and hwdec
probing should reliably work, even if a different decoder is selected
with --vd. The semantics of --hwdec should dictate that it overrides
the default decoder.
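A sketch of point 1, following the libavcodec naming convention (e.g. "h264" plus suffix "mmal" selects "h264_mmal"):

```c
#include <stdio.h>
#include <libavcodec/avcodec.h>

static const AVCodec *find_wrapper_decoder(const char *codec_name,
                                           const char *lavc_suffix)
{
    char name[80];
    snprintf(name, sizeof(name), "%s_%s", codec_name, lavc_suffix);
    return avcodec_find_decoder_by_name(name);  // NULL if not compiled in
}
```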
| |
In case of errors or whatever.
| |
A minor simplification. Most callers don't need this, and there's no
good reason why the caller should provide an "initializer" like this.
(This function calls mp_image_new_dummy_ref(), which has no reason
for an initializer either.)
| |
Damn.
| |
The active texture and some pixelstore parameters are now always reset
to defaults when entering and leaving the renderer. Could be important
for libmpv.
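The defaults in question (sketch; function pointers loaded via the usual GL loader):

```c
#include <GL/gl.h>

static void reset_gl_state(void)
{
    glActiveTexture(GL_TEXTURE0);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 4);    // GL default
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);   // GL default
}
```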
| |
This seems to cause problems, so only use it if H264_E is not available.
fixes #3059
| |
This wasn't updated over multiple iterations.
| |
OF COURSE Libav doesn't have AV_PICTURE_TYPE_NONE. Why the fuck would
it?
| |
Future code should always use mp_image_{to,from}_av_frame(). Everything
else is way too messy and fragile.
| |
As of ffmpeg git master, only the libavdevice decklink wrapper supports
this. Everything else has dropped support.
You're now supposed to use AV_CODEC_ID_WRAPPED_AVFRAME, which works a
bit differently. Normal AVFrames should still work for these encoders.
| |
This potentially makes it more efficient, and actually makes it simpler.
Yes, AV_PICTURE_TYPE_NONE is the default for pict_type.
| |
What mp_image_to_av_frame_and_unref() should have been. (The _unref
variant is still useful though.)
| |
Why was this so complex.
| |
In both directions.
| |
Fixes #3053.
| |
Copy & paste bug.
| |
Since what we're doing is a linear blend of the four colors, we can just
do it for free by using GPU sampling.
This requires significantly fewer texture fetches and calculations to
compute the final color, making it much more efficient. The code is also
much shorter and simpler.
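The trick relies on how GL_LINEAR filtering works (sketch; shader side shown as a comment):

```c
#include <GL/gl.h>

static void enable_sampled_blend(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Shader side (GLSL): one sample taken midway between four texel
    // centers returns their average; moving the sample point shifts
    // the blend weights:
    //   vec4 blend = texture(tex, (vec2(x, y) + 1.0) / tex_size);
}
```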
| |
Until now, we have made the assumption that a driver will use only one
hardware surface format. The format is dictated by the driver (you
don't create surfaces with a specific format - you just pass an
rt_format and get a surface that will be in a specific driver-chosen
format).
In particular, the renderer created a dummy surface to probe the format,
and hoped the decoder would produce the same format. Due to a driver
bug, this required a workaround to actually get the same format as the
driver did.
Change this so that the format is determined in the decoder. The format
is then passed down as hw_subfmt, which allows the renderer to configure
itself with the correct format. If the hardware surface changes its
format midstream, the renderer can be reconfigured using the normal
mechanisms.
This calls va_surface_init_subformat() each time after the decoder
returns a surface. Since libavcodec/AVFrame has no concept of sub-
formats, this is unavoidable. It creates and destroys a derived
VAImage, but this shouldn't have any bad performance effects (at
least I didn't notice any measurable effects).
Note that vaDeriveImage() failures are silently ignored, as some
drivers (the vdpau wrapper) support neither vaDeriveImage nor EGL
interop. In addition, we still probe whether we can map an image
in the EGL interop code. This is important, as it's the only way
to determine whether EGL interop is supported at all. With respect
to the driver bug mentioned above, it doesn't matter which format
the test surface has.
In vf_vavpp, also remove the rt_format guessing business. I think the
existing logic was a bit meaningless anyway. It's not even a given
that vavpp produces the same rt_format for output.
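The probe itself is small (sketch; failures intentionally non-fatal):

```c
#include <stdint.h>
#include <va/va.h>

static uint32_t probe_surface_fourcc(VADisplay dpy, VASurfaceID surface)
{
    VAImage img;
    if (vaDeriveImage(dpy, surface, &img) != VA_STATUS_SUCCESS)
        return 0;                          // e.g. the vdpau wrapper driver
    uint32_t fourcc = img.format.fourcc;   // e.g. VA_FOURCC_NV12
    vaDestroyImage(dpy, img.image_id);
    return fourcc;
}
```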
| |
In the past, --video-unscaled also disabled zooming and aspect ratio
corrections. But this didn't make much sense in terms of being a useful
option. The new behavior just sets the initial video size to be
unscaled, but it's still affected by zoom commands and aspect ratio
corrections.
To get the old behavior back, --video-aspect=0 --video-zoom=0 need to be
added as well (in the general case). Most of the time it should not make
a difference though.
Also, there seems to have been some additional dst_rect clamping code
inside src_dst_split_scaling that seemed neither necessary nor ever
triggered. (The code immediately above it already makes sure to crop
the video if it's larger than the dst_rect.)
No idea why it was there, but I just removed it.
| |
Never needed them. This makes the code slightly more readable.
| |
Apply basic transformations like rotation by 90° and mirroring when
sampling from the source textures. The original idea was to make this
part of img_tex.transform, but this didn't work: lots of code plays
tricks on the transform, so manipulating it is not necessarily
transparent, especially when width and height are switched. So add a new
pre_transform field, which is strictly applied before the normal
transform.
This fixes most glitches involved with rotating the image.
Cropping and rotation are now weirdly separated, even though they could
be done in the same step. I think this is not much of a problem, and it
has the advantage that changing panscan does not trigger FBO
reallocations (I
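For illustration, a pre-transform of this kind (sketch; the struct layout - a 2x2 matrix plus offset - and the names are illustrative):

```c
struct pre_transform { float m[2][2]; float t[2]; };

// Rotate texture coordinates by 90 degrees within the unit square:
// x' = m[0][0]*x + m[0][1]*y + t[0], and likewise for y'.
static const struct pre_transform rotate_90 = {
    .m = {{ 0, -1 },
          { 1,  0 }},
    .t = { 1, 0 },   // translate back into [0,1] after rotating
};
```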