| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The wayland backend needs to keep track of whether or not a window is
hidden for presentation time. There is no presentation feedback when a
window is hidden which means we shouldn't be sending information to the
vo_sync_info structure (i.e. just leave it all at -1). This seemed to
work fine, but recent changes to presentation time in one notable
compositor (Sway; it was probably always broken in Weston actually)
changed the presentation time behavior.
For reasons that aren't clear, there is a greater than 16.666ms delay
between the first presentation time event and the second presentation
time event (compositor latency?) when you switch back to an mpv window
after it is hidden for long enough (a few seconds). When using
presentation time, this causes mpv to feed in some bad values in its
vsync timing mechanism thus causing the A/V desync spike as described in
issue #7223.
This solution is not really ideal. It would be better if the
presentation time events received by the compositors did not have the
aforementioned inconsistency. However since this occurs in both Sway and
Weston and clients can't really fight compositors in wayland-world,
here's a reasonable enough workaround. Basically, just add a slight
delay before we start feeding information into the vo_sync_info again.
We already do this when the window is hidden, so it's not a huge leap.
The delay chosen here is arbitrary, and it basically just recycles the
same parameters used to detect if a window is hidden. If
vo_wayland_wait_frame times out 60 times in a row (or whatever your
monitor's refresh rate is), then we assume the window is hidden. This is
a pretty safe assumption; something has to be terribly wrong for you to
miss 60 vblanks in a row while a window is on the screen.
In this case, we basically just do the reverse of that. If mpv receives
60 frame callbacks in a row (or whatever your monitor's refresh rate
is), then it assumes the window is not hidden. Previously, as soon as it
received 1 frame callback it was declared not hidden. Essentially,
there's just 1 second of delay after reshowing a window before the
presentation time statistics are used again. This should be more than
enough time to skip over the weird, inconsistent presentation time
behavior and avoid the A/V desync spike.
Fixes #7223
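A self-contained sketch of the hysteresis described above (the struct and
field names are invented for illustration; they are not mpv's actual code):

    #include <stdbool.h>

    /* Illustrative only: presentation feedback is trusted again only after a
     * full second's worth of consecutive frame callbacks, mirroring the
     * existing "refresh-rate timeouts in a row means hidden" heuristic. */
    struct win_state {
        bool hidden;               /* set after ~1s of missed frame callbacks */
        int consecutive_callbacks; /* frame callbacks received while hidden */
        int display_fps;           /* monitor refresh rate, e.g. 60 */
    };

    static void on_frame_callback(struct win_state *w)
    {
        if (w->hidden && ++w->consecutive_callbacks >= w->display_fps) {
            w->hidden = false;             /* safe to fill vo_sync_info again */
            w->consecutive_callbacks = 0;
        }
    }

    static void on_wait_frame_timeout(struct win_state *w)
    {
        w->consecutive_callbacks = 0;      /* window may be (still) hidden */
    }

While hidden, the vo_sync_info fields are simply left at -1, as described above.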
|
|
|
|
|
|
| |
drmModeAddFB is legacy, and might not pick the pixel format you
expect, depending on your driver. Use drmModeAddFB2 which specifies
this explicitly using a fourcc.
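A minimal sketch of the switch; the buffer handle, pitch, and the XRGB8888
fourcc below are placeholder choices, not necessarily what the commit uses:

    #include <stdint.h>
    #include <xf86drmMode.h>
    #include <drm_fourcc.h>

    /* drmModeAddFB2 names the pixel format explicitly via a fourcc, instead of
     * letting the driver guess from depth/bpp as legacy drmModeAddFB does. */
    static int add_fb(int fd, uint32_t w, uint32_t h,
                      uint32_t bo_handle, uint32_t pitch, uint32_t *fb_id)
    {
        uint32_t handles[4] = { bo_handle };
        uint32_t pitches[4] = { pitch };
        uint32_t offsets[4] = { 0 };
        return drmModeAddFB2(fd, w, h, DRM_FORMAT_XRGB8888,
                             handles, pitches, offsets, fb_id, 0);
    }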
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Seems like some drivers only increment msc every other page flip when
running in interlaced mode (I'm looking at you nouveau). I.e. it seems
to be incremented at the frame rate, rather than the field rate.
Obviously we can't work with this, so shame the driver and bail.
On intel this isn't an issue, as msc is incremented at field rate
there.
This means presentation feedback won't work correctly in interlaced
modes with those drivers, but who in their right mind uses an
interlaced mode these days, anyway?
|
| |
|
|
|
|
| |
If the capability is available it may still be 0 to signal absence of support.
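The commit does not say which API this concerns; as a generic illustration of
the pattern only (libdrm's drmGetCap is used purely as an example), both the
call's return value and the reported value have to be checked:

    #include <stdint.h>
    #include <stdbool.h>
    #include <xf86drm.h>

    /* Example only: a capability query can succeed while reporting 0,
     * which also means "not supported". */
    static bool cap_supported(int fd, uint64_t cap)
    {
        uint64_t value = 0;
        if (drmGetCap(fd, cap, &value) != 0)
            return false;    /* the query itself failed */
        return value != 0;   /* present but 0 still signals absence */
    }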
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
In theory, using strstr() to search for extensions is a bad idea,
because some extension names might be prefixes for other names, so you
could get false positives. gl_check_extension() avoids this case.
It's not clear whether this is really needed; maybe not. Surely the EGL
committee is aware of these practices (many GL clients do this, which is
why it's widely considered bad practice), and would avoid defining new
extension names which contain existing names as sub-strings, but
whatever.
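A self-contained sketch of the kind of exact-token match gl_check_extension
performs (this shows the general technique, not necessarily mpv's exact code):

    #include <stdbool.h>
    #include <string.h>

    /* Match a full space-separated token, so e.g. "EGL_EXT_foo" does not
     * falsely match inside "EGL_EXT_foo_bar". */
    static bool check_extension(const char *exts, const char *name)
    {
        size_t len = strlen(name);
        const char *p = exts;
        while (p && (p = strstr(p, name))) {
            bool starts = p == exts || p[-1] == ' ';
            bool ends   = p[len] == '\0' || p[len] == ' ';
            if (starts && ends)
                return true;
            p += len;
        }
        return false;
    }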
|
|
|
|
|
|
|
| |
Adding a pile of ifdefs to deal with insufficient system headers is kind
of a mess. It's easier to just provide the definitions manually. This sucks
a bit too, but it's the approach we've been using with OpenGL headers in
general, and I think that worked pretty well.
|
|
|
|
|
|
|
|
|
|
|
|
| |
On systems whose EGL headers do not define these extensions, the build
still failed due to missing ..._PLANE3_... defines. Although we supplied
missing EGL_LINUX_DMA_BUF_EXT defines manually, the PLANE3 ones are
actually from a separate extension, which explains why they were not
added to the fallback defines in the first place.
Add them, now it builds without the eglext.h include.
See #6838.
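The fallback presumably looks something like this; the values below are the
ones listed for EGL_EXT_image_dma_buf_import_modifiers in the EGL registry,
so double-check them against a current eglext.h:

    /* The PLANE3 tokens come from EGL_EXT_image_dma_buf_import_modifiers, not
     * from the base EGL_EXT_image_dma_buf_import extension, so provide them
     * when the system eglext.h is too old to know about them. */
    #ifndef EGL_DMA_BUF_PLANE3_FD_EXT
    #define EGL_DMA_BUF_PLANE3_FD_EXT     0x3440
    #define EGL_DMA_BUF_PLANE3_OFFSET_EXT 0x3441
    #define EGL_DMA_BUF_PLANE3_PITCH_EXT  0x3442
    #endif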
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When frame-stepping with display-sync mode enabled in high framerate
video, the frame was sometimes not redrawn correctly. Only the first OSD
interaction (or something similar) made it visible.
In this case, the core schedules many frames as dropped (because it's
ignorant of pausing/frame-stepping, as in theory the player is _not_
paused during frame-stepping, only at the end of it). There's a race
between the VO rendering the queued frame, and the core calling
vo_set_paused() after it has queued the frame. If the latter happens
first, the existing logic to redraw the previous dropped frame does
things correctly. If the former happens, the frame is not redrawn
automatically, but will be redrawn on the next user input (or if OSD is
enabled, and the pause state change updates it, which leads to an
immediate redraw).
Fix this by never actually dropping a frame in paused mode. The request
by the core to drop it is simply ignored.
Maybe this could be done slightly nicer by updating the pause state with
the VO atomically. Then we wouldn't have the frame drop counter going up
either (it's actually dropped, but then redrawn; but I doubt any user,
or me in a few weeks, would understand this). But I'm not really
interested in polishing this by increasing the complexity of the
frame-step code.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To aid in discoverability, and to address the most common case
directly, I'm adding an 'auto' mode for the window controls. In
this case, we will show the controls if there is no window border
and hide them if there are borders. This also respects the option
being toggled at runtime.
To ensure that it works in the wayland case, I've also made sure
that the wayland code explicitly forces the option to false if
decoration support is missing.
Based on feedback, I've split the config in two, with one option
for whether controls are active, and one for alignment. These are
new enough that we can get away with ignoring compatibility.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This small regression was introduced by #7216. Previously, the wayland
backend used a trick which kept track of the previous fullscreen state
and used that logic for showing the cursor. Since vo_opts now keeps
track of the current fullscreen state, most of this stopped being
necessary.
However, there was one edge case where the cursor didn't
behave the same: passing a fullscreen flag for the initial window. The
cursor would initially be visible here, which is not desirable. This can
be remedied pretty easily by just setting the cursor visibility to false
if the pointer enter event occurs while fullscreen. The only thing we need
to do is to make sure that the autohide delay isn't completely disabled
(i.e. the cursor is always visible). Hence the need for the previous
commit.
|
|
|
|
|
|
|
|
|
|
|
| |
The remaining legacy VOCTRLs are for the fullscreen and border
properties. For fullscreen, this is largely just replacing the private
state field with the vo option, but there are small semantic
differences that we need to be careful of.
For the border setting, it's trivial as we don't have external
mechanisms for changing the state, but I also can't test it as
I'm not using a compositor that supports it.
|
|
|
|
|
|
|
|
|
|
|
| |
I wanted to get this done quickly as I introduced the new VOCTRL
behaviour for minimize and maximize and it was immediately made
legacy, so best to purge it before anyone gets confused.
I did not sort out fullscreen as that's more involved and not something
I've educated myself about yet. But I did replace the VOCTRL_FULLSCREEN
usage with the new option change mechanism as that seemed simple
enough.
|
|
|
|
|
|
|
| |
Pretty annoying affair. The vo_gpu code could of course not trigger
rendering from filters yet, so it needed to be extended. Also, this uses
some icky stuff made for vf_sub (and this was the reason I marked vf_sub
as deprecated), so everything is terrible.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
glx.h recursively includes gl.h, and there is no way to prevent this.
Old Mesa defines some GL symbols, but not all which mpv needs. In
particular, one user who was too lazy to update his ancient Ubuntu and
preferred to bother us with obscure bug reports, had Mesa headers which
did not define GL 3.2, so GLsync was not defined.
All in all, I still think providing the GL API definitions ourselves
was a good idea; just GLX should have been isolated better.
But isolating GLX now is too much effort.
Not sure why I'm bothering with this at all.
Fixes: #7201 (unconfirmed)
|
|
|
|
|
|
|
|
|
| |
Probably pretty useless in this form (see: the wall of warnings), but
someone wanted this.
I think this should be useful to perform some automated tests, maybe.
Fixes: #7194
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This function always expects the GL struct pointer to be a talloc
allocation. So far so bad. But the terrible thing is that _lots_ of code
in mpv didn't quite get this (including the code which introduced this
usage in the first place). For example, in context_glx.c you see this:
struct priv {
    GL gl;
    ...
GL is not a talloc allocation, but since it's at the start of a talloc
allocation, it works anyway. So far so bad. But the really terrible
thing is that mpgl_load_functions2() calls talloc_free_children() on the
GL pointer, which means that all of priv's child allocations get freed.
This would be unintentional and could create dangling pointers. And this
happens at about a dozen callers. I'm amazed it hasn't broken anywhere yet.
Removing this anti-pattern by making GL "implicitly" a talloc
allocation would be too much effort at this point. So just manually free
the only allocation that the function attached to GL.
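A self-contained illustration of the anti-pattern; the types, field names, and
header path are made up for the example and are not mpv's actual code:

    #include "ta/ta_talloc.h"   /* mpv's talloc wrapper; header path assumed */

    struct fake_gl { int version; };

    struct priv {
        struct fake_gl gl;      /* embedded, not a talloc allocation of its own */
        char *state;            /* talloc child of priv */
    };

    static void demo(void)
    {
        struct priv *p = talloc_zero(NULL, struct priv);
        p->state = talloc_strdup(p, "some state");

        /* &p->gl has the same address as p, so freeing "gl's" children in fact
         * frees p->state too, leaving a dangling pointer behind. */
        talloc_free_children(&p->gl);
    }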
|
|
|
|
| |
This appears to work with IceWM.
|
|
|
|
|
|
|
|
|
| |
Should restore full functionality.
The initial state setting is a bit shoddy (instead of setting the
properties before map, we use the WM commands to change it after, so you
will see the normal window state for a moment; the WM commands do not
work on unmapped windows, so fixing this would require more code).
|
|
|
|
|
| |
Unfortunately, this breaks window state reporting for all VOs which
supported it. This can be fixed later (for x11 in the next commit).
|
|
|
|
|
| |
Not particularly important and nobody asked for this, but demonstrates
how such things can be easily done now.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
- remove VOCTRL_FULLSCREEN and VOCTRL_GET_FULLSCREEN
- have your own m_config_cache for the fullscreen option
(vo->opts_cache cannot be used because you lose per-option change
notifications, and it'd be a mess anyway)
- use VOCTRL_VO_OPTS_CHANGED to update it
(it's used for convenience)
- when updating it, check for the fullscreen option
(wasn't sure how to do it best; currently, it compares the raw
option pointers, but this could be changed)
- do not send VO_EVENT_FULLSCREEN_STATE on FS change
- instead write the option on FS change
(assign in opt. struct + m_config_cache_write_opt)
|
|
|
|
|
| |
This allows the pseudo client side decorations to be used under x11,
which might be desirable when running in border=no mode.
|
|
|
|
|
|
|
| |
We primarily care about pseudo-decorations for wayland, where
the compositor may not support server-side decorations. So let's
implement the minimize and maximize commands and return the
maximized window state.
|
|
|
|
|
|
|
|
|
|
|
| |
If we want to implement window pseudo-decorations via OSC, we need a
way to tell the vo to minimize and maximize the window. Today, we have
minimized as a read-only property, and no property for maximized.
Let's make minimized settable and add a maximized property to go with
it. In turn, that requires us to add VOCTRLs for minimizing or
maximizing a window, and an additional WIN_STATE to indicate a
maximized window.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
At least with gnome-shell (I know, I know), the compositor does
not provide the old window size when leaving the maximized state.
Instead, we get a toplevel_config event with a 0x0 size and no
additional states.
Today, we already save the window geometry to restore it when leaving
the fullscreen state, so we just need a small change for it to
kick in for leaving the maximized state. If I read this correctly,
we'll still respect the size passed by a compositor that actually
provides the old size.
|
|
|
|
|
|
|
| |
Rather than hard-coding the edge grab zone width, we can make it
user configurable. It seems worthwhile to have separate configs
for pointer and touch usage as the defaults should be different,
and a user might have both input methods in use.
|
|
|
|
|
|
|
|
| |
Today, we support resizing wayland windows when we detect a touch
event in a defined grab zone. As part of implementing
pseudo-decorations, we should have equivalent functionality for
mouse input. And if we detect support for actual decorations we
will not activate the grab zone as the decorations will provide this.
|
|
|
|
|
|
|
|
|
| |
Use x11->opts instead of vo->opts. This doesn't matter currently, and
x11->opts is actually set to vo->opts. However, there's a chance that
either option access changes, or that the way backends integrate with
struct vo changes. This is just a preemptive change to make this less of
a mess, and it's generally a good idea to reduce accesses to struct vo
anyway.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is preparation to get rid of the option-to-property bridge
(mp_on_set_option). This is a pretty insane thing that redirects
accesses to options to properties. It was needed in the ever ongoing
transition from something to... something else.
A good example for the need of this bridge is applying profiles at
runtime. This obviously goes through the config parser, but should also
make all changes effective, for which traditionally the property layer
is used.
There isn't much left that needs this bridge. This commit changes a
bunch of options (which also have a property implementation) to use
option change notifications instead. Many of the properties are still
left, but perform unrelated functions like OSD formatting.
This should be mostly compatible. There may be some subtle behavior
changes. For example, "hwdec" and "record-file" do not check for changes
anymore before applying them, so writing the current value to them
suddenly does something, while it was ignored before.
DVB changes untested, but should work.
|
|
|
|
|
|
|
|
|
| |
Handling the window with this function makes no sense, since windows
and kernels are not the same thing and don't share the same option list.
The only reason it's done is to make sure the char* points at the static
string rather than the dynamically allocated one, which we can do
manually in this function. Rewrite a bit for clarity/quality.
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
We should only be printing errors that occur when not probing, to
avoid creating the impression that something is wrong - and errors
during probing aren't a problem.
|
|
|
|
|
|
|
| |
Surprisingly, we've managed to get this far without context_glx ever
adding the X11 display as a native resource. But with the recent change
to attempt to enable vdpau when using EGL, the hwdec now requires the
display to be added. So let's add it.
|
|
|
|
|
| |
See aacc194. The same logic all applies to Wayland. In fact, we already
require EGL 1.5 for wayland anyway, so it's better to do it right.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
eglGetDisplay() is a broken API, since it takes a windowing specific
argument, yet is supposed to work for multiple APIs at the same time. On
Linux, it can take both a X11 "Display" and a "wl_display". Obviously
there is no way to specify what kind of display the argument is (it's
just a void*).
Mesa has _eglNativePlatformDetectNativeDisplay, which does funny stuff
to try to guess the display type, including trying to call mincore() to
determine whether the pointer can be accessed at all. I guess this
recently accidentally broke (as a bug), but on the other hand, maybe
it's time to do this properly.
The fix is using eglGetPlatformDisplay(). This requires EGL 1.5, plus
Mesa needs to support the associated platform extension
(EGL_KHR_platform_x11).
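Roughly, the EGL 1.5 path looks like the sketch below; the fallback define is
only needed with old headers, and its value is the one the EGL_KHR_platform_x11
registry entry lists:

    #include <stddef.h>
    #include <EGL/egl.h>

    #ifndef EGL_PLATFORM_X11_KHR
    #define EGL_PLATFORM_X11_KHR 0x31D5
    #endif

    /* The platform is named explicitly, so EGL no longer has to guess whether
     * the void* it got is an X11 Display* or a wl_display*. */
    static EGLDisplay get_x11_display(void *x11_display /* Display* */)
    {
        return eglGetPlatformDisplay(EGL_PLATFORM_X11_KHR, x11_display, NULL);
    }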
Since I see no reasonable way to do this in a compatible way, just
require that EGL 1.5 is available. The problem is that EGL 1.4 seems to
require you to create a display to query the EGL version and extensions,
so you have a chicken-and-egg problem. It's very stupid. Maybe you could
jump through some more hoops to get something compatible, but fuck that.
Users on "too old" Mesa will fall back to GLX (which we keep around for
a regrettable company known by the name of Nvidia).
I think Wayland and GBM should do the same. They're sufficiently
bleeding-edge that you can expect them to have EGL 1.5. On the other
hand, the cursed RPI code will have to stay with eglGetDisplay().
Speculative fix for #7154.
(Rant about EGL follows. Actually I deleted it.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
pass_color_map() (in video_shaders.c) and pass_colormanage() (video.c)
both duplicate the condition on whether to do peak computation. Peak
computation requires a compute shader, so if the duplicated conditions
don't match, video_shaders.c will generate a compute shader, but video.c
will try to run it as fragment shader. This leads to a "blue screen".
This can be reproduced by playing a HDTV video with --target-peak=99.
It's not clear how to fix this. Should pass_tone_map() be only invoked
if mp_trc_is_hdr() == true (what pass_colormanage() uses to decide
whether to enable peak computation), or should pass_colormanage() just
tell pass_color_map() to skip peak computation? Decide for the latter,
as it's more robust.
Even if not correct, at least it gets rid of the blue shit.
Fixes: #7149
|
| |
|
|
|
|
|
|
|
|
|
| |
This makes the condition for including each init match the condition for
compiling the file that defines it.
It's possible to e.g. have HAVE_GL and HAVE_VAAPI without HAVE_VAAPI_EGL,
which resulted in "undefined reference to `vaapi_gl_init'" with the old
code.
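A self-contained sketch of the idea behind the fix; the macro values are
hard-coded here only so the snippet compiles on its own, and the function's
signature is invented (only its name comes from the error message above):

    /* Pretend configuration: GL and VAAPI are available, but the VAAPI/EGL
     * interop (which is what actually compiles the file defining
     * vaapi_gl_init) is not. */
    #define HAVE_GL 1
    #define HAVE_VAAPI 1
    #define HAVE_VAAPI_EGL 0

    #if HAVE_VAAPI_EGL
    void vaapi_gl_init(void);
    #endif

    void register_interops(void)
    {
    #if HAVE_VAAPI_EGL   /* was effectively "HAVE_GL && HAVE_VAAPI": too broad */
        vaapi_gl_init();
    #endif
    }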
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Probably. It's not like these pixel formats are formally specified -
FFmpeg added them because _some_ file format or decoder supports it, and
while that format/codec may define it precisely, the pixel format is
sort of disconnected and just an FFmpeg thing.
In any case, the yuva sample I had at hand uses the full range the
component data type can provide. The old code used the same "shifted"
range as for Y/U/V components, which must have been wrong.
This will not work correctly for packed YUVA formats, but fortunately
they matter even less.
|
|
|
|
|
| |
Not sure why it assumes that it always succeeds (although generally it
won't fail).
|
|
|
|
| |
Don't know what this was for, but the result doesn't change.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is fragile enough that it warrants getting "monitored".
This takes the commented test program code from img_format.c, makes it
output to a text file, and then compares it to a "ref" file stored in
git.
Originally, I wanted to do the comparison etc. in a shell or Python
script. But why not do it in C. So mpv calls /usr/bin/diff as a
sub-process now.
This test will start producing different output if FFmpeg adds new pixel
formats or pixel format flags, or if mpv adds new IMGFMT (either aliases
to FFmpeg formats or own formats). That is unavoidable, and requires
manual inspection of the results, and then updating the ref file.
The changes in the non-test code are to guarantee that the format ID
conversion functions only translate between valid IDs.
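A rough sketch of that comparison step; the helper name and paths are invented
for illustration, and the real test hooks into mpv's test framework:

    #include <stdio.h>
    #include <stdlib.h>

    /* Dump the generated format table to out_path elsewhere, then let diff(1)
     * compare it against the ref file committed to git. A non-zero exit status
     * means the ref file needs to be inspected and updated by hand. */
    static int compare_with_ref(const char *ref_path, const char *out_path)
    {
        char cmd[4096];
        snprintf(cmd, sizeof(cmd), "/usr/bin/diff -u -- '%s' '%s'",
                 ref_path, out_path);
        return system(cmd) == 0 ? 0 : -1;
    }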
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The use of glXGetCurrentDisplay() restricted this to the GLX backend.
But actually it works under EGL as well. Removing the GLX-specific call
and using the general mpv-internal method to get the X "Display" makes
it work in mpv.
I didn't know this. Nvidia didn't list this as an extension in the EGL
context when I still used their GPUs.
Note that this might in theory break use of vdpau in some libmpv clients
using the render API. But only if MPV_RENDER_PARAM_X11_DISPLAY is not
used, and they relied on mpv using glXGetCurrentDisplay(). EGL does not
provide such an API, and hwdec_vaapi.c also uses what hwdec_vdpau.c uses
now. Considering that vaapi is preferable these days, it's not bad at
all if these clients get "bro |