This is actually all bullshit. There are many factors that can ruin the
video playback experience (most of them outside our or the user's
control).
One thing that makes sense is that this declares incompatibility with
Windows XP (fixes #2473).
Well, not that anyone does or should care.
The D3D9 backend does not support GLES 3, which makes it pretty useless.
But it still might be a legitimate replacement for vo_direct3d.c on
Windows 7 machines.
Note that we could just use:
    eglGetDisplay(EGL_D3D11_ELSE_D3D9_DISPLAY_ANGLE)
But for now I'll leave the old code. Maybe it can be used to exclude the
software rendering backends (EGL_PLATFORM_ANGLE_DEVICE_TYPE_WARP_ANGLE).
Since I'm not sure, I won't touch it.
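A minimal sketch of how the D3D11 backend could be requested explicitly
via the EGL_ANGLE_platform_angle extension (the enum names come from that
extension; the helper and its use here are illustrative, not the actual
mpv code):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    // Ask ANGLE for the D3D11 backend specifically, instead of the combined
    // "D3D11, else D3D9" display. Returns EGL_NO_DISPLAY on failure.
    static EGLDisplay get_d3d11_display(void)
    {
        PFNEGLGETPLATFORMDISPLAYEXTPROC GetPlatformDisplayEXT =
            (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                eglGetProcAddress("eglGetPlatformDisplayEXT");
        if (!GetPlatformDisplayEXT)
            return EGL_NO_DISPLAY;
        static const EGLint attribs[] = {
            EGL_PLATFORM_ANGLE_TYPE_ANGLE, EGL_PLATFORM_ANGLE_TYPE_D3D11_ANGLE,
            EGL_NONE,
        };
        return GetPlatformDisplayEXT(EGL_PLATFORM_ANGLE_ANGLE,
                                     (void *)EGL_DEFAULT_DISPLAY, attribs);
    }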
Running mpv with the default config will now pick up ANGLE by default.
Since some think ANGLE is still not good enough for hq features, extend
the "es" option to reject GLES backends, and add it to the opengl-hq
preset.
One consequence is that mpv will by default use libswscale to convert
10 bit video to 8 bit before it reaches the VO.
Yes, the old build system still exists in-tree.
I decided that I actually can't stand how vo_opengl unnecessarily puts
the video through 3 shader stages (instead of 1). Thus, what was meant
to be a fallback for weak OpenGL implementations, the dumb-mode, now
becomes the default if the user settings allow it.
The code required to check the settings isn't so wild, so I guess it's
manageable. I still hope that one day, our rendering logic can generate
ideal shader stages for this case too.
Note that in theory, dumb-mode could be re-enabled at runtime due to a
color management 3D LUT being set, so a separate dumb_mode field is
required. The dumb-mode option can't just be overwritten.
Unfortunately, color management still can't work, because no GLES
version specified so far supports fixed-point 16 bit textures. Maybe
we could use integer textures, but these don't support filtering.
Using float textures would be another possibility.
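A hedged sketch of the fallback this implies (have_ext,
have_float_filtering and disable_color_management are hypothetical
helpers; the GL identifiers are real):

    // Pick a texture format with enough precision for the 3D LUT.
    GLint internal_format = 0;
    if (is_desktop_gl) {
        internal_format = GL_RGBA16;        // fixed-point 16 bit, filterable
    } else if (have_ext("GL_EXT_texture_norm16")) {
        internal_format = GL_RGBA16_EXT;    // normalized 16 bit via extension
    } else if (have_float_filtering) {
        internal_format = GL_RGBA16F;       // float fallback
    }
    if (!internal_format)
        disable_color_management();         // no usable format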
GL_RGB10_A2 is the best fixed-point format we can get on GLES/ANGLE for
now. (Unless we somehow switch to non-normalized integer textures.)
Polar scalers use 1D textures, because they're slightly faster on some
GPUs than 2D textures. But 2D textures work too, so add support for
them.
Allows using these scalers with ANGLE.
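Roughly the idea, as a sketch (have_1d_textures is a hypothetical
capability flag; GLES has no 1D textures at all, so the LUT becomes a
w×1 2D texture):

    // Upload the polar filter weights either as a 1D texture, or as a 2D
    // texture of height 1 where 1D textures don't exist.
    if (have_1d_textures) {
        glTexImage1D(GL_TEXTURE_1D, 0, fmt, w, 0, GL_RED, GL_FLOAT, weights);
    } else {
        glTexImage2D(GL_TEXTURE_2D, 0, fmt, w, 1, 0, GL_RED, GL_FLOAT, weights);
    }
    // The 2D variant is then sampled as texture(lut, vec2(x, 0.5)) in the
    // shader, with the second coordinate fixed.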
Just like commit f9a2fc59. There are probably some more such cases.
The vec2 constructor calls are probably fine, but don't bother with
confusing inconsistencies.
While desktop GL's glTexImage2D() essentially accepts anything, GLES is
much stricter. The combination of allowed formats/types/internal formats
is exactly specified. The GLES 3.0.4 specification lists them in
table 3.2. (The ANGLE API validation code references this table.)
The table could probably be extended into a general declarative table
about GL formats covering other uses, but this would be a big
non-trivial project, so don't bother and accept a minor degree
of duplication with other tables.
Note that the format and type do not (or should not) matter here,
because no image data is transferred to the GPU.
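For illustration, a few rows of the kind of combinations table 3.2
permits (a hedged excerpt, not the actual mpv table):

    struct gles3_format {
        GLenum internal_format;
        GLenum format;
        GLenum type;
    };
    static const struct gles3_format valid_combos[] = {
        {GL_R8,       GL_RED,  GL_UNSIGNED_BYTE},
        {GL_RG8,      GL_RG,   GL_UNSIGNED_BYTE},
        {GL_RGBA8,    GL_RGBA, GL_UNSIGNED_BYTE},
        {GL_RGB10_A2, GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV},
        {GL_RGBA16F,  GL_RGBA, GL_HALF_FLOAT},
    };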
We don't only need float textures for advanced scaling - we also need
them to be filterable with GL_LINEAR. On GLES, this is not supported
until GLES 3.1, but some implementations expose them with extensions.
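The corresponding capability check might look like this (a sketch;
GL_OES_texture_float_linear is the real extension name, the helpers are
illustrative):

    bool have_float_linear =
        gles_version >= 310 ||
        gl_has_extension("GL_OES_texture_float_linear");
    if (!have_float_linear)
        disable_advanced_scaling();  // hypothetical fallback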
Otherwise we were incorrectly adjusting the hardware master volume
in exclusive mode with softvol=auto.
This makes advanced scaling sort-of work for GLES 3.0 (on ANGLE). It's
still not very advisable, as 8 bits might not be enough to avoid
debanding. (Ironically, the debanding filter can be enabled, and does
not raise any GL errors - but probably doesn't do anything useful.)
Some GLSL dialects (GLSL ES 3.00) do not have such implicit conversions.
They have to be made floats for the sake of the shader compiler.
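A tiny illustration (hypothetical shader fragments, shown as C strings):

    // GLSL ES 3.00 has no implicit int -> float conversion:
    const char *bad  = "float w = 1;";    // rejected by GLSL ES 3.00
    const char *good = "float w = 1.0;";  // accepted by all dialects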
Turns out glGetTexLevelParameter, which is missing in ANGLE, is a
GLES3.1 function. Removing it from the list of core GLES3 functions
makes ANGLE work in GLES3 mode.
Apparently, some audio drivers do not support the DTS subtype, but
passthrough works anyway if the AC3 subtype is set. Just retry with
AC3 if the proper format doesn't work. The audio device which
exposed this behavior reported itself as
"M601d-A3/A3R (Intel(R) Display Audio)".
xbmc/kodi even always passes DTS as AC3.
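A hedged sketch of the retry (COM error handling trimmed; wf is assumed
to be a WAVEFORMATEXTENSIBLE already set up for passthrough; the GUIDs
are the real ksmedia.h names):

    hr = IAudioClient_IsFormatSupported(client, AUDCLNT_SHAREMODE_EXCLUSIVE,
                                        &wf.Format, NULL);
    if (FAILED(hr) &&
        IsEqualGUID(&wf.SubFormat, &KSDATAFORMAT_SUBTYPE_IEC61937_DTS)) {
        // Lie about the subtype: both are IEC 61937 frames to the driver.
        wf.SubFormat = KSDATAFORMAT_SUBTYPE_IEC61937_DOLBY_DIGITAL;
        hr = IAudioClient_IsFormatSupported(client, AUDCLNT_SHAREMODE_EXCLUSIVE,
                                            &wf.Format, NULL);
    }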
Maybe this is a good idea. Also add an option to disable it again, for
the sake of testing.
Fixes #2502.
I think this is much more informative. Maybe.
But not much.
Place speakers in standard positions equidistant from the listener, and
use a standard coordinate system.
`--vd--lavc-o` becomes `--vd-lavc-o`
ANGLE is a GLES2 implementation for Windows that uses Direct3D 11 for
rendering, enabling vo_opengl to work on systems with poor OpenGL
drivers and bypassing some of the problems with native GL, such as VSync
in fullscreen mode.
Unfortunately, using GLES2 means that most of vo_opengl's advanced
features will not work. However, ANGLE is under rapid development, and
GLES3 support is supposed to be coming soon.
These are very much inspired by the hardcoded Cocoa bindings on OSX.
Fixes #2500.
This is another regression from the recently added start time probing. If
a seek is executed after opening the file (but before reading any
packets), the first block is discarded instead of indexed. If there are
no other keyframes in the file, seeking will fail completely.
Fix it by seeking to the cluster start if there aren't any index entries
yet. This will read the skipped packet again.
Fixes #2498.
Because apparently there's no ideal universally working format.
The weird OpenGL texture format for kCVPixelFormatType_32BGRA is from:
http://stackoverflow.com/questions/22077544/draw-an-iosurface-to-an-opengl-context
(Which apparently got it from the linked Apple example code.)
Try to choose the closest sample format to the one requested.
Fixes #2494.
The next commit will use uninit within init.
This was used with --no-sub-ass (aka --no-ass). This option (which is
not yet removed) strips all styling from the subtitles, and renders them
as plaintext only. For some reason, it originally seemed convenient to
reuse all the OSD text rendering code (osd_libass.c). While this was
indeed simple, it had a bad influence on the rest of the code. For
example, it had to decide whether to go through the OSD code path, or
the proper subtitle renderer in sd_ass.c.
Kill the OSD subtitle renderer. Reimplement --no-sub-ass and also
"secondary" subtitles in sd_ass.c. fill_plaintext() contains some rather
minor code duplication with osd_libass.c for setting up a dummy
ASS_Event and escaping the stripped text. Since sd_ass.c already has to
handle "normal" text subtitles, and has code for stripping ASS tags,
all of this remains relatively simple.
Remove all the unnecessary crap from the rest of the code.
Use the demux_set_ts_offset() added in the previous commit to rebase
each timeline segment's timestamps according to its relative position
within the overall timeline. As a consequence we don't need to care
about these timestamps anymore, and everything becomes simpler.
(Another minor but delicious nugget of sanity.)
Most of this is explained in the DOCS additions.
This gives us slightly more sanity, because there is less interaction
between the various parts. The goal is getting rid of the video_offset
entirely.
The simplification extends to the user API. In particular, we don't need
to fix missing parts in the API, such as the lack of a seek command
that seeks relative to the start time. All these things are now
transparent.
(If someone really wants to know the real timestamps/start time, new
properties would have to be added.)
Something goes wrong somewhere. Don't bother, it's only needed for
compatibility with our absolute baseline (GL 2.1/GLES 2).
On the other hand, we can process nv12 formats just fine.
Enca is dead, uchardet is better (in half of all cases; in the other
half it's worse).
This happens to be handled in a better way in another place now.
For the sake of vaapi interop, we want to use EGL, but on the other
hand, because driver developers are full of shit, vdpau interop will
not work on EGL (even if the driver supports EGL). The latter happens
with both nvidia and AMD Mesa drivers.
Additionally, EGL vaapi interop support can apparently only be detected
at runtime by actually using it. While hwdec_vaegl.c already does this,
it would require initializing libva on _every_ system, which will cause
libav to print an unpreventable bullshit message to the terminal.
Try to counter these huge loads of bullshit by adding more fucking
bullshit.
We want the following behavior:
- VO probed, backend probed: only accept non-sw, fail completely
otherwise
- VO forced, backend probed: use the first non-sw, or if none is found,
fall back to the first working sw backend
- VO probed, backend forced: (I don't care about this case)
- VO forced, backend forced: just use that backend
Also, on backend probe failure the vo->probed field was left in its old
state.
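A rough sketch of that policy (illustrative names, not the actual vo.c
code):

    // Hardware backends first; software is acceptable only when the user
    // explicitly forced this VO.
    static bool init_backend(struct vo *vo, bool vo_forced)
    {
        for (int pass = 0; pass < (vo_forced ? 2 : 1); pass++) {
            bool allow_sw = pass == 1;
            for (int i = 0; i < num_backends; i++) {
                if (backends[i].is_software == allow_sw &&
                    backends[i].init(vo) >= 0)
                    return true;
            }
        }
        return false;
    }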
This adds support for the progress indicator taskbar extension
that was introduced with Windows 7 and Windows Server 2008 R2.
I don’t like this solution because it keeps its own state and
introduces another VOCTRL, but I couldn’t come up with anything
less messy.
Closes #2399.
In the display-sync, non-interpolation case, and if the display refresh
rate is higher than the video framerate, we duplicate display frames by
rendering exactly the same screen again. The redrawing is cached with a
FBO to speed up the repeat.
Use glBlitFramebuffer() instead of another shader pass. It should be
faster.
For some reason, the post-processing pass was run again on each display
refresh. Stop doing this, which should also be slightly faster. The only
disadvantage is that temporal dithering will be run only once per video
frame, but I can live with this.
One aspect is messy: clearing the background is done at the start on the
target framebuffer, so to avoid clearing twice and duplicating the code,
only copy the part of the framebuffer that contains the rendered video.
(Which also gets slightly messy - needs to compensate for coordinate
system flipping.)
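The core of it, sketched (rectangle variables are illustrative; note the
swapped destination Y coordinates to undo the flip mentioned above):

    glBindFramebuffer(GL_READ_FRAMEBUFFER, video_fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    // Copy only the video area; the background was already cleared on the
    // target framebuffer.
    glBlitFramebuffer(src_x0, src_y0, src_x1, src_y1,
                      dst_x0, dst_y1, dst_x1, dst_y0,  // flipped vertically
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);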
Currently, vo.c will always continue to render the currently queued
frame, which sets last_flip, which in turn confuses vo_get_delay(),
which in turn will show a bogus A/V desync message on unpause. So just
reset it again on unpause.
I guess the removed code is an old leftover, and makes no sense anymore.
Should fix weird A/V diff dropouts when frames are being dropped with
display-sync.
If the player sends a frame with duration==0 to the VO, it can trivially
underrun. Don't panic, but keep the correct time.
Also, returning the absolute time from vo_get_next_frame_start_time()
just to turn it into a float with relative time was silly. Rename it and
make it return what the caller needs.
This computed nonsense if the user set a playback speed other than 1
(in addition to the display-sync speed change).
80ms allowable desync was a bit too much. Since the tolerance applies in
both directions, it allowed for a total range of 160ms, which everyone
can notice. It might also be a bother to apply compensation resampling
speed for that long.
We always let audio slowly desync until a threshold is reached, and then
pushed it back by applying a maximum compensation speed. Refine what
comes afterwards: instead of playing with the nominal video speed, use
the actual required audio speed for keeping sync as measured by the A/V
difference. (The "actual" speed is the ideal speed with A/V differences
added.)
Although this works in theory, it's somewhat questionable how much this
works in practice. The ideal time value is actually not exact, but is
the time at which the frame is scheduled (could be compensated by using
the time_left calculations in handle_display_sync_frame()). It doesn't
account for speed changes or catastrophic discontinuities. It uses only
10 past frames.
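In pseudo-C, the idea is roughly this (all names illustrative; as noted,
the real calculation averages over the last 10 frames and has the
caveats above):

    // Steer the audio speed with the measured drift, so audio converges
    // back to sync instead of staying at a constant offset.
    double avg_diff = average_av_difference(past_frames, 10);
    double audio_speed = ideal_speed * (1.0 + avg_diff / correction_window);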
As long as it's within the desync tolerance, do not change the audio
speed at all for resampling. This reduces speed changes which might be
caused by jittering timestamps and similar cases.
(While in theory you could just not care and change speed every single
frame, I'm afraid that such changes could possibly cause audio
artifacts. So better just avoid it in the first place.)
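Schematically (a sketch, not the actual code):

    if (fabs(av_desync) <= opts->desync_tolerance) {
        resample_speed = 1.0;  // inside the dead zone: leave audio alone
    } else {
        resample_speed = max_compensation(av_desync);  // as before
    }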
We can implement it differently and drop a tiny bit of state.
This is very "illustrative", unlike the video-speed-correction
property, and thus useful. It can also be used to observe scheduling
errors, which are not detected by the core. (These happen due to
rounding errors; possibly not even our fault, but coming from
files with rounded timestamps and so on.)
Instead of looking at the current frame duration for the intended
speedup, look at all past frames, and find a good average speed. This
ties in with not wanting to average _all_ frame durations, which
doesn't make sense in VFR situations.
This is currently done in the most naive way possible, but already sort
of works for VFR which switches between frame durations that are
integer multiples of a base rate. Certainly more improvements could
be made, such as trying to adjust directly on FPS changes, instead of
averaging everything, but for now this is not needed at all.
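The naive version reads about like this (a sketch; vsync_duration and
past_frames are illustrative):

    // Average the past frame durations, then derive the speed change that
    // maps the average duration to a whole number of vsyncs.
    double sum = 0;
    for (int i = 0; i < num_past_frames; i++)
        sum += past_frames[i].duration;
    double avg = sum / num_past_frames;
    double num_vsyncs = round(avg / vsync_duration);
    double speed_factor = num_vsyncs * vsync_duration / avg;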
Helps somewhat with muxer-rounded timestamps.
There is some danger that this introduces a timestamp drift. But since
they are averaged values (unlike when using an incorrect container
framerate hint), any potential drift shouldn't be too brutal, or should
compensate itself soon. So I won't bother yet with comparing the results
with the real timestamps, unless we run into actual problems.
Of course we still prefer potentially real timestamps over the
approximated ones. But unless the timestamps match the container FPS,
we can't know whether they are real (no, checking whether they have
microsecond components would be cheating). Perhaps in the future, we
could let the demuxer export the timebase - if the timebase is not 1000
(or divisible by it), we know that millisecond-rounded timestamps won't
happen.
Get rid of get_past_frame_durations(), which was a bit too messy. Add
a past_frames array, which contains the same information in a more
reasonable way. This also means that we can get the exact current and
past frame durations without going through awful stuff. (The main
problem is that vo_pts_history contains future frames as well, which is
needed for frame backstepping etc., but gets in the way here.)
Also disable the automatic disabling of display-sync if the frame
duration changes, and extend the frame durations allowed for display
sync. To allow arbitrarily high durations, vo.c needs to be changed
to pause and potentially redraw OSD while showing a single frame, so
they're still limited.
In an attempt to deal with VFR, calculate the overall speed using the
average FPS. The frame scheduling itself does not use the average FPS,
but the duration of the current frame. This does not work too well,
but provides a good base for further improvements.
Where this commit actually helps a lot is dealing with rounded
timestamps, e.g. if the container framerate is wrong or unknown, or
if the muxer wrote incorrectly rounded timestamps. While the rounding
errors apparently can't be gotten rid of completely in the general case,
this is still much better than e.g. disabling display-sync completely
just because some frame durations go out of bounds.
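The shape of the new state, roughly (field names are illustrative):

    struct frame_info {
        double duration;   // estimated duration of this past frame
    };
    // Newest entry first; only frames that were actually shown end up here,
    // so the future frames kept in vo_pts_history for backstepping don't
    // get in the way.
    struct frame_info past_frames[MAX_PAST_FRAMES];
    int num_past_frames;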
We need a frame duration even on start, because the number of vsyncs
the frame is shown is predetermined. (vo_opengl actually makes use of
this property in certain cases.)
"Missed" implies the frame was dropped, but what really happens is that
the following frame will be shown later than intended (due to the
current frame skipping a vsync).
(As of this commit, this property is still inactive and always
returns 0. See git blame for details.)