| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
| |
If the sampling point is placed diagonally, the radius difference
could be as large as sqrt(2.0), so a loosened check with (radius - 1)
would potentially include pixels out of range.
Fix the check to handle these corner cases properly, avoiding
unnecessary texture lookups and improving performance a bit.
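A minimal C sketch of the bound involved (illustrative, not mpv's
actual code): a texel fetched for a sample can lie up to sqrt(2.0)
further from the filter center than the sampling point itself, so both
the "always inside" and "never inside" tests need that margin:

    #include <math.h>
    #include <stdbool.h>

    enum sample_kind { SAMPLE_ALWAYS, SAMPLE_CHECKED, SAMPLE_NEVER };

    // d: distance from the filter center to the sampling point
    static enum sample_kind classify(double d, double radius)
    {
        if (d + M_SQRT2 <= radius)
            return SAMPLE_ALWAYS;  // even a diagonal texel stays in range
        if (d - M_SQRT2 >= radius)
            return SAMPLE_NEVER;   // no texel of this sample can be in range
        return SAMPLE_CHECKED;     // needs a per-texel check in the shader
    }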
|
|
|
|
| |
All Windows versions we support have this API.
|
|
|
|
|
|
|
|
|
| |
There are claims that nnedi3.c doesn't constitute its own new
implementation, but is derived from existing HLSL or OpenCL shaders
distributed under the LGPLv3 license.
Until these claims are resolved, do the "correct" thing and require
--enable-gpl3 to build nnedi.
|
|
|
|
| |
"backend=x11" was resolved to x11_probe, which is unintentional.
|
|
|
|
| |
Add credits to several existing implementations of the NNEDI3 shader.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It turns out that both UBO and intBitsToFloat() are supported in
OpenGL ES 3.0[1][2]; enable them so that the NNEDI3 prescaler can be
used with a wider range of backends.
Also fix some implicit int-to-float conversions so that the shader
actually compiles on GLES.
Tested on Linux desktop (nvidia 358.16) with the "es" sub-option.
[1]: https://www.khronos.org/opengles/sdk/docs/man3/html/glGetUniformBlockIndex.xhtml
[2]: https://www.khronos.org/opengles/sdk/docs/manglsl/docbook4/xhtml/intBitsToFloat.xml
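For reference, what intBitsToFloat() provides, expressed as a C
sketch: the weights' exact bit patterns are reinterpreted as IEEE-754
floats, with no numeric conversion that could perturb them:

    #include <stdint.h>
    #include <string.h>

    static float int_bits_to_float(int32_t bits)
    {
        float f;
        memcpy(&f, &bits, sizeof(f));  // well-defined type pun in C
        return f;
    }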
|
|
|
|
| |
Looks better than "oversample". The name "tscale-clamp" was suggested
by haasn.
|
|
|
|
|
|
|
| |
Try to avoid user confusion.
Reading the global options in this place is a pretty disgusting hack,
but it's still the most robust way.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
At least I hope so.
Deriving the duration from the pts was not really correct. It doesn't
include speed adjustments, and becomes completely wrong if the user
e.g. changes the playback speed by a huge amount. Pass through the
accurate duration value by adding a new vo_frame field.
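A hypothetical sketch of the idea (field names illustrative, not
mpv's actual struct layout):

    #include <stdint.h>

    struct vo_frame {
        int64_t pts;          // presentation timestamp (microseconds)
        double duration;      // intended display duration, speed-adjusted
        double vsync_offset;  // timing error of *this* frame vs. vsync
    };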
The value for vsync_offset was not correct either. We don't need the
error for the next frame, but the error for the current one. This wasn't
noticed because it makes no difference in symmetric cases, like 24 fps
on 60 Hz.
I'm still not entirely confident in the correctness of this, but it sure
is an improvement.
Also, remove the MP_STATS() calls - they're not really useful to debug
anything anymore.
|
|
|
|
|
|
|
| |
This was just converting back and forth between int64_t/microseconds and
double/seconds. Remove this stupidity. The pts/duration fields are still
in microseconds, but they have no meaning in the display-sync case (also
drop printing the pts field from opengl/video.c - it's always 0).
|
|
|
|
|
| |
Without display-sync mode, our guesses wrt. vsync phase etc. are much
worse, and I see no reason to keep the complicated "vsync_timed" code.
|
|
|
|
|
|
|
|
|
|
| |
This is a hack, but unfortunately the DwmGetCompositionTimingInfo
heuristic does not work in all cases (with multiple monitors on
Windows 8.1, and even with a single monitor on Windows 10). See the
comment in mp_w32_is_in_exclusive_mode() for more details.
It should go without saying that if any better method of doing this
reveals itself, this hack should be dropped.
|
|
|
|
|
| |
ANGLE has EGL_KHR_get_all_proc_addresses, so all GLES core functions can
be queried with eglGetProcAddress.
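A sketch of what this enables (helper name illustrative):

    #include <EGL/egl.h>

    typedef void (*glfunc)(void);

    // With EGL_KHR_get_all_proc_addresses, core GLES entry points -
    // not just extension functions - may be resolved this way.
    static glfunc get_gl_fn(const char *name)
    {
        return (glfunc)eglGetProcAddress(name);  // NULL if unavailable
    }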
|
|
|
|
| |
Well, not that anyone does or should care.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The D3D9 backend does not support GLES 3, which makes it pretty
useless. But it still might be a legitimate replacement for
vo_direct3d.c on Windows 7 machines.
Note that we could just use:
eglGetDisplay(EGL_D3D11_ELSE_D3D9_DISPLAY_ANGLE)
But for now I'll leave the old code. Maybe this can exclude use of
software rendering backends (EGL_PLATFORM_ANGLE_DEVICE_TYPE_WARP_ANGLE).
Since I'm not sure, I won't touch it.
|
|
|
|
|
|
|
|
|
| |
Running mpv with the default config will now pick up ANGLE by default.
Since some think ANGLE is still not good enough for hq features,
extend the "es" option to reject GLES backends, and add it to the
opengl-hq preset.
One consequence is that mpv will by default use libswscale to convert
10 bit video to 8 bit before it reaches the VO.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I decided that I actually can't stand how vo_opengl unnecessarily puts
the video through 3 shader stages (instead of 1). Thus, dumb-mode,
which was meant to be a fallback for weak OpenGL implementations, now
becomes the default if the user's settings allow it.
The code required to check the settings isn't so wild, so I guess
it's manageable. I still hope that one day, our rendering logic can
generate ideal shader stages for this case too.
Note that in theory, dumb-mode could be re-enabled at runtime due to a
color management 3D LUT being set, so a separate dumb_mode field is
required; the dumb-mode option can't just be overwritten.
|
|
|
|
|
|
|
| |
Unfortunately, color management still can't work, because no GLES
version specified so far supports fixed-point 16 bit textures. Maybe
we could use integer textures, but these don't support filtering.
Using float textures would be another possibility.
|
|
|
|
|
| |
GL_RGB10_A2 is the best fixed-point format we can get on GLES/ANGLE for
now. (Unless we somehow switch to non-normalized integer textures.)
|
|
|
|
|
|
|
|
| |
Polar scalers use 1D textures, because they're slightly faster on some
GPUs than 2D textures. But 2D textures work too, so add support for
them.
Allows using these scalers with ANGLE.
|
|
|
|
|
|
| |
Just like commit f9a2fc59. There are probably some more such cases.
The vec2 constructor calls are probably fine, but don't bother with
confusing inconsistencies.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
While desktop GL's glTexImage2D() essentially accepts anything, GLES
is much stricter. The combinations of allowed formats/types/internal
formats are exactly specified. The GLES 3.0.4 specification lists them
in table 3.2. (The ANGLE API validation code references this table.)
The table could probably be extended into a general declarative table
about GL formats covering other uses, but this would be a big
non-trivial project, so don't bother and accept a minor degree
of duplication with other tables.
Note that the format and type do not (or should not) matter here,
because no image data is transferred to the GPU.
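As an illustration, one combination that table 3.2 does permit is
GL_RGBA8 with GL_RGBA/GL_UNSIGNED_BYTE; desktop GL would accept far
looser pairings:

    #include <GLES3/gl3.h>

    static void alloc_rgba8_texture(int w, int h)
    {
        // NULL data: storage is only allocated, so format/type are not
        // actually used for a transfer - but GLES validates them anyway.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    }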
|
|
|
|
|
|
| |
We don't only need float textures for advanced scaling - we also need
them to be filterable with GL_LINEAR. On GLES, this is not supported
until GLES 3.1, but some implementations expose them through
extensions.
|
| |
|
| |
|
|
|
|
|
|
|
| |
This makes advanced scaling sort-of work for GLES 3.0 (on ANGLE). It's
still not very advisable, as 8 bits might not be enough to avoid
debanding. (Ironically, the debanding filter can be enabled, and does
not raise any GL errors - but probably doesn't do anything useful.)
|
|
|
|
|
| |
Some GLSL dialects (GLSL ES 3.00) do not have such implicit conversions.
They have to be made floats for the sake of the shader compiler.
|
|
|
|
|
|
| |
Turns out glGetTexLevelParameter, which is missing in ANGLE, is a
GLES 3.1 function. Removing it from the list of core GLES3 functions
makes ANGLE work in GLES3 mode.
|
|
|
|
|
|
|
|
|
|
|
| |
ANGLE is a GLES2 implementation for Windows that uses Direct3D 11 for
rendering, enabling vo_opengl to work on systems with poor OpenGL
drivers and bypassing some of the problems with native GL, such as
VSync in fullscreen mode.
Unfortunately, using GLES2 means that most of vo_opengl's advanced
features will not work; however, ANGLE is under rapid development, and
GLES3 support is supposed to be coming soon.
|
|
|
|
|
|
|
|
|
|
| |
Because apparently there's no ideal universally working format.
The weird OpenGL texture format for kCVPixelFormatType_32BGRA is from:
http://stackoverflow.com/questions/22077544/draw-an-iosurface-to-an-opengl-context
(Which apparently got it from the linked Apple example code.)
|
| |
|
|
|
|
|
|
|
| |
Something goes wrong somewhere. Don't bother, it's only needed for
compatibility with our absolute baseline (GL 2.1/GLES 2).
On the other hand, we can process nv12 formats just fine.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For the sake of vaapi interop we want to use EGL, but because driver
developers are full of shit, vdpau interop will not work on EGL (even
if the driver supports EGL). The latter happens with both nvidia and
AMD Mesa drivers.
Additionally, EGL vaapi interop support can apparently only be
detected at runtime by actually using it. While hwdec_vaegl.c already
does this, it would require initializing libva on _every_ system,
which will cause libav to print an unpreventable bullshit message to
the terminal.
Try to counter these huge loads of bullshit by adding more fucking
bullshit.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We want the following behavior:
- VO probed, backend probed: only accept non-sw, fail completely
otherwise
- VO forced, backend probed: use the first non-sw, or if none is found,
fall back to the first working sw backend
- VO probed, backend forced: (I don't care about this case)
- VO forced, backend forced: just use that backend
Also, on backend probe failure the vo->probed field was left in its old
state.
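A sketch of this selection policy in C (illustrative, not mpv's
actual code):

    #include <stdbool.h>

    struct backend { bool is_sw; bool works; };

    static int pick_backend(bool vo_forced, const struct backend *b, int n)
    {
        for (int i = 0; i < n; i++)       // first pass: non-sw only
            if (!b[i].is_sw && b[i].works)
                return i;
        if (vo_forced) {                  // forced VO: fall back to sw
            for (int i = 0; i < n; i++)
                if (b[i].works)
                    return i;
        }
        return -1;                        // probed VO: fail completely
    }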
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In the display-sync, non-interpolation case, if the display refresh
rate is higher than the video framerate, we duplicate display frames
by rendering exactly the same screen again. The redrawing is cached
with an FBO to speed up the repeat.
Use glBlitFramebuffer() instead of another shader pass. It should be
faster.
For some reason, post-processing was run again on each display
refresh. Stop doing this, which should also be slightly faster. The
only disadvantage is that temporal dithering will be run only once per
video frame, but I can live with this.
One aspect is messy: clearing the background is done at the start on
the target framebuffer, so to avoid clearing twice and duplicating the
code, only copy the part of the framebuffer that contains the rendered
video. (Which also gets slightly messy - it needs to compensate for
coordinate system flipping.)
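An illustrative use of glBlitFramebuffer() for this kind of repeat,
flipping vertically by swapping the source Y coordinates (variable
names hypothetical):

    #include <GLES3/gl3.h>

    static void repeat_cached_frame(GLuint cache_fbo,
                                    int x0, int y0, int x1, int y1)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, cache_fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);  // the screen
        glBlitFramebuffer(x0, y1, x1, y0,  // source, Y swapped to flip
                          x0, y0, x1, y1,  // destination
                          GL_COLOR_BUFFER_BIT, GL_NEAREST);
    }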
|
| |
|
|
|
|
|
| |
Fixes custom shaders which define their entry point as a sample()
function.
|
|
|
|
|
|
|
|
|
|
|
| |
The nnedi3 prescaler requires a normalized range to work properly,
but the original implementation did the range normalization after
the first step of the first pass. This could lead to severe quality
degradation when debanding is not enabled for NNEDI3.
Fix this issue by passing `tex_mul` into the shader code.
Fixes #2464
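Illustrative only (the normalization idea, not the actual shader
code): tex_mul rescales raw texel values to the [0, 1] range the
network weights expect, and must be applied before the first pass
step:

    static float sample_normalized(float raw_texel, float tex_mul)
    {
        // e.g. 10-bit samples stored in a 16-bit texture need scaling up
        return raw_texel * tex_mul;
    }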
|
|
|
|
| |
Why is this stupid crap being so much a pain for no reason.
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
Pick the correct GLSL version from the GL_SHADING_LANGUAGE_VERSION
string. Might be somewhat questionable, as we expect the minor version
number not to have leading 0s.
Should help with cases when the reported GLSL version is much higher
than the equivalent of the reported GL version. This problem was
observed in combination with GL_ARB_uniform_buffer_object, which
can't be used if the declared GLSL version is too low.
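A minimal sketch of that parsing, with the same caveat about leading
zeros (e.g. "4.30 NVIDIA ..." -> 430):

    #include <stdio.h>

    static int parse_glsl_version(const char *s)
    {
        int major = 0, minor = 0;
        if (sscanf(s, "%d.%d", &major, &minor) != 2)
            return 0;
        return major * 100 + minor;  // "4.3" would wrongly yield 403
    }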
|
|
|
|
|
|
|
| |
Causes more harm than it helps. Will eventually be removed.
Also rename the "reject_emulated" field to "probing" - this is more
appropriate now.
|
| |
|
| |
|
|
|
|
|
| |
Also removed authorship information (as per the convention seen in
other files).
|
|
|
|
|
| |
Since the errors weren't used for anything other than simple
success/fail checks, I simplified things a bit.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Notes:
- Unfortunately, the only way I could find to talk to EGL from within
DRM involves linking with GBM (generic buffer management for Mesa);
see the sketch after these notes. Because of this, I'm pretty sure it
won't work with proprietary NVidia drivers, but then again, last time
I checked, NVidia didn't offer proper screen resolution for VT.
- VT switching doesn't seem to work at all. It's worth mentioning that
using vo_drm before the introduction of the VT switcher had an anomaly
where the user could switch to another VT and input text to it, while
video played on top of that VT. However, that isn't the case with
drm_egl: I can't switch to another VT during playback like this. This
makes me think that it's either a limitation coming from my firmware
or from EGL/KMS itself, rather than a bug in my code. Nonetheless, I
still left the (untestable) VT switching code in place, in case it's
useful to someone else.
- The mode_id, connector_id and device_path should be configurable for
power users and people who wish to watch videos on a nonprimary
screen. Unfortunately, I didn't see anything that would allow OpenGL
backends to register their own set of options. At the same time,
adding them to the global namespace is pointless.
- A few dozen lines could be shared with vo_drm (setting up VT
switching, most of the code behind page flipping). I don't have any
strong opinion on this.
- Sometimes I get minor visual glitches. I'm not sure if there's a
race condition of some sort, an uninitialized variable (doubtful), or
if it's a buggy driver. (I'm using integrated Intel HD Graphics 4400
with Mesa.)
- .config and .control are very minimal.
Signed-off-by: wm4 <wm4@nowhere>
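A minimal sketch of the GBM/EGL bring-up mentioned in the first note
(device path illustrative, error handling omitted):

    #include <fcntl.h>
    #include <gbm.h>
    #include <EGL/egl.h>

    static EGLDisplay egl_display_from_drm(const char *path)
    {
        int fd = open(path, O_RDWR | O_CLOEXEC);  // e.g. /dev/dri/card0
        struct gbm_device *gbm = gbm_create_device(fd);
        // Mesa's EGL accepts a gbm_device as the native display.
        return eglGetDisplay((EGLNativeDisplayType)gbm);
    }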
|
| |
|
|
|
|
| |
The old name was stupid. Very stupid.
|
| |
|
|
|
|
|
|
|
|
| |
glXCreateContextAttribsARB() by design can throw some X11 errors. We
ignore these, but we generally still print error messages to the
terminal. This was confusing/annoying users, so silence it. The stupid
part is that the Xlib error handler is global, so we have to be slightly
careful here.
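A sketch of the careful part (save/restore around the call, since the
handler is process-global):

    #include <X11/Xlib.h>

    static int ignore_xerrors(Display *dpy, XErrorEvent *ev)
    {
        return 0;  // swallow the expected errors
    }

    // XErrorHandler old = XSetErrorHandler(ignore_xerrors);
    // ... glXCreateContextAttribsARB(...) ...
    // XSync(dpy, False);         // make sure all errors arrived
    // XSetErrorHandler(old);     // restore for the rest of the process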
|
|
|
|
|
| |
We don't use any functions that have been deprecated in any later GL
or GLES versions. (This is a leftover of vo_opengl_old support.)
|
|
|
|
|
|
|
|
| |
Commit 27dc834f added it as such.
Also remove the check for glUniformBlockBinding() - it's part of an
extension, and the check for glGetUniformBlockIndex() already covers
whether the extension is fully available.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Implement NNEDI3, a neural network based deinterlacer.
The shader is reimplemented in GLSL and now supports both 8x4 and 8x6
sampling windows. This allows the shader to be licensed under LGPL2.1
so that it can be used in mpv.
The current implementation supports uploading the NN weights (up to
51kb with the placebo setting) in two different ways: via a uniform
buffer object, or hard-coded into the shader source. UBO requires
OpenGL 3.1, which only guarantees 16kb per block. But I find that 64kb
seems to be a default setting for recent cards/drivers (which nnedi3
is targeting), so I think we're fine here (with the default nnedi3
setting, the size of the weights is 9kb). Hard-coding into the shader
requires OpenGL 3.3, for the "intBitsToFloat()" built-in function.
This is necessary to precisely represent these weights in GLSL. I
tried several human-readable floating point number formats (with
really high precision, as needed for single precision floats), but
for some reason they were not working nicely; bad pixels (with NaN
values) could be produced with some weight sets.
We could also add support for uploading these weights with a texture,
just for compatibility reasons (e.g. upscaling a still image with a
low-end graphics card). But as I tested, it's rather slow even with a
1D texture (we would probably have to use a 2D texture due to
dimension size limitations). Since there is always a better choice
for NNEDI3 upscaling of still images (the vapoursynth plugin), it's
not implemented in this commit. If this turns out to be a popular
demand from users, it should be easy to add later.
For those who want to optimize the performance a bit further, the
bottlenecks seem to be:
1. overhead to upload and access these weights (in particular, the
shader code is regenerated for each frame; this happens on the CPU
though).
2. "dot()" performance in the main loop.
3. "exp()" performance in the main loop; there are various fast
implementations using bit tricks (probably with the help of the
intBitsToFloat function) - see the sketch below.
The code is tested with an nvidia card and driver (355.11), on Linux.
Closes #2230
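On point 3, the kind of bit trick meant here is e.g. a
Schraudolph-style exp() approximation, which writes the exponent
field of a float directly (illustrative, low precision, only valid
for moderate x):

    #include <stdint.h>
    #include <string.h>

    static float fast_exp(float x)
    {
        // e^x = 2^(x / ln 2); build the float's bits from that exponent.
        float t = x * 1.442695041f + 126.94269504f; // x/ln2 + bias - eps
        uint32_t bits = (uint32_t)(t * (float)(1 << 23));
        float r;
        memcpy(&r, &bits, sizeof(r));  // a CPU-side intBitsToFloat()
        return r;
    }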
|
|
|
|
|
|
|
|
|
|
|
| |
Add the Super-xBR filter for image doubling, and the prescaling
framework to support it.
The shader code was ported from the MPDN extensions project, with
modifications to process luma only.
This commit is largely inspired by code from #2266, with
`gl_transform_trans()` authored by @haasn taken directly.
|
|
|
|
|
| |
This commit marks the image size variables as temporary, and renames
them in order to prevent any potential confusion in the future.
|
|
|
|
|
|
|
|
|
| |
next_vsync/prev_vsync was only used to retrieve the vsync duration. We
can get this in a simpler way.
This also removes the vsync duration estimation from vo_opengl_cb.c,
which is probably worthless anyway. (And once interpolation is made
display-sync only, this won't matter at all.)
|
|
|
|
| |
MXE uses an all-lowercase convention for MS headers.
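For example (the uppercase spelling would only resolve on a
case-insensitive filesystem):

    #include <windows.h>   /* MXE ships this header as lowercase */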
|
|
|