Commit 0e0b87b6f3297 fixed a case where dropped packets did not trigger
further work correctly. But it also made trivial uses of --lavfi-complex
freeze. The reason is that the meaning of DATA_AGAIN was overloaded: the
decoders meant that they should be called again, while lavfi.c meant
that other outputs needed to be checked again. Rename the latter meaning
to DATA_STARVE, which means that the current input will deliver no more
data until "other" work has been done (like reading other outputs, or
feeding input).
The decoders never return DATA_STARVE, because they don't get input from
the player core (instead, they get it from the demuxer directly, which
is why they can still return DATA_WAIT).
Also document the DATA_* semantics in the enum.
Fixes #4746.
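To illustrate the semantics described above, a hypothetical sketch of such
an enum (the set of values and the comments only paraphrase this message,
they are not copied from the source):

    enum data_status {
        DATA_OK,      /* new data was produced */
        DATA_WAIT,    /* waiting on an external event, e.g. the demuxer */
        DATA_AGAIN,   /* call the same function again immediately */
        DATA_STARVE,  /* this input delivers no more data until "other"
                       * work is done (reading other outputs, feeding input) */
        DATA_EOF,     /* no more data will ever arrive */
    };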
|
These were replaced by ra equivalents, and with the recent changes, all
of them became fully unused.
|
Retrieve the depth for each component and internal texture format
separately. Only for 8 bit per component textures do we assume that all
bits are used (or else we would, in my opinion, create too many probe
textures).
Assuming 8 bit components are always fully used also fixes operation in
GLES3, where we assumed that each component had -1 bits depth, and thus
all UNORM formats were considered unusable. On GLES, the function to
check the real bit depth is not available. Since GLES has no 16 bit
UNORM textures at all, except with the MPGL_CAP_EXT16 extension, just
drop the special condition for it. (Of course GLES still manages to
introduce a funny special case by allowing GL_LUMINANCE, but not
defining GL_TEXTURE_LUMINANCE_SIZE.)
Should fix #4749.
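A rough sketch of the probing logic described here (assuming desktop GL,
where glGetTexLevelParameteriv() is available; the helper name and control
flow are illustrative, not the actual code):

    #include <GL/gl.h>

    /* Returns the usable bit depth of the red component of the currently
     * bound probe texture. 8 bit components are assumed to be fully used,
     * so no probe texture is created for them. */
    static int probe_component_depth(int nominal_bits)
    {
        if (nominal_bits == 8)
            return 8;
        GLint bits = 0;
        /* Not available on GLES; callers would skip probing there. */
        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &bits);
        return bits > 0 ? bits : -1; /* -1: unknown, format treated as unusable */
    }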
|
Getting mp_pass_perf seriously requires including vo.h???
|
Runtime untested, because I get this:
[vo/rpi] Could not get DISPMANX objects.
This happened even when building older git versions, and on an RPI image
that hasn't changed in recent years. I don't know how to make this
POS work again, so I guess if there's a bug in the new code, it will
remain broken.
|
So that nothing accidentally accesses these.
|
This does two separate rather intrusive things:
1. Make the hwdec context (which does initialization, provides the
   device to the decoder, and other basic state) and frame mapping
   (getting textures from a mp_image) separate. This is more
   flexible, and you could map multiple images at once. It will
   help remove some hwdec special-casing from video.c.
2. Switch all hwdec API use to ra. Of course all code is still
   GL specific, but in theory it would be possible to support other
   backends. The most important change is that the hwdec interop
   returns ra objects, instead of anything GL specific. This removes
   the last dependency on GL-specific header files from video.c.
I'm mixing these separate changes because both require essentially
rewriting all the glue code, so better to do them at once. For the same
reason, this change isn't done incrementally.
hwdec_ios.m is untested, since I can't test it. Apart from superficial
mistakes, this also requires dealing with Apple's texture format
fuckups: they force you to use GL_LUMINANCE[_ALPHA] instead of GL_RED
and GL_RG. We also need to report the correct format via ra_tex to
the renderer, which is done by find_la_variant(). It's unknown whether
this works correctly.
hwdec_rpi.c as well as vo_rpi.c are still broken. (I need to pull my
RPI out of a dusty pile of devices and cables, so, later.)
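A minimal sketch of the split described in point 1 (struct and field names
here are hypothetical, chosen only to illustrate the separation of the
shared context from per-image mapping):

    /* One shared context: initialization, device handle for the decoder. */
    struct hwdec_ctx {
        struct ra *ra;        /* rendering abstraction instead of raw GL */
        void *decoder_device; /* device handle provided to the decoder */
    };

    /* One mapper per mapped image; several can exist at the same time. */
    struct hwdec_mapper {
        struct hwdec_ctx *ctx;
        struct mp_image *src;  /* the hw frame currently mapped */
        struct ra_tex *tex[4]; /* per-plane textures handed to the renderer */
    };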
|
Probably explains quality issues in some cases.
|
Just remove one callback, and fold the functionality into the other one.
RPI will still not compile, so the hwdec_rpi.c changes are untested.
|
Apparently this was broken by the "ctx->hwdec" check in the if condition
guarding the destroy call: "ctx->hwdec = NULL;" was moved up earlier,
which made the destroy call always dead code.
This should probably be refcounted or so, although that could make it
worse as well. For now, add a flag for whether the device should be
destroyed.
Fixes #4735.
|
As seen in hwdec_ios.m, it insists on using the legacy luminance alpha
formats for mapped textures.
|
Less flexible than GL_TIMESTAMP but supported by more platforms. This
will mean that nested queries have to be detected and silently omitted,
but oh well. Not much use for them anyway.
Fixes #4721.
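Presumably this refers to GL_TIME_ELAPSED queries; a sketch of how nested
queries could be detected and silently skipped (the depth counter and the
function names are illustrative only):

    #include <GL/gl.h>
    #include <GL/glext.h>

    static int timer_depth;

    /* Only one GL_TIME_ELAPSED query can be active at a time, so nested
     * start/stop pairs are detected and silently omitted. */
    static void timer_start(GLuint query)
    {
        if (timer_depth++ > 0)
            return;              /* nested query: silently omit */
        glBeginQuery(GL_TIME_ELAPSED, query);
    }

    static void timer_stop(void)
    {
        if (--timer_depth > 0)
            return;              /* still inside the outer query */
        glEndQuery(GL_TIME_ELAPSED);
    }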
|
This reverts commit 0ce3dce03aaea3e777ebf68504d5afb3f5e3f9e1.
We actually explicitly add read-only frames in some of the hwaccel code
via mp_image_new_custom_ref(), which sets AV_BUFFER_FLAG_READONLY. It's
probably better to keep it this way.
Fixes #4652 (and some related issues with D3D).
|
This is really obnoxious. --include parses into the default profile, but
when used on the command line, it never got applied. So we have to apply
it when the exact conditions for this are met.
Fixes #4673.
|
If --demuxer-mkv-probe-start-time=no is used, and a seek is triggered on
start, then cluster_start will be 0, and the packet reading code will
print an error message about not finding valid data. This fixes itself
since it invokes the resync code, but it's still pretty ugly. Avoid this
by always initializing cluster_start.
|
Broken by a434892208.
|
Just the audio resync code in its normal state: buggy. This time,
AD_NO_PROGRESS was handled about the same as AD_WAIT. But AD_NO_PROGRESS
means the decoder didn't output data even though input is still readily
available.
This happened in particular when the timeline code was used (potentially
skipping many packets), and thus should fix #4688.
|
Caused by a simple integer overflow.
Fixes #4650.
|
Noticed in #4717, although the issue might be about something else.
|
It's an ancient X11 protocol extension that apparently nobody uses
anymore (desktop environments in particular have replaced it with
equally bad protocols that require tons of dependencies). Users keep
complaining about it being a required dependency.
The impact is likely minimal to none.
Fixes #4706 and other annoying people.
|
This replaces the previous commit, which had the same intentions. This
time, with proper formatting (no tabs in code).
|
Fixes #4626. Previously removed because the original smi entry was added
by someone who did not agree to LGPL relicensing. I'm not sure if the
original change was copyrightable, but this commit for sure does not
fall under that author's copyright.
|
Remove dead declarations. Move a macro only used in wasapi_utils.c closer
to its use. Rearrange declaration order.
|
There were too many functions within functions, too much going on in if
clauses, and duplicated code. Fix it.
|
Any bad HRESULTs should have been printed already, and lots of failure
modes don't have an HRESULT, leading to awkward hr = E_FAIL business.
This also checks the exit status of GetBufferSize in the align hack. A
final fatal message is added if either of the retry hacks fails.
|
This also fixes a double free in vo_opengl_cb.c.
|
Fixes #4720, I think.
|
Also fix a typo in ra_gl.c. Too greedy for a separate commit.
|
Generic description of pixel formats is hard. In this case, the Apple
special format for packed YUV could have been interpreted as an RGB
format with funny packing.
|
Move multiple GL-specific things from the renderer to other places like
vo_opengl.c, vo_opengl_cb.c, and ra_gl.c.
The vp_w/vp_h parameters to gl_video_resize() make no sense anymore, and
are implicitly part of struct fbodst.
Checking the main framebuffer depth is moved to vo_opengl.c. For
vo_opengl_cb.c, a depth of 8 bits is always assumed. The API user now has
to override this manually. The previous heuristic didn't make much sense
anyway.
The only remaining dependency on GL is the hwdec stuff, which is harder
to change.
|
Don't leak the buffer if glGetProgramBinary() fails.
|
Completely unnecessary, we can just update the uniforms immediately
after creating the program. In theory, for GLSL 4.20+, we could even
skip this, but oh well.
|
Also fixes an issue where 1 << 5 was used twice, probably because the
terrible formatting obscured this bug.
|
No reason not to.
|
The vp_w/vp_h variables and parameters were not really used anymore
(they were redundant with ra_tex w/h) - but vp_h was still used to
identify whether rendering should be done mirrored.
Simplify this by adding a fbodst struct (some bad naming), which
contains the render target texture and some parameters for how it should
be rendered to (for now only flipping). It would not be appropriate to
make this a member of ra_tex, so it's a separate struct.
This introduces a weird regression for the first frame rendered after
interpolation is toggled at runtime, but seems to work otherwise. This
is possibly due to the change that blit() now mirrors, instead of just
copying. (This is also why ra_fns.blit is changed.)
Fixes #4719.
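Roughly, such a struct might look like this (the field names are guesses
based on the description above, not the actual definition):

    #include <stdbool.h>

    struct ra_tex; /* from the ra abstraction */

    /* A render target plus how it should be rendered to. Deliberately a
     * separate struct, not a member of ra_tex. */
    struct fbodst {
        struct ra_tex *tex; /* the target texture */
        bool flip;          /* render vertically mirrored */
    };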
|
This allows us to integrate PBOs and SSBOs into the same abstraction,
with the potential to easily add UBOs if the need arises.
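A hedged sketch of what such a unified buffer abstraction could look like
(the type and field names are illustrative, not necessarily the real
interface):

    #include <stdbool.h>
    #include <stddef.h>

    enum buf_type {
        BUF_TEX_UPLOAD,     /* staging buffer for texture uploads (GL: PBO) */
        BUF_SHADER_STORAGE, /* shader storage buffer (GL: SSBO) */
        BUF_UNIFORM,        /* uniform buffer (GL: UBO), if ever needed */
    };

    struct buf_params {
        enum buf_type type;
        size_t size;
        bool host_mutable;        /* CPU updates after creation allowed */
        const void *initial_data; /* optional initial contents */
    };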
|
Fixes #4014
|
Unbreaks segmented DASH with the change in
https://github.com/rg3/youtube-dl/commit/1141e9104, which made each
segment URL use only a relative path from fragment_base_url, with a
different key.
|
When using dumb mode, we can actually redraw a frame without uploading
it. Marking this as fresh as well results in unpredictable pass
behavior, which is confusing and makes debugging harder. So mark it as a
redraw instead, in that case.
|
Since the GL *gl is no longer needed for the timers, we can get rid of
the sc->gl dependency. This requires moving a utility function (which is
not GL-specific anyway) out of gl_utils.h and into utils.h.
|
In the past, this always measured the per-shader execution times of the
individual OSD parts, which was thrown off because the shader was reused
anyway. (And apparently recording the OSD shader execution times was
removed completely, probably because they were so unreliable anyway.)
Since ra_timer no longer has the restriction of not allowing timers to
run concurrently, we can just wrap the entire OSD block inside a single
osd_timer now, and record that. (Technically, this can still be off when
using --blend-subtitles=video/yes and showing a full-screen OSD at the
same time. Maybe this can be done better?)
|
In order to prevent code duplication and keep the ra abstraction as
small as possible, `ra` only implements the actual timer queries; it
does not do pooling/averaging of the results. That is instead moved
to a ra-neutral struct timer_pool in utils.c.
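A simplified sketch of this division of labor (names and sample count are
illustrative; the actual pooling code lives in utils.c):

    #include <stdint.h>

    #define TIMER_SAMPLES 256

    /* ra only starts/stops a raw timer query and reports single results;
     * the pool keeps a ring of past samples and derives averages. */
    struct timer_pool {
        uint64_t samples[TIMER_SAMPLES]; /* nanoseconds per recorded run */
        int idx, count;
    };

    static void timer_pool_add(struct timer_pool *pool, uint64_t ns)
    {
        pool->samples[pool->idx] = ns;
        pool->idx = (pool->idx + 1) % TIMER_SAMPLES;
        if (pool->count < TIMER_SAMPLES)
            pool->count++;
    }

    static uint64_t timer_pool_average(struct timer_pool *pool)
    {
        if (!pool->count)
            return 0;
        uint64_t sum = 0;
        for (int i = 0; i < pool->count; i++)
            sum += pool->samples[i];
        return sum / pool->count;
    }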
|
This code is pretty much for the sake of vo_opengl_cb API users. It
resets certain state that either the user or our code doesn't reset
correctly. This is somewhat outdated: with GL implicit state being
so awfully large, it seems more reasonable to require that any code
restores the default state when returning to the caller. Some
exceptions are defined in opengl_cb.h.
|
Now all GL-specifics of shader compilation are abstracted through ra.
Of course we still have everything hardcoded to GLSL - that isn't going
to change.
Some things will probably change later - in particular, the way we pass
uniforms and textures to the shader. Currently, there is a confusing
mismatch between "primitive" uniforms like floats, and others like
textures.
Also, SSBOs are not abstracted yet.
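One conceivable way to eventually unify "primitive" uniforms and textures
behind a single input description (purely illustrative, not the current
interface):

    struct ra_tex; /* from the ra abstraction */

    enum input_type {
        INPUT_FLOAT,
        INPUT_VEC3,
        INPUT_MAT4,
        INPUT_TEXTURE,
    };

    struct shader_input {
        enum input_type type;
        const char *name;         /* uniform name in the GLSL source */
        union {
            float f[16];          /* large enough for a mat4 */
            struct ra_tex *tex;
        } v;
    };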
|
Instead of having a mutable ra_tex field (and the only one), move the
flag to struct ra, since we have only 2 tex_upload user calls anyway,
and both want the same PBO behavior. (At first I considered making it
a RA_TEX_UPLOAD_ flag, but why bother. PBOs are a terribly GL-specific
thing, so we can't expect a reasonable abstraction of it anyway.)
|
This requires a silly extension to ra_fns.tex_upload: since the OSD
texture can be much larger than the actual OSD image data to upload, a
mechanism for uploading only to a small part of the texture is needed.
Otherwise, we'd have to realloc/copy the data, just to pad it, and then
pay for uploading the padding too.
The RA_TEX_UPLOAD_DISCARD flag is not interpreted by GL (not sure how
you'd tell GL about this), but it clarifies the API and might be
helpful if we support other backend APIs in the future.
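A sketch of what the extended upload entry point might look like (the
parameter names and exact signature are illustrative; only the ideas of a
destination rectangle and a discard flag come from this message):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct ra; struct ra_tex; struct mp_rect;

    #define RA_TEX_UPLOAD_DISCARD (1 << 0) /* old contents need not survive */

    /* Upload src (with the given stride) into only the sub-rectangle rc of
     * tex; rc == NULL would mean the whole texture. */
    bool tex_upload(struct ra *ra, struct ra_tex *tex,
                    const void *src, ptrdiff_t stride,
                    struct mp_rect *rc, uint64_t flags);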
|
Probably. Untested.
|
Actually GL-specific parts go into gl_utils.c/h, and the shader cache
(gl_sc*) into shader_cache.c/h.
No semantic changes of any kind, except that the VAO helper is made
public again as part of gl_utils.c (all while gl_utils.c itself is meant
to be included by GL-specific code).
|
Will make the ra layer _slightly_ simpler.
|
Another "small" step towards removing GL dependencies from the renderer.
This commit generally passes ra_tex objects instead of GL FBO integer
IDs to various rendering functions. video.c still manually binds the
FBOs when calling shaders.
This also happens to fix a memory leak with output_fbo.
|
Further work removing GL dependencies from the actual video renderer,
and moving them into ra backends.
Use of glInvalidateFramebuffer() falls away. I'd like to keep this, but
it's better to readd it once shader runs are in ra.
|
This is currently limited to Android. Its stdlib contains the things
that mpv's POSIX check checks for, but unfortunately not glob().
This fixes Android compilation, broken in 70a70b9da.
|
* If we have glob() supported, we have `HAVE_GLOB = 1`.
* If we have specifically POSIX glob(), we have
  `HAVE_GLOB_POSIX = 1`.
* If we have specifically Win32 glob(), we have
  `HAVE_GLOB_WIN32 = 1`.
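Usage then looks roughly like this (the Win32 header path is a placeholder,
not the actual file name):

    #if HAVE_GLOB_POSIX
    #include <glob.h>                   /* the system glob() */
    #elif HAVE_GLOB_WIN32
    #include "osdep/glob-placeholder.h" /* placeholder for the Win32 replacement */
    #endif

    #if HAVE_GLOB
    /* glob() can be used here, whichever implementation provides it */
    #endif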