Set up a dummy image for the given image params, and get the plane sizes
from that. Admittedly not much of a simplification, but conceptually
it's simpler and less error-prone, as the image layout is guaranteed to
be the same, rather than essentially duplicating the way it is
determined.
This is a leftover from the times when we supported padded/non-NPOT
textures. The distinction is not useful anymore, and the theoretical
support for different sizes is most likely buggy and unmaintained, so
remove it.
Also remove the tex_ prefix wherever it appears.
Use mp_image_copy() instead of copying manually. (This function checks
whether the destination is regarded as writeable. Here it is not,
because the destination is the source image with changed pointers, so
refcounting has to be removed from the destination image by resetting
mpi->bufs.)
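For illustration, the pattern boils down to something like the following hedged C sketch. The field and function names follow mpv's mp_image conventions, but the helper itself and the pointer adjustments are hypothetical, not the actual change:

```c
#include "video/mp_image.h"   // mpv-internal header (assumed)

// Hypothetical helper: "dst" is "src" with changed pointers, so its
// buffer refs must be cleared before mp_image_copy() will consider it
// writeable.
static void copy_via_alias(struct mp_image *src)
{
    struct mp_image dst = *src;
    for (int n = 0; n < MP_MAX_PLANES; n++)
        dst.bufs[n] = NULL;           // remove refcounting from the alias
    // ... adjust dst.planes[n] / dst.stride[n] here as needed ...
    mp_image_copy(&dst, src);         // checked copy instead of manual loops
}
```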
This shouldn't be needed anymore. Textures are now always allocated with
the exact size. Any padding (including non-NPOT support) is gone. The
texture sizes will always match the memory plane sizes.
Drop the unused and forgotten "npot" field from the option struct too.
Forgotten in the previous commit.
Undo 292266f2. Reapply 3e12e79b.
An additional copy is not really justified, as it could reduce
performance. On the other hand, we can force API users to create
a GL 3.x context.
eglTerminate() affects the EGLDisplay in all threads. Since the RPI
firmware apparently only ever uses EGL_DEFAULT_DISPLAY, this will trash
all other contexts on other threads in the same process.
Thus we don't call eglTerminate() at all, at least on RPI. Call
eglReleaseThread() instead (which may or may not be a NOP).
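A minimal sketch of the resulting teardown order, using the standard EGL API; the surrounding function is illustrative, not the actual patch:

```c
#include <EGL/egl.h>

// Illustrative teardown: per-context state is destroyed, but
// eglTerminate() is skipped on RPI because the firmware always uses
// EGL_DEFAULT_DISPLAY, so terminating it would invalidate contexts on
// every other thread in the process.
static void rpi_egl_uninit(EGLDisplay display, EGLContext context)
{
    eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglDestroyContext(display, context);
    // no eglTerminate(display) here
    eglReleaseThread();   // may or may not be a NOP, but is thread-local
}
```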
yuva444p worked, yuva420p didn't. This happened because the chroma pass
discards the alpha plane, which is referenced by the alpha blend code
later.
Add a terrible hack to work around this, actually using the same hack as
was used for the Y plane. (A terrible hack for terrible code.)
Use the append-to-playlist functionality if Shift is pressed while
dropping.
If the drag and drop action is anything other than
XdndActionCopy, append the dropped files rather than
replacing the existing playlist. With most file managers,
this will mean at least pressing shift while dropping.
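The decision reduces to a single comparison on the XDND action atom; a hedged C sketch with standard Xlib calls, where the dispatch function is a hypothetical stand-in for mpv's event handling:

```c
#include <X11/Xlib.h>
#include <stdbool.h>

void drop_files(char **files, int num_files, bool append); // hypothetical

// Hypothetical XdndDrop handling: anything other than XdndActionCopy
// means "append" rather than "replace". Shift-dropping in most file
// managers requests XdndActionMove, which lands in the append branch.
static void handle_drop(Display *dpy, Atom action,
                        char **files, int num_files)
{
    Atom copy = XInternAtom(dpy, "XdndActionCopy", False);
    drop_files(files, num_files, action != copy);
}
```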
This puts in place the machinery to merely append dropped files to the
playlist instead of replacing the existing playlist. In this commit, all
front-ends set this to false, preserving the existing behaviour.
This might fix some problems when framestepping with interpolation
enabled. The problem here is that we want to show the non-interpolated
frame while paused. Framestepping is like unpausing the video for a
frame, and then pausing again. This draws an interpolated frame, and
redrawing on pausing is supposed to take care of this.
This possibly didn't always work, because vo->want_redraw is not checked
by the vo_control() code path. So wake up the VO thread (which takes
care of servicing redraw requests, kind of) explicitly.
The correct solution is getting rid of the publicly writable want_redraw
field and replacing it with a new vo_request_redraw() function, but this
can come later.
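The suggested future API might look like the sketch below. vo_wakeup() and the want_redraw field are mpv names taken from the message; the body is illustrative:

```c
#include "video/out/vo.h"   // mpv-internal header (assumed)

// Hypothetical replacement for writing vo->want_redraw directly:
// request the redraw and wake the VO thread in one place, since
// vo_control() never checks the flag on its own.
void vo_request_redraw(struct vo *vo)
{
    vo->want_redraw = true;
    vo_wakeup(vo);   // the VO thread services redraw requests when awake
}
```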
Should not happen, but since we don't control decoder video surface
allocation, anything could happen, and the code should be able to deal
with it. Untested.
Fixes #2237.
Leftover from 3245bfef.
Keep glSwapInterval(0) on to avoid blocking on GL calls, but wait for
frame callbacks so we play nicely with the compositor.
Provides a small API for vo_wayland with which one can request a frame
callback and then wait for it.
This will make vo_wayland play video smoothly.
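The frame-callback pattern in question looks roughly like this, using the standard wayland-client API; the surrounding names are illustrative, not the actual vo_wayland code:

```c
#include <stdbool.h>
#include <stdint.h>
#include <wayland-client.h>

// The compositor fires the "done" event when it is ready for the next
// frame; until then the caller holds off instead of blocking in GL.
static void frame_done(void *data, struct wl_callback *cb, uint32_t time)
{
    wl_callback_destroy(cb);
    *(bool *)data = true;
}

static const struct wl_callback_listener frame_listener = { frame_done };

// Request a callback for the next frame; the caller then polls *done.
static void request_frame_callback(struct wl_surface *surface, bool *done)
{
    *done = false;
    struct wl_callback *cb = wl_surface_frame(surface);
    wl_callback_add_listener(cb, &frame_listener, done);
}
```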
This makes Mesa not wait for the frame callback internally.
This significantly reduces the amount of noticeable flashing when using
tscale kernels with negative lobes, by cutting them off completely.
I'm not sure if this has any negative effects. It needs a bit of
subjective testing over a period of time, so I just made it an option.
Fixes #2155.
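"Cutting them off" amounts to clamping the temporal weights; a purely illustrative sketch, in which the option flag and the kernel lookup are hypothetical, not the actual shader logic:

```c
#include <math.h>
#include <stdbool.h>

double kernel_eval(double x);   // hypothetical tscale kernel lookup

// Hypothetical: with the new option enabled, negative lobes are clamped
// to zero, which removes the flashing at the cost of some ringing
// behavior of the kernel.
static double tscale_weight(double x, bool clamp_negative)
{
    double w = kernel_eval(x);
    return clamp_negative ? fmax(w, 0.0) : w;
}
```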
And add an option to enable it.
Most likely doesn't matter much.
This is a cosmetic change, because the value is exactly the same. (The
old code just duplicates the logic, sort of.)
The OSD overlay wasn't initialized, so it remained solid black until the
first time a subtitle line or an OSD element became visible.
Since vo_rpi uses MMAL for video output, which is completely
independent from the GLES overlay, we can simply not redraw the
GLES screen if subtitles do not change.
(As a further optimization, the dispmanx overlay could be removed
if nothing is visible. But I'm not sure whether adding and removing the
overlay frequently is a good idea for performance, so this could
just as well go the other way.)
Slightly faster than using the dispmanx mess (perhaps to a large extent
due to the rather stupid, C-only, unoptimized ASS->RGBA blending code).
Do this by reusing vo_opengl's subtitle renderer and vo_opengl's RPI
backend.
To be used by vo_rpi.c. No functional changes.
This affects vo_opengl_cb in particular: it'll most likely auto-load
VDA, and then the VideoToolbox decoder won't work. And everything fails.
This is mainly caused by FFmpeg using separate pixfmts for the _same_
thing (CVPixelBuffers), simply because libavcodec's architecture demands
that hwaccel backends are selected by pixfmts. (Which makes no sense,
but now we have the mess.)
So instead of duplicating FFmpeg's misdesign, just change the format to
our own canonical one on the image output by the decoder. Now the GL
interop code is exactly the same for VDA and VT, and we use the VT name
only.
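The normalization amounts to a one-line relabel after decoding. A hedged sketch with mpv-style identifiers — treat the format constants and the helper as assumptions:

```c
#include "video/mp_image.h"   // mpv-internal header (assumed)

// Both FFmpeg pixfmts wrap the same thing (a CVPixelBufferRef), so we
// can simply restamp the decoder's output with our canonical format.
static void canonicalize_osx_hwdec(struct mp_image *img)
{
    if (img->imgfmt == IMGFMT_VDA)                  // assumed constant
        mp_image_setfmt(img, IMGFMT_VIDEOTOOLBOX);  // assumed constant
}
```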
Can happen in obscure situations and with hw decoding disabled.
Apparently this is sufficient.
Small bug, much pain.
It's unnecessary and slow. Doesn't help too much, though.
We must not use the frame PTS in any case. Here it fails because nothing
sets up a wakeup for it. This typically caused the player to apparently
"pause" until something else woke it up, like moving the mouse or other
events.
Fixes #503
This VO is special because it normally doesn't block on vsync, but can
be made to do so. Supposedly the MMAL video output API merely sets a
"current frame" field when sending an output frame, and the firmware
will pick up whatever frame that field is set to at the time of a
vsync.
If this mode is enabled, the player tries to strictly synchronize video
to display refresh. It will adjust playback speed to match the display,
so if you play 23.976 fps video on a 24 Hz screen, playback speed is
increased by approximately 1/1000. Audio will be resampled to keep up
with playback.
This is different from the default sync mode, which syncs video to
audio, with the consequence that video might skip or repeat a frame once
in a while to keep up with audio.
This is still unpolished. There are some major problems as well; in
particular, mkv VFR files won't work well. The reason is that Matroska
is terrible and rounds timestamps to milliseconds. This makes it rather
hard to guess the framerate of a section of video that is playing. We
could probably fix this by just accepting jittery timestamps (instead
of explicitly disabling the sync code in this case), but I'm not ready
to accept such a solution yet.
Another issue is that we are extremely reliant on OS video and audio
APIs working in an expected manner, which of course is not too often
the case. Consequently, the new sync mode is a bit fragile.
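The 1/1000 figure from the 23.976-on-24 Hz example checks out exactly, as this standalone calculation shows (illustrative only, not mpv's code):

```c
#include <stdio.h>

int main(void)
{
    double video_fps   = 24000.0 / 1001.0;   // NTSC film rate, ~23.976
    double display_fps = 24.0;
    double speed = display_fps / video_fps;  // 1001/1000 = 1.001 exactly
    printf("playback speed: %.6f (%+.3f%%)\n", speed, (speed - 1) * 100);
    // Audio is resampled by the same factor to keep A/V in sync.
    return 0;
}
```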
VDA is being deprecated in OS X 10.11, so this is needed to keep hwdec
working. The code needs libavcodec support, which was added recently (to
FFmpeg git; Libav doesn't support it).
Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>
Pretty stupid: vo_get_vsync_interval() returns a negative value if the
display FPS is unknown (e.g. xrandr support not compiled in), and the
later check for a value below 0 fails because the value was assigned to
an unsigned int.
Regression since commit e3d85ad4.
Also, fix some comments in vo.c.
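The bug reduces to the classic signed-to-unsigned trap, reproduced here in isolation (not the original code; the stand-in function mimics the negative sentinel):

```c
#include <stdio.h>

// Stand-in for vo_get_vsync_interval(): negative means "unknown".
static long vsync_interval(void) { return -1; }

int main(void)
{
    unsigned int bad = vsync_interval();  // -1 silently wraps to UINT_MAX
    if (bad < 0)                          // always false for unsigned
        printf("never reached\n");

    long good = vsync_interval();         // keep it signed until checked
    if (good < 0)
        printf("display FPS unknown, using fallback\n");
    return 0;
}
```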
When full_redraw is set, we always need to take the draw_image path. If
it's not set, we can try VOCTRL_REDRAW_FRAME (and fallback to draw_image
if that fails).
Fixes #2184.
If the framedrop count happens to be incremented with
vo_increment_drop_count() during rendering, these increments were
counted twice, because these events also set in->dropped_frame.
No functional changes.
Revert "win32: more wchar_t -> WCHAR replacements"
Revert "win32: replace wchar_t with WCHAR"
Doing a "partial" port of this makes no sense anymore from my
perspective. Revert the changes, as they're confusing without
context, maintenance, and progress. These changes were a bit
premature anyway, and might actually cause other issues
(locale neutrality etc., as was pointed out).
This was essentially missing from commit 0b52ac8a.
Since L"..." string literals have the type wchar_t[], we can't use them
for UTF-16 strings. Use C11 u"..." string literals instead. These have
the type char16_t[], but we simply assume char16_t is the same
underlying type as WCHAR. In practice, they're both unsigned short.
For this reason use -std=c11 on Windows. Since Windows is a "special"
environment (we require either MinGW or Cygwin), we don't need to worry
too much about compiler compatibility.
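The type distinction, in a self-contained form; the cast in the comment mirrors the commit's stated assumption that char16_t and WCHAR share the same 16-bit underlying type, and the W-API name is a placeholder:

```c
#include <stdio.h>
#include <uchar.h>   // char16_t, C11
#include <wchar.h>   // wchar_t

int main(void)
{
    // L"..." has type wchar_t[] -- 32-bit on many Unix ABIs, so not UTF-16.
    // u"..." has type char16_t[] -- always 16-bit code units.
    static const char16_t name[] = u"example";
    printf("wchar_t: %zu bytes, char16_t: %zu bytes\n",
           sizeof(wchar_t), sizeof name[0]);
    // On Windows one would then cast: SomeFunctionW((const WCHAR *)name);
    return 0;
}
```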
A user complains that it leads to the dxva driver failing, leading to
messages like this:
[ffmpeg/video] h264: Failed to execute: 0x8007000e
[ffmpeg/video] h264: hardware accelerator failed to decode picture
Reportedly, this happens only with vo_direct3d, not with vo_opengl. The
only difference is that vo_direct3d attempts to share the D3D device
with the decoder. Possibly the error is that the device in the VO is not
created with D3DCREATE_MULTITHREADED. Change this.
Probably fixes #2178.
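The change boils down to one extra behavior flag at device creation; a hedged sketch using the C COM macros from d3d9.h, where everything except D3DCREATE_MULTITHREADED is a placeholder:

```c
#include <d3d9.h>

// Illustrative: d3d9, window, and present_params are assumed to be set
// up elsewhere. The fix is adding D3DCREATE_MULTITHREADED so the device
// can safely be shared with the DXVA decoder's thread.
static HRESULT create_device(IDirect3D9 *d3d9, HWND window,
                             D3DPRESENT_PARAMETERS *present_params,
                             IDirect3DDevice9 **device)
{
    return IDirect3D9_CreateDevice(d3d9, D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                   window,
                                   D3DCREATE_SOFTWARE_VERTEXPROCESSING |
                                   D3DCREATE_MULTITHREADED,   // the fix
                                   present_params, device);
}
```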
While at least MinGW, Cygwin, and MSVC actually use a 16-bit wchar_t,
Midipix will have a 32-bit wchar_t. In that context, using WCHAR
instead is more portable.
This affects only non-MinGW parts, so not all uses of wchar_t need to
be changed. For example, terminal-win.c won't be used on Midipix at
all. (Most of io.c won't either, so the search & replace here is more
than necessary, but also not harmful.)
(Midipix is not usable yet, so this is just preparation.)
They are not entirely full-featured in GLES 2, but they appear to
provide all we need. Thus we can enable them.
Reverse engineered from tvservice.c.
Instead of special-casing hwdec in the place where the video textures
are used, just set the textures in the image upload function. The
renderer code doesn't need to know whether hwdec interop is used at all.
This detected whether an OpenGL context still provided legacy OpenGL
even when the OpenGL version was modern (>= 3.0). It was actually only
needed for vo_opengl_old, because that relied on legacy functions. Since
vo_opengl_old is gone, this code isn't needed either.
(Also, the removed comment about OpenGL 3.0 was wrong: you could just
query GL_CONTEXT_FLAGS and see whether the forward-compatible bit was
set.)
This resulted in wrong behavior for values of scale-param1 between 0.0
and 0.5 (not inclusive).
This reverts commit fb8d15836695e883355c5ec6ff8463e7bbf39461.
Reallocating the FBOs on every resize is very slow. It affects resizing
the window, as well as changing the video size itself with e.g.
panscan. Since the original change was done based on a single user
complaint, but the change itself caused a lot of complaints, we decided
to just revert it.
Instead of calling it "future frames" and adding or subtracting 1 from
it, always call it "requested frames". This simplifies it a bit.
MPContext.next_frames had 2 added to it; this was mainly to ensure a
minimum size of 2. Drop it and assume VO_MAX_REQ_FRAMES is at least 2;
together with the other changes, this can be the exact size of the
array.
This was requested multiple times by users, and it's not hard to
implement and/or maintain.
This was supposed to have changed back when oversample was reintroduced
in 3007250. Fixes #2155.
I still have no idea why this is needed, maybe some weird off-by-one
in some shitty driver? Either way, the difference for a working setup
shouldn't be too major, the most noticeable effect would be somewhat worse
performance when resizing the video during playback with interpolation
enabled using the mouse.
That's a specific enough side effect for me to not care as much about it.
Fixes #1814.
There are some situations when redrawing is requested, but the current
frame was deleted. This could happen when switching e.g. hw decoding
mid-stream.
Separate uploading/drawing and fix the condition.
Just avoid some code duplication. Also, gl_video_set_options() having a
queue size output parameter is weird at best. While I don't appreciate
that this commit suddenly requires gl_video.c to deal with vo.c directly
in a special case, it's simply the best place to put this function.
That was 2 too many.
Also fix a documentation comment.
The VO will be provided with future frames even if the format changes
mid-stream. This caused a crash if these frames were actually used (i.e.
interpolation mode was enabled).
Fixes a crash when deinterlacing is toggled during playback, and the
deinterlacer changes the stream format (as it can happen e.g. if the
decoder outputs nv12, which in turn happens with hw decoding).
(On a side note, future frames are always non-NULL. Also, the current
frame is of course always in the correct format.)
After recent changes, there is no reason why gl_video_set_image() should
exist anymore. So merge it back into gl_video_upload_image().
Outputting the detected OpenGL features was useless and redundant with
the extension loading output.
Also, remove MPGL_CAP_3D_TEX from OpenGL(ES) 3.0. This block didn't
include the glTexImage3D function, so that was pointless and couldn't
have worked. The OpenGL 2.1 block does it correctly.
VDPAU has explicit support for rotating surfaces, and as it is far less
expensive than using the normal rotation filter (which would require
reading video frames back into system memory), it is desirable to
implement the VO rotation capability.
To do this, we need to render the video frames to an output surface,
without rotation, and then render from that surface to the final
output surface to apply the rotation. It is important that the
intermediate surface is the sam