|
|
|
|
| |
And remove the strange PACKER_MAX_WH define. This is more convenient for
users that don't care about limits, such as sd_lavc.c.
|
|
|
|
| |
Ooops.
|
|
|
|
| |
Signed-off-by: wm4 <wm4@nowhere>
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
1. this basically reverts commit de4c74e5a4a996e8ff431c8f33a32c4b580be203.
even with CVDisplayLinkCreateWithActiveCGDisplays and
CVDisplayLinkSetCurrentCGDisplayFromOpenGLContext we still have to
explicitly set the current display ID, otherwise it will just always
choose the display with the lowest refresh rate. another weird thing is,
we still have to set the display ID another time with
CVDisplayLinkSetCurrentCGDisplay after the link was started. otherwise
the display period is 0 and the fallback will be used.
if we ever use the callback method for something useful it's probably
better to use CVDisplayLinkCreateWithActiveCGDisplays since we will need
to keep the display link around instead of releasing it at the end.
in that case we have to call CVDisplayLinkSetCurrentCGDisplay two times,
once before and once after LinkStart.
2. add windowDidChangeScreen delegate to update the display refresh rate
when mpv is moved to a different screen.
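A minimal C sketch of the call order described above, assuming a plain
CoreVideo setup outside of mpv's actual vo code (the no-op callback and the
display_id parameter are illustrative):

    #include <CoreGraphics/CoreGraphics.h>
    #include <CoreVideo/CVDisplayLink.h>

    static CVReturn no_op_cb(CVDisplayLinkRef link, const CVTimeStamp *now,
                             const CVTimeStamp *out, CVOptionFlags flags_in,
                             CVOptionFlags *flags_out, void *ctx)
    {
        return kCVReturnSuccess; // needed so the actual period gets reported
    }

    static double query_refresh_rate(CGDirectDisplayID display_id)
    {
        CVDisplayLinkRef link;
        CVDisplayLinkCreateWithActiveCGDisplays(&link);
        CVDisplayLinkSetOutputCallback(link, &no_op_cb, NULL);
        CVDisplayLinkSetCurrentCGDisplay(link, display_id); // before start
        CVDisplayLinkStart(link);
        CVDisplayLinkSetCurrentCGDisplay(link, display_id); // again after start
        double period = CVDisplayLinkGetActualOutputVideoRefreshPeriod(link);
        CVDisplayLinkStop(link);
        CVDisplayLinkRelease(link);
        return period > 0 ? 1.0 / period : 0; // period -> rate; 0 means fallback
    }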
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We have two problems here.
1. CVDisplayLinkGetActualOutputVideoRefreshPeriod, like the name suggests,
returns a frame period and not a refresh rate. using this as screen_fps
just leads to a slideshow. why didn't this break video playback on OS X
completely? the answer to this leads us to the second problem.
2. it seems that CVDisplayLinkGetActualOutputVideoRefreshPeriod always
returns 0 if used without CVDisplayLinkSetOutputCallback and hence always
fell back to CVDisplayLinkGetNominalOutputVideoRefreshPeriod. adding a
callback to CVDisplayLink solves this problem. the callback function at
this moment doesn't do anything, but could possibly be used in the future.
|
|
|
|
|
|
|
| |
This can also remove all the stuff for lazily attaching the texture. It
doesn't matter if the dxinterop backend changes the bound framebuffer
during a VOCTRL, since the renderer does not rely on the GL state being
preserved.
|
|
|
|
|
|
|
| |
Most of the functionality already exists for the sake of vo_opengl_cb.
We only have to use it.
This will be used by dxinterop in the following commit.
|
|
|
|
|
|
|
| |
Since there are not many sub-rectangles, this doesn't cost too much. On
the other hand, it avoids frequent warnings with vo_xv.
Also, the second copy in mp_blur_rgba_sub_bitmap() can be dropped.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The previous few commits changed sd_lavc.c's output to packed RGB sub-
images. In particular, this means all sub-bitmaps are part of a larger,
single bitmap. Change the vo_opengl OSD code such that it can make use
of this, and upload the pre-packed image, instead of packing and copying
them again.
This complicates the upload code a bit (4 code paths due to messy PBO
handling). The plan is to make sub-bitmaps always packed, but some more
work is required to reach this point. The plan is to pack libass images
as well. Since this implies a copy, this will make it easy to refcount
the result.
(This is all targeted towards vo_opengl. Other VOs, vo_xv, vo_x11, and
vo_wayland in particular, will become less efficient. Although at least
vo_vdpau and vo_direct3d could be switched to the new method as well.)
|
|
|
|
|
|
|
| |
The sub-bitmaps get extended by --sub-gauss, so we have to compute the
bounding box on the original subs. Not sure if this is really
equivalent to what the code did before, and I don't have the sample
anymore. (But this approach sure is a _shitty_ hack.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Implement it directly in sd_lavc.c as well. Blurring requires extending
the size of the sub-images by the blur radius. Since we now want
sub_bitmaps to be packed into a single image, and we don't want to
repack for blurring, we add some extra padding to each sub-bitmap in the
initial packing, and then extend their size later. This relies on the
previous bitmap_packer commit, which adds the padding in all cases.
Since blurring is now done on parts of a large bitmap, the data pointers
can become unaligned, depending on their position. To avoid shitty
libswscale printing a dumb warning, allocate an extra image, so that the
blurring pass is done on two newly allocated images. (I don't find this
feature important enough to waste more time on it.)
The previous refactor accidentally broke this feature due to a logic bug
in osd.c. It didn't matter before it happened to break, and doesn't
matter now since the code paths are different.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Until now, subtitle renderers could export SUBBITMAP_INDEXED, which is an
8 bit per pixel format with a palette. sd_lavc.c was the only renderer
doing this, and the result was converted to RGBA in every use-case
(except maybe when the subtitles were hidden.)
Change it so that sd_lavc.c converts to RGBA on its own. This simplifies
everything a bit, and the palette handling can be removed from the
common code.
This is also preparation for making subtitle images refcounted. The
"caching" in img_convert.c is a PITA in this respect, and needs to be
redone. So getting rid of some img_convert.c code is a positive side-
effect. Also related to refcounted subtitles is packing them into a
single mp_image. Fewer objects to refcount is easier, and for the libass
format the same will be done. The plan is to remove manual packing from
the VOs which need single images entirely.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Apply the padding internally to each input bitmap, instead of requiring
this for the semi-public API.
Right now, everything still uses packer_pack_from_subbitmaps() to fill
the input bitmap sizes, but that's going to change with the following
commit. Since bitmap_packer.in is mutated during packing anyway, it's
more convenient to add the padding automatically.
Also, guarantee that every sub-bitmap has a padding border around it.
Don't let the padding overlap. Add padding even along the borders of the
containing bitmap.
This is simpler, doesn't cost much in memory usage, and is convenient
for one of the following commits.
|
|
|
|
| |
No functional changes.
|
|
|
|
|
|
|
| |
I do not understand why github requires adding this crap to source code
repositories themselves, instead of making them part of the repository
configuration. Remove CONTRIBUTING.md to compensate for github crap
accumulating.
|
|
|
|
| |
This is now stored within the AVFrame/mp_image.
|
|
|
|
| |
Reduces interference with the pass-through case, where this is not needed.
|
|
|
|
| |
Instead, warn.
|
| |
|
|
|
|
|
|
| |
This is the ES equivalent to ARB_timer_query. It enables the performance
timers on ANGLE. All the added functions should be identical in
semantics to their desktop GL equivalents.
|
|
|
|
|
|
|
|
|
| |
The OpenGL 3.0+ and ES specs are quite clear on what values are
accepted for the attachment object name parameter. And there's no
overlap for the default framebuffer. Sigh.
Probably fixes Mesa raising an error in this case and might fix #3251.
Regression from the previous vo_opengl change.
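A hedged sketch of what the specs require here (the fbo and es variables are
illustrative, not mpv's actual names): the default framebuffer and FBOs take
disjoint sets of attachment names, so the query has to pick the name based on
which framebuffer is being asked about.

    // For FBOs the attachment name is GL_COLOR_ATTACHMENT0 (etc.); for the
    // default framebuffer it must be GL_BACK_LEFT on desktop GL, GL_BACK on ES.
    GLenum attachment = fbo ? GL_COLOR_ATTACHMENT0
                            : (es ? GL_BACK : GL_BACK_LEFT);
    GLint red_bits = 0;
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, attachment,
            GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, &red_bits);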
|
|
|
|
|
|
|
| |
They're different from the Google/WebM subtitle types, and use a new
codec ID.
Fixes #3247.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Until now, we've used system-specific API (GLX, EGL, etc.) to retrieve
the depth of the default framebuffer. (We equate this with the display
depth and use the determined depth for dithering.)
We can actually retrieve this value through standard GL API, and it
works everywhere (except GLES 2 of course). This simplifies everything a
great deal.
egl_helpers.c is empty now. But I expect that some EGL boilerplate will
be moved to it, so don't remove it yet.
|
|
|
|
|
|
|
|
| |
This was done under the assumption that ANGLE would somehow always use RG
in ES2 mode. But there's no basis for this. Even if ANGLE supports NV12
textures with drivers that do not allow for texture_rg, this case is too
obscure to worry about. So do the robust and correct
thing instead, and disable this code if texture_rg is not available.
|
|
|
|
|
|
|
| |
Commit 74e3d11 resulted in the background overlay not getting destroyed
when mpv quits. Add back a piece of code that was removed in that commit
to restore correct functionality.
Fixes issue #3100
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This helps with shitty APIs and even shittier drivers (I'm looking at
you, ALSA). Sometimes they won't send proper wakeups. This can be fine
during playback, when for example playing video, because mpv still will
wakeup the AO outside of its own wakeup mechanisms when sending new data
to it. But when draining, it entirely relies on the driver's wakeup
mechanism. So when the driver wakeup mechanism didn't work, it could
hard freeze while waiting for the audio thread to play the rest of the
data.
Avoid this by waiting for an upper bound. We set this upper bound at the
total mpv audio buffer size plus 1 second. We don't use the get_delay
value, because the audio API could return crap for it, and we're being
paranoid here. I couldn't confirm whether this works correctly, because
my driver issue fixed itself.
(In the case that happened to me, the driver somehow stopped getting
interrupts. aplay froze instead of playing audio, and playing audio-only
files resulted in a chop party. Video worked, for reasons mentioned
above, but draining froze hard. The driver problem was solved when
closing all audio output streams in the system. Might have been a dmix
related problem too.)
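A rough sketch of the bounded wait; the helper names (ao_still_playing(),
wait_until(), total_buffer_duration_s) are made up for illustration and are
not mpv's real API:

    // Upper bound: however long the entire buffered audio could possibly
    // take to play out, plus a second of slack for paranoia.
    double timeout_s = total_buffer_duration_s + 1.0;
    int64_t deadline_us = now_us() + (int64_t)(timeout_s * 1e6);

    while (ao_still_playing(ao)) {
        if (wait_until(ao, deadline_us)) // nonzero means the wait timed out
            break; // the driver never woke us up; give up instead of freezing
    }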
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When we're draining, don't wakeup the core on every buffer fill, since
unlike during normal playback, we won't actually get more data. The
wakeup here conceptually works like wakeups with condition variables, so
redundant wakeups do not hurt; this is just a minor change and nothing
of consequence.
(Final EOF also requires waking up the core, but there is separate code
to send this notification.)
Also dump the p->still_playing field in trace logging.
|
| |
|
|
|
|
|
|
|
| |
Of course we can't just skip updating the OSD if the playloop was woken
up for the purpose of removing OSD after an OSD timer expired.
Fixes e.g. OSD bars sometimes sticking around when seeking while paused.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Normally, OSD is updated every time the playloop is run. This has to be
done, because the OSD may implicitly reference various properties,
without knowing whether they really need to be updated or not. (There's
a property update mechanism, but it's mostly unavailable, because OSD is
special-cased and can not use the client API mechanism properly.)
Normally, these updates are no problem, because the OSD is only actually
printed when the OSD text actually changes.
But commit d23ffd24 added a rate-limiting mechanism, which tries to
limit OSD updates to at most every 50ms (or the next video frame). Since it
can't know in advance whether the OSD is going to change or not, this
simply woke up the player every 50ms.
Change this so that the player is updated only as part of general
updates determined through mp_notify(). (This function also notifies the
client API of changed properties.) The desired result is that the player
will not wake up at all in normal idle mode, but still update properties
that can change when paused, such as the cache.
This is mostly a cosmetic change (in the sense of making runtime
behavior just slightly better). It has the slightly more negative
consequence that properties which update implicitly (such as "clock")
will not update periodically anymore.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is a common idiom used in MSDN docs and Raymond Chen's example
programs to get a HINSTANCE for the current module, regardless of
whether it's an .exe or a .dll. Using GetModuleHandle(NULL) for this is
technically incorrect, since it always gets a handle to the .exe, even
when the executing code (in libmpv) is running in a .dll. In this case,
using the wrong HINSTANCE could cause namespace issues with window
classes, since CreateWindowEx uses the HINSTANCE to search for the
matching window class name.
See:
https://blogs.msdn.microsoft.com/oldnewthing/20050418-59/?p=35873
https://blogs.msdn.microsoft.com/oldnewthing/20041025-00/?p=37483
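The idiom from the linked articles looks like this (sketch; the name
HINST_THISCOMPONENT is the one used in Raymond Chen's examples):

    #include <windows.h>

    // The linker provides __ImageBase as the base address of the current
    // module, so its address doubles as the HINSTANCE of whatever .exe or
    // .dll this code was compiled into.
    EXTERN_C IMAGE_DOS_HEADER __ImageBase;
    #define HINST_THISCOMPONENT ((HINSTANCE)&__ImageBase)

    // Example use when registering a window class:
    //   wc.hInstance = HINST_THISCOMPONENT;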
|
|
|
|
|
| |
Maybe it partially helps with #2392 (on dual display setups). Either way, it
makes the code simpler.
|
|
|
|
|
|
|
|
|
|
|
| |
There were two mistakes:
- SDL's RGB565 is always equivalent to ffmpeg's RGB565 (both are packed
  16 bit native-endian integers in RGB=565 form) - this was wrongly
  reversed on big endian platforms.
- SDL's RGB888 doesn't actually mean RGB24, but XRGB8888 (i.e. a 32 bit
  packed integer with the top 8 bits unused).
Also, use RGB0 instead of RGBA when there is no alpha.
|
|
|
|
| |
Avoids some OpenGL implementations pinning it to 3.0.
|
|
|
|
|
|
| |
This is mainly for the nnedi3 user shader. With all of its NN weights
hardcoded into the shader source code, the shader file could be as large
as 300 kB.
|
|
|
|
|
|
|
| |
This reverts commit b51957fab59fd5a2fde824e1946befd29ed172c1.
Breaks big time. It appears to ignore explicitly configured paths within
the libav* .pc files, which for example breaks mpv-build.
|
|
|
|
| |
Not sure what/if I was thinking there.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Although D3D11 video decoding is unsupported on Windows 7, the
associated APIs almost work. Where they fail is texture creation, where
we try to create D3D11_BIND_DECODER surfaces. So specifically try to
detect this situation.
One issue is that once the hwdec interop is created, the damage is done,
and it can't use another backend (because currently only 1 hwdec backend
is supported). So that's where we prevent attempts to use it.
It still can fail when trying to use d3d11va-copy (since that doesn't
require an interop backend), but at that point we don't care anymore -
dxva2(-copy) is tried before that anyway.
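A hedged sketch of the kind of probe this implies (dimensions and format are
arbitrary, and this is not necessarily mpv's exact check):

    // Try to create a small decoder-bindable NV12 texture; on Windows 7 this
    // is where D3D11 video decoding actually falls over, so failure means
    // "don't bother with the d3d11va interop on this system".
    D3D11_TEXTURE2D_DESC desc = {
        .Width = 64,
        .Height = 64,
        .MipLevels = 1,
        .ArraySize = 1,
        .Format = DXGI_FORMAT_NV12,
        .SampleDesc = { .Count = 1 },
        .Usage = D3D11_USAGE_DEFAULT,
        .BindFlags = D3D11_BIND_DECODER,
    };
    ID3D11Texture2D *tex = NULL;
    HRESULT hr = ID3D11Device_CreateTexture2D(device, &desc, NULL, &tex);
    if (tex)
        ID3D11Texture2D_Release(tex);
    if (FAILED(hr)) {
        // decoder surfaces can't be created; skip the d3d11va backend
    }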
|
|
|
|
| |
Leftovers.
|
|
|
|
|
|
|
|
| |
Distros and users alike should be made aware of the fact that old
FFmpeg versions are bad. When users come to us with FFmpeg-related
trouble, the answer is “update FFmpeg” more often than not
(and no further support will be provided until they have done so),
so instead we just nag them about it here.
|
|
|
|
|
| |
Move unmap() to the top of the function, and replace some duplicated
code with a call to it.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of having 9 different properties, requiring 18 different
VOCTRLs to read them all, they are now exposed as a single property.
This is not only cleaner (since they're all together) but also allows
querying all 9 of them with only a single VOCTRL (by using
mp.get_property_native).
(The extra factor of 2 was due to an extra query being needed to get the
type, which is now also unnecessary)
This makes it much easier to access performance metrics from within a
lua script, and also makes it easier to just show a readable, formatted
version via show-text.
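From the C client API side this is the equivalent of the Lua call mentioned
above: one native query returns the whole set (assuming the property is
exposed under a single name such as "vo-performance"):

    mpv_node perf;
    if (mpv_get_property(mpv, "vo-performance", MPV_FORMAT_NODE, &perf) >= 0) {
        // perf is a map containing all of the individual timers
        mpv_free_node_contents(&perf);
    }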
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
User hooks can now use an extra WHEN expression to specify when the
shader should be run. For example, this can be used to only run a chroma
scaling shader `WHEN CHROMA.w LUMA.w <`.
There's a slight semantics change to user shaders: When trying to bind a
texture that does not exist, a shader will now be silently skipped
(similar to when the condition is false) instead of generating an error.
This allows shader stages to depend on an optional earlier stage without
having to copy/paste the same condition everywhere.
(In other words: there's an implicit condition on all of the bound
textures existing)
|
|
|
|
| |
WTF of the day.
|
|
|
|
|
|
|
|
|
|
| |
When using --hwdec=auto, systems that don't provide
D3D11_CREATE_DEVICE_VIDEO_SUPPORT, which probably includes all Windows
Vista and 7 systems, will print an error message. Reduce the log level
to verbose when probing and skip the error message entirely if d3d11.dll
is not present.
This commit is in a similar spirit to 991af7d.
|
|
|
|
|
| |
If ID3D11Device_QueryInterface fails, "multithread" will be set to NULL.
The _Release call would then just crash with a null pointer deref.
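A minimal sketch of the fixed pattern (assuming the usual COBJMACROS-style C
calls; the interface queried here is just an example):

    ID3D10Multithread *multithread = NULL;
    HRESULT hr = ID3D11Device_QueryInterface(device, &IID_ID3D10Multithread,
                                             (void **)&multithread);
    if (FAILED(hr) || !multithread) {
        // QueryInterface failed; multithread is NULL, nothing to Release
        return;
    }
    ID3D10Multithread_SetMultithreadProtected(multithread, TRUE);
    ID3D10Multithread_Release(multithread);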
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This comes up often, see e.g. #3220. The issue is that if the stream
input is not seekable, the demuxer is marked as not seekable. But if the
stream cache is enabled, the file still _might_ be seekable to a degree.
We recently disabled seeking in this mode because it can cause very
weird issues, mostly because if stream-layer seeking fails, the demuxers
will arbitrarily misbehave. On the other hand, it can work if the seek
is within the cached range, which is why the user can still enable it
with --force-seeking. There is a weird trade-off between allowing this
and not crapping up too easily, so just informing the user about the
possibility seems best.
|
|
|
|
|
|
|
| |
Since the libavformat API is crap, we have to apply tons of heuristics
to check whether seeking will work. (No, checking it at seek time isn't
going to work either, because if a seek fails, the demuxer will be in an
undefined state. Because the libavformat API is crap.)
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
For clang, it's enough to just put (void) around usages we are
intentionally ignoring the result of.
Since GCC does not seem to want to respect this decision, we are forced
to disable the warning globally.
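Concretely (write() is just an arbitrary warn_unused_result-annotated
example):

    #include <unistd.h>

    static void ignore_result_example(int fd, const void *buf, size_t len)
    {
        // Clang treats the explicit cast as "result intentionally ignored"...
        (void)write(fd, buf, len);
        // ...but GCC still warns, hence -Wno-unused-result is set globally.
    }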
|
|
|
|
|
|
|
|
|
|
|
|
| |
The default behavior of vo_opengl has pretty much always been 'show the
source colors as-is, without caring to adapt it to the target device'.
This decision is mostly based on the fact that if we do anything else,
lots of people will complain.
With the rise of content like BT.2020, however, it turns out more people
complain about this content being very desaturated than people complain
about this content not matching VLC - so let's just map ultra-wide gamut
content back down to standard gamut by default.
|
|
|
|
|
|
| |
Instead of measuring the actual upload time, this measures the time
needed to render + map the texture via vdpau. These numbers are
still useful, since they're part of the critical path.
|