In the past, --video-unscaled also disabled zooming and aspect ratio
corrections. But this didn't make much sense as a useful option. The new
behavior just sets the initial video size to be unscaled, but it's still
affected by zoom commands and aspect ratio corrections.
To get the old behavior back, --video-aspect=0 --video-zoom=0 need to be
added as well (in the general case). Most of the time it should not make
a difference, though.
Also, there was some additional dst_rect clamping code inside
src_dst_split_scaling that seemed neither necessary nor ever triggered.
(The code immediately above it already makes sure to crop the video if
it's larger than the dst_rect.)
No idea why it was there, but I just removed it.
|
Never needed them. This makes the code slightly more readable.
|
Apply basic transformations like rotation by 90° and mirroring when
sampling from the source textures. The original idea was to make this
part of img_tex.transform, but this didn't work: lots of code plays
tricks on the transform, so manipulating it is not necessarily
transparent, especially when width/height are switched. So add a new
pre_transform field, which is strictly applied before the normal
transform.
This fixes most glitches involved with rotating the image.
Cropping and rotation are now weirdly separated, even though they could
be done in the same step. I think this is not much of a problem, and
has the advantage that changing panscan does not trigger FBO
reallocations (I think...).
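
Roughly, the intended ordering, with minimal illustrative types (the
real code uses gl_transform; this is only a sketch of the idea):

    /* Illustrative 2D affine transform: v' = M*v + t. */
    struct xform { float m[2][2]; float t[2]; };

    static void apply(struct xform t, float *x, float *y)
    {
        float nx = t.m[0][0] * *x + t.m[0][1] * *y;
        float ny = t.m[1][0] * *x + t.m[1][1] * *y;
        *x = nx + t.t[0];
        *y = ny + t.t[1];
    }

    /* pre_transform (rotation/mirroring) is applied strictly before the
     * normal transform, so code that plays tricks on 'transform' never
     * sees the rotated coordinate system. */
    static void map_texcoord(struct xform pre, struct xform tr,
                             float *x, float *y)
    {
        apply(pre, x, y);
        apply(tr, x, y);
    }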
|
Typically happens with some implementations if no context is current,
or is otherwise broken. This is particularly relevant to the opengl_cb
API, because the API user will have no other indication what went wrong.
|
Commit f009d16f accidentally broke it.
Thanks to RiCON for noticing and testing.
|
Why not.
|
The underlying intention of this code is to make changing
--videotoolbox-format at runtime work. For this reason, the format can't
just be set up statically, but must be read from the option at runtime.
This means the format is not fixed anymore, and we have to make sure the
renderer is properly reinitialized if the format changes. There is
currently no way to trigger reinit on this level, which is why the
mp_image_params.hw_subfmt field was introduced.
One sketchy thing remains: normally, the renderer is supposed to be
involved with VO format negotiation, which would ensure that the VO
can take the format at all. Since the hw_subfmt is not part of this
format negotiation, it's implied the get_vt_fmt() callback only
returns formats supported by the renderer. This is not necessarily
clear because vo_opengl checks this with converted_imgfmt separately.
None of this matters in practice though, because we know all formats
are always supported.
(This still requires somehow triggering decoder reinit to make the
change effective.)
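
A sketch of the reinit check this implies (hw_subfmt is the new field;
the struct layout and helper here are hypothetical):

    struct img_params { int imgfmt; int hw_subfmt; };
    struct priv { struct img_params params; };

    void reinit_renderer(struct priv *p, struct img_params *np); /* hypothetical */

    static void check_format_change(struct priv *p, struct img_params *np)
    {
        /* hw_subfmt participates in the comparison, so a runtime change
         * of --videotoolbox-format triggers reinit like any format change */
        if (p->params.imgfmt != np->imgfmt ||
            p->params.hw_subfmt != np->hw_subfmt)
        {
            reinit_renderer(p, np);
            p->params = *np;
        }
    }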
|
For hwaccel formats, mp_image will merely point to a hardware surface
handle. In these cases, the mp_image_params.imgfmt field describes the
format insufficiently, because it mostly only describes the type of the
hardware format, not its underlying format.
Introduce hw_subfmt to describe the underlying format. It makes sense to
use it with most hwaccels, though for now it will be used with the
following commit only.
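
In terms of the struct, the addition amounts to something like this
(heavily abbreviated; the comments paraphrase the rationale above):

    struct mp_image_params {
        int imgfmt;     /* for hwaccel formats: mostly just identifies the
                         * type of the hardware surface handle */
        int hw_subfmt;  /* underlying format of the hw surface (e.g. an
                         * NV12-like format); 0/unset if not applicable */
        /* ... */
    };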
|
Until now, the presence of the process_image() callback was used to
enable a delay queue with a hardcoded size. Change this to a vd_lavc_hwdec
field instead, so the decoder can explicitly set this if it's really
needed.
Do this so process_image() can be used in the VideoToolbox glue code for
something entirely unrelated.
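
Schematically (struct heavily abbreviated, callback signature
simplified; the point is that the queue size is now explicit state
instead of being inferred from process_image):

    struct mp_image;

    struct vd_lavc_hwdec {
        /* ... */
        struct mp_image *(*process_image)(struct mp_image *img); /* simplified */
        int delay_queue; /* frames to queue before output; 0 = none */
    };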
|
Some functions which expected a codec name (i.e. the name of the video
format itself) were passed a decoder name. Most "native" libavcodec
decoders have the same name as the codec, so this was never an issue.
This should mean that e.g. using "--vd=lavc:h264_mmal --hwdec=mmal"
should now actually enable native surface mode (instead of doing copy-
back).
|
The sync-by-display mode relies on using the vsync statistics for
timing. As a consequence, discontinuities must be handled somehow. Until
now we have done this by completely resetting these statistics.
This can be somewhat annoying, especially if the GL driver's vsync
timing is not ideal. So after e.g. a seek it could take a second until
it locked back to the proper values.
Change it not to reset all statistics. Some state obviously has to be
reset, because it's a discontinuity. To make it worse, the driver's
vsync behavior will also change on such discontinuities. To compensate,
we discard the timings for the first 2 vsyncs after each discontinuity
(via num_successive_vsyncs). This is probably not fully ideal, and
num_total_vsync_samples handling in particular becomes a bit
questionable.
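
Schematically, the discarding looks like this (num_successive_vsyncs is
the counter mentioned above; everything else is illustrative):

    struct vsync_stats {
        int num_successive_vsyncs;
        double avg_interval; /* running statistics, kept across discontinuities */
    };

    static void on_vsync(struct vsync_stats *st, double interval)
    {
        st->num_successive_vsyncs++;
        if (st->num_successive_vsyncs <= 2)
            return; /* driver timing is unreliable right after a discontinuity */
        st->avg_interval = st->avg_interval * 0.9 + interval * 0.1;
    }

    static void on_discontinuity(struct vsync_stats *st)
    {
        st->num_successive_vsyncs = 0; /* reset only what has to be reset */
    }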
|
It's the same functionally.
|
Shader compilation error due to incompatible samplers.
|
The past behavior was a bit weird, especially when zooming out. There
was no simple way to zoom in or out in consistent increments using
keybindings alone.
The new behavior preserves most of the old behavior's semantics but
scales out to infinity better. It coincidentally also makes it
really easy to get clean power of 2 ratios (e.g. 2x, 4x, 8x and their
inverses).
Fixes #3004.
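
The description suggests a logarithmic mapping, presumably along these
lines (an assumption, not the literal code):

    #include <math.h>

    /* zoom=0 -> 1x, zoom=1 -> 2x, zoom=-1 -> 0.5x: each keybinding step
     * scales by the same factor, and integer steps give clean powers of 2 */
    static double zoom_factor(double zoom)
    {
        return pow(2.0, zoom);
    }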
|
This makes the black point closer (chromatically) to the white point, by
ensuring channels keep their consistent brightness ratios as they go
down to zero.
I also raised the 3DLUT version as this changes semantics and is a
separate commit from the previous one.
|
This commit refactors the 3DLUT loading mechanism to build the 3DLUT
against the original source characteristics of the file. This allows us,
among other things, to use a real BT.1886 profile for the source. This
also allows us to actually use perceptual mappings. Finally, this
reduces errors on standard gamut displays (where the previous 3DLUT
target of BT.2020 was unreasonably wide).
This also improves the overall accuracy of the 3DLUT due to eliminating
rounding errors where possible, and allows for more accurate use of
LUT-based ICC profiles.
The current code is somewhat more ugly than necessary, because the idea
was to implement this commit in a working state first, and then maybe
refactor the profile loading mechanism in a later commit.
Fixes #2815.
|
AVStream.codec is deprecated now, and you're supposed to use
AVStream.codecpar instead.
Handle this for all of the normal playback code.
Encoding mode isn't touched.
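
The migration pattern, in generic libavformat terms (not mpv's literal
code):

    #include <libavformat/avformat.h>

    static void read_stream_params(AVFormatContext *fmt, int idx)
    {
        AVStream *st = fmt->streams[idx];

        /* old, deprecated:
         *     enum AVCodecID id = st->codec->codec_id; */

        /* new, via AVCodecParameters: */
        const AVCodecParameters *par = st->codecpar;
        enum AVCodecID id = par->codec_id;
        int w = par->width, h = par->height; /* video-specific fields */
        (void)id; (void)w; (void)h;
    }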
|
It's just ugly and unnecessary.
|
This is basically a full rewrite to make it look more like d3d11va.c.
|
This commit adds the d3d11va-copy hwdec mode using the FFmpeg d3d11va
API. Functions shared with dxva2 are handled in a separate decode/d3d.c
file. A future commit will rewrite decode/dxva2.c to share this code.
|
This also draws it after color management etc. In a nutshell, this
change makes the transparency checkerboard independent of upscaling,
panning, cropping etc. It will always be the same apparent size and
position (relative to the window).
It will also be independent of the video colorspace and such things.
(Note: This might cause white imbalance issues if playing a file with a
white point that does not match the display, in absolute colorimetric
mode. But that's uncommon, especially in conjunction with transparent
image files, so it's not a primary concern here.)
|
Until now, we've let the windowing backend decide. But since they
usually require premultiplied alpha, and premultiplied alpha is easier
to handle, hardcode it.
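
"Easier to handle" mostly means blending: with premultiplied alpha the
source color is already scaled by its alpha, so the blend reduces to the
classic GL setup (illustrative, not necessarily the exact call site):

    #include <GL/gl.h>

    static void setup_premultiplied_blend(void)
    {
        glEnable(GL_BLEND);
        /* straight alpha would need (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
         * premultiplied sources just add on top of the attenuated dest */
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    }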
|
The recent changes fixed rotation handling, but reversed the rotation
direction. The direction is expected to be counter-clockwise, because
demuxers export video rotation metadata as such.
|
No functional changes.
|
Using a single gl_transform variable instead of many float ones makes it
easier to see what it's doing. No functional change.
|
This has been completely broken since commit 93546f0c. But even before,
rotation handling did not make too much sense. In particular, it rotated
the contents of the cropped image, instead of adjusting the crop
rectangle as well. The result was that things like panscan or zooming
did not behave as expected with rotation applied.
The same is true for vertical flipping. Flipping is triggered by
negative image stride. OpenGL does not support flipping the image on
upload, so it's done as part of the rendering. It can be triggered with
--vf=flip, but other filters and even decoders could set up negative
stride to flip the image.
Fix these issues by applying transforms to texture coordinates properly,
and by making rotation and flipping part of these transforms.
This still doesn't work properly for separated scaling. The issue is
that we'd have to adjust how the passes are done. For now, pick a very
stupid solution by rotating the image to an FBO, and then scaling from
that. This has the advantage that the scale logic doesn't have to be
complicated for such a rare case. It could be improved later.
Prescaling is apparently still broken. I don't know if chroma
positioning works properly either. None of this should affect the case
with no rotation.
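
As an example of "rotation and flipping as part of these transforms": a
vertical flip of normalized texture coordinates is just y' = 1 - y, and
a 90° rotation is equally simple (minimal illustrative types):

    struct xform { float m[2][2]; float t[2]; }; /* v' = M*v + t */

    /* vertical flip: scale y by -1, translate by 1 */
    static const struct xform flip_v = {
        .m = {{1, 0}, {0, -1}},
        .t = {0, 1},
    };

    /* 90 degree rotation (the apparent direction depends on the y-axis
     * convention): (x, y) -> (1 - y, x) */
    static const struct xform rot_90 = {
        .m = {{0, -1}, {1, 0}},
        .t = {1, 0},
    };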
|
gl_transform_vec() assumed column-major, while everything else seemed to
assume row-major memory organization for gl_transform.m. Also,
gl_transform_trans() seems to contain additional confusion.
This didn't matter until now, as everything has been orthogonal, thus
the swapped matrix entries were always 0.
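
For reference, the two conventions differ only in which off-diagonal
entry multiplies which coordinate (sketch, not the literal code):

    struct gl_transform { float m[2][2]; float t[2]; };

    /* row-major, what the rest of the code assumes: v' = M*v + t */
    static void vec_row_major(struct gl_transform t, float *x, float *y)
    {
        float nx = t.m[0][0] * *x + t.m[0][1] * *y;
        float ny = t.m[1][0] * *x + t.m[1][1] * *y;
        *x = nx + t.t[0];
        *y = ny + t.t[1];
    }

    /* column-major, what gl_transform_vec() effectively did: m[0][1] and
     * m[1][0] trade places. For diagonal (axis-aligned) matrices the two
     * agree, which is why the bug stayed hidden. */
    static void vec_col_major(struct gl_transform t, float *x, float *y)
    {
        float nx = t.m[0][0] * *x + t.m[1][0] * *y;
        float ny = t.m[0][1] * *x + t.m[1][1] * *y;
        *x = nx + t.t[0];
        *y = ny + t.t[1];
    }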
|
If the texture count is lower than 4, entries in va.textcoord[] will
remain uninitialized. While this is unlikely to be a problem (since
these values are unused on the shader side too), it's not nice and might
explain some things which have shown up in valgrind.
Fix by always initializing the whole thing.
|
Does the same thing as the rpi one - makes fallback possible by
pretending that h264_mediacodec is a hwdec.
|
Instead of reallocating almost all of the shader string several times
per pass, build it into a single reusable buffer that is grown as
needed.
While this still uses a linear search and full comparison of the shader
text, this will compare the shader's string length first before doing a
full comparison as a nice side effect. (That's also why the fragment
shader is compared first - it's more likely to be different for
different cache entries than the vertex shader stub.)
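
The lookup then looks roughly like this (length-prefixed strings in the
style of bstr; all names illustrative):

    #include <stddef.h>
    #include <string.h>

    struct str { const char *start; size_t len; };
    struct cache_entry { struct str frag, vert; /* plus the compiled program */ };

    static int str_eq(struct str a, struct str b)
    {
        /* length check first: rejects most mismatches without a memcmp */
        return a.len == b.len && memcmp(a.start, b.start, a.len) == 0;
    }

    static struct cache_entry *lookup(struct cache_entry *e, int n,
                                      struct str frag, struct str vert)
    {
        for (int i = 0; i < n; i++) {
            /* fragment shader first: more likely to differ between entries */
            if (str_eq(e[i].frag, frag) && str_eq(e[i].vert, vert))
                return &e[i];
        }
        return 0;
    }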
|
For now only found in Libav.
|
The mp_set_av_packet()/mp_pts_from_av() functions check whether the
timebase is set at all (i.e. AVRational.num!=0), so there's no need to
fiddle with pointers.
|
Use bstr as appending buffer, which should avoid frequent reallocation
and copying. The previous commit should help with this a little.
|
Broken in commit d6c99c85. vo_opengl_cb.c adds the corner case that
p->osd can be NULL. This made opengl-cb crash every time.
|
Shader miscompilation and bad output.
Regression probably since commit 93546f0c (or one of the following
ones).
Fixes #2982.
|
Glitches when resizing are still possible, but are reduced. Other VOs
could support this too, but don't need to do so.
(Totally avoiding glitches would be much more effort, and probably not
worth the trouble. How about you just watch the video the player is
playing, instead of spending your time resizing the window.)
|
Until now, we have tried to create a GL 3.0 context. The main reason for
this is that many Mesa-based drivers did not support anything better.
But some drivers (Mesa AMD) will not report a higher OpenGL version,
because their compatibility mode is restricted. While later GL features
are reported as extensions just fine, there doesn't seem to be a way to
determine or enable higher GLSL versions.
Add some more shitty hacks to try to deal with this messed up situation,
and try to probe each interesting GL version separately (starting with
3.3, then 3.2 etc.). Other backends might suffer from similar problems,
but these will have to deal with it on their own.
Probably fixes #2938, or maybe not.
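
The probing presumably boils down to trying candidate versions in
descending order (the list and the helper here are assumptions):

    #include <stdbool.h>

    struct ctx;
    bool try_create_context(struct ctx *ctx, int version); /* hypothetical */

    static const int gl_versions[] = {330, 320, 310, 300, 0};

    static bool create_best_context(struct ctx *ctx)
    {
        for (int i = 0; gl_versions[i]; i++) {
            if (try_create_context(ctx, gl_versions[i]))
                return true; /* first (newest) version that works wins */
        }
        return false;
    }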
|
Prevents an infinite loop of configure events and set_fullscreen
requests on Weston and other compositors that respect the protocol.
Fixes #2817.
This reverts commit eb6b2b6e50e6e3d3db41190ad818d8b966750734.
|
This colorspace has historically been used as a calibration target for
most digital projectors and sees some involvement in the UltraHD
standards, so it's a useful addition to mpv.
|
Regression since commit 6640b22a.
|
converted_imgfmt will be used by the renderer logic to build an
appropriate shader chain. It doesn't influence the format of any
textures. Thus it doesn't matter whether the hw video surface is mapped
as RGB or RGBA. What matters is if the video actually contains alpha or
not. Since virtually all hardware decoders do not support alpha in any
way, this can be hardcoded as "no alpha".
This avoids unnecessary GPU work.
|
This also gets rid of the kind of hard to read texture swizzle setup and
turns it into something dumber.
Assumes that we don't create any FBOs with 2 channel formats. (Only the
video source textures are handled by this commit.)
|
Not quite sure when/why exactly this was broken.
|
Regression since commit 93546f0c.
Fixes #2956.
|
Previously, gl->DXOpenDeviceNV was called twice when using dxva2 with
dxinterop; AMD drivers refused to allow this. With this commit,
context_dxinterop sets its own implementation of MPGetNativeDisplay,
which can return either an IDirect3DDevice9Ex or a
dxinterop_device_HANDLE, depending on the "name" request string.
hwdec_dxva2gldx then requests both of these, avoiding the need to call
gl->DXOpenDeviceNV a second time.
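
Conceptually (the two request strings are the ones above; everything
else is sketched):

    #include <string.h>

    struct priv {
        void *device;   /* IDirect3DDevice9Ex* */
        void *device_h; /* dxinterop device HANDLE */
    };

    static void *get_native_display(struct priv *p, const char *name)
    {
        if (strcmp(name, "IDirect3DDevice9Ex") == 0)
            return p->device;
        if (strcmp(name, "dxinterop_device_HANDLE") == 0)
            return p->device_h;
        return 0;
    }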
|
Drag&drop mechanisms typically support multiple types for the drop data.
Move most of the logic which types are accepted and preferred to
event.c, where the data is also interpreted.
(Maybe sorting the types by assigning scores is over-engineered, since
they're already sorted by preference, but it's actually not much more
code.)
Not very interesting/meaningful yet, but preparation for the next
commit.
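
The scoring mentioned above reduces to something like this (types and
score values invented purely for illustration):

    #include <string.h>

    static const struct { const char *type; int score; } scores[] = {
        {"text/uri-list", 2},
        {"text/plain",    1},
        {0, 0},
    };

    static const char *pick_drop_type(const char **offered, int n)
    {
        const char *best = 0;
        int best_score = -1;
        for (int i = 0; i < n; i++) {
            for (int t = 0; scores[t].type; t++) {
                if (!strcmp(offered[i], scores[t].type) &&
                    scores[t].score > best_score)
                {
                    best = offered[i];
                    best_score = scores[t].score;
                }
            }
        }
        return best;
    }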
|
Reduces VO access and makes the code more self-contained. (One day the
windowing backend code should not access the VO anymore. We're just not
quite there yet.)
|
Instead of displaying it only on playback start (or after switching
tracks), always display it even after a seek.
This helps with --lavfi-complex. You can now overlay e.g. audio
visualizations over cover art, and it won't break after a seek.
The downside is that this might make seeks with huge cover art slower.
There is also a glitch on seeking: since cover art pictures always
have timestamp 0, the playback time will be 0 for a moment after seek,
and then revert to audio PTS (as video is considered EOF). This is also
due to how lavfi's overlay filter behaves. (I'm not sure how to tell
lavfi that it's just a single frame.)
|
Almost only a cosmetic change, although it decreases pointless
referencing/dereferencing of the cover art packet too.
|
Like dxinterop, this uses StretchRect or RGB conversion. This is unavoidable as
long as we use the dxva2 API, as there is no way to access the raw hardware
decoded Direct3D9 surfaces.
|
Slight improvement over the previous commit.