NSLock should be unlocked before dealloc is called on it.
This means the code that tries to figure out the timestamp from demuxer
and decoder output now lives entirely in dec_video.c. We set the final
timestamp on the returned image (mp_image.pts), as well as on the
d_video->pts field.

The way the player uses the d_video->pts field is still a bit messy.
Maybe this could be cleaned up later.
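
As a rough illustration of what dec_video.c now centralizes (a minimal
sketch with made-up names, not the actual mpv code): prefer the
decoder's timestamp and fall back to the packet DTS, then store the
result in both places the player reads.

    #include <math.h>

    /* Sketch only: prefer the decoder's reordered PTS; for codecs
       without reordering the packet DTS equals the PTS. */
    static double pick_pts(double decoder_pts, double packet_dts)
    {
        if (!isnan(decoder_pts))
            return decoder_pts;
        return packet_dts;
    }

    /* The chosen value would be written to both mp_image.pts and
       d_video->pts, as described above. */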
It appears PTS sorting was useful only for avi files (and VfW-muxed
mkv). Maybe it was historically also important for decoders with broken
or non-existent PTS reordering (win32 codecs?). But now that we
correctly handle demuxers that output only DTS, it just seems like dead
weight.

Disable it by default. The --pts-association-mode option is now forced
to always use the decoder's PTS value. You can still enable the old
default (auto) or force sorting, but we will probably remove this option
entirely at some point.

Make demux_mkv export timestamps as DTS when it's in VfW mode. This is
needed to get correct timestamps with the new default mode. demux_lavf
already does that.
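
For reference, "PTS sorting" means roughly the following (a simplified
sketch with invented names, not the removed code): buffer the PTS of
every packet fed to the decoder, and assign the smallest buffered value
to each decoded frame, on the assumption that output timestamps are
monotonic.

    #include <math.h>

    #define PTS_BUF 16
    static double pts_buf[PTS_BUF];
    static int pts_n;

    /* Remember the PTS of a packet sent to the decoder. */
    static void sort_add_pts(double pts)
    {
        if (pts_n < PTS_BUF)
            pts_buf[pts_n++] = pts;
    }

    /* Take the smallest buffered PTS for the next output frame. */
    static double sort_get_pts(void)
    {
        if (!pts_n)
            return NAN;
        int min = 0;
        for (int i = 1; i < pts_n; i++) {
            if (pts_buf[i] < pts_buf[min])
                min = i;
        }
        double v = pts_buf[min];
        pts_buf[min] = pts_buf[--pts_n]; /* remove by swapping with last */
        return v;
    }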
Having the DTS directly can be useful for restoring PTS values.

The avi file format doesn't actually store PTS values, just DTS. An
older hack explicitly exported the DTS as PTS (ignoring the [I assume]
genpts-generated nonsense PTS), which is no longer necessary due to this
change.
Now the --no-correct-pts mode is like the normal mode, just with
different timestamp calculations. The semantics should be about the
same as before this commit.
Before this commit, this mode estimated the frame time by subtracting
successive packet PTS values. This is complete nonsense for video codecs
that use reordering. The code compensated the frame times using the FPS
value, but confused the rest of the player with timestamps that jumped
around nonsensically. All in all, this mode was not very useful.

Repurpose this mode for fixed frame rate playback. This gives almost the
same behavior as the old mode with a forced framerate (--fps option).
The result is simpler and often more robust.
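
The repurposed mode reduces to something like this (hypothetical helper;
the real code also has to handle seeks and timestamp resets):

    /* Fixed-frame-rate timing sketch: the N-th decoded frame is shown
       at N / fps, ignoring container timestamps entirely. */
    static double fixed_rate_pts(long frame_count, double fps)
    {
        return frame_count / fps;
    }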
Just why...? And why did this take 7 years?
Instead of passing the PTS as a separate field, pass it as part of the
usual data structures. Basically, this removes strange artifacts from
the API. (It's not finished, though: the final decoded PTS goes through
strange paths, and filter_video() finally overwrites the decoded
mp_image's pts field with it.)

We also stop using libavcodec's reordered_opaque fields, and use
AVPacket.pts and AVFrame.pkt_pts instead. This is slightly unorthodox,
because these pts fields are not "really" opaque anymore, yet we treat
them as such. But the end result should be the same, and
reordered_opaque is marked as partially deprecated (it's not clear
whether it's really deprecated).
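
A minimal sketch of the mechanism, using the avcodec_decode_video2() API
current at the time (the time base is a simplifying assumption): the PTS
is attached to the input packet, libavcodec reorders it along with the
frame data, and it comes back in AVFrame.pkt_pts.

    #include <math.h>
    #include <libavcodec/avcodec.h>

    /* Returns the reordered PTS in seconds, or NAN on failure. Sketch
       only; assumes packet timestamps in AV_TIME_BASE units. */
    static double decode_with_pts(AVCodecContext *avctx, AVPacket *pkt,
                                  AVFrame *frame, int64_t pts)
    {
        pkt->pts = pts; /* attach the PTS to the input packet */
        int got_frame = 0;
        if (avcodec_decode_video2(avctx, frame, &got_frame, pkt) < 0 ||
            !got_frame)
            return NAN;
        /* libavcodec reordered the value for us: */
        return frame->pkt_pts / (double)AV_TIME_BASE;
    }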
This is not needed anymore, because we decided that the PAR of the
decoded video matters, and not the PAR of the filtered video that
arrives at the VO.
This was way too misleading. osd.c merely calls the subtitle renderers,
instead of actually dealing with subtitles.
When mpv is started with video filters set (--vf is used), hardware
decoding is requested, and hardware decoding would be possible, but is
prevented because the given video filters accept software formats only,
the fallback sometimes didn't work properly.

This fallback works rather violently: it tries to initialize the filter
chain, and if that fails, it throws away the frame decoded using the
hardware and retries with software decoding. The case that didn't work
was when decoding the current packet didn't immediately lead to a new
frame. Then the filter chain wouldn't be reinitialized, and the playloop
would stop playback as soon as it encountered the error flag.

Fix this by resetting the filter error flag (back to "uninitialized"),
which is a rather violent, but somewhat working, solution.

The fallback in general should perhaps be cleaned up later.
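
Schematically, the fix amounts to this (invented enum and names; the
real state lives in the filter chain structs):

    enum vf_chain_state { VF_UNINITIALIZED, VF_OK, VF_ERROR };

    /* After a failed init on a hardware format: drop the hw frame,
       request the software fallback, and reset the chain state so the
       playloop retries initialization on the next decoded frame
       instead of aborting on VF_ERROR. */
    static void reset_for_sw_fallback(enum vf_chain_state *state)
    {
        if (*state == VF_ERROR)
            *state = VF_UNINITIALIZED;
    }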
If the --fps option was given (MPOpts->force_fps), the demuxer FPS value
was overwritten with the forced value. This was fine, since the demuxer
value wasn't needed anymore. But with the recent changes not to write to
the demuxer stream headers, we don't want to do this anymore. So
maintain the (forced/updated) FPS value in dec_video->fps instead.

The removed code in loadfile.c is probably redundant, an artifact of
past refactorings.

Note that sub.c will now always use the demuxer FPS value, instead of
the user override value. I think this is fine, because it used the
demuxer's video size values too. (And it's rare that these values are
used at all.)
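
The net effect is a small shadowing rule instead of a header write
(sketch; only the field names come from the commit message):

    /* The demuxer header keeps its original FPS value; the decoder's
       dec_video->fps holds the value actually used for timing. */
    static double effective_fps(double demuxer_fps, double force_fps)
    {
        return force_fps > 0 ? force_fps : demuxer_fps;
    }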
Now the actual decoder doesn't need to care about this anymore, and it's
handled in generic code instead. This simplifies vd_lavc.c, and in
particular we don't need to detect format changes in the old way
anymore.
The only reason why these structs were dynamically allocated was to
avoid recursive includes in stheader.h, which is (or was) a very central
file included by almost all other files. (If a struct is referenced via
a pointer type only, it can be forward referenced, and the definition of
the struct is not needed.) Now that they're out of stheader.h, this
difference doesn't matter anymore, and the code can be simplified.
Also sneak in some sanity checks.
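
The include-avoidance trick referred to above, for illustration
(hypothetical type names):

    /* A pointer member only needs a forward declaration, so a central
       header can avoid including the full definition: */
    struct dec_video;                /* forward declaration suffices */
    struct sh_video_ptr {
        struct dec_video *dv;        /* size of dec_video not needed */
    };

    /* Embedding the struct requires its full definition, which is fine
       once the struct has moved out of the central header: */
    struct dec_video { int dummy; }; /* stand-in for the real thing */
    struct sh_video_embed {
        struct dec_video dv;         /* no dynamic allocation anymore */
    };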
It's redundant.
This used to be needed to access the generic stream header from the
specific headers, which in turn was needed because the decoders had
access only to the specific headers. This is not the case anymore, so
this can finally be removed again.
Also move the "format" field from the specific headers to sh_stream.
This is similar to the sh_audio commit.

This is mostly cosmetic in nature, except that it also adds automatic
freeing of the decoder driver's state struct (which was in
sh_video->context, now in dec_video->priv).

Also remove all the stheader.h fields that are not needed anymore.
Make it work via --vf=pp:help instead.
This drops the --pp option, which was probably broken for a while. The
option automatically inserted the "pp" filter, and the value passed to
it was ignored (which is probably broken; it always selected maximal
quality).

Inserting this filter can be done simply with --vf=pp, so the option is
not needed anymore.
I don't feel like the separation ever made sense, and it was hard to
tell which file a function you were looking for was in.
This means most code accessing this struct must now include hwdec.h
instead of dec_video.h. I just put it into dec_video.h at first because
I thought a separate file would be a waste, but it's more proper to do
it this way, as there are too many files which include dec_video.h only
to get the mp_hwdec_info definition.
Never do a trivial change while drunk and without actually testing it.
vo_opengl always loads the hwdec backend lazily, so hwdec_request_api()
has to be called to possibly load it. This makes vf_vavpp work with
software decoding. (Hardware decoding loads the backend before the
filter is initialized, so this case is different.)
Also, the VFCTRL_GET_HWDEC_INFO call doesn't need to be checked. If it
fails, the info will be left blank.
This initialized only the load_api and load_api_ctx fields, and left the
other fields as they were. This failed with vf_vavpp, which assumed that
all fields were initialized.
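
The general shape of the bug, as a generic C illustration (hypothetical
struct; only the two field names come from the commit message):

    struct hwdec_info_sketch {
        void (*load_api)(void *ctx);
        void *load_api_ctx;
        int other_field;         /* stand-in for the remaining fields */
    };

    /* Buggy: writes two fields, leaves the rest as whatever was there. */
    static void setup_partial(struct hwdec_info_sketch *info,
                              void (*fn)(void *), void *ctx)
    {
        info->load_api = fn;
        info->load_api_ctx = ctx;
    }

    /* Fixed: zero the whole struct first, so consumers like vf_vavpp
       never see stale data in the other fields. */
    static void setup_full(struct hwdec_info_sketch *info,
                           void (*fn)(void *), void *ctx)
    {
        *info = (struct hwdec_info_sketch){0};
        info->load_api = fn;
        info->load_api_ctx = ctx;
    }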
In the Cocoa backend you can use cmd+0/1/2 to scale the window. This
commit makes it use the new window-scale functionality.
This commit adds a new build system based on waf. configure and Makefile
are deprecated effective immediately, and someday in the future they
will be removed (they are still available by running ./old-configure).

You can read how the choice of waf came to be in
`DOCS/waf-buildsystem.rst`. TL;DR: we couldn't get the same level of
abstraction and customization with the other build systems we tried
(CMake and autotools).

For guidance on how to build the software now, take a look at README.md
and the cross compilation guide.

CREDITS:

This is a squash of ~250 commits. Some of them are not by me, so here is
the deserved attribution:

- @wm4 contributed some Windows fixes, renamed configure to
  old-configure, and contributed to the bootstrap script. Also,
  GNU/Linux testing.
- @lachs0r contributed some Windows fixes and the bootstrap script.
- @Nikoli contributed a lot of testing and discovered many bugs.
- @CrimsonVoid contributed changes to the bootstrap script.
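
For the common case, the new build boils down to roughly the following
(see README.md for the authoritative steps):

    ./bootstrap.py        # fetch waf
    ./waf configure
    ./waf build
    ./waf install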
OpenGL interop was essentially disabled, because the decoder didn't
request vdpau device creation from vo_opengl.
This always bothered me.
Same as MPlayer svn commit r36515 "Chose cheaper alpha blend equation."
No idea if this is actually faster, but can't hurt.
rejected it
The existing code tried to remove the "extra" profile flags for h264.

FF_PROFILE_H264_INTRA doesn't matter for us at all, because it's set
only for profiles the vdpau/vaapi APIs don't support.

The FF_PROFILE_H264_CONSTRAINED flag, on the other hand, is added to
H264_BASELINE, except that it makes the file a real subset of H264_MAIN
and H264_HIGH. Removing that flag would select the BASELINE profile,
which appears to be rarely supported by hardware decoders. This means we
accidentally rejected perfectly hardware-decodable files. Use MAIN for
it instead.

(vaapi has explicit support for CONSTRAINED_BASELINE, but it seems to be
a new thing, and it was not reported as supported anywhere I tried. So
don't bother to check it, and do the same as on vdpau.)

See github issue #204.
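
The resulting mapping, sketched with libavcodec's profile constants (not
the literal mpv code):

    #include <libavcodec/avcodec.h>

    /* Strip flags the hardware APIs don't care about, but never
       downgrade a constrained profile to BASELINE; use MAIN instead. */
    static int normalize_h264_profile(int profile)
    {
        profile &= ~FF_PROFILE_H264_INTRA; /* irrelevant for vdpau/vaapi */
        if (profile & FF_PROFILE_H264_CONSTRAINED)
            return FF_PROFILE_H264_MAIN;   /* real subset of MAIN/HIGH */
        return profile;
    }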
We got rid of this some time ago, but apparently not completely.
Fixes a problem where the passed size doesn't match the actual string.
Previously, using it led to no single frame being output, ever.
When blending OSD and subtitles onto the video, we write bogus alpha
values. This doesn't normally matter, because these values are normally
unused and discarded. But at least on Wayland, the alpha values are used
by the compositor, leading to transparent windows even with opaque video
in places where the OSD happens to use transparency. (Also see github
issue #338.)

Until now, the alpha basically contained garbage. The source factor
GL_SRC_ALPHA meant that alpha was multiplied with itself. Use GL_ONE
instead (which is why we have to use glBlendFuncSeparate()). This should
give correct results, even with video that has alpha. (Or at least it's
something close to correct; I haven't thought too hard about how the
compositor will blend it, and in fact I couldn't manage to test it.)

If glBlendFuncSeparate() is not available, fall back to glBlendFunc(),
which does the same as the code did before this commit. Technically, we
support GL 1.1, but glBlendFuncSeparate is GL 1.4, and I guess we should
try not to crash if vo_opengl_old runs on a system with GL 1.1 drivers
only.
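
The blend setup described above looks roughly like this (sketch; in the
real code the GL 1.4 entry point is resolved at runtime rather than
linked directly):

    #include <GL/gl.h>

    static void set_osd_blend(int have_blend_func_separate)
    {
        if (have_blend_func_separate) {
            /* Color uses the usual factors; alpha uses GL_ONE as the
               source factor so the framebuffer gets sane alpha. */
            glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                                GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        } else {
            /* GL 1.1 fallback: same (bogus) alpha as before. */
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }
    }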
VDPAU handles are integers, but for some reason the VDPAU GL extension
expects them as void*.
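
Hence casts like the following (sketch; VdpDevice is a plain integer
handle in vdpau.h):

    #include <stdint.h>

    typedef uint32_t VdpDevice; /* as in vdpau.h: an integer handle */

    /* GL_NV_vdpau_interop declares its device parameter as const void*,
       so the integer handle is laundered through uintptr_t. */
    static const void *vdp_handle_to_ptr(VdpDevice dev)
    {
        return (const void *)(uintptr_t)dev;
    }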
Obviously I didn't test commit 1b8cd01, and it just crashed. Oops.
This was supposed to handle preemption better. I still think the current
state isn't very nice, since the decoder can "accidentally" call the
previous render function after preemption (instead of calling the
reloaded function), so there might be issues. But all in all, this
dummy_render function is a bit confusing, and still not entirely
correct, so it's not worth it.
Besides cosmetic changes, also change the memcpy_pic return type and
remove the config.h include.
This removes "--hwdec=crystalhd".
I doubt anyone even tried to use this. But even if someone wants to
use it, the decoders can still be explicitly invoked with e.g.:
--vd=lavc:h264_crystalhd
The only advantage our special code provided was fallback to
software decoding. (But I'm not sure how the ffmpeg crystalhd
pseudo-decoder actually behaves.)
Removing this will allow some simplifications as soon as we don't need
vdpau_old.c anymore.
Could possibly lead to failing compilation on systems with headers that
lack the vdpau extension.
This uses vdpau OpenGL interop to convert a vdpau surface to a texture.
Note that this is a bit weak and primitive. Deinterlacing (or any other
form of vdpau postprocessing) is not supported. vo_opengl chroma scaling
and chroma sample position are not supported. Internally, the vdpau
video surfaces are converted to a RGBA surface first, because using the
video surfaces directly is too complicated. (These surfaces are always
split into separate fields, and the vo_opengl core expects progressive
frames or frames with weaved fields.)
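
The core of the interop path, sketched with the GL_NV_vdpau_interop
entry points (error handling and the mixer step that fills the RGBA
output surface are omitted; in practice these functions are loaded as
extension pointers):

    #include <GL/gl.h>
    #include <GL/glext.h> /* GL_NV_vdpau_interop declarations */

    /* Register an RGBA VdpOutputSurface as a GL texture and map it so
       the renderer can sample it. */
    static GLvdpauSurfaceNV map_vdpau_surface(const void *output_surface,
                                              GLuint texture)
    {
        GLvdpauSurfaceNV s = glVDPAURegisterOutputSurfaceNV(
            output_surface, GL_TEXTURE_2D, 1, &texture);
        glVDPAUSurfaceAccessNV(s, GL_READ_ONLY);
        glVDPAUMapSurfacesNV(1, &s); /* texture now holds the frame */
        return s;
    }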
The goal is being able to use vdpau decoding independently from
vo_vdpau.c.
Instead of checking for resolution and image format changes, always
fully reinit on any parameter change. Let init_video do all required
initializations, which simplifies things a little bit.
Change the gl_video/hardware decoding interop API slightly, so that
hwdec initialization gets the full image parameters.
Also make some cosmetic changes.
More correct, might make things slightly faster (probably
insignificant).
These formats are helpful for distinguishing surfaces with and without
alpha. Unfortunately, Libav and older versions of FFmpeg don't support
them, so code would break. Fix this by treating these formats specially
on the mpv side, mapping them to RGBA on Libav, and unsetting the alpha
bit in the mp_imgfmt_desc struct.
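
Conceptually, the special-casing is a flag fix-up like this
(MP_IMGFLAG_ALPHA is mpv's real flag name; the struct and bit value here
are simplified stand-ins):

    struct imgfmt_desc_sketch { int flags; };
    #define MP_IMGFLAG_ALPHA (1 << 0) /* illustrative bit value */

    /* On Libav the format is mapped to plain RGBA, so the alpha bit is
       cleared to keep the rest of the player from expecting a usable
       alpha plane. */
    static void fixup_alpha(struct imgfmt_desc_sketch *desc,
                            int lavc_supports_alpha_fmt)
    {
        if (!lavc_supports_alpha_fmt)
            desc->flags &= ~MP_IMGFLAG_ALPHA;
    }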
Before the bitstream_buffers field was deprecated, you had to free it,
otherwise you would leak memory. (Although vdpau.c uses a new API, they
managed to introduce a new deprecation this quickly. This is a
complaint.)

This introduces a memory leak of 12 bytes per file on some _older_
libavcodec versions. This is minor enough that I don't care.
Video has up to 4 textures, if you include obscure formats with alpha.
This means alpha formats could always overwrite the first scaler
texture, leading to corrupted video display. This problem was recently
brought to light, when commit 571e697 started to explicitly unbind all 4
video textures, which broke rendering for non-alpha formats as well.
Fix this by reserving the correct number of texture units.
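
The fix can be pictured as a static texture-unit layout (illustrative
defines, not the actual mpv names):

    /* Up to 4 planes (e.g. YUV + alpha) may be bound at once, so
       scaler textures must start above them instead of at unit 1. */
    #define MAX_VIDEO_PLANES 4
    #define TEXUNIT_VIDEO    0                /* units 0..3 */
    #define TEXUNIT_SCALERS  MAX_VIDEO_PLANES /* units 4 and up */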
|
|\ |
|
| | |
|
| |\
| | |
| | |
| | |
| | | |
Conflicts:
configure
The configure followed 5 different conventions for defines, because the
next guy always wanted to introduce a new, better way to unify them[1].
For a hypothetical feature 'hurr' you could have had:

* #define HAVE_HURR 1 / #undef HAVE_DURR
* #define HAVE_HURR / #undef HAVE_DURR
* #define CONFIG_HURR 1 / #undef CONFIG_DURR
* #define HAVE_HURR 1 / #define HAVE_DURR 0
* #define CONFIG_HURR 1 / #define CONFIG_DURR 0

All is now uniform and uses:

* #define HAVE_HURR 1 / #define HAVE_DURR 0