| Commit message | Author | Age | Files | Lines |
|
|
|
|
| |
This was broken in 3f85094 (probably a merge mistake). I guess nobody ever
uses this feature.
|
|
|
|
|
|
|
|
|
|
| |
The decoder returns with AVFrame.format not correctly set for some h264
files (strangely only some). We have to access AVCodecContext.pix_fmt
instead. On newer libavcodec versions, it's the other way around: the
AVCodecContext.pix_fmt may be incorrectly set on pixel format changes,
and you are supposed to use AVFrame.format.
The same problem probably exists on older ffmpeg versions too.
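The fallback described above can be sketched as a pure-logic model (the
names pick_pix_fmt, FMT_NONE and new_libavcodec are hypothetical; real
code would consult AVFrame.format, AVCodecContext.pix_fmt and
AV_PIX_FMT_NONE):

```c
#include <assert.h>

/* Hypothetical stand-in for AV_PIX_FMT_NONE. */
#define FMT_NONE (-1)

/* On old libavcodec, prefer the context's pix_fmt (the frame's format
 * may be unset or wrong); on newer versions, prefer the frame's format
 * (the context's pix_fmt may lag behind on format changes). */
static int pick_pix_fmt(int frame_format, int ctx_pix_fmt, int new_libavcodec)
{
    if (new_libavcodec)
        return frame_format != FMT_NONE ? frame_format : ctx_pix_fmt;
    return ctx_pix_fmt != FMT_NONE ? ctx_pix_fmt : frame_format;
}
```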
|
|
|
|
|
|
|
| |
If we detect Libav, always use the old builtin vobsub decoder (in
spudec.c). Note that we do not want to use it for newer ffmpeg, as
spudec.c can't handle the vobsub packets as generated by the .idx
demuxer, and we want to get rid of spudec.c in general anyway.
|
| |
|
|
|
|
| |
A patch supporting the newer API AND the older API is in the works.
|
|
|
|
|
|
|
|
| |
Now, when backgrounded, mpv plays and outputs messages to stdout, but
the statusline is not output.
Background<->foreground transitions are detected by signals and polling
the process groups.
|
|
|
|
|
|
| |
Exactly the same issue as with the previous commit. Just like the vdpau
code, this was apparently copy-pasted from the vo_x11 code, even though
it doesn't make much sense.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Using vdpau on an X server configured to a bit depth of 30 (10 bit per
component) failed finding a visual. The cause was a hack that tried to
normalize the bit depth to 24 if it was not a known depth. It's unknown
why/if this is needed, but the following things speak against it:
- it prevented unusual bit depths like 30 bit from working
- it wasn't needed with normal bit depth like 24 bit
- it's probably copy-pasted from vo_x11 (where this code possibly makes
sense, unlike in vo_vdpau)
Just remove this code and look for a visual with native depth.
|
|
|
|
|
|
|
| |
This was disabled in 4ea60a3 and 70c455a, when all options were still
forced file-local, and fullscreen was annoyingly reset when switching
to the next file. mpv keeps option values across files by default, so
this isn't needed anymore.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
-x/-y were rather useless and obscure. The only use I can see is
forcing a specific aspect ratio without having to calculate the aspect
ratio float value (although --aspect takes values of the form w:h).
This can also be done with --geometry and --no-keepaspect. There was
also a comment that -x/-y is useful for -vm, although I don't see how
this is useful as it still messes up aspect ratio.
-xy is mostly obsolete. It does two things: a) set the window width to
a pixel value, b) scale the window size by a factor. a) is already done
by --autofit (--autofit=num does exactly the same thing as --xy=num, if
num >= 8). b) is not all that useful, so we just drop that
functionality.
|
|
|
|
|
|
|
|
|
| |
--autofit=WxH sets the window size to a maximum width and/or height,
without changing the window's aspect ratio.
--autofit-larger=WxH does the same, but only if the video size is
actually larger than the window size that would result when using
the --autofit=WxH option with the same arguments.
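The sizing logic described above can be sketched like this (an
illustrative model, not the actual mpv code; a maximum of 0 means
"unconstrained" here, and larger_only models --autofit-larger):

```c
#include <assert.h>

/* Fit a video of size vw x vh into a box of at most mw x mh without
 * changing the aspect ratio. With larger_only set, only shrink the
 * video, never enlarge it. */
static void autofit(int vw, int vh, int mw, int mh, int larger_only,
                    int *ow, int *oh)
{
    /* scale factors that make each dimension touch the box */
    double fx = mw > 0 ? (double)mw / vw : 1e9;
    double fy = mh > 0 ? (double)mh / vh : 1e9;
    double f = fx < fy ? fx : fy;   /* keep aspect: take the smaller one */
    if (larger_only && f > 1.0)
        f = 1.0;                    /* --autofit-larger: never upscale */
    *ow = (int)(vw * f + 0.5);
    *oh = (int)(vh * f + 0.5);
}
```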
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Now all numbers in the --geometry specification can take percentages.
Rewrite the parsing of --geometry, because adjusting the sscanf() mess
would require adding all the combinations of using and not using %. As
a side effect, using % and pixel values can be freely mixed.
Keep the aspect if only one of width or height is set. This is more
useful in general.
Note: there is one semantic change: --geometry=num used to mean setting
the window X position, but now it means setting the window width.
Apparently this was an mplayer-specific feature (not part of standard X
geometry specifications), and it doesn't look like an overly useful
feature, so we are fine with breaking it.
In general, the new parsing should still adhere to standard X geometry
specification (as used by XParseGeometry()).
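A minimal sketch of parsing one such dimension (hypothetical helper,
not the actual parser; the real code also handles x/y offsets and
signs): a plain number is taken as pixels, "N%" as a percentage of the
screen dimension, and both forms can be mixed freely across fields.

```c
#include <assert.h>
#include <stdlib.h>

/* Parse a single --geometry dimension. Returns 0 on success. */
static int parse_geometry_dim(const char *s, int screen_size, int *out)
{
    char *end;
    long v = strtol(s, &end, 10);
    if (end == s)
        return -1;              /* no number at all */
    if (*end == '%') {
        v = v * screen_size / 100;
        end++;
    }
    if (*end != '\0')
        return -1;              /* trailing garbage */
    *out = (int)v;
    return 0;
}
```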
|
|
|
|
|
|
|
|
|
| |
This also means the option is verified on program start, not when the VO
is created. The actual code becomes a bit more complex, because the
screen width/height is not available at program start.
The actual parsing code is still the same, with its unusual sscanf()
usage.
|
|
|
|
|
|
|
|
|
| |
Now the calculations of the final display size are done after the filter
chain. This makes the difference between display aspect ratio and window
size a bit more clear, especially in the -xy case.
With an empty filter chain, the behavior of the options should be the
same, except that they don't affect vo_image and vo_lavc anymore.
|
| |
|
| |
|
|
|
|
|
|
|
| |
Handle all pending events and exit instead of waiting. When there are
lots of input events (for example, scrolling with a trackpad), the
timeout can add up to a huge frame delay. In my tests, if I scrolled
fast enough, that loop would never exit.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Drop queued frames on seek. Reset the internal state of some filters
that seem to need it as well: at least vf_divtc still produced some
frames using the previous PTS.
This fixes weird behavior with some filters on seeking. In particular,
this could lead to A/V desync or apparent lockups due to the PTS of
filtered frames being too far away from audio PTS.
This commit does only the minimally required work to fix these PTS
related issues. Some filters have state dependent on previously filtered
frames, and these are not automatically reset with this commit (even
vf_divtc and vf_softpulldown reset the PTS info only). Filters that
actually require a full reset can implement VFCTRL_SEEK_RESET.
|
|
|
|
|
|
| |
Format changes within a file can happen in e.g. MPEG-TS streams. This
fix also fixes encoding of such files, because ao_lavc is not capable of
reconfiguring the audio stream.
|
|
|
|
|
| |
The iPhone profiles recursively included themselves. Wonder why it even
worked somewhat...
|
|
|
|
|
| |
This failed with an assert, because the format of the output image was
not set correctly.
|
|
|
|
|
|
|
|
|
|
|
|
| |
Options that take pixel format names now also accept ffmpeg names.
mpv internal names are preferred. We intentionally leave this
undocumented, and it may be removed once libswscale stops printing
ffmpeg pixel format names to the terminal (or if we stop passing the
SWS_PRINT_INFO flag to it, which makes it print these).
(We insist on keeping the mpv specific names instead of dropping them
in favor of ffmpeg's name due to NIH, and also because ffmpeg always
appends the endian suffixes "le" and "be".)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The video filter chain traditionally used FourCCs for pixel formats.
This was recently changed, but some parts of the manpage were not
updated properly. Now there are two types of options: some which take
a FourCC (as used with raw video formats), and some which take a
symbolic format identifier (as used in the video filter chain).
I realize that it's harder to specify FourCC for RGB formats now (TV
stuff may need RGB). They use non-printable characters as part of the
FourCC, and have to be specified as hexadecimal numbers (instead of
a symbolic identifier). Because I can't be bothered to find out what
these numbers are for the respective formats, just remove the old
pseudo-FourCCs from the documentation.
|
|
|
|
|
|
|
| |
When opening new files in Finder while `mpv` is running from an application
bundle, the new files will now replace the current playlist.
Fixes #14
|
|
|
|
|
| |
`mp_input_queue_cmd` erroneously added commands to the head of the queue
resulting in LIFO behaviour instead of the intended FIFO.
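This bug class can be illustrated with a minimal singly linked queue
(hypothetical types, not the actual mpv input code): enqueueing at the
head gives LIFO order, while appending at the tail gives the intended
FIFO order.

```c
#include <assert.h>
#include <stddef.h>

struct cmd { int id; struct cmd *next; };

/* Correct FIFO enqueue: walk to the tail and append there. The buggy
 * variant would instead do c->next = *head; *head = c; (LIFO). */
static void enqueue_fifo(struct cmd **head, struct cmd *c)
{
    struct cmd **p = head;
    while (*p)
        p = &(*p)->next;
    c->next = NULL;
    *p = c;
}

static struct cmd *dequeue(struct cmd **head)
{
    struct cmd *c = *head;
    if (c)
        *head = c->next;
    return c;
}
```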
|
|
|
|
|
| |
This avoids install_name_tool running out of header space when changing the
paths to the dylibs.
|
|
|
|
| |
Both should be harmless.
|
|
|
|
|
|
| |
<ctype.h> is needed at least for isalnum(). Most of the time this worked,
because some ffmpeg or Libav versions recursively include this header
from libavutil/common.h. Fix it so it always works.
|
|
|
|
| |
I can no longer reproduce the XVidmode related hang.
|
|
|
|
|
| |
This is a lot cleaner than our current workaround that first queries the
desktop resolution.
|
|
|
|
| |
Width and height were removed from mp_image. Use w and h instead.
|
|
|
|
|
| |
Somewhat useful to see where filters are auto-inserted and which formats
they take.
|
| |
|
|
|
|
| |
Having this in gl_common is confusing.
|
|
|
|
|
|
| |
This printed per-frame statistics into a file, like bitrate or frame
type. It's not very useful, and it accesses obscure AVCodecContext fields
(danger of deprecation/breakage), so get rid of it.
|
|
|
|
|
|
|
|
|
|
| |
This was a "broken misfeature" according to Libav developers. It wasn't
implemented for modern codecs (like h264), and has been removed from
Libav a while ago (the AVCodecContext field has been marked as
deprecated and its value is ignored). FFmpeg still supports it, but it
isn't very useful for the aforementioned reasons.
Remove the code to enable it.
|
| |
|
|
|
|
| |
This fixes OSD flicker with vo_xv at high frame rates.
|
| |
|
|
|
|
|
| |
Seems to make it up to about 20% faster in some cases, and slightly
slower in some others.
|
|
|
|
|
|
| |
Should be more efficient in situations where both subtitles and toptitles are
shown, because no blending has to be performed for the video between
them.
|
|
|
|
| |
mp_sub_bitmaps_bb is just sub_bitmaps_bb renamed/moved.
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
Image areas with subtitles are upsampled to 4:4:4 in order to render
the subtitles (this makes blending easier). Try not to copy the Y plane
on upsampling. The libswscale API would require copying it, but this
commit works around that by scaling the chroma planes separately as
AV_PIX_FMT_GRAY8. The
Y plane is not touched at all. This is done for 420p8 only, which is the
most commonly needed case. For other formats, the old way is used.
Seems to make ASS rendering about 15% faster in the setup I tested.
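The idea of treating a subsampled chroma plane as an independent gray
plane can be illustrated with a toy nearest-neighbour 2x upscale (not
the real swscale path): only the chroma data is processed, and the Y
plane is never copied or touched.

```c
#include <assert.h>

/* Upscale one 4:2:0 chroma plane (sw x sh) to full resolution
 * (2*sw x 2*sh) with nearest-neighbour sampling. */
static void upsample_chroma_2x(const unsigned char *src, int sw, int sh,
                               unsigned char *dst)
{
    for (int y = 0; y < sh * 2; y++)
        for (int x = 0; x < sw * 2; x++)
            dst[y * sw * 2 + x] = src[(y / 2) * sw + (x / 2)];
}
```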
|
|
|
|
| |
Doesn't seem to help too much...
|
| |
|
|
|
|
|
|
| |
Allocate it even if it's not needed. The actual work done is almost the
same, except that the code is a bit simpler. It may need more memory at
once for RGB subs that use more than one part, which is rare.
|
|
|
|
|
|
|
|
|
|
|
| |
This was an awkward hack that attempted to avoid the use of 16 bit
textures, while still allowing rendering 10-16 bit YUV formats. The
idea was that even if the hardware doesn't support 16 bit textures,
an A8L8 texture could be used to convert 10 bit (etc.) to 8 bit in
the shader, instead of doing this on the CPU.
This was an experiment, disabled by default, and was (probably) rarely
used. I've never heard of this being used successfully. Remove it.
|
|
|
|
| |
Use reference counting instead.
|
|
|
|
|
|
| |
This is more correct. Not all frame-specific fields are in AVFrame,
such as colorspace and color_range, and these are still queried
through the decoder context.
|
|
|
|
|
|
|
|
| |
This field contained the "average" bit depth per pixel. It serves no
purpose anymore. Remove it.
Only vo_opengl_old still used this, in order to allocate a buffer that is
shared between all planes.
|
|
|
|
|
|
| |
Most of these probably don't have much actual use, but at least allow
images of these formats to be handed to swscale, should any decoder
output them.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This used to mean that there is more than one plane. This is not very
useful: IMGFMT_Y8 was not considered planar, but it's just a Y plane,
and could be treated the same as actual planar formats. On the other
hand, IMGFMT_NV12 is partially packed, and usually requires special
handling, but was considered planar.
Change its meaning. Now the flag is set if the format has a separate
plane for each component. IMGFMT_Y8 is now planar, IMGFMT_NV12 is not.
As an odd special case, IMGFMT_MONO (1 bit per pixel) is like a planar
RGB format with a single plane.
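The new flag semantics boil down to a simple rule, sketched here with an
illustrative descriptor struct (not mpv's actual format metadata): a
format is planar exactly when every component has its own plane.

```c
#include <assert.h>

struct fmt_desc { int planes; int components; };

/* Planar under the new meaning: one separate plane per component. */
static int is_planar(struct fmt_desc d)
{
    return d.planes == d.components;
}
```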
|
|
|
|
|
|
|
|
|
| |
In theory, vf_sub could take any format supported by swscale. But to be
sure that it's reasonably fast, only 420P was allowed. However, other
similar 8 bit planar formats will be just as fast and there's no reason
to exclude them. Even for completely different formats, there doesn't
seem to be any significant advantage in forcing vf_sub to convert to a
simpler/more common format.
|
|
|
|
|
|
|
|
|
|
|
| |
This did random things with some image formats, including 10 bit
formats. Fixes the mp_image_clear() function too.
This still has some caveats:
- doesn't clear alpha to opaque (too hard with packed RGB, and is rarely
needed)
- sets luma to 0 for mpeg-range YUV, instead of the lowest possible
value (e.g. 16 for 8 bit)
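For the planar YUV case, clearing to black with the caveats above can be
sketched as follows (hypothetical helper, not the actual mp_image_clear()
code): luma is set to 0 rather than the mpeg-range minimum of 16, and
chroma to the neutral value 128.

```c
#include <assert.h>
#include <string.h>

/* Clear a planar YUV image to (approximate) black. */
static void clear_yuv(unsigned char *y, int ysize,
                      unsigned char *u, unsigned char *v, int csize)
{
    memset(y, 0, ysize);    /* luma: 0 (see caveat about mpeg range) */
    memset(u, 128, csize);  /* chroma: 128 means "no colour" */
    memset(v, 128, csize);
}
```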
|
| |
|
|
|
|
| |
Not really tested.
|
|
|
|
| |
Crashes.
|
| |
|
|
|
|
| |
It supports 420p only, so the check is useless.
|
|
|
|
| |
Also don't use MP_IMGFLAG_PLANAR.
|
|
|
|
|
| |
Properly handle odd image sizes. Probably makes it work with more image
formats.
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
Needed because mplayer* basically tracks DAR, which makes it harder for
filters which want to keep the SAR.
The contents of this function are duplicated in a few filters, and will
be used instead of custom code as the filters are updated later.
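The relationship can be sketched with a small hypothetical helper (the
real function and its signature may differ): since SAR = (d_w * h) /
(d_h * w), a filter that changes the storage size can derive a new
display size that preserves the per-pixel aspect instead of the DAR.

```c
#include <assert.h>

/* Given storage size (w, h) and display size (d_w, d_h), compute a
 * display size for a new storage size that keeps the SAR constant. */
static void keep_sar(int w, int h, int d_w, int d_h,
                     int new_w, int new_h, int *new_dw, int *new_dh)
{
    *new_dh = new_h;
    *new_dw = (int)((double)new_w * d_w * h / ((double)d_h * w) + 0.5);
}
```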
|
|
|
|
| |
Actually stolen from draw_bmp.c.
|
|
|
|
|
|
|
|
| |
This makes it behave like vo_lavc.
Unfortunately, the code for setting up the OSD dimensions (mp_osd_res)
is copied from vo_lavc, but it doesn't look like something that should
be factored out.
|
|
|
|
|
|
|
|
| |
redraw_frame() copied the image into the currently visible buffer. This
resulted in flicker when doing heavy OSD redrawing (like changing the
subtitle size to something absurdly large).
Use the same logic as draw_image instead.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In order to support OSD redrawing for vo_xv and vo_x11, draw_bmp.c
included an awkward "backup" mechanism to copy and restore image
regions that have been changed by OSD/subtitles.
Replace this by a much simpler mechanism: keep a reference to the
original image, and use that to restore the Xv/X framebuffers.
In the worst case, this may increase cache pressure and memory usage,
even if no OSD or subtitles are rendered. In practice, it seems to be
always faster.
|