path: root/options/options.h

* aspect: add video margin options (wm4, 2019-09-19, 1 file, -0/+2)

  Semantics a bit questionable. This is done for the OSC (next commit), and a comment added to the manpage explicitly states this. Meaning this is probably garbage and needs to be revisited when the OSC changes and/or someone wants to use this margin feature for something else.

  Not sure about the subtitle thing. It's imaginable that someone uses these options to create empty borders for subtitles on the bottom, so subtitles should be located there. On the other hand, this gives a rather unpolished user experience when using the (later added) OSC feature to not overlap with the video. There's not much of a point if the OSC still overlaps the video. However, I'm too lazy to think about this, so it stays like it is.

* demux: add an on-disk cache (wm4, 2019-09-19, 1 file, -0/+1)

  Somewhat similar to the old --cache-file, except for the demuxer cache. Instead of keeping packet data in memory, it's written to disk and read back when needed.

  The idea is to reduce main memory usage, while allowing fast seeking in large cached network streams (especially live streams). Keeping the packet metadata on disk would be rather hard (would use mmap or so, or rewrite the entire demux.c packet queue handling), and since it's relatively small, just keep it in memory.

  Also for simplicity, the disk cache is append-only. If you're watching really long livestreams, and need pruning, you're probably out of luck. This still could be improved by trying to free unused blocks with fallocate(), but since we're writing multiple streams in an interleaved manner, this is slightly hard.

  Some rather gross ugliness in packet.h: we want to store the file position of the cached data somewhere, but on 32 bit architectures, we don't have any usable 64 bit members for this, just the buf/len fields, which add up to 64 bit - so the shitty union aliases this memory.

  Error paths untested. Side data (the complicated part of trying to serialize ffmpeg packets) untested.

  Stream recording had to be adjusted. Some minor details change due to this, but probably nothing important.

  The change in attempt_range_joining() is because packets in cache have no valid len field. It was a useful check (heuristically finding broken cases), but not a necessary one.

  Various other approaches were tried. It would be interesting to list them and to mention the pros and cons, but I don't feel like it.
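
  The union trick described above can be illustrated with a minimal, hypothetical sketch (this is not mpv's actual packet.h; the field names are made up): on a 32-bit target, a pointer plus a size_t together occupy 64 bits, so a union can reuse that storage for the on-disk file position.

      #include <stddef.h>
      #include <stdint.h>

      struct cached_packet {
          union {              /* C11 anonymous members */
              struct {
                  void  *buf;    /* in-memory packet data (unused once on disk) */
                  size_t len;    /* length of the in-memory data */
              };
              uint64_t disk_pos; /* file offset of the data in the disk cache;
                                    aliases buf/len, which add up to 64 bits
                                    on 32-bit architectures */
          };
      };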

* options: remove --chapter (wm4, 2019-09-19, 1 file, -1/+0)

  Has been deprecated for almost 3 years. Manpage didn't mention the deprecation, but CLI and release notes did.

  It wouldn't be much effort to keep this option working, but I just don't see the damn point. --start/--end can specify chapters using special syntax, which is equivalent.
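
  For reference, the special syntax mentioned above is (going by the manpage, so double-check the exact form) a chapter number prefixed with "#" given to --start/--end, so the old chapter range usage looks something like:

      mpv --start='#2' --end='#4' video.mkv

  which plays roughly from the start of chapter 2 up to chapter 4.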

* Implement backwards playback (wm4, 2019-09-19, 1 file, -0/+3)

  See manpage additions. This is a huge hack. You can bet there are shit tons of bugs. It's literally forcing square pegs into round holes.

  Hopefully, the manpage wall of text makes it clear enough that the whole shit can easily crash and burn. (Although it shouldn't literally crash. That would be a bug. It possibly _could_ start a fire by entering some sort of endless loop, not a literal one, just something where it tries to do work without making progress.)

  (Some obvious bugs I simply ignored for this initial version, but there's a number of potential bugs I can't even imagine. Normal playback should remain completely unaffected, though.)

  How this works is also described in the manpage. Basically, we demux in reverse, then we decode in reverse, then we render in reverse.

  The decoding part is the simplest: just reorder the decoder output. This weirdly integrates with the timeline/ordered chapter code, which also has special requirements on feeding the packets to the decoder in a non-straightforward way (it doesn't conflict, although a bug/mess breaks correct slicing of segments, so EDL/ordered chapter playback is broken in backward direction).

  Backward demuxing is pretty involved. In theory, it could be much easier: simply iterating the usual demuxer output backward. But this just doesn't fit into our code, so there's a cthulhu nightmare of shit. To be specific, each stream (audio, video) is reversed separately. At least this means we can do backward playback within cached content (for example, you could play backwards in a live stream; on that note, it disables prefetching, which would lead to losing new live video, but this could be avoided).

  The fuckmess also meant that I didn't bother trying to support subtitles. Subtitles are a problem because they're "sparse" streams. They need to be "passively" demuxed: you don't try to read a subtitle packet, you demux audio and video, and then look whether there was a subtitle packet. This means to get subtitles for a time range, you need to know that you demuxed video and audio over this range, which becomes pretty messy when you demux audio and video backwards separately.

  Backward display is the most weird (and potentially buggy) part. To avoid that we need to touch a LOT of timing code, we negate all timestamps. The basic idea is that due to the negation, all comparisons and subtractions of timestamps keep working, and you don't need to touch every single one of them to "reverse" them. E.g.:

      bool before = pts_a < pts_b;

  would need to be:

      bool before = forward ? pts_a < pts_b : pts_a > pts_b;

  or:

      bool before = pts_a * dir < pts_b * dir;

  or if you, as it's implemented now, just do this after decoding:

      pts_a *= dir;
      pts_b *= dir;

  and then in the normal timing/renderer code:

      bool before = pts_a < pts_b;

  Consequently, we don't need many changes in the latter code. But some assumptions inherently true for forward playback may have been broken anyway. What is mainly needed is fixing places where values are passed between positive and negative "domains". For example, seeking and timestamp user display always uses positive timestamps. The main mess is that it's not obvious which domain a given variable should or does use.

  Well, in my tests with a single file, it suddenly started to work when I did this. I'm honestly surprised that it did, and that I didn't have to change a single line in the timing code past decoder (just something minor to make external/cached text subtitles display). I committed it immediately while avoiding thinking about it. But there really likely are subtle problems of all sorts.

  As far as I'm aware, gstreamer also supports backward playback. When I looked at this years ago, I couldn't find a way to actually try this, and I didn't revisit it now. Back then I also read talk slides from the person who implemented it, and I'm not sure if and which ideas I might have taken from it. It's possible that the timestamp reversal is inspired by it, but I didn't check. (I think it claimed that it could avoid large changes by changing a sign?)

  VapourSynth has some sort of reverse function, which provides a backward view on a video. The function itself is trivial to implement, as VapourSynth aims to provide random access to video by frame numbers (so you just request decreasing frame numbers). From what I remember, it wasn't exactly fluid, but it worked. It's implemented by creating an index, and seeking to the target on demand, and a bunch of caching. mpv could use it, but it would either require using VapourSynth as demuxer and decoder for everything, or replacing the current file every time something is supposed to be played backwards.

  FFmpeg's libavfilter has reversal filters for audio and video. These require buffering the entire media data of the file, and don't really fit into mpv's architecture. It could be used by playing a libavfilter graph that also demuxes, but that's like VapourSynth but worse.
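
  To make the sign-flip idea above concrete, here is a tiny self-contained sketch (illustrative only, not mpv's timing code):

      #include <stdbool.h>
      #include <stdio.h>

      /* Normalize a timestamp right after decoding: multiplying by the playback
       * direction (+1 forward, -1 backward) keeps all later "is A before B?"
       * comparisons valid without touching them. */
      static double normalize_pts(double pts, int dir)
      {
          return pts * dir;
      }

      /* Timing/renderer code stays direction-agnostic. */
      static bool is_before(double pts_a, double pts_b)
      {
          return pts_a < pts_b;
      }

      int main(void)
      {
          int dir = -1; /* backward playback */
          /* When playing backward, the frame at 10.0s is shown before the one at 9.0s. */
          printf("%d\n", is_before(normalize_pts(10.0, dir), normalize_pts(9.0, dir)));
          return 0;
      }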

* player: add --demuxer-cache-wait option (wm4, 2019-09-19, 1 file, -0/+1)

* Remove classic Linux analog TV support, and DVB runtime controls (wm4, 2019-09-13, 1 file, -2/+0)

  Linux analog TV support (via tv://) was excessively complex, and whenever I attempted to use it (cameras or loopback devices), it didn't work well, or would have required some major work to update it. It's very much stuck in the analog past (my favorite are the frequency tables in frequencies.c for analog TV channels which don't exist anymore).

  Especially cameras and such work fine with libavdevice and better than tv://, for example:

      mpv av://v4l2:/dev/video0

  (adding --profile=low-latency --untimed even makes it mostly realtime)

  Adding a new input layer that targets such "modern" uses would be acceptable, if anyone is interested in it. The old TV code is just too focused on actual analog TV.

  DVB is rather obscure, but has an active maintainer, so don't remove it. However, the demux/stream ctrl layer must go, so remove controls for channel switching. Most of these could be reimplemented by using the normal method for option runtime changes.

* demux, stream: rip out the classic stream cache (wm4, 2018-08-31, 1 file, -12/+0)

  The demuxer cache is the only cache now. Might need another change to combat seeking failures in mp4 etc. The only bad thing is the loss of cache-speed, which was sort of nice to have.

* vd_lavc: move hwdec opts to local config, don't use global MPOpts (wm4, 2018-05-24, 1 file, -7/+0)

  The --hwdec* options are a good fit for the vd_lavc local option struct. This annoyingly requires manual prefixing of most of these options with --vd-lavc (could be avoided by using more sub-struct craziness, but let's not).

* ao: use a local option struct (wm4, 2018-05-24, 1 file, -4/+1)

  Instead of accessing MPOpts.

* player: make playback termination asynchronous (wm4, 2018-05-24, 1 file, -0/+1)

  Until now, stopping playback aborted the demuxer and I/O layer violently by signaling mp_cancel (bound to libavformat's AVIOInterruptCB mechanism). Change it to try closing them gracefully.

  The main purpose is to silence those libavformat errors that happen when you request termination. Most of libavformat barely cares about the termination mechanism (AVIOInterruptCB), and essentially it's like the network connection is abruptly severed, or file I/O suddenly returns I/O errors. There were issues with dumb TLS warnings, parsers complaining about incomplete data, and some special protocols that require server communication to gracefully disconnect.

  We still want to abort it forcefully if it refuses to terminate on its own, so a timeout is required. Users can set the timeout to 0, which should give them the old behavior.

  This also removes the old mechanism that treats certain commands (like "quit") specially, and tries to terminate the demuxers even if the core is currently frozen. This is for situations where the core synchronized to the demuxer or stream layer while network is unresponsive. This in turn can only happen due to the "program" or "cache-size" properties in the current code (see one of the previous commits). Also, the old mechanism doesn't fit particularly well with the new one. We wouldn't want to abort playback immediately on a "quit" command - the new code is all about giving it a chance to end it gracefully. We'd need some sort of watchdog thread or something equally complicated to handle this. So just remove it.

  The change in osd.c is to prevent that it clears the status line while waiting for termination. The normal status line code doesn't output anything useful at this point, and the code path taken clears it, both of which are annoying behavior changes, so just let it show the old one.
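
  As a rough sketch of the underlying mechanism (simplified, not mpv's actual shutdown code; the structs here are invented except for FFmpeg's AVIOInterruptCB): graceful termination with a forced-abort timeout can be expressed as an interrupt callback that only starts returning non-zero once a deadline has passed.

      #include <stdatomic.h>
      #include <time.h>
      #include <libavformat/avformat.h>

      struct shutdown_state {
          atomic_bool terminate;   /* set when the user asked playback to stop */
          double      deadline;    /* monotonic time after which we force-abort */
      };

      static double now_sec(void)
      {
          struct timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          return ts.tv_sec + ts.tv_nsec / 1e9;
      }

      /* libavformat polls this during blocking I/O; non-zero aborts the operation. */
      static int interrupt_cb(void *opaque)
      {
          struct shutdown_state *st = opaque;
          return atomic_load(&st->terminate) && now_sec() > st->deadline;
      }

      /* Hooked up roughly like:
       *   fmt_ctx->interrupt_cb = (AVIOInterruptCB){ .callback = interrupt_cb,
       *                                              .opaque   = st };
       * A timeout of 0 degenerates to the old "abort immediately" behavior. */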

* options: remove broken --video-stereo-mode option (wm4, 2018-04-29, 1 file, -1/+0)

  See changelog for minor explanation. Basically, 3D is unused crap and nobody cares.

* vaapi: add option to select a non-default device path (Rostislav Pehlivanov, 2018-03-30, 1 file, -0/+1)

  On machines with multiple GPUs, /dev/dri/renderD128 isn't guaranteed to point to a valid vaapi device. This just adds the option to specify what path to use.

  The old fallback /dev/dri/card0 is gone, but that's not a loss, as it's a legacy interface no longer accepted as valid by libva.

  Fixes #4320
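
  If memory serves, the option added here is --vaapi-device, so picking a specific render node looks something like:

      mpv --hwdec=vaapi --vaapi-device=/dev/dri/renderD129 video.mkv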

* vo: move display-fps internal option value to VO opts (wm4, 2018-03-15, 1 file, -1/+1)

  Removes the awkward notification through VO_EVENT_WIN_STATE. Unfortunately, some awkwardness remains in mp_property_display_fps(), because the property has conflicting semantics with the option.

* video: add an option to tune waiting for video timing (wm4, 2018-03-15, 1 file, -0/+2)

  Probably mostly useful for the libmpv render API.

* video: add option to reduce latency by 1 or 2 frames (wm4, 2018-03-03, 1 file, -0/+1)

  The playback start logic explicitly waits until the first frame has been displayed. Usually this will introduce a wait of 1 vsync. For normal playback this doesn't matter, but with respect to low latency needs, this only leads to additional data getting queued up in the demuxer or network buffers.

  Another thing is that the timing logic decodes 1 frame ahead (= 1 frame extra latency) to determine the exact duration of a frame. To be fair, there doesn't really seem to be a hard reason why this is needed.

  With the current code, enabling the option does lead to A/V desync sometimes (if the demuxer FPS is too inaccurate), and also frame drops at playback start in some situations. But this all seems to be avoidable, if the timing logic were to be rewritten completely, which should probably happen in the future. Thus the new option comes with the warning that it can be removed any time. This is also why the option has "hack" in the name.

* cocoa-cb: change border and borderless window styling (Akemi, 2018-02-28, 1 file, -0/+1)

  The title bar is now within the window bounds instead of outside, same as QuickTime Player. It supports several standard styles, two dark and two light ones. Additionally we have properly rounded corners now, and the borderless window also has the proper window shadow.

  Also make the earliest supported macOS version 10.10.

  Fixes #4789, #3944

* audio: rewrite filtering glue code (wm4, 2018-01-30, 1 file, -0/+1)

  Use the new filtering code for audio too.

* video: rewrite filtering glue code (wm4, 2018-01-30, 1 file, -1/+6)

  Get rid of the old vf.c code. Replace it with a generic filtering framework, which can potentially handle more than just --vf. At least reimplementing --af with this code is planned. This changes some --vf semantics (including runtime behavior and the "vf" command). The most important ones are listed in interface-changes.

  vf_convert.c is renamed to f_swscale.c. It is now an internal filter that cannot be inserted by the user manually.

  f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed once --lavfi-complex is reimplemented on top of f_lavfi.c. (which is conceptually easy, but a big mess due to the data flow changes).

  The existing filters are all changed heavily. The data flow of the new filter framework is different. Especially EOF handling changes - EOF is now a "frame" rather than a state, and must be passed through exactly once.

  Another major thing is that all filters must support dynamic format changes. The filter reconfig() function goes away. (This sounds complex, but since all filters need to handle EOF draining anyway, they can use the same code, and it removes the mess with reconfig() having to predict the output format, which completely breaks with libavfilter anyway.)

  In addition, there is no automatic format negotiation or conversion. libavfilter's primitive and insufficient API simply doesn't allow us to do this in a reasonable way. Instead, filters can use f_autoconvert as sub-filter, and tell it which formats they support. This filter will in turn add actual conversion filters, such as f_swscale, to perform necessary format changes.

  vf_vapoursynth.c uses the same basic principle of operation as before, but with worryingly different details in data flow. Still appears to work.

  The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are heavily changed. Fortunately, they all used refqueue.c, which is for sharing the data flow logic (especially for managing future/past surfaces and such). It turns out it can be used to factor out most of the data flow. Some of these filters accepted software input. Instead of having ad-hoc upload code in each filter, surface upload is now delegated to f_autoconvert, which can use f_hwupload to perform this.

  Exporting VO capabilities is still a big mess (mp_stream_info stuff).

  The D3D11 code drops the redundant image formats, and all code uses the hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a big mess for now.

  f_async_queue is unused.
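
  A purely conceptual sketch of the "EOF is a frame, not a state" idea (types invented for illustration, not mpv's filter API):

      enum frame_type { FRAME_VIDEO, FRAME_AUDIO, FRAME_EOF };

      struct frame {
          enum frame_type type;
          void *data;              /* NULL for FRAME_EOF */
      };

      /* Each filter sees EOF exactly once, as an ordinary item in the stream:
       * it drains whatever it buffered, then forwards the EOF frame downstream.
       * The same drain path doubles as the one used before a dynamic format
       * change, which is why a separate reconfig() step could go away. */
      static struct frame filter_process(struct frame in)
      {
          if (in.type == FRAME_EOF) {
              /* flush internal queues here */
              return in;           /* pass EOF through exactly once */
          }
          /* normal per-frame processing */
          return in;
      }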

* command: add --osd-on-seek option defaulting to bar (Kevin Mitchell, 2018-01-26, 1 file, -0/+1)

  Restores behaviour prior to aef2ed5dc13e37dec0670c451b4369b151d5c65f. That change was apparently unpopular. However, given the amount of complaining over how hard it is to change the defaults by rebinding every key, I think the extra option introduced by this commit is justified.

  Technically not all behaviour is restored, because now --no-osd-bar will not instead display the msg text on seek. I think that feature was a little weird and is now easy enough to remedy with the --osd-on-seek option.
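
  To get the old "message text instead of a bar" behaviour back, something along these lines in mpv.conf should work (option values from memory, so verify against the manpage):

      osd-bar=no
      osd-on-seek=msg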

* audio: add global options for resampler defaults (wm4, 2018-01-13, 1 file, -1/+2)

  This is part of trying to get rid of --af-defaults, and the af resample filter. It requires a complicated mechanism to set the defaults on the resample filter for backwards compatibility.

* player: cosmetics: rename internal variable for consistency (wm4, 2018-01-03, 1 file, -1/+1)

  This was so annoying.

* player: add --cache-pause-initial option to start in buffering state (wm4, 2018-01-03, 1 file, -0/+1)

  For reasons why you'd want this, see manpage additions.

  Disabled by default, because it would increase latency of live streams by default. (Or well, at least it would be another problem when trying to get lower latency.)

* player: use fixed timeout for cache pausing (buffering) duration (wm4, 2018-01-03, 1 file, -0/+1)

  This tried to be clever by waiting for a longer time each time the buffer was underrunning, or shorter if it was getting better. I think this was pretty weird behavior and makes no sense. If the user really wants the stream to buffer longer, he/she/it can just pause the player (the network caches will continue to be filled until they're full). Every time I actually noticed this code triggering in my own use, I didn't find it helpful. Apart from that it was pretty hard to test.

  Some waiting is needed to avoid that the player just plays the available data as fast as possible (to compensate for late frames and underrunning audio). Just use a fixed wait time, which can now be controlled by the new --cache-pause-wait option.

* options: move most subtitle and OSD rendering options to sub structs (wm4, 2018-01-02, 1 file, -44/+55)

  Remove them from the big MPOpts struct and move them to their sub structs. In the places where their fields are used, create a private copy of the structs, instead of accessing the semi-deprecated global option struct instance (mpv_global.opts) directly.

  This actually makes accessing these options finally thread-safe. They weren't, even though they should have been for years. (Including some potential for undefined behavior when e.g. the OSD font was changed at runtime.)

  This is mostly transparent. All options get moved around, but most users of the options just need to access a different struct (changing sd.opts to a different type changes a lot of uses, for example).

  One thing which has to be considered and could cause potential regressions is that the new option copies must be explicitly updated. sub_update_opts() takes care of this for example. Another thing is that writing to the option structs manually won't work, because the changes won't be propagated to other copies.

  Apparently the only affected case is the implementation of the sub-step command, which tries to change sub_delay. Handle this one explicitly (osd_changed() doesn't need to be called anymore, because changing the option triggers UPDATE_OSD, and updates the OSD as a consequence). The way the option value is propagated is rather hacky, but for now this will do.
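
  The private-copy pattern described above, as a hypothetical miniature (names invented, not mpv's actual structs):

      /* Shared, "global" option values live in one place; each consumer keeps
       * its own copy and only refreshes it when a change notification fires,
       * so reads never race with runtime option changes. */
      struct osd_render_opts {
          float font_size;
          int   border_size;
      };

      struct renderer {
          struct osd_render_opts opts;   /* private, per-consumer copy */
      };

      static void renderer_update_opts(struct renderer *r,
                                       const struct osd_render_opts *shared)
      {
          /* Called only on the option system's change event; writing to
           * `shared` directly would never propagate to this copy. */
          r->opts = *shared;
      }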

* vo_gpu/context: Let embedding application handle surface resizes (sfan5, 2017-12-27, 1 file, -0/+1)

  The callbacks for this are Java-only and EGL does not reliably return the correct values.

* options: drop some previously deprecated options (wm4, 2017-12-25, 1 file, -4/+0)

  A release has been made, so drop options deprecated for that release. Also drop some options which have been deprecated a much longer time before.

  Also fix a typo in client-api-changes.rst.

* vd_lavc: rewrite how --hwdec is handled (wm4, 2017-12-01, 1 file, -1/+4)

  Change it from explicit metadata about every hwaccel method to trying to get it from libavcodec. As shown by add_all_hwdec_methods(), this is a quite bumpy road, and a bit worse than expected.

  This will probably cause a bunch of regressions. In particular I didn't check all the strange decoder wrappers, which all cause some sort of special cases each. You're volunteering for beta testing by using this commit.

  One interesting thing is that we completely get rid of mp_hwdec_ctx in vd_lavc.c, and that HWDEC_* mostly goes away (some filters still use it, and the VO hwdec interops still have a lot of code to set it up, so it's not going away completely for now).
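
  For flavor, this is the kind of per-decoder query libavcodec exposes for this (the sketch uses today's FFmpeg API, which may well differ from what this 2017 commit actually called):

      #include <stdio.h>
      #include <libavcodec/avcodec.h>
      #include <libavutil/hwcontext.h>

      /* Ask the decoder itself which hardware configs it supports, instead of
       * keeping hardcoded per-hwaccel metadata on our side. */
      static void list_hw_methods(const AVCodec *codec)
      {
          for (int i = 0; ; i++) {
              const AVCodecHWConfig *cfg = avcodec_get_hw_config(codec, i);
              if (!cfg)
                  break;
              if (cfg->methods & AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX)
                  printf("%s: %s\n", codec->name,
                         av_hwdevice_get_type_name(cfg->device_type));
          }
      }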

* vo_gpu: make it possible to load multiple hwdec interop drivers (wm4, 2017-12-01, 1 file, -3/+1)

  Make the VO<->decoder interface capable of supporting multiple hwdec APIs at once. The main gain is that this simplifies autoprobing a lot. Before this change, it could happen that the VO loaded the "wrong" hwdec API, and the decoder was stuck with the choice (breaking hw decoding). With the change applied, the VO simply loads all available APIs, so autoprobing trickery is left entirely to the decoder.

  In the past, we were quite careful about not accidentally loading the wrong interop drivers. This was in part to make sure autoprobing works, but also because libva had this obnoxious bug of dumping garbage to stderr when using the API. libva was fixed, so this is not a problem anymore.

  The --opengl-hwdec-interop option is changed in various ways (again...), and renamed to --gpu-hwdec-interop. It does not have much use anymore, other than debugging. It's notable that the order in the hwdec interop array ra_hwdec_drivers[] still matters if multiple drivers support the same image formats, so the option can explicitly force one, if that should ever be necessary, or more likely, for debugging. One example are the ra_hwdec_d3d11egl and ra_hwdec_d3d11eglrgb drivers, which both support d3d11 input.

  vo_gpu now always loads the interop lazily by default, but when it does, it loads them all. vo_opengl_cb now always loads them when the GL context handle is initialized. I don't expect that this causes any problems. It's now possible to do things like changing between vdpau and nvdec decoding at runtime.

  This is also preparation for cleaning up vd_lavc.c hwdec autoprobing. It's another reason why hwdec_devices_request_all() does not take a hwdec type anymore.
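
  The "load them all, let array order break ties" idea, reduced to a hypothetical helper (structures invented for illustration):

      #include <stddef.h>

      struct hwdec_interop {
          const char *name;
          int         imgfmt;     /* hw image format this interop can map */
      };

      /* With every available interop loaded up front, the decoder just picks the
       * first one that accepts its output format; array order decides ties (e.g.
       * two drivers that both take d3d11 input), and a user option can still
       * force a specific name for debugging. */
      static const struct hwdec_interop *find_interop(
              const struct hwdec_interop *drivers, size_t n, int imgfmt)
      {
          for (size_t i = 0; i < n; i++) {
              if (drivers[i].imgfmt == imgfmt)
                  return &drivers[i];
          }
          return NULL;
      }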

* af: remove deprecated audio filters (wm4, 2017-11-29, 1 file, -1/+0)

  These couldn't be relicensed, and won't survive the LGPL transition. The other existing filters are mostly LGPL (except libaf glue code).

  This removes the deprecated pan option. I guess it could be restored by inserting a libavfilter filter (if there's one), but for now let it be gone.

  This temporarily breaks volume control (and things related to it, like replaygain).

* vo_gpu: hwdec_d3d11va: allow zero-copy video decoding (James Ross-Gowan, 2017-11-07, 1 file, -0/+1)

  Like the manual says, this is technically undefined behaviour. See: https://msdn.microsoft.com/en-us/library/windows/desktop/ff476085.aspx

  In particular, MSDN says texture arrays created with the BIND_DECODER flag cannot be used with CreateShaderResourceView, which means they can't be sampled through SRVs like normal Direct3D textures. However, some programs (Google Chrome included) do this anyway for performance and power-usage reasons, and it appears to work with most drivers. Older AMD drivers had a "bug" with zero-copy decoding, but this appears to have been fixed.

  See #3255, #3464 and http://crbug.com/623029.

* vo_gpu: d3d11: initial implementation (James Ross-Gowan, 2017-11-07, 1 file, -0/+1)

  This is a new RA/vo_gpu backend that uses Direct3D 11. The GLSL generated by vo_gpu is cross-compiled to HLSL with SPIRV-Cross.

  What works:

  - All of mpv's internal shaders should work, including compute shaders.
  - Some external shaders have been tested and work, including RAVU and adaptive-sharpen.
  - Non-dumb mode works, even on very old hardware. Most features work at feature level 9_3 and all features work at feature level 10_0. Some features also work at feature level 9_1 and 9_2, but without high-bit-depth FBOs, it's not very useful. (Hardware this old is probably not fast enough for advanced features anyway.) Note: This is more compatible than ANGLE, which requires 9_3 to work at all (GLES 2.0,) and 10_1 for non-dumb-mode (GLES 3.0.)
  - Hardware decoding with D3D11VA, including decoding of 10-bit formats without truncation to 8-bit.

  What doesn't work / can be improved:

  - PBO upload and direct rendering does not work yet. Direct rendering requires persistent-mapped PBOs because the decoder needs to be able to read data from images that have already been decoded and uploaded. Unfortunately, it seems like persistent-mapped PBOs are fundamentally incompatible with D3D11, which requires all resources to use driver-managed memory and requires memory to be unmapped (and hence pointers to be invalidated) when a resource is used in a draw or copy operation. However it might be possible to use D3D11's limited multithreading capabilities to emulate some features of PBOs, like asynchronous texture uploading.
  - The blit() and clear() operations don't have equivalents in the D3D11 API that handle all cases, so in most cases, they have to be emulated with a shader. This is currently done inside ra_d3d11, but ideally it would be done in generic code, so it can take advantage of mpv's shader generation utilities.
  - SPIRV-Cross is used through a NIH C-compatible wrapper library, since it does not expose a C interface itself. The library is available here: https://github.com/rossy/crossc
  - The D3D11 context could be made to support more modern DXGI features in future. For example, it should be possible to add support for high-bit-depth and HDR output with DXGI 1.5/1.6.

* video: move drm options to substruct (Lionel CHAZALLON, 2017-10-23, 1 file, -3/+1)

  This allows grouping them and, most of all, querying the group config when needed and when we don't have access to the vo.

* Add DRM_PRIME Format Handling and Display for RockChip MPP decoders (Lionel CHAZALLON, 2017-10-23, 1 file, -0/+1)

  This commit allows using the newly introduced AV_PIX_FMT_DRM_PRIME format in ffmpeg, which allows decoders to provide an AVDRMFrameDescriptor struct. That struct holds dmabuf fds and information allowing zero-copy rendering using KMS / DRM Atomic.

  This has been tested on a RockChip ROCK64 device.

* video: make it possible to always override hardware decoding format (wm4, 2017-10-16, 1 file, -0/+1)