path: root/filters/filter.h
* af/vf-command: add ability to target a specific lavfi filter (Ashyni, 2023-10-05; 1 file, -0/+1)
Fixes: #11180
* vf_vapoursynth: save display resolution as a variable (Dudemanguy, 2023-08-13; 1 file, -0/+1)
mpv has a generic method for getting the display resolution, so we can save it in vf_vapoursynth without too much trouble. Unfortunately, the resolution won't actually be available in many cases (like my own), because the windowing backend doesn't actually know it yet. It looks like at least Windows always returns the default monitor (maybe we should do something similar for x11 and wayland), so there's at least some value. Of course, this still has a bunch of pitfalls, like not being able to cope with multi-monitor setups at all, but so does display_fps.

As an aside, the vapoursynth API this uses apparently requires R26 (an ancient version anyway), so bump the build to compensate for this.

Fixes #11510
* various: fix typos (Harri Nieminen, 2023-03-28; 1 file, -1/+1)
Found by codespell.
* filters: lavfi: allow hwdec_interop selection for filters (Philip Langdale, 2022-09-21; 1 file, -2/+1)
Today, lavfi filters are provided a hw_device from the first hwdec_interop that was loaded, regardless of whether it's the right one or not. In most situations where a hardware-based filter is used, we need more control over the device.

In this change, a `hwdec_interop` option is added to the lavfi wrapper filter configuration, and this is used to pick the correct hw_device to inject into the filter or graph (in the case of a graph, all filters get the same device).

Note that this requires the use of the explicit lavfi syntax to allow for the extra configuration. E.g.:

```
mpv --vf=hwupload
```

becomes

```
mpv --vf=lavfi=[hwupload]:hwdec_interop=cuda-nvdec
```

or

```
mpv --vf=lavfi-bridge=[hwupload]:hwdec_interop=cuda-nvdec
```
* vo: hwdec: do hwdec interop lookup by image format (Philip Langdale, 2022-09-21; 1 file, -1/+1)
It turns out that it's generally more useful to look up hwdecs by image format, rather than device type. In the situations where we need to find one, we generally know the image format we're dealing with. Doing this avoids us having to create mappings from image format to device type.

The most significant part of this change is filling in the image format for the various hw interops. There is a hw_imgfmt field today, but only a couple of the interops fill it in, and that seems to be because we've never actually used this piece of metadata before. Well, now we have a good use for it.
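As a hedged illustration of the lookup direction this commit describes (format-keyed rather than device-keyed), here is a minimal sketch; the enum values, struct layout, and function name are hypothetical stand-ins, not mpv's actual tables:

```c
#include <stddef.h>

/* Hypothetical stand-ins for mpv's internal image format IDs. */
enum imgfmt { IMGFMT_NONE, IMGFMT_VAAPI, IMGFMT_CUDA, IMGFMT_D3D11 };

struct hwdec_interop {
    const char *name;
    enum imgfmt hw_imgfmt;   /* the hw image format this interop handles */
};

/* A registry of loaded interops, each declaring its image format. */
static const struct hwdec_interop interops[] = {
    { "vaapi",      IMGFMT_VAAPI },
    { "cuda-nvdec", IMGFMT_CUDA  },
    { "d3d11va",    IMGFMT_D3D11 },
};

/* Look up an interop by the image format of the frames in hand. */
static const struct hwdec_interop *find_interop(enum imgfmt fmt)
{
    for (size_t i = 0; i < sizeof(interops) / sizeof(interops[0]); i++) {
        if (interops[i].hw_imgfmt == fmt)
            return &interops[i];
    }
    return NULL; /* no loaded interop claims this format */
}
```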
* filter: add filter priority thing (wm4, 2020-08-28; 1 file, -0/+4)
This is a kind of bad hack (with bad implementation) to paint over other problems of the filter system. The main problem is that some filters might be left with pending frames if the filter runner is "paused", which we don't want.

To be used in a later commit.
* audio: redo video-sync=display-adrop (wm4, 2020-05-23; 1 file, -0/+1)
This mode drops or repeats audio data to adapt to video speed, instead of resampling it or such. It was added to deal with SPDIF.

The implementation was part of fill_audio_out_buffers() - the entire function is something whose complexity exploded in my face, and which I want to clean up, and this is hopefully a first step.

Put it in a filter, and mess with the shitty glue code. It's all sort of roundabout and illogical, but that can be rectified later. The important part is that it works much like the resample or scaletempo filters.

For PCM audio, this does not work on samples anymore. This makes it much worse. But for PCM you can use saner mechanisms that sound better.

Also, something about PTS tracking is wrong. But not wasting more time on this.
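The core idea (adjust sync by dropping or repeating whole blocks of audio rather than resampling) reduces to a small decision rule. A hypothetical sketch follows; the names and the half-block threshold are assumptions for illustration, not the filter's actual code:

```c
/* Hypothetical sketch of drop/repeat A/V sync adjustment.
 * drift: audio position minus video position, in seconds.
 * block: duration of one adjustment unit (e.g. one SPDIF frame). */
enum adjust { ADJ_NONE, ADJ_DROP, ADJ_REPEAT };

static enum adjust decide_adjust(double drift, double block)
{
    /* Only act once drift exceeds half a block, to avoid oscillating
     * around zero by repeatedly dropping and repeating. */
    if (drift > block / 2)
        return ADJ_DROP;    /* audio ahead of video: drop a block */
    if (drift < -block / 2)
        return ADJ_REPEAT;  /* audio behind video: repeat a block */
    return ADJ_NONE;
}
```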
* filter: minor cosmetic naming issue (wm4, 2020-03-08; 1 file, -27/+31)
Just putting some more lipstick on the pig, maybe it looks a bit nicer now.
* filter: add functions to suspend filtering temporarily (wm4, 2020-03-05; 1 file, -0/+25)
Filtering is integrated into an event loop, which is something the filter API user provides. To make interacting with the event loop easier, and in particular to keep filtering from blocking event handling, add functions with which the event loop code can suspend filtering.

While we cannot actually suspend a single filter, it's pretty easy to suspend the filter graph run loop itself, which is responsible for selecting which filter to run next.

This commit shouldn't change behavior at all, but the functions will be used in later commits.
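A conceptual sketch of what "suspending the run loop" can look like; this is not the filter.h API, just the shape of the mechanism (a flag checked between filter runs, so a running filter is never interrupted mid-call):

```c
#include <pthread.h>
#include <stdbool.h>

struct graph_runner {
    pthread_mutex_t lock;
    bool suspended;
    /* ... queue of filters with pending work ... */
};

void runner_suspend(struct graph_runner *r)
{
    pthread_mutex_lock(&r->lock);
    r->suspended = true;
    pthread_mutex_unlock(&r->lock);
}

void runner_resume(struct graph_runner *r)
{
    pthread_mutex_lock(&r->lock);
    r->suspended = false;
    pthread_mutex_unlock(&r->lock);
}

/* Called repeatedly by the event loop; returns false when paused, handing
 * control back so event handling is never blocked by filtering. */
bool runner_run_once(struct graph_runner *r)
{
    pthread_mutex_lock(&r->lock);
    bool paused = r->suspended;
    pthread_mutex_unlock(&r->lock);
    if (paused)
        return false;
    /* ... select the next filter with pending work and run its process() ... */
    return true;
}
```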
* filter: add async queue filter (wm4, 2020-02-29; 1 file, -1/+1)
This is supposed to enable communication between filter graphs on separate threads. Having multiple threads makes sense only if they can run concurrently with each other, which requires such an asynchronous queue as a building block. (Probably.)

The basic idea is that you have two independent filters, which can each be part of separate filter graphs, but which communicate in one direction through an explicit queue. This is rather similar to Unix pipes. Just like Unix pipes, the queue is limited in size, so that some data flow control is still enforced, and runaway memory usage is avoided.

This implementation is pretty dumb. In theory, you could avoid waking up the filter graphs in quite a lot of situations. For example, you don't need to wake up the consumer filter if there are already frames queued. Also, you could add "watermarks" that set a threshold at which producer or consumer should be woken up to produce/consume more frames (this would generally serve to "batch" multiple frames at once, instead of performing high-frequency wakeups). But this is hard, so the code is dumb. (I just deleted all related code when I still got situations where wakeups were lost.)

This is actually salvaged and modified from a much older branch I had lying around. It will be used in the next commit.
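The pipe-like flow control reduces to a fixed-capacity FIFO. A sketch of that building block follows; note it is an assumption-laden illustration with made-up names (and the real filter must not block inside a graph, so it signals wakeup callbacks instead of sleeping on a condition variable as this blocking variant does):

```c
#include <pthread.h>
#include <string.h>

#define QUEUE_CAP 8      /* the size limit that bounds memory usage */

struct frame;            /* opaque frame type */

struct async_queue {
    pthread_mutex_t lock;    /* init with PTHREAD_MUTEX_INITIALIZER etc. */
    pthread_cond_t wakeup;
    struct frame *items[QUEUE_CAP];
    int num;
};

/* Producer side: blocks while the queue is full (like write() on a pipe). */
void queue_push(struct async_queue *q, struct frame *f)
{
    pthread_mutex_lock(&q->lock);
    while (q->num == QUEUE_CAP)
        pthread_cond_wait(&q->wakeup, &q->lock);
    q->items[q->num++] = f;
    pthread_cond_broadcast(&q->wakeup); /* dumb but lossless: always wake */
    pthread_mutex_unlock(&q->lock);
}

/* Consumer side: blocks while the queue is empty (like read() on a pipe). */
struct frame *queue_pop(struct async_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->num == 0)
        pthread_cond_wait(&q->wakeup, &q->lock);
    struct frame *f = q->items[0];
    memmove(&q->items[0], &q->items[1], (q->num - 1) * sizeof(q->items[0]));
    q->num--;
    pthread_cond_broadcast(&q->wakeup);
    pthread_mutex_unlock(&q->lock);
    return f;
}
```

The unconditional broadcast in both directions is exactly the "dumb" part the message describes: watermarks or queued-frame checks could skip most of these wakeups, at the cost of much trickier correctness.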
* filter: decide how multi-threading is supposed to work (wm4, 2020-02-29; 1 file, -8/+13)
Instead of vague ideas about making different filter graphs on different threads interact directly, there is now no direct support for this. Instead, helpers are required (such as the one added with the next commit). Document it.

Different root filters (i.e. separate filter graphs) are now considered to be part of separate threads, so assert() if they're found to accidentally interact.
* f_output_chain: log status of auto filters (wm4, 2018-04-29; 1 file, -0/+4)
Just so users don't think the auto filters do anything when they don't actually insert any filters.
* video: pass through container fps to filters (wm4, 2018-04-19; 1 file, -1/+0)
This means vf_vapoursynth doesn't need a hack to work around the filter code, and libavfilter filters now actually get the frame_rate field on input pads set.

The libavfilter doxygen says the frame_rate field is only to be set if the frame rate is known to be constant, and uses the word "must" (which probably means they really mean it?) - but ffmpeg.c sets the field to mere guesses anyway, and it looks like this normally won't lead to problems.
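For reference, this is roughly what setting the field looks like from the libavfilter client side: a minimal sketch using FFmpeg's buffer-source API (the wrapper function itself is made up):

```c
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavutil/mem.h>

/* Create a video buffer source whose output pad carries frame_rate, so
 * downstream filters see the container fps on their input pads. */
static AVFilterContext *make_video_src(AVFilterGraph *graph,
                                       int w, int h, int pix_fmt,
                                       AVRational time_base, AVRational fps)
{
    const AVFilter *buffer = avfilter_get_by_name("buffer");
    AVFilterContext *src = avfilter_graph_alloc_filter(graph, buffer, "src");
    if (!src)
        return NULL;

    AVBufferSrcParameters *par = av_buffersrc_parameters_alloc();
    if (!par)
        return NULL;
    par->width = w;
    par->height = h;
    par->format = pix_fmt;
    par->time_base = time_base;
    par->frame_rate = fps; /* per the doxygen: set only if truly constant */

    int ret = av_buffersrc_parameters_set(src, par);
    av_free(par);
    if (ret < 0 || avfilter_init_str(src, NULL) < 0)
        return NULL;
    return src;
}
```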
* filter: extend documentation comments (wm4, 2018-02-13; 1 file, -6/+43)
Add more explanations, and also fix some blatantly wrong things.
* filter: add/use a convenience function (wm4, 2018-02-03; 1 file, -0/+5)
I guess this is generally useful for filters which buffer data internally.
* video: make decoder wrapper a filter (wm4, 2018-01-30; 1 file, -0/+1)
Move dec_video.c to filters/f_decoder_wrapper.c. It essentially becomes a source filter. vd.h mostly disappears, because mp_filter takes care of the dataflow, but its remains are in struct mp_decoder_fns.

One goal is to simplify dataflow by letting the filter framework handle it (or more accurately, using its conventions). One result is that the decode calls disappear from video.c, because we simply connect the decoder wrapper and the filter chain with mp_pin_connect().

Another goal is to eventually remove the code duplication between the audio and video paths for this. This commit prepares for this by trying to make f_decoder_wrapper.c extensible, so it can be used for audio as well later.

Decoder framedropping changes a bit. It doesn't seem to be worse than before, and it's an obscure feature, so I'm content with its new state. Some special code that was apparently meant to avoid dropping too many frames in a row is removed, though.

I'm not sure how the source code tree should be organized. For one, video/decode/vd_lavc.c is the only file in its directory, which is a bit annoying.
* audio: rewrite filtering glue code (wm4, 2018-01-30; 1 file, -0/+5)
Use the new filtering code for audio too.
* video: rewrite filtering glue code (wm4, 2018-01-30; 1 file, -0/+379)
Get rid of the old vf.c code. Replace it with a generic filtering framework, which can potentially handle more than just --vf. At least reimplementing --af with this code is planned. This changes some --vf semantics (including runtime behavior and the "vf" command). The most important ones are listed in interface-changes.

vf_convert.c is renamed to f_swscale.c. It is now an internal filter that can not be inserted by the user manually.

f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed once --lavfi-complex is reimplemented on top of f_lavfi.c. (which is conceptually easy, but a big mess due to the data flow changes).

The existing filters are all changed heavily. The data flow of the new filter framework is different. Especially EOF handling changes - EOF is now a "frame" rather than a state, and must be passed through exactly once. Another major thing is that all filters must support dynamic format changes. The filter reconfig() function goes away. (This sounds complex, but since all filters need to handle EOF draining anyway, they can use the same code, and it removes the mess with reconfig() having to predict the output format, which completely breaks with libavfilter anyway.)

In addition, there is no automatic format negotiation or conversion. libavfilter's primitive and insufficient API simply doesn't allow us to do this in a reasonable way. Instead, filters can use f_autoconvert as sub-filter, and tell it which formats they support. This filter will in turn add actual conversion filters, such as f_swscale, to perform necessary format changes.

vf_vapoursynth.c uses the same basic principle of operation as before, but with worryingly different details in data flow. Still appears to work.

The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are heavily changed. Fortunately, they all used refqueue.c, which is for sharing the data flow logic (especially for managing future/past surfaces and such). It turns out it can be used to factor out most of the data flow. Some of these filters accepted software input. Instead of having ad-hoc upload code in each filter, surface upload is now delegated to f_autoconvert, which can use f_hwupload to perform this.

Exporting VO capabilities is still a big mess (mp_stream_info stuff). The D3D11 code drops the redundant image formats, and all code uses the hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a big mess for now.

f_async_queue is unused.
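To make the "EOF is a frame" convention concrete, here is a conceptual sketch of a pass-through filter's process function; the pin helpers and frame type are hypothetical stand-ins for the real mp_pin/mp_frame machinery in filter.h:

```c
/* Hypothetical frame and pin types; the real ones live in filter.h and
 * frame.h, but the control flow is the point here. */
enum frame_type { FRAME_NONE, FRAME_VIDEO, FRAME_EOF };

struct frame { enum frame_type type; void *data; };

struct filter;                            /* opaque filter instance */
struct frame pin_read(struct filter *f);  /* assumed: reads the input pin */
void pin_write(struct filter *f, struct frame fr); /* assumed: writes output */

static void process(struct filter *f)
{
    struct frame fr = pin_read(f);
    if (fr.type == FRAME_NONE)
        return; /* no input available yet; wait for the next wakeup */

    if (fr.type == FRAME_EOF) {
        /* Drain any internally buffered output first (not shown), then
         * forward the EOF frame exactly once. There is no EOF *state* to
         * track, and the same draining path handles format changes. */
        pin_write(f, fr);
        return;
    }

    /* A dynamic format change would be detected and handled here, since
     * there is no reconfig() callback anymore. */
    pin_write(f, fr);
}
```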