path: root/filters
Commit message (author, age or date, files changed, lines -removed/+added)
* player/command: add track-list/N/decoder (Kacper Michajłow, 6 days ago, 2 files changed, -22/+6)
* various: make filter internal function names more descriptive (nanahi, 13 days ago, 6 files changed, -42/+42)
    Lots of filters have generic internal function names like "process". In a stack
    trace, all of the different filters use this name, which makes it hard to tell
    which filter is actually being processed. Rename these internal functions to
    carry the filter names, matching what had already been done for some filters.
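    A minimal sketch of the naming pattern; the mp_filter_info layout and header
    path follow mpv's filter API and should be treated as assumptions here:

        #include "filters/filter_internal.h"

        // Give the static process callback a filter-specific name so a stack
        // trace identifies which filter is running, instead of a generic "process".
        static void example_filter_process(struct mp_filter *f)
        {
            // per-frame work for this filter
        }

        static const struct mp_filter_info example_filter_info = {
            .name = "example_filter",
            .process = example_filter_process,
        };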
* meson: get rid of 'egl-helpers' feature (sfan5, 2024-03-21, 1 file changed, -1/+1)
    This was just redundant because it's always present together with 'egl'.
* mp_image: add mp_image_params_static_equal for finer comparison (Kacper Michajłow, 2024-03-09, 3 files changed, -23/+32)
    In case dynamic HDR metadata is present.
* filters/f_lavfi: rename channellayout to ch_layout (Dudemanguy, 2024-03-08, 1 file changed, -2/+2)
    To better match upstream naming.
* swresample: stop using deprecated {in,out}_channel_layout options (Dudemanguy, 2024-03-08, 1 file changed, -0/+16)
    These options were deprecated with the addition of the channel layout API a
    couple of years ago*. Unfortunately, we never saw the deprecation messages, so
    it went unnoticed until they were completely removed with the recent major
    version bump. Fix this by setting in_chlayout and out_chlayout instead if we
    have AV_CHANNEL_LAYOUT. Fixes #13662.

    *: https://github.com/FFmpeg/FFmpeg/commit/8a5896ec1f635ccf0d726f7ba7a06649ebeebf25
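    A minimal sketch of the new call pattern (assumes the AVChannelLayout API,
    FFmpeg 5.1+; the concrete layouts are placeholders and the HAVE_AV_CHANNEL_LAYOUT
    build guard mentioned above is omitted):

        #include <libavutil/channel_layout.h>
        #include <libavutil/opt.h>
        #include <libswresample/swresample.h>

        // Set the new AVChannelLayout-based options instead of the removed
        // in_channel_layout/out_channel_layout integer masks.
        static int set_layouts(SwrContext *swr)
        {
            AVChannelLayout in  = AV_CHANNEL_LAYOUT_5POINT1;
            AVChannelLayout out = AV_CHANNEL_LAYOUT_STEREO;
            int ret = av_opt_set_chlayout(swr, "in_chlayout", &in, 0);
            if (ret >= 0)
                ret = av_opt_set_chlayout(swr, "out_chlayout", &out, 0);
            return ret;
        }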
* filters/f_lavfi: handle removed AV_OPT_TYPE_CHANNEL_LAYOUT (Dudemanguy, 2024-03-07, 1 file changed, -0/+4)
    See: https://github.com/FFmpeg/FFmpeg/commit/65ddc74988245a01421a63c5cffa4d900c47117c
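    A hedged sketch of how the removal can be guarded at build time (the version
    bound and macro name are assumptions, not mpv's actual check):

        #include <stdbool.h>
        #include <libavutil/opt.h>
        #include <libavutil/version.h>

        // Assumed bound: the type was dropped with the libavutil 59 major bump.
        #if LIBAVUTIL_VERSION_MAJOR < 59
        #define HAVE_LEGACY_CHANNEL_LAYOUT_OPT 1
        #else
        #define HAVE_LEGACY_CHANNEL_LAYOUT_OPT 0
        #endif

        // True if this AVOption still needs the old 64-bit-mask layout handling.
        static bool is_legacy_chlayout_opt(const AVOption *opt)
        {
        #if HAVE_LEGACY_CHANNEL_LAYOUT_OPT
            if (opt->type == AV_OPT_TYPE_CHANNEL_LAYOUT)
                return true;
        #endif
            (void)opt;
            return false;
        }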
* video/filter: add field order support for built-in deinterlacers (1nsane000, 2024-03-04, 1 file changed, -3/+6)
    refqueue gets the field order of the frame from mp_image, which almost always
    (if not always) assumes bottom field first. By default this behavior should not
    change, but specifying the field order explicitly bypasses it.
* f_auto_filters: pass field parity to lavfi bwdif deinterlacers (1nsane000, 2024-03-04, 1 file changed, -3/+19)
    Since all of them (software, vulkan, cuda) already have a built-in parity
    parameter.
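    Illustrative mapping from field order to the lavfi parity option (the integer
    encoding of field_order here is made up for the example):

        #include <stdio.h>

        // Map the known field order to the parity value understood by lavfi's
        // bwdif family (bwdif, bwdif_vulkan, bwdif_cuda): "tff", "bff", or "auto".
        static const char *parity_arg(int field_order /* -1 unknown, 0 bff, 1 tff */)
        {
            if (field_order < 0)
                return "auto";
            return field_order ? "tff" : "bff";
        }

        int main(void)
        {
            char graph[128];
            snprintf(graph, sizeof(graph), "bwdif=mode=send_field:parity=%s",
                     parity_arg(1));
            printf("%s\n", graph);   // bwdif=mode=send_field:parity=tff
            return 0;
        }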
* video: don't define IMGFMT_VULKAN conditionally (sfan5, 2024-02-26, 2 files changed, -4/+0)
    We generally try to avoid that due to the #ifdef mess. The equivalent format is
    defined in ffmpeg 4.4 while our interop code requires a higher version, but
    that doesn't cause any problems.
* f_output_chain: prevent double free of child filters (nanahi, 2024-02-08, 1 file changed, -1/+1)
    When mp_output_chain_update_filters() fails, talloc_free() is called on each
    mp_user_filter. But because the structure doesn't have a talloc destructor, the
    args aren't freed, resulting in stale references. Fix this by calling the
    destructor of the wrapped filter instead.
* player/command: add deinterlace-active property (Dudemanguy, 2024-02-07, 4 files changed, -0/+22)
* player: add an auto option to deinterlace (Dudemanguy, 2024-02-07, 1 file changed, -7/+10)
    Deinterlacing required that the user set it on/off themselves, but we actually
    have handy flags for detecting if a frame is interlaced. So it's pretty simple
    to make an auto option using that. Unfortunately, life is not quite that simple
    and there are known cases of false positives from the ffmpeg flags, so we can't
    make auto the default value. However, it still may have some utility for some
    people, and the detection could potentially be improved upon later. Closes #10358.
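    A minimal sketch of the auto decision (the DEINT_* names are hypothetical and
    the per-frame flag is assumed to be mp_image's fields bitmask):

        #include <stdbool.h>
        #include "video/mp_image.h"   // MP_IMGFIELD_INTERLACED, struct mp_image

        // Hypothetical option values; mpv's real enum lives in the option code.
        enum { DEINT_NO, DEINT_YES, DEINT_AUTO };

        // With "auto", only deinterlace frames the decoder actually flags as
        // interlaced; "yes"/"no" keep the old unconditional behavior.
        static bool want_deinterlace(int mode, const struct mp_image *img)
        {
            if (mode == DEINT_AUTO)
                return img->fields & MP_IMGFIELD_INTERLACED;
            return mode == DEINT_YES;
        }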
* f_lavfi: use libplacebo utils instead of mp_csp_* (llyyr, 2024-01-22, 1 file changed, -2/+3)
* csputils: replace mp_colorspace with pl_color_space (Kacper Michajłow, 2024-01-22, 1 file changed, -1/+2)
* f_auto_filters: use bwdif_cuda for deinterlacing with cuda hwdec (Dudemanguy, 2024-01-22, 1 file changed, -1/+1)
    Follow-up to 45f822593f14d78fa22e74fa4e725a3ffd6f713c. The vulkan hwdec uses
    bwdif_vulkan, so there is no reason not to make cuda match.
* f_auto_filters: change fallback deinterlace to bwdif (Dudemanguy, 2024-01-21, 1 file changed, -2/+2)
    I don't actually ever deinterlace, but allegedly this is better than yadif, and
    there's no real reason not to have this be the fallback deinterlacer when we're
    not using hw frames. Also change various mentions of yadif to bwdif. Ref #12835.
* f_lavfi: provide color_space and color_range params for lavfi (llyyr, 2024-01-11, 1 file changed, -0/+4)
    Only when the lavfi version is new enough to actually have these fields.
    See: https://github.com/FFmpeg/FFmpeg/commit/2d555dc82d4ccd3c54c76e2fb3c861a8652de1c6
    Fixes #13234.
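    A hedged sketch of passing the new parameters (assumes a libavfilter that
    already has these AVBufferSrcParameters fields; mpv additionally guards this
    with a build-time version check):

        #include <libavfilter/buffersrc.h>
        #include <libavutil/pixfmt.h>

        // Pass colorimetry to the buffersrc filter alongside the usual
        // width/height/format parameters.
        static void set_colorimetry(AVBufferSrcParameters *par,
                                    enum AVColorSpace csp, enum AVColorRange range)
        {
            par->color_space = csp;
            par->color_range = range;
        }

    The parameters struct itself would come from av_buffersrc_parameters_alloc()
    and be applied with av_buffersrc_parameters_set().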
* hwtransfer: actually treat hardware formats as supported input formats (Philip Langdale, 2023-12-15, 1 file changed, -3/+5)
    It's conceptually broken for a hw format to be identified as a supported sw
    format for an hwdec, but in the case of the drm hwdec, we have no choice but to
    do so for the weird rpi formats, as these are declared, incorrectly, as hw
    formats in their forked ffmpegs. This means we can't reject such formats as a
    matter of principle, as we do today.

    Instead, let's just assume that such formats can always be accepted as-is and
    will never require conversion. In practice, this is either true or it will fail
    in the VO, which is the old behaviour from before I introduced the conversion
    filter capability.
* various: add some missing error checks (Kacper Michajłow, 2023-11-18, 3 files changed, -2/+11)
* ALL: use pl_hdr_metadata and nuke sig_peak (Kacper Michajłow, 2023-11-05, 1 file changed, -11/+0)
    This commit replaces all uses of sig_peak and maps all HDR metadata. Among the
    notable changes, the mixed usage of maxCLL and max_luma is resolved: max_luma
    is now always used, which makes vo_gpu and vo_gpu_next behave the same way.
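    Illustrative sketch of peak selection with libplacebo types (the SDR-white
    fallback and exact field use are assumptions, not mpv's literal code):

        #include <libplacebo/colorspace.h>

        // With full pl_hdr_metadata attached to pl_color_space, the tone-mapping
        // peak comes from hdr.max_luma (in cd/m^2) rather than a separate
        // sig_peak field; fall back to SDR white when it is unset.
        static float effective_peak(const struct pl_color_space *csp)
        {
            return csp->hdr.max_luma > 0 ? csp->hdr.max_luma : PL_COLOR_SDR_WHITE;
        }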
* ALL: use new mp_thread abstraction (Kacper Michajłow, 2023-11-05, 3 files changed, -80/+79)
* mp_threads: rename threads for consistent naming across all of them (Kacper Michajłow, 2023-10-27, 1 file changed, -3/+3)
    I'd like some names to be more descriptive, but to work within the 15-character
    limit we have to make some sacrifices. Also because of the limit, remove the
    `mpv/` prefix and prioritize the actual thread name.
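    For context, a sketch of why 15 characters is the budget on Linux (this is the
    standard pthread_setname_np limit, not mpv-specific code):

        #define _GNU_SOURCE
        #include <pthread.h>
        #include <stdio.h>

        // Linux limits thread names to 15 characters plus the terminating NUL, so
        // a prefix like "mpv/" would eat a quarter of the budget.
        static void set_thread_name(const char *name)
        {
            char buf[16];
            snprintf(buf, sizeof(buf), "%s", name);      // truncates to 15 chars
            pthread_setname_np(pthread_self(), buf);
        }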
* options: rename --fps to --container-fps-override (Dudemanguy, 2023-10-25, 1 file changed, -6/+7)
    This better reflects what it actually does. As a bonus, script writers won't be
    misled into thinking that fps displays the actual video or display fps.
* hwtransfer: handle constraints for hwdec with NULL supported_formats (Philip Langdale, 2023-10-22, 1 file changed, -3/+21)
    Some hwdecs (eg: dxva) have no frames constraints and no static
    supported_formats, which ends up segfaulting. This change fixes the segfault,
    but also includes additional changes to avoid concluding that direct output of
    hardware decoded video is impossible.

    In the case where there are no frame constraints and no supported_formats, we
    basically have no idea what the hardware actually supports. Previously, we just
    tried to display the frame, but all the work I did to detect incompatible
    formats causes this scenario to now conclude that no formats can be displayed,
    and forces a HW download to system memory. I have made a couple of small
    changes to assume that direct display is possible. If it's not, the VO will
    error out down the line, which is what happened previously.
* various: sort some standard headers (NRK, 2023-10-20, 2 files changed, -2/+2)
    Since I was going to fix the include order of stdatomic, I might as well sort
    the surrounding includes in accordance with the project's coding style.

    Some headers can sometimes require a specific include order; standard library
    headers usually don't. But mpv might "hack into" the standard headers (e.g.
    pthreads), which complicates things a bit more. Hopefully nothing breaks; if it
    does, the style guide is to blame.
* various: remove ATOMIC_VAR_INIT (NRK, 2023-10-20, 1 file changed, -1/+1)
    The fallback needed it due to the struct wrapper, but the fallback has now been
    removed, so it's no longer needed. As for standard atomics, it was never really
    needed either: it was useless, then made obsolete in C17 and removed in C23.

    ref: https://gustedt.wordpress.com/2018/08/06/c17-obsoletes-atomic_var_init/
    ref: https://en.cppreference.com/w/c/atomic/ATOMIC_VAR_INIT
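    A minimal before/after sketch:

        #include <stdatomic.h>
        #include <stdint.h>

        // ATOMIC_VAR_INIT was only ever needed for the old wrapper-struct
        // fallback; ordinary initialization is all that C17+ allows anyway.
        static atomic_int refcount = 0;              // was: ATOMIC_VAR_INIT(0)
        static _Atomic uint64_t bytes_read = 0;      // explicit _Atomic qualifier

        void note_read(uint64_t n)
        {
            atomic_fetch_add(&bytes_read, n);
        }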
* osdep: remove atomic.h (NRK, 2023-10-20, 2 files changed, -3/+3)
    Replace it with <stdatomic.h> and replace the mp_atomic_* typedefs with
    explicitly _Atomic-qualified types. Also add missing config.h includes in some
    files.
* hwtransfer: handle hwcontexts that don't implement frame constraints (Philip Langdale, 2023-10-16, 1 file changed, -3/+49)
    Some ffmpeg hwcontexts do not implement frame constraints. That means they
    cannot report which formats are supported for given configurations. My previous
    changes to make hwupload more capable rely on the constraints being provided to
    accurately describe what combinations are supported, so we currently see a
    bunch of errors failing to configure the hwupload filter on platforms such as
    v4l2 SoCs with drmprime. This doesn't break playback, but it's confusing.

    To bridge the gap, this change uses the static format metadata that we include
    for hwdecs to build a fake constraints struct to keep all the other code
    working.
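    A rough sketch of the fallback-constraints idea (simplified; the allocation
    pattern and field population are assumptions about the approach, not mpv's
    literal code):

        #include <limits.h>
        #include <libavutil/hwcontext.h>
        #include <libavutil/mem.h>
        #include <libavutil/pixfmt.h>

        // Synthesize an AVHWFramesConstraints from a static sw-format list when
        // av_hwdevice_get_hwframe_constraints() returns nothing, so the rest of
        // the probing code has something to work with.
        static AVHWFramesConstraints *fake_constraints(const enum AVPixelFormat *sw_fmts,
                                                       int num_sw_fmts,
                                                       enum AVPixelFormat hw_fmt)
        {
            AVHWFramesConstraints *c = av_mallocz(sizeof(*c));
            if (!c)
                return NULL;
            c->valid_sw_formats = av_malloc_array(num_sw_fmts + 1, sizeof(enum AVPixelFormat));
            c->valid_hw_formats = av_malloc_array(2, sizeof(enum AVPixelFormat));
            if (!c->valid_sw_formats || !c->valid_hw_formats) {
                av_hwframe_constraints_free(&c);
                return NULL;
            }
            for (int i = 0; i < num_sw_fmts; i++)
                c->valid_sw_formats[i] = sw_fmts[i];
            c->valid_sw_formats[num_sw_fmts] = AV_PIX_FMT_NONE;
            c->valid_hw_formats[0] = hw_fmt;
            c->valid_hw_formats[1] = AV_PIX_FMT_NONE;
            c->min_width = c->min_height = 0;
            c->max_width = c->max_height = INT_MAX;   // no real size limits known
            return c;
        }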
* filters: change end time calculation to nanoseconds (Dudemanguy, 2023-10-16, 1 file changed, -2/+2)
* f_decoder_wrapper: change video-codec to show description or name (Kacper Michajłow, 2023-10-14, 1 file changed, -2/+2)
    Not both of them. Formatting it as `<name> (<desc>)` produced arguably silly
    strings like `hevc (HEVC (High Efficiency Video Coding))`. Unpack this to show
    only the description if available, or the name otherwise. Produces way nicer
    results in stats.lua and similar places where this name is printed.
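    The selection boils down to something like this (helper name made up):

        // Prefer the long description when present, otherwise the short codec
        // name; never glue the two together as "<name> (<desc>)".
        static const char *codec_display_name(const char *name, const char *desc)
        {
            return (desc && desc[0]) ? desc : name;
        }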
* af/vf-command: add ability to target a specific lavfi filter (Ashyni, 2023-10-05, 2 files changed, -1/+3)
    Fixes: #11180
* timer: rename mp_add_timeout to reflect what it actually does (Kacper Michajłow, 2023-09-29, 1 file changed, -1/+1)
* options: remove --vf-defaults and --af-defaults (Dudemanguy, 2023-09-21, 1 file changed, -13/+1)
    These were deprecated a long time ago and apparently didn't even work with
    lavfi filters. Go ahead and remove them and additionally clean up some code
    related to them. m_config_from_obj_desc_and_args becomes much simpler now and a
    couple of arguments can be completely removed.
* various: add missing include in header files (llyyr, 2023-09-21, 2 files changed, -0/+2)
    Mostly cosmetic.
* sub/ass_mp: filters/f_lavfi: forward declare mp_log (llyyr, 2023-09-21, 1 file changed, -0/+1)
* demux: add crop to mp_codec_params (Kacper Michajłow, 2023-09-17, 1 file changed, -0/+19)
* hwtransfer: make probe_formats logging less spammy (llyyr, 2023-09-15, 1 file changed, -7/+26)
    This demotes the logs from VERBOSE to DEBUG, as well as making it log formats
    in one line rather than one per line.
* f_lavfi: don't reject dynamic lavfi ins/outs (llyyr, 2023-08-28, 1 file changed, -3/+7)
    Ideally, users should be using lavfi-complex instead of lavfi-bridge for such a
    use case; however, lavfi-complex currently doesn't support hwdec. So we can
    allow filters with dynamic inputs to work with lavfi-bridge, at the cost of
    them being able to take only one input. This should probably be reverted
    when/if lavfi-complex gains hwdec support, but for now this allows us to use
    libplacebo as a video filter with hwdec in mpv again.
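    A sketch of the relaxed check (assumes current libavfilter API; not mpv's
    literal code):

        #include <libavfilter/avfilter.h>

        // Filters flagged with AVFILTER_FLAG_DYNAMIC_INPUTS used to be rejected
        // by the bridge; instead, accept them but cap them at a single input pad.
        static unsigned bridged_input_pads(const AVFilter *filter)
        {
            if (filter->flags & AVFILTER_FLAG_DYNAMIC_INPUTS)
                return 1;
            return avfilter_filter_pad_count(filter, 0);   // 0 = count input pads
        }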
* hwtransfer: check if the source format is accepted directly by the VO (Philip Langdale, 2023-08-26, 1 file changed, -0/+26)
    This may seem obvious in retrospect, but we need to explicitly account for the
    case where the source format is supported by the VO, but not a valid target
    format for the conversion filter. In that situation, we would conclude that
    conversion was necessary, because we relied solely on the conversion filter to
    identify acceptable target formats.

    To avoid that, we should go through the VO's reported set of supported formats
    and, if the source format is on the list, ensure that format is also on the
    target list. This is mostly a no-op as most VOs do not report supported formats
    (instead assuming that all formats decoders can produce are supported), and in
    the case where it matters (vaapi), there is only one format that the VO
    supports which the conversion filter does not (yuv444p).
* hwtransfer: use the right hardware config to find conversion targets (Philip Langdale, 2023-08-26, 1 file changed, -19/+59)
    The last piece in the puzzle for doing hardware conversions automatically is
    ensuring we only consider valid target formats for the conversion. Although it
    is unintuitive, some vaapi drivers can expose a different set of formats for
    uploads vs for conversions, and that is the case on the Intel hardware I have
    here. Before this change, we would use the upload target list, and our
    selection algorithm would pick a format that doesn't work for conversions,
    causing everything to fail. Whoops.

    Successfully obtaining the conversion target format list is a bit of a
    convoluted process, with only parts of it encapsulated by ffmpeg. Specifically,
    ffmpeg understands the concept of hardware configurations that can affect the
    constraints of a device, but does not define what configurations are - that is
    left up to the specific hwdevice type. In the case of vaapi, we need to create
    a config for the video processing endpoint, and use that when querying for
    constraints. I decided to encapsulate creation of the config as part of the
    hwdec init process, so that the constraint query can be done in the hwtransfer
    code in an opaque way. I don't know if any other hardware will need this
    capability, but if so, we'll be able to account for it.

    Then, when we look at probing, instead of checking which formats are supported
    for transfers, we use the results of the constraint query with the conversion
    config. And as that config doesn't depend on the source format, we only need to
    do it once.
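    A hedged sketch of the vaapi-specific constraint query (error handling
    trimmed; the caller is assumed to obtain the VADisplay from the device context
    and to destroy the VAConfig later):

        #include <libavutil/hwcontext.h>
        #include <libavutil/hwcontext_vaapi.h>
        #include <libavutil/mem.h>
        #include <va/va.h>

        // Query constraints for the video processing (conversion) endpoint rather
        // than the default upload path, by attaching a VAProfileNone /
        // VAEntrypointVideoProc config to the hw device before asking.
        static AVHWFramesConstraints *get_vpp_constraints(AVBufferRef *device_ref,
                                                          VADisplay display)
        {
            VAConfigID config_id;
            if (vaCreateConfig(display, VAProfileNone, VAEntrypointVideoProc,
                               NULL, 0, &config_id) != VA_STATUS_SUCCESS)
                return NULL;

            AVVAAPIHWConfig *hwconfig = av_hwdevice_hwconfig_alloc(device_ref);
            if (!hwconfig)
                return NULL;
            hwconfig->config_id = config_id;

            AVHWFramesConstraints *cts =
                av_hwdevice_get_hwframe_constraints(device_ref, hwconfig);
            av_free(hwconfig);
            return cts;
        }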
* autoconvert: destroy sub filter immediately if reconfiguration is needed (Philip Langdale, 2023-08-26, 1 file changed, -2/+2)
    I'm currently not convinced that the way the output_chain is handled as part of
    reconfiguration is correct. If there is an event requiring reconfiguration,
    such as toggling the use of hwdec, we currently do not ensure that the filter
    chain is fully drained first. This creates a situation where the filter chain
    is invalidated while the autoconvert's sub filter (that does the real work)
    still has a frame to process and pass on.

    As the autoconvert code calls mp_subfilter_drain_destroy(), it returns early to
    allow for draining before destroying the subfilter, but that means the
    subfilter is still present in its original configuration, and no actual
    draining is done before the existing filters reinitialise themselves. This
    leads to a situation where, if a hardware scaling filter is being used by
    autoconvert, that filter is still present and responds when told to
    reinitialise. But it cannot successfully reinitialise if the triggering event
    is disabling hw decoding, as the input frame to the filter will now be a
    software frame, so reinit fails, leading to total failure of the filter chain,
    which is a fatal error, and we exit.

    I think this was never noticed before, because I don't think it's possible for
    the hwtransfer filter to be active in a situation where you can dynamically
    change the state such that the input or output formats of the output chain are
    invalidated. eg: If the autoconverter is activated because of
    `--vf=format=vaapi`, it is actually not possible to toggle hwdec off, as the
    explicit user filter ensures the hwdec is always present and active.

    So, my solution here is to destroy the sub filter, regardless of whether it
    needs draining or not. We simply have no opportunity to drain and reconfigure
    in the correct order, and we must consider the remaining frame in the filter as
    a casualty of the toggling process. I'm sure there is a more substantial rework
    of the output_chain reconfiguration process that could ensure draining before
    reconfiguration begins, but my ambitions do not currently extend that far.
* output_chain: don't reset autoconvert on changes to unrelated filters (Philip Langdale, 2023-08-26, 1 file changed, -1/+5)
    This has been a standing behaviour for a long time, but I noticed it while
    implementing the hw->hw autoconvert functionality. Today, the output_chain will
    reset the autoconvert state when any output_chain filter sees the input format
    change. This is wasteful as it leads to the image converter having to be
    reinitialised each time it happens, so we should only do it when the actual
    "convert" filter sees the input format change. It doesn't matter if one of the
    other filters in the chain sees a change (although in practice, a format change
    will basically always propagate down the chain, so they all see a change at the
    same time).

    The practical effect of the old behaviour was that a format change would always
    lead to the image converter being rebuilt twice - once after the "convert"
    filter sees the format change, and then again after the "out" filter (the end
    of the chain) sees the change. In this commit, we check which filter is seeing
    the change, and only reset the autoconvert state for the "convert" filter
    itself.
* hwtransfer: implement support for hw->hw format conversion (Philip Langdale, 2023-08-26, 3 files changed, -48/+127)
    Historically, we have not attempted to support hw->hw format conversion in the
    autoconvert logic. If a user needed to do these kinds of conversions, they
    needed to manually insert the hw format's conversion filter (eg: scale_vaapi).
    This was usually fine because the general rule is that any format supported by
    the hardware can be used as well as any other, i.e. you would only need to do
    conversion if you have a specific goal in mind.

    However, we now have two situations where we can find ourselves with a hardware
    format being produced by a decoder that cannot be accepted by a VO via
    hwdec-interop:

    * dmabuf-wayland can only accept formats that the Wayland compositor accepts.
      In the case of GNOME, it can only accept a handful of RGB formats.
    * When decoding via VAAPI on Intel hardware, some of the more unusual video
      encodings (4:2:2, 10bit 4:4:4) lead to packed frame formats which gpu-next
      cannot handle, causing rendering to fail.

    In both these cases (at least when using VAAPI with dmabuf-wayland), if we
    could detect the failure case and insert a `scale_vaapi` filter, we could get
    successful output automatically. For `hwdec=drm`, there is currently no
    conversion filter, so dmabuf-wayland is still out of luck there.

    The basic approach to implementing this is to detect the case where we are
    evaluating a hardware format where the VO can accept the hardware format
    itself, but may not accept the underlying sw format. In the current code, we
    bypass autoconvert as soon as we see the hardware format is compatible.

    My first observation was that we actually have logic in autoconvert to detect
    when the input sw format is not in the list of allowed sw formats passed into
    the autoconverter. Unfortunately, we never populate this list, and the way you
    would expect to do that is vo-query-format returning sw format information,
    which it does not. We could define an extended vo-query-format-2, but we'd
    still need to implement the probing logic to fill it in.

    On the other hand, we already have the probing logic in the hwupload filter -
    and most recently I used that logic to implement conversion on upload. So if we
    could leverage that, we could both detect when hw->hw conversion is required
    and pick the best target format.

    This exercise is then primarily one of detecting when we are in this case and
    letting that code run in a useful way. The hwupload filter is a bit awkward to
    work with today, so I refactored a bunch of the setup code to actually make it
    more encapsulated. Now, instead of the caller instantiating it and then
    triggering the probe, we probe on creation and instantiate the correct
    underlying filter (hwupload vs conversion) automatically.
* vf_vapoursynth: save display resolution as a variable (Dudemanguy, 2023-08-13, 2 files changed, -0/+9)
    mpv has a generic method for getting the display resolution, so we can save it
    in vf_vapoursynth without too much trouble. Unfortunately, the resolution won't
    actually be available in many cases (like my own) because the windowing backend
    doesn't actually know it yet. It looks like at least Windows always returns the
    default monitor (maybe we should do something similar for x11 and wayland), so
    there's at least some value. Of course, this still has a bunch of pitfalls like
    not being able to cope with multi-monitor setups at all, but so does
    display_fps.

    As an aside, the vapoursynth API this uses apparently requires R26 (an ancient
    version anyway), so bump the build to compensate for this. Fixes #11510.
* m_option: change m_option_type_aspect to double (Dudemanguy, 2023-08-09, 1 file changed, -1/+1)
    This specific option type is only used for the video aspect. The underlying
    type was a float to represent the inputted value, but it's actually not precise
    enough. When using something like 4:3, the error in the trailing digits is
    significant enough to make av_d2q return a very funky numerator and denominator
    which is close to 4/3 but not quite. This leads to some "off by one pixel"
    errors. Weirdly, mpv's actual calculations for this were already being done as
    double, but then converted to floats for this specific type. Just drop the
    conversion step and leave it all as double, which has the precision we need
    (i.e. the AVRational is now 4/3 for this case). Fixes #8190.
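    A small standalone illustration of the precision difference (not mpv's literal
    call site; the INT_MAX bound is illustrative):

        #include <limits.h>
        #include <stdio.h>
        #include <libavutil/rational.h>

        // Pushing 4/3 through a float before av_d2q() yields a large "close but
        // not quite" fraction, while the double path reduces cleanly to 4/3.
        int main(void)
        {
            double as_double = 4.0 / 3.0;
            float  as_float  = (float)as_double;
            AVRational d = av_d2q(as_double, INT_MAX);
            AVRational f = av_d2q(as_float, INT_MAX);
            printf("double: %d/%d\n", d.num, d.den);   // expected: 4/3
            printf("float:  %d/%d\n", f.num, f.den);   // some huge num/den pair
            return 0;
        }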
* player/video: check for forced eof (Dudemanguy, 2023-07-22, 1 file changed, -0/+1)
    It's a bit of an edge case, but since we now allow disabling the software
    fallback, it's possible to have a situation where hwdec completely fails and
    the mpv window is still lingering from the previous item in the playlist. What
    needs to happen is simply that the vo_chain should uninit itself and handle
    force_window if needed. In order to do that, a new VDCTRL is added that checks
    vd_lavc if force_eof was set. player/video will then start the uninit process
    if needed after getting this.
* f_hwtransfer: disable vulkan multiplane images when uploading from cuda (Philip Langdale, 2023-05-28, 1 file changed, -2/+3)
    Although we can support vulkan multiplane images, cuda lacks any such support,
    and so cannot natively import such images for interop. It's possible that we
    can do separate exports for each plane in the image and have it work, but for
    now, we can selectively disable multiplane when we know that we'll be consuming
    cuda frames.

    As a reminder, even though cuda is the frame source, interop is one way, so the
    vulkan images have to be imported to cuda before we copy the frame contents
    over. The logic here is slightly more complex than I'd like, but you can't just
    set the flag blindly, as it will cause hwframes ctx creation to fail if the
    format is packed or if it's planar rgb. Oh well.
* hwdec_vulkan: use bwdif_vulkan as deinterlacing auto filter (Philip Langdale, 2023-05-28, 1 file changed, -0/+6)
    This is currently the only vulkan deinterlacing filter in ffmpeg, and it's a
    very high quality algorithm.
* hwdec_vulkan: add Vulkan HW Interop (Philip Langdale, 2023-05-28, 1 file changed, -0/+6)
    Vulkan Video Decoding has finally become a reality, as it's now showing up in
    shipping drivers, and the ffmpeg support has been merged. With that in mind,
    this change introduces HW interop support for ffmpeg Vulkan frames.

    The implementation is functionally complete - it can display frames produced by
    hardware decoding, and it can work with ffmpeg vulkan filters. There are still
    various caveats due to gaps and bugs in drivers, so YMMV, as always. Primary
    testing has been done on Intel, AMD, and nvidia hardware on Linux, with basic
    Windows testing on nvidia.

    Notable caveats:
    * Due to driver bugs, video decoding on nvidia does not work right now, unless
      you use the Vulkan Beta driver. It can be worked around, but requires ffmpeg
      changes that are not considered acceptable to merge.
    * Even if those work-arounds are applied, Vulkan filters will not work on video
      that was decoded by Vulkan, due to additional bugs in the nvidia drivers. The
      filters do work correctly on content decoded some other way, and then
      uploaded to Vulkan (eg: decode with nvdec, upload with --vf=format=vulkan).
    * Vulkan filters can only be used with drivers that support
      VK_EXT_descriptor_buffer, which doesn't include Intel ANV as yet. There is an
      MR outstanding for this.
    * When dealing with 1080p content, there may be some visual distortion in the
      bottom lines of frames due to chroma scaling incorporating the extra hidden
      lines at the bottom of the frame (1080p content is actually stored as 1088
      lines), depending on the hardware/driver combination and the scaling
      algorithm. This cannot be easily addressed, as the mechanical fix for it
      violates the Vulkan spec, and probably requires a spec change to resolve
      properly.

    All of these caveats will be fixed in either drivers or ffmpeg, and so will not
    require mpv changes (unless something unexpected happens). If you want to run
    on nvidia w