path: root/video/filter/refqueue.c
Each entry below: commit message (author, date; files changed, lines -removed/+added)
* refqueue: free referenced images on free (wm4, 2016-06-19; 1 file, -0/+1)
Otherwise stale references will survive forever. Could leak hardware video surfaces. In particular, the mpv vdpau code crashed with an assertion when exiting after toggling deinterlacing, because not all references were released.
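A minimal sketch of the idea behind this fix (not the actual one-line patch): on teardown, every queued mp_image reference has to be dropped so hardware surfaces go back to their pool. The queue layout shown here is hypothetical; mp_image_unrefp() is mpv's usual unref helper.

```c
#include "video/mp_image.h"   // mp_image_unrefp()

// Hypothetical internal layout, for illustration only; the real struct differs.
struct refqueue_sketch {
    struct mp_image *entries[16];
    int num_entries;
};

// Drop every queued reference when the queue is freed, so hardware surfaces
// (vdpau/vaapi/d3d11) are released instead of surviving forever.
static void refqueue_drop_all(struct refqueue_sketch *q)
{
    for (int n = 0; n < q->num_entries; n++)
        mp_image_unrefp(&q->entries[n]);
    q->num_entries = 0;
}
```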
* vf_d3d11vpp: add a D3D11 video processor filter (wm4, 2016-05-28; 1 file, -0/+5)
Main use: deinterlacing.

I'm not sure how to select the deinterlacing mode at all. You can enumerate the available video processors, but at least on Intel, all of them either signal support for all deinterlacers, or none (the latter is apparently used for IVTC). I haven't found anything that actually tells the processor _which_ algorithm to use.

Another strange detail is how to select top/bottom fields and field dominance. At least I'm getting quite similar results to vavpp on Linux, so I'm content with it for now.

Future plans include removing the D3D11 video processor use from the ANGLE interop code.
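As a hedged illustration of the enumeration described above, here is a sketch against the public D3D11 video API (this is not the vf_d3d11vpp source; vdev and cdesc are assumed to be set up by the caller, and error handling is minimal):

```c
// Sketch only: list each video processor's claimed deinterlacing capabilities.
#define COBJMACROS
#include <d3d11.h>
#include <stdio.h>

static void list_deint_caps(ID3D11VideoDevice *vdev,
                            const D3D11_VIDEO_PROCESSOR_CONTENT_DESC *cdesc)
{
    ID3D11VideoProcessorEnumerator *en = NULL;
    if (FAILED(ID3D11VideoDevice_CreateVideoProcessorEnumerator(vdev, cdesc, &en)))
        return;

    D3D11_VIDEO_PROCESSOR_CAPS caps;
    if (SUCCEEDED(ID3D11VideoProcessorEnumerator_GetVideoProcessorCaps(en, &caps))) {
        for (UINT n = 0; n < caps.RateConversionCapsCount; n++) {
            D3D11_VIDEO_PROCESSOR_RATE_CONVERSION_CAPS rc;
            if (FAILED(ID3D11VideoProcessorEnumerator_GetVideoProcessorRateConversionCaps(en, n, &rc)))
                continue;
            // ProcessorCaps carries D3D11_VIDEO_PROCESSOR_PROCESSOR_CAPS_DEINTERLACE_*
            // bits (blend, bob, adaptive, motion compensation). Per the commit above,
            // drivers tend to report either all of them or none.
            printf("processor %u: deinterlace caps 0x%x\n", n, (unsigned)rc.ProcessorCaps);
        }
    }
    ID3D11VideoProcessorEnumerator_Release(en);
}
```

This only shows why picking a specific algorithm is awkward: the caps flags describe what a processor supports, not which deinterlacer it will actually run.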
* vf_vdpaupp: use refqueue helper (wm4, 2016-05-27; 1 file, -3/+14)
This makes vf_vdpaupp use the deinterlacer helper code already used by vf_vavpp. A nice side-effect is that this also removes some traces of code originating from vo_vdpau.c, so we can switch it to LGPL.

Extend the refqueue helper with a deint setting. If not set, mp_refqueue_should_deint() always returns false, which slightly simplifies vf_vdpaupp. It's of no consequence to vf_vavpp (other than it has to set it to get expected behavior).
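A rough sketch of how a filter consumes the deint setting. mp_refqueue_should_deint() is named in the commit message; the setter and flag shown here are modeled on mpv's refqueue.h and should be treated as assumptions for this exact revision.

```c
#include <stdbool.h>
#include "video/filter/refqueue.h"

// Sketch only; mp_refqueue_set_mode()/MP_MODE_DEINT are assumed names.
static void filter_one(struct mp_refqueue *q, bool user_wants_deint)
{
    // Without this, mp_refqueue_should_deint() always returns false.
    mp_refqueue_set_mode(q, user_wants_deint ? MP_MODE_DEINT : 0);

    if (mp_refqueue_should_deint(q)) {
        // run the hardware deinterlacer on the current frame/field
    } else {
        // plain passthrough
    }
}
```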
* vf_vavpp: make refqueue logic field-based (wm4, 2016-05-25; 1 file, -29/+89)
Abstracts the annoying framerate-doubling behavior. Same deal as with refqueue introduction: the code size blows up, but at least it can be reused for other filters.
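As an illustration of what field-based logic gives a filter (names follow mpv's refqueue.h and are assumptions with respect to this revision): the helper tracks whether the current output is the top or bottom field and whether it is the second output generated from the same frame, so the filter no longer has to double the framerate itself.

```c
#include <stdbool.h>
#include "video/filter/refqueue.h"

// Illustration only: map the refqueue's field state to a hypothetical
// hardware field-selection value (0 = top field, 1 = bottom field).
static int pick_field(struct mp_refqueue *q)
{
    bool top = mp_refqueue_is_top_field(q);        // field to render now
    bool second = mp_refqueue_is_second_field(q);  // 2nd output of this frame?
    (void)second;  // a real filter uses this for PTS and flag handling
    return top ? 0 : 1;
}
```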
* vf_vavpp: use future instead of past PTS to determine field duration (wm4, 2016-05-25; 1 file, -7/+7)
If the deinterlacer separates fields, the framerate must be doubled. Since we have no stable and reliable framerate anywhere, we've been calculating it by taking the time halfway to the next frame.

vf_vavpp actually used the past frame to calculate the frame duration, which is sort of ok, but will skip the 2nd field at the start of a stream (since the first frame has no past PTS). This is annoying for testing, so use the future frame PTS instead, which means the last field of the stream will be dropped instead.
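The timing rule being described, as a small self-contained sketch (a hypothetical helper, not the patch itself): one field lasts half the distance to the future frame's PTS, so the second field lands halfway between the two frames, and without a future frame the field is dropped.

```c
#include <math.h>

// Sketch of the rule above; NAN stands in for "no PTS available".
static double second_field_pts(double cur_pts, double next_pts)
{
    if (isnan(cur_pts) || isnan(next_pts))
        return NAN;                        // e.g. last frame: its 2nd field is dropped
    double field_dur = (next_pts - cur_pts) / 2;
    return cur_pts + field_dur;            // halfway to the next frame
}
```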
* vf_vavpp: move frame handling to separate file (wm4, 2016-05-25; 1 file, -0/+153)
Move the handling of the future/past frames and the associated dataflow rules to a separate source file.

While this on its own seems rather questionable and just inflates the code, I intend to reuse it for other filters. The logic is annoying enough that it shouldn't be duplicated a bunch of times.

(I considered other ways of sharing this logic, such as an uber-deinterlace filter, which would access the hardware deinterlacer via a different API. Although that sounds like kind of the right approach, this would have other problems, so let's not, at least for now.)
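To make the dataflow concrete, a hedged sketch of how a filter might drive the helper (function names follow mpv's refqueue.h and are assumptions for this exact revision): input is handed over as it arrives, and output is produced once enough past/future references are buffered.

```c
#include <stddef.h>
#include "video/filter/refqueue.h"

// Illustrative filter step, not taken from vf_vavpp.
static void filter_step(struct mp_refqueue *q, struct mp_image *new_frame)
{
    if (new_frame)
        mp_refqueue_add_input(q, new_frame);             // queue takes a reference

    while (mp_refqueue_has_output(q)) {
        struct mp_image *prev = mp_refqueue_get(q, -1);  // past ref (may be NULL)
        struct mp_image *cur  = mp_refqueue_get(q, 0);   // frame to output
        struct mp_image *next = mp_refqueue_get(q, 1);   // future ref (may be NULL)
        // ... run the hardware deinterlacer with prev/cur/next here ...
        (void)prev; (void)cur; (void)next;
        mp_refqueue_next(q);                             // advance past the current frame
    }
}
```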