path: root/wscript_build.py

Commit message | Author | Date | Files | Lines

* vf_dlopen: remove this filter | wm4 | 2017-06-18 | 1 | -12/+0

    It was an attempt to move some MPlayer filters (which were removed from
    mpv) to external, loadable filters. That worked well, but then the MPlayer
    filters were ported to libavfilter (independently), so they're available
    again.

    Also there is a more widely supported and more advanced loadable filter
    system supported by mpv: vapoursynth.

    In conclusion, vf_dlopen is not useful anymore, confusing, and requires
    quite a bit of code (and probably wouldn't survive the rewrite of the mpv
    video filter chain, which has to come at some point). It has some implicit
    dependencies on internal conventions, like possibly the format names
    dropped in the previous commit.

    We also deprecated it last release. Drop it.

* js: add javascript scripting support using MuJS | Avi Halachmi (:avih) | 2017-06-14 | 1 | -0/+7

    Implements JS with almost identical API to the Lua support.

    Key differences from Lua:
    - The global mp, mp.msg and mp.utils are always available.
    - Instead of returning x, error, return x and expose mp.last_error().
    - Timers are JS standard set/clear Timeout/Interval.
    - Supports CommonJS modules/require.
    - Added at mp.utils: getenv, read_file, write_file and a few more.
    - Global print and dump (expand objects) functions.
    - mp.options currently not supported.

    See DOCS/man/javascript.rst for more details.
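
    As a rough illustration (the file name and path are hypothetical), a
    standalone JS script can be loaded the same way as a Lua one, or dropped
    into the scripts directory to be picked up automatically:

        mpv --script=/path/to/hello.js video.mkv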

* wscript_build: install shared libmpv to BINDIR for Win32 | Ricardo Constantino | 2017-04-24 | 1 | -0/+4

* video: drop vaapi/vdpau hw decoding support with FFmpeg 3.2 | wm4 | 2017-04-23 | 1 | -2/+0

    This drops support for the old libavcodec APIs. Now FFmpeg 3.3 or FFmpeg
    git is required. Libav has no release with the new APIs yet, so Libav git
    as of a few weeks or months ago is required if you want to use Libav.

    Not much actually changes in hwdec_vaegl.c - some code is removed, but the
    reindentation inflates the diff.

* player: make screenshot commands honor the async flag | wm4 | 2017-04-01 | 1 | -0/+1

    And also change input.conf to make all screenshots async. (Except the
    every-frame mode, which always uses synchronous mode and ignores the
    flag.) By default, the "screenshot" command is still asynchronous, because
    scripts etc. might depend on this behavior.

    This is only partially async. The code for determining the filename is
    still always run synchronously. Only encoding the screenshot and writing
    it to disk is asynchronous. We explicitly document the exact behavior as
    undefined, so it can be changed any time.

    Some of this is a bit messy, because I wanted to avoid duplicating the
    message display code between sync and async mode. In async mode, this is
    called from a worker thread, which is not safe because showing a message
    accesses the thread-unsafe OSD code. So the core has to be locked during
    this, which implies accessing the core and all that. So the code has weird
    locking calls, and we need to do core destruction in a more "controlled"
    manner (thus the outstanding_async field).

    (What I'd really want would be the OSD simply showing log messages
    instead.)

    This is pretty untested, so expect bugs.

    Fixes #4250.
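
    For reference, an input.conf binding that uses the flag might look like
    this (the key choices are illustrative, not taken from the shipped
    defaults):

        s   async screenshot
        S   async screenshot video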

* wscript: fix broken build with dvdread+dvdnav in 34e6a26 | Ricardo Constantino | 2017-03-31 | 1 | -4/+2

    Didn't know waf actually tried to compile the same files twice.

* wscript: decouple dvdnav check from dvdread | Ricardo Constantino | 2017-03-31 | 1 | -0/+2

    Reallows enabling dvdnav without enabling dvdread, which was broken in
    77cbb3543 when they were both disabled by default.

    Since dvdnav requires dvdread, we can enable dvdread:// even if
    --enable-dvdread isn't passed.

    Fixes #4290

* osx: initial Touch Bar support | Akemi | 2017-03-26 | 1 | -0/+1

* sub: add SDH subtitle filter | Dan Oscarsson | 2017-03-25 | 1 | -0/+1

    Add subtitle filter to remove additions for the deaf or hard-of-hearing
    (SDH). This is for English, but may in part work for others too.

    This is an ASS filter, and the intention is that it can always be enabled,
    as by default it does not remove parts that may be normal text. Harder
    filtering can be enabled with an additional option.

    Signed-off-by: wm4 <wm4@nowhere>
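
    Assuming the option names documented in the manual, enabling the filter
    from the command line, with the stricter variant behind a second switch,
    looks like:

        mpv --sub-filter-sdh=yes movie.mkv
        mpv --sub-filter-sdh=yes --sub-filter-sdh-harder=yes movie.mkv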

* w32_common: move the IDropTarget impl to a separate file | James Ross-Gowan | 2017-03-26 | 1 | -0/+1

    This was mostly self-contained, so its removal makes w32_common.c a bit
    easier to read. Also, because it was self-contained and its author has
    agreed to LGPL relicencing, the new file has the LGPL licence header.

* af_drc: remove | Jan Janssen | 2017-03-25 | 1 | -1/+0

    Remove the low quality drc filter. Anyone wishing to have dynamic range
    compression should use the much more powerful acompressor ffmpeg filter:

        mpv --af=lavfi=[acompressor] INPUT

    Or with parameters:

        mpv --af=lavfi=[acompressor=threshold=-25dB:ratio=3:makeup=8dB] INPUT

    Refer to https://ffmpeg.org/ffmpeg-filters.html#acompressor for a full
    list of supported parameters.

    Signed-off-by: wm4 <wm4@nowhere>

* vdpau: support new vdpau libavcodec decode API | wm4 | 2017-03-23 | 1 | -1/+1

    The new API works like the new vaapi API, using generic hwaccel support.

    One minor detail is the error message that will be printed if using
    non-4:2:0 surfaces (which as far as I can tell is completely broken in the
    nVidia drivers and thus not supported by mpv). The HEVC warning (which is
    completely broken in the nVidia drivers but should work with Mesa) had to
    be added to the generic hwaccel code.

    This also trashes display preemption recovery. Fuck that. It never really
    worked. If someone complains, I might attempt to add it back somehow.

    This is the 4th iteration of the libavcodec vdpau API (after the separate
    decoder API, the manual hwaccel API, and the automatic vdpau hwaccel API).
    Fortunately, further iterations will be generic, and not require many
    vdpau-specific changes (if any at all).

* vo_opengl: add experimental vdpauglx backend | wm4 | 2017-03-18 | 1 | -0/+1

    As the manpage says, this has no value other than adding bugs.

    It uses code based on context_x11.c, and basically does very stripped down
    context creation (no alpha support etc.). It uses vdpau for display, and
    maps vdpau output surfaces as FBOs to render into them.

    This might be good to experiment with asynchronous presentation. For now,
    it presents synchronously, with a 4 frame delay (which should whack off
    A/V sync). The forced 4 frame delay is probably also why interaction feels
    slower.

    There are some weird vdpau errors on resizing and uninit. No idea what
    causes them.

* vd_lavc, vaapi: move hw device creation to generic code | wm4 | 2017-02-20 | 1 | -1/+0

    hw_vaapi.c didn't do much interesting anymore. Other than the function to
    create a device for decoding with vaapi-copy, everything can be done by
    generic code.

    Other libavcodec hwaccels are planned to provide the same API as vaapi. It
    will be possible to drop the other hw_ files in the future. They will use
    this generic code instead.

* vo_opengl: remove dxva2 dummy hwdec backend | wm4 | 2017-02-20 | 1 | -1/+0

    This was a hack to let libmpv API users pass a d3d device to mpv. It's not
    needed anymore for 2 reasons:

    1. ANGLE does not have this problem
    2. Even native GL via nVidia (where this failed) seems to not require this
       anymore

* vo_opengl: implement videotoolbox hwdec on iOS | Aman Gupta | 2017-02-17 | 1 | -0/+1

    Implements --hwdec=videotoolbox on iOS. Similar to hwdec_osx.c, but using
    CVPixelBuffer APIs available on iOS instead of the equivalent IOSurface
    APIs in macOS.
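
    A typical invocation (file name arbitrary) is the same as on macOS:

        mpv --vo=opengl --hwdec=videotoolbox video.mp4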

* videotoolbox: factor some duplicated code | wm4 | 2017-02-17 | 1 | -0/+1

    The code for copying a videotoolbox surface to mp_image was duplicated
    (with some minor differences - I picked the hw_videotoolbox.c version,
    because it was "better").

    mp_imgfmt_from_cvpixelformat() is somewhat duplicated with the
    vt_formats[] table, but this will be fixed in a later commit, and moving
    the function to shared code is preparation.

* player: add experimental stream recording feature | wm4 | 2017-02-07 | 1 | -0/+1

    This is basically a WIP, but it can't remain in a branch forever. A
    warning is printed when using it, as it's still a bit "shaky".
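
    Assuming the experimental option added here is --record-file (output name
    arbitrary), a minimal example would be:

        mpv --record-file=capture.mkv https://example.com/stream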

* build: prefix hwaccel decoder wrapper filenames with hw_ | wm4 | 2017-01-17 | 1 | -8/+8

    Should have done this a long time ago. d3d.c remains as it is, because
    it's just a bunch of helper functions.

* player: add experimental C plugin interface | wm4 | 2017-01-12 | 1 | -1/+7

    This basically reuses the scripting infrastructure.

    Note that this needs to be explicitly enabled at compilation. For one,
    enabling export for certain symbols from an executable seems to be quite
    toolchain-specific. It might not work outside of Linux and cause random
    problems within Linux.

    If C plugins actually become commonly used and this approach is starting
    to turn out as a problem, we can build mpv CLI as a wrapper for libmpv,
    which would remove the requirement that plugins pick up host symbols.

    I'm being lazy, so implementation/documentation are parked in existing
    files, even if that stuff doesn't necessarily belong there. Sue me, or
    better send patches.
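
    A sketch of the workflow, with the configure switch and file names assumed
    rather than quoted from the docs (the plugin has to export the
    mpv_open_cplugin() entry point from libmpv's client API):

        ./waf configure --enable-cplugins
        cc -shared -fPIC -o myplugin.so myplugin.c
        mpv --script=./myplugin.so video.mkv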

* vaapi: support new libavcodec vaapi API | wm4 | 2017-01-11 | 1 | -1/+2

    The old API is deprecated, and libavcodec prints a warning at runtime. The
    new API is a bit nicer and does many things for you, such as managing the
    underlying hwaccel decoder. libavutil also provides code for managing
    surfaces (we use their surface pool).

    The new code does not contain any code from the original MPlayer VAAPI
    patch (that was used as base for some of the vaapi code in mpv). Thus the
    new code is LGPL.

    The new API actually does not add any visible symbols, so the only way to
    detect it is a version check. Of course, the versions overlap between
    FFmpeg and Libav, which requires additional care. The new API did not get
    merged into FFmpeg yet, so there's no check for FFmpeg.

* vaapi: rename vaapi.c to vaapi_old.c | wm4 | 2017-01-11 | 1 | -1/+1

    vaapi.c will be reintroduced with the new code using the new libavcodec
    vaapi API.

* build: always run code generators before compiling | Stefano Pigozzi | 2017-01-07 | 1 | -17/+2

* build: use matroska.py & file2string.py as python modules | Stefano Pigozzi | 2017-01-05 | 1 | -17/+42

* manpage: add table of contents to the HTML version | Zhiming Wang | 2016-12-14 | 1 | -1/+1

    The reST contents directive is added to mpv.rst. In wscript_build.py, the
    --strip-elements-with-class=contents option is needed for the rst2man call
    in order to prevent the TOC from appearing in mpv.1.
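
    Outside the build system, a roughly equivalent manual invocation (paths
    assumed) would be:

        rst2man --strip-elements-with-class=contents DOCS/man/mpv.rst mpv.1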

* wscript_build: rst2pdf: increase section break level | shinchiro | 2016-12-11 | 1 | -1/+1

    This fixes the broken PDF build and is hopefully less fragile in the
    future.

* vo_opengl: hwdec_cuda: Use dynamic loading for cuda functions | Philip Langdale | 2016-11-23 | 1 | -0/+1

    This change applies the pattern used in ffmpeg to dynamically load cuda,
    to avoid requiring the CUDA SDK at build time.

* wscript: fix typo | wm4 | 2016-11-22 | 1 | -1/+1

* vf_vdpaurb: remove this filter | wm4 | 2016-11-22 | 1 | -1/+0

    Was deprecated, superseded by --hwdec=vdpau-copy.
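
    The replacement is selected directly as a decoding option, e.g. (file name
    arbitrary):

        mpv --hwdec=vdpau-copy video.mkv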

* audio/out: add AudioUnit output driver for iOS | Aman Gupta | 2016-11-01 | 1 | -0/+3

* vo_tct: optional custom size, support non-posix with 80x25 default | Avi Halachmi (:avih) | 2016-10-25 | 1 | -1/+1

    Also, replace the UTF-8 half-block char in the source code with a C
    escape.

* vo_tct: introduce modern caca alternative | rr- | 2016-10-20 | 1 | -0/+1
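
    vo_tct renders video as colored text in the terminal; a minimal way to try
    it (file name arbitrary):

        mpv --vo=tct video.mkv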

* hwdec_cuda: Rename config variable to be more consistent | Philip Langdale | 2016-09-16 | 1 | -2/+2

    'cuda-gl' isn't right - you can turn this on without any GL and get some
    non-zero benefit (with the cuda-copy hwaccel). So 'cuda-hwaccel' seems
    more consistent with everything else.

* player: move builtin profiles to a separate file | wm4 | 2016-09-15 | 1 | -0/+4

    Move the embedded string with the builtin profiles to a separate
    builtin.conf file. This makes it easier to read and edit, and you can also
    check it for errors with --include=etc/builtin.conf.

    (Normally errors are hidden intentionally, because there's no way to
    output error messages this early, and because some options might not be
    present on all platforms or with all configurations.)

* vo_opengl: mali fbdev support | wm4 | 2016-09-13 | 1 | -0/+1

    Minimal support just for testing.

    Only the window surface creation (including size determination) is really
    platform specific, so this could be some generic thing with
    platform-specific support as some sort of sub-driver, but on the other
    hand I don't see much of a need for such a thing.

    While most of the fbdev usage is done by the EGL driver, using this fbdev
    ioctl is apparently the only way to get the display resolution.

* vo_opengl: add hw overlay support and use it for RPI | wm4 | 2016-09-12 | 1 | -1/+2

    This overlay support specifically skips the OpenGL rendering chain, and
    uses GL rendering only for OSD/subtitles. This is for devices which don't
    have performant GL support.

    hwdec_rpi.c contains code ported from vo_rpi.c. vo_rpi.c is going to be
    deprecated. I left in the code for uploading sw surfaces (as it might be
    slightly more efficient for rendering sw decoded video), although it's
    dead code for now.

* build: recompile zsh completion if zsh.pl changes | Philip Sequeira | 2016-09-10 | 1 | -1/+1

* hwdec/opengl: Add support for CUDA and cuvid/NvDecode | Philip Langdale | 2016-09-08 | 1 | -0/+2

    Nvidia's "NvDecode" API (up until recently called "cuvid") is a
    cross-platform but nvidia-proprietary API that exposes their hardware
    video decoding capabilities. It is analogous to their DXVA or VDPAU
    support on Windows or Linux but without using platform specific API calls.

    As a rule, you'd rather use DXVA or VDPAU as these are more mature and
    well supported APIs, but on Linux, VDPAU is falling behind the hardware
    capabilities, and there's no sign that nvidia are making the investments
    to update it.

    Most concretely, this means that there is no VP8/9 or HEVC Main10 support
    in VDPAU. On the other hand, NvDecode does export vp8/9 and partial
    support for HEVC Main10 (more on that below).

    ffmpeg already has support in the form of the "cuvid" family of decoders.
    Due to the design of the API, it is best exposed as a full decoder rather
    than an hwaccel. As such, there are decoders like h264_cuvid, hevc_cuvid,
    etc.

    These decoders support two output paths today - in both cases, NV12 frames
    are returned, either in CUDA device memory or regular system memory.

    In the case of the system memory path, the decoders can be used as-is in
    mpv today with a command line like:

        mpv --vd=lavc:h264_cuvid foobar.mp4

    Doing this will take advantage of hardware decoding, but the cost of the
    memcpy to system memory adds up, especially for high resolution video (4K
    etc).

    To avoid that, we need an hwdec that takes advantage of CUDA's OpenGL
    interop to copy from device memory into OpenGL textures. That is what this
    change implements.

    The process is relatively simple as only basic device context acquisition
    needs to be done by us - the CUDA buffer pool is managed by the decoder -
    thankfully.

    The hwdec looks a bit like the vdpau interop one - the hwdec maintains a
    single set of plane textures and each output frame is repeatedly mapped
    into these textures to pass on. The frames are always in NV12 format, at
    least until 10bit output support emerges.

    The only slightly interesting part of the copying process is that CUDA
    works by associating PBOs, so we need to define these for each of the
    textures.

    TODO Items:
    * I need to add a download_image function for screenshots. This would do
      the same copy to system memory that the decoder's system memory output
      does.
    * There are items to investigate on the ffmpeg side. There appears to be a
      problem with timestamps for some content.

    Final note: I mentioned HEVC Main10. While there is no 10bit output
    support, NvDecode can return dithered 8bit NV12 so you can take advantage
    of the hardware acceleration. This particular mode requires compiling
    ffmpeg with a modified header (or possibly the CUDA 8 RC) and is not
    upstream in ffmpeg yet.

    Usage: You will need to specify vo=opengl and hwdec=cuda. Note that
    hwdec=auto will probably not work as it will try to use vdpau first.

        mpv --hwdec=cuda --vo=opengl foobar.mp4

    If you want to use filters that require frames in system memory, just use
    the decoder directly without the hwdec, as documented above.

* misc: add some annoying mpv_node helpers | wm4 | 2016-08-28 | 1 | -0/+1

    Sigh. Some parts of mpv essentially duplicate this code (with varying
    levels of triviality) - this can be fixed "later".

* audio/filter: remove delay audio filter | Paul B Mahol | 2016-08-12 | 1 | -1/+0

    A similar filter is available in libavfilter.

* client API: add stream_cb API for user-defined stream implementations | Aman Gupta | 2016-08-07 | 1 | -1/+2

    Based on #2630. Some heavy changes by committer.

    Signed-off-by: wm4 <wm4@nowhere>

* wscript: add proper non-version'd SONAME for Android | Jan Ekström | 2016-07-30 | 1 | -1/+7

    This seems to have become a requirement since API target 23+, and matches
    what FFmpeg does.

* build: add --htmldir option | Chris Mayo | 2016-07-30 | 1 | -1/+1

    Defaults to docdir but makes it possible to install HTML documentation
    separately.
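
    With the waf-based configure, the new switch is passed like any other
    directory option (the path below is only an example):

        ./waf configure --htmldir=/usr/local/share/doc/mpv/html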

* audio: refactor mixer code and delete mixer.c | wm4 | 2016-07-17 | 1 | -1/+0

    mixer.c didn't really deserve to be separate anymore, as half of its
    contents were unnecessary glue code after recent changes. It also created
    a weird split between audio.c and af.c due to the fact that mixer.c could
    insert audio filters. With the code being in audio.c directly, together
    with other code that inserts filters during runtime, it will be possible
    to clean up this code a bit and make it work like the video filter code.

    As part of this change, make the balance code work like the volume code,
    and add an option backing the current balance value.

    Also, since the balance semantics are unexpected for most users (panning
    between the audio channels, instead of just changing the relative volume),
    and there are some other volumes, formally deprecate both the old property
    and the new option.

* d3d: merge angle_common.h into d3d.h | wm4 | 2016-06-28 | 1 | -2/+1

    OK, this was dumb. The file didn't have much to do with ANGLE, and the
    functionality can simply be moved to d3d.c. That file contains helpers for
    decoding, but can always be present (on Windows) since it doesn't access
    any D3D specific libavcodec APIs. Thus it doesn't need to be conditionally
    built like the actual hwaccel wrappers.

* vo_opengl: remove prescaling framework with superxbr prescaler | Bin Jin | 2016-06-18 | 1 | -1/+0

    Signed-off-by: wm4 <wm4@nowhere>

* vo_opengl: remove nnedi3 prescaler | Bin Jin | 2016-06-18 | 1 | -5/+0

* vo_opengl: fix d3d11 hardware decoding probing on Windows 7 | wm4 | 2016-06-09 | 1 | -0/+1

    Although D3D11 video decoding is unsupported on Windows 7, the associated
    APIs almost work. Where they fail is texture creation, where we try to
    create D3D11_BIND_DECODER surfaces. So specifically try to detect this
    situation.

    One issue is that once the hwdec interop is created, the damage is done,
    and it can't use another backend (because currently only 1 hwdec backend
    is supported). So that's where we prevent attempts to use it.

    It still can fail when trying to use d3d11va-copy (since that doesn't
    require an interop backend), but at that point we don't care anymore -
    dxva2(-copy) is tried before that anyway.

* build: don't install tests, only build them | Ilya Tumaykin | 2016-05-29 | 1 | -5/+6

    Considering tests are usually executed right after a successful build,
    there's no reason to keep them around for later.

* video: remove d3d11 video processor use from OpenGL interop | wm4 | 2016-05-29 | 1 | -0/+1

    We now have a video filter that uses the d3d11 video processor, so it
    makes no sense to have one in the VO interop code. The VO uses it for
    formats not directly supported by ANGLE (so the video data is converted to
    a RGB texture, which ANGLE can take in).

    Change this so that the video filter is automatically inserted if needed.
    Move the code that maps RGB surfaces to its own interop backend. Add a
    bunch of new image formats, which are used to enforce the new constraints,
    and to automatically insert the filter only when needed.

    The added vf mechanism to auto-insert the d3d11vpp filter is very dumb and
    primitive, and will work only for this specific purpose. The format
    negotiation mechanism in the filter chain is generally not very pretty,
    and mostly broken as well. (libavfilter has a different mechanism, and
    these mechanisms don't match well, so vf_lavfi uses some sort of hack. It
    only works because hwaccel and non-hwaccel formats are strictly
    separated.)

    The RGB interop is now only used with older ANGLE versions. The only
    reason I'm keeping it is because it's relatively isolated (uses only
    existing mechanisms and adds no new concepts), and because I want to be
    able to compare the behavior of the old code with the new one for testing.
    It will be removed eventually.

    If ANGLE has NV12 interop, P010 is now handled by converting to NV12 with
    the video processor, instead of converting it to RGB and using the old
    mechanism to import that as a texture.

* vf_d3d11vpp: add a D3D11 video processor filter | wm4 | 2016-05-28 | 1 | -0/+1

    Main use: deinterlacing.

    I'm not sure how to select the deinterlacing mode at all. You can
    enumerate the available video processors, but at least on Intel, all of
    them either signal support for all deinterlacers, or none (the latter is
    apparently used for IVTC). I haven't found anything that actually tells
    the processor _which_ algorithm to use.

    Another strange detail is how to select top/bottom fields and field
    dominance.

    At least I'm getting quite similar results to vavpp on Linux, so I'm
    content with it for now.

    Future plans include removing the D3D11 video processor use from the ANGLE
    interop code.
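
    A sketch of enabling the filter for deinterlacing, with the sub-option
    name assumed rather than taken from the manual (file name arbitrary):

        mpv --hwdec=d3d11va --vf=d3d11vpp=deint=yes video.ts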

* vf_vavpp: move frame handling to separate file | wm4 | 2016-05-25 | 1 | -0/+1

    Move the handling of the future/past frames and the associated dataflow
    rules to a separate source file.

    While this on its own seems rather questionable and just inflates the
    code, I intend to reuse it for other filters. The logic is annoying enough
    that it shouldn't be duplicated a bunch of times.

    (I considered other ways of sharing this logic, such as an
    uber-deinterlace filter, which would access the hardware deinterlacer via
    a different API. Although that sounds like kind of the right approach,
    this would have other problems, so let's not, at least for now.)