path: root/wscript_build.py

* build: merge d3d11va and dxva2 hwaccel checks (wm4, 2016-05-11; 1 file, -3/+3)
  We don't have any reason to disable either. Both are loaded dynamically
  at runtime anyway. There is also no reason why dxva2 would disappear
  from libavcodec any time soon.

* vo_opengl: angle: dynamically load ANGLE (wm4, 2016-05-11; 1 file, -0/+1)
  ANGLE is _really_ annoying to build. (Requires a special toolchain and a
  recent MSVC version.) This results in various issues with people having
  trouble building mpv against ANGLE (apparently linking it against a
  prebuilt binary doesn't count, or using binaries from potentially
  untrusted sources is not wanted). Dynamically loading ANGLE is going to
  be a huge convenience. This commit implements this, with special focus
  on keeping it source compatible to a normal build with ANGLE linked at
  build-time.

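  A rough sketch of what runtime loading boils down to on Windows; the
  library name, the resolved symbol and the fallback behaviour below are
  illustrative assumptions, not a description of mpv's actual ANGLE
  loader:

      #include <windows.h>
      #include <stdio.h>

      // Function pointer type for one EGL entry point resolved at runtime.
      typedef void *(__stdcall *pfn_eglGetDisplay)(void *native_display);

      int main(void)
      {
          // Load the ANGLE binary at runtime instead of linking against it.
          // "libEGL.dll" is the usual ANGLE name, but treat it as an assumption.
          HMODULE egl = LoadLibraryW(L"libEGL.dll");
          if (!egl) {
              fprintf(stderr, "ANGLE not available, fall back to another backend\n");
              return 1;
          }

          // Resolve the symbol. With build-time linking this would simply be a
          // direct call to eglGetDisplay(), which is what keeping the code
          // "source compatible" is about.
          pfn_eglGetDisplay get_display =
              (pfn_eglGetDisplay)GetProcAddress(egl, "eglGetDisplay");
          if (!get_display) {
              FreeLibrary(egl);
              return 1;
          }

          void *display = get_display(NULL /* EGL_DEFAULT_DISPLAY */);
          printf("eglGetDisplay -> %p\n", display);
          FreeLibrary(egl);
          return 0;
      }
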
* video: refactor how VO exports hwdec device handles (wm4, 2016-05-09; 1 file, -0/+1)
  The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
  renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
  documented where not), which makes the whole thing saner and cleaner. In
  particular, thread-safety rules become less subtle and more obvious.

  The new internal API makes it easier to support multiple OpenGL interop
  backends. (Although this is not done yet, and it's not clear whether it
  ever will.)

  This also removes all the API-specific fields from mp_hwdec_ctx and
  replaces them with a "ctx" field. For d3d in particular, we drop the
  mp_d3d_ctx struct completely, and pass the interfaces directly.

  Remove the emulation checks from vaapi.c and vdpau.c; they are
  pointless, and the checks that matter are done on the VO layer.

  The d3d hardware decoders might slightly change behavior: dxva2-copy
  will not use the VO device anymore if the VO supports proper interop.
  This pretty much assumes that in such cases the VO will not use any form
  of exclusive mode, which makes using the VO device in copy mode
  unnecessary.

  This is a big refactor. Some things may be untested and could be broken.

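  The commit describes an opaque struct with (mostly) thread-safe
  accessors. A generic sketch of that pattern, with invented names
  (hwdec_devices, hwdec_devices_get_ctx) rather than mpv's real API:

      #include <pthread.h>
      #include <stdlib.h>

      // A public header would only contain: struct hwdec_devices;  (opaque)
      // The definition lives in one .c file, so all locking rules live there too.
      struct hwdec_devices {
          pthread_mutex_t lock;
          void *ctx;   // API-specific device handle (e.g. a D3D or VAAPI device)
      };

      struct hwdec_devices *hwdec_devices_create(void)
      {
          struct hwdec_devices *devs = calloc(1, sizeof(*devs));
          pthread_mutex_init(&devs->lock, NULL);
          return devs;
      }

      // Accessors are the only way in, so thread-safety is enforced centrally.
      void hwdec_devices_set_ctx(struct hwdec_devices *devs, void *ctx)
      {
          pthread_mutex_lock(&devs->lock);
          devs->ctx = ctx;
          pthread_mutex_unlock(&devs->lock);
      }

      void *hwdec_devices_get_ctx(struct hwdec_devices *devs)
      {
          pthread_mutex_lock(&devs->lock);
          void *ctx = devs->ctx;
          pthread_mutex_unlock(&devs->lock);
          return ctx;
      }
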
* d3d11va: store texture/subindex in IMGFMT_D3D11VA plane pointers (wm4, 2016-04-27; 1 file, -1/+0)
  Basically this gets rid of the need for the accessors in d3d11va.h, and
  the code can be cleaned up a little bit. Note that libavcodec only
  defines an ID3D11VideoDecoderOutputView pointer in the last plane
  pointers, but it tolerates/passes through the other plane pointers we
  set.

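  A rough illustration of what "store texture/subindex in the plane
  pointers" can look like; the struct below is a stand-in with only the
  fields needed for the example, not mpv's mp_image:

      #include <stdint.h>
      #include <stdio.h>

      // Stand-in image struct: hwaccel formats can repurpose the plane
      // pointers to carry API objects instead of pixel data.
      struct fake_image {
          uint8_t *planes[4];
      };

      // Pack the texture object and its array-slice index into the planes so
      // downstream code (e.g. a GL interop) can unpack them without accessors.
      static void set_d3d11_data(struct fake_image *img, void *texture, int subindex)
      {
          img->planes[0] = texture;                        // the ID3D11Texture2D (as void * here)
          img->planes[1] = (uint8_t *)(intptr_t)subindex;  // array slice within that texture
      }

      static void get_d3d11_data(struct fake_image *img, void **texture, int *subindex)
      {
          *texture = img->planes[0];
          *subindex = (int)(intptr_t)img->planes[1];
      }

      int main(void)
      {
          struct fake_image img = {0};
          set_d3d11_data(&img, (void *)0x1000, 3);
          void *tex; int idx;
          get_d3d11_data(&img, &tex, &idx);
          printf("texture=%p subindex=%d\n", tex, idx);
          return 0;
      }
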
* vo_opengl: D3D11VA + ANGLE interop (wm4, 2016-04-27; 1 file, -0/+1)
  This uses ID3D11VideoProcessor to convert the video to an RGBA surface,
  which is then bound to ANGLE. Currently ANGLE does not provide any way
  to bind nv12 surfaces directly, so this will have to do.

  ID3D11VideoContext1 would give us slightly more control over the
  colorspace conversion, though it's still not good, and not available in
  MinGW headers yet.

  The video processor is created lazily, because we need to have the coded
  frame size, which AVFrame and mp_image have no concept of. Doing the
  creation lazily is less of a pain than somehow hacking the coded frame
  size into mp_image.

  I'm not really sure how ID3D11VideoProcessorInputView is supposed to
  work. We recreate it on every frame, which is simple and hopefully
  doesn't affect performance.

* vd_lavc: simplify RPI and Mediacodec wrappers (wm4, 2016-04-25; 1 file, -2/+0)
  Use the recently added lavc_suffix mechanism to select the wrapper
  decoder. With all hwdec callbacks being optional, and RPI/Mediacodec
  having only dummy callbacks, all the callbacks can be removed as well.

  The result is that the vd_lavc_hwdec struct for both of them is tiny.
  It's better to move them to vd_lavc.c directly, because they are so
  trivial and small.

* vd_lavc: add d3d11va hwdec (Kevin Mitchell, 2016-03-30; 1 file, -0/+3)
  This commit adds the d3d11va-copy hwdec mode using the ffmpeg d3d11va
  api. Functions in common with dxva2 are handled in a separate
  decode/d3d.c file. A future commit will rewrite decode/dxva2.c to share
  this code.

* Add a mediacodec decoder hwdec wrapper (Jan Ekström, 2016-03-25; 1 file, -0/+1)
  Does the same thing as the rpi one - makes fallback possible by
  pretending that h264_mediacodec is a hwdec.

* ipc: add Windows implementation with named pipes (James Ross-Gowan, 2016-03-23; 1 file, -1/+3)
  This implements the JSON IPC protocol with named pipes, which are
  probably the closest Windows equivalent to Unix domain sockets in terms
  of functionality. Like with Unix sockets, this will allow mpv to listen
  for IPC connections and handle multiple IPC clients at once. A few
  cross platform libraries and frameworks (Qt, node.js) use named pipes
  for IPC on Windows and Unix sockets on Linux and Unix, so hopefully this
  will ease the creation of portable JSON IPC clients.

  Unlike the Unix implementation, this doesn't share code with
  --input-file, meaning --input-file on Windows won't understand JSON
  commands (yet.) Sharing code and removing the separate implementation in
  pipe-win32.c is definitely a possible future improvement.

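  For readers unfamiliar with the Win32 side, a single-client named-pipe
  server reduces to roughly the following; the pipe name, buffer sizes and
  the reply string are placeholders, and the real implementation is
  asynchronous and handles multiple clients:

      #include <windows.h>
      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          // Named pipes live under the \\.\pipe namespace; the name is a placeholder.
          HANDLE pipe = CreateNamedPipeW(L"\\\\.\\pipe\\mpv-ipc-example",
                  PIPE_ACCESS_DUPLEX,                              // read and write, socket-like
                  PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
                  PIPE_UNLIMITED_INSTANCES,                        // allow several clients
                  4096, 4096, 0, NULL);
          if (pipe == INVALID_HANDLE_VALUE) {
              fprintf(stderr, "CreateNamedPipeW failed: %lu\n", GetLastError());
              return 1;
          }

          // Block until a client connects, then echo one request/response pair.
          if (ConnectNamedPipe(pipe, NULL) || GetLastError() == ERROR_PIPE_CONNECTED) {
              char buf[4096];
              DWORD n = 0;
              if (ReadFile(pipe, buf, sizeof(buf) - 1, &n, NULL)) {
                  buf[n] = '\0';
                  printf("received: %s\n", buf);
                  const char *reply = "{\"error\":\"success\"}\n";  // illustrative reply
                  DWORD written = 0;
                  WriteFile(pipe, reply, (DWORD)strlen(reply), &written, NULL);
              }
          }

          DisconnectNamedPipe(pipe);
          CloseHandle(pipe);
          return 0;
      }
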
* build: be less strict about line endings (wm4, 2016-03-11; 1 file, -2/+2)
  This is a shitty hack, but also not terribly offensive.

* vo_opengl: add dxva2 interop to angle backend (Kevin Mitchell, 2016-03-10; 1 file, -0/+1)
  Like dxinterop, this uses StretchRect or RGB conversion. This is
  unavoidable as long as we use the dxva2 API, as there is no way to
  access the raw hardware decoded Direct3D9 surfaces.

* build: install symbolic SVG icon (Jashandeep Sohi, 2016-03-10; 1 file, -0/+4)

* demux: add null demuxer (wm4, 2016-03-04; 1 file, -0/+1)
  It's useless, but can be used for fancy --lavfi-complex nonsense.

* ao: initial OpenSL ES support (Ilya Zhuravlev, 2016-02-27; 1 file, -0/+1)
  OpenSL ES is used on Android. At the moment only stereo output is
  supported. Two options are supported: 'frames-per-buffer' and
  'sample-rate'. To get better latency the user of libmpv should pass
  values obtained from
  AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER) and
  AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE).

* wscript: don’t install the encoding profiles with encoding disabled (Emmanuel Gil Peyrot, 2016-02-22; 1 file, -1/+2)

* wscript: remove dxva2-dxinterop configure test (Kevin Mitchell, 2016-02-17; 1 file, -1/+1)
  Wasn't really necessary as it was equivalent to gl-dxinterop.

* vo_opengl: dxinterop: add dxva2 passthrough (Kevin Mitchell, 2016-02-17; 1 file, -0/+1)
  Use dxva2 surface to fill RGB IDirect3DSurface9 shared with opengl via
  DXRegisterObjectNV.

* Rewrite ordered chapters and timeline stuff (wm4, 2016-02-15; 1 file, -0/+1)
  This uses a different method to piece segments together. The old
  approach basically changes to a new file (with a new start offset) any
  time a segment ends. This meant waiting for audio/video end on segment
  end, and then changing to the new segment all at once. It had a very
  weird impact on the playback core, and some things (like truly gapless
  segment transitions, or frame backstepping) just didn't work.

  The new approach adds the demux_timeline pseudo-demuxer, which presents
  a uniform packet stream from the many segments. This is pretty similar
  to how ordered chapters are implemented everywhere else. It is also
  reminiscent of the FFmpeg concat pseudo-demuxer.

  The "pure" version of this approach doesn't work though. Segments can
  actually have different codec configurations (different extradata), and
  subtitles are most likely broken too. (Subtitles have multiple corner
  cases which break the pure stream-concatenation approach completely.)
  To counter this, we do two things:

  - Reinit the decoder with each segment. We go as far as allowing
    concatenating files with completely different codecs for the sake of
    EDL (which also uses the timeline infrastructure). A "lighter"
    approach would try to make use of decoder mechanisms to update e.g.
    the extradata, but that seems fragile.
  - Clip decoded data to segment boundaries. This is equivalent to normal
    playback core mechanisms like hr-seek, but now the playback core
    doesn't need to care about these things. (A sketch of this idea
    follows after this entry.)

  These two mechanisms are equivalent to what happened in the old
  implementation, except they don't happen in the playback core anymore.
  In other words, the playback core is completely relieved from timeline
  implementation details. (Which honestly is exactly what I'm trying to do
  here. I don't think ordered chapter behavior deserves improvement, even
  if it's bad - but I want to get it out from the playback core.)

  There is code duplication between audio and video decoder common code.
  This is awful and could be shareable - but this will happen later.

  Note that the audio path has some code to clip audio frames for the
  purpose of codec preroll/gapless handling, but it's not shared as
  sharing it would cause more pain than it would help.

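  The "clip decoded data to segment boundaries" idea in isolation, as a
  hedged sketch with hypothetical frame/segment structs (not the code this
  commit adds):

      #include <stdbool.h>

      // Hypothetical decoded frame and timeline segment, reduced to timestamps
      // in the global (timeline) timebase.
      struct frame   { double pts, duration; };
      struct segment { double start, end; };

      // Decide what happens to a decoded frame relative to the current segment:
      // drop it if it lies entirely outside, shorten it if it straddles the
      // segment end - much like hr-seek trimming, but outside the playback core.
      static bool clip_frame_to_segment(struct frame *f, const struct segment *seg)
      {
          if (f->pts + f->duration <= seg->start)
              return false;                        // entirely before: drop
          if (f->pts >= seg->end)
              return false;                        // entirely after: drop
          if (f->pts + f->duration > seg->end)
              f->duration = seg->end - f->pts;     // straddles the end: clip
          return true;                             // keep (possibly shortened)
      }
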
* dxva2: use mp_image pool for d3d surfaces (Kevin Mitchell, 2016-02-14; 1 file, -0/+1)
  This is required so that the individual surfaces can pass beyond the
  dxva2 decoder and be passed to the vo.

  This also adds additional data to mp_image->planes[0] for IMGFMT_DXVA2,
  which is required for maintaining and releasing the surface even if the
  decoder code is uninited.

  The IDirectXVideoDecoder itself is encapsulated together with its
  surface pool and configuration in a dxva2_decoder structure whose
  creation and destruction is managed by talloc.

* wscript_build: disable SONAME generation when building for Android (Jan Ekström, 2016-02-10; 1 file, -11/+18)
  Android is well-known for not supporting SONAME'd libraries. All
  libraries imported into an APK have to end with '.so'.

* Initial Android support (Jan Ekström, 2016-02-10; 1 file, -0/+1)
  - Adds an 'android' feature, which is automatically detected.
  - Android has a broken strnlen, so a wrapper is added from FreeBSD.

* player: add complex filter graph support (wm4, 2016-02-05; 1 file, -0/+1)
  See --lavfi-complex option.

  This is still quite rough. There's no support for dynamic configuration
  of any kind. There are probably corner cases where playback might freeze
  or burn 100% CPU (due to dataflow problems when interacting with
  libavfilter).

  Future possible plans might include:
  - freely switch tracks by providing some sort of default track graph
    label
  - automatically enabling audio visualization
  - automatically mix audio or stack video when multiple tracks are
    selected at once (similar to how multiple sub tracks can be selected)

* build: make libavfilter mandatory (wm4, 2016-02-05; 1 file, -8/+8)
  The complex filter support that will be added makes much more complex
  use of libavfilter, and I'm not going to bother with adding hacks to
  keep libavfilter optional.

* build: fix mpv.conf installation (wm4, 2016-01-11; 1 file, -1/+1)

* ao_wasapi: move out some utility functions (wm4, 2016-01-11; 1 file, -0/+1)
  Note that hresult_to_str() (coming from wasapi_explain_err()) is mostly
  wasapi-specific, but since HRESULT error codes are unique, it can be
  extended for any other use.

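  For context, a generic HRESULT-to-string helper can lean on the system
  message tables; a small sketch (not the mpv helper itself) that falls
  back to the raw code when FormatMessage doesn't know the value:

      #include <windows.h>
      #include <stdio.h>

      // Best-effort HRESULT-to-text. A real helper (like the one this commit
      // talks about) would also carry a table of API-specific codes that the
      // system message table doesn't cover.
      static void hresult_to_text(HRESULT hr, char *buf, size_t size)
      {
          DWORD n = FormatMessageA(
                  FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
                  NULL, (DWORD)hr, 0, buf, (DWORD)size, NULL);
          if (n == 0)
              snprintf(buf, size, "unknown HRESULT 0x%08lx", (unsigned long)hr);
      }

      int main(void)
      {
          char buf[256];
          hresult_to_text(E_OUTOFMEMORY, buf, sizeof(buf));
          printf("E_OUTOFMEMORY: %s\n", buf);
          return 0;
      }
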
* ao_dsound: remove this audio output (wm4, 2016-01-06; 1 file, -1/+0)
  It existed for XP-compatibility only. There was also a time where
  ao_wasapi caused issues, but we're relatively confident that ao_wasapi
  works better than, or at least as well as, ao_dsound on Windows Vista
  and later.

* build: add option of html manual (Chris Mayo, 2016-01-05; 1 file, -0/+13)

* win32: input: use Vista CancelIoEx (James Ross-Gowan, 2015-12-20; 1 file, -1/+1)
  libwaio was added due to the complete inability to cancel synchronous
  I/O cleanly using the public Windows API in Windows XP. Even calling
  TerminateThread on the thread performing I/O was a bad solution, because
  the TerminateThread function in XP would leak the thread's stack.

  In Vista and up, however, this is no longer a problem. CancelIoEx can
  cancel synchronous I/O running on other threads, allowing the thread to
  exit cleanly, so replace libwaio usage with native Vista API functions.

  It should be noted that this change also removes the hack added in
  8a27025 for preventing a deadlock that only seemed to happen in Windows
  XP. KB2009703 says that Vista and up are not affected by this, due to a
  change in the implementation of GetFileType, so the hack should not be
  needed anymore.

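  A minimal sketch of the CancelIoEx pattern the commit relies on; the
  pipe and thread layout here are purely illustrative:

      #include <windows.h>
      #include <stdio.h>

      static HANDLE pipe_rd, pipe_wr;

      static DWORD WINAPI reader_thread(void *arg)
      {
          (void)arg;
          char buf[256];
          DWORD n = 0;
          // Blocking, synchronous read. On XP there was no clean way to
          // interrupt this from another thread, hence libwaio.
          if (!ReadFile(pipe_rd, buf, sizeof(buf), &n, NULL) &&
              GetLastError() == ERROR_OPERATION_ABORTED)
              printf("read cancelled cleanly\n");
          return 0;
      }

      int main(void)
      {
          CreatePipe(&pipe_rd, &pipe_wr, NULL, 0);
          HANDLE t = CreateThread(NULL, 0, reader_thread, NULL, 0, NULL);

          Sleep(1000);
          // Vista and later: cancel the pending I/O on this handle, even though
          // another thread issued it synchronously. That thread then returns
          // from ReadFile and exits normally instead of being terminated.
          CancelIoEx(pipe_rd, NULL);

          WaitForSingleObject(t, INFINITE);
          CloseHandle(t);
          CloseHandle(pipe_rd);
          CloseHandle(pipe_wr);
          return 0;
      }
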
* vo_opengl: prefix per-backend source files with context_ (wm4, 2015-12-19; 1 file, -9/+9)

* vo_opengl: split backend code from common.c to context.c (wm4, 2015-12-19; 1 file, -0/+1)
  Now common.c only contains the code for the function loader, while
  context.c contains the backend loader/dispatcher. Not calling it
  "backend.c", because the central struct is called MPGLContext.

* vo_opengl: x11egl: retrieve framebuffer depth (wm4, 2015-12-19; 1 file, -0/+1)
  This is used for dithering, although I'm not aware of anyone who got
  higher than 8 bit depth support to work on Linux.

  Also put this into egl_helpers.c. Since EGL is pseudo-portable at best I
  have no hope that the EGL context creation code in all the backends can
  be fully shared. But some self-contained functionality can definitely be
  shared.

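  Querying the framebuffer depth in EGL comes down to reading attributes
  of the chosen config; a small sketch assuming a display and config
  already exist:

      #include <EGL/egl.h>

      // Query the per-channel bit depth of an already-chosen EGLConfig. The
      // renderer can use the result to pick a dither depth (typically 8 on a
      // desktop framebuffer).
      static int egl_framebuffer_depth(EGLDisplay display, EGLConfig config)
      {
          EGLint r = 0, g = 0, b = 0;
          if (!eglGetConfigAttrib(display, config, EGL_RED_SIZE, &r) ||
              !eglGetConfigAttrib(display, config, EGL_GREEN_SIZE, &g) ||
              !eglGetConfigAttrib(display, config, EGL_BLUE_SIZE, &b))
              return 0;   // unknown; the caller falls back to a default
          // Treat the smallest channel as the effective depth for dithering.
          int depth = r < g ? r : g;
          return depth < b ? depth : b;
      }
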
* sub: rename sd_lavc_conv.c to lavc_conv.c (wm4, 2015-12-18; 1 file, -1/+1)
  The previous commit turned sd_lavc_conv from a sd_driver to
  free-standing functions. Do the rename to reflect this change separately
  to avoid confusing git's content tracking. (Or did git solve this,
  making separating renames and content changes unnecessary?)

* sub: remove sd_srt.c (wm4, 2015-12-15; 1 file, -1/+0)
  The FFmpeg subtitle converter does the same. There used to be some
  deficiencies in FFmpeg's code, but it seems at least some of them have
  been fixed. There also used to be the timestamp issue (see previous
  commit messages), but this doesn't matter anymore. So no reason to keep
  this code - get rid of it.

* sub: remove sd_microdvd.c (wm4, 2015-12-15; 1 file, -1/+0)
  This can be dropped for the same reasons as in the previous commits. It
  removes MicroDVD conversion support on Libav, although MicroDVD files
  couldn't be read in the first place ever since demux_subreader.c was
  removed.

* sub: remove sd_lavf_srt.c (wm4, 2015-12-15; 1 file, -1/+0)
  This restored timestamps when demuxing srt subtitles in Libav, which was
  important for avoiding slightly overlapping subtitles. Since the way
  this works was changed, there is no real reason to maintain proper
  timestamps anymore on this level - this can be dropped without issues.

* sub: remove sd_movtext.c (wm4, 2015-12-15; 1 file, -1/+0)
  libavcodec's movtext-to-ass converter does the same and has more
  features. On Libav, this commit disables mp4 subtitle display.

* vo_opengl: add dxinterop backend (James Ross-Gowan, 2015-12-11; 1 file, -0/+1)
  WGL_NV_DX_interop is widely supported by Nvidia and AMD drivers. It
  allows a texture to be shared between Direct3D and WGL, so that
  rendering can be done with WGL and presentation can be done with
  Direct3D.

  This should allow us to work around some persistent WGL issues, such as
  dropped frames with some driver/OS combos, drivers that buffer frames to
  increase performance at the cost of latency, and the inability to
  disable exclusive fullscreen mode when using WGL to render to a
  fullscreen window. The addition of a DX_interop backend might also
  enable some cool Direct3D-specific enhancements in the future, such as
  using the GetPresentStatistics API to get accurate frame presentation
  timestamps.

  Note that due to a driver bug, this backend is currently broken on
  Intel. It will appear to work as long as the window is not resized too
  often, but after a few changes of size it will be unable to share the
  newly created renderbuffer with GL. See:
  https://software.intel.com/en-us/forums/graphics-driver-bug-reporting/topic/562051

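  The heart of WGL_NV_DX_interop is a register/lock/unlock cycle around a
  shared Direct3D object; a heavily condensed, hedged sketch (the entry
  points would be loaded with wglGetProcAddress, and all setup, cleanup
  and error handling are omitted):

      #include <windows.h>
      #include <GL/gl.h>

      // Function pointer types for the WGL_NV_DX_interop entry points; normally
      // these come from <GL/wglext.h>.
      typedef HANDLE (WINAPI *PFN_wglDXOpenDeviceNV)(void *dxDevice);
      typedef HANDLE (WINAPI *PFN_wglDXRegisterObjectNV)(HANDLE hDevice, void *dxObject,
                                                         GLuint name, GLenum type, GLenum access);
      typedef BOOL   (WINAPI *PFN_wglDXLockObjectsNV)(HANDLE hDevice, GLint count, HANDLE *hObjects);
      typedef BOOL   (WINAPI *PFN_wglDXUnlockObjectsNV)(HANDLE hDevice, GLint count, HANDLE *hObjects);

      #define WGL_ACCESS_READ_ONLY_NV 0x0000

      // Share a D3D surface with GL and sample from it for one frame. The D3D
      // device/surface and the GL texture are assumed to exist already.
      static void draw_one_frame(PFN_wglDXOpenDeviceNV dx_open,
                                 PFN_wglDXRegisterObjectNV dx_register,
                                 PFN_wglDXLockObjectsNV dx_lock,
                                 PFN_wglDXUnlockObjectsNV dx_unlock,
                                 void *d3d_device, void *d3d_surface, GLuint texture)
      {
          HANDLE device = dx_open(d3d_device);   // once per GL context in practice
          HANDLE object = dx_register(device, d3d_surface, texture,
                                      GL_TEXTURE_2D, WGL_ACCESS_READ_ONLY_NV);

          dx_lock(device, 1, &object);    // GL may now read the surface via `texture`
          /* ... draw with `texture` bound ... */
          dx_unlock(device, 1, &object);  // hand the surface back to Direct3D
      }
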
* demux: remove old subtitle parser (wm4, 2015-12-10; 1 file, -1/+0)
  All of these are supported by FFmpeg now. It was disabled by default too
  (with FFmpeg). If compiled against Libav, mpv will lose the ability to
  read some subtitle formats (but the most important ones, srt and ass,
  still should work).

* stream: drop PVR support (wm4, 2015-12-10; 1 file, -1/+0)
  This is only for specific Hauppauge cards. According to the comments in
  […], it is unclear who is actively using this feature. Get it out of the
  way.

  Anyone who still wants to use this should complain. Keeping this code
  would not cause terribly much additional work, and it could be restored
  again. (But not if the request comes months later.)

* build: install scalable svg icon as well (Christian Hesse, 2015-12-01; 1 file, -0/+4)

* vo_opengl: win32: test for exclusive mode (James Ross-Gowan, 2015-11-26; 1 file, -0/+1)
  This is a hack, but unfortunately the DwmGetCompositionTimingInfo
  heuristic does not work in all cases (with multiple monitors on Windows
  8.1 and even with a single monitor in Windows 10.) See the comment in
  mp_w32_is_in_exclusive_mode() for more details.

  It should go without saying that if any better method of doing this
  reveals itself, this hack should be dropped.

* vo_opengl: add initial ANGLE support (James Ross-Gowan, 2015-11-18; 1 file, -0/+1)
  ANGLE is a GLES2 implementation for Windows that uses Direct3D 11 for
  rendering, enabling vo_opengl to work on systems with poor OpenGL
  drivers and bypassing some of the problems with native GL, such as VSync
  in fullscreen mode.

  Unfortunately, using GLES2 means that most of vo_opengl's advanced
  features will not work, however ANGLE is under rapid development and
  GLES3 support is supposed to be coming soon.

* demux_libass: remove this demuxer (wm4, 2015-11-11; 1 file, -1/+0)
  This loaded external .ass files via libass. libavformat's .ass reader is
  now good enough, so use that instead.

  Apparently libavformat still doesn't support fonts embedded into text
  .ass files, but support for this has been accidentally broken in mpv for
  a while anyway. (And only 1 person complained.)

* vo_opengl: add DRM EGL backend (rr-, 2015-11-08; 1 file, -0/+1)
  Notes:

  - Unfortunately the only way to talk to EGL from within DRM I could find
    involves linking with GBM (generic buffer management for Mesa.)
    Because of this, I'm pretty sure it won't work with proprietary NVidia
    drivers, but then again, last time I checked NVidia didn't offer
    proper screen resolution for VT.
  - VT switching doesn't seem to work at all. It's worth mentioning that
    using vo_drm before introduction of VT switcher had an anomaly where
    user could switch to another VT and input text to it, while video
    played on top of that VT. However, that isn't the case with drm_egl: I
    can't switch to other VT during playback like this. This makes me
    think that it's either a limitation coming from my firmware or from
    EGL/KMS itself rather than a bug with my code. Nonetheless, I still
    left (untestable) VT switching code in place, in case it's useful to
    someone else.
  - The mode_id, connector_id and device_path should be configurable for
    power users and people who wish to watch videos on nonprimary screen.
    Unfortunately I didn't see anything that would allow OpenGL backends
    to register their own set of options. At the same time, adding them to
    global namespace is pointless.
  - A few dozens of lines could be shared with vo_drm (setting up VT
    switching, most of code behind page flipping). I don't have any strong
    opinion on this.
  - Sometimes I get minor visual glitches. I'm not sure if there's a race
    condition of some sort, unitialized variable (doubtful), or if it's
    buggy driver. (I'm using integrated Intel HD Graphics 4400 with Mesa)
  - .config and .control are very minimal.

  Signed-off-by: wm4 <wm4@nowhere>

* w32: use DisplayConfig API to retrieve correct monitor refresh rate (James Ross-Gowan, 2015-11-06; 1 file, -0/+1)
  This is based on an older patch by James Ross-Gowan. It was rebased and
  cleaned up. Also, the DWM API usage present in the older patch was
  removed, because DWM reports nonsense rates at least on Windows 8.1
  (they are rounded to integers, just like with the old GDI API - except
  the GDI API had a good excuse, as it could report only integers).

  Signed-off-by: wm4 <wm4@nowhere>

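  A rough sketch of the DisplayConfig query; matching a path to a specific
  monitor is omitted, this only prints each active path's refresh rate as
  the exact rational the API reports:

      #include <windows.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(void)
      {
          UINT32 num_paths = 0, num_modes = 0;
          if (GetDisplayConfigBufferSizes(QDC_ONLY_ACTIVE_PATHS,
                                          &num_paths, &num_modes) != ERROR_SUCCESS)
              return 1;

          DISPLAYCONFIG_PATH_INFO *paths = calloc(num_paths, sizeof(*paths));
          DISPLAYCONFIG_MODE_INFO *modes = calloc(num_modes, sizeof(*modes));

          if (QueryDisplayConfig(QDC_ONLY_ACTIVE_PATHS, &num_paths, paths,
                                 &num_modes, modes, NULL) == ERROR_SUCCESS) {
              for (UINT32 i = 0; i < num_paths; i++) {
                  // Unlike the old GDI query, the rate is a rational, so e.g.
                  // 59.94 Hz is not silently rounded to 59 or 60.
                  DISPLAYCONFIG_RATIONAL rate = paths[i].targetInfo.refreshRate;
                  if (rate.Denominator)
                      printf("path %u: %.3f Hz (%u/%u)\n", (unsigned)i,
                             (double)rate.Numerator / rate.Denominator,
                             (unsigned)rate.Numerator, (unsigned)rate.Denominator);
              }
          }

          free(paths);
          free(modes);
          return 0;
      }
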
* vo_opengl: implement NNEDI3 prescaler (Bin Jin, 2015-11-05; 1 file, -0/+5)
  Implement NNEDI3, a neural network based deinterlacer. The shader is
  reimplemented in GLSL and supports both 8x4 and 8x6 sampling window now.
  This allows the shader to be licensed under LGPL2.1 so that it can be
  used in mpv.

  The current implementation supports uploading the NN weights (up to 51kb
  with placebo setting) in two different ways, via uniform buffer object
  or hard coding into shader source. UBO requires OpenGL 3.1, which only
  guarantees 16kb per block. But I find that 64kb seems to be a default
  setting for recent card/driver (which nnedi3 is targeting), so I think
  we're fine here (with default nnedi3 setting the size of weights is
  9kb).

  Hard-coding into shader requires OpenGL 3.3, for the "intBitsToFloat()"
  built-in function. This is necessary to precisely represent these
  weights in GLSL. I tried several human readable floating point number
  formats (with really high precision as for single precision float), but
  for some reason they are not working nicely; bad pixels (with NaN value)
  could be produced with some weights set.

  We could also add support to upload these weights with texture, just for
  compatibility reasons (e.g. upscaling a still image with a low end
  graphics card). But as I tested, it's rather slow even with 1D texture
  (we probably had to use 2D texture due to dimension size limitation).
  Since there is always a better choice to do NNEDI3 upscaling for still
  images (vapoursynth plugin), it's not implemented in this commit. If
  this turns out to be a popular demand from the users, it should be easy
  to add it later.

  For those who want to optimize the performance a bit further, the
  bottlenecks seem to be:
  1. overhead to upload and access these weights (in particular, the
     shader code will be regenerated for each frame; it's on CPU though).
  2. "dot()" performance in the main loop.
  3. "exp()" performance in the main loop; there are various fast
     implementations with some bit tricks (probably with the help of the
     intBitsToFloat function).

  The code is tested with nvidia card and driver (355.11), on Linux.

  Closes #2230

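  The "hard coding into shader source" path is easy to picture: each
  weight is emitted as intBitsToFloat(<raw bits>), so the GLSL compiler
  reproduces the trained float bit-exactly. A tiny generator sketch (not
  the actual shader code):

      #include <inttypes.h>
      #include <stdio.h>
      #include <string.h>

      // Emit a GLSL array initializer where every float is written as its raw
      // bit pattern, avoiding any decimal round-tripping of the weights.
      static void emit_weights_glsl(const float *weights, int n)
      {
          printf("const float weights[%d] = float[](\n", n);
          for (int i = 0; i < n; i++) {
              uint32_t bits;
              memcpy(&bits, &weights[i], sizeof(bits));   // safe type-pun
              printf("    intBitsToFloat(%" PRId32 ")%s\n",
                     (int32_t)bits, i + 1 < n ? "," : "");
          }
          printf(");\n");
      }

      int main(void)
      {
          const float demo[4] = {0.123456789f, -1.0f, 3.14159265f, 1e-8f};
          emit_weights_glsl(demo, 4);
          return 0;
      }
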
* vo_opengl: add Super-xBR filter for upscaling (Bin Jin, 2015-11-05; 1 file, -0/+1)
  Add the Super-xBR filter for image doubling, and the prescaling
  framework to support it.

  The shader code was ported from MPDN extensions project, with
  modification to process luma only.

  This commit is largely inspired by code from #2266, with
  `gl_transform_trans()` authored by @haasn taken directly.

* build: remove explicit checks for VPP (wm4, 2015-10-17; 1 file, -1/+1)
  This was once done when old versions of libva without VPP had to be
  supported. Some parts of it were removed earlier already.

* Revert "vo_x11: remove this video output" (wm4, 2015-09-30; 1 file, -0/+1)
  This reverts commit d11184a256ed709a03fa94a4e3940eed1b76d76f.

  Unfortunately, there was a lot of unexpected resistance.

  Do note that this is still extremely slow, crappy, etc.

  Note that vo_x11.c was further edited. Compared to the removed vo_x11.c,
  an additional ~200 lines of code was removed in order to simplify it. I
  tried to strip it down as much as possible. In particular, support for
  odd non-32 bit formats (24, 16, 15, 8 bit) is dropped.

  Closes #2300.

* vo_opengl: rename hwdec_vda.c to hwdec_osx.c (wm4, 2015-09-28; 1 file, -1/+1)
  It doesn't deal with VDA at all anymore. Rename it to hwdec_osx.c. Not
  using hwdec_videotoolbox.c, because that would gi[…]