* sub/lavc_conv: skip ReadOrder reset when subtitle decoder gets flushed (Jan Ekström, 2019-09-21; 1 file, -0/+1)

  During initial testing with US closed captions, ARIB captions, timed text in MP4 or the specific external SRT files I tested with, there were no hints that this flag would be needed for seeking to work. Unfortunately, that result seems to have been incorrect. Fixes #6970.

* dec_sub: remove unused declaration (wm4, 2019-09-21; 1 file, -1/+0)

* input: add keybind command (Dudemanguy911, 2019-09-21; 4 files, -0/+65)

* playloop: don't read playback pos from byte stream (Dudemanguy911, 2019-09-21; 1 file, -1/+1)

  If a file format supports PTS resets, get_current_pos_ratio calculates the ratio using the stored filepos in the demuxer struct instead of a standard query of the current time in the stream and its total length. This seems like a reasonable way to avoid weird PTS values, but in reality it just causes problems and results in inaccurate ratio values that can affect other parts of the player (most notably the osc seekbar).

  For libavformat formats, demuxer->filepos is obtained from the demux_lavf_fill_buffer function, which is called on the next packet. The problem is that there is a slight delay between packets, and in some cases this delay can be relatively large. That means the obtained demuxer->filepos value will be very inaccurate, since it reflects the position at the end of the upcoming packet and not the actual current position. This is especially noticeable at the very beginning of playback, where get_current_pos_ratio sometimes returns a value of well over 2% despite less than a second having passed in the stream. Another telltale sign is to simply watch the osc seekbar as a stream progresses and observe how it advances in staggered steps as every packet is decoded. In contrast, the seekbar progresses smoothly during playback of a format that does not support PTS resets.

  The simple solution is to instead query the current time and length of the stream and calculate the ratio from those. get_current_pos_ratio still falls back on the byte stream position if those queries fail, but get_current_time will be more accurate in the vast majority of cases and should be the preferred method of calculating the position ratio.

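  A minimal sketch of the preferred calculation described above, with the byte-position path kept only as a fallback; the parameter names are placeholders, not mpv's exact internal API:

  ```c
  #include <stdint.h>

  // Derive the position ratio from playback time and total duration, and only
  // fall back to byte positions if those are unknown.
  static double pos_ratio(double cur_time, double duration,
                          int64_t file_pos, int64_t file_size)
  {
      if (duration > 0 && cur_time >= 0)
          return cur_time / duration;            // accurate, updates smoothly
      if (file_size > 0 && file_pos >= 0)
          return file_pos / (double)file_size;   // old behavior: staggered steps
      return -1;                                 // unknown
  }
  ```
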
* wayland: avoid handling a 0-value axis event (Dudemanguy911, 2019-09-21; 1 file, -0/+2)

  This shouldn't be possible, but an extra check never hurts.

* player: expose pixel aspect ratio, bitrate and rotation value on tracks (wnoun, 2019-09-21; 3 files, -0/+21)

* DOCS/contribute.md: expand on recommended C99 usage (wm4, 2019-09-21; 1 file, -1/+1)

* build: don't add swift object when swift disabled (der richter, 2019-09-21; 1 file, -2/+6)

* wayland: read xcursor size from XCURSOR_SIZE env (emersion, 2019-09-21; 1 file, -1/+13)

  This allows compositors to set the cursor size from user configuration.

* player: use track title if exists instead of filename (thewisenerd, 2019-09-21; 1 file, -1/+5)

* input: ignore empty lines on drag-drop mime data (thewisenerd, 2019-09-21; 1 file, -1/+1)

  Do not outright error out and quit the player because of this.

* ao_oss: Fallback to stereo when the device does not support >2 channels (Leonardo Taccari, 2019-09-21; 1 file, -6/+10)

  ioctl(..., SNDCTL_DSP_CHANNELS, &nchannels) with an unsupported nchannels does not return an error; it instead sets nchannels to the default value. Instead of failing with no audio, fall back to stereo.

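  A minimal sketch of that probing logic, assuming an already-opened OSS device fd; the helper name and surrounding structure are illustrative, not mpv's actual ao_oss code:

  ```c
  #include <sys/ioctl.h>
  #include <sys/soundcard.h>

  // Try to set the requested channel count. OSS may silently substitute its
  // default instead of failing, so fall back to stereo when that happens.
  static int set_channels_or_stereo(int fd, int requested)
  {
      int nchannels = requested;
      if (ioctl(fd, SNDCTL_DSP_CHANNELS, &nchannels) == -1)
          return -1;                    // real ioctl error
      if (nchannels != requested) {
          nchannels = 2;                // device didn't honor the request
          if (ioctl(fd, SNDCTL_DSP_CHANNELS, &nchannels) == -1 || nchannels != 2)
              return -1;
      }
      return nchannels;                 // channel count actually in effect
  }
  ```
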
* osd: allow sub-text to work even if sub-visibility is disabled (dudemanguy, 2019-09-21; 3 files, -5/+5)

* x11: fix ICC profiling for multiple monitors (slatchurie, 2019-09-21; 2 files, -2/+22)

  To find the correct ICC profile X atom, the screen number was calculated directly from the xrandr order of the screens. But if a primary screen is set, it should be the first Xinerama screen, even if it is not the first xrandr screen. Calculate the proper atom ID for each screen.

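  For reference, the per-screen atom naming follows the "ICC Profiles in X" convention: screen 0 uses _ICC_PROFILE and further screens get a numeric suffix. A hedged sketch (the helper is illustrative, not mpv's x11_common code):

  ```c
  #include <stdio.h>
  #include <X11/Xlib.h>

  // Build the ICC profile atom for a given Xinerama screen number:
  // "_ICC_PROFILE" for screen 0, "_ICC_PROFILE_1", "_ICC_PROFILE_2", ... otherwise.
  static Atom icc_profile_atom(Display *dpy, int xinerama_screen)
  {
      char name[32];
      if (xinerama_screen <= 0)
          snprintf(name, sizeof(name), "_ICC_PROFILE");
      else
          snprintf(name, sizeof(name), "_ICC_PROFILE_%d", xinerama_screen);
      return XInternAtom(dpy, name, False);
  }
  ```
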
* osc: add mouse mid-button as alias to shift+left button (Ricardo Constantino, 2019-09-21; 1 file, -0/+3)

* wayland: don't show cursor when fullscreening (dudemanguy, 2019-09-21; 2 files, -0/+7)

* wayland: reconfigure cursor on pointer enter event (Thomas Weißschuh, 2019-09-21; 2 files, -1/+4)

  On Wayland the cursor has to be configured each time the pointer enters the surface. Currently, if the window (re)gains focus, the pointer is not hidden even when it is configured to be; only after the mouse has been moved does the pointer hide correctly.

  https://wayland.freedesktop.org/docs/html/apa.html#protocol-spec-wl_pointer:

      wl_pointer::enter - enter event ... When a seat's focus enters a surface,
      the pointer image is undefined and a client should respond to this event
      by setting an appropriate pointer image with the set_cursor request.

  Fixes #6185.

  Signed-off-by: Thomas Weißschuh <thomas@t-8ch.de>

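  A minimal sketch of the pattern the quoted protocol text asks for: (re)apply or hide the cursor from the wl_pointer enter handler, using that event's serial. The state struct and its fields are placeholders for mpv's actual wayland state:

  ```c
  #include <wayland-client.h>

  // Placeholder for the relevant bits of mpv's wayland state.
  struct wl_state_sketch {
      struct wl_surface *cursor_surface;  // surface with the cursor image attached
      int hotspot_x, hotspot_y;
      int cursor_hidden;
  };

  // wl_pointer_listener.enter handler: the protocol declares the pointer image
  // undefined on enter, so the cursor must be (re)configured here every time.
  static void pointer_handle_enter(void *data, struct wl_pointer *pointer,
                                   uint32_t serial, struct wl_surface *surface,
                                   wl_fixed_t sx, wl_fixed_t sy)
  {
      struct wl_state_sketch *wl = data;

      if (wl->cursor_hidden) {
          // A NULL surface hides the cursor for this pointer.
          wl_pointer_set_cursor(pointer, serial, NULL, 0, 0);
      } else {
          wl_pointer_set_cursor(pointer, serial, wl->cursor_surface,
                                wl->hotspot_x, wl->hotspot_y);
      }
  }
  ```
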
* demux_cue: auto-detect CUE sheet charset (wnoun, 2019-09-21; 5 files, -0/+38)

* command: add video-add/video-remove/video-reload commands (Paul B Mahol, 2019-09-21; 2 files, -0/+30)

* wayland: add mouse buttons and fix axis scaling (dudemanguy, 2019-09-21; 1 file, -4/+24)

  Previously, the only mouse buttons supported in wayland were left, right, and middle click. This adds the thumb back/forward buttons as valid bindings. It also removes the old default behavior of always sending a right click if an unrecognized mouse button is clicked.

  In a related but different fix: the magnitude of an axis event in wayland is not important to mpv, since it internally handles all scaling. The only thing we care about is getting the sign when the event occurs.

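  A hedged sketch of the button mapping: the BTN_* codes come from linux/input-event-codes.h, while the MBTN_* enum is a local stand-in for mpv's MP_MBTN_* key constants:

  ```c
  #include <stdint.h>
  #include <linux/input-event-codes.h>

  // Local stand-ins for mpv's MP_MBTN_* key constants.
  enum mbtn { MBTN_LEFT, MBTN_MID, MBTN_RIGHT, MBTN_BACK, MBTN_FORWARD, MBTN_NONE = -1 };

  // Map an evdev button code (as delivered by wl_pointer::button) to a mouse
  // button. Unrecognized buttons are now ignored instead of being reported as
  // a right click.
  static enum mbtn map_wayland_button(uint32_t button)
  {
      switch (button) {
      case BTN_LEFT:   return MBTN_LEFT;
      case BTN_MIDDLE: return MBTN_MID;
      case BTN_RIGHT:  return MBTN_RIGHT;
      case BTN_SIDE:   return MBTN_BACK;     // thumb "back" button
      case BTN_EXTRA:  return MBTN_FORWARD;  // thumb "forward" button
      default:         return MBTN_NONE;
      }
  }
  ```
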
* client API: add mpv_command_ret (Dark, 2019-09-21; 3 files, -0/+26)

  This change adds a version of `mpv_command` that also returns a result. The main rationale behind this is that `mpv_command_node` requires defining multiple structs before you can even use it, which results in a pretty painful-to-use interface just to get the result of a command. There isn't really a good name for this function, so I'm open to suggestions for a better one.

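  A rough usage sketch from the API user's point of view. The prototype assumed here mirrors mpv_command with an extra result output, matching the description above; check the client.h of a libmpv version containing this commit for the exact signature, and note that "expand-text" is just an arbitrary command that happens to return a string:

  ```c
  #include <stdlib.h>
  #include <mpv/client.h>

  // Run a command and read its result back, without building an mpv_node tree
  // of arguments as mpv_command_node would require.
  static double get_time_pos(mpv_handle *mpv)
  {
      const char *args[] = {"expand-text", "${=time-pos}", NULL};
      mpv_node result = {0};
      double t = -1;

      if (mpv_command_ret(mpv, args, &result) >= 0) {
          if (result.format == MPV_FORMAT_STRING)
              t = atof(result.u.string);
          mpv_free_node_contents(&result);
      }
      return t;
  }
  ```
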
* DOCS/contribute.md: license clarifications (wm4, 2019-09-21; 2 files, -0/+6)

  Basically, both the license of the file and the preferred license of the project (LGPLv2.1+) count. I'm doing this so that files with more liberal licenses don't get infected by the LGPL, but still allow copy & pasting into LGPL source files without jumping through lawyer bullshit hoops. Mention this in Copyright too.

* github: remove LGPL relicensing boilerplate (wm4, 2019-09-21; 1 file, -3/+2)

  Most files are LGPL anyway by now. contribute.md still requires contributors to provide changes under LGPL, which should cover changes to files that are LGPL, as well as GPL files.

* vo_sdl: Only create the SDL window once (Cameron Cawley, 2019-09-21; 1 file, -54/+23)

* ao_pulse: add --pulse-allow-suspended (Térence Clastres, 2019-09-21; 2 files, -1/+7)

  This flag makes mpv continue using the PulseAudio driver even if the sink is suspended. This can be useful if JACK is running with PulseAudio in bridge mode and the sink-input assigned to mpv is the one JACK controls, and is thus suspended. By forcing mpv to still use PulseAudio in this case, the user can now move it to an unsuspended sink.

* stream_libarchive: Always use LC_CTYPE_MASK for libarchive (James Hilliard, 2019-09-21; 2 files, -2/+1)

  Using LC_ALL_MASK is unnecessary and unreliable on some systems.

  Signed-off-by: James Hilliard <james.hilliard1@gmail.com>

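  For context, these masks select which locale categories newlocale() overrides; only LC_CTYPE matters for the character conversion done around libarchive calls. A hedged sketch of the narrower usage (mpv's actual wrapper code is not reproduced here):

  ```c
  #include <locale.h>

  // Create a locale object that only overrides the character-type category,
  // leaving every other category untouched. LC_ALL_MASK would override all
  // categories, which is unnecessary here and unreliable on some systems.
  static locale_t make_ctype_locale(void)
  {
      // "" = take LC_CTYPE from the environment (e.g. for UTF-8 handling).
      return newlocale(LC_CTYPE_MASK, "", (locale_t)0);
  }

  // Typical use around a libarchive call:
  //   locale_t old = uselocale(make_ctype_locale());
  //   ... call into libarchive ...
  //   uselocale(old);
  ```
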
* command: drop removed cache properties from cache update events (wm4, 2019-09-20; 1 file, -2/+2)

  These did nothing anymore; at most they made things slightly slower.

* player: update status line cache display (wm4, 2019-09-20; 2 files, -7/+5)

  Replace the "+" with "/". The "+" was supposed to imply that the cache is the sum of the time (demuxer cache) and the size in bytes (stream cache). We could not provide something nicer, because we had no idea how many seconds of media were buffered in the stream cache. Now the stream cache is gone, and both the duration and the byte size show the amount buffered in the demuxer cache. Hopefully "/" implies this better. Update the manpage explanations too.

* context_drm_egl: Use eglGetPlatformDisplayEXT if available (memeka, 2019-09-20; 1 file, -1/+20)

  Check if eglGetPlatformDisplayEXT is available and try to use it to obtain the display connection. Fall back to eglGetDisplay if eglGetPlatformDisplayEXT is not available or fails. From PR #5992.

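  A hedged sketch of that probe-and-fallback pattern for a GBM-backed DRM context. The extension-string check and the EGL_PLATFORM_GBM_KHR platform enum are assumptions based on the usual EGL_EXT_platform_base usage, not copied from mpv's context_drm_egl:

  ```c
  #include <string.h>
  #include <gbm.h>
  #include <EGL/egl.h>
  #include <EGL/eglext.h>

  // Obtain an EGLDisplay for a GBM device, preferring eglGetPlatformDisplayEXT.
  static EGLDisplay get_display_for_gbm(struct gbm_device *gbm)
  {
      const char *exts = eglQueryString(EGL_NO_DISPLAY, EGL_EXTENSIONS);
      if (exts && strstr(exts, "EGL_EXT_platform_base")) {
          PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
              (PFNEGLGETPLATFORMDISPLAYEXTPROC)
                  eglGetProcAddress("eglGetPlatformDisplayEXT");
          if (get_platform_display) {
              EGLDisplay dpy = get_platform_display(EGL_PLATFORM_GBM_KHR, gbm, NULL);
              if (dpy != EGL_NO_DISPLAY)
                  return dpy;
          }
      }
      // Fall back to the legacy entry point.
      return eglGetDisplay((EGLNativeDisplayType)gbm);
  }
  ```
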
* client API: fix missing property change events after property updates (Gunnar Marten, 2019-09-20; 1 file, -24/+28)

  When update_prop() successfully fetched a changed property value, it set prop->changed to true to indicate the success. mark_property_changed() also always set prop->changed to true, and additionally set prop->need_new_value to true. This has been the case since 6ac0ef78.

  If the observed property changes every frame, and due to timing the next mark_property_changed() is called before gen_property_change_event() and therefore directly after update_prop(), then prop->need_new_value was again true and indicated that a new value had to be retrieved with update_prop(). As a result, the event for the last successful update_prop() was never triggered. This meant that a property change event was never generated for frame-based properties, only for properties observed with MPV_FORMAT_NONE or when the timing was different and gen_property_change_event() was called after update_prop().

  To fix this, mark_property_changed() and update_prop() should not use the same flag to indicate different things, so a new flag for a successfully updated property is introduced. With the property-changed and property-updated states now decoupled, the need_new_value flag is redundant and removed completely.

  Fixes #4195.

* loadfile: restore playlist prefetching (wm4, 2019-09-20; 1 file, -4/+11)

  With the stream cache gone, this function had almost no use anymore (other than opening the stream). Improve this by triggering demuxer cache readahead.

  This enables all streams. At this point we can't know yet what streams the user's options would select (at least not without great additional effort). Generally this is what you want, and the stream cache would have read the same amount of data. In addition, this will work much better for files that e.g. need to seek to the end when opening (typically mp4, and mkv files produced by newer mkvmerge versions).

  Remove the deselection call from add_stream_track(). This should be fine, as streams normally start out as deselected anyway. In the prefetch case, some code in play_current_file() actually deselects it. Streams that appear during demuxing are disabled by default, so this doesn't break this logic either.

  Fixes: #6753

* demux: propagate streaming flag through demux_timeline (wm4, 2019-09-20; 3 files, -3/+10)

  Before this commit, EDL or CUE files did not properly enable the cache if they were on "slow" media (stream->streaming==true). This happened because the stream is unset for demux_timeline, so the streaming flag could not be queried anymore. Fix this by adding this flag to struct demuxer, and propagate it exactly like the is_network flag. is_network is not used for checking the cache options anymore, and its main function seems to be something else. Normal http streams set the streaming flag already.

  This should fix #6958.

* client API, vo_libmpv: document random deadlock problems (wm4, 2019-09-20; 2 files, -0/+20)

  I guess trying to make DR work on libmpv was a mistake. I never observed such a deadlock, but it looks like it's theoretically possible.

* vo_libmpv: fix some more uninit issues (wm4, 2019-09-20; 1 file, -24/+35)

  This is mostly for the case when mpv_render_context_free() is called while video is going on. This is supposed to gracefully stop video and deinitialize everything properly. (I feel like it would put too much on the API user to require that video is stopped before calling this function. Whether video is running or not is a fairly high-level thing, and the API user could not do it in a race-free way.)

  One problem was that uninit() accessed ctx after ctx->in_use was set to false. The update(ctx) call was basically a racy use-after-free. It needed that call to wake up the mpv_render_context_free() loop that waited for VO uninit. Fix this by triggering the wakeup inside the lock, and then doing "barrier" locking in mpv_render_context_free().

  Another problem was that the wait loop didn't really wait properly. It seems the had_kill_update field was a botched attempt to do that. It's indeed quite hairy to do that with update(). Instead, make use of the dispatch queue (infinite timeout, using mp_dispatch_interrupt()), which handles the problem of having to wait both for dispatch queue updates and VO uninit at the same time.

* client API: document unfortunate render API threading requirement (wm4, 2019-09-20; 1 file, -0/+6)

  This is because dr_helper.c calls pthread_self(). It's used to avoid deadlocks. This was not a problem internal to mpv (dr_helper.c was first created for vo_gpu.c), but it accidentally leaked as an unintended consequence of an implementation detail to libmpv. This problem has existed for MPV_RENDER_PARAM_ADVANCED_CONTROL users ever since it was introduced. Maybe this could be done differently, but it's certainly very tricky.

  The pthread_t is used to avoid deadlocks when the caller is on the same thread as the one which needs to handle the calls. The critical code is in free_dr_buffer_on_dr_thread(). It's called when a DR image is freed. Freeing a DR image requires access to the GL context, i.e. it just has to run on the "GL" thread. In libmpv's case, this is done by calling the API user's wakeup callback, and the user will eventually call mpv_render_context_update() on the "GL" thread, which in turn calls mp_dispatch_queue_process() on the dispatch queue that was passed to dr_helper_create(), which then calls av_buffer_unref(), which calls gl_video_dr_free_buffer(). (God, who came up with this rube goldberg shit.) The above case will typically happen when e.g. vd_lavc.c (an internal mpv thread) frees images. Allocation works similarly; deallocation is just trickier because calls to free images are everywhere.

  Now consider what happens if mpv_render_context_render() releases a DR image. This function is always called on the "GL" thread. Going through the dispatch queue would obviously deadlock, because according to the render API rules, the user's wakeup callback must not call mpv_render_context_update(); instead it has to signal the "GL" thread, which must wait until mpv_render_context_render() returns. To avoid this deadlock, dr_helper.c checks whether the calling thread is the "GL" thread, and if so, allows a reentrant call (basically gl_video_dr_free_buffer() is called directly).

  While with GL you usually stay on the same thread, API users were in theory allowed to e.g. move the GL context to a different thread. But this dr_helper issue means this is not possible. Moreover, other APIs will not have the same thread-locality problems as GL. FUCK THIS SHIT.

  In theory, libmpv could provide an API to "move" the context to a separate thread, but let's not start with even more broken crap like this. I'm not sure yet whether there is an easy solution. Maybe all mpv_render_*() functions could be guarded by entry/exit functions, which set/unset a flag that tells dr_helper.c to use reentrant calls.

  libmpv users which do not set MPV_RENDER_PARAM_ADVANCED_CONTROL were never affected.

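  A condensed, hedged sketch of the same-thread check described above; the struct, the dispatch stand-in and the callback are placeholders that only approximate dr_helper.c:

  ```c
  #include <pthread.h>

  struct dispatch_queue;  // stand-in for mpv's mp_dispatch_queue

  // Stand-in: run fn(data) on the queue's owning thread and wait until it has
  // completed there (mpv uses its mp_dispatch machinery for this).
  void dispatch_run_and_wait(struct dispatch_queue *q, void (*fn)(void *), void *data);

  struct dr_helper_sketch {
      pthread_t gl_thread;              // the "GL" thread recorded at creation
      struct dispatch_queue *dispatch;
  };

  static void free_on_gl_thread(void *buf) { (void)buf; /* needs the GL context */ }

  // If we are already on the GL thread, a reentrant direct call is safe (and
  // avoids the deadlock described above); otherwise hand the work to the GL
  // thread via the dispatch queue.
  static void dr_free_buffer(struct dr_helper_sketch *dr, void *buf)
  {
      if (pthread_equal(pthread_self(), dr->gl_thread))
          free_on_gl_thread(buf);
      else
          dispatch_run_and_wait(dr->dispatch, free_on_gl_thread, buf);
  }
  ```
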
* vo_libmpv: always create ctx->dispatch (wm4, 2019-09-20; 1 file, -19/+12)

  Preparation for the next commit. Until now, it was only needed if DR was involved. One reason for not always creating it was that you normally must not use it if advanced_control is not enabled. This is why e.g. VOCTRL_SCREENSHOT now checks for that variable; it still can't use ctx->dispatch if the render API user did not enable it.

* render api: fix use-after-free (wnoun, 2019-09-20; 3 files, -33/+6)

  The render API needs to wait for the VO to be destroyed before freeing the context. The purpose of kill_cb is to wake up the render API after the VO is destroyed, but uninit did that before kill_cb, so kill_cb tried to use the freed memory. Remove kill_cb to fix the issue, as uninit is able to do the work.

* rpi: Update for modern systems (Cameron Cawley, 2019-09-20; 5 files, -25/+14)

* Copyright: update list of files (wm4, 2019-09-20; 1 file, -13/+1)

* vo: remove unused equalizer control remains (wm4, 2019-09-20; 1 file, -14/+1)

  Equalizer control was redone in 03cf150ff3516789d5812 (over 2 years ago). Ever since, the equalizer control structs and the GET voctrl have been unused. Only the SET voctrl is still used as a notification mechanism (actually a bad hack to avoid some further option change handling complexity). Remove the unused parts.

* oml_sync: fix typo in comment (wm4, 2019-09-20; 1 file, -2/+2)

  I think... Also reword another part of the text.

* vf_fingerprint: use aligned_alloc instead of posix_memalign (wm4, 2019-09-19; 2 files, -2/+8)

  I was assuming posix_memalign was the most portable function to use, but MinGW does not provide it for some reason. Switch to C11 aligned_alloc(), which someone suggested was provided by MinGW (but actually isn't; they probably confused it with the incompatible _aligned_malloc), and add a configure check. Even though it turned out that MinGW doesn't provide it, the function is slightly more elegant than posix_memalign(), so stay with it.

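  A small hedged sketch of why the two allocators differ: C11 aligned_alloc() requires the size to be a multiple of the alignment, while posix_memalign() does not. HAVE_ALIGNED_ALLOC is a placeholder for whatever the configure check actually defines:

  ```c
  #include <stdlib.h>

  // Allocate `size` bytes aligned to `align` (a power of two). Round the size
  // up for aligned_alloc(); posix_memalign() has no such requirement and
  // serves as the fallback. Both variants are released with plain free().
  static void *alloc_aligned(size_t align, size_t size)
  {
  #if HAVE_ALIGNED_ALLOC            /* placeholder for the configure check */
      size_t rounded = (size + align - 1) / align * align;
      return aligned_alloc(align, rounded);
  #else
      void *ptr = NULL;
      if (posix_memalign(&ptr, align, size) != 0)
          return NULL;
      return ptr;
  #endif
  }
  ```
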
* demux_lavf: document intentional FFmpeg API violation (wm4, 2019-09-19; 1 file, -0/+4)

  This field is documented as internal, so an API user should not access it. However, this is the only way to get some read statistics without replacing FFmpeg's entire HLS demuxer. (Using custom I/O as a workaround doesn't work: the HLS code uses some weird internal APIs that cannot be provided by FFmpeg API users. I even got the author of the relevant patch to provide a public API, but it was shot down by another FFmpeg developer. So I take this as my right to access this field.)

  Mention this explicitly, as it affects ABI and API compatibility, and I don't want anyone to claim this was a "mistake". Add some explanations.

* packet: fix theoretical UB if called on "empty" packets (wm4, 2019-09-19; 1 file, -2/+4)

  In theory, a 0-size allocation could have made it memset() on a NULL pointer (with a non-0 size, which makes it crash in addition to the theoretical UB). This should never happen, since even packets with size 0 should have an associated allocation, as FFmpeg currently does. But avoiding this makes the API slightly more orthogonal and less tricky, I guess.

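  A tiny hedged sketch of the kind of guard meant here (the real packet code differs; clear_region is illustrative). memset() on a NULL pointer is undefined behavior even with a zero length, and crashes outright for a non-zero one:

  ```c
  #include <string.h>

  // Zero a packet's padding/tail region, tolerating packets without any
  // allocation at all.
  static void clear_region(unsigned char *data, size_t len)
  {
      if (data && len > 0)
          memset(data, 0, len);
      // else: nothing allocated or nothing to clear -- skip the call instead
      // of invoking memset(NULL, ...).
  }
  ```
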
* Revert "demux/packet: fix demux_packet_shorten" (wm4, 2019-09-19; 1 file, -2/+2)

  This reverts commit 95636c65e73c1d0d8cba43d8c230291d99962e88.

  This change shouldn't be needed, and in fact it's wrong. The FFmpeg API function could do anything it wants with the packet, including changing the packet data pointer. Likewise, it's not guaranteed that the referenced packet's fields mirror the current state of the mpv packet struct (the AVPacket is only kept for the AVBuffer and the side data stuff).

* client API: fix some comments (wm4, 2019-09-19; 1 file, -4/+4)

* sd_lavc: support scaling for bitmap subtitles (wm4, 2019-09-19; 1 file, -0/+16)

* demux: fix another incorrect BOF cache flag issue (wm4, 2019-09-19; 1 file, -2/+5)

* f_swscale: fix a typo (wm4, 2019-09-19; 1 file, -1/+1)

* manpage: input.rst: fix a typo (wm4, 2019-09-19; 1 file, -1/+1)

* video: add vf_fingerprint and a skip-logo script (wm4, 2019-09-19; 7 files, -0/+618)

  skip-logo.lua is just what I wanted to have. Explanations are at the top of that file. As usual, all documentation threatens to remove this stuff all the time, since this stuff is just for me, and unlike a normal user I can afford the luxury of hacking the shit directly into the player.

  vf_fingerprint is needed to support this script. It needs to scale down video frames as part of its operation. For that, it uses zimg. zimg is much faster than libswscale and generates more correct output. (The filter includes a runtime fallback, but it doesn't even work because libswscale fucks up and can't do YUV->Gray with range adjustment.)

  Note on the algorithm: it seems almost too simple, but it was suggested to me. It seems to be pretty effective, although long-term experience with false positives is missing. At first I wanted to use dHash [1][2], which is also pretty simple and effective, but it might actually be worse than the implemented mechanism. dHash has the advantage that the fingerprint is smaller. But exact matching is too unreliable, and you'd still need to determine the number of differing bits for fuzzier comparison. So there wasn't really a reason to use it.

  [1] https://pypi.org/project/dhash/
  [2] http://www.hackerfactor.com/blog/index.php?/archives/529-Kind-of-Like-That.html

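  The commit doesn't spell the algorithm out, so the following is only a generic illustration of comparing small grayscale fingerprints by mean absolute difference; it is not the code vf_fingerprint or skip-logo.lua actually use, and the 16x16 size is an assumption:

  ```c
  #include <stdlib.h>
  #include <stdint.h>

  #define FP_W 16
  #define FP_H 16

  // Compare two 16x16 grayscale fingerprints (one byte per pixel) by their
  // mean absolute difference; small values mean "probably the same picture".
  static double fingerprint_diff(const uint8_t a[FP_W * FP_H],
                                 const uint8_t b[FP_W * FP_H])
  {
      long sum = 0;
      for (int i = 0; i < FP_W * FP_H; i++)
          sum += labs((long)a[i] - (long)b[i]);
      return sum / (double)(FP_W * FP_H);
  }

  // A consumer script could then treat diff < some threshold as a match
  // against a stored reference fingerprint.
  ```
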
* command: make vf-metadata/af-metadata somewhat observable (wm4, 2019-09-19; 1 file, -1/+1)

  Until now they weren't observable and never reported any updates. Apply a shitty hack to make them mostly observable. It relies on the "idle" event, which is basically triggered on every frame displayed, or similar. This can lead to property change notifications not being sent quickly enough. The cleaner solution would be adding a notification mechanism from filters, but I'm too lazy for that.

* command: make vf-metadata/af-metadata not query metadata twice (wm4, 2019-09-19; 1 file, -7/+13)

  For simplicity, these properties usually query the metadata from the filter twice, even if it's not technically needed at all. The reason for this is mostly the horrible (and legacy) sub-path access (which is why tag_property() is so complex). But for simple cases we can easily avoid double querying, so do that. The benefit is performance (well, it won't matter), and supporting filters that reset information on query (for later).

* video: generally try to align image data on 64 bytes (wm4, 2019-09-19; 5 files, -4/+10)

  Generally, using x86 SIMD efficiently (or crash-free) requires aligning all data on boundaries of 16, 32, or 64 bytes (depending on the instruction set used): 64 bytes for AVX-512, 32 for old AVX, 16 for SSE. Both FFmpeg and zimg usually require aligned data for this reason.

  FFmpeg is very unclear about alignment. Yes, it requires you to align data pointers and strides. No, it doesn't tell you how much, except sometimes (libavcodec has a legacy-looking avcodec_align_dimensions2() API function that requires a heavy-weight AVCodecContext as argument). Sometimes, FFmpeg will take a shit on YOUR and ITS OWN alignment. For example, vf_crop will randomly reduce the alignment of data pointers, depending on the crop parameters. On the other hand, some libavfilter filters or libavcodec encoders may randomly crash if they get the wrong alignment. I have no idea how this thing works at all.

  FFmpeg usually doesn't seem to signal alignment internally anywhere, and usually leaves it to av_malloc() etc. to allocate with proper alignment. libavutil/mem.c currently has an ALIGN define, which is set to 64 if FFmpeg is built with AVX-512 support, or as low as 16 if built without any AVX support. The really funny thing is that a normal FFmpeg build will e.g. align tiny string allocations to 64 bytes, even if the machine does not support AVX at all.

  For zimg use (in a later commit), we also want guaranteed alignment. Modern x86 should actually not be much slower at unaligned accesses, but that doesn't help. zimg's dumb intrinsic code apparently randomly chooses between aligned or unaligned accesses (depending on the compiler, I guess), and on some CPUs these can even cause crashes. So just treat the requirement to align as a fact of life.

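  A hedged sketch of what "align on 64 bytes" means in practice for an image plane: round the stride up to a multiple of 64 and give the plane 64-byte-aligned storage. The constant and helper names are illustrative, not mpv's actual mp_image code:

  ```c
  #include <stdlib.h>
  #include <stdint.h>

  #define IMG_ALIGN 64   // covers SSE (16), AVX (32) and AVX-512 (64)

  // Round x up to the next multiple of IMG_ALIGN (a power of two).
  static size_t align_up(size_t x)
  {
      return (x + IMG_ALIGN - 1) & ~(size_t)(IMG_ALIGN - 1);
  }

  // Allocate one image plane so that both the data pointer and the stride are
  // 64-byte aligned, which keeps FFmpeg's and zimg's SIMD paths happy.
  static uint8_t *alloc_plane(size_t width_bytes, int height, size_t *out_stride)
  {
      size_t stride = align_up(width_bytes);
      void *p = NULL;
      if (posix_memalign(&p, IMG_ALIGN, stride * (size_t)height) != 0)
          return NULL;
      *out_stride = stride;
      return p;
  }
  ```
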