* video: fix player not exiting if no video frame was rendered (wm4, 2019-09-19, 1 file, -2/+3)

E.g. "mpv null:// --demuxer=rawvideo" will "hang" by waiting for video EOF forever. It's not signalled correctly because of the last-frame corner case, which attempts to wait until the current frame is finally displayed (which is signalled by whether a new frame can be queued, see commit 1a339fa09d for some details).

If no frame was ever queued, the VO is not configured, and vo_is_ready_for_frame() never returns true. Fix this by using vo_has_frame(), which seems to be exactly the correct thing we need.

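Sketched in C, the corrected EOF check amounts to something like the following. Only vo_has_frame() and vo_is_ready_for_frame() are names taken from the message above; the surrounding variables are made up for illustration and are not mpv's actual code:

    // If the VO never received a frame, there is nothing to wait for and
    // video can reach EOF right away; otherwise wait until the last queued
    // frame has been displayed (signalled by being able to queue a new one).
    bool waiting_for_last_frame =
        vo_has_frame(vo) && !vo_is_ready_for_frame(vo, -1);
    if (!waiting_for_last_frame)
        video_reached_eof = true;   // made-up flag standing in for the real state machine
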
* stream: log positions on seek failures (wm4, 2019-09-19, 1 file, -1/+2)

* ytdl_hook: fix pseudo-DASH if no init fragment is present (wm4, 2019-09-19, 1 file, -5/+11)

Init fragments are not a necessity for DASH, but this code assumed so. Maybe the check was to prevent worse. But using normal EDL here leads to very shitty behavior where it tries to open hundreds or thousands of fragments, each with its own demuxer and HTTP connection.

(This behavior is fine for normal uses of EDLs, but completely unacceptable when emulating fragmented streaming protocols. I'm not sure why the normal EDL code is needed here, but I think someone claimed some obscure sites just need it.)

This happens in the same situation as the one described in the previous commit.

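For orientation, the pseudo-DASH EDL that ytdl_hook builds looks roughly like the sketch below. The !mp4_dash header and its init= parameter are recalled from DOCS/edl-mpv.rst and should be treated as an approximation, and the URLs are placeholders; the point of this commit is that the fragment-concatenation path is also taken when there is no init fragment and thus no init= parameter.

    # mpv EDL v0
    !mp4_dash,init=%28%https://example.com/init.mp4
    https://example.com/fragment-0001.m4s
    https://example.com/fragment-0002.m4s

One such entry exists per fragment, potentially hundreds of them, which is why falling back to normal EDL (one demuxer and HTTP connection per entry) behaves so badly here.
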
* ytdl_hook: audio can use fragmented DASH too (wm4, 2019-09-19, 1 file, -1/+1)

Otherwise we'd just use the base URL as media URL, which would fail with a 404 error. Not sure if there's a deeper reason why the audio path was explicitly different from the video one. But this actually works now for a video that returned fragmented DASH audio with the default format selection.

(This affects streams on that well known site of a big evil Silicon Valley company. Typically happens after a live stream gets converted to a normal video, though after some time passes, this fragmented version is deleted and replaced by a non-fragmented one. I've observed this several times and this seems to be the "normal" behavior.)

* demux_timeline: include "dash" hint in reported file format (wm4, 2019-09-19, 1 file, -1/+2)

* demux_timeline: disable end-of-segment handling in DASH mode (wm4, 2019-09-19, 1 file, -2/+2)

Normal EDL needs to clip packets coming from the underlying demuxer to the segment range (including complicated stuff due to frame reordering). This is unwanted in pseudo-DASH mode: a broken or subtly incorrect manifest would lead to "bad stuff" happening. The intention of the pseudo-DASH mode is to literally concatenate fragments.

* demux: fix typo in a comment (wm4, 2019-09-19, 1 file, -2/+2)

* demux: fix SEEK_FORWARD into end of cached range (wm4, 2019-09-19, 1 file, -0/+16)

This fixes a weird delay ("buffering") when seeking into the last part of a seekable range. The exact case that triggers it: SEEK_FORWARD is used, and the seek pts is after the second-last keyframe, but before the end of the range. In that case, find_seek_target() returned NULL, and the cache layer waited until the _next_ keyframe the underlying demuxer returned before resuming playback.

find_seek_target() returned NULL because the last keyframe had kf_seek_pts unset. This field contains the lowest PTS in the packet range from the keyframe until the next keyframe (or EOF). For normal seeks, this is needed because keyframes don't necessarily have the minimum PTS in the packet range, so it needs to be computed by waiting for all packets until the next keyframe (or EOF).

Strictly speaking, this behavior was correct, but it meant that the caller would set ds->skip_to_keyframe, which waits for the next newly demuxed keyframe. No packets were returned to the decoder until this happened, usually resulting in the frontend entering "buffering" mode.

What it really needs to do is return the last keyframe in the cache. In this situation, the seek target points into the middle of the last completely cached packet range (as delimited by keyframes), and SEEK_FORWARD is supposed to skip to the next keyframe. This is in line with the basic assumptions the packet cache makes (e.g. the keyframe flag means it's possible to start decoding, and the frames decoded from it and following packets will strictly have PTS values above the previous keyframe range). This means the kf_seek_pts value doesn't matter in this situation either.

So fix this situation by explicitly detecting it and then returning the last cached keyframe.

Should the search loop look at all packets, instead of only keyframe ones? This would mean it can know that it's within the last keyframe range (without looking at queue->seek_end). Maybe this would be a bit more natural for the SEEK_FORWARD case, but due to PTS reordering it doesn't sound like a useful thing to do.

Should skip_to_keyframe be checked by the code that sets kf_seek_pts to a known value? This wouldn't help too much; the frontend would still go into "buffering" mode for no reason until the packet range is completed, although it would resume from the correct range.

Should a NULL return always unconditionally use keyframe_latest? This makes sense because the seek PTS is usually already in the cached range, so this is the only case that should happen. But there are scary special cases, like sparse subtitle streams, or other uses of find_seek_target() which could be out of range now or in the future. Basically, don't "risk" it.

One other potential problem with this is that the "adjust seek target" code will be disabled in this case. It checks kf_seek_pts, and if it's unset, the adjustment is not done. Maybe this could be changed to use the queue's seek_end time, but I'm not sure if this is fully kosher. On the other hand, I think the main use for this adjustment is with backwards seeks, so this shouldn't matter.

A previous commit dealing with audio/video stream merging mentioned how seeking forward entered "buffering" mode for unknown reasons; this commit fixes that issue.

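A rough sketch of the added special case, using the field names from the message above; the actual code in demux.c may differ, and the surrounding variables are assumptions:

    // In find_seek_target() (sketch): a forward seek whose target lies inside
    // the newest, still-incomplete keyframe range (kf_seek_pts not yet known)
    // should return that last cached keyframe instead of NULL, so the cache
    // layer does not stall waiting for the next newly demuxed keyframe.
    if (!target && (flags & SEEK_FORWARD) && keyframe_latest &&
        pts <= queue->seek_end)
        target = keyframe_latest;
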
* demux_timeline: report network speed of slave connections (wm4, 2019-09-19, 3 files, -1/+35)

demux_timeline doesn't do any transport accesses itself. The slave demuxers do this (these will actually access the stream layer and perform e.g. network accesses). As a consequence, demux_timeline always reported 0 bytes read, and network speed display didn't work.

Fix this by awkwardly reporting the amount of read bytes upwards. This is not very nice, and requires explicit calls whenever the slave "might" have read data. Due to the way the reporting is done, it only works if the slaves do not run demuxer threads, which makes things even less nice. (Fortunately they don't anyway, because it would be a waste of resources.) Some identifiers contain the word "hack" as a warning.

Some of the stupidity comes from the fact that demux.c itself resets the stats randomly in order to calculate the bytes_per_second value, which is useless for a slave, but of course is still done, because demux.c itself is not aware of whether it's on the slave or top-level layer. Unfortunately, this will have to do.

In theory, the demuxer thread/cache layer should be separated from the demuxer implementations. This would get rid of all the awkwardness and nonsense. For example, the only threading involved would be the caching layer, completely separate from the demuxers themselves. It'd also be the only thing that calculates speed rates for the player frontend (instead of doing it for each demuxer, even if unused).

* demux: slightly cleanup network speed reporting (wm4, 2019-09-19, 3 files, -8/+32)

It was an ugly hack, and the next commit will make it even uglier. Slightly reduce the ugliness to prevent death of too many brain cells, though it's still an ugly hack.

The cleanup is really minor, but I guess the following commit would be much worse otherwise. In particular, this commit checks accesses (instead of having a public field with evil access rules), which should avoid misunderstandings and incorrect use. Strictly speaking, the added field is redundant, but the next commit complicates it a bit.

* ytdl_hook: disable EDL-generated useless chapters when merging streams (wm4, 2019-09-19, 1 file, -1/+2)

(Yes, a bit odd how this header is needed only for the first stream.)

* demux_edl: add a special header to disable chapter generation (wm4, 2019-09-19, 2 files, -11/+29)

A bit of a hack.

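Presumably this is the !no_chapters header that ended up documented in DOCS/edl-mpv.rst. Used in an EDL file, it would look roughly like this (URLs are placeholders, and the exact header name should be double-checked against the docs):

    # mpv EDL v0
    !no_chapters
    https://example.com/part1.mp4
    https://example.com/part2.mp4

Without the header, EDL generates one chapter per segment, which is useless for the streaming use case described in the surrounding commits.
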
* demux_edl: explicitly error on unknown header types (wm4, 2019-09-19, 1 file, -0/+2)

I think this is better. On the other hand, this is a behavior change. The EDL "spec" says that unknown fields are ignored. But strictly speaking, unknown headers are not "fields", but unknown entities.

* demux_edl: minor cleanup to header parsing (wm4, 2019-09-19, 1 file, -31/+35)

EDL "headers" were always an afterthought, and kind of hacked on top of the existing code. Improve it slightly, and make it follow the conventions of the normal parsing. Basically use the same code structure for them, just that they use different field names.

* ytdl_hook: merge separate audio tracks via EDL (wm4, 2019-09-19, 1 file, -5/+15)

This merges separate audio and video tracks into one virtual stream, which helps the mpv caching layer. See the previous EDL commit for details.

It's apparently active for most of the evil Silicon Valley giant's streaming videos. Initial tests seem to work fine, except it happens pretty often that playback goes into buffering immediately even when seeking within a cached range, because there is not enough forward cache data yet to fully restart playback. (Or something like this.)

The audio stream title used to be derived from track.format_note; this commit stops doing so. It seemed pointless anyway. If really necessary, it could be restored by adding new EDL headers.

Note that we explicitly don't do this with subtitle tracks. Subtitle tracks still have a chance with on-demand loading or loading in the background while video is already playing; merging them with EDL would prevent this. Currently, subtitles are still added in a "blocking" manner, but in theory this could be loosened. For example, the Lua API already provides a way to run processes asynchronously, which could be used to add subtitles during playback. EDL will probably never be flexible enough to provide this. Also, subtitles are downloaded at once, rather than streamed like audio and video.

Still missing: disabling EDL's pointless chapter generation, and propagating download speed statistics through the EDL wrapper.

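As a rough illustration, the EDL the hook now generates for one video plus one audio track would look something like this (the syntax is recalled from DOCS/edl-mpv.rst, so treat the exact header names as assumptions; URLs are placeholders):

    # mpv EDL v0
    !no_chapters
    https://example.com/video_only_stream
    !new_stream
    https://example.com/audio_only_stream

The first stream is implicit; each !new_stream header starts another virtual track sourced from the following entries, so the demuxer cache sees a single demuxer producing interleaved audio and video packets.
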
* loadfile, ytdl_hook: don't reject EDL-resolved URLs through playlist (wm4, 2019-09-19, 3 files, -1/+12)

The ytdl wrapper can resolve web links to playlists. This playlist is passed as a big memory:// blob, and will contain further quite normal web links. When playback of one of these playlist entries starts, ytdl is called again and will resolve the web link to a media URL again.

This didn't work if playlist entries resolved to EDL URLs. Playback was rejected with a "potentially unsafe URL from playlist" error. This was completely weird and unexpected: using the playlist entry directly on the command line worked fine, and there isn't a reason why it should be different for a playlist entry (both are resolved by the ytdl wrapper anyway). Also, if the only EDL URL was added via audio-add or sub-add, the URL was accessed successfully.

The reason this happened is that the playlist entries were marked as STREAM_SAFE_ONLY, and edl:// is not marked as "safe". Playlist entries passed via command line directly are not marked, so resolving them to EDL worked.

Fix this by making the ytdl hook set load-unsafe-playlists while the playlist is parsed. (After the playlist is parsed, and before the first playlist entry is played, file-local options are reset again.) Further, extend the load-unsafe-playlists option so that the playlist entries are not marked while the playlist is loaded. Since playlist entries are already verified, this should change nothing about the actual security situation.

There are now 2 locations which check load_unsafe_playlists. The old one is a bit redundant now. In theory, the playlist loading code might not be the only code which sets these flags, so keeping the old code is somewhat justified (and in any case it doesn't hurt to keep it).

In general, the security concept sucks (and always did). I can for example not answer the question whether you can "break" this mechanism with various combinations of archives, EDL files, playlist files, compromised sites, and so on. You probably can, and I'm fully aware that it's probably possible, so don't blame me.

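For reference, the option involved here is the long-standing --load-unsafe-playlists; what the hook now sets file-locally corresponds to starting mpv like this (just an illustration of the pre-existing option, not something this commit adds):

    mpv --load-unsafe-playlists some_playlist.m3u8
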
* demux, demux_edl: add extension for tracks sourced from separate streams (wm4, 2019-09-19, 5 files, -159/+315)

This commit adds an extension to mpv EDL, which basically allows you to do the same as --audio-file, --external-file, etc. in a single EDL file.

This is a relatively quick & dirty implementation. The dirty part lies in the fact that several shortcuts are taken. For example, struct timeline now forms a singly linked list, which is really weird, but also means the other timeline-using demuxers (cue, mkv) don't need to be touched. Also, memory management becomes even worse (weird object ownership rules that are just fragile WTFs). There are some other dubious small changes, mostly related to the weird representation of separate streams.

demux_timeline.c contains the actual implementation of the separate stream handling. For the most part, most things that used to be on the top level are now in struct virtual_source, of which one exists for each separate stream. This is basically like running multiple demux_edl.c in parallel. Some changes could strictly speaking be split into a separate commit, such as the stream_map type change.

Mostly untested. Seems to work for the intended purpose. Potential for regressions for other timeline uses (like ordered chapters) is probably low. One thing which could definitely break, and which I didn't test, is the pseudo-DASH fragmented EDL code, of which ytdl can trigger various forms in obscure situations. (Uh, why don't we have a test suite.)

Background: the intention is to use this for the ytdl wrapper. A certain streaming site from a particularly brain damaged and plain evil Silicon Valley company usually provides streams as separate audio and video streams. The ytdl wrapper simply uses audio-add (i.e. adding it as an external track, like with --audio-file), which works mostly fine. Unfortunately, mpv manages caching completely separately for external files. This has the following potential problems:

1. Seek ranges are rendered incorrectly. They always use the "main" stream, in this case the video stream. E.g. clicking into a cached range on the OSC could trigger a low-level seek if the audio stream is actually not cached at the target position.

2. The stream cache bloats unnecessarily. Each stream may allocate the full configured maximum cache size, which is not what the user intends to do. Cached ranges are not pruned the same way, which creates disjoint cache ranges, which only use memory and won't help with fast seeking or playback.

3. mpv will try to aggressively read from both streams. This is done from different threads, with no regard for which stream is more important. So it might happen that one stream starves the other one, especially if they have different bitrates.

4. Every stream will use a separate thread, which is an unnecessary waste of system resources.

In theory, the following solutions are available (this commit works towards D):

A. Centrally manage reading and caching of all streams. A single thread would do all I/O, and decide from which stream it should read next. As long as the total TCP/socket buffering is not too high, this should be effective to avoid starvation issues. This can also manage the cached ranges better. It would also get rid of the quite useless additional demuxer threads. This solution is conceptually simple, but requires refactoring the entire demuxer middle layer.

B. Attempt to coordinate the demuxer threads. This would maintain a shared cache and readahead state to solve the mentioned problems explicitly. While this sounds simple and like an incremental change, it's probably hard to implement and creates more messy special cases; solution A seems just a better and simpler variant of this. (On the other hand, A requires refactoring more code.)

C. Render an intersection of the seek ranges across all streams. This fixes only problem 1.

D. Merge all streams in a dedicated wrapper demuxer. The general demuxer layer remains unchanged, and reading from separate streams is handled as a special case. This effectively achieves the same as A. In particular, caching is simply handled by the usual demuxer cache layer, which sees the wrapper demuxer as a single stream of interleaved packets. One implementation variant of this is to reuse the EDL infrastructure, which this commit does.

All in all, solution A would be preferable, because it's cleaner and works for all external streams in general. Some previous commit tried to prepare for implementing solution A. This could still happen. But it could take years until this is finally seriously started and finished. In any case, this commit doesn't block or complicate such attempts, which is also why it's the way to go.

It's worth mentioning that original mplayer handles external files by creating a wrapper demuxer. This is like a less ideal mixture of A and D. (The similarity with A is that extending the mplayer approach to be fully dynamic and without certain disadvantages caused by the wrapper would end up with A anyway. The similarity with D is that due to the wrapper, no higher-level code needs to be changed.)

* demux: make demuxer list static, remove ancient comment (wm4, 2019-09-19, 1 file, -5/+1)

I'd actually very much encourage demuxer implementations outside of the problematic libavformat.

* build: silence idiotic -Wformat-truncation (wm4, 2019-09-19, 1 file, -1/+2)

This warns about player/audio.c:253 with gcc 8.2.0. Although this warning could be useful to check the worst case estimation, the compiler doesn't explain how it gets its dumb, bogus result, so this is useless. You'd just end up trying to make the compiler happy for no reason.

* demux_lavf: increase max. probe size (wm4, 2019-09-19, 1 file, -1/+1)

For those shitty mp3s with extremely large ID3v2/APIC tags, and for which libavformat insists on reading all data until after the ID3v2.

* stream: redo buffer handling and allow arbitrary size for stream_peek() (wm4, 2019-09-19, 4 files, -50/+97)

struct stream used to include the stream buffer, including peek buffer, inline in the struct. It could not be resized, which means the maximum peek size was set in stone. This meant demux_lavf.c could peek only so much data.

Change it to use a dynamic buffer. Because it's possible, keep the inline buffer for default buffer sizes (which are basically always used outside of file opening). It's unknown whether it really helps with anything. Probably not.

This is also the fallback plan in case we need something like the old stream cache in order to deal with mp4 + unseekable http: the code can now be easily changed to use any buffer size.

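The idea of a resizable peek buffer, sketched generically (none of these names come from mpv's stream.c; this only illustrates the pattern of growing the allocation on demand while keeping data buffered ahead of the read position):

    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>

    struct peek_buf {
        uint8_t *data;      // bytes buffered ahead of the reader
        size_t size;        // allocated size
        size_t used;        // how many bytes are currently buffered
    };

    // Buffer at least 'want' bytes from fd without consuming them; grow the
    // allocation if needed. Returns how many bytes are actually buffered
    // (may be less than 'want' at EOF or on error).
    static size_t peek(struct peek_buf *b, int fd, size_t want)
    {
        if (want > b->size) {
            uint8_t *p = realloc(b->data, want);
            if (!p)
                return b->used;     // keep the old buffer on allocation failure
            b->data = p;
            b->size = want;
        }
        while (b->used < want) {
            ssize_t r = read(fd, b->data + b->used, b->size - b->used);
            if (r <= 0)
                break;              // EOF or read error
            b->used += (size_t)r;
        }
        return b->used;
    }
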
* demux: another unused function (wm4, 2019-09-19, 2 files, -13/+0)

* command: report unknown file size as unavailable, not -1 (wm4, 2019-09-19, 1 file, -0/+2)

* demux: autoselection is gone (wm4, 2019-09-19, 2 files, -9/+0)

Was used by DVD, I think.

* stats.lua: silence annoying fontconfig warnings (wm4, 2019-09-19, 1 file, -2/+2)

Apparently I don't have this font.

* demux: remove some more minor dead code (wm4, 2019-09-19, 2 files, -8/+4)

Also add clarifications.

* demux: get rid of ->control callback (wm4, 2019-09-19, 4 files, -24/+9)

The only thing left is the notification for track switching. Just get rid of that.

There's probably no real reason to get rid of control(), but why not. I think I was actually trying to do some real work but fuck that.

* demux: change hack for closing subtitle files early (wm4, 2019-09-19, 7 files, -30/+35)

Subtitles (and a few other file types, like playlists) are not streamed, but fully read on opening. This means keeping the file handle or network socket open is a waste of resources and could cause other weird behavior. This is why there's a hack to close them after opening.

Change this hack to make the demuxer itself do this, which is less weird. (Until recently, demuxer->stream ownership was more complex, which is why it was done this way.)

There is some evil shit due to a huge ownership/lifetime mess of various objects. Especially EDL (currently the only nested demuxer case) requires being careful about mp_cancel and passing down stream pointers.

As one defensive programming measure, stop accessing the "stream" variable in open_given_type(), even where it would still work. This includes removing a redundant line of code, and removing the peek call, which should not be needed anymore, as the remaining demuxers do this mostly correctly.

* demux: make demux_open() private (wm4, 2019-09-19, 3 files, -8/+8)

I always wanted to get rid of this, because it makes the ownership rules for the stream pointer really awkward. demux_edl.c was the only remaining user of this. Replace it with a semi-clever idea: the init segment shit can be used to pass the "file" contents as a memory block, and "memory://" itself provides an empty stream. I have no idea if this actually works, because I didn't immediately find a test stream (it would have to be some youtube DASH shit).

* demux: simplify API for returning cache status (wm4, 2019-09-19, 5 files, -152/+70)

Instead of going through those weird DEMUXER_CTRLs, query this information directly. I'm not sure which kind of brain damage made me use CTRLs for these. Since there are no other DEMUXER_CTRLs that make sense for the frontend, remove the remaining infrastructure for them too.

* demux: return stream file size differently, rip out stream ctrls (wm4, 2019-09-19, 4 files, -49/+6)

The stream size return was the only thing that still required doing STREAM_CTRLs from frontend through the demuxer layer. This can be done much easier, so rip it out. Also rip out the now unused infrastructure for STREAM_CTRLs via demuxer layer.

* stream_libarchive: remove base filename stuff (wm4, 2019-09-19, 4 files, -32/+1)

Apparently this was so that when playing a video file from a .rar file, it would load external subtitles with the same name (instead of looking for mpv's rar:// mangled URL). This was requested on github almost 5 years ago. Seems like a weird feature, and I don't care. Drop it, because it complicates some in-progress change.

* demux_timeline: fix off by one error, rearrange weird code (wm4, 2019-09-19, 1 file, -4/+4)

This code set pkt->stream to a value that I'm not sure is correct. A recent commit overwrote it with a value that is definitely correct. There appears to be an off-by-one error. No fucking clue whether this was somehow correct, but applying an apparent fix does not seem to break anything, so whatever.

* demux: return packets directly from demuxer instead of using sh_stream (wm4, 2019-09-19, 8 files, -57/+93)

Preparation for other potential changes to separate demuxer cache/thread and actual demuxers. Most things are untested, but it seems to work somewhat.

* DOCS/edl-mpv: document a dumb thing (wm4, 2019-09-19, 1 file, -0/+3)

EDLs can be provided either as an external file, or "inline" as a big edl:// URL. There is no difference between them, except that if it's loaded from an external file, there is some weird filename sanitation going on (see fix_filenames() in demux_edl.c). It seems this is intended to be a security mechanism, but it probably makes no sense at all. Note that playlists are allowed to access anything locally.

One difference to playlists is that the EDL code lacks the "security" mechanism when accessing playlist entries (see handling of the playlist_entry.stream_flags field - EDL would need something similar), so don't remove that, as I'm unaware of the exact consequences.

* command: make playlist builtin OSD property show titles instead of URLs (wm4, 2019-09-19, 1 file, -5/+8)

Useful for ytdl.

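For context, this affects the listing you get when printing the playlist property on the OSD, e.g. with an input.conf binding like the following (the key choice is arbitrary):

    F8 show-text ${playlist}

With this change, entries that have a title (such as ytdl-resolved videos) show the title instead of the raw URL.
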
* stream_libarchive: fix another crash with broken rar files (wm4, 2019-09-19, 1 file, -1/+3)

libarchive (sometimes affectionately called libcve) has this annoying behavior that if, after a "fatal" error, you do any operation on the archive context other than querying the error and closing the context, you get a free CVE. So we close the archive context in these situations. This can set p->mpa to NULL, so code accessing this field needs to be careful. This was not considered in a certain code path, and a simple truncated .rar file made it crash. Part of the problem was that the file inside the rar was a mkv file, which triggered seeking when the demux_mkv resync code encountered bogus data. This is probably a regression from a relatively recent change to this code (in any case mpv 0.29.1 doesn't crash). Fix this by adding the check.

There's also a mechanism to reopen an archive context, used to emulate seeking, since most libarchive format handlers don't support this natively. Add a reopen call to the codepath, because obviously it should always be possible to seek back into a "working" area of the file. There is a second bug with this: if reopening fails, we don't adjust the current position back to 0, which in some cases means we accidentally return bogus data to the reader when we shouldn't. Fix this by always resetting the position on reopening.

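The defensive pattern described above, as a self-contained sketch against the libarchive API (this is not mpv's stream_libarchive.c, just an illustration of dropping the context after a fatal error and forcing later users to check for NULL):

    #include <stdio.h>
    #include <archive.h>
    #include <archive_entry.h>

    // After a fatal libarchive error, the only safe calls on the context are
    // querying the error and freeing it, so free it immediately and return
    // NULL; callers must re-check the pointer before any further use.
    static struct archive *list_entries(const char *path)
    {
        struct archive *a = archive_read_new();
        archive_read_support_format_all(a);
        if (archive_read_open_filename(a, path, 10240) != ARCHIVE_OK) {
            fprintf(stderr, "open failed: %s\n", archive_error_string(a));
            archive_read_free(a);
            return NULL;
        }
        struct archive_entry *entry;
        int r;
        while ((r = archive_read_next_header(a, &entry)) == ARCHIVE_OK)
            printf("entry: %s\n", archive_entry_pathname(entry));
        if (r != ARCHIVE_EOF) {
            // ARCHIVE_FATAL (e.g. a truncated rar): drop the context now
            fprintf(stderr, "read error: %s\n", archive_error_string(a));
            archive_read_free(a);
            return NULL;
        }
        return a;   // caller keeps using and eventually frees the context
    }
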
* sub/sd_ass: always set the libass track type to TRACK_TYPE_ASS (Jan Ekström, 2019-09-19, 1 file, -2/+1)

It would always autodetect it based on the passed style block, but as we are defining it, we might as well define it always. (As far as I can see, all decoders in libavcodec utilize 4+ style blocks.)

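In libass terms, the change amounts to something like this sketch (the surrounding sd_ass.c context is assumed; only the libass calls and the TRACK_TYPE_ASS value are real API):

    #include <ass/ass.h>

    // Create a track and declare its type explicitly instead of letting
    // libass autodetect SSA vs. ASS from the style section.
    ASS_Track *new_ass_track(ASS_Library *library)
    {
        ASS_Track *track = ass_new_track(library);
        if (track)
            track->track_type = TRACK_TYPE_ASS;
        return track;
    }
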
* sub/sd_ass: utilize UINT32_MAX subtitle duration for unknown (Jan Ekström, 2019-09-19, 2 files, -9/+12)

US closed captions, teletext and ARIB caption decoders utilize this value.

* sub/lavc_conv: switch to the newer "ass" subtitle decoding mode (Jan Ekström, 2019-09-19, 3 files, -5/+22)

Existing since 2016, this mode removes the timestamps from the lines themselves, and gives more precision in the timestamps (1:1000).

* wayland: fix wl_proxy leak (dudemanguy, 2019-09-19, 1 file, -0/+3)

This one is probably not terribly obvious from just the valgrind log, but a wayland dev explained it to me just a second ago. Whenever mpv sends events to the screen with wl_display_dispatch, wayland internally allocates memory to a struct wl_proxy object if a new id is found. Quite a few more things happen to that proxy object, but eventually mpv stores the data on the client-side in a wrapper type of struct (struct wl_data_offer). mpv's data_device_listener keeps track of those proxies and frees the memory when appropriate. Of course, mpv is constantly sending events to the screen and does so until the user quits the