* aspect: add video margin options (wm4, 2019-09-19, 4 files, -5/+73)
Semantics a bit questionable. This is done for the OSC (next commit), and a comment added to the manpage explicitly states this. Meaning this is probably garbage and needs to be revisited when the OSC changes and/or someone wants to use this margin feature for something else.

Not sure about the subtitle thing. It's imaginable that someone uses these options to create empty borders for subtitles on the bottom, so subtitles should be located there. On the other hand, this gives a rather unpolished user experience when using the (later added) OSC feature to not overlap with the video. There's not much of a point if the OSC still overlaps the video. However, I'm too lazy to think about this, so it stays like it is.
* aspect: fix some UB problems in corner cases (wm4, 2019-09-19, 1 file, -6/+6)
--video-margin-ratio-left=0.2 --video-margin-ratio-right=0.9 (added in the next commit) will set f_w to inf, resulting in some garbage being propagated. Later, the OSD margins are computed from values before the various sanity clamping is applied, which makes libass suffer from bullshit values.

I'm fairly sure it's OK and more correct to compute the OSD margins from the clamped values, but I'm not 100% sure about that.
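A minimal sketch of the kind of sanity clamping meant above (purely illustrative, not the actual aspect.c code; the 10% minimum width is an assumed value):

    // Hypothetical clamp: keep the combined margins below 1.0 so the
    // remaining video width can never become zero or negative, which is
    // what lets later divisions produce inf/garbage.
    static void clamp_margins(float *left, float *right)
    {
        float sum = *left + *right;
        if (sum >= 1.0f) {
            float scale = 0.9f / sum;   // leave at least 10% of the width
            *left *= scale;
            *right *= scale;
        }
    }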
* stats.lua: add cache info page (wm4, 2019-09-19, 1 file, -4/+69)
Uses page 3, which was apparently reserved for filter info.
* manpage: fix false statement (wm4, 2019-09-19, 1 file, -2/+2)
* demux: turn some redundant assignments into asserts (wm4, 2019-09-19, 1 file, -3/+5)
demux_packet.next should not be used outside of demux.c, and in this case it's a packet that was just passed to demux.c from the outside.

demux_packet.stream is already set by the demuxer, and this is assured by the add_packet_locked() caller.
* demux: move a function (wm4, 2019-09-19, 1 file, -14/+12)
The new location makes just as much sense (or more, since it's close to its per-stream companion function), and we don't need a forward declaration.
* demux: disable backward demuxing if it fatally fails (wm4, 2019-09-19, 1 file, -0/+13)
We don't care much about this case, because backward playback can fail terribly without a good way to detect it, so this was fine.

However, this froze in certain situations. Reading from a subtitle file for which backward demuxing failed could make it get stuck in demux_read_packet_async() in unthreaded mode. (That we don't support backwards subtitle decoding anyway doesn't matter for this.)

So aggressively disable backward demuxing to prevent worse in these situations. The behavior will still be awful, because the frontend is still in backwards playback mode, but at least it won't freeze.
* demux: add an on-disk cache (wm4, 2019-09-19, 12 files, -39/+510)
Somewhat similar to the old --cache-file, except for the demuxer cache. Instead of keeping packet data in memory, it's written to disk and read back when needed.

The idea is to reduce main memory usage, while allowing fast seeking in large cached network streams (especially live streams). Keeping the packet metadata on disk would be rather hard (would use mmap or so, or rewrite the entire demux.c packet queue handling), and since it's relatively small, just keep it in memory.

Also for simplicity, the disk cache is append-only. If you're watching really long livestreams, and need pruning, you're probably out of luck. This still could be improved by trying to free unused blocks with fallocate(), but since we're writing multiple streams in an interleaved manner, this is slightly hard.

Some rather gross ugliness in packet.h: we want to store the file position of the cached data somewhere, but on 32 bit architectures, we don't have any usable 64 bit members for this, just the buf/len fields, which add up to 64 bit - so the shitty union aliases this memory.

Error paths untested. Side data (the complicated part of trying to serialize ffmpeg packets) untested.

Stream recording had to be adjusted. Some minor details change due to this, but probably nothing important.

The change in attempt_range_joining() is because packets in cache have no valid len field. It was a useful check (heuristically finding broken cases), but not a necessary one.

Various other approaches were tried. It would be interesting to list them and to mention the pros and cons, but I don't feel like it.
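A rough sketch of the union aliasing described above, under the stated 32-bit constraint (field names here are illustrative, not the literal packet.h layout):

    #include <stddef.h>
    #include <stdint.h>

    struct demux_packet_sketch {
        union {
            struct {
                void *buf;          // packet data while it lives in memory
                size_t len;         // length of that data
            };
            // On 32-bit targets, buf + len add up to 64 bits, so once the
            // data has been flushed to the disk cache, the same storage can
            // hold the 64-bit file position instead.
            uint64_t cached_data_pos;
        };
    };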
* osdep: add mkostemps() emulation (wm4, 2019-09-19, 2 files, -2/+36)
Supposed to follow the standard function. The standard function is not standard, but a GNU extension. Adding some ifdef mess is pointless too - it has no advantages other than having a mess, and not spotting implementation bugs in the emulation due to running it only on "obscure" platforms (like Windows, so most computers actually, except the developer's platform).

There is mkstemp(), which at least is in POSIX 2008. But it's 100% useless, except in some obscure cases: it doesn't set O_CLOEXEC, nor can you pass that flag to it. Without O_CLOEXEC, we'd leak the temporary file to all child processes. (The fact that the file, which is expected to reach double or triple digit GB sizes, will be deleted only once all processes unreference the FD, makes this sort of a big deal. You could ftruncate() it, but that doesn't fix all the other problems.)

Why did POSIX standardize mkstemp() and O_CLOEXEC apparently at the same time, but provide no way to pass O_CLOEXEC to mkstemp()? With the introduction of O_CLOEXEC, they acknowledged that there's a need to atomically set the FD_CLOEXEC flag when creating file descriptors. (FD_CLOEXEC was standard before that, but setting it with fcntl() is racy.) You're much more likely to need a temp file that is CLOEXEC rather than the opposite, and even if they were somehow opposed to CLOEXEC by default (such as for compat. reasons), surely POSIX could have standardized mkostemp() too or instead.

And then there's the fact that this whole O_CLOEXEC mess is stupid. Surely there would have been a better way to handle this, instead of requiring adding O_CLOEXEC to almost ALL instances of open() in all code that has been written ever. The justification for this is that the historic default was wrong, and you can't change it (e.g. this won't work: changing the behavior of exec() to not inherit the FD to the child process, unless a hypothetical O_KEEP_EXEC flag is set). But on the other hand, surely you could have introduced an exec() variant which closes all FDs, except a whitelist of FDs passed to it. Let's call it execve2().

In fact, I'm going to argue that exec() call sites are the most aware of whether (and which) FDs to inherit. Some programs even tried to explicitly iterate over all opened FDs and explicitly close "unwanted" FDs (which of course was problematic for other reasons), and such an execve2() call would have been the ideal solution.

Maybe this proposed solution would have had problems too. But surely revisiting and reviewing every exec*() call would have been simpler than reviewing every open() call. And more importantly, simpler than having to extend every damn library function that either calls open() or creates FDs in some other way, like mkstemp(). What arguments are there going to be against this? That there will be library code that can't keep working correctly with processes that use the "old" exec? Well, what about all my legacy library code that uses open() incorrectly, and that will break no matter what?

Well, I'm not going to claim that I can come up with better solutions than POSIX (generally or in this case), but this situation is ABSOLUTELY ATROCIOUS. It makes win32 programming look attractive compared to POSIX, that standard pandering to dead people from the past. (Note: not trying to insult dead people.) I'm not sure what POSIX is even doing. Anything useful? Doesn't look like it to me. Are they paid? Why? They didn't even fix the locale mess, nor do they intend to. I bet they're proud of discussing compatibility with 70ies code day in and day out without ever producing anything useful. What a load of crap. They seriously need to do better than this.

Oh, and my wrapper is probably buggy. Fortunately that doesn't matter. Also I'm dumping this into io.h. Originally, io.h was just supposed to replace broken implementations of standard functions by MinGW (and then by Android), but whatever, it can serve as a dumping ground for shit code.
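A hedged sketch of what such an emulation can look like (function name and retry count are illustrative, not mpv's actual osdep code): fill the "XXXXXX" run before the suffix with random characters and open with O_CREAT|O_EXCL, OR-ing in the caller's flags such as O_CLOEXEC.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>

    static int my_mkostemps(char *template, int suffixlen, int flags)
    {
        static const char chars[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
        size_t len = strlen(template);
        if (len < 6 + (size_t)suffixlen)
            return -1;
        char *x = template + len - suffixlen - 6;   // the "XXXXXX" run
        for (int tries = 0; tries < 100; tries++) {
            for (int i = 0; i < 6; i++)
                x[i] = chars[rand() % (sizeof(chars) - 1)];
            // O_EXCL makes creation atomic; 'flags' lets the caller add O_CLOEXEC.
            int fd = open(template, O_RDWR | O_CREAT | O_EXCL | flags, 0600);
            if (fd >= 0 || errno != EEXIST)
                return fd;
        }
        errno = EEXIST;
        return -1;
    }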
* demux: move comment to slightly better location (wm4, 2019-09-19, 1 file, -1/+1)
* demux: fix excessive backwards seeking with backwards playback (wm4, 2019-09-19, 1 file, -1/+2)
Backwards demuxing usually seeks back by a "random" amount (set by a user option) when it needs new preceding packets. It turns out a past change made these backwards seek amounts add up when it didn't need to (i.e. subtracting the amount from the seek pos without properly resetting it), which could possibly slow down playback as it went on.

The reason for this was that back_seek_pos was set for every stream on every seek. This made the reset not affect other streams (in particular streams which weren't used and never were reset, or which didn't reset that often). But as the commit adding it showed, this is needed only to set the initial position. So do that.

Fixes: "demux: fix initial backward demuxing state in some cases"
* demux: fix minor seek_preroll consistency issue (wm4, 2019-09-19, 1 file, -0/+2)
When packet appending sets the start of the range, it adjusts the range by seek_preroll. Do this when packets are pruned from the start of the range too.

(Yeah, seek_preroll handling is probably broken in some other cases. It was halfhearted to begin with.)
* demux: mess with seek range updates and pruning (wm4, 2019-09-19, 3 files, -118/+156)
The main thing this commit does is removing demux_packet.kf_seek_pts. It gets rid of 8 bytes per packet. Which doesn't matter, but whatever.

This field was involved with much of the seek range updating and pruning, because it tracked the canonical seek PTS (i.e. start PTS) of a packet range. We have to deal with timestamp reordering, and assume the start PTS is the lowest PTS across all packets (not necessarily just the first packet). So knowing this PTS requires looping over all packets of a range (no, the demuxer isn't going to tell us, that would be too sane).

Having this as a packet field was perfectly fine. I'm just removing it because I started hating extra packet fields recently. Before this commit, this value was cached in the kf_seek_pts field (and computed "incrementally" when adding packets). This commit computes the value on demand (compute_keyframe_times()) by iterating over the packet list.

There is some similarity with the state before 10d0963d851fa, where I introduced the kf_seek_pts field - maybe I'm just moving in circles. The commit message claims something about quadratic complexity, but if the code before that had this problem, this new commit doesn't reintroduce it, at least. (See below.)

The pruning logic is simplified (I think?) - there is no "incremental" cached pruning decision anymore (next_prune_target is removed), and instead it simply prunes until the next keyframe like it's supposed to. I think this incremental stuff was only there because of very old code that got refactored away before. I don't even know what I was thinking there, it just seems complex.

Now the seek range is updated when a keyframe packet is removed. Instead of using the kf_seek_pts field, queue->seek_start is used to determine the stream with the lowest timestamp, which should be pruned first. This is different, but should work well. Doing the same as the previous code would require compute_keyframe_times(), which would introduce quadratic complexity.

On the other hand, it's fine to call compute_keyframe_times() when the seek range is recomputed on pruning, because this is called only once per removed keyframe packet. Effectively, this will iterate over the packet list twice instead of once, and with some locality. The same happens when packets are appended - it loops over the recently added packets once again. (And not more often, which would go above linear complexity.)

This introduces some "cleverness" with avoiding calls to update_seek_ranges() even when keyframe packets are added/removed, which is not really tightly coupled to the new code, and could have been in a separate commit.

Removing next_prune_target achieves the same as commit b275232141f56, which is hereby reverted (stale is_bof flags prevent seeking before the current range, even if the beginning of the file was pruned). The seek range is now strictly computed after at least one packet was removed, and stale state should not be possible anymore.

Range joining may over-allocate the index a little. It tried hard to avoid this before by explicitly freeing the old index before creating a new one. Now it iterates over the old index while adding the entries to the new one, which is simpler, but may allocate twice the memory in the worst case. It's not going to matter for anything, though.

Seeking will be slightly slower. It needs to compute the seek PTS values across all packets in the vicinity of the seek target. The previous code also iterated over these packets, but now it iterates over one packet range more.

Another minor detail is that the special seeking code for SEEK_FORWARD goes away. The seeking code will now iterate over the very last packet range too, even if it's incomplete (i.e. packets are still being appended to it). It's fine that it touches the incomplete range, because the seek_end fields prevent that anything particularly incorrect can happen. On the other hand, SEEK_FORWARD can now consider this as a seek target, which the deleted code had to do explicitly, as kf_seek_pts was unset for incomplete packet ranges.
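An illustrative sketch of the on-demand computation described above (not the actual demux.c function; MP_NOPTS_VALUE is mpv's "no timestamp" sentinel, the rest of the names are assumptions): scan a range's packet list and take the lowest keyframe PTS as the canonical seek start.

    #define MP_NOPTS_VALUE (-0x1p+63)   // assumed sentinel, as in mpv

    struct packet_sketch {
        struct packet_sketch *next;
        int keyframe;
        double pts;
    };

    // Return the lowest keyframe PTS in the list, or MP_NOPTS_VALUE if none.
    static double compute_seek_start(struct packet_sketch *first)
    {
        double start = MP_NOPTS_VALUE;
        for (struct packet_sketch *dp = first; dp; dp = dp->next) {
            if (!dp->keyframe || dp->pts == MP_NOPTS_VALUE)
                continue;
            if (start == MP_NOPTS_VALUE || dp->pts < start)
                start = dp->pts;
        }
        return start;
    }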
* demux: fix a comment (wm4, 2019-09-19, 1 file, -1/+1)
Obviously doesn't make sense in this order. The git history shows that this comment was touched multiple times, without ever fixing it. It was originally added in 2016, where the "for" was missing. Later, the "for" was added, but to the wrong position. What the fuck?
* demux: cache a value (wm4, 2019-09-19, 1 file, -10/+9)
Just for readability purposes. Although the field is mutable, it never changes within the function.
* demux: redo timed metadata (wm4, 2019-09-19, 6 files, -195/+201)
The old implementation didn't work for the OGG case. Discard the old shit code (instead of fixing it), and write new shit code. The old code was already over a year old, so it's about time to rewrite it for no reason anyway.

While it's true that the old code appears to be broken, the main reason to rewrite this is to make it simpler. While the amount of code seems to be about the same, both the concept and the actual tag handling are simpler. The result is probably a bit more correct.

The packet struct shrinks by 8 bytes. The fact that it wasted 8 bytes per packet for a rather obscure use case was the reason I started this at all (and then I found that OGG updates didn't work). While these 8 bytes aren't going to hurt, the packet struct was getting too bloated. If you buffer a lot of data, these extra fields will add up. Still quite some effort for 8 bytes. Fortunately, it's not like there are any managers that need to be convinced whether it's worth doing. The freedom to waste time on dumb shit.

The old implementation attached the current metadata to each packet. When the decoder read the packet, the packet's metadata was made current. The new implementation stores metadata as a separate list, and requires that the player frontend tells it the current playback time, which will be used to find the currently valid metadata. In both cases, the objective was to correctly update metadata even if a lot of data is buffered ahead (and to update it correctly when seeking within the demuxer cache).

The new implementation is actually slightly more correct, because it uses the playback time for the metadata lookup. Consider an audio filter which buffers 15 seconds (unfortunately such a filter exists): the old code would update the current title 15 seconds too early, while the new one does it correctly.

The new code also simplifies mixing the 3 metadata sources (global, per stream, ICY). We assume these aren't mixed in a meaningful way. The old code tried to be a bit more "exact". I didn't bother to look how the old code did this, but the new code simply always "merges" with the previous metadata, so if a newer tag removes a field, it's going to stick around anyway.

I tried to keep it simple. Other approaches include making metadata a special sh_stream with metadata packets. This would have been conceptually clean, but the implementation would probably have been unnatural (and doesn't match well with libavformat's API anyway). It would have been nice to make the metadata updates chapter points (makes a lot of sense for the intended use case, web radio current song information), but I don't think it would have been a good idea to make chapters suddenly so dynamic. (Still an idea to keep in mind; the new code actually makes it easier to work towards this.)

You could mention how subtitles are timed metadata, and actually are implemented as sparse packet streams in some formats. mp4 implements chapters as a special subtitle stream, AFAIK. (Ironically, this is very not-ideal for files. It would be useful for streaming like web radio, but mp4 is extremely bad for streaming by design for other reasons.)
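A sketch of the lookup idea described above (names are hypothetical, not the actual demux.c structures): metadata updates are kept as a time-ordered list, and the frontend's current playback time selects the last entry that is not in the future.

    #include <stddef.h>

    struct mp_tags;                 // mpv's tag container, used only by pointer here

    struct timed_metadata_sketch {
        double pts;                 // time from which this metadata is valid
        struct mp_tags *tags;
    };

    // Return the metadata valid at playback_pts, or NULL if none applies yet.
    // Assumes list[] is sorted by pts.
    static struct mp_tags *lookup_metadata(struct timed_metadata_sketch *list,
                                           int num, double playback_pts)
    {
        struct mp_tags *cur = NULL;
        for (int n = 0; n < num; n++) {
            if (list[n].pts > playback_pts)
                break;              // everything later is still in the future
            cur = list[n].tags;
        }
        return cur;
    }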
* demux_lavf: compensate timestamp resets for OGG web radio streams (wm4, 2019-09-19, 2 files, -5/+74)
Some OGG web radio streams use timestamp resets when a new song starts (you can find those in Xiph's directory - other streams there don't show this behavior). Basically, the OGG stream behaves like concatenated OGG files, and "of course" the timestamps will start at 0 again when the song changes. This is very inconvenient, and breaks the seekable demuxer cache. In fact, any kind of seeking will break.

This is more time wasted on Xiph's bullshit. No, having timestamp resets by design is not reasonable, and fuck you. I much prefer the awful ICY/mp3 streaming mess, even if that's lower quality and awful in its own way.

Maybe it wouldn't be so bad if libavformat could tell us WHERE THE FUCK THE RESET HAPPENS. But it doesn't, and the randomly changing timestamps are the only thing we get from its API. At this point, demux_lavf.c is like 90% hacks. But well, if libavformat applies this strange mixture of being clever for us vs. giving us unfiltered garbage (while pretending it abstracts everything, and hiding _useful_ implementation/low level details), there's not much we can do.

This timestamp linearizing would, in general, probably be better done after the decoder, because then we wouldn't need to deal with timestamp resets. But the main purpose of this change is to fix seeking within the demuxer cache, so we have to do it on the lowest level.

This can probably be applied to other containers and video streams too. But that is untested. Some further caveats are explained in the manpage.
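A hypothetical illustration of the linearizing idea (the names, the tolerance, and the exact heuristic are assumptions, not demux_lavf.c's actual logic): keep a per-stream offset, and when a timestamp jumps backwards past some tolerance, treat it as a reset and fold the jump into the offset.

    #define MP_NOPTS_VALUE (-0x1p+63)   // assumed "no timestamp" sentinel
    #define RESET_TOLERANCE 1.0         // assumed: ignore jitter below 1 second

    struct stream_ts_state {
        double ts_offset;   // accumulated correction
        double prev_pts;    // last linearized timestamp; init to MP_NOPTS_VALUE
    };

    static double linearize_ts(struct stream_ts_state *st, double pts)
    {
        if (pts == MP_NOPTS_VALUE)
            return pts;
        double out = pts + st->ts_offset;
        if (st->prev_pts != MP_NOPTS_VALUE && out < st->prev_pts - RESET_TOLERANCE) {
            // Timestamps jumped backwards: assume the source reset to ~0 and
            // continue from where the previous packet left off.
            st->ts_offset += st->prev_pts - out;
            out = pts + st->ts_offset;
        }
        st->prev_pts = out;
        return out;
    }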
* demux_lavf: add per-stream state (wm4, 2019-09-19, 1 file, -8/+17)
Seems like this will be useful later.
* demux_lavf: use common mpv/ffmpeg timestamp conversion function (wm4, 2019-09-19, 1 file, -4/+2)
Probably doesn't change anything, other than looking slightly better. In theory, the common function has some stuff that makes it more likely that timestamps round-trip through conversions properly, but I didn't confirm that.
* audio_buffer: fix some more theoretical UB (wm4, 2019-09-19, 1 file, -0/+3)
This may call memmove() with size==0 and a NULL data pointer. In addition to this being UB with memmove(), I think it's UB to do arithmetic on a NULL pointer too. Of course, this doesn't matter in practice at all, and is just stupidity to torture programmers.
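The kind of guard such a fix typically adds (illustrative only, not the literal audio_buffer.c change): skip the call and the pointer arithmetic entirely when there is nothing to move.

    #include <string.h>

    // Drop 'skip' bytes from the front of a buffer holding 'total' bytes.
    static void drop_prefix(char *data, size_t total, size_t skip)
    {
        if (data && total > skip)
            memmove(data, data + skip, total - skip);   // never memmove(NULL, NULL, 0)
    }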
* demux: refactor cache range init/deinit (wm4, 2019-09-19, 3 files, -59/+51)
Remove the duplicated creation of the first range. Explicitly destroy ranges, including the last one on final deinit.

It looks like this also fixes a leak of removed range structs, which was never noticed because they're so small, and were freed on final deinit due to having the demuxer as talloc parent.

This improves upon the previous commit too (that change should have been part of it, I guess). Sub-demuxers (demux_timeline only) now automatically don't use the cache (like it was intended by the previous commit). The cache is "initialized" (or disabled) last in the recursive call chain, which is messy, but this sub demuxer stuff FUCKING SUCKS, as mentioned in the previous commit message. This would be no problem if the caching layer and actual demuxer implementations were separate.

Most of this change has no direct purpose. It might make (de-)initialization of further cache experiments simpler.
* demux: really disable cache for sub-demuxers (wm4, 2019-09-19, 3 files, -6/+15)
It seems the so-called demuxer cache wasn't really disabled for sub-demuxers (timeline stuff). This was relatively harmless, since the actual packet data was shared anyway via refcounting. But with the addition of an mmap cache backend, this may change a lot. So strictly disable any caching for sub-demuxers.

This assumes that users of sub-demuxers (only demux_timeline.c by now?) strictly use demux_read_any_packet(), since demux_read_packet_async() will require some minor read-ahead if a low level packet read returned a packet for a different stream.

This requires some awkward messing with this fucking heap of trash. The thing that is really wrong here is that the demuxer API mixes different concepts, and sub-demuxers get the same API as decoders, and use the cache code.
* demux: handle accounting for index size differently (wm4, 2019-09-19, 1 file, -16/+25)
The demuxer cache tries to track the number of bytes allocated for the cache. In addition to the packet queue, the seek index is another data structure that roughly depends on the number of cached packets. So the index size should somehow be part of the total byte tracking.

Until now, this was handled with KF_SEEK_ENTRY_WORST_CASE, basically a shitty heuristic. It was a guess (and probably rather an upper bound than a lower bound). The implementation details made it annoying, and it was conceptually inaccurate too.

Change this, and instead simply add the index size to the total cache size. This essentially makes it part of the backbuffer. It's nice that this cleanly decouples it from the packet size tracking itself.

Since it's part of the backbuffer byte count now, packet pruning can't necessarily free enough space in the backbuffer anymore. Before this commit, the backbuffer consisted of packets only, so it was possible to reduce its size to 0 by pruning all packets up to the decoder reader position, at which point a packet was accounted as forward buffered. Now the index is added to this, and it can't be pruned. Replace the assert() because of this changed invariant.
* m_option: add "B" suffix to human-readable byte numbers (wm4, 2019-09-19, 1 file, -3/+5)
The string that the pretty printer returns is sometimes used on the OSD. I think it's pretty odd that quantities below 1 KB are shown as a bare number without a suffix. So use "B" for them.

For orthogonality, allow the same suffix when parsing. (Although strictly speaking, this is not a requirement of the option API. Option parsers don't need to accept pretty-printed strings.)
* common: add MP_IS_ALIGNED macro (wm4, 2019-09-19, 1 file, -0/+1)
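A plausible definition of such a macro (hedged sketch, not necessarily the exact line added to common.h): a power-of-two alignment check via bit masking.

    // True if x is a multiple of align (align must be a power of two).
    #define MP_IS_ALIGNED(x, align) (!((x) & ((align) - 1)))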
* packet: change len field from int to size_t (wm4, 2019-09-19, 2 files, -2/+2)
Why not. struct demux_packet doesn't change in size on 64 bit due to alignment padding.
* demux: fix assertion when switching tracks during backward playback (wm4, 2019-09-19, 1 file, -20/+20)
Someone who rams a knife into his own hand just to see what happens is normally put in a psychiatric ward. But in software, this is acceptable behavior. Programs are not supposed to crash just because a user did something unreasonably dumb.

Switching tracks during backward playback is such a thing. It triggered an assertion because the newly enabled stream was not properly initialized for backward playback. Fix this, and make it actually work (mostly; it still takes a "while" until playback recovers fully).

This actually makes some aspects of initialization slightly cleaner.
* player: ensure backward playback state is propagated on track switching (wm4, 2019-09-19, 4 files, -5/+14)
Track switching doesn't run reset_playback_state(), so a track enabled at runtime during backward playback would lead to a messed up state. This commit just applies a bad code monkey fix for that. It feels like there needs to be a much better way to propagate this state.
* demux: use binary search for cache seek index (wm4, 2019-09-19, 1 file, -7/+28)
Not sure if this is bug-free. You _always_ make bugs when writing a binary search from scratch (and such is the curse of C, though if I did this in C++ it would probably end in blood). It seems to work though, checking against the normal linear search.

It's slightly faster. Not much.

I wonder if the termination condition can be written in a nicer/more elegant way. I guess the fact that it's not an == predicate makes this slightly messier?
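A sketch of the lower-bound style search this describes (illustrative names, not the actual demux.c code): find the last index entry whose pts is not past the seek target, which is exactly the "not an == predicate" case.

    struct index_entry_sketch {
        double pts;
    };

    // Return the index of the last entry with pts <= target, or -1 if even
    // the first entry is already past the target. Assumes e[] is sorted.
    static int find_seek_entry(struct index_entry_sketch *e, int num, double target)
    {
        int lo = 0, hi = num;
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (e[mid].pts <= target)
                lo = mid + 1;   // usable entry; a later one might still qualify
            else
                hi = mid;       // entry is past the target
        }
        return lo - 1;
    }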
* demux: create full seek index for cached packets (wm4, 2019-09-19, 1 file, -26/+72)
The purpose of the seek index is to avoid having to iterate over the full linked list of cached packets, which should speed up seeking. Until now, there was an excuse of a seek index that didn't really work.

The idea of the old index was nice: have a fixed number of entries (no need to worry about exceeding memory requirements), which are "stretched" out as the cache gets bigger. The size of it was 16 entries, which in theory should speed up seeking by a factor of 16, given evenly spaced out entries. To achieve this even spacing, it attempted to "thin out" the index by half once the index was full (see e.g. the index_distance field). In my observations this didn't really work, and the distribution of the index entries was very uneven. Effectively, this did nothing. It probably worked once, and I can't be assed to debug my own shit code. Writing new shit code is more fun.

This time it's a complete index. It's kept in a ringbuffer (for easier FIFO style appending/removing), which is resized with realloc if it becomes too small. Actually, the index is not completely complete; it's still "thinned out" by a hardcoded time value (INDEX_STEP_SIZE). This should help with things like audio or crazy subtitle tracks (do they still create those?), where we can just iterate over a small part of the linked packet list to get the exact seek position. For example, for AAC audio tracks with typical samplerates/framesizes we'd iterate over about 50 packets in the linked list.

The results are good. Seeking in large caches is much faster now, apparently by at least 1 or 2 orders of magnitude. Part of this is because we don't need to touch every damn packet in a huge linked list (bad cache behavior - the index is a linear memory region instead), but "thinning out" the search space also helps. Both aspects can be easily tested (setting INDEX_STEP_SIZE to 0, and replacing e->pts with e->pkt->kf_seek_pts in find_seek_target()).

This does use more memory of course. In theory, we could tolerate memory allocation failures (the index is optional and only for performance), but I didn't bother and inserted an apologetic comment instead (have fun with the shit code). The memory usage doesn't seem to be that bad, though. Due to INDEX_STEP_SIZE it's bounded by the file duration too. Try to account for the additional memory usage with an approximation (see KF_SEEK_ENTRY_WORST_CASE). It's still a bit different, because the index needs a single, potentially large allocation.
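An illustrative version of the time-based thinning rule described above (the step value and names are assumptions; the real INDEX_STEP_SIZE lives in demux.c): only append an index entry if it is at least one step past the last indexed timestamp, so the index size is bounded by the stream duration rather than by the packet count.

    #include <stdbool.h>

    #define MP_NOPTS_VALUE (-0x1p+63)   // assumed "no timestamp" sentinel
    #define INDEX_STEP_SIZE 1.0         // assumed step in seconds

    // Decide whether a newly appended keyframe packet gets an index entry.
    static bool want_index_entry(double last_index_pts, double keyframe_pts)
    {
        return last_index_pts == MP_NOPTS_VALUE ||
               keyframe_pts >= last_index_pts + INDEX_STEP_SIZE;
    }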
* demux: simplify cache search and exit early (wm4, 2019-09-19, 1 file, -15/+10)
The search was slightly more complicated and slower than it had to be. It didn't assume that the packet list was sorted, which is responsible for much of this. (I think the search code was borrowed from demux_mkv.c, which does not sort index entries.) There was a half-hearted attempt to make it exit early, but it was mostly ineffective.

Simplify the code based on the assumption that the list is sorted. This lets it exit the search loop once the worst case candidate entry has been checked.
* demux: update some comments (wm4, 2019-09-19, 1 file, -15/+28)
Mostly about the packet queue and the