Commit message | Author | Age | Files | Lines
* demux_mkv: remove unnecessary parsing for vp9 (wm4, 2017-11-17, 2 files, -6/+2)

  We can finally get rid of this crap. Depends on a ffmpeg-mpv change. Always worked with Libav (ever since they fixed it properly).
* w32_common: move imm32.dll function to w32->api struct (pavelxdd, 2017-11-15, 1 file, -15/+12)

  For consistency with the already implemented shcore.dll function loading in w32->api: move the loading of imm32.dll into w32_api_load, and declare the pImmDisableIME function pointer in the w32->api struct. Remove the unloading of imm32.dll.
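  A minimal sketch of the pattern (struct layout and helper body are assumptions based on the names in the message, not the actual code):

      #include <windows.h>

      struct w32_api {
          BOOL (WINAPI *pImmDisableIME)(DWORD);
      };

      static void w32_api_load(struct w32_api *api)
      {
          // Keep the DLL loaded for the lifetime of the process; the
          // pointer simply stays NULL if imm32.dll is unavailable.
          HMODULE imm32 = LoadLibraryW(L"imm32.dll");
          api->pImmDisableIME = !imm32 ? NULL :
              (BOOL (WINAPI *)(DWORD))GetProcAddress(imm32, "ImmDisableIME");
      }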
* vo_gpu/context_android: Process surface resizes correctly (sfan5, 2017-11-14, 1 file, -10/+11)
* appveyor: use git submodule update --init (James Ross-Gowan, 2017-11-13, 1 file, -2/+1)

  Thanks @jeeb.
* demux_lavf: always give libavformat the filename when probing (wm4, 2017-11-12, 1 file, -1/+1)

  This gives the filename or URL to the libavformat probing logic, which might use the file extension as a "help" to decide which format the file is. This helps with mp3 files that have large id3v2 tags, and prevents the idiotic ffmpeg probing logic from thinking that an mp3 file is amr.

  (What we really want is knowing whether we _really_ need to feed more data to libavformat to detect the format, without having to pre-read excessive amounts of data for relatively normal streams.)
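  A rough sketch of the probing call (assumed usage of the libavformat API, not mpv's actual code); a non-empty filename lets the extension act as a probe hint:

      #include <libavformat/avformat.h>

      static const AVInputFormat *probe(const char *filename,
                                        uint8_t *buf, int buf_size)
      {
          AVProbeData pd = {
              .filename = filename, // previously mpv effectively passed ""
              .buf      = buf,      // needs AVPROBE_PADDING_SIZE zero padding
              .buf_size = buf_size,
          };
          return av_probe_input_format(&pd, 1);
      }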
* stream_libarchive, osdep: use stubs for POSIX 2008 locale on MinGW (wm4, 2017-11-12, 2 files, -0/+8)
* demux_playlist: support .url files (wm4, 2017-11-12, 1 file, -3/+15)

  Requested. Not tested due to lack of real samples. Fixes #5107.
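  For reference, a hypothetical helper showing the format (not the actual demux_playlist code): .url files are INI-style Internet Shortcuts, and the payload is a "URL=..." line, typically under an [InternetShortcut] section.

      #include <stdio.h>
      #include <string.h>

      static char *read_dot_url(FILE *f, char *buf, size_t n)
      {
          while (fgets(buf, (int)n, f)) {
              if (strncmp(buf, "URL=", 4) == 0) {
                  buf[strcspn(buf, "\r\n")] = '\0'; // strip line ending
                  return buf + 4;
              }
          }
          return NULL;
      }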
* build: enable libarchive by default (wm4, 2017-11-12, 1 file, -1/+0)

  Or libcve, as the vlc developers call it.
* vo_gpu: ra_gl: remove stride hack (wm4, 2017-11-12, 1 file, -4/+1)

  Same reasoning as in commit 9b5d062d36e3.
* stream_libarchive: workaround various types of locale braindeath (wm4, 2017-11-12, 2 files, -4/+36)
  Fix that libarchive fails to return filenames for UTF-8/UTF-16 entries. The reason is that it uses locales and all that garbage, and mpv does not set a locale.

  Both C locales and wchar_t are shitfucked retarded legacy braindeath. If the C/POSIX standard committee had actually competent members, these would have been deprecated or removed long ago. (I mean, they managed to remove gets().) To justify this emotional outbreak potentially insulting to unknown persons, I will write a lot of text. Those not comfortable with toxic language should pretend this is a religious text.

  C locales are supposed to be a way to support certain languages and cultures easier. One example is character codepages. Back when UTF-8 was not invented yet, there were only 255 possible characters, which is not enough for anything but English and some European languages. So they decided to make the meaning of a character dependent on the current codepage. The locale (LC_CTYPE specifically) determines what character encoding is currently used. Of course nowadays this is legacy nonsense. Everything uses UTF-8 for "char", and what doesn't is broken and terrible anyway. But the old ways stayed with us, and the stupidity of it as well.

  C locales were utterly moronic even when they were invented. The locale (via setlocale()) is global state, and global state is not a reasonable way to do anything. It will break libraries, or well modularized code. (The latter would be forced to strictly guard all entrypoints to set/restore locales, assuming a single threaded world.) On top of that, setting a locale randomly changes the semantics of a bunch of standard functions. If a function respects locale, you suddenly can't rely on it to behave the same on all systems. Some behavior can come as a surprise, and of course it will be dependent on the region of the user (it doesn't help that most software is US-centric, and the US locale is almost like the C locale, i.e. almost what you expect).

  Idiotically, locales were not just used to define the current character encoding, but the concept was used for a whole lot of other things, like e.g. whether numbers should use "," or "." as decimal separator. The latter issue is actually much worse, because it breaks basic string conversion or parsing of numbers for the purpose of interacting with file formats and such.

  Much can be said about how retarded locales are, even beyond what I just wrote, or will write below. They are so hilariously misdesigned and insufficient, I can't even fathom how this shit was _standardized_. (In any case, that meant everyone was forced to implement it.) Many C functions can't even do it correctly. For example, the character set encoding can be a multibyte encoding (not just UTF-8, but awful garbage like Shift JIS (sometimes called SHIT JIZZ)), yet functions like toupper() can return only 1 byte. Or just take the fact that the locale API tries to define standard paper sizes (LC_PAPER) or telephone number formatting (LC_TELEPHONE). Who the fuck uses this, or would ever use this?
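  To make the decimal separator problem concrete, a tiny self-contained demo (the locale name is an assumption and must exist on the system for the change to take effect):

      #include <locale.h>
      #include <stdio.h>

      int main(void)
      {
          printf("%f\n", 3.14);                 // "3.140000" in the C locale
          setlocale(LC_NUMERIC, "de_DE.UTF-8"); // if available
          printf("%f\n", 3.14);                 // now possibly "3,140000"
          return 0;
      }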
  But the badness doesn't stop here. At some point, they invented threads. And they put absolutely no thought into how threads should interact with locales. So they kept locales as global state. Because obviously, you want to be able to change the semantics of basic string processing functions _while_ they're running, right? (Any thread can call setlocale() at any time, and it's supposed to change the locale of all other threads.) At this point, how the fuck are you supposed to do anything correctly? You can't even temporarily switch the locale with setlocale(), because it would asynchronously fuck up the other threads. All you can do is to enforce a convention not to set anything but the C locale (this is what mpv does), or to duplicate standard functions using code that doesn't query the locale (this is what e.g. libass does, a close dependency of mpv).

  Imagine they had done this for certain other things. Like errno, with all the brokenness of the locale API. This simply wouldn't have worked, shit would just have been too broken. So they didn't. But locales give a delicious sweet spot of brokenness, where things are broken enough to cause neverending pain, but not broken enough that enough effort would have been spent to fix it completely.

  On that note, standard C11 actually can't stringify an error value. It does define strerror(), but it's not thread safe, even though C11 supports threads. The idiots could just have defined it to be thread safe. Even if your libc is horrible enough that it can't return string literals, it could just use some thread-local buffer. Because C11 does define thread-local variables. But hey, why care about details, if you can just create a shitty standard? (POSIX defines strerror_r(), which "solves" this problem, while still not making strerror() thread safe.)

  Anyway, back to threads. The interaction of locales and threads makes no sense. Why would you make locales process-global? Who even wanted it to work this way? Who decided that it should keep working this way, despite being so broken (and certainly causing implementation difficulties in libc)? Was it just a fucked up psychopath?

  Several decades later, the moronic standard committees noticed that this was (still is) kind of a bad situation. Instead of fixing the situation, they added more garbage on top of it. (Probably for the sake of "compatibility".) Now there is a set of new functions, which allow you to override the locale for the current thread. This means you can temporarily override and restore the locale on all entrypoints of your code (like you could with setlocale(), before threads were invented). And of course not all operating systems or libcs implement this. For example, I'm pretty sure Microsoft doesn't. (Microsoft got to fuck it up as usual, and only provides _configthreadlocale(). This is shitfucked on its own, because it's GLOBAL STATE to configure that GLOBAL STATE should not be GLOBAL STATE, i.e. completely broken garbage, because it requires agreement over all modules/libraries what behavior should be used. I mean, sure, making setlocale() affect only the current thread would have been the reasonable behavior. Making this behavior configurable isn't, because you can't rely on what behavior is active.)

  POSIX showed some minor decency by at least introducing some variations of standard functions, which have a locale argument (e.g. toupper_l()). You just pass the locale which you want to be used, and don't have to do the set locale/call function/restore locale nonsense.
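  A small sketch of the strerror_r() workaround mentioned above (mind the trap: with glibc and _GNU_SOURCE defined, strerror_r() instead returns a char* and may not fill the buffer at all):

      #include <string.h>
      #include <stdio.h>

      static void print_error(int err)
      {
          char buf[256];
          if (strerror_r(err, buf, sizeof(buf)) == 0) // XSI/POSIX variant
              fprintf(stderr, "error: %s\n", buf);
      }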
  But OF COURSE they fucked this up too. In no less than 2 ways:

  - There is no statically available handle for the C locale, so you have to initialize and store it somewhere, which makes it harder to make utility functions safe that call locale-affected standard functions and expect C semantics. The easy solution, using pthread_once() and a global variable with the created locale, will not be easily accepted by pedantic assholes, because they'll worry about allocation failure, or leaking the locale when using this in library code (and then unloading the library). Or you could have complicated library init/uninit functions, which bring a big load of their own mess. Same for automagic DLL constructors/destructors.

  - Not all functions have a variant that takes a locale argument, and they missed even some important ones, like snprintf() or strtod(). WHAT THE FUCK WHAT THE FUCK WHAT THE FUCK WHAT THE FUCK WHAT THE FUCK WHAT THE FUCK WHAT THE FUCK WHAT THE FUCK WHAT THE FUCK

  I would like to know why it took so long to standardize a half-assed solution that, apart from being conceptually half-assed, is even incomplete and insufficient. The obvious way to fix this would have been:

  - deprecate the entire locale API and its use, and make it a NOP
  - make UTF-8 the standard character type
  - make the C locale behavior the default
  - add new APIs that explicitly take locale objects
  - provide an emulation layer that can be used to transparently build legacy code without breaking it

  But this wouldn't have been "compatible", and the apparently incompetent standard committees would have never accepted this. As if anyone actually used this legacy garbage, except other legacy garbage. Oh yeah, and let's care a lot about legacy compatibility, and let's not care at all about modern code that either has to suffer from this, or subtly breaks when the wrong locales are active.

  Last but not least, the UTF-8 locale name is apparently not even standardized. At the moment I'm trying to use "C.UTF-8", which is apparently glibc _and_ Debian specific. Got to use every opportunity to make correct usage of UTF-8 harder. What luck that this commit is only for some optional, relatively obscure mpv feature.

  Why is the C locale not UTF-8? Why did POSIX not standardize a UTF-8 locale? Well, according to something I heard a few years ago, they're considering disallowing UTF-8 as locale, because UTF-8 would violate certain invariants expected by C or POSIX. (But I'm not sure if I remember this correctly - probably better not to rage about it.)

  Now, on to libarchive. libarchive intentionally uses the locale API and all the broken crap around it to "convert" UTF-8 or UTF-16 (as contained in reasonably sane archive formats) to "char*". This is a good start!

  Since glibc does not think that the C locale uses UTF-8, this fails for mpv. So trying to use archive_entry_pathname() to get the archive entry name fails if the name contains non-ASCII characters.

  Maybe use archive_entry_pathname_utf8()? Surely that should return UTF-8, since its name seems to indicate that it returns UTF-8. But of fucking course it doesn't! libarchive's horribly convoluted code (that is full of locale API usage and other legacy shit, as well as ifdefs and OS-specific code, including Windows and fucking Cygwin) somehow fucks up and fails if the locale is not set to UTF-8. I made a PR fixing this in libarchive almost 2 years ago, but it was ignored.

  So, would archive_entry_pathname_w() as fallback work? No, why would it?
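  A sketch of the _l-suffix pattern, including the one-time init dance the first point above complains about (pthread_once() plus a global handle; illustrative, not mpv's code):

      #include <ctype.h>
      #include <locale.h>
      #include <pthread.h>

      static locale_t c_loc;
      static pthread_once_t c_loc_once = PTHREAD_ONCE_INIT;

      static void init_c_loc(void)
      {
          c_loc = newlocale(LC_ALL_MASK, "C", (locale_t)0);
      }

      static int toupper_c(int ch)
      {
          pthread_once(&c_loc_once, init_c_loc);
          return c_loc ? toupper_l(ch, c_loc) : ch; // newlocale() can fail
      }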
  Of course this _also_ involves shitfucked code that calls shitfucked standard functions (or OS-specific ifdeffed shitfuck). The truth is that at least glibc changes the meaning of wchar_t depending on the locale. Unlike most people think, wchar_t is not standardized to be a UTF variant (or even Unicode) - it's an encoding that uses basic units that can be larger than 8 bit. It's an implementation-defined thing. Windows defines it to 2 bytes and UTF-16, and glibc defines it to 4 bytes and UTF-32, but only if a UTF-8 locale is set (apparently).

  Yes. Every libarchive function dealing with strings has 3 variants: plain, _utf8, and _w. And none of these work if the locale is not set. I cannot fathom why they even have a wchar_t variant, because it's redundant and fucking useless for any modern code. Writing a UTF-16 to UTF-8 conversion routine is maybe 3 pages of code, or a few lines if you use iconv. But libarchive uses all this glorious bullshit, and ends up with 3 not working API functions, and with over 4000 lines of its own string abstraction code with gratuitous amounts of ifdefs and OS-dependent code that breaks in a fairly common use case.

  So what we do is:

  - Use the idiotic POSIX 2008 API (uselocale() etc.). (Too bad for users who try to build this on a system that doesn't have these - hopefully none are left in 2017. But if there are, torturing them with obscure build errors is probably justified. Might be bad for Windows though, which is a very popular platform except on phones.)
  - Use the "C.UTF-8" locale, which is probably not 100% standards-compliant, but works on my system, so it's fine.
  - Guard every libarchive call with uselocale() + restoring the locale (sketched below).
  - Be lazy and skip some libarchive calls.

  Look forward to the unlikely and astonishingly stupid bugs this could produce.

  We could also just set a C UTF-8 locale in main (since that would have no known negative effects on the rest of the code), but this won't work for libmpv.

  We assume that uselocale() never fails. In an unexplainable stroke of luck, POSIX made the semantics of uselocale() nice enough that user code can ignore failures without introducing crash or security bugs, even if there should be an implementation fucked up enough where it's actually possible that uselocale() fails even with valid input.

  With all this shitty ugliness added, it finally works, without fucking up other parts of the player. This is still less bad than that time when libquvi fucked up OpenGL rendering, because calling a libquvi function would load some proxy abstraction library, which in turn loaded a KDE plugin (even if KDE was not used), which in turn called setlocale() because Qt does this, and consequently made the mpv GLSL shader generation code emit "," instead of "." for numbers, and of course only for users who had that KDE plugin installed, and lived in a part of the world where "." is not used as decimal separator.

  All in all, I believe this proves that software developers as a whole and as a culture produce worse results than drug-addicted, butt-fucked monkeys randomly hacking on typewriters while inhaling the fumes of a radioactive dumpster fire fueled by Chinese plastic toys for children and Elton John/Justin Bieber crossover CDs, for all eternity.
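  A minimal sketch of that guard (helper name and the global are made up; assumes "C.UTF-8" exists, which as noted is glibc/Debian specific, and that the global was created once via newlocale()):

      #include <locale.h>
      #include <archive_entry.h>

      static locale_t utf8_loc; // created once, e.g. newlocale(LC_ALL_MASK, "C.UTF-8", (locale_t)0)

      static const char *guarded_pathname(struct archive_entry *entry)
      {
          locale_t prev = uselocale(utf8_loc); // affects this thread only
          const char *name = archive_entry_pathname(entry);
          uselocale(prev);                     // restore on every exit path
          return name;
      }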
* vo_gpu: d3d11: remove flipped texture upload hack (James Ross-Gowan, 2017-11-12, 1 file, -8/+0)

  Made unnecessary by 4a6b04bdb930.
* osx: fix the bundle $PATH yet again (Akemi, 2017-11-11, 1 file, -1/+1)

  we have 5 parameters for the string but only 4 were being used.
* cocoa: always return the target NSRect when in fullscreen (Akemi, 2017-11-11, 1 file, -1/+4)

  there is no need to calculate a new rectangle when in fullscreen since we always want to cover the whole screen. so just return the target rectangle.
* demux: avoid queue overflow warning when joining two ranges (wm4, 2017-11-11, 1 file, -1/+3)

  If the backbuffer is much larger than the forward buffer, and if you join a small range with a large range (larger than the forward buffer), then the seek issued to the end of the range after joining will overflow the queue. Normally, read_more will be false when the forward buffer is full, but the resume seek after joining will set need_refresh to true, which forces more reading and thus triggers the overflow warning.

  Attempt to fix this by not setting read_more to true on refresh seeks; set prefetch_more instead (a condensed sketch of this rule follows at the end of this message). read_more will still be set if an A/V stream has no data.

  This doesn't help with the following problems related to using refresh seeks for track switching:

  - If the forward buffer is full, then enabling another track will obviously immediately overflow the queue, and immediately lead to marking the new track as having no more data (i.e. EOF). We could cut down the forward buffer or so, but there's no simple way to implement it. Another possibility would be dropping all buffers and trying to resume again, but this would likely be complex as well.

  - Subtitle tracks will not even show a warning (because they are sparse, and we have no way of telling whether a packet is missing, or there's just no packet near the current position). Before this commit, enabling an empty subtitle track would probably have overflown the queue, because ds->refreshing was never set to true. Possibly this could be solved by determining a demuxer read position, which would reflect until which PTS all subtitle packets should have been demuxed.

  The forward buffer limit was intended as a last safeguard to avoid excessive memory usage against badly interleaved files or decoders going crazy (up to reading the whole file into memory and OOM'ing the user's system). It's not good at all for limiting prefetch. Possible solutions include having another smaller limit for prefetch, or maybe having only a total buffer limit and discarding back buffer if more data has to be read. The current solution is making the forward buffer larger than the forward duration (--cache-secs) would require, but of course this depends on the stream's bitrate.
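  The condensed sketch (field names mirror the prose; the structures are hypothetical, not the actual demux.c ones):

      #include <stdbool.h>

      struct ds { bool eager, has_data, refreshing; };

      static void update_reads(const struct ds *s, int n,
                               bool *read_more, bool *prefetch_more)
      {
          *read_more = *prefetch_more = false;
          for (int i = 0; i < n; i++) {
              if (s[i].eager && !s[i].has_data)
                  *read_more = true;     // an A/V stream ran dry
              if (s[i].refreshing)
                  *prefetch_more = true; // still respects buffer limits
          }
      }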
* demux: export demuxer cache sizes in bytes (wm4, 2017-11-10, 4 files, -0/+27)

  Plus sort of document them, together with the already existing undocumented fields. (This is mostly for debugging, so use is discouraged.)
* demux: use seekable cache for network by default, bump prefetch limit (wm4, 2017-11-10, 2 files, -8/+18)

  The option for enabling it now has an "auto" choice, which is the default, and which will enable it if the media is thought to be played via network or if the stream cache is enabled (same logic as --cache-secs). Also bump the --cache-secs default from 10 to 120.
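  For illustration, an mpv.conf sketch of the described defaults (option names as used above; since these become the defaults, setting them explicitly is redundant):

      demuxer-seekable-cache=auto
      cache-secs=120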
* demux_mkv: fix potential uninitialized variable read (wm4, 2017-11-10, 1 file, -2/+3)
* demux: set default back buffer to some high value (wm4, 2017-11-10, 2 files, -2/+4)

  Some back buffer is required to make the immediate forward range seekable. This is because the back buffer limit is strictly enforced. Just set a rather high back buffer by default. It's not used if --demuxer-seekable-cache is disabled, so this is without risk.
* demux: limit number of seek ranges to a static maximum (wm4, 2017-11-10, 1 file, -5/+20)

  Limit the number of cached ranges to MAX_SEEK_RANGES, which is the same as the maximum number of exported seek ranges. It makes no sense to keep more, as the user won't see them anyway. Remove the smallest ones to enforce the limit if the number grows too high.
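  A toy sketch of the pruning rule (types assumed; MAX_SEEK_RANGES is named in the message, but its value here is made up):

      #include <stddef.h>

      #define MAX_SEEK_RANGES 10
      struct range { double start, end; };

      static void prune_ranges(struct range *r, size_t *num)
      {
          while (*num > MAX_SEEK_RANGES) {
              size_t min = 0;
              for (size_t i = 1; i < *num; i++) {
                  if (r[i].end - r[i].start < r[min].end - r[min].start)
                      min = i;
              }
              r[min] = r[--*num]; // overwrite the smallest with the last
          }
      }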
* demux: speed up cache seeking with a coarse index (wm4, 2017-11-10, 1 file, -1/+54)

  Helps a little bit, I guess. But in general, t(h)rashing the cache kills us anyway.

  This has a fixed number of index entries. Entries are added/removed as packets go through the packet queue. Only keyframes after index_distance seconds are added. If there are too many keyframe packets, the existing index is reduced by half, and index_distance is doubled. This should provide somewhat even spacing between the entries.
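  A self-contained toy version of that scheme (struct and MAX_INDEX are assumptions; only the halving/doubling logic is taken from the message):

      #include <stddef.h>

      #define MAX_INDEX 16
      struct coarse_index {
          double pts[MAX_INDEX];
          size_t num;
          double distance; // minimum spacing in seconds, starts > 0
      };

      static void index_add_keyframe(struct coarse_index *ix, double pts)
      {
          if (ix->num && pts < ix->pts[ix->num - 1] + ix->distance)
              return; // too close to the previous entry
          if (ix->num == MAX_INDEX) {
              // Halve the index and double the spacing to make room.
              for (size_t i = 0; i < ix->num / 2; i++)
                  ix->pts[i] = ix->pts[i * 2];
              ix->num /= 2;
              ix->distance *= 2;
          }
          ix->pts[ix->num++] = pts;
      }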
* demux: avoid wasting time by stopping packet search as early as possible (wm4, 2017-11-10, 1 file, -1/+3)

  The packet queue is sorted, so we can stop the search once we have found a packet and the next packet in the queue has a higher PTS than the seek PTS (for the sake of SEEK_FORWARD, we still consider the first packet with a higher PTS).

  Also, as a mostly cosmetic change which might be "faster", check target for NULL instead of checking target_diff for a magic float value.
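  A condensed sketch of the early exit (hypothetical types mirroring the prose; the real code also handles the SEEK_FORWARD flag):

      #include <stddef.h>

      struct pkt { double pts; int keyframe; struct pkt *next; };

      static struct pkt *find_target(struct pkt *head, double pts)
      {
          struct pkt *target = NULL;
          for (struct pkt *p = head; p; p = p->next) {
              if (p->keyframe && p->pts <= pts)
                  target = p;
              // Sorted queue: once we have a candidate and the current
              // packet is past the target, nothing better can follow.
              if (target && p->pts > pts)
                  break;
          }
          return target;
      }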
* demux: simplify remove_packet() function (wm4, 2017-11-10, 1 file, -26/+12)

  Turns out this is only ever used to remove the head element anyway.
* demux: fix failure to join ranges with subtitles in some cases (wm4, 2017-11-10, 1 file, -4/+12)

  Subtitle streams are sparse, and no overlap is required to correctly join two cached ranges. This was not correctly handled iff the two ranges had different subtitle packets close to the join point.
* demux: reverse which range is reused when joining them (wm4, 2017-11-10, 1 file, -25/+22)

  Which one to use doesn't really matter, but reusing the first one will probably be slightly more convenient later on.
* demux: fix a race condition with async seeking (wm4, 2017-11-10, 1 file, -3/+4)

  demux_add_packet() must completely ignore any packets that are added while a queued seek is not initiated yet. The main issue is that after setting in->seeking==true, the central lock is released, and it can take "a while" until it's reacquired on the demux thread and the seek is actually initiated. During that time, packets could be read and added that have nothing to do with the new state.
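  A toy sketch of the guard (hypothetical types; the real check sits in demux_add_packet() under the central lock):

      #include <stdbool.h>
      #include <stdlib.h>

      struct pkt { struct pkt *next; };
      struct demux_in { bool seeking; struct pkt *head, *tail; };

      static void add_packet_locked(struct demux_in *in, struct pkt *pkt)
      {
          if (in->seeking) {
              free(pkt); // queued seek not initiated yet: stale packet
              return;
          }
          pkt->next = NULL;
          if (in->tail)
              in->tail->next = pkt;
          else
              in->head = pkt;
          in->tail = pkt;
      }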
* demux: get rid of an unnecessary field (wm4, 2017-11-10, 1 file, -15/+13)
* vo_gpu: never pass flipped images to ra or ra backends (wm4, 2017-11-10, 1 file, -2/+7)

  Seems like the last refactor to this code broke playing flipped images, at least with --opengl-pbo --gpu-api=opengl.

  Flipping is rather a shitmess. The main problem is that OpenGL does not support flipped uploading. The original vo_gl implementation considered it important to handle the flipped case efficiently, so instead of uploading the image line by line backwards, it uploaded it flipped, and then flipped it in the renderer (basically the upload path ignored the flipping).

  The ra code and backends probably have an insane and inconsistent mix of semantics, so fix this by never passing them flipped images in the first place. In the future, the backends should probably support flipped images directly.

  Fixes #5097.
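  A toy illustration of un-flipping at upload time (helper assumed, not the actual ra code): copy rows backwards so the backend only ever sees a top-down image.

      #include <stdint.h>
      #include <string.h>

      static void copy_unflipped(uint8_t *dst, const uint8_t *src,
                                 size_t stride, int h)
      {
          for (int y = 0; y < h; y++)
              memcpy(dst + (size_t)y * stride,
                     src + (size_t)(h - 1 - y) * stride, stride);
      }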
* demux: attempt to accurately reflect seek range with muxed subtitles (wm4, 2017-11-10, 1 file, -5/+33)

  If subtitles are part of the stream, determining the seekable range becomes harder. Subtitles are sparse, and can have packets in irregular intervals, or even completely lack packets. The usual logic of computing the seek range by the min/max packet timestamps fails.

  Solve this by making the only assumption we can make: subtitle packets are implicitly demuxed along with other packets. We also assume perfect interleaving for this, but you really can't do anything with sparse packets that makes sense without this assumption.

  One special case is if we prune sparse packets within the current seekable range. Obviously this should limit the seekable range to after the pruned packet.
* demux: reduce indentation for two functions (wm4, 2017-11-10, 1 file, -37/+36)

  Remove the single-exit style, which added a huge if statement containing everything, just for some corner case.
* demux: some minor mostly cosmetics (wm4, 2017-11-10, 1 file, -13/+15)

  None of these change functionality in any way (although the log level changes obviously change the terminal output).
* demux: simplify a function (wm4, 2017-11-10, 1 file, -21/+19)

  update_stream_selection_state() doesn't need all these arguments. Not sure what I was thinking here.
* demux: change how refreshes on track switching are handled (wm4, 2017-11-10, 1 file, -66/+59)

  Instead of weirdly deciding this on every packet read, with the code far away from where it matters, just run it where it's actually needed.
* demux: get rid of weird backwards loop (wm4, 2017-11-10, 1 file, -1/+1)

  A typical idiom for calling functions that remove something from the array being iterated, but it's not needed here. I have no idea why this was ever done.
* demux: avoid broken readahead when joining ranges (wm4, 2017-11-10, 1 file, -4/+5)

  Setting ds->refreshing for unselected streams could lead to a nonsensical queue overflow warning, because read_packet() took it as a reason to continue reading.

  Also add some more information to the queue overflow warning (even if that one doesn't have anything to do with this bug - it was for unselected streams only).
* demux: reduce difference between threaded and non-threaded mode (wm4, 2017-11-10, 1 file, -27/+35)

  This fixes an endless loop with threading disabled, for example when playing a file with an external subtitle file and seeking to the beginning. Something would set in->seeking, but the seek was never executed, which made demux_read_packet() loop endlessly. (External subtitles use non-threaded mode for whatever reasons.)

  Fix this by making the unthreaded code execute the worker thread body, which reduces the difference in logic.
* demux: support multiple seekable cached ranges (wm4, 2017-11-09, 4 files, -230/+610)

  Until now, the demuxer cache was limited to a single range. Extend this to multiple ranges. Should be useful for slow network streams.

  This commit changes a lot in the internal demuxer cache logic, so there's a lot of room for bugs and regressions. The logic without demuxer cache is mostly untouched, but also involved with the code changes. Or in other words, this commit probably fucks up shit.

  There are two things which make multiple cached ranges rather hard:

  1. the need to resume the demuxer at the end of a cached range when seeking to it

  2. joining two adjacent ranges when the lower range "grows" into it (and resuming the demuxer at the end of the new joined range)

  "Resuming" the demuxer means that we perform a low-level seek to the end of a cached range, and properly append new packets to it, without adding packets multiple times or creating holes due to missing packets. Since audio and video never line up exactly, there is no clean "cut" possible at which you could resume the demuxer cleanly (for 1.) or which you could use to detect that two ranges are perfectly adjacent (for 2.). The way the demuxer interleaves multiple streams is also unpredictable. Typically you will have to expect that it randomly allows one of the streams to be ahead by a bit, and so on.

  To deal with this, we have heuristics in place to detect when one packet equals or is "behind" a packet that was demuxed earlier. We reuse the refresh seek logic (used to "reread" packets into the demuxer cache when enabling a track), which checks for certain packet invariants. Currently, it observes whether either the raw packet position or the packet DTS is strictly monotonically increasing. If neither of them holds, we discard old ranges when creating a new one. This heavily depends on the file format and the demuxer behavior. For example, not all file formats have DTS, and the packet position can be unset due to libavformat not always setting it (e.g. when parsers are used).

  At the same time, we must deal with all the complicated state used to track prefetching and seek ranges. In some complicated corner cases, we just give up and discard other seek ranges, even if the previously me
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Until now, the demuxer cache was limited to a single range. Extend this to multiple range. Should be useful for slow network streams. This commit changes a lot in the internal demuxer cache logic, so there's a lot of room for bugs and regressions. The logic without demuxer cache is mostly untouched, but also involved with the code changes. Or in other words, this commit probably fucks up shit. There are two things which makes multiple cached ranges rather hard: 1. the need to resume the demuxer at the end of a cached range when seeking to it 2. joining two adjacent ranges when the lowe range "grows" into it (and resuming the demuxer at the end of the new joined range) "Resuming" the demuxer means that we perform a low level seek to the end of a cached range, and properly append new packets to it, without adding packets multiple times or creating holes due to missing packets. Since audio and video never line up exactly, there is no clean "cut" possible, at which you could resume the demuxer cleanly (for 1.) or which you could use to detect that two ranges are perfectly adjacent (for 2.). The way how the demuxer interleaves multiple streams is also unpredictable. Typically you will have to expect that it randomly allows one of the streams to be ahead by a bit, and so on. To deal with this, we have heuristics in place to detect when one packet equals or is "behind" a packet that was demuxed earlier. We reuse the refresh seek logic (used to "reread" packets into the demuxer cache when enabling a track), which checks for certain packet invariants. Currently, it observes whether either the raw packet position, or the packet DTS is strictly monotonically increasing. If none of them are true, we discard old ranges when creating a new one. This heavily depends on the file format and the demuxer behavior. For example, not all file formats have DTS, and the packet position can be unset due to libavformat not always setting it (e.g. when parsers are used). At the same time, we must deal with all the complicated state used to track prefetching and seek ranges. In some complicated corner cases, we just give up and discard other seek ranges, even if the previously me