path: root/player/video.c
Commit log (each entry: subject, author, date, files changed, lines -removed/+added)
* f_decoder_wrapper: replace most public fields with setters/getters (wm4, 2020-02-29, 1 file, -4/+5)

  I may (optionally) move decoding to a separate thread in a future change. It's a bit attractive to move the entire decoder wrapper to there, so if the demuxer has a new packet, it doesn't have to wake up the main thread, and can directly wake up the decoder. (Although that's bullshit, since there's a queue in between, and libavcodec's multi-threaded decoding plays cross-threads ping pong with packets anyway. On the other hand, the main thread would still have to shuffle the packets around, so whatever, just seems like better design.)

  As preparation, there shouldn't be any mutable state exposed by the wrapper. But there's still a large number of corner-caseish crap, so just use setters/getters for them.

  This recorder thing will inherently not work, so it'll have to be disabled if threads are used.

  This is a bit painful, but probably still the right thing. Like speculatively pulling teeth.
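
  A minimal sketch of the pattern this commit describes (illustrative only, not mpv's actual code; all names are invented): once no mutable state is public, a lock inside the accessors is enough to let a separate thread own the wrapper later.

      #include <pthread.h>
      #include <stdbool.h>
      #include <stdio.h>

      struct wrapper {
          pthread_mutex_t lock;
          bool play_dir_forward;   /* private; callers must use the accessors */
      };

      static void wrapper_set_play_dir(struct wrapper *w, bool fwd)
      {
          pthread_mutex_lock(&w->lock);
          w->play_dir_forward = fwd;
          pthread_mutex_unlock(&w->lock);
      }

      static bool wrapper_get_play_dir(struct wrapper *w)
      {
          pthread_mutex_lock(&w->lock);
          bool r = w->play_dir_forward;
          pthread_mutex_unlock(&w->lock);
          return r;
      }

      int main(void)
      {
          struct wrapper w = { .lock = PTHREAD_MUTEX_INITIALIZER, .play_dir_forward = true };
          wrapper_set_play_dir(&w, false);
          printf("forward=%d\n", wrapper_get_play_dir(&w));
          return 0;
      }
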
* player: dumb seeking related stuff, make audio hr-seek default (wm4, 2020-02-28, 1 file, -24/+38)

  Try to deal with various corner cases. But when I fix one thing, another thing breaks. (And it's 50/50 whether I find the breakage immediately or a few months later.) So results may vary.

  The default for --hr-seek is changed to "default" (not creative enough to find a better name). In this mode, audio seeking is exact if there is no video, or if the video has only a single frame. This change is actually pretty dumb, since audio frames are usually small enough that exact seeking does not really add much. But it gets rid of some weird special cases.

  Internally, the most important change is that is_coverart and is_sparse handling is merged. is_sparse was originally just a special case for weird .ts streams that have the corresponding low-level flag set. The idea is that they're pretty similar anyway, so this would reduce the number of corner cases. But I'm not sure if this doesn't break the original intended use case for it (I don't have a sample anyway).

  This changes last-frame handling, and respects the duration of the last frame only if audio is disabled. This is mostly "coincidental" due to the need to make seeking past EOF trigger player exit, and is caused by setting STATUS_EOF early. On the other hand, this might have been this way before (see removed chunk close to it).
* player: set playback_pts in hr-seek past EOF case (wm4, 2020-02-28, 1 file, -1/+0)

  Hr-seek past the last frame instantly enters EOF, which means handle_playback_time() will not set playback_pts to the video PTS (as all video frames are skipped), which leads to the playback time being taken from the last seek target. This results in confusing behavior, especially since the seek time will be clipped to the file duration for display, but not for further relative seeks.

  Obviously, the time should be set to the last video frame, so use the last video frame as fallback if both audio and video have ended. Also, since the same problem exists with audio-only playback, add a fallback for audio PTS too. We don't know which was the "last" fragment of media played (to decide whether to use the audio or video PTS as the fallback), but it doesn't matter since the maximum works.

  This could lead to some undesired effects. In particular the audio PTS is basically a bad guess, and is for example not clipped against --end. (But the ridiculous way audio syncing and clamping currently works, I'm not going to touch that shit unless I rewrite it completely.)

  The cover art case is slightly broken: using --keep-open with keyframe seeks will result in 0 as playback PTS (the video PTS). OK, who cares, it got late.

  Also casually get rid of last_vo_pts, since that barely made any sense at all.

  Fixes: #7487
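
  A tiny standalone illustration of the "the maximum works" fallback described above (invented helper, not mpv code; NAN stands in for "PTS unknown"):

      #include <math.h>
      #include <stdio.h>

      /* Pick whichever end-of-stream PTS is larger, treating NAN as unknown. */
      static double pts_max(double a, double b)
      {
          if (isnan(a)) return b;
          if (isnan(b)) return a;
          return a > b ? a : b;
      }

      int main(void)
      {
          double video_end_pts = 9.96;   /* last video frame shown */
          double audio_end_pts = NAN;    /* audio already ended / unknown */
          printf("fallback playback_pts = %f\n", pts_max(video_end_pts, audio_end_pts));
          return 0;
      }
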
* player: remove stale last frame references (wm4, 2020-02-28, 1 file, -2/+5)

  The seeking logic saves the last video frame it has seen (for example for being able to seek to the last frame, or backstepping). Unfortunately, the frame was fed back to the filtering pipeline in situations when it shouldn't have. Then it's an out of order frame, because it really saves the last _discarded_ frame.

  For example, seeking to the end of a file with --keep-open, shift+up, shift+down => invalid video pts warning due to saved_frame being fed back.

  Explicitly discard saved_frame when it's obviously not needed anymore.

  The removed accesses to "r" are strictly speaking unrelated (just const-propagating them).
* player: make screenshot each-frame mode more accurate (wm4, 2020-02-07, 1 file, -2/+0)

  Due to asynchronicity, we generally can't guarantee that a video frame matches up with other events such as playback time change exactly (since decoding, presentation, and property update all happen at different times).

  This is a complaint in the referenced bug report, where screenshot filenames in each-frame screenshot mode did not use the correct timestamp, and instead were lagging behind by 1 frame. But in this case, synchronicity was already pretty much forced with wait calls. The only problem was that the playback time was updated at a later time, which results in the observed 1 frame lag. Fix this by moving the place where the screenshot is triggered in this mode.

  Normal screenshots may still have the old problem. There is no effort made to guarantee the timestamps absolutely line up, same as with the OSD. (If you want a guarantee, you need to use a video filter, such as libavfilter's drawtext. These will obviously use the proper timestamp, instead of going through the somewhat asynchronous property etc. system in the player frontend.)

  Fixes: #7433
* player: avoid underrun wakeup loop (wm4, 2019-12-16, 1 file, -1/+8)

  The VO underrun detection (just a weak heuristic) added in commit f26dfb flagged the underrun state every time it was checked, and since the check happened in every playloop iteration, this caused the playloop to wake up itself on every iteration. It burned an entire core while in this state.

  Fix this by flagging this condition only once (as it should be), and requiring that a frame is displayed to trigger it again. This makes it work similar to the audio underrun check.

  The bug report referenced below says --demuxer-thread=no avoided this. This is because the demuxer layer doesn't do proper underrun reporting if the reader thread is disabled.

  Fixes: #7259
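
  A sketch of the edge-triggered reporting the fix describes (names invented, not the actual VO code): flag the underrun once per episode, and re-arm only after a frame has actually been displayed.

      #include <stdbool.h>
      #include <stdio.h>

      struct vo_state {
          bool queue_empty;       /* condition observed by the playloop */
          bool underrun_reported; /* set at most once per underrun episode */
      };

      static bool check_underrun(struct vo_state *vo)
      {
          if (vo->queue_empty && !vo->underrun_reported) {
              vo->underrun_reported = true;   /* wake the playloop only this once */
              return true;
          }
          return false;
      }

      static void on_frame_displayed(struct vo_state *vo)
      {
          vo->underrun_reported = false;      /* re-arm for the next episode */
      }

      int main(void)
      {
          struct vo_state vo = { .queue_empty = true };
          printf("%d %d\n", check_underrun(&vo), check_underrun(&vo)); /* 1 0: no wakeup loop */
          on_frame_displayed(&vo);
          printf("%d\n", check_underrun(&vo));                         /* 1: only after a displayed frame */
          return 0;
      }
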
* player: make repeated hr-seeks past EOF trigger EOF as expected (wm4, 2019-12-14, 1 file, -3/+9)

  If you have a normal file with audio and video, and keep "spamming" forward hr-seeks, the player just kept showing the last video frame instead of exiting or playing the next file. This started happening since commit 6bcda94cb. Although not a bug per se, it was odd, and very user-noticeable.

  The main problem was that the pending seek command was processed before the EOF was "noticed". Processing the command reset everything, so the player did not terminate playback, but repeated the seek.

  This commit restores the old behavior.

  For one, it makes video return the correct status (video.c). The parameter is a bit ugly, but better than duplicating the logic or having another MPContext field. (As a minor detail, setting r=VD_EOF makes sure have_new_frame() returns true, rather than going through another iteration or whatever the hell will happen instead, which would clobber logical_eof.)

  Another thing is making the seek logic actually wait until the seek outcome has been determined if audio is also active. Audio needs to wait for video in order to get the video seek target position. (Which in turn is because hr-seek still "snaps" to video frames. You can't seek in between two frames, so audio can't just use the seek target, but always has to wait on the timestamp of the video frame. This has other disadvantages and is a misdesign, but not something I'll fix today.)

  In theory, this might make hr-seeks less responsive, because it needs to fully decode/filter the audio too, but in practice most time is spent on video, which had to be fully decoded before this change. (In general, hr-seek could probably just show a random frame when a queued hr-seek overrides the current hr-seek, which would probably lead to a better user experience, but that's out of scope.)

  Fixes: #7206
* player: don't apply weird timestamp tolerance on backstep (wm4, 2019-12-03, 1 file, -1/+2)

  Hr-seek has some sort of tolerance against timestamps, where it allows for up to 5ms deviation. This means it can work only for videos with up to 200 FPS framerate. There were complaints about how it doesn't work with videos beyond some high fps. (1000 was mentioned, although that sounds more like it's about the limit that .mkv has.)

  I suspect this is because otherwise, it might be hard to hit a timestamp with --start, which specifies timestamps as integer, and thus will most likely never represent a timestamp exactly. Another part of the problem is that mpv uses 64 bit floats for timestamps, so fractional parts are never represented exactly. (Both the "tolerance" and using floats for timestamps were things introduced before my time.)

  Anyway, in the backstep case, we can be relatively sure that the timestamp will be exact (as in, the same unmodified value that was returned by the filter chain), so we can make an exception for that, in order to fix backstep.

  Untested. (For that you have users.)

  May help with #7208.
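
  A standalone illustration of the tolerance issue (invented names, not mpv's seek code): with 5 ms of slack, a backstep target in 1000 fps content is "reached" one frame too early, so the exact comparison is needed for backstep.

      #include <stdbool.h>
      #include <stdio.h>

      static bool hrseek_target_reached(double frame_pts, double target, bool exact_target)
      {
          double tolerance = exact_target ? 0.0 : 0.005;   /* 5 ms default slack */
          return frame_pts >= target - tolerance;
      }

      int main(void)
      {
          /* Backstep in 1000 fps content: the target is the exact PTS of the
           * previous frame (10.000); the frame before that has PTS 9.999. */
          double earlier_frame = 9.999, target = 10.000;
          printf("with 5 ms slack: %d\n", hrseek_target_reached(earlier_frame, target, false)); /* 1: stops one frame early */
          printf("exact (backstep): %d\n", hrseek_target_reached(earlier_frame, target, true)); /* 0: keeps decoding to 10.000 */
          return 0;
      }
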
* options: deprecate --video-sync=display-adrop (wm4, 2019-11-17, 1 file, -0/+6)

  A stupid thing that will probably be in the way.
* player: remove some unnecessary coverart special cases (wm4, 2019-11-17, 1 file, -1/+1)

  These should not be needed, since video is in EOF mode in this case anyway. Not too sure about the video.c case to be honest, well, here goes nothing.
* video: make track switching work for external images (wm4, 2019-11-17, 1 file, -7/+13)

  Until now, this didn't work, since the external image had pts 0; so enabling video at a later time did nothing, because the image was discarded. Since hrseek now ends on the last frame (instead of nothing), reusing the hrseek mechanism solves this, and we don't even need to treat the cursed coverart case separately.
* video: set EOF status as soon as possible (wm4, 2019-11-17, 1 file, -1/+7)

  See what the added code comment says. Normally when this is needed, it's the cover art case. But this flag is not set when using an external image.

  This gives weird seek behavior, because the frame will be "normally" displayed for its determined duration, and during normal video playback, the video pts will be used - which is always 0 here.

  This should happen only if audio is active. Otherwise, we're more or less in image viewer mode, where the image should be displayed for a configured duration.
* video: if hr-seek goes past last frame, seek to last frame (wm4, 2019-11-17, 1 file, -7/+6)

  This gives much better behavior in general, and is what we want if video somehow ends earlier than audio. A common special case is using an audio file with an external image file.

  This commit makes things like switching aspect ratio work (provided the demuxer for the image behaves correctly, which currently isn't the case with demux_mf.c). Since the image file had timestamp 0, it was usually skipped by hr-seek, and changed properties weren't applied to it at the start of the filter chain.
* video: take first frame into account in audio-sync mode (wm4, 2019-11-16, 1 file, -3/+2)

  It appears commit 4ad68d94523c3d101a broke handling the first video frame duration through roundabout ways (I think because the duration of the first frame was now available at all in the normal case). The first frame was cut short, which showed up especially with looping, or if the file had a low FPS.

  This questionable change seems to fix it without breaking any other known cases => push and call it a day.

  The display-sync mode did not have this problem.

  Fixes: #7150
* video: do not disable display-sync on A/V desync (wm4, 2019-10-17, 1 file, -8/+2)

  On an audio/video desync by more than 0.5 seconds, display-sync mode was disabled, and not enabled again (until playback restart, e.g. a seek). The idea was that this only happens when this playback mode is broken and can't perform well anyway (A/V desync is a clear indication that something is very wrong). Instead of behaving like a god damn POS, it should revert to the more robust audio-sync mode.

  Unfortunately, this could happen sporadically due to temporary system performance problems, such as toggling fullscreen. Users didn't like this, and asked for a function to disable it, or to recover in some other way.

  This mechanism is questionable anyway. If an ignorant user enables display-sync, and encounters problems with it (without being able to determine that display-sync is messing up), the player will still behave like a POS on every playback, and even after every seek. It might actually be helpful to fail more consistently. Also, I've found that it's still relatively reliable anyway even without this mechanism.

  So just remove the fallback.

  Fixes: #7048
* player: partially rework --cache-pause (wm4, 2019-10-11, 1 file, -1/+7)

  The --cache-pause feature (enabled by default) will pause playback for a while if network runs out of data. If this is not done, then playback will go on frame-wise (as packets are slowly read from the network and then instantly decoded and displayed). This feature is actually useless, as you won't get nice playback no matter what if network is too slow, but I guess I still prefer this behavior for some reason.

  This commit changes this behavior from using the demuxer cache state only, to trying to use underrun information from the AO/VO. This means if you have a very large audio buffer, then cache-pausing will trigger once that buffer is depleted, which will be some time _after_ the demuxer cache has run out. This requires explicit support from the AO. Otherwise, the behavior should be mostly the same as before this commit.

  This does not care about the AO buffer. In theory, the AO may underrun, then the player will write some data to the AO buffer, then the AO will recover and play this bit of data, then the player will probably trigger the cache-pause behavior. The probability of this happening should be pretty low, so I will hold off fixing this until the next refactor of the AO chain (if ever).

  The VO underflow detection was devised and tested in 5 minutes, and may not be correct. At least I'm fairly sure that the combination of all the factors should make incorrect behavior relatively unlikely, but problems are possible.

  Also, the demux_reader_state.underrun field may be inaccurate. It's only the present state at the time demux_get_reader_state() was called, and may exclude past underruns. In theory, this could cause "close" cases to be missed. Then you might get an audio underrun without cache-pausing acting on it. If the stars align, this could happen multiple times in a row, effectively making this feature not work.

  The most user-visible consequence of this change is that the user will now see an AO underrun warning every time the cache runs out. Maybe this cache-pause feature should just be removed...
* video: always decode 2 frames on playback restart (wm4, 2019-10-06, 1 file, -2/+2)

  Unless --video-latency-hacks is set, always decode 2 frames on playback restart. This in turn will always compute the correct frame duration (even for the first frame), which in turn happens to fix that playback with an image at the beginning breaks display.

  If a still image precedes video, and the size/format of the frame is different from that of the video following it, the incorrect frame duration caused vo_reconfig2() to be called early, causing the window to resize, and the renderer to clear the image to black. Specifically, it hit the default value of 1 second duration (for still images), so the image was displayed for 1 second, and changed to black until the next proper video frame was displayed.

  Normally this does not happen. Even if a video file displays still images, it normally repeats the still image at the video's FPS (which is sane). But you can construct such files, or use EDL to construct something similarly behaving.

  This change may increase seek latency a bit in audio video-sync mode (the default). It needs to wait until 2 frames are decoded, before it bothers to display the first frame. This is done even when seeking. In theory it might be good to introduce a "seek preview" mode, which shows the target image without all the preparations needed for starting playback. (For example, it could not decode audio.) But since I'm using video-sync=display-resample, which already needed to always decode 2 frames, I don't think this is a terribly high priority, nor do I consider the slightly slower seeking a regression.

  Fixes: #6765
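
  A tiny illustration (not mpv code) of why two decoded frames are needed at restart: the duration of frame N is only known once frame N+1's PTS is available, otherwise a default guess has to be used.

      #include <stdio.h>

      int main(void)
      {
          double queued_pts[2] = { 5.000, 5.040 };         /* two frames decoded ahead */
          double first_duration = queued_pts[1] - queued_pts[0];
          printf("first frame duration: %.3f s\n", first_duration); /* 0.040 s, not a 1 s guess */
          return 0;
      }
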
* player: ensure backward playback state is propagated on track switching (wm4, 2019-09-19, 1 file, -1/+5)

  Track switching doesn't run reset_playback_state(), so a track enabled at runtime during backward playback would lead to a messed up state. This commit just does a bad code monkey fix to this. It feels like there needs to be a much better way to propagate this state.
* player: fix --end for backwards playback (wm4, 2019-09-19, 1 file, -0/+2)

  We need to transform the timestamp returned by get_play_end_pts(). I considered making it return the transformed timestamp directly. There are 4 callers; 2 need a transformed timestamp, 2 don't. So I guess it doesn't matter.
* video: fix player not exiting if no video frame was rendered (wm4, 2019-09-19, 1 file, -2/+3)

  E.g. "mpv null:// --demuxer=rawvideo" will "hang" by waiting for video EOF forever. It's not signalled correctly because of the last-frame corner case, which attempts to wait until the current frame is finally displayed (which is signalled by whether a new frame can be queued, see commit 1a339fa09d for some details).

  If no frame was ever queued, the VO is not configured, and vo_is_ready_for_frame() never returns true. Fix this by using vo_has_frame(), which seems to be exactly the correct thing we need.
* video: trust container FPS early on if possible (wm4, 2018-05-24, 1 file, -1/+2)

  If the container FPS is correct, this can help getting ideal mix factors for vo_gpu interpolation mode. Otherwise, it doesn't matter.
* screenshot: change async behavior to be in line with new semantics (wm4, 2018-05-24, 1 file, -1/+2)

  Basically reimplement the async behavior on top of the async command code. With this, all screenshot commands are async, and the "async" prefix basically does nothing. The prefix now behaves exactly like with other commands that use spawn_thread.

  This also means using the prefix in the preset input.conf is pointless (without effect) and misleading, so remove that.

  The each_frame mode was actually particularly painful in making this change, since the player wants to block for it when writing a screenshot, and generally doesn't fit into the new infrastructure. It was still relatively easy to reimplement by copying the original command and then repeating it on each frame. The waiting is reentrant now, so move the call in video.c to a "safer" spot.

  One way to observe how the new semantics interact with everything is using the mpv repl script and sending a screenshot command through it. Without async flag, the script will freeze while writing the screenshot (while playback continues), while with async flag it continues.
* demux, player: fix playback of sparse video streams (w/ still images) (Aman Gupta, 2018-05-24, 1 file, -1/+11)

  Fixes several issues playing back mpegts with video streams marked as having "still images". For example, see this video which has frames only every 6s: https://s3.amazonaws.com/tmm1/music-choice.ts

  Changes include:
  - start playback right away, without waiting for first video frame
  - do not consider the sparse video stream in demuxer underrun detection
  - do not require multiple video frames for the VO
  - use audio as the master stream for demuxer metadata events
  - use audio stream for playback time

  Signed-off-by: Aman Gupta <aman@tmm1.net>
* encode: get rid of the output packet queue (wm4, 2018-05-03, 1 file, -0/+1)

  Until recently, ao_lavc and vo_lavc started encoding whenever the core happened to send them data. Since audio and video are not initialized at the same time, and the muxer was not necessarily opened when the first encoder started to produce data, the resulting packets were put into a queue. As soon as the muxer was opened, the queue was flushed.

  Change this to make the core wait with sending data until all encoders are initialized. This has the advantage that we don't need to queue up the packets.
* video: actually wait for last frame being rendered on EOF (wm4, 2018-05-03, 1 file, -1/+5)

  The video timing code could just decide that EOF was reached before it was displayed. This is not really a problem for normal playback (if you use something like --keep-open it'd show the last frame anyway, otherwise it'd at best flash it on screen before destroying the window). But in encode mode, it really matters, and makes the difference between having one frame more or less in the output file.

  Fix this by waiting for the VO before starting the real EOF. vo_is_ready_for_frame() is normally used to determine when the VO frame queue has enough space to send a new frame. Since the VO frame queue is currently at most 1 frame, it being signaled means the remaining frame was consumed and thus sent to the VO driver. If it returns false, it will wake up the playloop as soon as the state changes.

  I also considered using vo_still_displaying(), but it's not reliable, because it checks the realtime of the frame end display time.
* player: don't wait for last video frame in encode mode (wm4, 2018-04-29, 1 file, -0/+3)

  This code makes the player wait using real time, which makes sense for normal playback, but not encode mode.
* encode: rewrite half of it (wm4, 2018-04-29, 1 file, -7/+0)

  The main change is that we wait with opening the muxer ("writing headers") until we have data from all streams. This fixes race conditions at init due to broken assumptions in the old code.

  This also changes a lot of other stuff. I found and fixed a few API violations (often things for which better mechanisms were invented, and the old ones are not valid anymore). I try to get away from the public mutex and shared fields in encode_lavc_context. For now it's still needed for some timestamp-related fields, but most are gone.

  It also removes some bad code duplication between audio and video paths.
* vo: add vo_reconfig2() (wm4, 2018-04-29, 1 file, -1/+1)

  1. I want to get away from mp_image_params (maybe).
  2. For encoding mode, it's convenient to get the nominal_fps, which is a mp_image field, and not in mp_image_params.
* vo: pass through framedrop flag differently (wm4, 2018-03-15, 1 file, -1/+2)

  There is some sort-of awkwardness here, because option access needs to happen in a synchronized manner, and the framedrop flag is not in the VO option struct. Remove the mp_read_option_raw() call and the awkward change notification via VO_EVENT_WIN_STATE from command.c, and pass it through as a new vo_frame flag.
* video: add option to reduce latency by 1 or 2 frames (wm4, 2018-03-03, 1 file, -4/+8)

  The playback start logic explicitly waits until the first frame has been displayed. Usually this will introduce a wait of 1 vsync. For normal playback this doesn't matter, but with respect to low latency needs, this only leads to additional data getting queued up in the demuxer or network buffers.

  Another thing is that the timing logic decodes 1 frame ahead (= 1 frame extra latency) to determine the exact duration of a frame.

  To be fair, there doesn't really seem to be a hard reason why this is needed. With the current code, enabling the option does lead to A/V desync sometimes (if the demuxer FPS is too inaccurate), and also frame drops at playback start in some situations. But this all seems to be avoidable, if the timing logic were to be rewritten completely, which should probably happen in the future. Thus the new option comes with the warning that it can be removed any time. This is also why the option has "hack" in the name.
* video: don't read ahead a frame in --untimed mode (wm4, 2018-03-03, 1 file, -0/+3)

  The extra frame is used to compute the exact frame duration. But frame drop is disabled with --untimed.
* client API: deprecate opengl-cb API and introduce a replacement API (wm4, 2018-02-28, 1 file, -1/+0)

  The purpose of the new API is to make it useable with other APIs than OpenGL, especially D3D11 and vulkan. In theory it's now possible to support other vo_gpu backends, as well as backends that don't use the vo_gpu code at all.

  This also aims to get rid of the dumb mpv_get_sub_api() function. The life cycle of the new mpv_render_context is a bit different from mpv_opengl_cb_context, and you explicitly create/destroy the new context, instead of calling init/uninit on an object returned by mpv_get_sub_api().

  In order to make the render API generic, it's annoyingly EGL style, and requires you to pass in API-specific objects to generic functions. This is to avoid explicit objects like the internal ra API has, because that sounds more complicated and annoying for an API that's supposed to never change.

  The opengl_cb API will continue to exist for a bit longer, but internally there are already a few tradeoffs, like reduced thread-safety.

  Mostly untested. Seems to work fine with mpc-qt.
* video: do not buffer extra frames with VO_CAP_NORETAIN outputs (Aman Gupta, 2018-02-17, 1 file, -0/+3)

  This fixes playback stalls on some mediacodec hardware decoders, which expect that frame buffers will be rendered and returned back to the decoder as soon as possible.

  Specifically, the issue was observed on an NVidia SHIELD Android TV, only when playing an H264 sample which switched between interlaced and non-interlaced frames. On an interlacing change, the decoder expects all outstanding frames would be returned to it before it would emit any new frames. Since a single extra frame always remained buffered by mpv, playback would stall.

  After this commit, no extra frames are buffered by mpv when using vo_mediacodec_embed.
* video: fix passing down FPS to vf_vapoursynth (wm4, 2018-02-03, 1 file, -7/+9)

  To make this less of a mess, remove one of the redundant container_fps fields.

  Part of #5470.
* audio: move to decoder wrapper (wm4, 2018-01-30, 1 file, -1/+0)

  Use the decoder wrapper that was introduced for video. This removes all code duplication the old audio decoder wrapper had with the video code.

  (The audio wrapper was copy pasted from the video one over a decade ago, and has been kept in sync ever since by the power of copy&paste. Since the original copy&paste was possibly done by someone who did not answer to the LGPL relicensing, this should also remove all doubts about whether any of this code is left, since we now completely remove any code that could possibly have been based on it.)

  There is some complication with spdif handling, and a minor behavior change (it will restrict the list of codecs to spdif if spdif is to be used), but there should not be any difference in practice.
* video: make decoder wrapper a filter (wm4, 2018-01-30, 1 file, -181/+47)

  Move dec_video.c to filters/f_decoder_wrapper.c. It essentially becomes a source filter. vd.h mostly disappears, because mp_filter takes care of the dataflow, but its remains are in struct mp_decoder_fns.

  One goal is to simplify dataflow by letting the filter framework handle it (or more accurately, using its conventions). One result is that the decode calls disappear from video.c, because we simply connect the decoder wrapper and the filter chain with mp_pin_connect().

  Another goal is to eventually remove the code duplication between the audio and video paths for this. This commit prepares for this by trying to make f_decoder_wrapper.c extensible, so it can be used for audio as well later.

  Decoder framedropping changes a bit. It doesn't seem to be worse than before, and it's an obscure feature, so I'm content with its new state. Some special code that was apparently meant to avoid dropping too many frames in a row is removed, though.

  I'm not sure how the source code tree should be organized. For one, video/decode/vd_lavc.c is the only file in its directory, which is a bit annoying.
* player: replace old lavfi wrapper with new filter code (wm4, 2018-01-30, 1 file, -3/+19)

  lavfi.c is not necessary anymore, because f_lavfi.c (which was actually converted from it) can be used now.
* video: rewrite filtering glue code (wm4, 2018-01-30, 1 file, -214/+78)

  Get rid of the old vf.c code. Replace it with a generic filtering framework, which can potentially handle more than just --vf. At least reimplementing --af with this code is planned.

  This changes some --vf semantics (including runtime behavior and the "vf" command). The most important ones are listed in interface-changes.

  vf_convert.c is renamed to f_swscale.c. It is now an internal filter that can not be inserted by the user manually.

  f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed once --lavfi-complex is reimplemented on top of f_lavfi.c. (which is conceptually easy, but a big mess due to the data flow changes).

  The existing filters are all changed heavily. The data flow of the new filter framework is different. Especially EOF handling changes - EOF is now a "frame" rather than a state, and must be passed through exactly once.

  Another major thing is that all filters must support dynamic format changes. The filter reconfig() function goes away. (This sounds complex, but since all filters need to handle EOF draining anyway, they can use the same code, and it removes the mess with reconfig() having to predict the output format, which completely breaks with libavfilter anyway.)

  In addition, there is no automatic format negotiation or conversion. libavfilter's primitive and insufficient API simply doesn't allow us to do this in a reasonable way. Instead, filters can use f_autoconvert as sub-filter, and tell it which formats they support. This filter will in turn add actual conversion filters, such as f_swscale, to perform necessary format changes.

  vf_vapoursynth.c uses the same basic principle of operation as before, but with worryingly different details in data flow. Still appears to work.

  The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are heavily changed. Fortunately, they all used refqueue.c, which is for sharing the data flow logic (especially for managing future/past surfaces and such). It turns out it can be used to factor out most of the data flow. Some of these filters accepted software input. Instead of having ad-hoc upload code in each filter, surface upload is now delegated to f_autoconvert, which can use f_hwupload to perform this.

  Exporting VO capabilities is still a big mess (mp_stream_info stuff).

  The D3D11 code drops the redundant image formats, and all code uses the hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a big mess for now.

  f_async_queue is unused.
* msg: reinterpret a bunch of message levels (Niklas Haas, 2017-12-15, 1 file, -4/+4)

  I've decided that MP_TRACE means “noisy spam per frame”, whereas MP_DBG just means “more verbose debugging messages than MSGL_V”. Basically, MSGL_DBG shouldn't create spam per frame like it currently does, and MSGL_V should make sense to the end-user and provide mostly additional informational output.

  MP_DBG is basically what I want to make the new default for --log-file, so the cut-off point for MP_DBG is if we probably want to know it for debugging purposes but the user most likely doesn't care about on the terminal.

  Also, the debug callbacks for libass and ffmpeg got bumped in their verbosity levels slightly, because being external components they're a bit less relevant to mpv debugging, and a bit too over-eager in what they consider to be relevant information.

  I exclusively used the "try it on my machine and remove messages from MSGL_* until it does what I want it to" approach of refactoring, so YMMV.
* video: add a shitty hack to avoid missing subtitles with vf_sub (wm4, 2017-12-08, 1 file, -0/+2)

  update_subtitles() makes sure all subtitle packets at/before the given PTS have been read and processed. Normally, this function is only called before sending a frame to the VO. This is too late for vf_sub, which expects the subtitles to be updated before feeding a frame to the filters.

  Apparently this was specifically a problem for the first frame. Subsequent frames might have been ok due to general prefetching. (This will fail anyway, should a filter dare to add an offset to the timestamps of the filtered frames before they pass to vf_sub.)

  Fixes #5194.
* Fix various typos in log messages (Nicolas F, 2017-12-03, 1 file, -1/+1)
* video: remove automatic stereo3d filter insertion (wm4, 2017-11-29, 1 file, -12/+1)

  The internal stereo3d filter was removed due to being GPL only, and due to being a mess that somehow used libavfilter's filter. Without this filter, it's hard to remove our internal stereo3d image attribute, so even using libavfilter's stereo3d filter would not work too well (unless someone fixes it and makes it able to use AVFrame metadata, which we then could mirror in mp_image).

  This was never well thought-through anyway, so just drop it. I think some "downsampling" support would still make sense, maybe that can be readded later.
* video: fix rotation and deinterlace auto filters (wm4, 2017-11-29, 1 file, -2/+6)

  Now using libavfilter filters directly. The rotation case is a bit lazy, because it uses the slow vf_rotate filter in all cases, instead of using special filters for 90° step rotations.
* video: fix typo in log message (Nicolas F, 2017-10-22, 1 file, -1/+1)
* video: fix potential NULL deref (wm4, 2017-10-18, 1 file, -2/+3)

  Regression introduced by direct rendering code additions. Found by same static analyzer.
* audio: make libaf derived code optional (wm4, 2017-09-21, 1 file, -1/+1)

  This code could not be relicensed. The intention was to write new filter code (which could handle both audio and video), but that's a bit of work. Write some code that can do audio conversion (resampling, downmixing, etc.) without the old audio filter chain code in order to speed up the LGPL relicensing.

  If you build with --disable-libaf, nothing in audio/filter/* is compiled in. It breaks a few features, such as --volume, --af, pitch correction on speed changes, replaygain.

  Most likely this adds some bugs, even if --disable-libaf is not used. (How the fuck does EOF notification work again anyway?)
* video: change --deinterlace behavior (wm4, 2017-08-22, 1 file, -60/+5)

  This removes all GPL only code from it, and that's the whole purpose. Also happens to be much simpler.

  The "deinterlace" option still sort of exists, but only as runtime changeable option. The main change in behavior is that the property will not report back the actual deint state. Or in other words, if inserting or initializing the filter fails, the deinterlace property will still return "yes". This is in line with most recent behavior changes to properties and options.
* video: redo video equalizer option handling (wm4, 2017-08-22, 1 file, -74/+1)

  I really wouldn't care much about this, but some parts of the core code are under HAVE_GPL, so there's some need to get rid of it. Simply turn the video equalizer from its current fine-grained handling with vf/vo fallbacks into global options. This makes updating them much simpler.

  This removes any possibility of applying video equalizers in filters, which affects vf_scale, and the previously removed vf_eq. Not a big loss, since the preferred VOs have this builtin.

  Remove video equalizer handling from vo_direct3d, vo_sdl, vo_vaapi, and vo_xv. I'm not going to waste my time on these legacy VOs.

  vo.eq_opts_cache exists _only_ to send a VOCTRL_SET_EQUALIZER, which exists _only_ to trigger a redraw. This seems silly, but for now I feel like this is less of a pain. The rest of the equalizer using code is self-updating.

  See commit 96b906a51d5 for how some video equalizer code was GPL only. Some command line option names and ranges can probably be traced back to a GPL only committer, but we don't consider these copyrightable.
* audio: introduce a new type to hold audio frames (wm4, 2017-08-16, 1 file, -3/+5)

  This is pretty pointless, but I believe it allows us to claim that the new code is not affected by the copyright of the old code. This is needed, because the original mp_audio struct was written by someone who has disagreed with LGPL relicensing (it was called af_data at the time, and was defined in af.h).

  The "GPL'ed" struct contents that survive are pretty trivial: just the data pointer, and some metadata like the format, samplerate, etc. - but at least in this case, any new code would be extremely similar anyway, and I'm not really sure whether it's OK to claim different copyright. So what we do is we just use AVFrame (which of course is LGPL with 100% certainty), and add some accessors around it to adapt it to mpv conventions.

  Also, this gets rid of some annoying conventions of mp_audio, like the struct fields that require using an accessor to write to them anyway.

  For the most part, this change is only dumb replacements of mp_audio related functions and fields. One minor actual change is that you can't allocate the new type on the stack anymore.

  Some code still uses mp_audio. All audio filter code will be deleted, so it makes no sense to convert this code. (Audio filters which are LGPL and which we keep will have to be ported to a new filter infrastructure anyway.) player/audio.c uses it because it interacts with the old filter code. push.c has some complex use of mp_audio and mp_audio_buffer, but this and pull.c will most likely be rewritten to do something else.
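
  A hedged sketch of the "accessors around AVFrame" idea (the struct and function names are invented, not mpv's real implementation; only the FFmpeg libavutil calls are real). The heap-only allocation also shows why the new type can't live on the stack.

      #include <libavutil/frame.h>
      #include <libavutil/mem.h>
      #include <stdio.h>

      struct aframe {
          AVFrame *av;   /* owned AVFrame; callers go through the accessors */
      };

      static struct aframe *aframe_create(void)
      {
          struct aframe *f = av_mallocz(sizeof(*f));
          f->av = av_frame_alloc();
          return f;
      }

      static void aframe_set_rate(struct aframe *f, int rate) { f->av->sample_rate = rate; }
      static int  aframe_get_rate(struct aframe *f)           { return f->av->sample_rate; }

      static void aframe_free(struct aframe **pf)
      {
          if (!*pf)
              return;
          av_frame_free(&(*pf)->av);
          av_freep(pf);
      }

      int main(void)
      {
          struct aframe *f = aframe_create();
          aframe_set_rate(f, 48000);
          printf("rate=%d\n", aframe_get_rate(f));
          aframe_free(&f);
          return 0;
      }
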
* player: make refresh seeks slightly more robust (wm4, 2017-08-14, 1 file, -6/+3)

  Refresh seeks are automatically issued when changing filters, which improves user experience if these filters change buffering or such.

  The refresh seek could actually overwrite a previously ongoing seek:

      set pause yes
      set time-pos 10
      set vf ""

  Here, the video code issued a refresh seek to the previous video position, which could be different from the previously triggered (and still ongoing) seek, thus overwriting the seek.

  Factor all refresh seek handling into a new function, and make it handle ongoing seeks correctly. Remove the weird new canonical_pts field, which actually had no use.

  Fixes #4757.
* player: do not destroy VO immediately if there is no video track (wm4, 2017-08-14, 1 file, -1/+0)

  Commit f1d161d55f45 accidentally added the handle_force_window() call if no track is selected. This was OK, but breaks something like "mpv *", where some files are not playable (like subtitle files) - the unplayable files would remove and recreate the VO window, which is annoying. Just drop the call again.
* player: make --lavfi-complex changeable at runtime (wm4, 2017-08-12, 1 file, -22/+13)

  Tends to be somewhat glitchy if subtitles are enabled, and you enable and disable tracks.

  On error, this will disable --lavfi-complex, which will result in whatever behavior.
* player: fix --lavfi-complex freeze (wm4, 2017-08-11, 1 file, -0/+1)

  Commit 0e0b87b6f3297 fixed that dropped packets did not trigger further work correctly. But it also made trivial --lavfi-complex freeze.

  The reason is that the meaning of DATA_AGAIN was overloaded: the decoders meant that they should be called again, while lavfi.c meant that other outputs needed to be checked again. Rename the latter meaning to DATA_STARVE, which means that the current input will deliver no more data, until "other" work has been done (like reading other outputs, or feeding input).

  The decoders never return DATA_STARVE, because they don't get input from the player core (instead, they get it from the demuxer directly, which is why they still can return DATA_WAIT).

  Also document the DATA_* semantics in the enum.

  Fixes #4746.
* vo_opengl: add direct rendering support (wm4, 2017-07-24, 1 file, -0/+1)

  Can be enabled via --vd-lavc-dr=yes. See manpage additions for what it does.

  This reminds of the MPlayer -dr flag, but the implementation is completely different. It's the same basic concept: letting the decoder render into a GPU buffer to avoid a copy. Unlike MPlayer, this doesn't try to go through filters (libavfilter doesn't support this anyway). Unless a filter can work in-place, DR will be silently disabled.

  MPlayer had very complex semantics about buffer types and management (which apparently nobody ever understood) and weird restrictions that mostly limited it to mpeg2 style codecs. The mpv code does not do any of this, and just lets the decoder allocate an arbitrary number of untyped images. (No MPlayer code was used.)

  Parts of the code based on work by atomnuker (starting point for the generic code) and haasn (some GL definitions, some basic PBO code, and correct fencing).
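
  A rough sketch of the decoder-side hook this mechanism relies on, not mpv's actual code: libavcodec's get_buffer2 callback lets the caller provide the frame buffers. A real DR implementation would hand out GPU-mapped memory there; this sketch just defers to the default allocator to stay correct.

      #include <libavcodec/avcodec.h>

      static int my_get_buffer2(AVCodecContext *avctx, AVFrame *frame, int flags)
      {
          /* A real DR implementation would fill frame->buf[]/frame->data[] from a
           * pool of persistently mapped GPU buffers, so the decoder writes directly
           * into memory the VO can use without an extra copy. */
          return avcodec_default_get_buffer2(avctx, frame, flags);
      }

      void setup_dr(AVCodecContext *avctx)
      {
          /* Custom allocation is only honored for codecs with AV_CODEC_CAP_DR1. */
          if (avctx->codec->capabilities & AV_CODEC_CAP_DR1)
              avctx->get_buffer2 = my_get_buffer2;
      }
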
* player: change license of most core files to LGPL (wm4, 2017-06-23, 1 file, -7/+9)

  These files have all in common that they were fully or mostly taken from mplayer.c. (mplayer.c was a huge file that contains almost all of the playback core, until it was split into multiple parts.) This was probably the hardest part to relicense, because so much code was moved around all the time.

  player/audio.c still does not compile. We'll have to redo audio filtering. Once that is done, we can probably actually provide an actual LGPL configure switch.

  Here is a relatively detailed list of potential issues:

  8d190244: author did not reply, parts were made GPL-only in a previous commit.
  7882ea9b: author could not be reached, but the code is gone. wscript still has --datadir switch, but I don't think this is relevant to copyright.
  f197efd5: unclear origin, but I consider the code gone anyway (replaced with generic OSD mechanisms).
  8337d9c2: author did not reply, but only the option still exists (under a different name), other code was removed.
  d8fd7131: did not reply. Disabled in a previous commit.
  05258251: same author as above. Both fields actually seem to have vanished (even when tracking renames), so no action taken.
  d459e644, 268b2c1a: author did not reply, but we reuse only the options (with different names and slightly or fully different semantics, and completely different implementations), so I don't think this is relevant for copyright.
  09e742fe, 17c39c4e: same as above.
  e8a173de, bff4b3ee: author could not be reached. The commands were reworked to properties, and the code outside of the TV code was moved back to the TV code. So I don't think copyright applies to the current command.c parts (mp_property_tv_color, mp_property_tv_freq, mp_property_tv_scan). The TV parts remain GPL.
  0810e427: could not be reached. Disabled in a previous commit.
  43744a2d: unknown author, but this was replaced by dynamic alloc (if the change is even copyrightable).
  116ca0c7: unknown author; reasoning see input.c relicensing commit.
  e7e4d1d8: these semantics still exist, but as generic code, and this code was fully removed.
  f1175cd9: the author of the cited patch is unknown, and upon inspection it turns out that I was only using the idea to pause the player on EOF, so I claim it's not copyright relevant.
  25affdcc: author could not be reached (yet) - but it's only a function rename, not copyrightable.

  5728504c was committed by Arpi (who agreed), but hints that it might be by a different author. In fact it seems to be mostly this patch: http://lists.mplayerhq.hu/pipermail/mplayer-dev-eng/2001-November/002041.html The author did not respond, but it all seems to have been removed later. It's a terrible mess though. Arpi reverted the A-V sync code at first, but left the RTC code for a while. The following commits remove these changes 100%: 14b35442, 7181a091, 31482783, 614f8475, df58e822.

  cehoyos did explicitly not agree to LGPL, but was involved in the following changes:

  c99d8fc8: applied a patch and didn't modify it, the original author agreed.
  40ac0d31: author could not be reached, but all code is gone anyway. The "af" command has a similar function, but works completely different and actually reuses a mechanism older than this patch.
  54350436: applied a patch, but didn't modify it, except for adding a German translation, which was removed later.
  a2dda036: same situation as above
  240b743e: this was made GPL-only in a previous commit
  7b25afd7: same as above (for now)

  kirijua could not be reached, but was a regular patch contributor:

  c2c997fd: video equalizer code move; probably not copyrightable. Is GPL due to Nick anyway.
  be54f481: technically, this became the audio track property later. But all what is left is the fact that you pass a track ID to it, so consider the original copyright non-relevant.
  2f376d1b: this was rewritten in b7052b43, but for now we can afford to be careful, so this was marked as GPL only in a previous commit.
  43844d09: remaining parts in main.c were reverted in a previous commit.

  anders has mostly disagreed with the LGPL relicensing. Does not want libaf to become LGPL, but made some concessions. In particular, he granted us permission to relicense 4943e9c52c and 242aa6ebd4. We also consider some of his changes remaining in mpv not relevant for copyright (such as 735de602 - we won't remove this option completely). We will completely remove his other contributions, including the entire audio filter chain. For now, this stuff is marked as GPL only. The remaining question is how much code in player/audio.c (based on the former mplayer.c and dec_audio.c) is under his copyright. I made claims about this in a previous commit.

  Nick(ols) Kurshev, svn username "nick" and "nickols_k", could not be reached. He had a lot of changes in early MPlayer. It seems all of that was removed, at least in mpv. His main work, like VIDIX or libswscale work, does not exist in mpv anymore, but the changes to mplayer.c and other core parts still deserve attention:

  a4119f6b, fb927549, ad3529b8, e11b23dc, 5f2178be, 93c371d5: removed in b43d67e0, d1628d12, 24ed01fe, df58e822.
  0a83c6ec, 104c125e, 4e067f62, aec5dcc8, b587a3d6, f3de6e6b: DR, VAA, and "tune" stuff was fully removed later on or replaced with other mechanisms.
  340183b0: screenshots were redone later (the VOCTRL was even removed, with an independent implementation using the same VOCTRL a few years later), so not relevant anymore. Basically only the 's' shortcut remains (but not its implementation).
  92c5c274, bffd4007, 555c6766: for now marked as GPL only in a previous commit. Might contain some trace amounts of "michael"'s copyright, who agreed to LGPL only once the core is relicensed. This will still be respected, but I don't think it matters in this case. (Some code touched by him was merged into mplayer.c, and then disappeared after heavy refactoring.)

  I tried to be as careful and as complete as possible. It can't be excluded that amends to this will be made later. This does not make the player LGPL yet.
* player: disable video equalizer frontend code for WIP LGPL mode (wm4, 2017-06-23, 1 file, -0/+4)

  Nick and kiriuja could not be reached, and created/changed this in 92c5c274, 6441a5ad, bffd4007, 555c6766, c2c997fd.

  The video equalizer stuff was redone fully later, but there are still parts that look too similar and basically use the same approach. I'm more comfortable with declaring it GPL only for now. I plan to redo them later in a way that will remove copyright.
* player: disable deinterlace property for WIP LGPL mode (wm4, 2017-06-23, 1 file, -0/+6)

  cehoyos has not agreed to the LGPL relicensing. He added the deinterlace property in commit 7b25afd7. Make it GPL-only for now. The still working parts of the --deinterlace option are not affected by his copyright.
* vo.c, vo.h, vo_null.c: change license to LGPL (wm4, 2017-05-10, 1 file, -4/+0)

  Most contributors have agreed. vo.c requires a "driver" entry for each video output - we assume that if someone who didn't agree to LGPL added a line, it's fine for vo.c to be LGPL anyway. If the affected video output is not disabled at compilation time, the resulting binary will be GPL anyway.

  One problem are the changes by Nick Kurshev (usually using "nick" as SVN username). He could not be reached. I believe all changes to his files are actually gone, but here is a detailed listing:

  fa1d5742bc: nick introduces a new VO API. It was removed in 64bedd9683. Some of this was replaced by VOCTRLs introduced in 7c51652a1b, obviously replacing at least some functionality by his API.
  b587a3d642: nick adds a vo_tune_info_t struct. Removed in 64bedd9683 too.
  9caad2c29a: nick adds some VOCTRLs, which were silently removed in 8cc5ba5ab8 (they became unused probably with the VIDIX removal).
  340183b0e9: nick adds VO-based screenshots, which got removed in 2f4b840f62. Strangely the same name was introduced in 01cf896a2f again, but this is a coincidence and worked differently (also it was removed yet again in 2858232220).
  104c125e6d: nick adds an option for "direct rendering". It was renamed in 6403904ae9 and fully removed in e48b21dd87.
  5ddd8e92a1: nick adds code to check the VO driver preinit arg to every single VO driver. The argument itself and any possibly remaining code associated with it was removed in 1f5ffe7d30.
  f6878753fb: nick adds header inclusion guards. We assume this is not relevant for copyright.

  Some of nick's code was merely moved to other files, such as the equalizer stuff added in 555c676683 and moved in 4db72f6a80 and 12579136ff, and don't affect copyright of these files anymore.

  Other notes:

  fef7b17c34: a patch by someone who wasn't asked for relicensing added a symbol that was removed again in 1b09f4633.
  4a8a46fafd: author probably didn't agree to LGPL, but the function signature was changed later on anyway, and nothing of this is left.
  7b25afd742: the same author adds a symbol to what is vo.h today, which this relicensing commit removes, as it was unused. (It's not clear whether the mere symbol is copyrightable, but no need to take a risk.)
  3a406e94d7, 9dd8f241ac: slave mode things by someone who couldn't be reached. This aspect of the old slave mode was completely removed.
  bbeb54d80a: patch by someone who was not asked, but the added code was completely removed again.
* player: unmess pause state handling (wm4, 2017-04-14, 1 file, -2/+2)

  Merge the pause_player() and unpause_player() functions. Make sure the pause events are emitted properly. We can now set the internal pause state based on a predicate, instead of e.g. handle_pause_on_low_cache() making a mess to trigger the internal pause state as wanted.

  Preparation for some more changes.
* video: deprecate almost all video filters (wm4, 2017-04-02, 1 file, -3/+4)

  The plan is to nuke the custom filter chain completely. It's not clear what will happen to the still needed builtin filters (mostly hardware deinterlacing and vf_vapoursynth). Most likely we'll replace them with different filter chain concept (whose main purpose will be providing builtin things and bridging to libavfilter).

  The undocumented "warn" options are there to disable deprecation warnings when the player inserts filter automatically.

  The same will be done to audio filters, at a later point.
* lavfi: support hwdec filters for --lavfi-complex (wm4, 2017-02-20, 1 file, -0/+3)

  Not so important by itself, but important for when we replace the vf libavfilter wrapper with the common implementation. (Which will hopefully happen, but not too soon.)
* player: print hw format on "VO: " line too (wm4, 2017-01-29, 1 file, -2/+5)

  Useful for distinguishing bit depth when hardware decoding. (To the degree it's useful to show it at all. This just brings the hardware decoding case on the same level of showing information as the software decode call.)
* video: support filtering hardware frames via libavfilter (wm4, 2017-01-16, 1 file, -0/+1)

  Requires a bunch of hacks:

  - we access AVFilterLink.hw_frames_ctx. This is not a public API in FFmpeg and Libav. Newer FFmpeg provides an accessor (av_buffersink_get_hw_frames_ctx), but it's not available in Libav or the current FFmpeg release. We need this value after filter graph creation, so we have no choice but to access this. One alternative is making filter creation and format negotiation fully lazy (i.e. delay it and do it as filters are output), but this would be a huge change. So for now, we knowingly violate FFmpeg's and Libav's ABI and API constraints because they don't provide anything better. On newer FFmpeg, we use the (quite ugly) accessor, though.
  - mp_image_params doesn't (and can't) have a field for the frames context AVBufferRef. So we pass it via vf_set_proto_frame(), and even more hacks.
  - if a filter needs a hw context, but we haven't created one yet (because normally we create them lazily), it will fail at init.
  - we allow any hw format now, although this could go horribly wrong.

  Why all this effort? We could move hw deinterlacing filters etc. to FFmpeg, which is a very worthy goal.
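
  For context, a sketch of the accessor-vs-raw-field choice the first bullet describes (assumes "sink" is the buffersink of an already configured libavfilter graph; error handling omitted, and the function name here is ours, not mpv's):

      #include <libavfilter/avfilter.h>
      #include <libavfilter/buffersink.h>
      #include <libavutil/buffer.h>

      /* On FFmpeg versions that have it, the public accessor is the clean way
       * to get the hardware frames context of the sink's input link. The
       * old-API fallback mentioned in the commit message pokes at the
       * non-public link field instead: sink->inputs[0]->hw_frames_ctx */
      AVBufferRef *sink_hw_frames_ctx(AVFilterContext *sink)
      {
          return av_buffersink_get_hw_frames_ctx(sink);
      }
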
* player: change aspects of cover art handling (wm4, 2017-01-10, 1 file, -5/+21)

  Cover art handling is a disgusting hack that causes a mess in all components. And this will stay this way. This is the Xth time I've changed cover art handling, and that will probably also continue.

  But change the code such that cover art is injected into the demux packet stream, instead of having an explicit special case in the decoder glue code. (This is somewhat more similar to the cover art hack in libavformat.)

  To avoid that the cover art picture is decoded again on each seek, we need some additional "caching" in player/video.c. Decoding it after each seek would work as well, but since cover art pictures can be pretty huge, it's probably ok to invest some lines of code into caching it. One weird thing is that the cover art packet will remain queued after seeks, but that is probably not an issue.

  In exchange, we can drop the dec_video.c code, which is pretty convenient for one of the following commits. This code duplicates a bunch of lower-level decode calls and does icky messing with this weird state stuff, so I'm glad it goes away.
* video: use demuxer-signaled duration for last video frame (wm4, 2016-12-21, 1 file, -0/+6)

  Helps with gif, probably does unwanted things with other formats.

  This doesn't handle --end quite correctly, but this could be added later.

  Fixes #3924.
* manpage: replace `-vo` with `--vo` (Douglas Christman, 2016-12-08, 1 file, -1/+1)
* player: make sure non-video subtitle rendering is reset if video resumes (wm4, 2016-11-18, 1 file, -3/+3)

  If video reaches EOF, subtitle timing will be switched to timing without video frames. This means it calls osd_set_force_video_pts() and overrides the PTS of whatever video frame is current (since the video frame's PTS has nothing to do with the current playback position anymore).

  This was not reset when seeking back into video. Subtitles wouldn't show up, or if there was a subtitle displayed, it would get stuck with it. In particular, this could happen even if EOF was only temporary (such as with --keep-open).

  Fix this by clearing the override PTS whenever a video frame is shown.

  Fixes #3770.
* player: show subtitles on VO if --force-window is usedwm42016-10-261-0/+3
| | | | | | | | | | | | | | | | | | | | | If a VO is created, but no video is playing (i.e. --force-window is used), then until now no subtitles were shown. This is because VO subtitle display normally depends on video frame timing. If there are no video frames, there can be no subtitles. Change this and add some code to handle this situation specifically. Set a subtitle PTS manually and request VO redrawing manually, which gets the subtitles rendered somehow. This is kind of shaky. The subtitles are essentially sampled at arbitrary times (such as when new audio data is decoded and pushed to the AO, or on user interaction). To make it slightly more consistent, force a completely arbitrary minimum FPS of 10. Other solutions (such as creating fake video) would be more intrusive or would require VO-level API changes. Fixes #3684.
* player: speed up audio/video re-sync when there is a huge delayAman Gupta2016-10-211-1/+2
| | | | | | | | When there is a huge audio/video desync, it can take a really long time to converge back. This speeds up the resync by increasing the max_change allowed per iteration. Signed-off-by: wm4 <wm4@nowhere>
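As an illustration of the clamp described above (not mpv's actual code; names and values are hypothetical), the per-iteration correction is limited to max_change, so raising max_change makes a large desync converge in fewer playloop iterations:

    /* One resync step: reduce the A/V difference, but never by more than
     * max_change seconds per playloop iteration. Returns the remaining
     * desync after the step. */
    double resync_step(double av_diff, double max_change)
    {
        double change = av_diff;
        if (change > max_change)
            change = max_change;
        if (change < -max_change)
            change = -max_change;
        return av_diff - change;
    }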
* player: make --stop-screensaver runtime-changeablewm42016-10-021-4/+1
| | | | | | | | | | Move the screensaver enable/disable determination to a central place, and call it if the stop-screensaver property is changed. Also, do not stop the screensaver when in idle mode (i.e. no file is loaded). Fixes #3615.
* video: trust demuxer framerate on invalid timestampswm42016-09-261-1/+1
| | | | | | | | | If the PTS goes backwards (whether it's a timestamp reset or some other problem), we would just use 0 as the frame duration. (At least until the logic for detecting divergence with the timestamps gets active.) Trust the demuxer framerate in these cases instead, if it's available. I think this improves behavior slightly with some broken files.
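A minimal sketch of the fallback described here (hypothetical helper, not the actual video.c code):

    #include <stdbool.h>

    /* Pick a frame duration in seconds. container_fps may be 0 if the
     * demuxer does not report a frame rate. */
    double guess_frame_duration(double prev_pts, double pts, double container_fps)
    {
        double dur = pts - prev_pts;
        bool invalid = !(dur > 0);           /* PTS went backwards or stalled */
        if (invalid && container_fps > 0)
            return 1.0 / container_fps;      /* trust the demuxer-reported rate */
        return invalid ? 0 : dur;
    }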
* player: minor changes in init codewm42016-09-191-1/+1
| | | | | | | | | | | | Move the MPV_LEAK_REPORT env query to mp_create(), where it will also be used by the client API (it might be helpful, so why not). The same applies to MPV_VERBOSE. The prepare_playlist() call doesn't need to be in mp_initialize() and can just be in mp_play_files() to reduce the size of mp_initialize(). Also, remove wakeup_playloop(), which is 100% redundant with mp_wakeup_core_cb().
* player: more option/property consistency fixeswm42016-09-181-4/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | Some properties had a different type from their equivalent options (such as mute, volume, deinterlace, edition). This wasn't really sane, as raw option values should always be within their bounds. On the other hand, these properties use a different type to reflect runtime limits (such as the range of available editions), or simply to improve the "UI" (you don't want to cycle through the completely useless "auto" value when cycling the "mute" property). Handle this by making them always return the option type, but also allowing them to provide a "constricted" type, which is used for UI purposes. All M_PROPERTY_GET_CONSTRICTED_TYPE changes are related to this. One consequence is that you can set the volume property to arbitrarily high values just like with the --volume option, but using the "add" command it still restricts it to the --volume-max range. Also deprecate --chapter, as it is grossly incompatible with the chapter property. We pondered renaming it to --chapters, or introducing a more powerful --range option, but concluded that --start --end is actually enough. These changes appear to take care of the last gross property/option incompatibilities, although there might still be a few lurking.
* player: litter code with explicit wakeup callswm42016-09-161-5/+5
| | | | | | | | | | | | | This does 3 kinds of changes: - change sleeptime=x to mp_set_timeout() - change sleeptime=0 to mp_wakeup_core() calls (to be more explicit) - change commands etc. to call mp_wakeup_core() if they do changes that require the playloop to be rerun This is preparation for the following changes. The goal is to process client API requests without having to rerun the playloop every time. As of this commit, the changes should not change behavior. In particular, the playloop is still implicitly woken up on every command.
* player, ao, vo: don't call mp_input_wakeup() directlywm42016-09-161-0/+2
| | | | | | | | | | | | | Currently, calling mp_input_wakeup() will wake up the core thread (also called the playloop). This seems odd, but currently the core indeed calls mp_input_wait() when it has nothing more to do. It's done this way because MPlayer used input_ctx as central "mainloop". This is probably going to change. Remove direct calls to this function, and replace it with mp_wakeup_core() calls. ao and vo are changed to use opaque callbacks and not use input_ctx for this purpose. Other code already uses opaque callbacks, or has legitimate reasons to use input_ctx directly (such as sending actual user input).
* player: fix average frame duration calculationsda89ha92016-09-091-1/+1
|
* client API: make mpv_opengl_cb_uninit_gl() behavior slightly nicerwm42016-09-091-0/+4
| | | | | | Instead of deselecting the video stream plainly, use the slightly more robust error_on_track() function. Also give it an error code (although I'm not sure if this one is confusing, it's better than the one before).
* command: fix or document some property/option consistency issueswm42016-09-011-5/+5
| | | | | | | | | | | | | Make some existing properties behave more like options. This mostly means they don't deny access if the associated component is not active, but redirects to the option. One kind of fishy change is that we apply --brightness etc. only if they're not set to the default value. This won't necessarily work with --vo=xv, but affects only cases where 1. the Xv adapter has been changed to non-defaults, and 2. the user tries to reset them with mpv by passing e.g. --brightness=0. We don't care about Xv, and the noted use-case is dumb, so this change is acceptable.
* player: slightly adjust framerate guessing heuristicwm42016-08-291-6/+8
| | | | | | | | | | | | | | | | | | | | | | Some files not only use rounded timestamps, but they also do it incorrectly. They may jitter between up to 4 specific frame durations. In this case, I found a file that mostly used 41ms and 42ms, but also had 40ms and 43ms outliers (often but not always following each other). This breaks the assumption of the framerate estimation code that the frame duration can deviate up to 1ms. If it jitters around 4 possible frame durations, the maximum deviation is 3ms. Increase it accordingly. The change might make playback of "true VFR" video via display-sync mode worse, but it's not like it was particularly good in the first place. Also, the check whether to use the container FPS needs to be stricter. In the worst case, num_dur is 1, which doesn't really indicate any evidence that the framerate is correct. Only if there are "enough" frames does the deviation check become meaningful. 16 is an arbitrary value that has been designated "enough" by myself. Also output the frame duration values for --dump-stats.
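Schematically, the stricter check amounts to something like the following (the 3 ms tolerance and the 16-sample minimum are the values mentioned above; the rest is a simplified stand-in for the real estimation code):

    #include <math.h>
    #include <stdbool.h>

    #define MAX_DEVIATION 0.003  /* durations may jitter between ~4 rounded values */
    #define MIN_SAMPLES   16     /* fewer samples are not real evidence */

    /* Decide whether the container FPS can be trusted, given num_dur recent
     * frame durations with average avg_dur (both in seconds). */
    bool use_container_fps(double avg_dur, int num_dur, double container_fps)
    {
        if (num_dur < MIN_SAMPLES || container_fps <= 0)
            return false;
        return fabs(avg_dur - 1.0 / container_fps) <= MAX_DEVIATION;
    }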
* player: log if video is considered an imagewm42016-08-211-0/+1
| | | | It's a heuristic that can fail, so better log it.
* player: refresh very low framerate video on filter changeswm42016-08-191-1/+3
| | | | | | | Limit the max. time the refresh is delayed. Make it refresh at all if image mode is enabled. Fixes #3435.
* vf_rotate: allow arbitrary rotationwm42016-08-191-2/+2
| | | | | | | vf_rotate selects the correct filter for 90° rotation, but it can be extended to use lavfi's vf_rotate as fallback. See #3434.
* video: don't discard video frames after endptswm42016-08-181-3/+5
| | | | | | | | | | | | Instead of letting it keep decoding by trying to find a new frame, "plug" the frame queue by not removing it. (Or actually, by putting it back instead of discarding it.) Matters for seamless looping (following commits), and possibly some other corner cases. The added function vf_unread_output_frame() is a bit of a sin, but still reasonable, since its implementation is trivial.
* player: add option to control duration of image displaywm42016-08-171-12/+22
| | | | | | | | | | | | | The --image-display-duration option controls how long an image is displayed. It's also possible to display the image forever (until manual user interaction stops playback). With this, the core drops the old method to "drain" video (i.e. waiting for the last frame duration on end of playback). Instead, we reuse MPContext.time_frame. The old mechanism was disabled for non-images anyway. Fixes #3425.
* player: allow passing flags to queue_seek()wm42016-08-151-1/+1
| | | | | | | | | | | | Change the last parameter from a bool to an int, which is supposed to take bit-flags. The at this point only flag is MPSEEK_FLAG_DELAY, which replaces the previous bool parameter. The old false parameter becomes 0, the old true parameter becomes MPSEEK_FLAG_DELAY. Since the old "immediate" parameter is now essentially inverted, two coalesced immediate and delayed seeks end up as delayed instead of immediate. This change doesn't matter, since there are no relative immediate seeks anyway.
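The pattern is the usual bool-to-bitflags migration; a hedged sketch (the struct, the signature and the coalescing helper are illustrative - only the MPSEEK_FLAG_DELAY name comes from the commit):

    #include <stdbool.h>

    enum {
        MPSEEK_FLAG_DELAY = 1 << 0,  /* was the old "immediate == false" */
        /* room for more flags without touching every caller again */
    };

    struct pending_seek {
        bool active;
        double pts;
        unsigned flags;
    };

    /* Coalesce a new seek with an already queued one: if either is delayed,
     * the merged seek stays delayed (the behavior change noted above). */
    void queue_seek(struct pending_seek *q, double pts, unsigned flags)
    {
        if (q->active)
            flags |= q->flags & MPSEEK_FLAG_DELAY;
        *q = (struct pending_seek){ .active = true, .pts = pts, .flags = flags };
    }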
* player: fix display-sync timing if audio takes long on resumewm42016-08-071-0/+6
| | | | | | | | | | | | | | | | | In display-sync mode, the very first video frame is idiotically fully timed, even though audio has not been synced yet at this point, and the video frame is more like a "preview" frame. But since it's fully timed, an underflow is detected if audio takes longer than the display time of the frame (we send the second frame only after audio is done). The timing code will try to compensate for the determined desync, but it really shouldn't. So explicitly discard the timing info in this specific case. On the other hand, if the first frame still hasn't finished display, we can pretend everything is ok. This is a hack - ideally, we either would send a frame without timing info (and then send it again or so when playback starts properly), or we would add real pause support to the VO, and pause it during syncing.
* player: disable DS with spdif transcoding toowm42016-07-241-2/+5
| | | | | | Otherwise it behaves dumb. (Although you could argue it shouldn't try to guess whether speed changes work, but instead simply disable DS if they don't work.)
* video: respect --deinterlace=autowm42016-07-121-1/+2
| | | | | | | | | | | | | | | | | | | | --deinterlace=auto is the default, and has the obscure semantics that deinterlacing is disabled, unless the user has manually inserted a deinterlacing filter. While in software decoding this doesn't matter, and we will happily insert 2 yadif filters (if the user has already added one), or not remove the yadif filter (if deinterlacing is disabled, but the user has added the filter manually), this is different with hardware deinterlacer filters. These support VFCTRL_SET_DEINTERLACE for toggling deinterlacing filtering at runtime. It exists mainly for legacy reasons, and possibly because it makes switching deinterlacing modes more efficient. It might also give us an entry-point for VO deinterlacing, maybe. For whatever reasons this mechanism exists, we still support and use it. This commit fixes the problem that video.c always used VFCTRL_SET_DEINTERLACE to disable deinterlacing, even if --deinterlace=auto was set. Fix this by checking the value of the option directly.
* video: fix midstream video configuration changeswm42016-07-081-0/+1
| | | | | | | Commit 771a8bf5 added code to avoid unnecessary vf_reconfig() calls for unrelated reasons, but forgot to consider that it has to be called at least once if the input format changes. As a consequence it got "stuck" due to not being able to decode more frames.
* video: limit number of frames sent to VO to the VO requested amountwm42016-07-071-1/+3
| | | | | | | | | | vo_frame can have more than 1 frame - the extra frames are future references, which are sometimes useful for filtering (vo_opengl interpolation). There's no harm in reducing the number of frames sent to the VO to the requested amount of future frames, so do that. This doesn't actually reduce the number of concurrently in-use frames in practice.
* video: fix deinterlace filter handling for VFCTRL_SET_DEINTERLACE filterswm42016-07-061-18/+20
| | | | | | | | | | | | | | | | | | | | Some filters support VFCTRL_SET_DEINTERLACE. This affects most hardware deinterlace filters. They can be inserted by the user manually, or auto-inserted by vf.c itself as a conversion filter (vf_d3d11vpp). In these cases, we shouldn't insert or remove filters ourselves, and instead VFCTRL_SET_DEINTERLACE should be invoked to switch the mode. This wasn't done correctly in the recently refactored code and could have broken with --deinterlace. (The refactor only considered switching via property in this case.) Fix it by making it a proper part of the filter_reconfig() function, and making set_deinterlacing() (which is called by the property handler) merely call filter_reconfig() in all cases to do the real work. We can even avoid rebuilding the filter chain - though only if no other auto-filters are inserted. It probably also provides a slightly cleaner way to implement functionality in the VO while still inserting video filter fallbacks correctly if required.
* video: fix deinterlace filter handling on pixel format changeswm42016-07-061-7/+4
| | | | | | | | | | | | | | | | The test scenario at hand was hardware decoding a file with d3d11 and with deinterlacing enabled. The file switches to a non-hardware-decodeable format mid-stream. This failed, because it tried to call vf_reconfig() with the old filters inserted, which was fatal due to vf_d3d11vpp accepting only hardware input formats. Fix this by always strictly removing all auto-inserted filters (including the deinterlacing one), and reconfiguring only after that. Note that this change is good for other situations too, because we generally don't want to use a hardware deinterlacer for software decoding by default. They're not necessarily optimal, and VAAPI VPP even has incomprehensible deinterlacer bugs specifically with software frames not coming from a hardware decoder.
* player: rewrite deinterlace filter auto-insertionwm42016-07-051-22/+82
| | | | | | | | | | | | | | Instead of using the "vf" command code (which changes filters at runtime on user input), use the general filter-insertion code. The latter was added later, and is more suitable for automatically inserted filters. The old code failed in particular when using watch-later saving, which stored the filter list in the resume config file. If a user changed the hardware decoding mode via command line, the stored filter chain was out of date and could cause failure due to not working with hardware or software decoding mode. Storing the deinterlace filter in the filter list was unavoidable, because it was part of the user state. (The new code only edits the actually instantiated filters.)
* video: refactor how VO exports hwdec device handleswm42016-05-091-3/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or documented where not), which makes the whole thing saner and cleaner. In particular, thread-safety rules become less subtle and more obvious. The new internal API makes it easier to support multiple OpenGL interop backends. (Although this is not done yet, and it's not clear whether it ever will.) This also removes all the API-specific fields from mp_hwdec_ctx and replaces them with a "ctx" field. For d3d in particular, we drop the mp_d3d_ctx struct completely, and pass the interfaces directly. Remove the emulation checks from vaapi.c and vdpau.c; they are pointless, and the checks that matter are done on the VO layer. The d3d hardware decoders might slightly change behavior: dxva2-copy will not use the VO device anymore if the VO supports proper interop. This pretty much assumes that in such cases the VO will not use any form of exclusive mode, which makes using the VO device in copy mode unnecessary. This is a big refactor. Some things may be untested and could be broken.
* player: always show the first frame in DS modewm42016-04-241-0/+4
| | | | Fixes bogus frame drop counter in cover art mode.
* player: assume video forwards timestamps jumps only with some formatswm42016-04-241-1/+1
| | | | | | | | | Another crappy fix for timestamp reset issues. This time, we try to fix files which have very weird but legitimate frame durations, such as cdgraphics. It can have many short frames, but once in a while there are potentially very long frames. Fixes #3027.
* player: cleaner determination of current playback PTSwm42016-04-231-1/+0
| | | | | In particular, this won't overwrite the playback PTS in coverart mode, which actually fixes relative seeks.
* player: fix breakage when combining 3D and rotate auto-filterswm42016-03-281-9/+5
| | | | | | | | | | | This would get stuck in reconfiguring the filter chain forever, because params was mutated ("params.rotate = 0;"). This was used as input for vf_reconfig(), but the filter chain input must always be equivalent to the decoder output, or filter chain reconfiguration will be triggered. The line of code to reset the rotation is from a time when this used to work differently. Also remove the unnecessary try_filter() parameter.
* player: remove auto-inserted filters before adding them againwm42016-03-281-1/+11
| | | | | | | Makes certain cases of runtime changes actually work. Also change the label for the stereo3d filter and make it consistent with the rotate one.
* player: minor simplificationwm42016-02-271-3/+4
| | | | | | No need to pass endpts down in such a dumb way. Also remove an outdated comment somewhere.
* player: slightly simplify how demuxer streams are enabled/disabledwm42016-02-251-1/+1
| | | | | Instead of having reselect_demux_streams() look at all streams, make it look at the current stream that is being enabled/disabled.
* audio/video: expose codec info as separate fieldwm42016-02-151-0/+1
| | | | | Preparation for the timeline rewrite. The codec will be able to change, the stream header not.
* video: remove pointless parameter indirectionwm42016-02-151-1/+1
| | | | This is always the same value.
* player: remove dead codewm42016-02-121-1/+1
| | | | Fixes CID 1350055 and CID 1350054.
* player: fix crash if no video decoder can be initializedwm42016-02-101-0/+1
| | | | Caused by the recent refactoring for complex filters.
* player: force refresh seek when changing audio filterswm42016-02-091-0/+2
| | | | | | | | | | | | | Unfortunately I see no better solution. The refresh seek is skipped if the amount of buffered audio is not overly huge. Unfortunately softvol af_volume insertion still can cause this issue, because it's outside of the normal dynamic filter chain changing code. Move the video refresh call to reinit_video_filters() to make it more uniform along with the audio code.
* player: remove some further current_track dependencieswm42016-02-051-3/+3
| | | | Now it's used for initialization only for audio and video.
* player: add complex filter graph supportwm42016-02-051-28/+62
| | | | | | | | | | | | | | | | See --lavfi-complex option. This is still quite rough. There's no support for dynamic configuration of any kind. There are probably corner cases where playback might freeze or burn 100% CPU (due to dataflow problems when interacting with libavfilter). Future possible plans might include: - freely switching tracks by providing some sort of default track graph label - automatically enabling audio visualization - automatically mixing audio or stacking video when multiple tracks are selected at once (similar to how multiple sub tracks can be selected)
* player: move audio and video decoder init to separate functionswm42016-02-051-22/+41
| | | | Preparation.
* player: use different variable to indicate coverartwm42016-02-011-10/+7
| | | | Slightly better.
* audio/video: merge decoder return valueswm42016-02-011-2/+2
| | | | | | Will be helpful for the coming filter support. I planned on merging audio/video decoding, but this will have to wait a bit longer, so only remove the duplicate status codes.
* player: refactor: some more minor decoder/output decouplingwm42016-01-291-8/+12
| | | | | | These changes don't make too much sense without context, but are preparation for later. Then the audio_src/video_src fields will be actually be NULL under circumstances.
* player: fix initial audio sync in certain caseswm42016-01-291-2/+0
| | | | | | | | | | | | | | | | | Regression caused by commit 3b95dd47. Also see commit 4c25b000. We can either use video_next_pts and add "delay", or we just use video_pts. Any other combination breaks. The reason why the assumption that delay==0 at this point was wrong is exactly that after displaying the first video frame (usually done before audio resync) a new frame might be "added" immediately, resulting in a new video_next_pts and "delay", which will still amount to video_pts. Fixes #2770. (The reason why display-sync was blamed in this issue is because enabling display-sync in the options forces a prefetch by 2 instead of 1 frames for seeks/playback restart, which triggers the issue, even if display-sync is not actually enabled. In this case, display-sync is never enabled because the frames have an unusually high frame duration. This is also what exposed the initial desync issue.)
* video: fix coverart switchingwm42016-01-271-2/+3
| | | | | | If cover art is re-enabled during playback, the cover art picture (which has pts==0) will be discarded. Add another corner case to the list.
* video: slightly improve video stream switchingwm42016-01-261-0/+5
| | | | | Resync newly switched video streams to the current playback position. (Normal seeks will reset playback_pts to NOPTS.)
* video: limit maximum number of VO frames correctlywm42016-01-241-1/+1
| | | | | Otherwise, vo_frame.frames can be unintentionally overflown, leading to undefined behavior in corner cases.
* video: don't wait for last video frame in the normal casewm42016-01-221-4/+8
| | | | | | | | | | | | | | | | | | Even though the timing logic is correct, it tends to mess with looping videos and such in unappreciated ways. It also has to be admitted that most file formats seem not to properly define the duration of the last video frame (or libavformat does not export it in a useful way), so whether or not we should use the demuxer reported framerate for the last frame is questionable. (Still, why would you essentially just discard the last frame?) The timing logic is kept, but disabled for video with "normal" FPS values. In particular, we want to keep it for displaying images, which implicitly set the frame duration to 1 second by reporting 1 FPS. It's also good for slide shows with mf://. Fixes #2745.
* player: fix some oversights in video refactoringwm42016-01-221-5/+10
| | | | | | | | | | vo_chain_uninit() isn't supposed to care much about the decoder (although decoders and outputs still go strictly together, so there is not much of an actual difference now). Also unset track.d_video correctly. Remove a stale declaration from dec_video.h as well.
* player: refactor: eliminate MPContext.d_audiowm42016-01-221-1/+1
|
* audio: refactor: work towards unentangling audio decoding and filteringwm42016-01-221-2/+2
| | | | | | | | | Similar to the video path. dec_audio.c now handles decoding only. It also looks very similar to dec_video.c, and actually contains some of the rewritten code from it. (A further goal might be unifying the decoders, I guess.) High potential for regressions.
* player: refactor: eliminate MPContext.d_videowm42016-01-171-38/+42
| | | | | | | | | | | | | | Eventually we want the VO be driven by a A->V filter, so a decoder doesn't even have to exist. Some features definitely require a decoder though (like reporting the decoder in use, hardware decoding, etc.), so for each thing which accessed d_video, it has to be redecided if and how it can access decoder state. At least the "framedrop" property slightly changes semantics: you can now always set this property, even if no video is active. Some untested changes in this commit, but our bio-based distributed test suite has to take care of this.
* video: refactor: disentangle decoding/filtering some morewm42016-01-161-84/+53
| | | | | | | | | | | This moves some code related to decoding from video.c to dec_video.c, and also removes some accesses to dec_video.c from the filtering code. dec_video.c/h is starting to make sense, and simply returns video frames from a demuxer stream. The API exposed is also somewhat intended to be easily changeable to move decoding to a separate thread, if we ever want this (due to libavcodec already being threaded, I don't see much of a reason, but it might still be helpful).
* video: refactor: slightly disentangle video filteringwm42016-01-151-43/+32
|
* video: decouple filtering/decoding slightly morewm42016-01-141-65/+80
| | | | | | | | | | | | | | | | | | | Lots of noise to remove the vfilter/vo fields from dec_video. From now on, video filtering and output will still be done together, summarized under struct vo_chain. There is the question where exactly the vf_chain should go in such a decoupled architecture. The end goal is being able to place a "complex" filter between video decoders and output (which will culminate in natural integration of A->V filters for libavfilter audio visualizations). The vf_chain is still useful for "final" processing, such as format conversions and deinterlacing. Also, there's only 1 VO and 1 --vf option. So having 1 vf_chain for a VO seems ideal, since otherwise there would be no natural way to handle all these existing options and mechanisms. There is still some work required to truly decouple decoding.
* video: refactor: shuffle code aroundwm42016-01-141-0/+71
| | | | | | struct dec_video should have nothing to do with video filters or outputs, and this huge chunk of code was somehow stuck directly in dec_video.c.
* video: refactor: handle video format fixups closer to decoderwm42016-01-141-3/+5
| | | | | | | | | | Instead of handling this on filter chain reinit, do it directly after the decoder. This makes the code less entangled. In particular, this gets rid of the really weird "override params" concept in the video filter code. The last_format/fixed_formats have some redundance with decoder_output, but unfortunately the latter has a slightly different use.
* player: simplify backsteppingwm42016-01-121-7/+14
| | | | | | | | | | | | | | Basically reimplement it. The old implementation was quite stupid, and was probably done this way because video filtering and output used to be way less decoupled. Now we can reimplement it in a very simple way: when backstepping, seek to current time, but keep the last frame that was supposed to be discarded when reaching the target time. When the seek finishes, prepend the saved frame to the video frame queue. A disadvantage is that the new implementation fails to skip over timeline boundaries (ordered chapters etc.), but this never worked properly anyway. It's possible that this will be fixed some time in the future.
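In other words: while hr-seeking back to the current time, keep the one frame that would normally be discarded right before the target, and push it back to the front of the queue once the seek finishes. A toy sketch with a hypothetical fixed-size frame queue (not the actual player structures):

    #define QUEUE_SIZE 8

    struct frame { double pts; /* ... image data ... */ };

    struct frame_queue {
        struct frame *frames[QUEUE_SIZE];
        int num;
        struct frame *saved;     /* frame rescued during the backstep seek */
    };

    /* Called when the hr-seek would discard the frame just before the target. */
    void backstep_save_frame(struct frame_queue *q, struct frame *f)
    {
        q->saved = f;
    }

    /* Called when the seek finishes: show the saved frame first. */
    void backstep_seek_done(struct frame_queue *q)
    {
        if (!q->saved || q->num >= QUEUE_SIZE)
            return;
        for (int n = q->num; n > 0; n--)
            q->frames[n] = q->frames[n - 1];
        q->frames[0] = q->saved;
        q->num++;
        q->saved = 0;
    }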
* player: handle hrseek framedrop correctlywm42016-01-121-1/+1
| | | | | This was non-sense and checked the option instead of the actual flag. Possibly could lead to incorrect hr-seeks.
* demux: merge sh_video/sh_audio/sh_subwm42016-01-121-2/+2
| | | | | | | | | | This is mainly a refactor. I'm hoping it will make some things easier in the future due to cleanly separating codec metadata and stream metadata. Also, declare that the "codec" field can not be NULL anymore. demux.c will set it to "" if it's NULL when added. This gets rid of a corner case everything had to handle, but which rarely happened.
* mpv_talloc.h: rename from talloc.hDmitrij D. Czarkoff2016-01-111-1/+1
| | | | This change helps avoiding conflict with talloc.h from libtalloc.
* player: detect audio PTS jumps, make video PTS heuristic less aggressivewm42016-01-091-12/+0
| | | | | | | | | | | | | | | | | | | | | | This is another attempt at making files with sparse video frames work better. The problem is that you generally can't know whether a jump in video timestamps is just a (very) long video frame, or a timestamp reset. Due to the existence of files with sparse video frames (new frame only every few seconds or longer), every heuristic will be arbitrary (in general, at least). But we can use the fact that if video is continuous, audio should also be continuous. Audio discontinuities can be easily detected, and if that happens, reset some of the playback state. The way the playback state is reset is rather radical (resets decoders as well), but it's just better not to cause too much obscure stuff to happen here. If the A/V sync code were to be rewritten, it should probably strictly use PTS values (not this strange time_frame/delay stuff), which would make it much easier to detect such situations and to react to them.
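The audio-side check boils down to comparing where audio playback should be against the PTS that actually arrives; a hedged sketch (the threshold and the reset hook are placeholders, not mpv's actual values or functions):

    #include <math.h>
    #include <stdbool.h>

    /* Audio is expected to be continuous, so a large gap between the expected
     * and the actual audio PTS indicates a timestamp reset rather than a very
     * long video frame. */
    bool audio_pts_jumped(double expected_pts, double actual_pts, double tolerance)
    {
        return fabs(actual_pts - expected_pts) > tolerance;
    }

    /* Caller side (schematic):
     *   if (audio_pts_jumped(last_audio_pts + played_duration, next_audio_pts, 0.5))
     *       reset_playback_state();   // hypothetical; the commit also resets decoders
     */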
* video: fix debug messagewm42016-01-061-1/+1
| | | | Should not be a warning, and the message text was also very useless.
* video: do not disable hr-seek framedrop too earlywm42015-12-301-7/+5
| | | | | | This didn't make too much sense, and just made seeking slower. Strictly suggest that the decoder drop a frame if its PTS is before the seek target.
* sub: change how subtitles are readwm42015-12-291-1/+5
| | | | | | | | Slightly change how it is decided when a new packet should be read. Switch to demux_read_packet_async(), and let the player "wait properly" until required subtitle packets arrive, instead of blocking everything. Move distinguishing the cases of passive and active reading into the demuxer, where it belongs.
* video: switch from using display aspect to sample aspectwm42015-12-191-2/+5
| | | | | | | | | | | | | | | | MPlayer traditionally always used the display aspect ratio, e.g. 16:9, while FFmpeg uses the sample (aka pixel) aspect ratio. Both have a bunch of advantages and disadvantages. Actually, it seems using sample aspect ratio is generally nicer. The main reason for the change is making mpv closer to how FFmpeg works in order to make life easier. It's also nice that everything uses integer fractions instead of floats now (except --video-aspect option/property). Note that there is at least 1 user-visible change: vf_dsize now does not set the display size, only the display aspect ratio. This is because the image_params d_w/d_h fields did not just set the display aspect, but also the size (except in encoding mode).
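For reference, the two conventions are related by DAR = SAR * width / height; a small helper in the spirit of the integer fractions mentioned above (simplified, not mpv's mp_image_params code):

    /* Convert a sample (pixel) aspect ratio to a display aspect ratio.
     * Example: 720x576 with SAR 16:15 yields DAR 4:3. */
    struct ratio { int num, den; };

    static int gcd(int a, int b) { return b ? gcd(b, a % b) : a; }

    struct ratio sar_to_dar(int w, int h, struct ratio sar)
    {
        struct ratio dar = { sar.num * w, sar.den * h };
        int g = gcd(dar.num, dar.den);
        if (g > 0) {
            dar.num /= g;
            dar.den /= g;
        }
        return dar;
    }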
* player: remove redundant checkwm42015-12-051-1/+1
| | | | Found by Coverity.
* player: don't make display-sync panic on timestamp discontinuitieswm42015-12-041-2/+2
|
* player: resync audio only on larger timestamp discontinuitieswm42015-12-041-2/+2
| | | | | | | | | Helps with files that have occasional broken timestamps. For larger discontinuities, e.g. caused by actual timestamp resets, we still want to realign audio. (I guess in general, this should be removed and replaced by a more general resync-on-desync logic, but not now.)
* vo_opengl: fix interpolation with display-syncwm42015-11-281-1/+2
| | | | | | | | | | | | | | | | | | | | At least I hope so. Deriving the duration from the pts was not really correct. It doesn't include speed adjustments, and becomes completely wrong if the user e.g. changes the playback speed by a huge amount. Pass through the accurate duration value by adding a new vo_frame field. The value for vsync_offset was not correct either. We don't need the error for the next frame, but the error for the current one. This wasn't noticed because it makes no difference in symmetric cases, like 24 fps on 60 Hz. I'm still not entirely confident in the correctness of this, but it sure is an improvement. Also, remove the MP_STATS() calls - they're not really useful to debug anything anymore.
* player: fix commit 50bb209awm42015-11-281-1/+1
| | | | Well, this was stupid.
* vo: change vo_frame field unitswm42015-11-271-1/+2
| | | | | | | This was just converting back and forth between int64_t/microseconds and double/seconds. Remove this stupidity. The pts/duration fields are still in microseconds, but they have no meaning in the display-sync case (also drop printing the pts field from opengl/video.c - it's always 0).
* player: always disable display-sync on desyncswm42015-11-271-22/+12
| | | | | | | | | | | | | | | Instead of periodically trying to enable it again. There are two cases that can happen: 1. A random discontinuity messed everything up, 2. Things are just broken and will desync all the time Until now, it tried to deal with case 1 - but maybe this is really rare, and we don't really need to care about it. On the other hand, case 2 is kind of hard to diagnose if the user doesn't use the terminal. Seeking will reenable display-sync, so you can fix playback if case 1 happens, but still get predictable behavior in case 2.
* player: make display-vdrop mode do what the manpage claimswm42015-11-261-4/+7
| | | | | Don't change video speed in this mode, which is closer to the claim on the manpage that it's close to the behavior of the "audio" mode.
* player: log some more display-sync informationwm42015-11-251-3/+6
|
* player: use demuxer ts offset to simplify timeline ts handlingwm42015-11-161-4/+0
| | | | | | | | | Use the demux_set_ts_offset() added in the previous commit to base each timeline segment to use timestamps according to its relative position within the overall timeline. As a consequence we don't need to care about these timestamps anymore, and everything becomes simpler. (Another minor but delicious nugget of sanity.)
* player: account for minor VO underrunswm42015-11-141-2/+2
| | | | | | | | | If the player sends a frame with duration==0 to the VO, it can trivially underrun. Don't panic, but keep the correct time. Also, returning the absolute time from vo_get_next_frame_start_time() just to turn it into a float with relative time was silly. Rename it and make it return what the caller needs.
* player: fix audio drift computation at different playback speedswm42015-11-141-8/+9
| | | | | This computed nonsense if the user set a playback speed other than 1 (in addition to the display-sync speed change).
* player: stricter framedrop thresholdwm42015-11-131-3/+2
| | | | | | 80ms allowable desync was a bit too much. It'd allow for a range of 160ms, which everyone can notice. It might also be a bother to apply compensation resampling speed for that long.
* player: try to compensate actual audio driftwm42015-11-131-0/+40
| | | | | | | | | | | | | | | | We always let audio slowly desync until a threshold is reached, and then pushed it back by applying a maximum compensation speed. Refine what comes afterwards: instead of playing with the nominal video speed, use the actual required audio speed for keeping sync as measured by the A/V difference. (The "actual" speed is the ideal speed with A/V differences added.) Although this works in theory, it's somewhat questionable how much this works in practice. The ideal time value is actually not exact, but is the time at which the frame is scheduled (could be compensated by using the time_left calculations in handle_display_sync_frame()). It doesn't account for speed changes or catastrophic discontinuities. It uses only 10 past frames.
* player: change display-sync audio speed only if neededwm42015-11-131-38/+48
| | | | | | | | | | As long as it's within the desync tolerance, do not change the audio speed at all for resampling. This reduces speed changes which might be caused by jittering timestamps and similar cases. (While in theory you could just not care and change speed every single frame, I'm afraid that such changes could possibly cause audio artifacts. So better just avoid it in the first place.)
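A minimal sketch of that dead zone (names and the tolerance value are placeholders):

    #include <math.h>

    /* Only touch the resampling speed when the desync leaves the tolerance
     * band; inside the band, keep the current speed so jittery timestamps
     * don't cause constant tiny speed changes. */
    double pick_audio_speed(double current_speed, double target_speed,
                            double av_desync, double tolerance)
    {
        if (fabs(av_desync) <= tolerance)
            return current_speed;
        return target_speed;
    }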
* player: remove display_sync_disable_counterwm42015-11-131-10/+8
| | | | We can implement it differently and drop a tiny bit of state.
* command: add vsync-ratio propertywm42015-11-131-4/+7
| | | | | | | | This is very "illustrative", unlike the video-speed-correction property, and thus useful. It can also be used to observe scheduling errors, which are not detected by the core. (These happen due to rounding errors; possibly not evne our fault, but coming from files with rounded timestamps and so on.)
* player: compute required display-sync speed change differentlywm42015-11-131-22/+36
| | | | | | | | | | | | | Instead of looking at the current frame duration for the intended speedup, look at all past frames, and find a good average speed. This ties in with not wanting to average _all_ frame durations, which doesn't make sense in VFR situations. This is currently done in the most naive way possible, but already sort of works for VFR which switches between frame durations that are integer multiples of a base rate. Certainly more improvements could be made, such as trying to adjust directly on FPS changes, instead of averaging everything, but for now this is not needed at all.
* player: smooth out frame durations by averaging themwm42015-11-131-1/+1
| | | | | | | | | | | | | | | | | | Helps somewhat with muxer-rounded timestamps. There is some danger that this introduces a timestamp drift. But since they are averaged values (unlike when using an incorrect container framerate hint), any potential drift shouldn't be too brutal, or compensate itself soon. So I won't bother yet with comparing the results with the real timestamp, unless we run into actual problems. Of course we still prefer potentially real timestamps over the approximated ones. But unless the timestamps match the container FPS, we can't know whether they are (no, checking whether they have microsecond components would be cheating). Perhaps in the future, we could let the demuxer export the timebase - if the timebase is not 1000 (or divisible by it), we know that millisecond-rounded timestamps won't happen.
* player: refactor display-sync frame duration calculationswm42015-11-131-103/+83
| | | | | | | | | | | | | | | | | | | | | | | | | | | Get rid of get_past_frame_durations(), which was a bit too messy. Add a past_frames array, which contains the same information in a more reasonable way. This also means that we can get the exact current and past frame durations without going through awful stuff. (The main problem is that vo_pts_history contains future frames as well, which is needed for frame backstepping etc., but gets in the way here.) Also disable the automatic disabling of display-sync if the frame duration changes, and extend the frame durations allowed for display sync. To allow arbitrarily high durations, vo.c needs to be changed to pause and potentially redraw OSD while showing a single frame, so they're still limited. In an attempt to deal with VFR, calculate the overall speed using the average FPS. The frame scheduling itself does not use the average FPS, but the duration of the current frame. This does not work too well, but provides a good base for further improvements. Where this commit actually helps a lot is dealing with rounded timestamps, e.g. if the container framerate is wrong or unknown, or if the muxer wrote incorrectly rounded timestamps. While the rounding errors apparently can't be gotten rid of completely in the general case, this is still much better than e.g. disabling display-sync completely just because some frame durations go out of bounds.
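The averaging itself is trivial; a sketch over a hypothetical past_frames array (field names are illustrative, not the real struct):

    struct past_frame { double duration; /* approximate duration in seconds */ };

    /* The overall display-sync speed correction is derived from the average
     * duration of recent frames, which smooths out muxer-rounded timestamps;
     * scheduling of the current frame still uses that frame's own duration. */
    double average_frame_duration(const struct past_frame *frames, int num)
    {
        if (num < 1)
            return 0;
        double sum = 0;
        for (int i = 0; i < num; i++)
            sum += frames[i].duration;
        return sum / num;
    }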
* player: always require a future frame with display-sync enabledwm42015-11-131-2/+6
| | | | | | We need a frame duration even on start, because the number of vsyncs the frame is shown is predetermined. (vo_opengl actually makes use of this property in certain cases.)
* player: less naive roundingwm42015-11-111-1/+1
|
* player: use input instead of output format for spdif checkwm42015-11-041-1/+1
| | | | | | This check disables the display-sync resample method. If the filters convert PCM to AC3, we can still insert a filter to change speed. This is because filters are inserted at the beginning of the filter chain.
* player: move audio speed adjustment codewm42015-11-041-54/+60
| | | | | | | | | Move it (in a cosmetic sense), and also move its invocation to below all the video handling. All other changes remain cosmetic, including moving the framedrop calculation code, and getting rid of the video_speed_correction variable.
* player: another fix to A/V difference calculation in display-sync modewm42015-11-011-1/+1
| | | | | | | update_av_diff() works on the timestamps, while time_left is in real time. When playing at a speed other than 1, these are very different, and cause the A/V difference to jitter. Fix this by scaling the expected A/V desync to the correct range.
* video: fix another A/V difference bug in display-sync modewm42015-10-311-2/+3
| | | | | | | | | | | | | | | This didn't show up with cases where the frame pattern has a cycle of 1 or 2, as is the case with 24-on-24 fps or 24-on-60 fps. It did show up with 25-on-60 fps. (We don't slow down 25 fps video to 24 on default settings.) In this case, we must not add the timing error of the next frame to the A/V difference estimation of the current frame. Use the previous timing error instead. This is another bug resulting from the confusion about whether we calculate parameters for the currently playing frame, or the one we're about to queue.
* command: add mistimed-frame-count propertywm42015-10-301-0/+4
| | | | | Does what the manpage says. This is a replacement for incrementing the dropped frame counter (see previous commit).
* video: fix framedrop accounting in display-sync modewm42015-10-301-2/+0
| | | | | | | | | | | | | | Commit a1315c76 broke this slightly. Frame drops got counted multiple times, and also vo.c was actually trying to "render" the dropped frame over and over again (normally not a problem, since frames are always queued "tightly" in display-sync mode, but could have caused 100% CPU usage in some rare corner cases). Do not repeat already dropped frames, but still treat new frames with num_vsyncs==0 as dropped frames. Also, strictly count dropped frames in the VO. This means we don't count "soft" dropped frames anymore (frames that are shown, but for fewer vsyncs than intended). This will be adjusted in the next commit.
* player: raise display sync desync tolerancewm42015-10-281-5/+2
| | | | | | | Bump it to 80 ms, and 2 vsyncs. This is another measure against vsync jitter. Admittedly this is a bit simplistic (and we should probably estimate a stable vsync phase instead), but for now this will do.
* player: minor refactor for A/V diff computationwm42015-10-281-19/+27
| | | | | | | | Calculate the A/V difference directly in the display sync code, instead of the awkward current way, which reuses the fields for audio sync. We still set time_frame, because it makes falling back to audio sync somewhat smoother.
* player: fix display sync A/V difference estimation on dropswm42015-10-281-0/+2
| | | | | | | When dropping or repeating frames, we essentially influence when the frame after the next frame will be shown, not the next frame. This led to dropping/repeating frames 2 times, because the A/V difference had a delay of one frame. Compensate it with the expected value.
* player: disable total-avsync-change update in display-sync modewm42015-10-271-0/+4
| | | | | The total-avsync-change property made no sense in display-sync mode (in addition to making not all that much sense in general).
* player: fix display-sync A/V calculation on high playback speedswm42015-10-271-0/+1
| | | | | | This is all kinds of stupid - update_avsync_after_frame() will multiply this value with the speed at a later point, and we only update this field for this function. (This should be refactored.)
* player: add audio drop/duplicate modewm42015-10-271-1/+1
| | | | Not very robust in the moment.
* player: be slightly less prone to framedrop in display sync modewm42015-10-191-3/+7
| | | | | 1 to 2 frames desync is still tolerable, and will be quickly compensated (if everything works).
* player: do not use copysign()wm42015-10-191-1/+1
| | | | | | | Apparently this function caused weird problems to me. I have no idea why. The usage of the function looks perfectly fine to me, and even rounding issues can be excluded. In any case, getting rid of this solved my problem, and makes the code actually more readable.
* player: fix an adjustment in display sync modewm42015-10-141-1/+1
| | | | | | This adjustment is supposed to improve the audio speed calculation in case of unexpected desync. The flipped sign made it actually worse, although the total impact of this bug was very minor.
* player: fix missed wakeup on video EOFwm42015-10-091-0/+3
| | | | | | | If video EOF happens during playback restart, and audio is syncing, and the demuxer packet queue overflows (i.e. no new packets will be read), then it could happen that the player accidentally enters sleeping, and continues playing anything only after e.g. user input wakes it up.
* video/out: remove an unused parameterwm42015-10-031-1/+1
| | | | | | | | | | | This parameter has been unused for years (the last flag was removed in commit d658b115). Get rid of it. This affects the general VO API, as well as the vo_opengl backend API, so it touches a lot of files. The VOFLAGs are still used to control OpenGL context creation, so move them to the OpenGL backend code.
* video: replace vf_format outputlevels option with global optionwm42015-09-291-0/+1
| | | | | | | | | | The vf_format suboption is replaced with --video-output-levels (a global option and property). In particular, the parameter is removed from mp_image_params. The mechanism is moved to the "video equalizer", which also handles common video output customization like brightness and contrast controls. The new code is slightly cleaner, and the top-level option is slightly more user-friendly than a vf_format sub-option.
* player: fix excessive CPU usage in audio-only modewm42015-09-221-3/+4
| | | | | | | | | Caused by one of the --force-window commits. The unconditional uninit_video_out() call (which normally should be idempotent) raised sporadic MPV_EVENT_VIDEO_RECONFIG events. This is ok, except for the fact that clients (like a Lua script or libmpv users) would cause the event loop to run again after receiving it, triggering a feedback loop. Fix it by sending the events really only on a change.
* video: disable interpolation during framesteppingwm42015-08-251-0/+1
| | | | | | | It just causes annoying artifacts. Interestingly, this means keeping down the frame stepping key (".") will play video with interpolation disabled.
* video: don't decode 2 frames ahead with display-syncwm42015-08-191-2/+1
| | | | | This is not needed. It was used only temporarily in a development branch, and is a leftover from earlier rebasing.
* player: add display sync modewm42015-08-101-2/+204
| | | | | | | | | | | | | | | | | | | | | | | | If this mode is enabled, the player tries to strictly synchronize video to display refresh. It will adjust playback speed to match the display, so if you play 23.976 fps video on a 24 Hz screen, playback speed is increased by approximately 1/1000. Audio will be resampled to keep up with playback. This is different from the default sync mode, which will sync video to audio, with the consequence that video might skip or repeat a frame once in a while to make video keep up with audio. This is still unpolished. There are some major problems as well; in particular, mkv VFR files won't work well. The reason is that Matroska is terrible and rounds timestamps to milliseconds. This makes it rather hard to guess the framerate of a section of video that is playing. We could probably fix this by just accepting jittery timestamps (instead of explicitly disabling the sync code in this case), but I'm not ready to accept such a solution yet. Another issue is that we are extremely reliant on OS video and audio APIs working in an expected manner, which of course is not too often the case. Consequently, the new sync mode is a bit fragile.
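For the simple one-frame-per-vsync case, the required correction is just the display rate divided by the video rate (24 / 23.976 ~ 1.001); a hedged sketch, where the 1% sanity bound is a made-up safeguard rather than mpv's actual limit:

    /* Speed factor that makes video_fps land exactly on display_fps, applied
     * on top of the user-requested speed. Falls back to the plain user speed
     * if the rates are unknown or the correction would be implausibly large. */
    double display_sync_speed(double user_speed, double video_fps, double display_fps)
    {
        if (video_fps <= 0 || display_fps <= 0)
            return user_speed;
        double ratio = display_fps / video_fps;   /* 24 / 23.976 ~= 1.001 */
        if (ratio < 0.99 || ratio > 1.01)         /* hypothetical 1% tolerance */
            return user_speed;
        return user_speed * ratio;
    }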
* player: separate controls for user and video controlled speedwm42015-08-101-5/+5
| | | | | | | | | | For video sync, we want separate playback speed controls for user- requested speed and the "correction" speed for video timing. Further, we use this separation to make sure only a resampler is inserted if playback speed is only changed for video sync correction. As of this commit, this is basically inactive code. It's just preparation for the video sync code (the following commit).
* player: redo estimated-vf-fps calculationwm42015-08-101-0/+72
| | | | | | | | | | | | | | In addition to taking the average, this tries to use the demuxer FPS to eliminate jitter, and applies some other heuristics to check if the result is sane. This code will also be used for the display sync code (it will actually make use of the require_exact parameter). (The value of doing this over keeping the simpler demux_mkv hack is somewhat questionable. But at least it allows us to deal with other container formats that use jittery timestamps, such as mp4 remuxed from mkv.)
* video: unbreak EOF with video-only files that have timestamp resetswm42015-08-031-1/+2
| | | | | | | Normally when there's a timestamp reset, we make audio resync to make sure audio and video line up (again). But in video-only mode, just setting audio to resyncing breaks EOF detection, because there's no code which would get audio_status out of this bogus state.
* video: move frame duration code to a separate functionwm42015-08-011-11/+23
| | | | Minor preparation for something else.
* video: move up vo_frame setupwm42015-07-281-12/+12
|
* video: always decode at least 2 frames in advancewm42015-07-261-5/+1
| | | | | | | Remove the exception for decoding only 1 frame if VO framedrop is disabled. This was originally done to be able to test potential regressions when we enabled VO framedrop and decoding 2 frames by default. It's not needed anymore.
* video: always re-probe auto deint filter on filter reconfigwm42015-07-211-2/+5
| | | | | | | | | If filters are disabled or reconfigured, attempt to remove and probe the deinterlace filter again. This fixes behavior if e.g. a software deint filter was automatically inserted, and then hardware decoding is enabled during playback. Without this commit, initializing hw decoding would fail because of the software filter; with this commit, it'll replace it with the hw deinterlacer instead.
* vo: minor simplification for queue size handlingwm42015-07-201-2/+2
| | | | | | | | | | Instead of calling it "future frames" and adding or subtracting 1 from it, always call it "requested frames". This simplifies it a bit. MPContext.next_frames had 2 added to it; this was mainly to ensure a minimum size of 2. Drop it and assume VO_MAX_REQ_FRAMES is at least 2; together with the other changes, this can be the exact size of the array.
* video: don't force video refresh if video is restartingwm42015-07-101-1/+3
|
* player: never overwrite stop_play fieldwm42015-07-081-1/+1
| | | | | | | This is a real pain: if a quit command is received, it's set to PT_QUIT. And then other code could overwrite it, making it not quit. The annoying bit is that stop_play is written and read in many places. Just not overwriting it unconditionally seems to be the best course of action.
* vo: change internal API for drawing frameswm42015-07-011-11/+21
| | | | | | | | | | | | | | draw_image_timed is renamed to draw_frame. struct frame_timing is renamed to vo_frame. flip_page_timed is merged into draw_frame (the additional parameters are part of struct vo_frame). draw_frame also deprecates VOCTRL_REDRAW_FRAME, and replaces it with a method that works for both VOs which can cache the current frame, and VOs which need to redraw it anyway. This is preparation to making the interpolation and (work in progress) display sync code saner. Lots of other refactoring, and also some simplifications.
* video: pass future frames to VOwm42015-07-011-33/+59
| | | | | | | | | | Now the VO can request a number of future frames with the last parameter of vo_set_queue_params(). This will be helpful to fix the interpolation code. Note that the first frame (after playback start or seeking) will usually not have any future frames (to make seeking fast). Near the end of the file, the number of future frames will become lower as well.
* player: slim down A/V desync warningwm42015-06-301-17/+5
| | | | | I don't think most of these suggestions are overly helpful. Just get rid of them.
* player: add some debug output for seekingwm42015-06-181-0/+1
|
* player: actually play videowm42015-06-181-1/+1
| | | | Broken by e00e9d65.
* player: make decoding cover art more robustwm42015-06-181-3/+18
| | | | | | | | | | | | | | When showing cover art, the decoding logic pretends that the source has an infinite number of frames. This slightly simplifies dealing with filter data flow. It was done by feeding the same packet repeatedly to the decoder (each decode run produces new output). Change this by decoding once at the video initialization. This is easier to follow, and increases robustness in case of broken images. Usually, we try to tolerate decoding errors, so decoding normally continues, but in this case it would just burn the CPU for no reason. Fixes #2056.
* video: remove worthless log messagewm42015-06-051-6/+0
| | | | | All this information is already output otherwise. Except the FourCC, which lost most of its importance in mpv.
* vf_sub: minor simplificationwm42015-06-051-2/+1
|
* video: do not use MP_NOPTS_VALUE for A/V differencewm42015-05-241-1/+2
| | | There's no need for this, it just creates more corner cases. Also always reset it on seeks etc.
* video: force audio resync after video discontinuitywm42015-05-201-0/+1
|
* video: better heuristic for timestamp resetswm42015-05-201-2/+13
| | | | | | | | | | | | | | Reduce the default tolerance for timestamp jumps from 60 to 15 seconds. For .ts files, where ts_resets_possible coming from AVFMT_TS_DISCONT is set, apply a more sophisticated heuristic. It's clear that such a file wouldn't have a framerate below, say, 23 Hz. If the demuxer reports a lower fps, we allow longer PTS jumps. This should replace long pauses on discontinuities with .ts files with at most a short stutter. Of course, all kinds of things could go wrong anyway if the source is VFR, or FFmpeg's frame rate detection fails in some other way. I haven't found such a file yet, though.
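The heuristic amounts to widening the allowed PTS jump when a discontinuity-prone container reports an implausibly low frame rate; a sketch using the numbers from the commit (15 s default, ~23 Hz sanity bound - the scaling factor is a placeholder):

    #include <stdbool.h>

    /* Maximum forward PTS jump (seconds) before it is treated as a reset. */
    double pts_jump_tolerance(bool ts_resets_possible, double demuxer_fps)
    {
        double tolerance = 15;                    /* default, down from 60 */
        if (ts_resets_possible && demuxer_fps > 0 && demuxer_fps < 23) {
            /* A real .ts stream won't run below ~23 Hz, so a lower reported
             * rate means sparse frames; allow correspondingly longer jumps. */
            double max_frame_dur = 2.0 / demuxer_fps;  /* factor 2 is arbitrary */
            if (max_frame_dur > tolerance)
                tolerance = max_frame_dur;
        }
        return tolerance;
    }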
* player: flush decoder even if cover art is decodedwm42015-04-241-1/+1
| | | | | | | | | | | | | | | Fixes PNG cover art not showing up immediately (for example when running with --pause). libavformat exports embedded cover art as a single packet. For example, a PNG attachment simply contains the PNG image, which can be sent to the decoder. Normally you would expect that the PNG decoder would return 1 frame for 1 packet, without any delays. But this stopped working, and it incurs a 1 frame delay. This is perfectly legal (even if unexpected), so let our code feed the decoder packets until we get something back. (In theory feeding the packet instead of a real flush packet is still somewhat questionable.)
* player: don't show A/V desync message in non-sense situationswm42015-04-241-2/+2
| | | | | | | last_av_difference can be MP_NOPTS_VALUE under certain circumstances (like no video timestamp yet). This triggered the desync message, because fabs(MP_NOPTS_VALUE) is quite a large value. We don't want to show a message in this situation.
* player: cleanup update_fps() functionwm42015-04-201-12/+5
| | | | | It was called only in 2 places, one of them redundant (the container FPS can not change).
* video: cleanup some old log messageswm42015-04-201-0/+2
| | | | | These are basically MPlayer leftovers, and barely useful due to being redundant with other messages. The FPS message is used somewhere else.
* video: do not show decoder framedrops if they're not requestedwm42015-04-161-1/+2
| | | | | | | | | | | libavcodec makes it impossible to distinguish dropped frames (requested with AVCodecContext.skip_frame), and cases when the decoder simply does not return a frame by default (such as with VP9, which has invisible reference frames). This confuses users when decoding VP9 video. It's basically a cosmetic issue, so just paint it over by ignoring them if framedropping is disabled.
* player: silence spam in verbose mode when playing audio with cover artwm42015-04-141-1/+1
| | | | | | When playing cover art, it conceptually reaches EOF as soon as the image was put on the VO, causing the EOF message to be repeated every time new audio was decoded. Just silence the message.
* Update license headersMarcin Kurczewski2015-04-131-5/+4
| | | | Signed-off-by: wm4 <wm4@nowhere>
* video: cleanup stereo mode parsingwm42015-04-021-1/+1
| | | | | | | | | Use OPT_CHOICE_C() instead of the custom parser. The functionality is pretty much equivalent. (On a side note, it seems --video-stereo-mode can't be removed, because it controls whether to "reduce" stereo video to mono, which is also the default. In fact I'm not sure how this should be handled at all.)
* video: fix seek-to-last-framewm42015-03-261-3/+1
| | | | | Accidentally broken in 79779616; we really need to check for true EOF, not just whether there are no frames yet.
* video: make frame skipping code slightly more readablewm42015-03-251-13/+8
|
* video: refactor aspects of queue and EOF handlingwm42015-03-251-41/+62
| | | | | Instead of touching the 2-entry queue in mpctx->next_frame directly, move some of it to functions.
* video: use less technical language for PTS warningwm42015-03-231-1/+1
| | | | | | | "Non-monotonic" isn't even 100% correct; it's missing "strictly" (for briefness I guess), and also the message is printed if the PTS jumps forward. So just print something that is likely a bit easier to understand.
* video: fix update of vo-configured propertywm42015-03-231-0/+1
| | | | It obviously needs to be updated after the VO was destroyed.
* player: warn against non-monotonic video PTS only oncewm42015-03-181-8/+9
| | | | | | | | For some reason there were two points in the code where it warned against non-monotonic video PTS. The one in video.c triggered on PTS going backwards or making large jumps forwards, while dec_video.c triggered on PTS going backwards or PTS not changing. Merge them into a single check, which warns against all cases.
* player: use symbolic constant for seek precisionwm42015-03-041-2/+4
| | | | Meh.
* player: adjust A/V desync messagewm42015-02-261-3/+3
| | | | | | Broken drivers are an issue rather often. Maybe this gives the user an idea that this could be the reason. (We can't dump much more info on an 80x24 terminal.)
* vf_vapoursynth: add display refresh rate propertyJulian2015-02-131-0/+2
| | | | | This value is not necessarily trustworthy (it might change) and can be 0.
* player: remove --fixed-vowm42015-02-031-1/+1
| | | | | | | In ancient times, this was needed because it was not default, and many VOs had problems with it. But it was always default in mpv, and all VOs are required to deal with it. Also, running --fixed-vo=no is not useful and just creates weird corner cases. Get rid of it.
* player: dump audio jitter to stats filewm42015-02-011-4/+5
| | | | | This allows us to plot the difference between video timestamps, and the adjusted video timestamps due to syncing video to audio speed.
* player: minor simplification in A/V-sync related codewm42015-01-301-3/+2
| | | | Just minor things.
* Revert "player: allow seeking audio between video frames"wm42015-01-301-9/+1
| | | | | | | | This reverts commit 7b3feecbc23e3e0b0d9cf66f02af53d127a0b681. It's broken: hr-seek never ends at a video position before the seek pts. Not sure what I was thinking, although it did work anyway when artificially forcing a video frame to display before the seek pts.
* player: print desync message on negative A/V-sync toowm42015-01-301-1/+1
| | | | | | | | At least there is _some_ problem if this happens. It would mean that audio is playing slower than video. Normally, video is synced to audio, so if audio stops playback completely, video will not advance at all. But using things like --autosync, it's well possible that this kind of desync happens.
* player: rearrange some A/V-sync related codewm42015-01-301-6/+5
| | | | | | | | | | | | Move the update_avsync_before_frame() call further down. Moving it closer to where the time_frame value is used (and which the function updates) should make the code more readable. With this change, there's no need anymore to reset the time_frame value on the video reconfig path. Move the update_avsync_after_frame() up. Now no meaningful amount of time passes since the previous get_relative_time() call anymore, and the second one can be removed.
* player: use correct type for some relative timeswm42015-01-301-3/+3
| | | | | | We use double for these things everywhere, just this code didn't. It likely doesn't matter much, and this code is for an optional feature too.
* player: remove redundant variablewm42015-01-291-4/+5
| | | | | | mpctx->audio_delay always has the same value as opts->audio_delay. (This was not the case a long time ago, when the audio-delay property didn't actually write to opts->audio_delay. I think.)
* player: allow seeking audio between video frameswm42015-01-281-1/+9
| | | | | | | | | | | | | | | This allows seeking audio between two video frames that are relatively far apart. The implementation of this is a bit subtle. It pretends the audio position is different, and the actual PTS adjustment happens in audio.c with this line: sync_pts -= mpctx->audio_delay - mpctx->delay; Effectively this is the same as setting sync_pts to hrseek_pts after this line, though. (I'm actually not sure if this could be written in a more straightforward way; probably yes.)
* player: mention mpv encoding support for transcoding in desync. warningwm42015-01-191-1/+1
|
* video: fix waiting for last frame/format reconfigwm42015-01-191-1/+1
| | | | | | | | | | | We still need to send the VO a duration in these cases. Disabling framedrop has logically absolutely nothing to do with these cases; it was overlooked in commit 918b06c4. So we always send the frame duration (or a guess for it), and check whether framedropping is actually enabled in the VO code. (It would be cleaner to send framedrop as a flag, but I don't care about that right now.)
* player: respect --untimed on last framewm42015-01-161-1/+3
| | | | | | | | | | | | | | The last video frame is another case that has a separate code path, although it's pretty similar to the one in commit 73e5aa87. Fix this in a different way, which also takes care of the last frame case, although without context the code becomes slightly more tricky. As further cleanup, move the decision about framedropping itself to the same place, so the check in vo.c becomes much simpler. The check for the vo->driver->encode flag, which is removed completely, was redundant too. Fixes #1480.
* player: respect --untimed on video format changeswm42015-01-161-1/+1
| | | | | | | | | | | If the video format changes (e.g. different frame size), a special code path is entered to wait until the currently displayed frame is done. Otherwise, the frame before the change would be destroyed by the vo_reconfig() call. This code path didn't respect --untimed; correct this. Fixes #1475.
* video: fix timeline with some container formatswm42015-01-061-0/+2
| | | | | | Using edl or --merge-files with .avi files didn't work, because the DTS was not offset. Only the PTS was adjusted, which led to nonsense timestamps.
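Conceptually, the fix is just applying the per-segment time offset to both timestamps; a hedged sketch with hypothetical names and a stand-in sentinel:

    #define NOPTS (-1e300)   /* stand-in "no timestamp" sentinel */

    /* When concatenating segments (EDL, --merge-files), shift PTS and DTS by
     * the same offset; containers such as .avi rely on DTS, so adjusting
     * only the PTS yields inconsistent timestamps. */
    static void offset_packet_times(double *pts, double *dts, double segment_offset)
    {
        if (*pts != NOPTS)
            *pts += segment_offset;
        if (*dts != NOPTS)
            *dts += segment_offset;
    }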
* video: batch query_format callswm42015-01-031-2/+1
| | | | | | | There are currently 568 pixel formats (actually fewer, but the namespace is this big), and for each format elaborate synchronization was done to call it synchronously on the VO. This is completely unnecessary, and we can do with just a single call.
* vf_vapoursynth: pass through container FPS valuewm42015-01-031-10/+8
| | | | | | | | This is basically a hack; but apparently a needed one, since many vapoursynth filters insist on having a FPS set. We need to apply the FPS override before creating the filters. Also change some terminal output related to the FPS value.
* video: better pipelining with vf_vapoursynthwm42015-01-031-3/+24
| | | | | | | | | | Most of this is explained in the code comments. This change should improve performance with vapoursynth, especially if concurrent requests are used. This should change nothing if vf_vapoursynth is not in the filter chain, since non-threaded filters obviously can not asynchronously finish filtering of frames.
* vo_opengl_cb: pass context directlywm42014-12-311-3/+1
| | | | | This is simpler than setting the context after VO creation, which requires the code to check for the context on every entrypoint.
* video: pass some VO params as structwm42014-12-311-3/+6
| | | | | Not particularly elegant, but better than adding more and more stuff to the relevant function signatures.
* player: fix a typo in message outputwm42014-12-241-1/+1
| | | | This typo has been around for over a year. Oops.
* client API: expose OpenGL rendererwm42014-12-091-0/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | | | This adds API to libmpv that lets host applications use the mpv opengl renderer. This is a more flexible (and possibly more portable) option to foreign window embedding (via --wid). This assumes that methods like context sharing and multithreaded OpenGL rendering are infeasible, and that a way is needed to integrate it with an application that uses a single thread to render everything. Add an example that does this with QtQuick/qml. The example is relatively lazy, but still shows how relatively simple the integration is. The FBO indirection could probably be avoided, but would require more work (and would probably lead to worse QtQuick integration, because it would have to ignore transformations like rotation). Because this makes mpv directly use the host application's OpenGL context, there is no platform specific code involved in mpv, except for hw decoding interop. main.qml is derived from some Qt example. The following things are still missing: - a way to do better video timing - expose GL renderer options, allow changing them at runtime - support for color equalizer controls - support for screenshots
* player: when seeking past EOF with --keep-open, seek to last framewm42014-12-071-4/+16
| | | | | | | | | | | | | | | | | | | | | It feels strange that seeking past EOF with --keep-open actually leaves the player at a random position. You can't even unpause, because the demuxer is in the EOF state, and what you see on screen is just what was around before the seek. Improve this by attempting to seek to the last video frame if EOF happens. We explicitly don't do this if EOF was reached normally to increase robustness (if the VO got a frame since the last seek, it obviously means we had normal playback before EOF). If an error happens when trying to find the last frame (such as not actually finding a last frame because e.g. the demuxer misbehaves), this will probably turn your CPU into a heater. There is no logic to prevent reinitiating the last-frame search if the last-frame search reached EOF. (Pausing usually prevents that EOF is reached again after a successful last-frame search.) Fixes #819.
* Remove some superfluous NULL checkswm42014-11-211-3/+0
| | | | | | | | In all of these situations, NULL is logically not allowed, making the checks redundant. Coverity complained about accessing the pointers before checking them for NULL later.
* player: print anamorphic size only if video is anamorphicwm42014-11-021-2/+5
| | | | Has been annoying me since forever.
* player: update meaning of drop_frame_cntwm42014-11-011-2/+2
| | | | | Rename the variable, update comments, and update the documentation of the property which returns its value.
* player: show AV-desync message in all framedrop modeswm42014-11-011-2/+1
| | | | | | | | | | | | This was shown only if decoder-framedropping was enabled, and only if at least 50 frames were dropped by it. Since drop_frame_cnt used to mean "number of late frames", this code made sense, but this is not the case anymore: drop_frame_cnt can even be 0, all while video gets hopelessly behind audio. One problem with this is that short desync spikes (which can usually be dealt with) will also cause this message to be shown. If it gets triggered too often, the code will need to be adjusted.
* client API: better error reportingwm42014-10-281-1/+4
| | | | Give somewhat more information on playback failure.
* video: send MPV_EVENT_VIDEO_RECONFIG on uninitwm42014-10-241-0/+1
| | | | | This event basically means "something about video changed", and uninit is certainly an important change.
* player: fix exiting if both audio and video fail initializingwm42014-10-231-4/+2
| | | | | | | | | | | | | The player was supposed to exit playback if both video and audio failed to initialize (or if one of the streams was not selected when the other stream failed). This didn't work; for one this check was missing from one of the failure paths. And more importantly, both checked the current_track array incorrectly. Fix these issues, and move the failure handling code into a common function. CC: @mpv-player/stable
* player: fix --frameswm42014-10-141-1/+3
| | | | | | | | | This could produce an extra frame, because reaching the maximum merely signals the playloop to exit, without strictly enforcing the limit. Fixes #1181. CC: @mpv-player/stable
* player: signal EOF when using --frameswm42014-10-101-1/+1
|
* video: try harder to decode cover art picture only oncewm42014-10-091-2/+7
| | | | | | | | | | | | | | | | For cover art, we pretend that the video stream is infinite, but also stop decoding once we have an image on the VO (this seems advantageous for the case when strange filters are inserted or the VO image gets lost). A while ago, the video chain started decoding 2 images though ("Non-monotonic video pts: 0.000000 <= 0.000000"), which is annoying and wasteful. Improve this by handling a certain corner case at initialization, in which a second image would be decoded while the first one is still stuck in the filter chain. Also, just in case there are filters which buffer a lot, force EOF filtering (which means we tell the filters to flush buffered frames). CC: @mpv-player/stable
* player: remove central uninit_player() function and flags messwm42014-10-031-9/+23
| | | | | | | | | | | | | | Each subsystem (or similar thing) had an INITIALIZED_ flag assigned. The main use of this was that you could pass a bitmask of these flags to uninit_player(). Except in some situations where you wanted to uninitialize nearly everything, this wasn't really useful. Moreover, it was quite annoying that subsystems had most of the code in a specific file, but the uninit code in loadfile.c (because that's where uninit_player() was implemented). Simplify all this. Remove the flags; e.g. instead of testing for the INITIALIZED_AO flag, test whether mpctx->ao is set. Move uninit code to separate functions, e.g. uninit_audio_out().
* player: don't print audio/video init failure message twicewm42014-10-021-2/+2
| | | | | | | The messages "Audio: no audio" and "Video: no video" could be printed twice each if initializing them failed. Prevent this silliness. CC: @mpv-player/stable
* video: change automatic rotation and 3D filter insertionwm42014-09-271-6/+3
| | | | | | | | | | | | | | | | | | | | | We inserted these filters with fixed parameters, which was ok. But this also didn't change image parameters for the filters down the filter chain and the VO. For example, if rotation by 90° was requested by the file, we would insert a filter and rotate the video, but the VO would still receive image parameters that requested rotation by 90°. This wasn't a problem, but it could become one. Fix this by letting the filters automatically pick up the image params. The image params are reset on application. (We could probably also always try to apply and reset image params in a filter, instead of having special "auto" parameters. This would probably work, and video.c would insert a "rotate=0" filter. But I'm afraid this would be confusing and the current solution is cosmetically slightly nicer.) Unfortunately, the vf_stereo3d.c change turned out to be a big mess, but once the "internal" filter is fully replaced with libavfilter, most of this can be radically simplified.
* player: rate-limit OSD text updatewm42014-09-251-1/+2
| | | | | | | | | | | | | | | There's no need to update OSD messages and the terminal status if nobody is going to see them. Since the player doesn't block on video display anymore, this update happens too often and probably burns slightly more CPU than necessary. (OSD redrawing is handled separately, so it's just mostly useless text processing and such.) Change it so that it's updated only on every video frame or every 50ms (whichever comes first). For VO OSD, we could in theory try to lock to the OSD redraw heuristic or the display refresh rate, but that's more complicated and doesn't work for the terminal status.
* video: filter new frames at a better time (2)wm42014-09-221-7/+9
| | | | | | | | | | | | | | | | | We generally want 2 things: 1. minimal wakeups for decoding each frame 2. minimal number of frames decoded on continuous seeking Commit 35810cb8 changed this a bit, and fixed 1. But it broke 2., and now it decodes 2 frames instead of 1 when you keep seeking (arrow key held down or such). This made seeking appear slower. Fix this by making the logic more explicit. In particular, call the filters only if we actually try to get a new frame. When playing with --no-audio and all other distractions disabled (like OSC), it still wakes up 2 times per frame - but the second time is merely because the VO didn't accept the new frame yet.
* video: actually count decoder-dropped frameswm42014-09-201-4/+7
| | | | | | | | | Normally, feeding a packet to the decoder should always return a frame _if_ we received a frame before. So while we can't know exactly whether a frame was dropped, at least the normal case is easily detectable. This means we display something closer to the actual framedrop count, instead of a bad guess.
* video: improve decoder-based framedropping modewm42014-09-201-6/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This is the "old" framedropping mode (derived from MPlayer). At least in the mplayer2/mpv source base, it stopped working properly years ago (or maybe it never worked properly). For one, it depends on the video framerate, and assumes a constant framerate. Another problem was that it could lead to freezing video display: video could get so far behind that it couldn't recover from framedrop. Make some small changes to improve this. Don't use the current audio position to check how much we are behind. Instead, use the last known A/V difference. last_av_difference is updated only when a video frame is scheduled for display. This means we stop dropping once we're done catching up, even if video is technically still behind. What helps us here is that this forces a video frame to be displayed after a while. Likewise, we reset the dropped_frames count only when scheduling a new frame for display as well. Some inspiration was taken from earlier work by xnor (see issue #620), although the implementation turned out quite different. This still uses the demuxer-reported (possibly broken) FPS value. It also doesn't account for filters changing FPS. We can't do much about this, because without decoding _and_ filtering, we just can't know how long a frame is. In theory, you could derive that from the raw packet timestamps and the filter chain contents, but actually doing this is too involved. Fortunately, the main thing the FPS affects is actually the displayed framedrop count.
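A minimal sketch of the dropping decision described above (hypothetical names; the sign convention and thresholds are assumptions, not the real code): the decision is driven by the A/V difference measured the last time a frame was actually scheduled, and a displayed frame is forced after a bounded number of consecutive drops.

    #include <stdbool.h>

    struct drop_state {
        double last_av_difference;  /* assumed: > 0 means video lags audio */
        int dropped_frames;         /* consecutive decoder-level drops */
    };

    static bool should_drop_next_frame(struct drop_state *st, double container_fps,
                                       int max_consecutive_drops)
    {
        double frame_duration = container_fps > 0 ? 1.0 / container_fps : 0;
        bool behind = st->last_av_difference > frame_duration;
        if (!behind || st->dropped_frames >= max_consecutive_drops) {
            /* Show this frame; the display path is then expected to refresh
             * last_av_difference and reset dropped_frames. */
            return false;
        }
        st->dropped_frames++;
        return true;
    }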
* player: reset last_av_difference if not applicablewm42014-09-201-0/+1
| | | | | | Don't let stale values linger around. Also fix a slightly related case in audio.c.
* video: separate calling decoder/filterwm42014-09-181-14/+22
| | | | | | | | | | | Rename video_decode_and_filter to video_filter, and add a new video_decode_and_filter function. This function now calls the decoder. This is done so that we can check filters a second time after decoding, which avoids a useless playloop iteration. (This and the previous commits are really just microoptimizations, which simply reduce the number of times the playloop has to recheck everything.)
* video: check whether there are enough frames after filteringwm42014-09-181-6/+11
| | | | | | | Move the check to a function. Run the check a second time after decoding/filtering. This second check is strictly speaking redundant (which is why it wasn't done until now), but it avoids a useless playloop iteration.
* video: filter new frames at a better timewm42014-09-181-24/+24
| | | | | | | Move this code below the code that "shifts" the newly filtered frame. This allows us to skip a useless playloop iteration later, because obviously we need to filter a new frame after the previous frame has been "shifted", and not before that.
* video: initial Matroska 3D supportwm42014-08-301-0/+12
| | | | | | | | | | | | | | | | | | | | | This inserts an automatic conversion filter if a Matroska file is marked as 3D (StereoMode element). The basic idea is similar to video rotation and colorspace handling: the 3D mode is added as a property to the video params. Depending on this property, a video filter can be inserted. As of this commit, extending mp_image_params is actually completely unnecessary - but the idea is that it will make it easier to integrate with VOs supporting stereo 3D mogrification. Although vo_opengl does support some stereo rendering, it didn't support the mode my sample file used, so I'll leave that part for later. Note that most mappings from Matroska mode to vf_stereo3d mode are probably wrong, and some are missing. Assuming that Matroska modes, and vf_stereo3d in modes, and out modes are all the same might be an oversimplification - we'll see. See issue #1045.
* player: minor changeswm42014-08-251-8/+3
| | | | | | | | | | | | This shouldn't change anything functionally. Change the A/V desync message. --framedrop is enabled by default now, so the text must be changed a little. I've never heard of audio outputs messing up A/V sync recently, so remove that part. Remove the unused ao_pts field. Reorder 2 A/V sync related expressions so that they look the same.
* player: restore silent seekingwm42014-08-231-1/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | Commit 846257da introduced an accidental feature: if you kept seeking (so playback never really resumes), the audio would never be played. This was nice, but commit 4c25b000 accidentally removed it again (due to video_next_pts becoming available earlier than it used to, so audio could be played before the player executed the next queued seek). Implicitly reintroduce the old behavior by not decoding a second video frame immediately. Usually, the second frame is used to compute the frame duration needed for accurate framedropping, but since the first frame after a seek is never dropped, we don't need this. Now the video code will queue the new frame to the VO immediately, and since fill_audio_out_buffers() is called in the playloop before write_video() and execute_queued_seek(), it never gets the chance to enter STATUS_READY, and seeks will be silent. This also has a nice side-effect: since the second frame is not decoded and filtered, seeking becomes slightly faster (back to the same level as with framedrop disabled). It seems this still sometimes plays a period of audio when keeping a seek key down. In my tests, this appeared to happen because the seek finished before the next key repeat was sent.
* player: fix recent speed change regressionwm42014-08-221-0/+2
| | | | | | | | | | | | | | Commit 5afc025c broke this. The reason is that mpctx->delay is updated when a new video frame is added. This value is also needed to resync audio, but it will be for the wrong PTS. They must be consistent with each other, and if they aren't, initial sync will be off by N video frames, which results at least in worse user experience. This can be reproduced by for example heavily switching between normal and 2x speed, or similar. Fix by readding the video_next_pts field (keeping its use minimal, instead of reverting the commit that removed it).
* video: refactor queue handlingwm42014-08-221-75/+53
| | | | | | | | | | | | | | This simplifies the code, and fixes an odd bug: the second-last frame was displayed for a very short duration if framedrop was enabled. The reason was that the time difference between the second-last and the last frame was basically skipped, because at this point EOF was already signaled. Also see commit b0959488 for a similar issue in the same code. This removes the messiness of the next_frame 2-frame queue, and strictly runs the "new frame" code when a frame is moved to the first position of the queue, instead of somehow messing with return codes. This also merges update_video() into video_output_image().
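A toy version of such a 2-entry queue (hypothetical names, not the actual video.c structures): the point is that the "new frame" handling runs exactly when a frame lands in slot 0, rather than being inferred from return codes.

    #include <stdbool.h>
    #include <stddef.h>

    struct frame_queue {
        void *frames[2];   /* [0] = frame currently being timed/shown */
        int num_frames;
    };

    /* Returns true if the pushed frame just became the current one, i.e.
     * the caller should run its new-frame handling (A/V sync etc.) now. */
    static bool queue_push(struct frame_queue *q, void *frame)
    {
        if (q->num_frames >= 2)
            return false;              /* full; caller retries later */
        q->frames[q->num_frames++] = frame;
        return q->num_frames == 1;
    }

    /* Drop the displayed frame; returns true if a queued frame moved into
     * slot 0 and thus became the new current frame. */
    static bool queue_shift(struct frame_queue *q)
    {
        if (q->num_frames == 0)
            return false;
        q->frames[0] = q->frames[1];
        q->frames[1] = NULL;
        q->num_frames--;
        return q->num_frames > 0;
    }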
* video: get rid of video_next_pts fieldwm42014-08-221-9/+4
| | | | | | Not really needed anymore. Code should be mostly equivalent. Also get rid of some other now-unused or outdated things.
* video: move some code aroundwm42014-08-221-46/+45
| | | | | | No functional changes. init_vo() is now needed a bit further down, and moving it keeps definition and use close. adjust_sync() will be used by a function further up in one of the following commits.
* video: minor simplificationwm42014-08-221-21/+11
| | | | This is mostly equivalent, but simpler, and reduces some duplication.
* video: don't assume query_format is thread-safewm42014-08-201-5/+2
| | | | Although it's probably safe for most VOs, there's no guarantee.
* video: add VO framedropping modewm42014-08-151-4/+4
| | | | | | | | | | | | | | | | | | | | | | | | This mostly uses the same idea as with vo_vdpau.c, but much simplified. On X11, it tries to get the display framerate with XF86VM, and limits the frequency of new video frames against it. Note that this is an old extension, and is confirmed not to work correctly with multi-monitor setups. But we're using it because it was already around (it is also used by vo_vdpau). This attempts to predict the next vsync event by using the time of the last frame and the display FPS. Even if that goes completely wrong, the results are still relatively good. On other systems, or if the X11 code doesn't return a display FPS, a framerate of 1000 is assumed. This is infinite for all practical purposes, and means that only frames which are definitely too late are dropped. This probably has worse results, but is still useful. "--framedrop=yes" is basically replaced with "--framedrop=decoder". The old framedropping mode is kept around, and should perhaps be improved. Dropping on the decoder level is still useful if decoding itself is too slow.
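A minimal sketch of the display-synced drop test described above (hypothetical names; the real logic is more involved): predict the next vsync from the previous flip time and the display FPS, and only consider a frame droppable if its target time already lies before that prediction.

    #include <stdbool.h>

    static bool frame_misses_vsync(double last_flip_time, double display_fps,
                                   double frame_target_time)
    {
        /* With an unknown display rate, assume 1000 Hz: the predicted vsync
         * is then effectively "now", so only frames that are definitely too
         * late count as droppable. */
        double interval = 1.0 / (display_fps > 0 ? display_fps : 1000.0);
        double predicted_vsync = last_flip_time + interval;
        return frame_target_time < predicted_vsync;
    }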
* video: reduce non-sense messages when playing coverartwm42014-08-131-11/+14
| | | | Don't print PTS warnings by skipping the normal video path.
* video: don't run new frame processing on every iterationwm42014-08-131-19/+22
| | | | | | | This ran adjust_sync() on every playloop iteration, instead of every newly decoded frame. It seems this was idempotent in the common case, but the code was originally designed to be run once only, so restore that.
* video: move some more code aroundwm42014-08-131-38/+49
| | | | No functional changes.
* video: move some code aroundwm42014-08-131-45/+40
|
* video: exit early when nothing to dowm42014-08-131-7/+7
| | | | | These cases were probably confusing. Exit early, which makes it much clearer what's going on. Should not change anything functionally.
* video: minor simplification of the old framedrop codewm42014-08-131-10/+6
| | | | | No changes in functionality, other than being slightly more correct at stream EOF.
* video: fix and simplify video format changes and last frame displaywm42014-08-121-108/+86
| | | | | | | | | | | | | | | | | | | | | | | | | | The previous commit broke these things, and fixing them is separate in this commit in order to reduce the volume of changes. Move the image queue from the VO to the playback core. The image queue is a remnant of the old way how vdpau was implemented, and increasingly became more and more an artifact. In the end, it did only one thing: computing the duration of the current frame. This was done by taking the PTS difference between the current and the future frame. We keep this, but by moving it out of the VO, we don't have to special-case format changes anymore. This simplifies the code a lot. Since we need the queue to compute the duration only, a queue size larger than 2 makes no sense, and we can hardcode that. Also change how the last frame is handled. The last frame is a bit of a problem, because video timing works by showing one frame after another, which makes it a special case. Make the VO provide a function to notify us when the frame is done, instead. The frame duration is used for that. This is not perfect. For example, changing playback speed during the last frame doesn't update the end time. Pausing will not stop the clock that times the last frame. But I don't think this matters for such a corner case.
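The duration computation that moves into the playback core reduces to a PTS difference over a 2-entry queue; a simplified sketch with hypothetical names:

    /* Duration of the frame being shown = PTS distance to its successor.
     * The last frame has no successor, so fall back to a guess (e.g. one
     * derived from the container FPS). */
    static double current_frame_duration(const double queued_pts[2], int num_queued,
                                         double fallback_duration)
    {
        if (num_queued >= 2 && queued_pts[1] > queued_pts[0])
            return queued_pts[1] - queued_pts[0];
        return fallback_duration;
    }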
* video: move display and timing to a separate threadwm42014-08-121-87/+25
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The VO is run inside its own thread. It also does most of video timing. The playloop hands the image data and a realtime timestamp to the VO, and the VO does the rest. In particular, this allows the playloop to do other things, instead of blocking for video redraw. But if anything accesses the VO during video timing, it will block. This also fixes vo_sdl.c event handling; but that is only a side-effect, since reimplementing the broken way would require more effort. Also drop --softsleep. In theory, this option helps if the kernel's sleeping mechanism is too inaccurate for video timing. In practice, I haven't ever encountered a situation where it helps, and it just burns CPU cycles. On the other hand it's probably actively harmful, because it prevents the libavcodec decoder threads from doing real work. Side note: Originally, I intended that multiple frames can be queued to the VO. But this is not done, due to problems with OSD and other certain features. OSD in particular is simply designed in a way that it can be neither timed nor copied, so you do have to render it into the video frame before you can draw the next frame. (Subtitles have no such restriction. sd_lavc was even updated to fix this.) It seems the right solution to queuing multiple VO frames is rendering on VO-backed framebuffers, like vo_vdpau.c does. This requires VO driver support, and is out of scope of this commit. As consequence, the VO has a queue size of 1. The existing video queue is just needed to compute frame duration, and will be moved out in the next commit.
* video: don't keep multiple pointers to hwdec info structwm42014-08-111-1/+1
| | | | This makes a certain corner case simpler at a later point.
* video: fix dangling pointer issuewm42014-08-111-1/+1
| | | | | | | | | | | The function video_decode_and_filter(), called between initializing the local vf variable and using it, can actually destroy and recreate the filter. Thus, the vf variable turns into a dangling pointer if that happens. Could be observed with: --hwdec=vda --deinterlace=yes --vf=yadif (Also happens with vdpau/vaapi.)
* video: remove "hard" framedrop modewm42014-08-091-1/+1
| | | | | | | | | Completely useless, and could accidentally be enabled by cycling framedrop modes. Just get rid of it. But still allow triggering the old code with --vd-lavc-framedrop, in case someone asks for it. If nobody does, this new option will be removed eventually.
* audio: fix encoding modewm42014-08-071-1/+2
| | | | | | | | | If this code is not skipped, encoding (or dumping with --ao=pcm) will attempt to adjust video timing to audio. Since another commit (0cce8fe6) already avoids writing audio ahead, this didn't slow down encoding to realtime, but it was still significantly slower. This change should actually remove all extra sleeping.
* client API: trigger MPV_EVENT_VIDEO_RECONFIG on vf recreationwm42014-08-061-0/+2
| | | | | Until now, it was done only on VO reconfig, but this easily can miss some events, in case the VO output format doesn't change.
* player: some further playloop cleanupswm42014-08-031-0/+12
| | | | | | | | | | | | | Handle --term-playing-msg at a better place. Move the MPV_EVENT_TICK hack into a separate function. Also add some words to the client API documentation saying that you shouldn't use it. (But better leave breaking it for later.) Handle --frames and frame_step differently. Remove the mess from the playloop, and do it after frame display. Give up on the weird semantics for audio-only mode (they didn't make sense anyway), and adjust the manpage accordingly.
* video: fix attached picture modewm42014-07-311-1/+3
| | | | | | | | | Playing audio files with embedded cover art broke due to some of the recent changes. Treat video EOF properly, and don't burn the CPU. Disable hrseek for video in attached picture mode, since the decoder will always produce a new image, which makes hrseek never terminate. Fixes #970.
* player: move video display code out of the playloopwm42014-07-301-2/+322
| | | | | | | | | | Basically move the code from playloop.c to video.c. The new function write_video() now contains the code that was part of run_playloop(). There are no functional changes, except handling "new_frame_shown" slightly differently. This is done so that we don't need a new MPContext field or a return value for write_video() to signal this condition. Instead, it's handled indirectly.
* player: split seek_reset()wm42014-07-301-8/+22
| | | | | | | | This also reduces some code duplication with other parts of the code. The change is mostly cosmetic, although there are also some subtle changes in behavior. At least one change is that the big desync message is now printed after every seek.
* video: actually flush filter chainwm42014-07-301-1/+4
| | | | | | | Frames buffered in filters weren't flushed, so on EOF, the last frames were dropped, depending on how much filters buffered. Oops. Test case: "mpv something.jpg --vf=buffer"
* player: fix desync when seeking and switching external trackswm42014-07-291-1/+1
| | | | | | | | | | | | If you for example use --audio-file, disable the external track, seek, and enable the external track again, the playback position of the external file was off, and you would get major A/V desync. This was actually supposed to work, but broke at some time ago (probably commit 2b87415f). It didn't work, because it attempted to seek the stream if it was already selected, which was always true due to reselect_demux_streams() being called before that. Fix by putting the initial selection and the seek together.
* audio: change playback restart and resyncingwm42014-07-281-6/+11
| | | | | | | | | | | | | | | | | | | | | This commit makes audio decoding non-blocking. If e.g. the network is too slow the playloop will just go to sleep, instead of blocking until enough data is available. For video, this was already done with commit 7083f88c. For audio, it's unfortunately much more complicated, because the audio decoder was used in a blocking manner. Large changes are required to get around this. The whole playback restart mechanism must be turned into a statemachine, especially since it has close interactions with video restart. Lots of video code is thus also changed. (For the record, I don't think switching this code to threads would make this conceptually easier: the code would still have to deal with external input while blocked, so these in-between states do get visible [and thus need to be handled] anyway. On the other hand, it certainly should be possible to modularize this code a bit better.) This will probably cause a bunch of regressions.
* video: fix corner case with accidental EOFwm42014-07-221-5/+5
| | | | | | | | | | | | | | The video flushing logic was broken: if there are no more packets, decode_image() will feed flush packets to the decoder. Even if an image was produced, it will return the demuxer EOF state, and since commit 7083f88c, this EOF state is returned to the caller, which is incorrect. Revert this part of the change, and explicitly check for VD_WAIT (the bogus change was intended to forward this error code to the caller). Also, turn the "r < 1" into something equivalent that doesn't rely on the exact value of VD_EOF. "r < 0" is ok, because at least here, errors are always negative.
* video: use symbolic constants instead of magic integerswm42014-07-181-31/+29
| | | | | | | | | In my opinion this is not really necessary, since there's only a single user of update_video(), but others reading this code would probably hate me for using magic integer values instead of symbolic constants. This should be a purely cosmetic commit; any changes in behavior are bugs.
* video: don't block when reading video packetswm42014-07-181-6/+12
| | | | | | | | | | | | Instead of blocking on the demuxer when reading a packet, let packets be read asynchronously. Basically, it polls whether a packet is available, and if not, the playloop goes to sleep until the demuxer thread wakes it up. Note that the player will still block for I/O, because audio is still read synchronously. It's much harder to do the same change for audio (because of the design of the audio decoding path and especially initialization), so audio will have to be done later.
* dvd, bluray, cdda: add demux_disc containing all related hackswm42014-07-051-5/+0
| | | | | | | | | | | | DVD and Bluray (and to some extent cdda) require awful hacks all over the codebase to make them work. The main reason is that they act like container, but are entirely implemented on the stream layer. The raw mpeg data resulting from these streams must be "extended" with the container-like metadata transported via STREAM_CTRLs. The result were hacks all over demux.c and some higher-level parts. Add a "disc" pseudo-demuxer, and move all these hacks and special-cases to it.
* video: correct spelling: mp_image_params_equals -> mp_image_params_equalwm42014-06-171-2/+2
| | | | | The type is struct mp_image_params, so the "params" should have a "s". "equals" shouldn't, because it's plural for 2 params. Important.
* vo: make draw_image and vo_queue_image transfer image ownershipwm42014-06-171-1/+0
| | | | Basically a cosmetic change. This is probably more intuitive.
* video/out: change aspects of OSD handlingwm42014-06-151-0/+1
| | | | | | | | | Let the VOs draw the OSD on their own, instead of making OSD drawing a separate VO driver call. Further, let it be the VO's responsibility to request subtitles with the correct PTS. We also basically allow the VO to request OSD/subtitles at any time. OSX changes untested.
* video: fix display of cover art with vo_vdpauwm42014-05-221-2/+4
| | | | | | | | | | vo_vdpau currently has a video queue larger than 1 entry, which causes the video display code to never queue the video frame for display. This is because we consider cover art an endless stream of frames decoded from the same source packet, and include special logic to actually only decode and display 1 frame. Also, make decode_image() signal EOF in the cover art case.
* player: increase seek accuracy when refreshing display on filter changewm42014-05-181-1/+1
| | | | | | | | | When the player is paused, and video filters are changed, an exact seek is executed to refresh the display. Increase the exactness of the seek in this case; this reuses the code used for frame backstepping. It might help in cases where seeking is very imprecise, such as with transport streams.
* options: add --hr-seek-framedrop optionwm42014-05-071-1/+2
| | | | | | | | This allows disabling of decoder framedrop during hr-seek. It's basically another useless option, but it will help exploring whether this framedropping really makes seeking faster, or whether disabling it helps with precise seeking (especially frame backstepping).
* player: avoid reconfig during seekingwm42014-05-071-1/+2
| | | | | This probably matters only in extremely corner-case heavy testcases, such as using mf:// with a bunch of differently sized images.
* player: remove VO from seeking code pathwm42014-05-071-8/+24
| | | | | | | | | | | | | | | Until now, the VO was an unavoidable part of the seeking code path. This was because vdpau deinterlacing could double the framerate, and hr-seek and framestepping etc. all had to "see" the additional frames. But we've removed the frame doubling from the vdpau VO and moved it into a video filter (vf_vdpaupp), and there's no reason left why the VO should participate in seeking. Instead of queuing frames to the VO during seek and skipping them afterwards, drop the frames early. This actually might make seeking with vo_vdpau and software decoding faster, although I haven't measured it.
* player: handle video reconfig slightly different againwm42014-05-071-0/+3
| | | | | | | | | | Now we avoid calling update_video() twice on reconfig (once to check whether there are still new frames, and again to actually do the reconfig). Instead, we check whether there's still something going on before calling update_video() at all, and depending on that update_video() will be allowed to reconfig or not. This will simplify some things later.
* video: remove a corner case by introducing another onewm42014-05-031-3/+2
| | | | | | | | | | | | | | | | | | | | When loading a video, and a script reacts to MPV_EVENT_VIDEO_RECONFIG, and the script inserts a video filter, the first frame can be skipped. This happens simply because the first frame is (usually) still queued in the video filter chain, and changing the filter chain will drop all queued frames. So this is just a corner case that happens in a weird situation. But it's still annoying when having such a script, and starting something where the first frame is very visible, and not starting in paused mode. (All in all, a corner case.) Fix this by queuing 1 filtered frame to the VO immediately after reconfig, instead of leaving it to the video loop to do as "incremental" work. Simply fall through to the next case. We must not overwrite "r" in this case, because that contains the current status. Note that the first frame will not be filtered using the inserted filter.
* player: remove extremely obscure undefined behaviorwm42014-05-021-1/+2
| | | | | | Apparently the value of a pointer is "indeterminate" after a free() call, even if you never dereference the pointer after the free. Since talloc_free() calls free(), this applies here.
* client API, video: signal reconfig at the right timewm42014-05-021-4/+2
| | | | Filter reconfig can now happen a few frames before VO reconfig.
* video: change everythingwm42014-05-021-158/+199
| | | | | | | Change how the video decoding loop works. The structure should now be a bit easier to follow. The interactions on format changes are (probably) simpler. This also aligns the decoding loop with future planned changes, such as moving various things to separate threads.
* video: handle colorspace and aspect overrides separatelywm42014-05-021-1/+0
| | | | | Now the video filter code handles these explicitly, which should increase robustness (or at least find bugs earlier).
* video: don't drop last frame when deinterlacing with yadifwm42014-04-281-5/+7
| | | | | | | | | | | | | | | | | | Or in other words, add support for properly draining remaining frames from video filters. vf_yadif is buffering at least one frame, and the buffered frame was not retrieved on EOF. For most filters, ignore this for now, and just adjust them to the changed semantics of filter_ext. But for vf_lavfi (used by vf_yadif), real support is implemented. libavfilter handles this simply by passing a NULL frame to av_buffersrc_add_frame(), so we just have to make mp_to_av() handle NULL arguments. In load_next_vo_frame(), we first try to output a frame buffered in the VO, then the filter, and then (if EOF is reached and there's still no new frame) the VO again, with draining enabled. I guess this was implemented slightly incorrectly before, because the filter chain still could have had remaining output frames.
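On the libavfilter side, draining works by passing a NULL frame to the buffer source and then pulling the sink until it reports EOF. A small standalone sketch of that pattern (not the mpv wrapper code itself):

    #include <libavfilter/buffersrc.h>
    #include <libavfilter/buffersink.h>
    #include <libavutil/frame.h>
    #include <libavutil/error.h>

    /* Signal EOF once, then keep pulling the frames still buffered inside
     * the graph (e.g. yadif's delayed frame). Returns the number of drained
     * frames, or a negative AVERROR code. A real player would hand each
     * drained frame to the VO instead of just counting it. */
    static int drain_filter_graph(AVFilterContext *src, AVFilterContext *sink)
    {
        int r = av_buffersrc_add_frame(src, NULL);   /* NULL frame == EOF */
        if (r < 0)
            return r;
        AVFrame *frame = av_frame_alloc();
        if (!frame)
            return AVERROR(ENOMEM);
        int drained = 0;
        while ((r = av_buffersink_get_frame(sink, frame)) >= 0) {
            drained++;
            av_frame_unref(frame);
        }
        av_frame_free(&frame);
        return r == AVERROR_EOF ? drained : r;
    }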
* video: auto-insert software rotation filterwm42014-04-211-5/+44
| | | | | | | If the VO can't do rotation, insert a filter to do this. Note that this doesn't reuse the filter insertion code from command.c (used by "vf" input command), because that would end up more complicated: we don't even want to change the user filter option.
* command: allow changing filters before video chain initializationwm42014-03-301-4/+14
| | | | | | | Apparently this is more intuitive. Somewhat tricky, because of the odd state after loading a file but before initializing the VO.
* audio/out: make ao struct opaquewm42014-03-091-1/+1
| | | | | | We want to move the AO to its own thread. There's no technical reason for making the ao struct opaque to do this. But it helps us sleep at night, because we can control access to shared state better.
* client API: add events for video and audio reconfigwm42014-02-171-0/+4
|
* player: handle seek delays differentlywm42014-02-071-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The code removed from handle_input_and_seek_coalesce() did two things: 1. If there's a queued seek, stop accepting non-seek commands, and delay them to the next playloop iteration. 2. If a seek is executing (i.e. the seek was unqueued, and now it's trying to decode and display the first video frame), stop accepting seek commands (and in fact all commands that were queued after the first seek command). This logic is disabled if seeking started longer than 300ms ago. (To avoid starvation.) I'm not sure why 1. would be needed. It's still possible that a command immediately executed after a seek command sees a "seeking in progress" state, because it affects queued seeks only, and not seeks in progress. Drop this code, since it can easily lead to input starvation, and I'm not aware of any disadvantages. The logic in 2. is good to make seeking behave much better, as it guarantees that the video display is updated frequently. Keep the core idea, but implement it differently. Now this logic is applied to seeks only. Commands after the seek can execute freely, and like with 1., I don't see a reason why they couldn't. However, in some cases, seeks are supposed to be executed instantly, so queue_seek() needs an additional parameter to signal the need for immediate update. One nice thing is that commands like sub_seek automatically profit from the seek delay logic. On the other hand, hitting chapter seek multiple times still does not update the video on chapter boundaries (as it should be). Note that the main goal of this commit is actually simplification of the input processing logic and to allow all commands to be executed immediately.
* sub: uglify OSD code path with lockingwm42014-01-181-2/+2
| | | | | | | | | | | | | | | Do two things: 1. add locking to struct osd_state 2. make struct osd_state opaque While 1. is somewhat simple, 2. is quite horrible. Lots of code accesses lots of osd_state (and osd_object) members. To make sure everything is accessed synchronously, I prefer making osd_state opaque, even if it means adding pretty dumb accessors. All of this is meant to allow running VO in their own threads. Eventually, VOs will request OSD on their own, which means osd_state will be accessed from foreign threads.
* video: fix --brightness etc. optionswm42013-12-291-0/+14
| | | | They were set before the VO was initialized, which silently failed.
* player: add --secondary-sid for displaying a second subtitle streamwm42013-12-241-1/+2
| | | | | | | This is relatively hacky, but it's Christmas, so it's ok. This does two things: 1. allow selecting two subtitle tracks, and 2. include a hack that renders the second subtitle always as toptitle. See manpage additions how to use this.
* player: add infrastructure to select multiple tracks at oncewm42013-12-241-2/+2
| | | | | Of course this does not allow decoding multiple tracks at once; it just adds some minor infrastructure, which could be used to achieve this.
* player: redo demuxer stream selectionwm42013-12-241-6/+7
| | | | | | | Use struct track to decide what stream to select. Add a "selected" field and use that in some places instead of checking mpctx->current_track.
* video/decode: mp_msg conversionswm42013-12-211-0/+2
| | | | Doesn't cover vdpau/vaapi parts yet, because these are a bit messier.
* video/filter: mp_msg conversionswm42013-12-211-1/+1
|
* player: replace some overlooked mp_msgswm42013-12-191-3/+3
| | | | | There are still some using IDENTIFY, and some without context in configfiles.c.
* Split mpvcore/ into common/, misc/, bstr/wm42013-12-171-3/+3
|
* Move options/config related files from mpvcore/ to options/wm42013-12-171-2/+2
| | | | | | | | | Since m_option.h and options.h are extremely often included, a lot of files have to be changed. Moving path.c/h to options/ is a bit questionable, but since this is mainly about access to config files (which are also handled in options/), it's probably ok.
* Rename mp_core.h to core.hwm42013-12-171-1/+1
| | | | Get rid of the mp_ prefix.
* Move mpvcore/player/ to player/wm42013-12-171-0/+422