path: root/player/video.c
Commit message | Author | Age | Files | Lines
* video: fix timeline with some container formats | wm4 | 2015-01-06 | 1 | -0/+2
  Using edl or --merge-files with .avi files didn't work, because the DTS was not offset. Only the PTS was adjusted, which led to nonsense timestamps.
* video: batch query_format calls | wm4 | 2015-01-03 | 1 | -2/+1
  There are currently 568 pixel formats (actually fewer, but the namespace is this big), and for each format elaborate synchronization was done to call it synchronously on the VO. This is completely unnecessary, and we can do with just a single call.
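  A minimal sketch of the batching idea described above, using hypothetical names and a made-up format range (the real mpv structures and dispatch mechanism differ): one call fills a capability table for the whole format namespace, so only a single synchronization point with the VO is needed instead of one per format.

```c
#include <stdint.h>
#include <stdio.h>

#define IMGFMT_START 1000
#define IMGFMT_END   1568   /* roughly 568 entries in the format namespace */

struct vo { int dummy; };

/* Stand-in for the VO driver's per-format capability check. */
static int vo_driver_query_format(struct vo *vo, int imgfmt)
{
    (void)vo;
    return imgfmt % 2 == 0;  /* arbitrary answer, just for the sketch */
}

/* One call, one synchronization point: fill the whole table at once,
 * instead of waking up the VO hundreds of times. */
static void vo_query_all_formats(struct vo *vo,
                                 uint8_t caps[IMGFMT_END - IMGFMT_START])
{
    for (int fmt = IMGFMT_START; fmt < IMGFMT_END; fmt++)
        caps[fmt - IMGFMT_START] = vo_driver_query_format(vo, fmt) ? 1 : 0;
}

int main(void)
{
    struct vo vo = {0};
    uint8_t caps[IMGFMT_END - IMGFMT_START];
    vo_query_all_formats(&vo, caps);
    printf("format %d supported: %d\n", IMGFMT_START, caps[0]);
    return 0;
}
```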
* vf_vapoursynth: pass through container FPS value | wm4 | 2015-01-03 | 1 | -10/+8
  This is basically a hack; but apparently a needed one, since many vapoursynth filters insist on having a FPS set. We need to apply the FPS override before creating the filters. Also change some terminal output related to the FPS value.
* video: better pipelining with vf_vapoursynth | wm4 | 2015-01-03 | 1 | -3/+24
  Most of this is explained in the code comments. This change should improve performance with vapoursynth, especially if concurrent requests are used. This should change nothing if vf_vapoursynth is not in the filter chain, since non-threaded filters obviously can not asynchronously finish filtering of frames.
* vo_opengl_cb: pass context directly | wm4 | 2014-12-31 | 1 | -3/+1
  This is simpler than setting the context after VO creation, which requires the code to check for the context on every entrypoint.
* video: pass some VO params as struct | wm4 | 2014-12-31 | 1 | -3/+6
  Not particularly elegant, but better than adding more and more stuff to the relevant function signatures.
* player: fix a typo in message output | wm4 | 2014-12-24 | 1 | -1/+1
  This typo has been around for over a year. Oops.
* client API: expose OpenGL renderer | wm4 | 2014-12-09 | 1 | -0/+3
  This adds API to libmpv that lets host applications use the mpv opengl renderer. This is a more flexible (and possibly more portable) alternative to foreign window embedding (via --wid). This assumes that methods like context sharing and multithreaded OpenGL rendering are infeasible, and that a way is needed to integrate it with an application that uses a single thread to render everything.

  Add an example that does this with QtQuick/qml. The example is relatively lazy, but still shows how simple the integration is. The FBO indirection could probably be avoided, but would require more work (and would probably lead to worse QtQuick integration, because it would have to ignore transformations like rotation).

  Because this makes mpv directly use the host application's OpenGL context, there is no platform specific code involved in mpv, except for hw decoding interop.

  main.qml is derived from some Qt example.

  The following things are still missing:
  - a way to do better video timing
  - expose GL renderer options, allow changing them at runtime
  - support for color equalizer controls
  - support for screenshots
* player: when seeking past EOF with --keep-open, seek to last frame | wm4 | 2014-12-07 | 1 | -4/+16
  It feels strange that seeking past EOF with --keep-open actually leaves the player at a random position. You can't even unpause, because the demuxer is in the EOF state, and what you see on screen is just what was around before the seek.

  Improve this by attempting to seek to the last video frame if EOF happens. We explicitly don't do this if EOF was reached normally to increase robustness (if the VO got a frame since the last seek, it obviously means we had normal playback before EOF).

  If an error happens when trying to find the last frame (such as not actually finding a last frame because e.g. the demuxer misbehaves), this will probably turn your CPU into a heater. There is no logic to prevent reinitiating the last-frame search if the last-frame search reached EOF. (Pausing usually prevents EOF from being reached again after a successful last-frame search.)

  Fixes #819.
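  A sketch of the condition described above, with hypothetical names (the real check lives in the playloop and carries more state): the last-frame seek is only attempted when EOF was hit without any frame having reached the VO since the last seek.

```c
#include <stdbool.h>
#include <stdio.h>

struct video_status {
    bool eof_reached;             /* demuxer/decoder signaled EOF */
    bool frame_shown_since_seek;  /* the VO got at least one frame */
    bool keep_open;               /* --keep-open in effect */
};

/* Only fall back to a "seek to last frame" when EOF was reached right
 * after a seek; normal playback reaching EOF is left alone. */
static bool want_seek_to_last_frame(const struct video_status *st)
{
    return st->keep_open && st->eof_reached && !st->frame_shown_since_seek;
}

int main(void)
{
    struct video_status st = { .eof_reached = true, .keep_open = true };
    printf("seek to last frame: %d\n", want_seek_to_last_frame(&st));
    return 0;
}
```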
* Remove some superfluous NULL checks | wm4 | 2014-11-21 | 1 | -3/+0
  In all of these situations, NULL is logically not allowed, making the checks redundant. Coverity complained about accessing the pointers before checking them for NULL later.
* player: print anamorphic size only if video is anamorphic | wm4 | 2014-11-02 | 1 | -2/+5
  Has been annoying me since forever.
* player: update meaning of drop_frame_cnt | wm4 | 2014-11-01 | 1 | -2/+2
  Rename the variable, update comments, and update the documentation of the property which returns its value.
* player: show AV-desync message in all framedrop modes | wm4 | 2014-11-01 | 1 | -2/+1
  This was shown only if decoder-framedropping was enabled, and only if at least 50 frames were dropped by it. Since drop_frame_cnt used to mean "number of late frames", this code made sense, but this is not the case anymore: drop_frame_cnt can even be 0, all while video gets hopelessly behind audio.

  One problem with this is that short desync spikes (which can usually be dealt with) will also cause this message to be shown. If it gets triggered too often, the code will need to be adjusted.
* client API: better error reporting | wm4 | 2014-10-28 | 1 | -1/+4
  Give somewhat more information on playback failure.
* video: send MPV_EVENT_VIDEO_RECONFIG on uninit | wm4 | 2014-10-24 | 1 | -0/+1
  This event basically means "something about video changed", and uninit is certainly an important change.
* player: fix exiting if both audio and video fail initializing | wm4 | 2014-10-23 | 1 | -4/+2
  The player was supposed to exit playback if both video and audio failed to initialize (or if one of the streams was not selected when the other stream failed). This didn't work; for one this check was missing from one of the failure paths. And more importantly, both checked the current_track array incorrectly.

  Fix these issues, and move the failure handling code into a common function.

  CC: @mpv-player/stable
* player: fix --frames | wm4 | 2014-10-14 | 1 | -1/+3
  This could produce an extra frame, because reaching the maximum merely signals the playloop to exit, without strictly enforcing the limit.

  Fixes #1181.

  CC: @mpv-player/stable
* player: signal EOF when using --frames | wm4 | 2014-10-10 | 1 | -1/+1
* video: try harder to decode cover art picture only once | wm4 | 2014-10-09 | 1 | -2/+7
  For cover art, we pretend that the video stream is infinite, but also stop decoding once we have an image on the VO (this seems advantageous for the case when strange filters are inserted or the VO image gets lost). A while ago, the video chain started decoding 2 images though ("Non-monotonic video pts: 0.000000 <= 0.000000"), which is annoying and wasteful.

  Improve this by handling a certain corner case at initialization, which will decode a second image while the first one is still stuck in the filter chain.

  Just in case there are filters which buffer a lot, also force EOF filtering (which means we tell the filters to flush buffered frames).

  CC: @mpv-player/stable
* player: remove central uninit_player() function and flags mess | wm4 | 2014-10-03 | 1 | -9/+23
  Each subsystem (or similar thing) had an INITIALIZED_ flag assigned. The main use of this was that you could pass a bitmask of these flags to uninit_player(). Except in some situations where you wanted to uninitialize nearly everything, this wasn't really useful. Moreover, it was quite annoying that subsystems had most of the code in a specific file, but the uninit code in loadfile.c (because that's where uninit_player() was implemented).

  Simplify all this. Remove the flags; e.g. instead of testing for the INITIALIZED_AO flag, test whether mpctx->ao is set. Move uninit code to separate functions, e.g. uninit_audio_out().
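  A sketch of the resulting pattern, with stand-in types (the real MPContext and AO are far larger): "is the subsystem initialized?" becomes "is the pointer set?", and teardown is a small per-subsystem function instead of an INITIALIZED_AO bit handled by a central uninit_player(mask).

```c
#include <stdio.h>
#include <stdlib.h>

struct ao { int dummy; };             /* stands in for the real audio output */
struct MPContext { struct ao *ao; };  /* stands in for the player core */

/* Per-subsystem teardown; the pointer itself is the "initialized" state. */
static void uninit_audio_out(struct MPContext *mpctx)
{
    if (mpctx->ao) {
        free(mpctx->ao);              /* stands in for the real AO shutdown */
        mpctx->ao = NULL;
    }
}

int main(void)
{
    struct MPContext mpctx = { .ao = calloc(1, sizeof(struct ao)) };
    uninit_audio_out(&mpctx);         /* safe to call more than once */
    uninit_audio_out(&mpctx);
    printf("ao is %s\n", mpctx.ao ? "set" : "NULL");
    return 0;
}
```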
* player: don't print audio/video init failure message twice | wm4 | 2014-10-02 | 1 | -2/+2
  The messages "Audio: no audio" and "Video: no video" could be printed twice each if initializing them failed. Prevent this silliness.

  CC: @mpv-player/stable
* video: change automatic rotation and 3D filter insertion | wm4 | 2014-09-27 | 1 | -6/+3
  We inserted these filters with fixed parameters, which was ok. But this also didn't change image parameters for the filters down the filter chain and the VO. For example, if rotation by 90° was requested by the file, we would insert a filter and rotate the video, but the VO would still receive image parameters that requested rotation by 90°. This wasn't a problem, but it could become one.

  Fix this by letting the filters automatically pick up the image params. The image params are reset on application. (We could probably also always try to apply and reset image params in a filter, instead of having special "auto" parameters. This would probably work, and video.c would insert a "rotate=0" filter. But I'm afraid this would be confusing and the current solution is cosmetically slightly nicer.)

  Unfortunately, the vf_stereo3d.c change turned out to be a big mess, but once the "internal" filter is fully replaced with libavfilter, most of this can be radically simplified.
* player: rate-limit OSD text update | wm4 | 2014-09-25 | 1 | -1/+2
  There's no need to update OSD messages and the terminal status if nobody is going to see it. Since the player doesn't block on video display anymore, this update happens too often and probably burns slightly more CPU than necessary. (OSD redrawing is handled separately, so it's just mostly useless text processing and such.)

  Change it so that it's updated only on every video frame or every 50 ms (whichever comes first). For VO OSD, we could in theory try to lock to the OSD redraw heuristic or the display refresh rate, but that's more complicated and doesn't work for the terminal status.
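  A sketch of that rate limit, assuming hypothetical names and a microsecond clock: refresh the status text when a new frame was shown, or when 50 ms have passed, whichever comes first.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define OSD_STATUS_INTERVAL_US 50000  /* 50 ms */

struct status_throttle {
    int64_t last_update_us;
};

/* Returns true if the OSD/terminal status should be rebuilt now. */
static bool should_update_status(struct status_throttle *t, int64_t now_us,
                                 bool new_video_frame)
{
    if (new_video_frame || now_us - t->last_update_us >= OSD_STATUS_INTERVAL_US) {
        t->last_update_us = now_us;
        return true;
    }
    return false;
}

int main(void)
{
    struct status_throttle t = {0};
    printf("%d\n", should_update_status(&t, 60000, false)); /* 1: 50 ms elapsed */
    printf("%d\n", should_update_status(&t, 70000, false)); /* 0: only 10 ms since */
    printf("%d\n", should_update_status(&t, 70001, true));  /* 1: new frame forces it */
    return 0;
}
```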
* video: filter new frames at a better time (2) | wm4 | 2014-09-22 | 1 | -7/+9
  We generally want 2 things:
  1. minimal wakeups for decoding each frame
  2. minimal number of frames decoded on continuous seeking

  Commit 35810cb8 changed this a bit, and fixed 1. But it broke 2., and now it decodes 2 frames instead of 1 when you keep seeking (arrow key held down or such). This made seeking appear slower.

  Fix this by making the logic more explicit. In particular, call the filters only if we actually try to get a new frame.

  When playing with --no-audio and all other distractions disabled (like OSC), it still wakes up 2 times per frame - but the second time is merely because the VO didn't accept the new frame yet.
* video: actually count decoder-dropped frames | wm4 | 2014-09-20 | 1 | -4/+7
  Normally, feeding a packet to the decoder should always return a frame _if_ we received a frame before. So while we can't know exactly whether a frame was dropped, at least the normal case is easily detectable. This means we display something closer to the actual framedrop count, instead of a bad guess.
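  A sketch of that heuristic with hypothetical names: once the decoder has delivered its first frame, a packet that goes in without a frame coming out is counted as a probable drop.

```c
#include <stdbool.h>
#include <stdio.h>

struct drop_counter {
    bool got_first_frame;
    int dropped_frames;
};

/* Call once per packet fed to the decoder. */
static void count_decoder_drop(struct drop_counter *c, bool frame_returned)
{
    if (frame_returned)
        c->got_first_frame = true;
    else if (c->got_first_frame)
        c->dropped_frames++;   /* packet in, no frame out: probably dropped */
}

int main(void)
{
    struct drop_counter c = {0};
    count_decoder_drop(&c, false);  /* codec delay before the first frame: not counted */
    count_decoder_drop(&c, true);
    count_decoder_drop(&c, false);  /* counted as a drop */
    printf("dropped: %d\n", c.dropped_frames);  /* 1 */
    return 0;
}
```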
* video: improve decoder-based framedropping mode | wm4 | 2014-09-20 | 1 | -6/+5
  This is the "old" framedropping mode (derived from MPlayer). At least in the mplayer2/mpv source base, it stopped working properly years ago (or maybe it never worked properly). For one, it depends on the video framerate, and assumes a constant framerate. Another problem was that it could lead to freezing video display: video could get so much behind that it couldn't recover from framedrop.

  Make some small changes to improve this. Don't use the current audio position to check how much we are behind. Instead, use the last known A/V difference. last_av_difference is updated only when a video frame is scheduled for display. This means we can stop dropping once we're done catching up, even if video is technically still behind. What helps us here is that this forces a video frame to be displayed after a while. Likewise, we reset the dropped_frames count only when scheduling a new frame for display.

  Some inspiration was taken from earlier work by xnor (see issue #620), although the implementation turned out quite different.

  This still uses the demuxer-reported (possibly broken) FPS value. It also doesn't account for filters changing FPS. We can't do much about this, because without decoding _and_ filtering, we just can't know how long a frame is. In theory, you could derive that from the raw packet timestamps and the filter chain contents, but actually doing this is too involved. Fortunately, the main thing the FPS affects is actually the displayed framedrop count.
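  A sketch of the dropping decision with hypothetical names; the actual code has more state. Dropping keys off the last known A/V difference (updated only when a frame is scheduled for display) and a frame time derived from the possibly unreliable demuxer FPS, with a cap so a frame is eventually shown.

```c
#include <stdbool.h>
#include <stdio.h>

struct framedrop_state {
    double last_av_difference;  /* updated only when a frame is scheduled */
    int dropped_frames;         /* reset when a frame is scheduled, too */
    int max_consecutive_drops;  /* safety limit so a frame gets shown eventually */
};

static bool should_drop_in_decoder(const struct framedrop_state *s, double fps)
{
    double frame_time = fps > 0 ? 1.0 / fps : 0;
    if (frame_time <= 0)
        return false;                       /* unknown FPS: don't guess */
    if (s->dropped_frames >= s->max_consecutive_drops)
        return false;                       /* force a frame to be displayed */
    return s->last_av_difference > frame_time;  /* behind by more than one frame */
}

int main(void)
{
    struct framedrop_state s = { .last_av_difference = 0.10,
                                 .max_consecutive_drops = 4 };
    printf("drop at 25 fps: %d\n", should_drop_in_decoder(&s, 25));  /* 1: 100 ms behind */
    return 0;
}
```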
* player: reset last_av_difference if not applicable | wm4 | 2014-09-20 | 1 | -0/+1
  Don't let stale values linger around. Also fix a slightly related case in audio.c.
* video: separate calling decoder/filter | wm4 | 2014-09-18 | 1 | -14/+22
  Rename video_decode_and_filter to video_filter, and add a new video_decode_and_filter function. This function now calls the decoder. This is done so that we can check filters a second time after decoding, which avoids a useless playloop iteration. (This and the previous commits are really just microoptimizations, which simply reduce the number of times the playloop has to recheck everything.)
* video: check whether there are enough frames after filtering | wm4 | 2014-09-18 | 1 | -6/+11
  Move the check to a function. Run the check a second time after decoding/filtering. This second check is strictly speaking redundant (which is why it wasn't done until now), but it avoids a useless playloop iteration.
* video: filter new frames at a better time | wm4 | 2014-09-18 | 1 | -24/+24
  Move this code below the code that "shifts" the newly filtered frame. This allows us to skip a useless playloop iteration later, because obviously we need to filter a new frame after the previous frame has been "shifted", and not before that.
* video: initial Matroska 3D support | wm4 | 2014-08-30 | 1 | -0/+12
  This inserts an automatic conversion filter if a Matroska file is marked as 3D (StereoMode element). The basic idea is similar to video rotation and colorspace handling: the 3D mode is added as a property to the video params. Depending on this property, a video filter can be inserted. As of this commit, extending mp_image_params is actually completely unnecessary - but the idea is that it will make it easier to integrate with VOs supporting stereo 3D mogrification.

  Although vo_opengl does support some stereo rendering, it didn't support the mode my sample file used, so I'll leave that part for later.

  Note that most mappings from Matroska mode to vf_stereo3d mode are probably wrong, and some are missing. Assuming that Matroska modes, vf_stereo3d in modes, and out modes are all the same might be an oversimplification - we'll see.

  See issue #1045.
* player: minor changes | wm4 | 2014-08-25 | 1 | -8/+3
  This shouldn't change anything functionally.

  Change the A/V desync message. --framedrop is enabled by default now, so the text must be changed a little. I've never heard of audio outputs messing up A/V sync recently, so remove that part.

  Remove the unused ao_pts field.

  Reorder 2 A/V sync related expressions so that they look the same.
* player: restore silent seeking | wm4 | 2014-08-23 | 1 | -1/+2
  Commit 846257da introduced an accidental feature: if you kept seeking (so playback never really resumes), the audio would never be played. This was nice, but commit 4c25b000 accidentally removed it again (due to video_next_pts being available earlier than it used to be, so audio could be played before the player executed the next queued seek).

  Implicitly reintroduce the old behavior again by not decoding a second video frame immediately. Usually, the second frame is used to compute the frame duration needed for accurate framedropping, but since the first frame after a seek is never dropped, we don't need this. Now the video code will queue the new frame to the VO immediately, and since fill_audio_out_buffers() is called in the playloop before write_video() and execute_queued_seek(), it never gets the chance to enter STATUS_READY, and seeks will be silent.

  This also has a nice side-effect: since the second frame is not decoded and filtered, seeking becomes slightly faster (back to the same level as with framedrop disabled).

  It seems this still sometimes plays a period of audio when keeping a seek key down. In my tests, this appeared to happen because the seek finished before the next key repeat was sent.
* player: fix recent speed change regression | wm4 | 2014-08-22 | 1 | -0/+2
  Commit 5afc025c broke this. The reason is that mpctx->delay is updated when a new video frame is added. This value is also needed to resync audio, but it will be for the wrong PTS. They must be consistent with each other, and if they aren't, initial sync will be off by N video frames, which results at least in worse user experience.

  This can be reproduced by for example heavily switching between normal and 2x speed, or similar.

  Fix by readding the video_next_pts field (keeping its use minimal, instead of reverting the commit that removed it).
* video: refactor queue handling | wm4 | 2014-08-22 | 1 | -75/+53
  This simplifies the code, and fixes an odd bug: the second-last frame was displayed for a very short duration if framedrop was enabled. The reason was that basically the time difference between the second-last and last frame was skipped, because at this point EOF was already signaled. Also see commit b0959488 for a similar issue in the same code.

  This removes the messiness of the next_frame 2-frame queue, and strictly runs the "new frame" code when a frame is moved to the first position of the queue, instead of somehow messing with return codes.

  This also merges update_video() into video_output_image().
* video: get rid of video_next_pts field | wm4 | 2014-08-22 | 1 | -9/+4
  Not really needed anymore. Code should be mostly equivalent. Also get rid of some other now-unused or outdated things.
* video: move some code around | wm4 | 2014-08-22 | 1 | -46/+45
  No functional changes. init_vo() is now needed a bit further down, and moving it keeps definition and use close. adjust_sync() will be used by a function further up in one of the following commits.
* video: minor simplification | wm4 | 2014-08-22 | 1 | -21/+11
  This is mostly equivalent, but simpler, and reduces some duplication.
* video: don't assume query_format is thread-safe | wm4 | 2014-08-20 | 1 | -5/+2
  Although it's probably safe for most VOs, there's no guarantee.
* video: add VO framedropping mode | wm4 | 2014-08-15 | 1 | -4/+4
  This mostly uses the same idea as with vo_vdpau.c, but much simplified. On X11, it tries to get the display framerate with XF86VM, and limits the frequency of new video frames against it. Note that this is an old extension, and is confirmed not to work correctly with multi-monitor setups. But we're using it because it was already around (it is also used by vo_vdpau).

  This attempts to predict the next vsync event by using the time of the last frame and the display FPS. Even if that goes completely wrong, the results are still relatively good.

  On other systems, or if the X11 code doesn't return a display FPS, a framerate of 1000 is assumed. This is infinite for all practical purposes, and means that only frames which are definitely too late are dropped. This probably has worse results, but is still useful.

  "--framedrop=yes" is basically replaced with "--framedrop=decoder". The old framedropping mode is kept around, and should perhaps be improved. Dropping on the decoder level is still useful if decoding itself is too slow.
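  A sketch of one plausible reading of that check, with hypothetical names (the real VO code is more involved): the next vsync is predicted from the last flip time and the display FPS, and a frame whose display window is already over by that vsync is dropped. With no known display FPS, 1000 Hz is assumed, so only clearly late frames get dropped.

```c
#include <stdbool.h>
#include <stdio.h>

/* All times in seconds on the same realtime clock. */
static bool vo_would_drop_frame(double frame_pts, double frame_duration,
                                double last_flip_time, double display_fps)
{
    if (display_fps <= 0)
        display_fps = 1000;  /* "infinite" for practical purposes */
    double next_vsync = last_flip_time + 1.0 / display_fps;
    /* The frame's display window ends before we could actually show it:
     * drop it instead of flipping uselessly. */
    return frame_pts + frame_duration < next_vsync;
}

int main(void)
{
    /* 60 Hz display, 24 fps frame whose target time is ~30 ms in the past. */
    printf("%d\n", vo_would_drop_frame(0.970, 1.0 / 24, 1.000, 60.0)); /* 1 */
    /* Same frame with no known display FPS (1000 Hz assumed). */
    printf("%d\n", vo_would_drop_frame(0.970, 1.0 / 24, 1.000, 0));    /* 0 */
    return 0;
}
```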
* video: reduce non-sense messages when playing coverart | wm4 | 2014-08-13 | 1 | -11/+14
  Don't print PTS warnings by skipping the normal video path.
* video: don't run new frame processing on every iteration | wm4 | 2014-08-13 | 1 | -19/+22
  This ran adjust_sync() on every playloop iteration, instead of every newly decoded frame. It seems this was idempotent in the common case, but the code was originally designed to be run once only, so restore that.
* video: move some more code around | wm4 | 2014-08-13 | 1 | -38/+49
  No functional changes.
* video: move some code around | wm4 | 2014-08-13 | 1 | -45/+40
* video: exit early when nothing to do | wm4 | 2014-08-13 | 1 | -7/+7
  These cases were probably confusing. Exit early, which makes it much clearer what's going on. Should not change anything functionally.
* video: minor simplification of the old framedrop code | wm4 | 2014-08-13 | 1 | -10/+6
  No changes in functionality, other than being slightly more correct at stream EOF.
* video: fix and simplify video format changes and last frame display | wm4 | 2014-08-12 | 1 | -108/+86
  The previous commit broke these things, and fixing them is separate in this commit in order to reduce the volume of changes.

  Move the image queue from the VO to the playback core. The image queue is a remnant of the old way vdpau was implemented, and increasingly became more and more an artifact. In the end, it did only one thing: computing the duration of the current frame. This was done by taking the PTS difference between the current and the future frame. We keep this, but by moving it out of the VO, we don't have to special-case format changes anymore. This simplifies the code a lot.

  Since we need the queue to compute the duration only, a queue size larger than 2 makes no sense, and we can hardcode that.

  Also change how the last frame is handled. The last frame is a bit of a problem, because video timing works by showing one frame after another, which makes it a special case. Make the VO provide a function to notify us when the frame is done, instead. The frame duration is used for that.

  This is not perfect. For example, changing playback speed during the last frame doesn't update the end time. Pausing will not stop the clock that times the last frame. But I don't think this matters for such a corner case.
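  A sketch of the duration computation that justifies the hardcoded queue size of 2, with hypothetical names: the only reason to look one frame ahead is to subtract neighboring PTS values; the fallback covers the last frame, which has no successor.

```c
#include <stdio.h>

struct frame_queue {
    double pts[2];   /* pts[0]: frame being shown, pts[1]: the one after it */
    int num_frames;
};

/* Duration of the current frame; the fallback (e.g. 1/fps) is used for
 * the last frame, which has nothing to diff against. */
static double current_frame_duration(const struct frame_queue *q, double fallback)
{
    if (q->num_frames >= 2 && q->pts[1] > q->pts[0])
        return q->pts[1] - q->pts[0];
    return fallback;
}

int main(void)
{
    struct frame_queue q = { .pts = {0.500, 0.542}, .num_frames = 2 };
    printf("duration: %.3f s\n", current_frame_duration(&q, 1.0 / 24));  /* 0.042 */
    return 0;
}
```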
* video: move display and timing to a separate thread | wm4 | 2014-08-12 | 1 | -87/+25
  The VO is run inside its own thread. It also does most of video timing. The playloop hands the image data and a realtime timestamp to the VO, and the VO does the rest. In particular, this allows the playloop to do other things, instead of blocking for video redraw. But if anything accesses the VO during video timing, it will block.

  This also fixes vo_sdl.c event handling; but that is only a side-effect, since reimplementing the broken way would require more effort.

  Also drop --softsleep. In theory, this option helps if the kernel's sleeping mechanism is too inaccurate for video timing. In practice, I haven't ever encountered a situation where it helps, and it just burns CPU cycles. On the other hand it's probably actively harmful, because it prevents the libavcodec decoder threads from doing real work.

  Side note: Originally, I intended that multiple frames can be queued to the VO. But this is not done, due to problems with OSD and certain other features. OSD in particular is simply designed in a way that it can be neither timed nor copied, so you do have to render it into the video frame before you can draw the next frame. (Subtitles have no such restriction. sd_lavc was even updated to fix this.) It seems the right solution to queuing multiple VO frames is rendering on VO-backed framebuffers, like vo_vdpau.c does. This requires VO driver support, and is out of scope of this commit.

  As a consequence, the VO has a queue size of 1. The existing video queue is just needed to compute frame duration, and will be moved out in the next commit.
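  A minimal sketch of the hand-off, assuming hypothetical names and the queue size of 1 described above: the playloop stores the image together with a realtime display target and returns immediately, while the VO thread waits for the slot to fill, then sleeps until that time and renders.

```c
#include <pthread.h>
#include <stdbool.h>

/* Shared slot between the playloop and the VO thread (queue size 1). */
struct vo_slot {
    pthread_mutex_t lock;
    pthread_cond_t wakeup;
    void *image;          /* stands in for the real image type */
    double display_time;  /* realtime target in seconds */
    bool has_frame;
};

/* Playloop side: queue the frame and return without waiting for the flip. */
static void vo_queue_frame(struct vo_slot *s, void *image, double display_time)
{
    pthread_mutex_lock(&s->lock);
    s->image = image;
    s->display_time = display_time;
    s->has_frame = true;
    pthread_cond_signal(&s->wakeup);
    pthread_mutex_unlock(&s->lock);
}

/* VO thread side: wait for a frame and take ownership of it; the caller
 * then sleeps until display_time and renders. */
static void *vo_wait_frame(struct vo_slot *s, double *display_time)
{
    pthread_mutex_lock(&s->lock);
    while (!s->has_frame)
        pthread_cond_wait(&s->wakeup, &s->lock);
    void *img = s->image;
    *display_time = s->display_time;
    s->has_frame = false;
    pthread_mutex_unlock(&s->lock);
    return img;
}

int main(void)
{
    struct vo_slot s = { .lock = PTHREAD_MUTEX_INITIALIZER,
                         .wakeup = PTHREAD_COND_INITIALIZER };
    int dummy_image = 0;
    vo_queue_frame(&s, &dummy_image, 1.5);
    double when;
    void *img = vo_wait_frame(&s, &when);  /* returns immediately: slot is filled */
    return img == &dummy_image ? 0 : 1;
}
```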
* video: don't keep multiple pointers to hwdec info struct | wm4 | 2014-08-11 | 1