path: root/mpvcore/player/audio.c
Commit history (newest first); each entry lists the commit message, author, date, and files/lines changed.
* Move mpvcore/player/ to player/ (wm4, 2013-12-17, 1 file changed, -471/+0)
* video: replace d_video->pts field, change PTS jump checks (wm4, 2013-11-27, 1 file changed, -1/+1)
The d_video->pts field was a bit strange. The code overwrote it multiple times (on decoding, on filtering, then once again...), and it wasn't really clear what purpose this field had exactly. Replace it with the mpctx->video_next_pts field, which is relatively unambiguous.

Move the decreasing PTS check to dec_video.c. This means it acts on decoder output, not on filter output. (Just like in the previous commit, assume the filter chain is sane.) Drop the jitter vs. reset semantics; the PTS determined by dec_video.c never goes backwards, and demuxer timestamps don't "jitter".
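A minimal sketch of a decreasing-PTS check applied to decoder output, in the spirit of the commit above; the struct and function names are made up for illustration and are not mpv's actual code:

    #include <math.h>

    struct video_pts_state {
        /* Last PTS seen on decoder output; the caller initializes this to
         * the first valid timestamp. */
        double decoded_pts;
    };

    /* Keep the decoder-output PTS from going backwards: if a new timestamp
     * is missing (NaN) or smaller than the previous one, reuse the previous
     * one instead. */
    static double sanitize_decoded_pts(struct video_pts_state *st, double pts)
    {
        if (isnan(pts) || pts < st->decoded_pts)
            return st->decoded_pts;
        st->decoded_pts = pts;
        return pts;
    }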
* audio: respect --end/--length with spdif passthrough (wm4, 2013-11-23, 1 file changed, -1/+2)
In theory, we can't really do this, because we don't know when a spdif frame ends. Spdif transports compressed audio through audio setups that were originally designed for PCM only (which includes the audio filter chain, the AO API, most audio output APIs, etc.), and to reach this goal, spdif pretends to be PCM. Compressed data frames are padded with zeros until a certain data rate is reached, which corresponds to a pseudo-PCM format with 2 bytes per sample and 2 channels at 48000 Hz.

Of course an actual spdif frame is significantly larger than a frame of the PCM format it pretends to be, so cutting audio data on frame boundaries (according to the pseudo-PCM format) merely yields an incomplete and broken frame, not audio that plays for the desired duration.

However, sending an incomplete frame might still be much better than the current behavior, which simply ignores --end/--length (but still lets the video end at the exact end time). Should this result in trouble with real spdif receivers, this commit probably has to be reverted.
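The arithmetic behind that pseudo-PCM format is straightforward: 2 bytes per sample times 2 channels times 48000 Hz gives 192000 bytes of pseudo-PCM per second of playback. A small illustration (the function name is made up; only the constants come from the commit message):

    #include <stdint.h>

    /* How many pseudo-PCM bytes correspond to a given playback duration
     * under the 2-byte / 2-channel / 48000 Hz format spdif pretends to be. */
    static int64_t spdif_bytes_for_duration(double seconds)
    {
        const int bytes_per_sample = 2;
        const int channels = 2;
        const int samplerate = 48000;
        /* 2 * 2 * 48000 = 192000 bytes per second of pseudo-PCM */
        int64_t samples = (int64_t)(seconds * samplerate);
        return samples * bytes_per_sample * channels;
    }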
* video: move decoder context from sh_video into new struct (wm4, 2013-11-23, 1 file changed, -2/+3)
This is similar to the sh_audio commit, and mostly cosmetic in nature, except that it also adds automatic freeing of the decoder driver's state struct (which was in sh_video->context, now in dec_video->priv). Also remove all the stheader.h fields that are not needed anymore.
* audio: move decoder context from sh_audio into new struct (wm4, 2013-11-23, 1 file changed, -38/+47)
Move all state that basically changes during decoding or is needed in order to manage decoding itself into a new struct (dec_audio).

sh_audio (defined in stheader.h) is supposed to be the audio stream header. This should reflect the file headers for the stream. Putting the decoder context there is strange design, to say the least.
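A rough sketch of the separation this describes: the stream header keeps only what the file declares, while per-decoder state lives in its own struct. All field and type names here are placeholders, not mpv's actual definitions:

    struct ad_functions_example;     /* decoder driver vtable (placeholder) */
    struct mp_audio_buffer_example;  /* buffer for decoded audio (placeholder) */

    /* What the stream header knows: properties declared by the file. */
    struct sh_audio_example {
        int samplerate;
        int channels;
        int sample_format;
    };

    /* What only decoding needs: driver, private state, output buffer. */
    struct dec_audio_example {
        struct sh_audio_example *header;              /* stream being decoded */
        const struct ad_functions_example *ad_driver; /* decoder implementation */
        void *priv;                                   /* driver's private state */
        struct mp_audio_buffer_example *decode_buffer;
    };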
* audio/filter: remove unneeded AF_CONTROLs, convert to enum (wm4, 2013-11-18, 1 file changed, -2/+1)
The AF control commands used an elaborate and unnecessary organization for the command constants. Get rid of all that and convert the definitions to a simple enum.

Also remove the control commands that were not really needed, because they were not used outside of the filters that implemented them.
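A minimal sketch of the kind of definition this results in: filter control commands as a plain enum rather than composed constants. These names are invented for illustration, not the actual AF_CONTROL_* values:

    enum af_control_example {
        AF_CONTROL_EXAMPLE_REINIT = 1,   /* reconfigure the filter for a new format */
        AF_CONTROL_EXAMPLE_SET_VOLUME,   /* pass a volume level to the filter */
        AF_CONTROL_EXAMPLE_SET_PAN,      /* pass panning parameters */
    };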
* audio: use the decoder buffer's format, not sh_audio (wm4, 2013-11-18, 1 file changed, -6/+19)
When the decoder detects a format change, it overwrites the values stored in sh_audio (this affects the members sample_format, samplerate, channels). In the case when the old audio data still needs to be played/filtered, the audio format as identified by sh_audio and the format used for the decoder buffer can mismatch.

In particular, they will mismatch in the very unlikely but possible case that the audio chain is reinitialized while old data is draining during a format change. In other words, sh_audio might contain the new format, while the audio chain is still configured to use the old format.

Currently, the audio code (player/audio.c and init_audio_filters) accesses sh_audio to get the current format. This is in theory incorrect for the reasons mentioned above. Use the decoder buffer's format instead, which should be correct at any point.
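A hedged illustration of the idea: configure the filter chain from the buffer's format rather than the stream header, so draining old data keeps the format that data actually has. All names below are placeholders:

    struct audio_format_example {
        int samplerate;
        int channels;
        int sample_format;
    };

    struct audio_state_example {
        struct audio_format_example header_format; /* what sh_audio would report */
        struct audio_format_example buffer_format; /* format of buffered, decoded data */
    };

    static struct audio_format_example
    format_for_filter_init(const struct audio_state_example *a)
    {
        /* The buffer's format is authoritative for data still queued;
         * the header may already describe the *next* format after a
         * mid-stream format change. */
        return a->buffer_format;
    }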
* audio: add support for using non-interleaved audio from decoders directly (wm4, 2013-11-12, 1 file changed, -8/+5)
Most libavcodec decoders output non-interleaved audio. Add direct support for this, and remove the hack that repacked non-interleaved audio back to packed audio.

Remove the minlen argument from the decoder callback. Instead of forcing every decoder to have its own decode loop to fill the buffer until minlen is reached, leave this to the caller. So if a decoder doesn't return enough data, it's simply called again. (In future, I even want to change it so that decoders don't read packets directly, but instead the caller has to pass packets to the decoders. This fits well with this change, because now the decoder callback typically decodes at most one packet.)

ad_mpg123.c receives some heavy refactoring. The main problem is that it wanted to handle format changes when there was no data in the decode output buffer yet. This sounds reasonable, but actually it would write data into a buffer prepared for old data, since the caller doesn't know about the format change yet. (I.e. the best place for a format change would be _after_ writing the last sample to the output buffer.) It's possible that this code was not perfectly sane before this commit, and perhaps lost one frame of data after a format change, but I didn't confirm this. Trying to fix this, I ended up rewriting the decoding and also the probing.
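A sketch of the caller-side loop that replaces the minlen argument: the caller, not the decoder, keeps decoding until enough samples are buffered. The helper names are assumptions, not mpv's actual API:

    #include <stdbool.h>

    /* Assumed helpers: decode roughly one packet into the decoder's buffer
     * (false on EOF or error), and query how many samples are buffered. */
    bool decode_one_packet_example(void *decoder_ctx);
    int buffered_samples_example(void *decoder_ctx);

    static bool fill_audio_buffer_example(void *decoder_ctx, int min_samples)
    {
        while (buffered_samples_example(decoder_ctx) < min_samples) {
            if (!decode_one_packet_example(decoder_ctx))
                return false;   /* decoder has no more data */
        }
        return true;
    }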
* audio/filter: fix mul/delay scale and values (wm4, 2013-11-12, 1 file changed, -1/+1)
Before this commit, the af_instance->mul/delay values were in bytes. Using bytes is confusing for non-interleaved audio, so switch mul to samples, and delay to seconds.

For delay, seconds are more intuitive than bytes or samples, because it's used for the latency calculation. We also might want to replace the delay mechanism with real PTS tracking inside the filter chain some time in the future, and PTS will also require time-adjustments to be done in seconds.

For most filters, we just remove the redundant mul=1 initialization. (Setting this used to be required, but not anymore.)
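With delay expressed in seconds, a chain's latency is simply the sum of the per-filter delays. A small illustration; the struct layout and names are assumptions for this sketch:

    struct af_example {
        double mul;               /* output samples per input sample */
        double delay;             /* latency added by this filter, in seconds */
        struct af_example *next;  /* next filter in the chain */
    };

    /* Total latency of the filter chain, in seconds. */
    static double chain_latency_seconds(const struct af_example *first)
    {
        double total = 0;
        for (const struct af_example *af = first; af; af = af->next)
            total += af->delay;
        return total;
    }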
* audio: switch output to mp_audio_buffer (wm4, 2013-11-12, 1 file changed, -62/+72)
Replace the code that used a single buffer with mp_audio_buffer. This also enables non-interleaved output operation, although it's still disabled, and no AO supports it yet.
* player: set PulseAudio stream title to window title (wm4, 2013-11-10, 1 file changed, -0/+1)
Set the PulseAudio stream title, just like the VO window title is set. Refactor update_vo_window_title() so that we can use it for AOs too. The ao_pulse.c bit is stolen from MPlayer.
* Remove sh_audio->samplesize (wm4, 2013-11-09, 1 file changed, -1/+1)
This member was redundant. sh_audio->sample_format indicates the sample size already.

The TV code is a bit strange: the redundant sample size was part of the internal TV interface. Assume it's really redundant and not something else. The PCM decoder ignores the sample size anyway.
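A small illustration of why the field was redundant: the sample size can be derived from the sample format. The format names and helper below are placeholders, not mpv's actual audio format constants:

    enum sample_format_example {
        SAMPLE_FMT_EXAMPLE_U8,
        SAMPLE_FMT_EXAMPLE_S16,
        SAMPLE_FMT_EXAMPLE_S32,
        SAMPLE_FMT_EXAMPLE_FLOAT,
    };

    /* Bytes per sample, derived from the format alone. */
    static int sample_size_from_format(enum sample_format_example fmt)
    {
        switch (fmt) {
        case SAMPLE_FMT_EXAMPLE_U8:    return 1;
        case SAMPLE_FMT_EXAMPLE_S16:   return 2;
        case SAMPLE_FMT_EXAMPLE_S32:   return 4;
        case SAMPLE_FMT_EXAMPLE_FLOAT: return 4;
        }
        return 0;
    }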
* player: factor audio buffer clearing code (wm4, 2013-11-08, 1 file changed, -0/+17)
Note that the change in seek_reset is not entirely equivalent: we even drop the remainder of buffered audio when seeking. This should be more correct, because the whole point of the reset_ao parameter is to control whether audio queued for output should be dropped or not.
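A hedged sketch of what such a factored helper could look like: one function that drops buffered audio, with a flag controlling whether audio already queued in the AO is dropped as well. The names are illustrative and do not match mpv's internals exactly:

    #include <stdbool.h>

    struct audio_buffers_example;  /* opaque for this sketch */

    /* Assumed helpers for the sketch. */
    void filter_chain_reset_example(struct audio_buffers_example *b);
    void output_buffer_clear_example(struct audio_buffers_example *b);
    void ao_reset_example(struct audio_buffers_example *b);

    static void clear_audio_buffers_example(struct audio_buffers_example *b,
                                            bool clear_ao)
    {
        filter_chain_reset_example(b);   /* drop data inside the filter chain */
        output_buffer_clear_example(b);  /* drop decoded audio not yet sent out */
        if (clear_ao)
            ao_reset_example(b);         /* also drop audio queued in the AO */
    }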
* audio: don't let ao_lavc access frontend internals, change gapless audio (wm4, 2013-11-08, 1 file changed, -8/+17)
ao_lavc.c accesses ao->buffer, which I consider internal. The access was done in ao_lavc.c/uninit(), which tried to get the left-over audio in order to write the last (possibly partial) audio frame. The play() function didn't accept partial frames, because the AOPLAY_FINAL_CHUNK flag was not correctly set, and handling it otherwise would require an internal FIFO.

Fix this by making sure that with gapless audio (used with encoding), AOPLAY_FINAL_CHUNK is set only once, instead of when each file ends. Basically, move the hack in ao_lavc's uninit to uninit_player.

One thing cannot be handled entirely correctly: if gapless audio is active, we don't really know whether the AO is closed because the file ended playing (i.e. we want to send the buffered remainder of the audio to the AO), or whether the user is quitting the player. (The stop_play flag is overwritten; fixing that is perhaps not worth it.) Handle this by adding additional code to drain the AO and the buffers when playback is quit (see play_current_file() change).

Test case: mpv avdevice://lavfi:sine=441 avdevice://lavfi:sine=441 -length 0.2267 -gapless-audio
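A sketch of the final-chunk handling described above: with gapless audio, the "final chunk" flag should only be raised on the very last write of the whole playback, not at every file boundary. This is an assumption-laden illustration, not the actual mpv code; the names are stand-ins:

    #include <stdbool.h>

    enum { PLAY_FINAL_CHUNK_EXAMPLE = 1 };  /* stand-in for AOPLAY_FINAL_CHUNK */

    /* Assumed AO entry point for this sketch. */
    int ao_play_example(void *ao, void *data, int bytes, int flags);

    static int flush_remaining_audio_example(void *ao, void *data, int bytes,
                                             bool end_of_file,
                                             bool end_of_playback)
    {
        int flags = 0;
        /* Only the last write of the entire (gapless) playback carries the
         * final-chunk flag; intermediate file ends feed the AO normally. */
        if (end_of_file && end_of_playback)
            flags |= PLAY_FINAL_CHUNK_EXAMPLE;
        return ao_play_example(ao, data, bytes, flags);
    }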
* Split mplayer.c (wm4, 2013-10-30, 1 file changed, -0/+414)
mplayer.c was a bit too big. Split it into multiple files. I hope the way it's split makes sense. Maybe some things don't make too much sense, or go against intuition. These will be fixed as soon as I notice them. Some files are a bit questionable (misc.c, osd.c, configfiles.c), and suggestions on how to organize this better are welcome.

Regressions are possible due to reorganized include statements. Obviously I didn't just copy mplayer.c's orgy of include statements, but recreated them for each file. It's easily possible that there are oversights and mistakes, which will show up on other platforms.

There is one actual change: the public avutil.h include is removed from encode.h, and I tried to replace most FFMIN/FFMAX/av_clip uses. I consider using libavutil too much as dangerous, because the set of include files they recursively pull in is rather arbitrary and is different between FFmpeg and Libav.