-rw-r--r--DOCS/OUTDATED-tech/formats.txt160
-rw-r--r--DOCS/OUTDATED-tech/general.txt228
-rw-r--r--DOCS/OUTDATED-tech/hwac3.txt145
-rw-r--r--DOCS/OUTDATED-tech/libao2.txt56
-rw-r--r--DOCS/OUTDATED-tech/libvo.txt173
-rw-r--r--DOCS/OUTDATED-tech/mpsub.sub100
-rw-r--r--DOCS/OUTDATED-tech/swscaler_filters.txt19
-rw-r--r--DOCS/OUTDATED-tech/swscaler_methods.txt68
8 files changed, 0 insertions, 949 deletions
diff --git a/DOCS/OUTDATED-tech/formats.txt b/DOCS/OUTDATED-tech/formats.txt
deleted file mode 100644
index ee4aa8f6ba..0000000000
--- a/DOCS/OUTDATED-tech/formats.txt
+++ /dev/null
@@ -1,160 +0,0 @@
-1. Input layer, supported devices, methods:
- - plain file, with seeking
- - STDIN, without seeking backward
- - network streaming (currently plain wget-like HTTP and MMS (.asx))
- - VCD (Video CD) track, by direct CDROM device access (does not require mounting the disc)
- - DVD titles using the .IFO structure, by direct DVD device access (does not require mounting the disc)
- - DVD titles using menu navigation (experimental/alpha, not yet finished!!!)
- - CDDA - raw audio from audio CD-ROM discs (using the cdparanoia libs)
- - RTP streaming (mpeg-ps over multicast only)
- - LIVE555 streaming - supports SDP/RTSP (using the LIVE555 libs)
- - SMB - file access over samba (experimental)
-
-2. Demuxer/parser layer, supported file/media formats:
-
- MPEG streams (ES, PES, PS; no TS support yet)
- note: the mpeg demuxer silently ignores non-mpeg content and finds mpeg packets
- in arbitrary streams. This means you can play VCD images directly (for example
- CDRwin's .BIN files) without extracting the mpeg files first (with tools like vcdgear).
- It accepts all PES variants, including files created by VDR.
- Note: VOB (video object) is a simple mpeg stream, but it usually has 01BD
- packets which may contain subtitles and non-mpeg audio. Usually found on DVD discs.
-
- Headers: mpeg streams have no global header. Each frame sequence (also called a GOP,
- group of pictures) contains a sequence header which describes that block.
- Normal mpeg 1/2 content has groups of 12-15 frames (24/30 fps).
- This means you can freely seek in mpeg streams, and can even cut them into
- small parts with standard file tools (dd, cut) without destroying them.
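Seeking into the middle of such a stream works by scanning forward for the next sequence-header start code (00 00 01 B3). A minimal sketch of that scan (the function name is illustrative, not MPlayer's actual API):

```c
#include <stddef.h>
#include <stdint.h>

/* Find the byte offset of the next MPEG sequence header start code
 * (00 00 01 B3) in buf, or -1 if none is present.  This is the kind
 * of scan that makes arbitrary seek points usable in an MPEG stream. */
static long find_sequence_header(const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i + 4 <= len; i++)
        if (buf[i] == 0x00 && buf[i + 1] == 0x00 &&
            buf[i + 2] == 0x01 && buf[i + 3] == 0xB3)
            return (long)i;
    return -1;
}
```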
-
- Codecs: video is always mpeg video (mpeg1, mpeg2 or mpeg4).
- audio is usually mpeg audio (any layer allowed, but it's layer 2 in most files)
- but 01BD packets may contain AC3, DTS or LPCM too.
-
- FPS: mpeg2 content allows variable framerate, in the form of delayed frames.
- It's mostly used for playing back 24fps content at the 29.97/30 fps (NTSC) rate
- (the so-called Telecine or 3:2 pulldown effect).
- It means you see 30 frames per second, but there are only 24 different
- pictures, and some of them are shown longer to fill the 30-frame time.
- If you encode such files with mencoder, using -ofps 24 or -ofps 23.976
- is recommended.
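As a toy check of the arithmetic: 3:2 pulldown maps every 4 film frames onto 5 video frames (field pattern 2-3-2-3), which is how 24 fps becomes 30 fps:

```c
/* 3:2 pulldown turns every 4 film frames into 5 video frames
 * (field pattern 2-3-2-3), so 24 fps film plays at 30 fps video. */
static int pulldown_frames(int film_frames)
{
    return film_frames * 5 / 4;
}
```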
-
- AVI streams.
- Three kinds of RIFF AVI files exist:
- 1. interleaved: audio and video content is interleaved. It's faster and
- requires only 1 reading thread, so it's recommended (and mostly used).
- 2. non-interleaved: audio and video aren't interleaved, i.e. the whole
- video comes first, followed by the whole audio. It requires 2 reading
- processes, or 1 reader with lots of seeking. Very bad for network or cdrom.
- 3. badly interleaved streams: mplayer detects interleaving at startup and
- enables the -ni option if it finds non-interleaved content. But sometimes
- the stream seems to be interleaved yet has bad sync, so it should be
- played as non-interleaved, otherwise you get a-v desync or buffer overflow.
-
- MPlayer supports 2 kinds of timing for AVI files:
- - bps-based: based on the bitrate/samplerate of the video/audio stream.
- This method is used by most players, including avifile and wmp.
- Files with broken headers, and files created with VBR audio but a
- non-vbr-compliant encoder, will result in a-v desync with this method
- (mostly at seeking).
- - interleaving-based: note: it can't be used together with -ni.
- It doesn't use the bitrate values of the header; it uses the relative
- position of interleaved audio and video chunks. This makes some badly
- encoded files with vbr audio playable.
-
- Headers: AVI files have a mandatory header at the beginning of the file,
- describing video parameters (resolution, fps) and codecs. Optionally
- they have an INDEX block at the end of the file. It's optional, but
- most files have such a block, because it's REQUIRED for seeking.
- Usually it can be rebuilt from the file content; mplayer does it with
- the -idx switch. MPlayer can recreate broken index blocks using -forceidx.
- As AVI files need an index for random access, broken files with no index
- are usually unplayable.
- Of course, cutting/joining AVI files needs special programs.
-
- Codecs: any audio and video codecs are allowed, but note that VBR audio is
- not well supported by most players. The file format makes it possible to
- use VBR audio, but most players expect CBR audio and fail with VBR,
- as VBR is unusual, and Microsoft's AVI specs only describe CBR audio.
- Also note that most AVI encoders/multiplexers create bad files when
- using VBR audio. There are only 2 exceptions (known to me): NaNDub and MEncoder.
-
- FPS: only constant framerate allowed, but it's possible to skip frames.
-
- ASF streams:
- ASF (active streaming format) comes from Microsoft. They developed two
- variants of ASF, v1.0 and v2.0. v1.0 is used by their media tools (wmp and
- wme) and v2.0 is published and patented :). Of course they differ,
- with no compatibility at all. (It's just a legality game.)
- MPlayer supports only v1.0, as nobody has ever seen v2.0 files :)
- Note that .ASF files nowadays come with the extension .WMA or .WMV.
- UPDATE: MS recently released the ASF v1.0 specs too, but they have some
- restrictions making it illegal for us to read them :)
-
- Headers: stream headers (codec parameters) can be everywhere (in theory),
- but all files I've seen had them at the beginning of the file.
- ASF uses a fixed packet size, so it is seekable without any INDEX block,
- and broken files play well too.
-
- Codecs: video is mostly Microsoft's mpeg4 variants: MP42, MP43 (aka DivX),
- WMV1 and WMV2, but any codec is allowed.
- Audio is usually wma or voxware, sometimes mp3, but any codec is allowed.
-
- FPS: no fixed fps; every video frame has an exact timestamp instead.
- I've got streams with up to 3 sec frame display times.
-
- QuickTime / MOV files:
- They come from Mac users, usually with the .mov or .qt extension, but as
- the MPEG Group chose QuickTime as the recommended file format for MPEG4,
- you sometimes meet quicktime files with the .mpg or .mp4 extension.
-
- At first glance, it's a mixture of ASF and AVI.
- It requires an INDEX block for random access and seeking, and even for
- playback, like AVI, but it uses timestamps instead of a constant framerate
- and has more flexible stream options (including network stuff), like ASF.
-
- Headers: the header can be placed at the beginning or at the end of the file.
- About half of my files have it at the beginning, the others at the end.
- Broken files are only playable if they have the header at the beginning!
-
- Codecs: any codec is allowed, both CBR and VBR.
- Note: most new mov files use Sorenson video and QDesign Music audio;
- they are patented, closed, secret, (TM)-ed etc. formats, and only Apple's
- quicktime player is able to play these files (on win/mac only).
-
- VIVO files:
- They are funny streams. They have a human-readable ascii header at
- the beginning, followed by interleaved audio and video chunks.
- They have no index block, no fixed packet size or sync bytes, and most
- files don't even have keyframes, so forget seeking!
- Video is standard h.263 (in vivo/2.0 files it's a modified, non-standard
- h.263), audio is either standard g.723 or the Vivo Siren codec.
-
- Note that Microsoft licensed the vivo stuff and included it in their netshow
- v2.0 program, so there are VfW/ACM codecs for vivo video and audio.
-
- RealMedia files:
- A mixture of AVI and ASF features. They have mandatory headers at the
- beginning and an optional INDEX (missing in most files).
- The file is constructed of variable-size chunks, each with a small header
- giving the stream ID, timestamp, flags (keyframe...) and size.
- But it has some features found in ASF files:
- The video is actually double-muxed: the video chunks are really
- appended fragments of the video frame. RV30+ supports B frames, so
- you have to parse some bits of the first fragment to get the real PTS.
- The audio frames are fixed size (CBR) but use the same scrambling
- (out-of-order interleaving) as in ASF files.
-
- Codecs: audio is either COOK(er), SIPR(o), ATRAC3 or DNET.
- DNET is actually a byte-swapped low-bitrate Dolby AC3 variant :)
- Video is RV10 (an h263 variant), RV20 (rp G2), RV30 (rp v8) or RV40 (rp v9).
-
- FPS: variable, just like in ASF.
-
- Note that the similarity of real and asf has some background - they worked
- together on the (never finished/used) ASF v2 spec for some time.
-
- GIF files:
- The GIF format is a common format for web graphics that supports
- animation. These are read through libungif or a compatible library.
- Variable frame delays are supported, but seeking is not.
- Seeking will be supported once an index of gif frames can be built.
diff --git a/DOCS/OUTDATED-tech/general.txt b/DOCS/OUTDATED-tech/general.txt
deleted file mode 100644
index 36a584b746..0000000000
--- a/DOCS/OUTDATED-tech/general.txt
+++ /dev/null
@@ -1,228 +0,0 @@
-So, I'll describe how this stuff works.
-
-The main modules:
-
-1. stream.c: this is the input layer; it reads the input media (file, stdin,
- vcd, dvd, network etc). What it has to handle: appropriate buffering by
- sector, seek and skip functions, reading by bytes or by blocks of any size.
- The stream_t (stream.h) structure describes the input stream, file/device.
-
- There is a stream cache layer (cache2.c); it's a wrapper for the stream
- API. It does fork(), then emulates the stream driver in the parent process
- and the stream user in the child process, while proxying between them using
- a preallocated big memory chunk as a FIFO buffer.
-
-2. demuxer.c: this does the demultiplexing (separating) of the input into
- audio, video or dvdsub channels, and their reading by buffered packages.
- demuxer.c is basically a framework, which is the same for all the
- input formats, and there are parsers for each of them (mpeg-es,
- mpeg-ps, avi, avi-ni, asf); these are in the demux_*.c files.
- The structure is demuxer_t. There is only one demuxer.
-
-2.a. demux_packet_t, that is, DP.
- Contains one chunk (avi) or packet (asf, mpg). They are stored in memory in
- a linked list, because of their differing sizes.
-
-2.b. demuxer stream, that is, DS.
- Struct: demux_stream_t
- Every channel (a/v/s) has one. This contains the packets for the stream
- (see 2.a). For now, there can be 3 for each demuxer:
- - audio (d_audio)
- - video (d_video)
- - DVD subtitle (d_dvdsub)
-
-2.c. stream header. There are 2 types (for now): sh_audio_t and sh_video_t.
- This contains every parameter essential for decoding, such as input/output
- buffers, the chosen codec, fps, etc. There is one for each stream in
- the file: at least one for video, another if sound is present, and
- if there are more streams, there'll be one structure for each.
- These are filled according to the header (avi/asf), or demux_mpg.c
- does it (mpg) when it finds a new stream. When a new stream is found,
- the ====> Found audio/video stream: <id> message is displayed.
-
- The chosen stream header and its demuxer stream are connected together
- (ds->sh and sh->ds) to simplify usage, so it's enough to pass either the
- ds or the sh, depending on the function.
-
- For example: we have an asf file with 6 streams inside it, 1 audio, 5
- video. During the reading of the header, 6 sh structs are created, 1
- audio and 5 video. When it starts reading the packets, it chooses the
- streams of the first audio & video packet found, and sets the sh
- pointers of d_audio and d_video according to them. So later it reads
- only these streams. Of course the user can force choosing a specific
- stream with the -vid and -aid switches.
- A good example for this is DVD, where the English stream is not
- always the first, so every VOB has a different language :)
- That's when we have to use, for example, the -aid 128 switch.
-
- Now, how does this reading work?
- - demuxer.c/demux_read_data() is called; it gets how many bytes to read,
- where to put them (memory address), and from which DS.
- The codecs call this.
- - This checks if the given DS's buffer contains something; if so, it
- reads from there as much as needed. If there isn't enough, it calls
- ds_fill_buffer(), which:
- - checks if the given DS has buffered packages (DP's); if so, it moves
- the oldest to the buffer and reads on. If the list is empty, it
- calls demux_fill_buffer():
- - this calls the parser for the input format, which reads the file
- onward and moves the found packages to their buffers.
- If we'd like an audio package, but only a bunch of video
- packages are available, then sooner or later the
- DEMUXER: Too many (%d in %d bytes) audio packets in the buffer
- error shows up.
-
-2.d. video.c: this file/function handles the reading and assembling of the
- video frames. Each call to video_read_frame() should read and return a
- single video frame, and its duration in seconds (float).
- The implementation is split into 2 big parts - reading from mpeg-like
- streams and reading from one-frame-per-chunk files (avi, asf, mov).
- It then calculates the duration, either from a fixed FPS value or from the
- PTS difference before and after reading the frame.
-
-2.e. other utility functions: there is some useful code there, like
- the AVI muxer or the mp3 header parser, but leave them for now.
-
-So everything is ok 'till now. It can be found in the libmpdemux/ library.
-It should compile outside of the mplayer tree; you just have to implement a few
-simple functions, like mp_msg() to print messages, etc.
-See libmpdemux/test.c for an example.
-
-See also formats.txt, for description of common media file formats and their
-implementation details in libmpdemux.
-
-Now, go on:
-
-3. mplayer.c - ooh, he's the boss :)
- Its main purpose is connecting the other modules, and maintaining A/V
- sync.
-
- The given stream's actual position is in the 'timer' field of the
- corresponding stream header (sh_audio / sh_video).
-
- The structure of the playing loop :
- while(not EOF) {
- fill audio buffer (read & decode audio) + increase a_frame
- read & decode a single video frame + increase v_frame
- sleep (wait until a_frame>=v_frame)
- display the frame
- apply A-V PTS correction to a_frame
- handle events (keys,lirc etc) -> pause,seek,...
- }
-
- When playing (a/v), it increases the variables by the duration of the
- played a/v:
- - with audio this is played_bytes / sh_audio->o_bps
- Note: i_bps = number of compressed bytes for one second of audio
- o_bps = number of uncompressed bytes for one second of audio
- (this is = bytes_per_sample * samplerate * channels)
- - with video this is usually == 1.0/fps, but note that
- fps doesn't really matter for video; for example asf doesn't have one,
- instead there is a "duration", and it can change per frame.
- MPEG2 has "repeat_count", which delays the frame by 1-2.5 frame times...
- Maybe only AVI and MPEG1 have a fixed fps.
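The o_bps arithmetic above, as a quick sketch (the helper name is illustrative, not an MPlayer function): 16-bit stereo at 48kHz gives 2 * 48000 * 2 = 192000 uncompressed bytes per second.

```c
/* o_bps = bytes per second of *uncompressed* audio:
 * bytes-per-sample * samplerate * channels. */
static int out_bps(int bytes_per_sample, int samplerate, int channels)
{
    return bytes_per_sample * samplerate * channels;
}
```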
-
- So everything works fine as long as the audio and video are in perfect
- sync, since the audio drives the clock: it gives the timing, and when the
- time of a frame has passed, the next frame is displayed.
- But what if these two aren't synchronized in the input file?
- PTS correction kicks in. The input demuxers read the PTS (presentation
- timestamp) of the packages, and with it we can see if the streams
- are synchronized. Then MPlayer can correct the a_frame, within
- a given maximal bound (see the -mc option). A summary of the
- corrections can be found in c_total.
-
- Of course this is not everything; several things suck.
- For example the soundcard's delay, which has to be corrected by
- MPlayer! The audio delay is the sum of all of these:
- - bytes read since the last timestamp:
- t1 = d_audio->pts_bytes/sh_audio->i_bps
- - if Win32/ACM, then the bytes stored in the audio input buffer:
- t2 = a_in_buffer_len/sh_audio->i_bps
- - uncompressed bytes in the audio out buffer:
- t3 = a_buffer_len/sh_audio->o_bps
- - bytes not yet played, stored in the soundcard's (or DMA's) buffer:
- t4 = get_audio_delay()/sh_audio->o_bps
-
- From this we can calculate what PTS we need for the just-played
- audio; then, after we compare this with the video's PTS, we have
- the difference!
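The four delay terms above can be summed like this (parameter names are stand-ins for the real MPlayer variables):

```c
/* Sum of the four audio delay terms described above. */
static double audio_delay(int pts_bytes, int in_buf, int out_buf,
                          int card_buf, double i_bps, double o_bps)
{
    double t1 = pts_bytes / i_bps; /* read since the last timestamp */
    double t2 = in_buf / i_bps;    /* compressed, not yet decoded */
    double t3 = out_buf / o_bps;   /* decoded, not yet written out */
    double t4 = card_buf / o_bps;  /* written, not yet played */
    return t1 + t2 + t3 + t4;
}
```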
-
- Life didn't get simpler with AVI. There's the "official" timing
- method, the BPS-based one, where the header contains how many compressed
- audio bytes or chunks belong to one second of frames.
- In the AVI stream header there are 2 important fields, the
- dwSampleSize, and the dwRate/dwScale pair:
- - If dwSampleSize is 0, then it's a VBR stream, so its bitrate
- isn't constant. It means that 1 chunk stores 1 sample, and
- dwRate/dwScale gives the chunks/sec value.
- - If dwSampleSize is >0, then it's constant bitrate, and the
- time can be measured this way: time = (bytepos/dwSampleSize) /
- (dwRate/dwScale) (so the sample number is divided by the
- samplerate). Now the audio can be handled as a stream, which can
- be cut into chunks, but can also be a single chunk.
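The CBR formula above, written out (a sketch; not the actual demux_avi.c code):

```c
/* CBR AVI audio: time = (bytepos / dwSampleSize) / (dwRate / dwScale),
 * i.e. the sample number divided by the sample rate. */
static double avi_cbr_time(long bytepos, long dwSampleSize,
                           long dwRate, long dwScale)
{
    double sample = (double)bytepos / dwSampleSize;
    double rate   = (double)dwRate / dwScale;
    return sample / rate;
}
```

For example, 176400 bytes of 4-byte (16-bit stereo) samples at 44100 samples/sec is exactly one second of audio.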
-
- The other method can be used only for interleaved files: from
- the order of the chunks, a timestamp (PTS) value can be calculated.
- The PTS of the video chunks is simple: chunk number / fps.
- The audio PTS is the same as that of the previous video chunk.
- We have to pay attention to the so-called "audio preload", that is,
- the delay between the audio and video streams. This is
- usually 0.5-1.0 sec, but can be totally different.
- The exact value used to be measured by hand, but now demux_avi.c
- handles it: at the first audio chunk after the first video chunk, it
- calculates the A/V difference and takes this as the audio preload.
-
-3.a. audio playback:
- Some words on audio playback:
- Playing itself is not the hard part; rather:
- 1. knowing when to write into the buffer, without blocking
- 2. knowing how much of what we wrote has been played
- The first is needed for audio decoding, and to keep the buffer
- full (so the audio will never skip). And the second is needed for
- correct timing, because some soundcards have as much as 3-7 seconds
- of delay, which can't be ignored.
- To solve this, OSS gives several possibilities:
- - ioctl(SNDCTL_DSP_GETODELAY): tells how many unplayed bytes are in
- the soundcard's buffer -> perfect for timing, but not all drivers
- support it :(
- - ioctl(SNDCTL_DSP_GETOSPACE): tells how much we can write into the
- soundcard's buffer without blocking. If the driver doesn't
- support GETODELAY, we can use this to calculate the delay.
- - select(): should tell if we can write into the buffer without
- blocking. Unfortunately it doesn't say how much we could :((
- Also, it doesn't work (or works badly) with some drivers.
- Only used if none of the above works.
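When only GETOSPACE is available, the delay can be derived from the total buffer size minus the reported free space; a sketch of that fallback (assumed names, not the real ao_oss code):

```c
/* Fallback delay estimate when SNDCTL_DSP_GETODELAY is unsupported:
 * whatever part of the card's buffer is not reported free by
 * SNDCTL_DSP_GETOSPACE must still be waiting to be played. */
static double oss_delay_fallback(int total_buffer_bytes,
                                 int free_bytes,  /* from GETOSPACE */
                                 double o_bps)    /* uncompressed B/s */
{
    return (total_buffer_bytes - free_bytes) / o_bps;
}
```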
-
-4. Codecs. They consist of libmpcodecs/* and separate files or libs,
- for example libmpeg2, loader, mp3lib.
-
- mplayer.c doesn't call them directly, but through the dec_audio.c and
- dec_video.c files, so mplayer.c doesn't have to know anything about
- the codecs.
-
- libmpcodecs contains a wrapper for every codec; some of them include the
- codec function implementation, some call functions from other files
- included with mplayer, and some call optional external libraries.
- File naming convention in libmpcodecs:
- ad_*.c - audio decoder (called through dec_audio.c)
- vd_*.c - video decoder (called through dec_video.c)
- ve_*.c - video encoder (used by mencoder)
- vf_*.c - video filter (see option -vf)
-
- On this topic, see also:
- libmpcodecs.txt - The structure of the codec-filter path, with explanation
- dr-methods.txt - Direct rendering, MPI buffer management for video codecs
- codecs.conf.txt - How to write/edit codec configuration file (codecs.conf)
- codec-devel.txt - Mike's hints about codec development - a bit OUTDATED
- hwac3.txt - about SP/DIF audio passthrough
-
-5. libvo: this displays the frame.
-
- for details on this, read libvo.txt
-
-6. libao2: this controls audio playback
-6.a audio plugins
-
- for details on this, read libao2.txt
diff --git a/DOCS/OUTDATED-tech/hwac3.txt b/DOCS/OUTDATED-tech/hwac3.txt
deleted file mode 100644
index 25a2c5bc7a..0000000000
--- a/DOCS/OUTDATED-tech/hwac3.txt
+++ /dev/null
@@ -1,145 +0,0 @@
-mails by A'rpi and Marcus Blomenkamp <Marcus.Blomenkamp@epost.de>
-describing how this ac3-passthrough hack works under linux and mplayer...
------------------------------------------------------------------------
-Hi,
-
-> I received the following patch from Steven Brookes <stevenjb@mda.co.uk>.
-> He is working on fixing the digital audio output of the dxr3 driver and
-> told me he fixed some bugs in mplayer along the way. I don't know shit
-> about hwac3 output so all I did was to make sure the patch applied
-> against latest cvs.
-> This is from his e-mail to me:
->
-> "Secondly there is a patch to dec_audio.c and
-> ac3-iec958 to fix the -ac hwac3 codec stuff and to use liba52 to sync it.
-
-> Seems to work for everything I've thrown at and maintains sync for all audio
-> types through the DXR3."
-
-patch applied (with some comments added and an unwanted change (in software
-a52 decoder) removed)
-
-now I understand how this whole hwac3 mess works.
-it's very very tricky. it virtually decodes ac3 to LPCM packets, but really
-it keeps the original compressed data, padded with zeros. this way it's
-constant bitrate, and sync is calculated just like for stereo PCM.
-(so it bypasses LPCM-capable media converters...)
-
-so, every ac3 frame is translated to a 6144 byte long tricky LPCM packet.
-6144 = 4*(6*256) = 4 * samples_per_ac3_frame = LPCM size of an uncompressed ac3
-frame.
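The padding scheme can be sketched like this (the preamble constants come from the public IEC 61937 spec; the function itself is an illustration, not the actual ac3-iec958.c code):

```c
#include <stdint.h>
#include <string.h>

#define BURST_SIZE 6144 /* 4 bytes/sample * 6 blocks * 256 samples */

/* Wrap one compressed AC3 frame into a 6144-byte IEC 61937 burst:
 * a sync preamble, the payload, then zero padding up to the size of
 * the equivalent uncompressed stereo LPCM frame. */
static int build_ac3_burst(const uint8_t *frame, size_t frame_len,
                           uint8_t out[BURST_SIZE])
{
    if (8 + frame_len > BURST_SIZE)
        return -1;                       /* frame too big to pad */
    memset(out, 0, BURST_SIZE);
    uint16_t *hdr = (uint16_t *)out;
    hdr[0] = 0xF872;                     /* Pa: sync word 1 */
    hdr[1] = 0x4E1F;                     /* Pb: sync word 2 */
    hdr[2] = 0x0001;                     /* Pc: data type = AC3 */
    hdr[3] = (uint16_t)(frame_len * 8);  /* Pd: payload length in bits */
    memcpy(out + 8, frame, frame_len);
    return 0;
}
```

Because every burst is exactly 6144 bytes, the output looks like constant-bitrate 16-bit stereo PCM to the rest of the player, which is what makes the plain-PCM sync math work.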
-
-I wanna know if it works for sblive and other ac3-capable cards too?
-(I can't test it, for lack of an ac3 decoder)
-
-A'rpi / Astral & ESP-team
-
------------------------------------------------------------------------
-Hi folks.
-I spent some time fiddling with ac3 passthrough in mplayer. The
-traditional way of setting the output format to AFMT_AC3 was not an ideal
-solution, since not all digital io cards/drivers supported this format or
-honoured it by setting the spdif non-audio bit. To make it short, it only
-worked with the oss sblive driver IIRC.
-
-Inspired by alsa's ac3dec program I found an alternative way, by
-inspecting which format the alsa device had been set to. Surprise: it was
-simple 16bit_le 2_channel pcm. So setting the non-audio bit isn't
-necessarily the point. The only important thing seems to be
-bit-identical output at the correct samplerate. Modern AV-Receivers seem
-to be quite tolerant/compatible.
-
-So I changed the output format of hwac3 from
-
-AFMT_AC3 channels=1
- to
-AFMT_S16_LE channels=2
-
-and corrected the absolute time calculation. That was all to get it
-running for me.
-
------------------------------------------------------------------------
-Hi there.
-
-Perhaps I can clear up some mystification about AC3 passthrough in
-general and mplayer in particular:
-
-To get the external decoder solution working, it must be fed with data
-which is bit-identical to the chunks in the source ac3 file (compressed
-data is very picky about bit errors). Additionally - or better said,
-'historically' - the non-audio bit should be set in the spdif status
-fields to prevent old spdif hardware from reproducing ugly scratchy
-noise. Note: for current decoders (probably those with DTS capability)
-this safety bit isn't needed anymore. At least I can state that for my
-Sherwood RVD-6095RDS. I think it is due to DTS, because DTS sound can
-reside on an ordinary AudioCD, and an ordinary AudioCD player will always
-have its audio bit set.
-
-The sample format of the data must be 2-channel 16-bit (little endian
-IIRC). The samplerate is 48kHz - although my receiver also accepts
-44100Hz. I do not know if this is due to over-compatibility of my
-receiver or if 44100 is also possible in the ac3 specs. For safety's
-sake let's keep this at 48000Hz. AC3 data chunks are inserted into the
-stream every 0x1600 bytes (don't bite me on that, look into
-'ac3-iec958.c': 'ac3_iec958_build_burst').
-
-To come back to the problem: data must be played bit-identically through
-the soundcard at the correct samplerate, and should optionally have its
-non-audio bit set. There are two ways to accomplish this:
-
-1) Some OSS guy invented the format AFMT_AC3. Soundcard drivers
-implementing this format should therefore adjust their mixers and
-switches to produce the desired output. Unfortunately some soundcard
-drivers do not support this format correctly, and most do not even
-support it at all (including ALSA).
-
-2) The alternative approach, currently in mplayer CVS, is to simply set
-the output format to 48kHz 16bit LE and rely on the user to have the
-soundcard mixers adjusted properly.
-
-I have two soundcards with digital IO facilities (CMI8738 and
-Trident4DWaveNX based) plus the mentioned decoder. I'm currently running
-Linux-2.4.17. The following configurations are happily running here:
-
-1. Trident with ALSA drivers (OSS does not support Hoontech's dig. IO)
-2. CMI with ALSA drivers
-3. CMI with OSS drivers
-
-For Linux I'd suggest using ALSA because of its cleaner architecture
-and more consistent user interface. Not to mention that it'll be the
-standard sound support in Linux soon.
-
-For those who want to stick to OSS drivers: the CMI8738 driver works
-out-of-the-box if the PCM/Wave mixer is set to 100%.
-
-For ALSA I'd suggest using its OSS emulation. More on that later.
-ALSA-0.9 invented the idea of cards, devices and subdevices. You can
-reach the digital interface of all supported cards consistently by using
-the device 'hw:x,2' (x, counting from 0, is the number of your soundcard).
-So most people would end up at 'hw:0,2'. This device can only be opened
-with sample formats and rates which are directly supported in hardware,
-hence no samplerate conversion is done, keeping the stream as-is. However,
-most consumer soundcards do not support 44kHz, so it would definitely
-be a bad idea to use this as your standard device if you wanted to
-listen to some mp3s (most of them are 44kHz due to their CD source). Here
-OSS comes into play again. You can configure which OSS device (/dev/dsp
-and /dev/adsp) uses which ALSA device. So I'd suggest pointing the
-standard '/dev/dsp' to the standard 'hw:0,0', which supports mixing and
-samplerate conversion. No further reconfiguration would be needed for
-your sound apps. For movies I'd point '/dev/adsp' to 'hw:0,2' and
-configure mplayer to use adsp instead of dsp. The samplerate constraint
-is no big deal here, since movies usually are in 48kHz anyway. The
-configuration in '/etc/modules.conf' is no big deal either:
-
-alias snd-card-0 snd-card-cmipci # insert your card here
-alias snd-card-1 snd-pcm-oss # load OSS emulation
-options snd-pcm-oss snd_dsp_map=0 snd_adsp_map=2 # do the mapping
-
-This works flawlessly in combination with alsa's native
-SysVrc-init-script 'alsasound'. Be sure to disable any distribution
-dependent script (e.g. Mandrake-8.1 has an 'alsa' script which depends
-on ALSA-0.5).
-
-Sorry for you *BSD'lers out there. I have no grasp on sound support there.
-
-HTH Marcus
diff --git a/DOCS/OUTDATED-tech/libao2.txt b/DOCS/OUTDATED-tech/libao2.txt
deleted file mode 100644
index 49cb0284f7..0000000000
--- a/DOCS/OUTDATED-tech/libao2.txt
+++ /dev/null
@@ -1,56 +0,0 @@
-6. libao2: this controls audio playback
-
- As in libvo (see 5.), here too there are several drivers, based on the same API:
-
-static int control(int cmd, int arg);
- This is for reading/setting driver-specific and other special parameters.
- Not really used for now.
-
-static int init(int rate,int channels,int format,int flags);
- The init of the driver; opens the device, sets the sample rate, channels and
- sample format parameters.
- Sample format: usually AFMT_S16_LE or AFMT_U8; for more definitions see
- the dec_audio.c and linux/soundcard.h files!
-
-static void uninit(void);
- Guess what.
- Ok, I'll help: closes the device; not (yet) called on exit.
-
-static void reset(void);
- Resets the device. To be exact, it's for discarding the buffers' contents,
- so after reset() the previously received data won't be output.
- (called on pause or seek)
-
-static int get_space(void);
- Returns how many bytes can be written into the audio buffer without
- blocking (making the caller process wait). MPlayer occasionally checks the
- remaining space and tries to fill the buffer with play() if there's free
- space. The buffer size used should be sane; a buffer that is too small
- could run empty before MPlayer tries filling it again (normally once per
- video frame), while a buffer that is too big would force MPlayer to decode
- the file far ahead, trying to find enough audio data to fill it.
-
-static int play(void* data,int len,int flags);
- Plays a bit of audio, which is received through the "data" memory area, with
- a size of "len". It has to copy the data, because it can be overwritten
- after the call is made. It doesn't have to use all the bytes; it has to
- return the number of bytes used (copied to the buffer). If
- flags&AOPLAY_FINAL_CHUNK is set, then this is the last audio in the file.
- The purpose of this flag is to tell AOs that round the accepted audio
- down from "len" to a multiple of some chunk size that this final "len"
- must not be rounded down to 0, or the data will never be played (as
- MPlayer will never call play() with a larger len).
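The chunk-rounding rule can be sketched like this (an illustrative helper, not a real libao2 driver):

```c
#define AOPLAY_FINAL_CHUNK 1 /* illustrative value, not the real flag */

/* How a driver that can only accept whole chunks might pick the number
 * of bytes to consume in play(): round down to a chunk multiple, except
 * for the final chunk, which must not collapse to 0. */
static int bytes_to_accept(int len, int chunk_size, int flags)
{
    int n = len - len % chunk_size;      /* round down */
    if (n == 0 && (flags & AOPLAY_FINAL_CHUNK))
        n = len;                         /* play the tail anyway */
    return n;
}
```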
-
-static float get_delay(void);
- Returns how long it will take to play the data currently in the
- output buffer. Be exact, if possible, since the whole timing depends
- on this! In the worst case, return the maximum delay.
-
-!!! Because the video is synchronized to the audio (card), it's very important
-!!! that the get_delay function is correctly implemented!
-
-static void audio_pause(void);
- Pause playing but do not delete buffered data if possible.
-
-static void audio_resume(void);
- Continue playing after audio_pause().
diff --git a/DOCS/OUTDATED-tech/libvo.txt b/DOCS/OUTDATED-tech/libvo.txt
deleted file mode 100644
index 245d29eade..0000000000
--- a/DOCS/OUTDATED-tech/libvo.txt
+++ /dev/null
@@ -1,173 +0,0 @@
-libvo --- the library to handle video output by A'rpi, 2002.04
-============================================
-
-Note: before starting on this, read colorspaces.txt!
-
-The constants for different pixelformats are defined in img_format.h,
-their usage is mandatory.
-
-WARNING: Please keep in mind that some of this information may be outdated,
-so if you are working on a new vo, consider submitting preliminary patches
-very early on. Currently vo_gl is one of the more up-to-date VOs to use
-as a reference if you are unsure about something and do not want to ask on the
-list.
-vo_vdpau and vo_direct3d may be a good choice too; they use different
-approaches, closer to the sometimes convoluted way DirectX works.
-
-Each vo driver _has_ to implement these:
-
- preinit():
- init the video system (to support querying for supported formats)
-
- uninit():
- Uninit the whole system, this is on the same "level" as preinit.
-
- control():
- Current controls (VOCTRL_QUERY_FORMAT must be implemented;
- VOCTRL_DRAW_IMAGE, VOCTRL_FULLSCREEN, VOCTRL_UPDATE_SCREENINFO
- should be implemented):
- VOCTRL_QUERY_FORMAT - queries if a given pixelformat is supported.
- It also returns various flags describing the capabilities
- of the driver with the given mode. For the flags, see the
- file vfcaps.h!
- The most important flags - every driver must properly report
- these:
- 0x1 - supported (with or without conversion)
- 0x2 - supported without conversion (define 0x1 too!)
- 0x100 - driver/hardware handles timing (blocking)
- Also SET the sw/hw scaling and osd support flags, and flip,
- and accept_stride if you implement VOCTRL_DRAW_IMAGE (see below)
- NOTE: VOCTRL_QUERY_FORMAT may be called _before_ the first config()
- but is always called between preinit() and uninit()
- VOCTRL_GET_IMAGE
- libmpcodecs Direct Rendering interface
- You need to update mpi (mp_image.h) structure, for example,
- look at vo_x11, vo_sdl, vo_xv or mga_common.
-        VOCTRL_DRAW_IMAGE
-            replacement for the current draw_slice/draw_frame way of
-            passing video frames. By implementing it, you get the
-            image in an mp_image struct instead of via the draw_* calls.
-            Unless you return VO_TRUE for the VOCTRL_DRAW_IMAGE call, the
-            old-style draw_* functions will be called!
-            Note: draw_slice is still mandatory, for per-slice rendering!
-        VOCTRL_RESET  -  reset the video device
-            This is sent on seeking and similar events. It is useful if
-            you are using a device which prebuffers frames and needs to
-            flush them before the audio/video buffers are refilled.
- VOCTRL_PAUSE
- VOCTRL_RESUME
- VOCTRL_GUISUPPORT
- return true only if driver supports co-operation with
- MPlayer's GUI (not yet used by GUI)
- VOCTRL_SET_EQUALIZER
- set the video equalizer to the given values
- two arguments are provided: item and value
- item is a string, the possible values are (currently):
- brightness, contrast, saturation, hue
- VOCTRL_GET_EQUALIZER
- get the current video equalizer values
- two arguments are provided: item and value
- item is a string, the possible values are (currently):
- brightness, contrast, saturation, hue
- VOCTRL_ONTOP
-            Makes the player window stay on top. Currently only supported
-            by drivers which use X11 (except SDL), as well as directx and
-            gl2 under Windows.
- VOCTRL_BORDER
- Makes the player window borderless.
- VOCTRL_FULLSCREEN
- Switch from and to fullscreen mode
- VOCTRL_GET_PANSCAN
- VOCTRL_SET_PANSCAN
- Needed to implement pan-scan support ('w' and 'e' keys during
- playback in fullscreen mode)
- VOCTRL_START_SLICE
- Called before the first draw_slice of each frame, useful if
- you need to do some set-up work.
- VOCTRL_DRAW_EOSD
- Required for EOSD (ASS subtitle) support. Provides all
- information necessary to draw the EOSD for the current video
- frame.
- VOCTRL_GET_EOSD_RES
- Required for EOSD (ASS subtitle) support. Informs the ASS
- renderer about the properties of the drawing area (size,
- borders).
- VOCTRL_SET_DEINTERLACE
- VOCTRL_GET_DEINTERLACE
- Get or set deinterlacing status for VOs that support some kind
- of deinterlacing.
- VOCTRL_UPDATE_SCREENINFO
- Should set the xinerama_x, xinerama_y, vo_screenwidth and
- vo_screenheight appropriately for the currently used
- monitor and -xineramascreen option.
- Usually should simply call the w32_update_xinerama_info or
- update_xinerama_info function.
- By supporting this, the VO also requests the newer API
- that sets vo_dx, vo_dy etc. appropriately before config()
- is called.
-
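-As an illustration of the control() entry point, here is a minimal
-dispatcher handling VOCTRL_QUERY_FORMAT with the 0x1/0x2/0x100 flags
-quoted above. The numeric values of IMGFMT_*, VOCTRL_QUERY_FORMAT and
-VO_NOTIMPL below are illustrative stand-ins for the real img_format.h
-and video_out.h definitions, not authoritative:

```c
#include <stdint.h>

/* Capability flags as quoted in the list above. */
#define VFCAP_CSP_SUPPORTED       0x1
#define VFCAP_CSP_SUPPORTED_BY_HW 0x2
#define VFCAP_TIMER               0x100

/* Fourcc-style stand-ins for the img_format.h constants. */
#define IMGFMT_YV12 0x32315659    /* 'YV12' */
#define IMGFMT_YUY2 0x32595559    /* 'YUY2' */

#define VOCTRL_QUERY_FORMAT 2     /* illustrative request value */
#define VO_NOTIMPL (-3)           /* illustrative return code */

/* Report what this hypothetical driver can display. */
static int query_format(uint32_t format)
{
    switch (format) {
    case IMGFMT_YV12:
        return VFCAP_CSP_SUPPORTED | VFCAP_CSP_SUPPORTED_BY_HW;
    case IMGFMT_YUY2:
        return VFCAP_CSP_SUPPORTED;   /* only with conversion */
    }
    return 0;                         /* unsupported */
}

static int control(uint32_t request, void *data)
{
    switch (request) {
    case VOCTRL_QUERY_FORMAT:
        return query_format(*(uint32_t *)data);
    }
    return VO_NOTIMPL;                /* unhandled requests */
}
```

-Returning VO_NOTIMPL for unhandled requests lets the caller fall back
-gracefully, which is why even a minimal driver should route everything
-through one switch like this.
-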
- config():
- Set up the video system. You get the dimensions and flags.
- width, height: size of the source image
- d_width, d_height: wanted scaled/display size (it's a hint)
- Flags:
- 0x01 - force fullscreen (-fs)
- 0x02 - allow mode switching (-vm)
- 0x04 - allow software scaling (-zoom)
- 0x08 - flipping (-flip)
- They're defined as VOFLAG_* (see libvo/video_out.h)
-
-    IMPORTANT NOTE: config() may be called 0 (zero), 1 or more (2,3...)
-    times between the preinit() and uninit() calls. You MUST handle this,
-    and you shouldn't crash at a second config() call or at uninit()
-    without any config() call! To make your life easier, vo_config_count
-    is set to the number of previous config() calls, counted from
-    preinit(). It's set by the caller (vf_vo.c), you don't have to
-    increase it! So you can check for vo_config_count>0 in uninit() when
-    freeing resources allocated in config() to avoid a crash!
-
- You should implement VOCTRL_UPDATE_SCREENINFO so that vo_dx, vo_dy,
- vo_dwidth and vo_dheight are already pre-set to values that take
- aspect and -geometry into account. It is also necessary to properly
- support multi-monitor setups (if based on x11_common, w32_common).
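-The vo_config_count discipline described above can be sketched like
-this. In real drivers vo_config_count is maintained by the caller
-(vf_vo.c); here it is bumped locally only so that the sketch is
-self-contained, and framebuf is hypothetical driver state:

```c
#include <stdlib.h>
#include <stdint.h>

static int vo_config_count;       /* maintained by the caller in real life */
static uint8_t *framebuf;         /* hypothetical resource from config() */

/* Flag values as quoted in the text (VOFLAG_* in libvo/video_out.h). */
#define VOFLAG_FULLSCREEN 0x01
#define VOFLAG_FLIPPING   0x08

static int config(uint32_t width, uint32_t height,
                  uint32_t d_width, uint32_t d_height, uint32_t flags)
{
    (void)d_width; (void)d_height; (void)flags;  /* hints, unused here */
    free(framebuf);               /* safe across repeated config() calls */
    framebuf = malloc((size_t)width * height * 3);
    if (!framebuf)
        return -1;
    vo_config_count++;            /* done by vf_vo.c in real drivers */
    return 0;
}

static void uninit(void)
{
    if (vo_config_count > 0) {    /* avoid crash without any config() */
        free(framebuf);
        framebuf = NULL;
        vo_config_count = 0;
    }
}
```

-The key points are that a second config() call must not leak or crash,
-and uninit() must be a no-op when config() never ran.
-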
-
-    draw_slice(): displays YV12 pictures (3 planes: one full-sized plane
-        containing the brightness (Y), and 2 quarter-sized planes
-        containing the colour info (U,V)). MPEG codecs (libmpeg2,
-        opendivx) use this. It doesn't have to display the whole frame,
-        it may update only small parts of it.
-        If this is not supported, it must be signaled in QUERY_FORMAT
-        with VOCAP_NOSLICES.
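-A possible draw_slice() for YV12 simply copies the given sub-rectangle
-of all three planes into a driver-private image. The image[] and
-image_stride[] arrays below are hypothetical driver state, not part of
-the libvo interface; only the draw_slice() signature follows the API:

```c
#include <stdint.h>
#include <string.h>

static uint8_t *image[3];         /* hypothetical destination planes */
static int image_stride[3];       /* bytes per line of each plane */

/* Copy a w x h slice whose top-left corner is at (x, y). The chroma
 * planes are half-sized in both dimensions, hence the /2 everywhere. */
static int draw_slice(uint8_t *src[], int stride[], int w, int h,
                      int x, int y)
{
    int i;
    for (i = 0; i < h; i++)       /* Y plane, full resolution */
        memcpy(image[0] + (y + i) * image_stride[0] + x,
               src[0] + i * stride[0], w);
    for (i = 0; i < h / 2; i++) { /* U and V, quarter-sized */
        memcpy(image[1] + (y / 2 + i) * image_stride[1] + x / 2,
               src[1] + i * stride[1], w / 2);
        memcpy(image[2] + (y / 2 + i) * image_stride[2] + x / 2,
               src[2] + i * stride[2], w / 2);
    }
    return 0;
}
```

-Because each call touches only the rectangle it was handed, a decoder
-can stream out macroblock rows as they decode instead of waiting for
-the complete frame.
-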
-
-    draw_frame(): this is the older interface; it displays only complete
-        frames, and can handle only packed formats (YUY2, RGB/BGR).
-        Win32 codecs use this (DivX, Indeo, etc).
- If y