author      Uoti Urpala <uau@glyph.nonexistent.invalid>    2010-04-26 18:25:34 +0300
committer   Uoti Urpala <uau@glyph.nonexistent.invalid>    2010-04-26 18:25:34 +0300
commit      7795726e0f8c70edd6ecde7fd2137214af302f4f (patch)
tree        87a087e69a0e2912183736de409676f824fb2248 /DOCS/tech
parent      ba3b65b92f3f822fa75b0210b841557f5b20f6d1 (diff)
parent      e16f02fe4001f3056b8efd1a099a563569b73f5d (diff)
download    mpv-7795726e0f8c70edd6ecde7fd2137214af302f4f.tar.bz2
            mpv-7795726e0f8c70edd6ecde7fd2137214af302f4f.tar.xz
Merge svn changes up to r31033
Diffstat (limited to 'DOCS/tech')
-rw-r--r--   DOCS/tech/codec-devel.txt                 2
-rw-r--r--   DOCS/tech/colorspaces.txt                 6
-rw-r--r--   DOCS/tech/general.txt                   252
-rw-r--r--   DOCS/tech/hwac3.txt                       6
-rw-r--r--   DOCS/tech/libvo.txt                     232
-rw-r--r--   DOCS/tech/mirrors/mirror_howto.txt        4
-rw-r--r--   DOCS/tech/mpdsf.txt                      26
-rw-r--r--   DOCS/tech/mpsub.sub                      14
-rw-r--r--   DOCS/tech/osd.txt                        54
-rw-r--r--   DOCS/tech/realcodecs/audio-codecs.txt    28
-rw-r--r--   DOCS/tech/realcodecs/streaming.txt        4
-rw-r--r--   DOCS/tech/realcodecs/video-codecs.txt    96
-rw-r--r--   DOCS/tech/slave.txt                      15
-rw-r--r--   DOCS/tech/subcp.txt                      14
-rw-r--r--   DOCS/tech/swscaler_filters.txt            8
-rw-r--r--   DOCS/tech/swscaler_methods.txt           64
-rw-r--r--   DOCS/tech/vidix.txt                      78
17 files changed, 459 insertions, 444 deletions
diff --git a/DOCS/tech/codec-devel.txt b/DOCS/tech/codec-devel.txt index 765bf1c281..bc968c9499 100644 --- a/DOCS/tech/codec-devel.txt +++ b/DOCS/tech/codec-devel.txt @@ -43,7 +43,7 @@ understand, then you will either need to somehow convert the format to a media file format that the program does understand, or write your own MPlayer file demuxer that can handle the data. Writing a file demuxer is beyond the scope of this document. - Try to obtain media that stresses all possible modes of a + Try to obtain media that stresses all possible modes of a decoder. If an audio codec is known to work with both mono and stereo data, search for sample media of both types. If a video codec is known to work at 7 different bit depths, then, as painful as it may be, do what you diff --git a/DOCS/tech/colorspaces.txt b/DOCS/tech/colorspaces.txt index 291a435f30..eaf9d221e4 100644 --- a/DOCS/tech/colorspaces.txt +++ b/DOCS/tech/colorspaces.txt @@ -66,9 +66,9 @@ The most misunderstood thingie... In MPlayer, we usually have 3 pointers to the Y, U and V planes, so it doesn't matter what the order of the planes in the memory is: for mp_image_t and libvo's draw_slice(): - planes[0] = Y = luminance - planes[1] = U = Cb = blue - planes[2] = V = Cr = red + planes[0] = Y = luminance + planes[1] = U = Cb = blue + planes[2] = V = Cr = red Note: planes[1] is ALWAYS U, and planes[2] is V, the FOURCC (YV12 vs. I420) doesn't matter here! So, every codec using 3 pointers (not only the first one) normally supports YV12 and I420 (=IYUV), too! diff --git a/DOCS/tech/general.txt b/DOCS/tech/general.txt index 631ee3f9de..4ca3671cf3 100644 --- a/DOCS/tech/general.txt +++ b/DOCS/tech/general.txt @@ -14,64 +14,64 @@ The main modules: 2. demuxer.c: this does the demultiplexing (separating) of the input to audio, video or dvdsub channels, and their reading by buffered packages. - The demuxer.c is basically a framework, which is the same for all the - input formats, and there are parsers for each of them (mpeg-es, - mpeg-ps, avi, avi-ni, asf), these are in the demux_*.c files. - The structure is the demuxer_t. There is only one demuxer. + The demuxer.c is basically a framework, which is the same for all the + input formats, and there are parsers for each of them (mpeg-es, + mpeg-ps, avi, avi-ni, asf), these are in the demux_*.c files. + The structure is the demuxer_t. There is only one demuxer. 2.a. demux_packet_t, that is DP. Contains one chunk (avi) or packet (asf,mpg). They are stored in memory as - in linked list, cause of their different size. + in linked list, cause of their different size. 2.b. demuxer stream, that is DS. Struct: demux_stream_t Every channel (a/v/s) has one. This contains the packets for the stream (see 2.a). For now, there can be 3 for each demuxer : - - audio (d_audio) - - video (d_video) - - DVD subtitle (d_dvdsub) + - audio (d_audio) + - video (d_video) + - DVD subtitle (d_dvdsub) 2.c. stream header. There are 2 types (for now): sh_audio_t and sh_video_t This contains every parameter essential for decoding, such as input/output - buffers, chosen codec, fps, etc. There are each for every stream in - the file. At least one for video, if sound is present then another, - but if there are more, then there'll be one structure for each. - These are filled according to the header (avi/asf), or demux_mpg.c - does it (mpg) if it founds a new stream. If a new stream is found, - the ====> Found audio/video stream: <id> messages is displayed. 
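The colorspaces.txt hunk above restates the planar YUV convention: planes[0] is always Y, planes[1] always U (Cb) and planes[2] always V (Cr), no matter whether the FOURCC is YV12 or I420. A minimal sketch of wiring up a buffer under that convention; the struct is a simplified stand-in for mp_image_t, not the real definition from mp_image.h:

    /* Simplified stand-in for mp_image_t, illustrating the plane-pointer
     * convention from colorspaces.txt: index 0 = Y, 1 = U, 2 = V, always. */
    #include <stdlib.h>

    struct yuv_image {
        int width, height;
        unsigned char *planes[3];   /* [0]=Y (luma), [1]=U=Cb, [2]=V=Cr */
        int stride[3];
    };

    int alloc_planar_420(struct yuv_image *img, int w, int h)
    {
        /* one full-size luma plane + two quarter-size chroma planes;
         * memory order here is U before V, but since the pointers are
         * explicit the order in memory does not matter to the caller */
        unsigned char *buf = malloc(w * h + 2 * (w / 2) * (h / 2));
        if (!buf)
            return -1;
        img->width  = w;
        img->height = h;
        img->planes[0] = buf;                                  /* Y */
        img->planes[1] = buf + w * h;                          /* U */
        img->planes[2] = buf + w * h + (w / 2) * (h / 2);      /* V */
        img->stride[0] = w;
        img->stride[1] = img->stride[2] = w / 2;
        return 0;
    }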
- - The chosen stream header and its demuxer are connected together - (ds->sh and sh->ds) to simplify the usage. So it's enough to pass the - ds or the sh, depending on the function. - - For example: we have an asf file, 6 streams inside it, 1 audio, 5 - video. During the reading of the header, 6 sh structs are created, 1 - audio and 5 video. When it starts reading the packet, it chooses the - stream for the first found audio & video packet, and sets the sh - pointers of d_audio and d_video according to them. So later it reads - only these streams. Of course the user can force choosing a specific - stream with - -vid and -aid switches. - A good example for this is the DVD, where the english stream is not - always the first, so every VOB has different language :) - That's when we have to use for example the -aid 128 switch. + buffers, chosen codec, fps, etc. There are each for every stream in + the file. At least one for video, if sound is present then another, + but if there are more, then there'll be one structure for each. + These are filled according to the header (avi/asf), or demux_mpg.c + does it (mpg) if it founds a new stream. If a new stream is found, + the ====> Found audio/video stream: <id> messages is displayed. + + The chosen stream header and its demuxer are connected together + (ds->sh and sh->ds) to simplify the usage. So it's enough to pass the + ds or the sh, depending on the function. + + For example: we have an asf file, 6 streams inside it, 1 audio, 5 + video. During the reading of the header, 6 sh structs are created, 1 + audio and 5 video. When it starts reading the packet, it chooses the + stream for the first found audio & video packet, and sets the sh + pointers of d_audio and d_video according to them. So later it reads + only these streams. Of course the user can force choosing a specific + stream with + -vid and -aid switches. + A good example for this is the DVD, where the english stream is not + always the first, so every VOB has different language :) + That's when we have to use for example the -aid 128 switch. Now, how this reading works? - - demuxer.c/demux_read_data() is called, it gets how many bytes, - and where (memory address), would we like to read, and from which + - demuxer.c/demux_read_data() is called, it gets how many bytes, + and where (memory address), would we like to read, and from which DS. The codecs call this. - - this checks if the given DS's buffer contains something, if so, it - reads from there as much as needed. If there isn't enough, it calls - ds_fill_buffer(), which: - - checks if the given DS has buffered packages (DP's), if so, it moves - the oldest to the buffer, and reads on. If the list is empty, it - calls demux_fill_buffer() : - - this calls the parser for the input format, which reads the file - onward, and moves the found packages to their buffers. - Well it we'd like an audio package, but only a bunch of video - packages are available, then sooner or later the: - DEMUXER: Too many (%d in %d bytes) audio packets in the buffer - error shows up. + - this checks if the given DS's buffer contains something, if so, it + reads from there as much as needed. If there isn't enough, it calls + ds_fill_buffer(), which: + - checks if the given DS has buffered packages (DP's), if so, it moves + the oldest to the buffer, and reads on. If the list is empty, it + calls demux_fill_buffer() : + - this calls the parser for the input format, which reads the file + onward, and moves the found packages to their buffers. 
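The read chain described above (a codec calls demux_read_data(), which drains the stream's current buffer, falls back to ds_fill_buffer() to promote the oldest queued packet, which in turn calls demux_fill_buffer() to run the per-format parser) condenses to roughly the following. All types and bodies are simplified illustrations of that description, not the actual demuxer.c source:

    #include <string.h>

    /* stand-ins for demux_packet_t (DP) and demux_stream_t (DS) */
    typedef struct demux_packet {
        unsigned char *data;
        int len;
        struct demux_packet *next;
    } demux_packet_t;

    typedef struct {
        demux_packet_t *first;        /* queued packets for this stream   */
        unsigned char *buffer;        /* packet currently being consumed  */
        int buffer_pos, buffer_size;
    } demux_stream_t;

    static int demux_fill_buffer(demux_stream_t *ds)
    {
        (void)ds;   /* real code: run the format parser (demux_avi.c, ...) */
        return 0;   /* 0 = nothing more to read */
    }

    static int ds_fill_buffer(demux_stream_t *ds)
    {
        while (!ds->first)                /* queue empty: parse onward */
            if (!demux_fill_buffer(ds))
                return 0;
        demux_packet_t *dp = ds->first;   /* oldest packet becomes the buffer */
        ds->first = dp->next;
        ds->buffer = dp->data;
        ds->buffer_pos = 0;
        ds->buffer_size = dp->len;
        return 1;
    }

    int demux_read_data(demux_stream_t *ds, unsigned char *mem, int len)
    {
        int done = 0;
        while (done < len) {
            if (ds->buffer_pos >= ds->buffer_size && !ds_fill_buffer(ds))
                break;                    /* no more data for this DS */
            int n = ds->buffer_size - ds->buffer_pos;
            if (n > len - done)
                n = len - done;
            memcpy(mem + done, ds->buffer + ds->buffer_pos, n);
            ds->buffer_pos += n;
            done += n;
        }
        return done;                      /* bytes actually delivered */
    }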
+ Well it we'd like an audio package, but only a bunch of video + packages are available, then sooner or later the: + DEMUXER: Too many (%d in %d bytes) audio packets in the buffer + error shows up. 2.d. video.c: this file/function handle the reading and assembling of the video frames. each call to video_read_frame() should read and return a @@ -101,7 +101,7 @@ Now, go on: The given stream's actual position is in the 'timer' field of the corresponding stream header (sh_audio / sh_video). - The structure of the playing loop : + The structure of the playing loop : while(not EOF) { fill audio buffer (read & decode audio) + increase a_frame read & decode a single video frame + increase v_frame @@ -111,89 +111,89 @@ Now, go on: handle events (keys,lirc etc) -> pause,seek,... } - When playing (a/v), it increases the variables by the duration of the - played a/v. - - with audio this is played bytes / sh_audio->o_bps - Note: i_bps = number of compressed bytes for one second of audio - o_bps = number of uncompressed bytes for one second of audio - (this is = bps*samplerate*channels) - - with video this is usually == 1.0/fps, but I have to note that - fps doesn't really matters at video, for example asf doesn't have that, - instead there is "duration" and it can change per frame. - MPEG2 has "repeat_count" which delays the frame by 1-2.5 ... - Maybe only AVI and MPEG1 has fixed fps. - - So everything works right until the audio and video are in perfect - synchronity, since the audio goes, it gives the timing, and if the - time of a frame passed, the next frame is displayed. - But what if these two aren't synchronized in the input file? - PTS correction kicks in. The input demuxers read the PTS (presentation - timestamp) of the packages, and with it we can see if the streams - are synchronized. Then MPlayer can correct the a_frame, within - a given maximal bounder (see -mc option). The summary of the - corrections can be found in c_total . - - Of course this is not everything, several things suck. - For example the soundcards delay, which has to be corrected by - MPlayer! The audio delay is the sum of all these: - - bytes read since the last timestamp: - t1 = d_audio->pts_bytes/sh_audio->i_bps - - if Win32/ACM then the bytes stored in audio input buffer - t2 = a_in_buffer_len/sh_audio->i_bps - - uncompressed bytes in audio out buffer - t3 = a_buffer_len/sh_audio->o_bps - - not yet played bytes stored in the soundcard's (or DMA's) buffer - t4 = get_audio_delay()/sh_audio->o_bps - - From this we can calculate what PTS we need for the just played - audio, then after we compare this with the video's PTS, we have - the difference! - - Life didn't get simpler with AVI. There's the "official" timing - method, the BPS-based, so the header contains how many compressed - audio bytes or chunks belong to one second of frames. - In the AVI stream header there are 2 important fields, the - dwSampleSize, and dwRate/dwScale pairs: - - If the dwSampleSize is 0, then it's VBR stream, so its bitrate - isn't constant. It means that 1 chunk stores 1 sample, and - dwRate/dwScale gives the chunks/sec value. - - If the dwSampleSize is >0, then it's constant bitrate, and the - time can be measured this way: time = (bytepos/dwSampleSize) / - (dwRate/dwScale) (so the sample's number is divided with the - samplerate). Now the audio can be handled as a stream, which can - be cut to chunks, but can be one chunk also. 
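The constant-bitrate case above reduces to one formula, time = (bytepos / dwSampleSize) / (dwRate / dwScale). A small worked sketch; the numbers in the comment and in main() are invented for illustration only:

    #include <stdio.h>

    /* CBR AVI audio position -> playback time, per the rule quoted above */
    double avi_cbr_time(long bytepos, long dwSampleSize, long dwRate, long dwScale)
    {
        double samples = (double)bytepos / dwSampleSize;      /* sample count  */
        return samples / ((double)dwRate / dwScale);          /* / sample rate */
    }

    int main(void)
    {
        /* e.g. 16-bit stereo PCM: dwSampleSize = 4, dwRate/dwScale = 44100,
         * 882000 bytes -> 220500 samples -> 5.00 seconds */
        printf("%.2f s\n", avi_cbr_time(882000, 4, 44100, 1));
        return 0;
    }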
- - The other method can be used only for interleaved files: from - the order of the chunks, a timestamp (PTS) value can be calculated. - The PTS of the video chunks are simple: chunk number * fps - The audio is the same as the previous video chunk was. - We have to pay attention to the so called "audio preload", that is, - there is a delay between the audio and video streams. This is - usually 0.5-1.0 sec, but can be totally different. - The exact value was measured until now, but now the demux_avi.c - handles it: at the audio chunk after the first video, it calculates - the A/V difference, and take this as a measure for audio preload. + When playing (a/v), it increases the variables by the duration of the + played a/v. + - with audio this is played bytes / sh_audio->o_bps + Note: i_bps = number of compressed bytes for one second of audio + o_bps = number of uncompressed bytes for one second of audio + (this is = bps*samplerate*channels) + - with video this is usually == 1.0/fps, but I have to note that + fps doesn't really matters at video, for example asf doesn't have that, + instead there is "duration" and it can change per frame. + MPEG2 has "repeat_count" which delays the frame by 1-2.5 ... + Maybe only AVI and MPEG1 has fixed fps. + + So everything works right until the audio and video are in perfect + synchronity, since the audio goes, it gives the timing, and if the + time of a frame passed, the next frame is displayed. + But what if these two aren't synchronized in the input file? + PTS correction kicks in. The input demuxers read the PTS (presentation + timestamp) of the packages, and with it we can see if the streams + are synchronized. Then MPlayer can correct the a_frame, within + a given maximal bounder (see -mc option). The summary of the + corrections can be found in c_total . + + Of course this is not everything, several things suck. + For example the soundcards delay, which has to be corrected by + MPlayer! The audio delay is the sum of all these: + - bytes read since the last timestamp: + t1 = d_audio->pts_bytes/sh_audio->i_bps + - if Win32/ACM then the bytes stored in audio input buffer + t2 = a_in_buffer_len/sh_audio->i_bps + - uncompressed bytes in audio out buffer + t3 = a_buffer_len/sh_audio->o_bps + - not yet played bytes stored in the soundcard's (or DMA's) buffer + t4 = get_audio_delay()/sh_audio->o_bps + + From this we can calculate what PTS we need for the just played + audio, then after we compare this with the video's PTS, we have + the difference! + + Life didn't get simpler with AVI. There's the "official" timing + method, the BPS-based, so the header contains how many compressed + audio bytes or chunks belong to one second of frames. + In the AVI stream header there are 2 important fields, the + dwSampleSize, and dwRate/dwScale pairs: + - If the dwSampleSize is 0, then it's VBR stream, so its bitrate + isn't constant. It means that 1 chunk stores 1 sample, and + dwRate/dwScale gives the chunks/sec value. + - If the dwSampleSize is >0, then it's constant bitrate, and the + time can be measured this way: time = (bytepos/dwSampleSize) / + (dwRate/dwScale) (so the sample's number is divided with the + samplerate). Now the audio can be handled as a stream, which can + be cut to chunks, but can be one chunk also. + + The other method can be used only for interleaved files: from + the order of the chunks, a timestamp (PTS) value can be calculated. 
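The four delay terms t1..t4 listed just above add up to one figure that is subtracted from the last demuxed audio PTS before comparing against the video PTS. A sketch of that arithmetic; the parameter names stand in for the fields the text mentions (d_audio->pts_bytes, a_in_buffer_len, a_buffer_len, get_audio_delay()), and none of this is the real mplayer.c code:

    /* total output-side audio delay, as described above */
    double audio_out_delay(double pts_bytes,     /* bytes read since last PTS  */
                           double acm_in_bytes,  /* Win32/ACM input buffer     */
                           double out_bytes,     /* decoded, not yet written   */
                           double card_bytes,    /* unplayed bytes in the card */
                           double i_bps,         /* compressed bytes/second    */
                           double o_bps)         /* uncompressed bytes/second  */
    {
        double t1 = pts_bytes    / i_bps;
        double t2 = acm_in_bytes / i_bps;
        double t3 = out_bytes    / o_bps;
        double t4 = card_bytes   / o_bps;
        return t1 + t2 + t3 + t4;   /* audible "now" = audio PTS - this value */
    }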
+ The PTS of the video chunks are simple: chunk number * fps + The audio is the same as the previous video chunk was. + We have to pay attention to the so called "audio preload", that is, + there is a delay between the audio and video streams. This is + usually 0.5-1.0 sec, but can be totally different. + The exact value was measured until now, but now the demux_avi.c + handles it: at the audio chunk after the first video, it calculates + the A/V difference, and take this as a measure for audio preload. 3.a. audio playback: - Some words on audio playback: - Not the playing is hard, but: - 1. knowing when to write into the buffer, without blocking - 2. knowing how much was played of what we wrote into - The first is needed for audio decoding, and to keep the buffer - full (so the audio will never skip). And the second is needed for - correct timing, because some soundcards delay even 3-7 seconds, - which can't be forgotten about. - To solve this, the OSS gives several possibilities: - - ioctl(SNDCTL_DSP_GETODELAY): tells how many unplayed bytes are in - the soundcard's buffer -> perfect for timing, but not all drivers - support it :( - - ioctl(SNDCTL_DSP_GETOSPACE): tells how much can we write into the - soundcard's buffer, without blocking. If the driver doesn't - support GETODELAY, we can use this to know how much the delay is. - - select(): should tell if we can write into the buffer without - blocking. Unfortunately it doesn't say how much we could :(( - Also, doesn't/badly works with some drivers. - Only used if none of the above works. + Some words on audio playback: + Not the playing is hard, but: + 1. knowing when to write into the buffer, without blocking + 2. knowing how much was played of what we wrote into + The first is needed for audio decoding, and to keep the buffer + full (so the audio will never skip). And the second is needed for + correct timing, because some soundcards delay even 3-7 seconds, + which can't be forgotten about. + To solve this, the OSS gives several possibilities: + - ioctl(SNDCTL_DSP_GETODELAY): tells how many unplayed bytes are in + the soundcard's buffer -> perfect for timing, but not all drivers + support it :( + - ioctl(SNDCTL_DSP_GETOSPACE): tells how much can we write into the + soundcard's buffer, without blocking. If the driver doesn't + support GETODELAY, we can use this to know how much the delay is. + - select(): should tell if we can write into the buffer without + blocking. Unfortunately it doesn't say how much we could :(( + Also, doesn't/badly works with some drivers. + Only used if none of the above works. 4. Codecs. Consists of libmpcodecs/* and separate files or libs, for example liba52, libmpeg2, xa/*, alaw.c, opendivx/*, loader, mp3lib. diff --git a/DOCS/tech/hwac3.txt b/DOCS/tech/hwac3.txt index 9ae5a4f136..25a2c5bc7a 100644 --- a/DOCS/tech/hwac3.txt +++ b/DOCS/tech/hwac3.txt @@ -131,9 +131,9 @@ configure mplayer to use adsp instead of dsp. The samplerate constrain is no big deal here since movies usually are in 48Khz anyway. The configuration in '/etc/modules.conf' is no big deal also: -alias snd-card-0 snd-card-cmipci # insert your card here -alias snd-card-1 snd-pcm-oss # load OSS emulation -options snd-pcm-oss snd_dsp_map=0 snd_adsp_map=2 # do the mapping +alias snd-card-0 snd-card-cmipci # insert your card here +alias snd-card-1 snd-pcm-oss # load OSS emulation +options snd-pcm-oss snd_dsp_map=0 snd_adsp_map=2 # do the mapping This works flawlessly in combination with alsa's native SysVrc-init-script 'alsasound'. 
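Section 3.a of general.txt above lists the OSS facilities for measuring the soundcard delay; here is a minimal standalone sketch of querying them on an opened PCM device. The /dev/dsp path is only an example and error handling is trimmed:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/soundcard.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/dsp", O_WRONLY);   /* example device path */
        if (fd < 0)
            return 1;

        int unplayed = 0;                      /* bytes still queued in the card */
        if (ioctl(fd, SNDCTL_DSP_GETODELAY, &unplayed) == 0)
            printf("unplayed: %d bytes\n", unplayed);

        audio_buf_info info;                   /* how much fits without blocking */
        if (ioctl(fd, SNDCTL_DSP_GETOSPACE, &info) == 0)
            printf("writable without blocking: %d bytes\n", info.bytes);

        close(fd);
        return 0;
    }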
Be sure to disable any distribution diff --git a/DOCS/tech/libvo.txt b/DOCS/tech/libvo.txt index 945aeab952..245d29eade 100644 --- a/DOCS/tech/libvo.txt +++ b/DOCS/tech/libvo.txt @@ -17,7 +17,7 @@ approaches closer to the sometimes convoluted way DirectX works. Each vo driver _has_ to implement these: preinit(): - init the video system (to support querying for supported formats) + init the video system (to support querying for supported formats) uninit(): Uninit the whole system, this is on the same "level" as preinit. @@ -26,95 +26,95 @@ Each vo driver _has_ to implement these: Current controls (VOCTRL_QUERY_FORMAT must be implemented, VOCTRL_DRAW_IMAGE, VOCTRL_FULLSCREEN, VOCTRL_UPDATE_SCREENINFO should be implemented): - VOCTRL_QUERY_FORMAT - queries if a given pixelformat is supported. - It also returns various flags decsirbing the capabilities - of the driver with teh given mode. for the flags, see - file vfcaps.h ! - the most important flags, every driver must properly report - these: - 0x1 - supported (with or without conversion) - 0x2 - supported without conversion (define 0x1 too!) - 0x100 - driver/hardware handles timing (blocking) - also SET sw/hw scaling and osd support flags, and flip, - and accept_stride if you implement VOCTRL_DRAW_IMAGE (see bellow) - NOTE: VOCTRL_QUERY_FORMAT may be called _before_ first config() - but is always called between preinit() and uninit() - VOCTRL_GET_IMAGE - libmpcodecs Direct Rendering interface - You need to update mpi (mp_image.h) structure, for example, - look at vo_x11, vo_sdl, vo_xv or mga_common. - VOCTRL_DRAW_IMAGE - replacement for the current draw_slice/draw_frame way of - passing video frames. by implementing SET_IMAGE, you'll get - image in mp_image struct instead of by calling draw_*. - unless you return VO_TRUE for VOCTRL_DRAW_IMAGE call, the - old-style draw_* functils will be called! - Note: draw_slice is still mandatory, for per-slice rendering! - VOCTRL_RESET - reset the video device - This is sent on seeking and similar and is useful if you are - using a device which prebuffers frames that need to flush them - before refilling audio/video buffers. - VOCTRL_PAUSE - VOCTRL_RESUME - VOCTRL_GUISUPPORT - return true only if driver supports co-operation with - MPlayer's GUI (not yet used by GUI) - VOCTRL_SET_EQUALIZER - set the video equalizer to the given values - two arguments are provided: item and value - item is a string, the possible values are (currently): - brightness, contrast, saturation, hue - VOCTRL_GET_EQUALIZER - get the current video equalizer values - two arguments are provided: item and value - item is a string, the possible values are (currently): - brightness, contrast, saturation, hue - VOCTRL_ONTOP - Makes the player window stay-on-top. Only supported (currently) - by drivers which use X11, except SDL, as well as directx and - gl2 under Windows. - VOCTRL_BORDER - Makes the player window borderless. - VOCTRL_FULLSCREEN - Switch from and to fullscreen mode - VOCTRL_GET_PANSCAN - VOCTRL_SET_PANSCAN + VOCTRL_QUERY_FORMAT - queries if a given pixelformat is supported. + It also returns various flags decsirbing the capabilities + of the driver with teh given mode. for the flags, see + file vfcaps.h ! + the most important flags, every driver must properly report + these: + 0x1 - supported (with or without conversion) + 0x2 - supported without conversion (define 0x1 too!) 
+ 0x100 - driver/hardware handles timing (blocking) + also SET sw/hw scaling and osd support flags, and flip, + and accept_stride if you implement VOCTRL_DRAW_IMAGE (see bellow) + NOTE: VOCTRL_QUERY_FORMAT may be called _before_ first config() + but is always called between preinit() and uninit() + VOCTRL_GET_IMAGE + libmpcodecs Direct Rendering interface + You need to update mpi (mp_image.h) structure, for example, + look at vo_x11, vo_sdl, vo_xv or mga_common. + VOCTRL_DRAW_IMAGE + replacement for the current draw_slice/draw_frame way of + passing video frames. by implementing SET_IMAGE, you'll get + image in mp_image struct instead of by calling draw_*. + unless you return VO_TRUE for VOCTRL_DRAW_IMAGE call, the + old-style draw_* functils will be called! + Note: draw_slice is still mandatory, for per-slice rendering! + VOCTRL_RESET - reset the video device + This is sent on seeking and similar and is useful if you are + using a device which prebuffers frames that need to flush them + before refilling audio/video buffers. + VOCTRL_PAUSE + VOCTRL_RESUME + VOCTRL_GUISUPPORT + return true only if driver supports co-operation with + MPlayer's GUI (not yet used by GUI) + VOCTRL_SET_EQUALIZER + set the video equalizer to the given values + two arguments are provided: item and value + item is a string, the possible values are (currently): + brightness, contrast, saturation, hue + VOCTRL_GET_EQUALIZER + get the current video equalizer values + two arguments are provided: item and value + item is a string, the possible values are (currently): + brightness, contrast, saturation, hue + VOCTRL_ONTOP + Makes the player window stay-on-top. Only supported (currently) + by drivers which use X11, except SDL, as well as directx and + gl2 under Windows. + VOCTRL_BORDER + Makes the player window borderless. + VOCTRL_FULLSCREEN + Switch from and to fullscreen mode + VOCTRL_GET_PANSCAN + VOCTRL_SET_PANSCAN Needed to implement pan-scan support ('w' and 'e' keys during - playback in fullscreen mode) - VOCTRL_START_SLICE - Called before the first draw_slice of each frame, useful if - you need to do some set-up work. - VOCTRL_DRAW_EOSD - Required for EOSD (ASS subtitle) support. Provides all - information necessary to draw the EOSD for the current video - frame. - VOCTRL_GET_EOSD_RES - Required for EOSD (ASS subtitle) support. Informs the ASS - renderer about the properties of the drawing area (size, - borders). - VOCTRL_SET_DEINTERLACE - VOCTRL_GET_DEINTERLACE - Get or set deinterlacing status for VOs that support some kind - of deinterlacing. - VOCTRL_UPDATE_SCREENINFO - Should set the xinerama_x, xinerama_y, vo_screenwidth and - vo_screenheight appropriately for the currently used - monitor and -xineramascreen option. - Usually should simply call the w32_update_xinerama_info or - update_xinerama_info function. - By supporting this, the VO also requests the newer API - that sets vo_dx, vo_dy etc. appropriately before config() - is called. + playback in fullscreen mode) + VOCTRL_START_SLICE + Called before the first draw_slice of each frame, useful if + you need to do some set-up work. + VOCTRL_DRAW_EOSD + Required for EOSD (ASS subtitle) support. Provides all + information necessary to draw the EOSD for the current video + frame. + VOCTRL_GET_EOSD_RES + Required for EOSD (ASS subtitle) support. Informs the ASS + renderer about the properties of the drawing area (size, + borders). 
+ VOCTRL_SET_DEINTERLACE + VOCTRL_GET_DEINTERLACE + Get or set deinterlacing status for VOs that support some kind + of deinterlacing. + VOCTRL_UPDATE_SCREENINFO + Should set the xinerama_x, xinerama_y, vo_screenwidth and + vo_screenheight appropriately for the currently used + monitor and -xineramascreen option. + Usually should simply call the w32_update_xinerama_info or + update_xinerama_info function. + By supporting this, the VO also requests the newer API + that sets vo_dx, vo_dy etc. appropriately before config() + is called. config(): Set up the video system. You get the dimensions and flags. width, height: size of the source image d_width, d_height: wanted scaled/display size (it's a hint) Flags: - 0x01 - force fullscreen (-fs) - 0x02 - allow mode switching (-vm) - 0x04 - allow software scaling (-zoom) - 0x08 - flipping (-flip) + 0x01 - force fullscreen (-fs) + 0x02 - allow mode switching (-vm) + 0x04 - allow software scaling (-zoom) + 0x08 - flipping (-flip) They're defined as VOFLAG_* (see libvo/video_out.h) IMPORTANT NOTE: config() may be called 0 (zero), 1 or more (2,3...) @@ -132,42 +132,42 @@ Each vo driver _has_ to implement these: support multi-monitor setups (if based on x11_common, w32_common). draw_slice(): this displays YV12 pictures (3 planes, one full sized that - contains brightness (Y), and 2 quarter-sized which the colour-info - (U,V). MPEG codecs (libmpeg2, opendivx) use this. This doesn't have - to display the whole frame, only update small parts of it. - If this is not supported, it must be signaled in QUERY_FORMAT with - VOCAP_NOSLICES. + contains brightness (Y), and 2 quarter-sized which the colour-info + (U,V). MPEG codecs (libmpeg2, opendivx) use this. This doesn't have + to display the whole frame, only update small parts of it. + If this is not supported, it must be signaled in QUERY_FORMAT with + VOCAP_NOSLICES. draw_frame(): this is the older interface, this displays only complete - frames, and can do only packed format (YUY2, RGB/BGR). - Win32 codecs use this (DivX, Indeo, etc). - If you implement VOCTRL_DRAW_IMAGE, you do not need to implement draw_frame. + frames, and can do only packed format (YUY2, RGB/BGR). + Win32 codecs use this (DivX, Indeo, etc). + If you implement VOCTRL_DRAW_IMAGE, you do not need to implement draw_frame. draw_osd(): this displays subtitles and OSD. - It's a bit tricky to use it, since it's a callback-style stuff. - It should call vo_draw_text() with screen dimension and your - draw_alpha implementation for the pixelformat (function pointer). - The vo_draw_text() checks the characters to draw, and calls - draw_alpha() for each. As a help, osd.c contains draw_alpha for - each pixelformats, use this if possible! - Note that if you do not draw directly onto the video you should - use vo_draw_text_ext() which allows you to specify the border - values etc. needed to draw DVD menu highlights at the correct place. - If you do not want to implement this, you can still use -vf - expand=osd=1 to draw the OSD, or even implement code to insert - this filter automatically. - Make sure you set VFCAP_OSD depending on whether you implemented it - or not. - - NOTE: This one will be obsolete soon! But it's still useful when - you want to do tricks, like rendering osd _after_ hardware scaling - (tdfxfb) or render subtitles under of the image (vo_mpegpes, sdl) - - NOTE2: above NOTE is probably wrong, there are currently no plans to - obsolete draw_osd, though there is the more advanced EOSD support for - ASS subtitles. 
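The control()/VOCTRL_QUERY_FORMAT contract described in the libvo.txt hunks above (return capability flags such as 0x1 "supported", 0x2 "supported without conversion", 0x100 "driver/hardware handles timing") can be sketched as follows. The request number and flag macros are simplified stand-ins, not the real definitions from video_out.h and vfcaps.h:

    #include <stdint.h>

    enum { VOCTRL_QUERY_FORMAT = 1 };   /* stand-in request id */

    #define CAP_SUPPORTED      0x1      /* with or without conversion     */
    #define CAP_NO_CONVERSION  0x2      /* set 0x1 too, as the text says  */
    #define CAP_HW_TIMING      0x100    /* driver/hardware handles timing */

    static int query_format(uint32_t fourcc)
    {
        switch (fourcc) {
        case 0x32315659:   /* 'YV12': native planar support in this sketch */
            return CAP_SUPPORTED | CAP_NO_CONVERSION;
        case 0x32595559:   /* 'YUY2': accepted, but only via conversion    */
            return CAP_SUPPORTED;
        default:
            return 0;      /* pixelformat not displayable                  */
        }
    }

    int control(uint32_t request, void *data)
    {
        switch (request) {
        case VOCTRL_QUERY_FORMAT:
            return query_format(*(uint32_t *)data);
        default:
            return 0;      /* "not implemented" in this sketch */
        }
    }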
+ It's a bit tricky to use it, since it's a callback-style stuff. + It should call vo_draw_text() with screen dimension and your + draw_alpha implementation for the pixelformat (function pointer). + The vo_draw_text() checks the characters to draw, and calls + draw_alpha() for each. As a help, osd.c contains draw_alpha for + each pixelformats, use this if possible! + Note that if you do not draw directly onto the video you should + use vo_draw_text_ext() which allows you to specify the border + values etc. needed to draw DVD menu highlights at the correct place. + If you do not want to implement this, you can still use -vf + expand=osd=1 to draw the OSD, or even implement code to insert + this filter automatically. + Make sure you set VFCAP_OSD depending on whether you implemented it + or not. + + NOTE: This one will be obsolete soon! But it's still useful when + you want to do tricks, like rendering osd _after_ hardware scaling + (tdfxfb) or render subtitles under of the image (vo_mpegpes, sdl) + + NOTE2: above NOTE is probably wrong, there are currently no plans to + obsolete draw_osd, though there is the more advanced EOSD support for + ASS subtitles. flip_page(): this is called after each frame, this displays the buffer for - real. This is 'swapbuffers' when doublebuffering. - Try to do as little work here as possible, since that affect jitter/ - A-V sync. + real. This is 'swapbuffers' when doublebuffering. + Try to do as little work here as possible, since that affect jitter/ + A-V sync. diff --git a/DOCS/tech/mirrors/mirror_howto.txt b/DOCS/tech/mirrors/mirror_howto.txt index 37a3d9cdfd..72116e8490 100644 --- a/DOCS/tech/mirrors/mirror_howto.txt +++ b/DOCS/tech/mirrors/mirror_howto.txt @@ -1,6 +1,6 @@ - ------------------------------ + ------------------------------ How to build an MPlayer mirror - ------------------------------ + ------------------------------ This document might be inacurate or even incomplete, please send feedback + corrections to the mplayer-mirror mailing list. diff --git a/DOCS/tech/mpdsf.txt b/DOCS/tech/mpdsf.txt index 1ed86edf94..27b51a7dff 100644 --- a/DOCS/tech/mpdsf.txt +++ b/DOCS/tech/mpdsf.txt @@ -6,17 +6,17 @@ Designed by Alex & Arpi The file starts with a variable size header: -------------------------------------------- -32-bit Stream format fourcc (MPVS or MPAS) - MPVS = MPlayer Video Stream - MPAS = MPlayer Audio Stream -8-bit Demuxer type (AVI,MOV,ASF,REAL,...) -8-bit Flags (marks dumped headers) - Values: 0x1: WAVEFORMATEX - 0x2: Audio extra codec data - 0x4: BITMAPINFOHEADER - 0x8: QT's ImageDesc - 0x16: indicates 32-bit chunk size before every data chunk -16-bit Length of headers +32-bit Stream format fourcc (MPVS or MPAS) + MPVS = MPlayer Video Stream + MPAS = MPlayer Audio Stream +8-bit Demuxer type (AVI,MOV,ASF,REAL,...) +8-bit Flags (marks dumped headers) + Values: 0x1: WAVEFORMATEX + 0x2: Audio extra codec data + 0x4: BITMAPINFOHEADER + 0x8: QT's ImageDesc + 0x16: indicates 32-bit chunk size before every data chunk +16-bit Length of headers There's strict rule in the follow-up of the codec-headers. Depending on flags, @@ -24,5 +24,5 @@ Depending on flags, Data |
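The fixed leading fields of the mpdsf header listed above map naturally onto a small struct. Byte order and packing are not stated in the text, so the layout below is only an illustration, not a parser for real files:

    #include <stdint.h>

    /* leading fields of the mpdsf header, per the description above */
    struct mpdsf_header {
        uint32_t fourcc;        /* 'MPVS' (video stream) or 'MPAS' (audio)    */
        uint8_t  demuxer_type;  /* AVI, MOV, ASF, REAL, ...                   */
        uint8_t  flags;         /* which codec headers were dumped (see list) */
        uint16_t header_len;    /* length of the headers that follow          */
    };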