author     diego <diego@b3059339-0415-0410-9bf9-f77b7e298cf2>   2010-04-12 10:56:17 +0000
committer  diego <diego@b3059339-0415-0410-9bf9-f77b7e298cf2>   2010-04-12 10:56:17 +0000
commit     7573c29480850d715e2f06cae70f252573098123 (patch)
tree       a5a2f498ad3a19806957e1d7e01f913c1650b33d /DOCS
parent     86ea8d4f4abf23672516fa0ca3378aa19c44bf2c (diff)
the great MPlayer tab removal: part I
git-svn-id: svn://svn.mplayerhq.hu/mplayer/trunk@31032 b3059339-0415-0410-9bf9-f77b7e298cf2
Diffstat (limited to 'DOCS')
-rw-r--r--  DOCS/default.css | 8
-rw-r--r--  DOCS/man/fr/mplayer.1 | 4
-rw-r--r--  DOCS/tech/codec-devel.txt | 2
-rw-r--r--  DOCS/tech/colorspaces.txt | 6
-rw-r--r--  DOCS/tech/general.txt | 252
-rw-r--r--  DOCS/tech/hwac3.txt | 6
-rw-r--r--  DOCS/tech/libvo.txt | 232
-rw-r--r--  DOCS/tech/mirrors/mirror_howto.txt | 4
-rw-r--r--  DOCS/tech/mpdsf.txt | 26
-rw-r--r--  DOCS/tech/mpsub.sub | 14
-rw-r--r--  DOCS/tech/osd.txt | 54
-rw-r--r--  DOCS/tech/realcodecs/audio-codecs.txt | 28
-rw-r--r--  DOCS/tech/realcodecs/streaming.txt | 4
-rw-r--r--  DOCS/tech/realcodecs/video-codecs.txt | 96
-rw-r--r--  DOCS/tech/subcp.txt | 14
-rw-r--r--  DOCS/tech/swscaler_filters.txt | 8
-rw-r--r--  DOCS/tech/swscaler_methods.txt | 64
-rw-r--r--  DOCS/tech/vidix.txt | 78
-rw-r--r--  DOCS/xml/README | 21
-rw-r--r--  DOCS/xml/README.maintainers | 6
-rwxr-xr-x  DOCS/xml/configure | 10
-rw-r--r--  DOCS/xml/cs/video.xml | 2
-rw-r--r--  DOCS/xml/de/cd-dvd.xml | 4
-rw-r--r--  DOCS/xml/de/encoding-guide.xml | 2
-rw-r--r--  DOCS/xml/de/faq.xml | 2
-rw-r--r--  DOCS/xml/de/tvinput.xml | 4
-rw-r--r--  DOCS/xml/de/usage.xml | 1910
-rw-r--r--  DOCS/xml/de/video.xml | 8
-rw-r--r--  DOCS/xml/default.css | 66
-rw-r--r--  DOCS/xml/en/ports.xml | 2
-rw-r--r--  DOCS/xml/en/skin.xml | 86
-rw-r--r--  DOCS/xml/en/video.xml | 2
-rw-r--r--  DOCS/xml/es/video.xml | 4
-rw-r--r--  DOCS/xml/fr/encoding-guide.xml | 314
-rw-r--r--  DOCS/xml/fr/install.xml | 16
-rw-r--r--  DOCS/xml/fr/mencoder.xml | 16
-rw-r--r--  DOCS/xml/fr/ports.xml | 23
-rw-r--r--  DOCS/xml/fr/usage.xml | 2
-rw-r--r--  DOCS/xml/fr/video.xml | 46
-rw-r--r--  DOCS/xml/hu/encoding-guide.xml | 14
-rw-r--r--  DOCS/xml/hu/mencoder.xml | 34
-rw-r--r--  DOCS/xml/hu/video.xml | 2
-rw-r--r--  DOCS/xml/it/usage.xml | 2
-rw-r--r--  DOCS/xml/it/video.xml | 2
-rw-r--r--  DOCS/xml/ldp.dsl | 6
-rw-r--r--  DOCS/xml/pl/bugreports.xml | 2
-rw-r--r--  DOCS/xml/pl/faq.xml | 2
-rw-r--r--  DOCS/xml/pl/ports.xml | 231
-rw-r--r--  DOCS/xml/pl/skin.xml | 98
-rw-r--r--  DOCS/xml/ru/usage.xml | 2
-rw-r--r--  DOCS/xml/xsl/ldp-html-common.xsl | 3
51 files changed, 1920 insertions, 1924 deletions
diff --git a/DOCS/default.css b/DOCS/default.css
index 557be9623a..dde261b174 100644
--- a/DOCS/default.css
+++ b/DOCS/default.css
@@ -1,5 +1,5 @@
-body,table {
- font-family : Arial, Helvetica, sans-serif;
- font-size : 14px;
- background : white;
+body,table {
+ font-family : Arial, Helvetica, sans-serif;
+ font-size : 14px;
+ background : white;
}
diff --git a/DOCS/man/fr/mplayer.1 b/DOCS/man/fr/mplayer.1
index 2988123d0c..e5d1ccbb37 100644
--- a/DOCS/man/fr/mplayer.1
+++ b/DOCS/man/fr/mplayer.1
@@ -5786,8 +5786,8 @@ Les aires de mémoire mappées contiennent une entête:
int nch /*nombre de canaux*/
int size /*taille du tampon*/
unsigned long long counter /*Utilisé pour garder la synchro, mis à jour
- chaque fois que de nouvelles données son
- exportées.*/
+ chaque fois que de nouvelles données son
+ exportées.*/
.fi
.sp 1
Le reste est charge utile, constitué de données 16bit (non-entrelacées).
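For illustration, a minimal C sketch of a client reading the exported header described in the hunk above. The field order (nch, size, counter) comes from the man page text; the file path and the tight struct packing are assumptions, not the documented interface.

    /* Sketch only: reads the nch/size/counter header of the exported audio
     * area described above. Path and struct packing are assumptions. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    struct export_header {
        int nch;                      /* number of channels           */
        int size;                     /* size of the payload buffer   */
        unsigned long long counter;   /* bumped when new data arrives */
    };

    int main(void) {
        int fd = open("/tmp/mplayer-af_export", O_RDONLY);   /* assumed path */
        struct export_header hdr;
        if (fd < 0 || read(fd, &hdr, sizeof(hdr)) != (ssize_t)sizeof(hdr))
            return 1;
        printf("channels=%d buffer=%d counter=%llu\n", hdr.nch, hdr.size, hdr.counter);
        /* what follows in the area is non-interleaved 16-bit sample data */
        close(fd);
        return 0;
    }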
diff --git a/DOCS/tech/codec-devel.txt b/DOCS/tech/codec-devel.txt
index 765bf1c281..bc968c9499 100644
--- a/DOCS/tech/codec-devel.txt
+++ b/DOCS/tech/codec-devel.txt
@@ -43,7 +43,7 @@ understand, then you will either need to somehow convert the format to a
media file format that the program does understand, or write your own
MPlayer file demuxer that can handle the data. Writing a file demuxer
is beyond the scope of this document.
- Try to obtain media that stresses all possible modes of a
+ Try to obtain media that stresses all possible modes of a
decoder. If an audio codec is known to work with both mono and stereo
data, search for sample media of both types. If a video codec is known to
work at 7 different bit depths, then, as painful as it may be, do what you
diff --git a/DOCS/tech/colorspaces.txt b/DOCS/tech/colorspaces.txt
index 291a435f30..eaf9d221e4 100644
--- a/DOCS/tech/colorspaces.txt
+++ b/DOCS/tech/colorspaces.txt
@@ -66,9 +66,9 @@ The most misunderstood thingie...
In MPlayer, we usually have 3 pointers to the Y, U and V planes, so it
doesn't matter what the order of the planes in the memory is:
for mp_image_t and libvo's draw_slice():
- planes[0] = Y = luminance
- planes[1] = U = Cb = blue
- planes[2] = V = Cr = red
+ planes[0] = Y = luminance
+ planes[1] = U = Cb = blue
+ planes[2] = V = Cr = red
Note: planes[1] is ALWAYS U, and planes[2] is V, the FOURCC
(YV12 vs. I420) doesn't matter here! So, every codec using 3 pointers
(not only the first one) normally supports YV12 and I420 (=IYUV), too!
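For illustration, a small sketch (not the actual mp_image_t code) of how the three plane pointers can be set up for a packed 4:2:0 buffer so that planes[1] is always U and planes[2] is always V, regardless of FOURCC:

    #include <stdint.h>

    static void setup_yuv420_planes(uint8_t *buf, int w, int h,
                                    int is_yv12,          /* YV12: V plane before U */
                                    uint8_t *planes[3]) {
        int y_size = w * h;                 /* full-resolution luma            */
        int c_size = (w / 2) * (h / 2);     /* each chroma plane: quarter size */
        planes[0] = buf;                    /* Y = luminance                   */
        if (is_yv12) {                      /* memory order: Y, V, U           */
            planes[2] = buf + y_size;           /* V = Cr = red  */
            planes[1] = buf + y_size + c_size;  /* U = Cb = blue */
        } else {                            /* I420/IYUV order: Y, U, V        */
            planes[1] = buf + y_size;           /* U = Cb = blue */
            planes[2] = buf + y_size + c_size;  /* V = Cr = red  */
        }
    }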
diff --git a/DOCS/tech/general.txt b/DOCS/tech/general.txt
index 631ee3f9de..4ca3671cf3 100644
--- a/DOCS/tech/general.txt
+++ b/DOCS/tech/general.txt
@@ -14,64 +14,64 @@ The main modules:
2. demuxer.c: this does the demultiplexing (separating) of the input to
audio, video or dvdsub channels, and their reading by buffered packages.
- The demuxer.c is basically a framework, which is the same for all the
- input formats, and there are parsers for each of them (mpeg-es,
- mpeg-ps, avi, avi-ni, asf), these are in the demux_*.c files.
- The structure is the demuxer_t. There is only one demuxer.
+ The demuxer.c is basically a framework, which is the same for all the
+ input formats, and there are parsers for each of them (mpeg-es,
+ mpeg-ps, avi, avi-ni, asf), these are in the demux_*.c files.
+ The structure is the demuxer_t. There is only one demuxer.
2.a. demux_packet_t, that is DP.
Contains one chunk (avi) or packet (asf,mpg). They are stored in memory as
- in linked list, cause of their different size.
+ in linked list, cause of their different size.
2.b. demuxer stream, that is DS.
Struct: demux_stream_t
Every channel (a/v/s) has one. This contains the packets for the stream
(see 2.a). For now, there can be 3 for each demuxer :
- - audio (d_audio)
- - video (d_video)
- - DVD subtitle (d_dvdsub)
+ - audio (d_audio)
+ - video (d_video)
+ - DVD subtitle (d_dvdsub)
2.c. stream header. There are 2 types (for now): sh_audio_t and sh_video_t
This contains every parameter essential for decoding, such as input/output
- buffers, chosen codec, fps, etc. There are each for every stream in
- the file. At least one for video, if sound is present then another,
- but if there are more, then there'll be one structure for each.
- These are filled according to the header (avi/asf), or demux_mpg.c
- does it (mpg) if it founds a new stream. If a new stream is found,
- the ====> Found audio/video stream: <id> messages is displayed.
-
- The chosen stream header and its demuxer are connected together
- (ds->sh and sh->ds) to simplify the usage. So it's enough to pass the
- ds or the sh, depending on the function.
-
- For example: we have an asf file, 6 streams inside it, 1 audio, 5
- video. During the reading of the header, 6 sh structs are created, 1
- audio and 5 video. When it starts reading the packet, it chooses the
- stream for the first found audio & video packet, and sets the sh
- pointers of d_audio and d_video according to them. So later it reads
- only these streams. Of course the user can force choosing a specific
- stream with
- -vid and -aid switches.
- A good example for this is the DVD, where the english stream is not
- always the first, so every VOB has different language :)
- That's when we have to use for example the -aid 128 switch.
+ buffers, chosen codec, fps, etc. There are each for every stream in
+ the file. At least one for video, if sound is present then another,
+ but if there are more, then there'll be one structure for each.
+ These are filled according to the header (avi/asf), or demux_mpg.c
+ does it (mpg) if it founds a new stream. If a new stream is found,
+ the ====> Found audio/video stream: <id> messages is displayed.
+
+ The chosen stream header and its demuxer are connected together
+ (ds->sh and sh->ds) to simplify the usage. So it's enough to pass the
+ ds or the sh, depending on the function.
+
+ For example: we have an asf file, 6 streams inside it, 1 audio, 5
+ video. During the reading of the header, 6 sh structs are created, 1
+ audio and 5 video. When it starts reading the packet, it chooses the
+ stream for the first found audio & video packet, and sets the sh
+ pointers of d_audio and d_video according to them. So later it reads
+ only these streams. Of course the user can force choosing a specific
+ stream with
+ -vid and -aid switches.
+ A good example for this is the DVD, where the english stream is not
+ always the first, so every VOB has different language :)
+ That's when we have to use for example the -aid 128 switch.
Now, how this reading works?
- - demuxer.c/demux_read_data() is called, it gets how many bytes,
- and where (memory address), would we like to read, and from which
+ - demuxer.c/demux_read_data() is called, it gets how many bytes,
+ and where (memory address), would we like to read, and from which
DS. The codecs call this.
- - this checks if the given DS's buffer contains something, if so, it
- reads from there as much as needed. If there isn't enough, it calls
- ds_fill_buffer(), which:
- - checks if the given DS has buffered packages (DP's), if so, it moves
- the oldest to the buffer, and reads on. If the list is empty, it
- calls demux_fill_buffer() :
- - this calls the parser for the input format, which reads the file
- onward, and moves the found packages to their buffers.
- Well it we'd like an audio package, but only a bunch of video
- packages are available, then sooner or later the:
- DEMUXER: Too many (%d in %d bytes) audio packets in the buffer
- error shows up.
+ - this checks if the given DS's buffer contains something, if so, it
+ reads from there as much as needed. If there isn't enough, it calls
+ ds_fill_buffer(), which:
+ - checks if the given DS has buffered packages (DP's), if so, it moves
+ the oldest to the buffer, and reads on. If the list is empty, it
+ calls demux_fill_buffer() :
+ - this calls the parser for the input format, which reads the file
+ onward, and moves the found packages to their buffers.
+ Well it we'd like an audio package, but only a bunch of video
+ packages are available, then sooner or later the:
+ DEMUXER: Too many (%d in %d bytes) audio packets in the buffer
+ error shows up.
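A rough sketch of that call chain; the struct and the prototypes are simplified stand-ins, not the real demuxer.c ones.

    #include <string.h>

    typedef struct demux_stream {
        unsigned char *buffer;          /* payload of the current DP      */
        int buffer_pos, buffer_size;    /* read position / filled length  */
        /* ... DP list, pts bookkeeping, pointer to its stream header ... */
    } demux_stream_t;

    int ds_fill_buffer(demux_stream_t *ds);  /* moves the oldest queued DP into
                                                buffer; when the DP list is empty
                                                it calls demux_fill_buffer(), i.e.
                                                the container parser; 0 on EOF */

    int demux_read_data(demux_stream_t *ds, unsigned char *mem, int len) {
        int copied = 0;
        while (copied < len) {
            if (ds->buffer_pos >= ds->buffer_size && !ds_fill_buffer(ds))
                break;                                  /* EOF or error      */
            int avail = ds->buffer_size - ds->buffer_pos;
            int chunk = (len - copied < avail) ? len - copied : avail;
            memcpy(mem + copied, ds->buffer + ds->buffer_pos, chunk);
            ds->buffer_pos += chunk;
            copied += chunk;
        }
        return copied;                                  /* short read at EOF */
    }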
2.d. video.c: this file/function handle the reading and assembling of the
video frames. each call to video_read_frame() should read and return a
@@ -101,7 +101,7 @@ Now, go on:
The given stream's actual position is in the 'timer' field of the
corresponding stream header (sh_audio / sh_video).
- The structure of the playing loop :
+ The structure of the playing loop :
while(not EOF) {
fill audio buffer (read & decode audio) + increase a_frame
read & decode a single video frame + increase v_frame
@@ -111,89 +111,89 @@ Now, go on:
handle events (keys,lirc etc) -> pause,seek,...
}
- When playing (a/v), it increases the variables by the duration of the
- played a/v.
- - with audio this is played bytes / sh_audio->o_bps
- Note: i_bps = number of compressed bytes for one second of audio
- o_bps = number of uncompressed bytes for one second of audio
- (this is = bps*samplerate*channels)
- - with video this is usually == 1.0/fps, but I have to note that
- fps doesn't really matters at video, for example asf doesn't have that,
- instead there is "duration" and it can change per frame.
- MPEG2 has "repeat_count" which delays the frame by 1-2.5 ...
- Maybe only AVI and MPEG1 has fixed fps.
-
- So everything works right until the audio and video are in perfect
- synchronity, since the audio goes, it gives the timing, and if the
- time of a frame passed, the next frame is displayed.
- But what if these two aren't synchronized in the input file?
- PTS correction kicks in. The input demuxers read the PTS (presentation
- timestamp) of the packages, and with it we can see if the streams
- are synchronized. Then MPlayer can correct the a_frame, within
- a given maximal bounder (see -mc option). The summary of the
- corrections can be found in c_total .
-
- Of course this is not everything, several things suck.
- For example the soundcards delay, which has to be corrected by
- MPlayer! The audio delay is the sum of all these:
- - bytes read since the last timestamp:
- t1 = d_audio->pts_bytes/sh_audio->i_bps
- - if Win32/ACM then the bytes stored in audio input buffer
- t2 = a_in_buffer_len/sh_audio->i_bps
- - uncompressed bytes in audio out buffer
- t3 = a_buffer_len/sh_audio->o_bps
- - not yet played bytes stored in the soundcard's (or DMA's) buffer
- t4 = get_audio_delay()/sh_audio->o_bps
-
- From this we can calculate what PTS we need for the just played
- audio, then after we compare this with the video's PTS, we have
- the difference!
-
- Life didn't get simpler with AVI. There's the "official" timing
- method, the BPS-based, so the header contains how many compressed
- audio bytes or chunks belong to one second of frames.
- In the AVI stream header there are 2 important fields, the
- dwSampleSize, and dwRate/dwScale pairs:
- - If the dwSampleSize is 0, then it's VBR stream, so its bitrate
- isn't constant. It means that 1 chunk stores 1 sample, and
- dwRate/dwScale gives the chunks/sec value.
- - If the dwSampleSize is >0, then it's constant bitrate, and the
- time can be measured this way: time = (bytepos/dwSampleSize) /
- (dwRate/dwScale) (so the sample's number is divided with the
- samplerate). Now the audio can be handled as a stream, which can
- be cut to chunks, but can be one chunk also.
-
- The other method can be used only for interleaved files: from
- the order of the chunks, a timestamp (PTS) value can be calculated.
- The PTS of the video chunks are simple: chunk number * fps
- The audio is the same as the previous video chunk was.
- We have to pay attention to the so called "audio preload", that is,
- there is a delay between the audio and video streams. This is
- usually 0.5-1.0 sec, but can be totally different.
- The exact value was measured until now, but now the demux_avi.c
- handles it: at the audio chunk after the first video, it calculates
- the A/V difference, and take this as a measure for audio preload.
+ When playing (a/v), it increases the variables by the duration of the
+ played a/v.
+ - with audio this is played bytes / sh_audio->o_bps
+ Note: i_bps = number of compressed bytes for one second of audio
+ o_bps = number of uncompressed bytes for one second of audio
+ (this is = bps*samplerate*channels)
+ - with video this is usually == 1.0/fps, but I have to note that
+ fps doesn't really matters at video, for example asf doesn't have that,
+ instead there is "duration" and it can change per frame.
+ MPEG2 has "repeat_count" which delays the frame by 1-2.5 ...
+ Maybe only AVI and MPEG1 has fixed fps.
+
+ So everything works right until the audio and video are in perfect
+ synchronity, since the audio goes, it gives the timing, and if the
+ time of a frame passed, the next frame is displayed.
+ But what if these two aren't synchronized in the input file?
+ PTS correction kicks in. The input demuxers read the PTS (presentation
+ timestamp) of the packages, and with it we can see if the streams
+ are synchronized. Then MPlayer can correct the a_frame, within
+ a given maximal bounder (see -mc option). The summary of the
+ corrections can be found in c_total .
+
+ Of course this is not everything, several things suck.
+ For example the soundcards delay, which has to be corrected by
+ MPlayer! The audio delay is the sum of all these:
+ - bytes read since the last timestamp:
+ t1 = d_audio->pts_bytes/sh_audio->i_bps
+ - if Win32/ACM then the bytes stored in audio input buffer
+ t2 = a_in_buffer_len/sh_audio->i_bps
+ - uncompressed bytes in audio out buffer
+ t3 = a_buffer_len/sh_audio->o_bps
+ - not yet played bytes stored in the soundcard's (or DMA's) buffer
+ t4 = get_audio_delay()/sh_audio->o_bps
+
+ From this we can calculate what PTS we need for the just played
+ audio, then after we compare this with the video's PTS, we have
+ the difference!
+
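A compact sketch of that sum; the variable names follow the text above, this is not the actual mplayer.c code, and the final sign convention is an assumption.

    extern int d_audio_pts_bytes;      /* bytes demuxed since the last audio PTS  */
    extern int a_in_buffer_len;        /* Win32/ACM input buffer fill (bytes)     */
    extern int a_buffer_len;           /* decoded bytes not yet written out       */
    extern int i_bps, o_bps;           /* compressed / uncompressed bytes per sec */
    extern int get_audio_delay(void);  /* bytes still queued in the soundcard     */

    static double audio_out_delay_sec(void) {
        return d_audio_pts_bytes / (double)i_bps      /* t1 */
             + a_in_buffer_len   / (double)i_bps      /* t2 */
             + a_buffer_len      / (double)o_bps      /* t3 */
             + get_audio_delay() / (double)o_bps;     /* t4 */
    }
    /* a_pts   = pts_of_last_demuxed_audio - audio_out_delay_sec();
     * av_diff = a_pts - v_pts;           <- what -mc is allowed to correct */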
+ Life didn't get simpler with AVI. There's the "official" timing
+ method, the BPS-based, so the header contains how many compressed
+ audio bytes or chunks belong to one second of frames.
+ In the AVI stream header there are 2 important fields, the
+ dwSampleSize, and dwRate/dwScale pairs:
+ - If the dwSampleSize is 0, then it's VBR stream, so its bitrate
+ isn't constant. It means that 1 chunk stores 1 sample, and
+ dwRate/dwScale gives the chunks/sec value.
+ - If the dwSampleSize is >0, then it's constant bitrate, and the
+ time can be measured this way: time = (bytepos/dwSampleSize) /
+ (dwRate/dwScale) (so the sample's number is divided with the
+ samplerate). Now the audio can be handled as a stream, which can
+ be cut to chunks, but can be one chunk also.
+
+ The other method can be used only for interleaved files: from
+ the order of the chunks, a timestamp (PTS) value can be calculated.
+ The PTS of the video chunks are simple: chunk number * fps
+ The audio is the same as the previous video chunk was.
+ We have to pay attention to the so called "audio preload", that is,
+ there is a delay between the audio and video streams. This is
+ usually 0.5-1.0 sec, but can be totally different.
+ The exact value was measured until now, but now the demux_avi.c
+ handles it: at the audio chunk after the first video, it calculates
+ the A/V difference, and take this as a measure for audio preload.
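The two timing rules can be written down as a small helper; the field names are the AVI stream header ones quoted above, the function itself is only illustrative.

    double avi_audio_time(unsigned dwSampleSize, unsigned dwRate, unsigned dwScale,
                          long long bytepos, long long chunk_no) {
        if (dwSampleSize == 0)                           /* VBR: 1 chunk = 1 sample   */
            return chunk_no * (double)dwScale / dwRate;  /* dwRate/dwScale = chunks/s */
        /* CBR: sample position divided by the samplerate */
        return (bytepos / (double)dwSampleSize) / ((double)dwRate / dwScale);
    }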
3.a. audio playback:
- Some words on audio playback:
- Not the playing is hard, but:
- 1. knowing when to write into the buffer, without blocking
- 2. knowing how much was played of what we wrote into
- The first is needed for audio decoding, and to keep the buffer
- full (so the audio will never skip). And the second is needed for
- correct timing, because some soundcards delay even 3-7 seconds,
- which can't be forgotten about.
- To solve this, the OSS gives several possibilities:
- - ioctl(SNDCTL_DSP_GETODELAY): tells how many unplayed bytes are in
- the soundcard's buffer -> perfect for timing, but not all drivers
- support it :(
- - ioctl(SNDCTL_DSP_GETOSPACE): tells how much can we write into the
- soundcard's buffer, without blocking. If the driver doesn't
- support GETODELAY, we can use this to know how much the delay is.
- - select(): should tell if we can write into the buffer without
- blocking. Unfortunately it doesn't say how much we could :((
- Also, doesn't/badly works with some drivers.
- Only used if none of the above works.
+ Some words on audio playback:
+ Not the playing is hard, but:
+ 1. knowing when to write into the buffer, without blocking
+ 2. knowing how much was played of what we wrote into
+ The first is needed for audio decoding, and to keep the buffer
+ full (so the audio will never skip). And the second is needed for
+ correct timing, because some soundcards delay even 3-7 seconds,
+ which can't be forgotten about.
+ To solve this, the OSS gives several possibilities:
+ - ioctl(SNDCTL_DSP_GETODELAY): tells how many unplayed bytes are in
+ the soundcard's buffer -> perfect for timing, but not all drivers
+ support it :(
+ - ioctl(SNDCTL_DSP_GETOSPACE): tells how much can we write into the
+ soundcard's buffer, without blocking. If the driver doesn't
+ support GETODELAY, we can use this to know how much the delay is.
+ - select(): should tell if we can write into the buffer without
+ blocking. Unfortunately it doesn't say how much we could :((
+ Also, doesn't/badly works with some drivers.
+ Only used if none of the above works.
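For illustration, a minimal sketch of that GETODELAY-first, GETOSPACE-fallback order, using the plain OSS ioctls named above (not the actual ao_oss code; error handling trimmed):

    #include <sys/ioctl.h>
    #include <sys/soundcard.h>

    /* How many not-yet-played bytes sit in the driver/soundcard buffer. */
    static int oss_get_delay_bytes(int fd) {
        int delay;
        if (ioctl(fd, SNDCTL_DSP_GETODELAY, &delay) != -1)
            return delay;                            /* the precise answer       */
        audio_buf_info info;                         /* fallback: total - free   */
        if (ioctl(fd, SNDCTL_DSP_GETOSPACE, &info) != -1)
            return info.fragstotal * info.fragsize - info.bytes;
        return 0;                                    /* no info: assume no delay */
    }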
4. Codecs. Consists of libmpcodecs/* and separate files or libs,
for example liba52, libmpeg2, xa/*, alaw.c, opendivx/*, loader, mp3lib.
diff --git a/DOCS/tech/hwac3.txt b/DOCS/tech/hwac3.txt
index 9ae5a4f136..25a2c5bc7a 100644
--- a/DOCS/tech/hwac3.txt
+++ b/DOCS/tech/hwac3.txt
@@ -131,9 +131,9 @@ configure mplayer to use adsp instead of dsp. The samplerate constrain
is no big deal here since movies usually are in 48Khz anyway. The
configuration in '/etc/modules.conf' is no big deal also:
-alias snd-card-0 snd-card-cmipci # insert your card here
-alias snd-card-1 snd-pcm-oss # load OSS emulation
-options snd-pcm-oss snd_dsp_map=0 snd_adsp_map=2 # do the mapping
+alias snd-card-0 snd-card-cmipci # insert your card here
+alias snd-card-1 snd-pcm-oss # load OSS emulation
+options snd-pcm-oss snd_dsp_map=0 snd_adsp_map=2 # do the mapping
This works flawlessly in combination with alsa's native
SysVrc-init-script 'alsasound'. Be sure to disable any distribution
diff --git a/DOCS/tech/libvo.txt b/DOCS/tech/libvo.txt
index 945aeab952..245d29eade 100644
--- a/DOCS/tech/libvo.txt
+++ b/DOCS/tech/libvo.txt
@@ -17,7 +17,7 @@ approaches closer to the sometimes convoluted way DirectX works.
Each vo driver _has_ to implement these:
preinit():
- init the video system (to support querying for supported formats)
+ init the video system (to support querying for supported formats)
uninit():
Uninit the whole system, this is on the same "level" as preinit.
@@ -26,95 +26,95 @@ Each vo driver _has_ to implement these:
Current controls (VOCTRL_QUERY_FORMAT must be implemented,
VOCTRL_DRAW_IMAGE, VOCTRL_FULLSCREEN, VOCTRL_UPDATE_SCREENINFO
should be implemented):
- VOCTRL_QUERY_FORMAT - queries if a given pixelformat is supported.
- It also returns various flags decsirbing the capabilities
- of the driver with teh given mode. for the flags, see
- file vfcaps.h !
- the most important flags, every driver must properly report
- these:
- 0x1 - supported (with or without conversion)
- 0x2 - supported without conversion (define 0x1 too!)
- 0x100 - driver/hardware handles timing (blocking)
- also SET sw/hw scaling and osd support flags, and flip,
- and accept_stride if you implement VOCTRL_DRAW_IMAGE (see bellow)
- NOTE: VOCTRL_QUERY_FORMAT may be called _before_ first config()
- but is always called between preinit() and uninit()
- VOCTRL_GET_IMAGE
- libmpcodecs Direct Rendering interface
- You need to update mpi (mp_image.h) structure, for example,
- look at vo_x11, vo_sdl, vo_xv or mga_common.
- VOCTRL_DRAW_IMAGE
- replacement for the current draw_slice/draw_frame way of
- passing video frames. by implementing SET_IMAGE, you'll get
- image in mp_image struct instead of by calling draw_*.
- unless you return VO_TRUE for VOCTRL_DRAW_IMAGE call, the
- old-style draw_* functils will be called!
- Note: draw_slice is still mandatory, for per-slice rendering!
- VOCTRL_RESET - reset the video device
- This is sent on seeking and similar and is useful if you are
- using a device which prebuffers frames that need to flush them
- before refilling audio/video buffers.
- VOCTRL_PAUSE
- VOCTRL_RESUME
- VOCTRL_GUISUPPORT
- return true only if driver supports co-operation with
- MPlayer's GUI (not yet used by GUI)
- VOCTRL_SET_EQUALIZER
- set the video equalizer to the given values
- two arguments are provided: item and value
- item is a string, the possible values are (currently):
- brightness, contrast, saturation, hue
- VOCTRL_GET_EQUALIZER
- get the current video equalizer values
- two arguments are provided: item and value
- item is a string, the possible values are (currently):
- brightness, contrast, saturation, hue
- VOCTRL_ONTOP
- Makes the player window stay-on-top. Only supported (currently)
- by drivers which use X11, except SDL, as well as directx and
- gl2 under Windows.
- VOCTRL_BORDER
- Makes the player window borderless.
- VOCTRL_FULLSCREEN
- Switch from and to fullscreen mode
- VOCTRL_GET_PANSCAN
- VOCTRL_SET_PANSCAN
+ VOCTRL_QUERY_FORMAT - queries if a given pixelformat is supported.
+ It also returns various flags decsirbing the capabilities
+ of the driver with teh given mode. for the flags, see
+ file vfcaps.h !
+ the most important flags, every driver must properly report
+ these:
+ 0x1 - supported (with or without conversion)
+ 0x2 - supported without conversion (define 0x1 too!)
+ 0x100 - driver/hardware handles timing (blocking)
+ also SET sw/hw scaling and osd support flags, and flip,
+ and accept_stride if you implement VOCTRL_DRAW_IMAGE (see bellow)
+ NOTE: VOCTRL_QUERY_FORMAT may be called _before_ first config()
+ but is always called between preinit() and uninit()
+ VOCTRL_GET_IMAGE
+ libmpcodecs Direct Rendering interface
+ You need to update mpi (mp_image.h) structure, for example,
+ look at vo_x11, vo_sdl, vo_xv or mga_common.
+ VOCTRL_DRAW_IMAGE
+ replacement for the current draw_slice/draw_frame way of
+ passing video frames. by implementing SET_IMAGE, you'll get
+ image in mp_image struct instead of by calling draw_*.
+ unless you return VO_TRUE for VOCTRL_DRAW_IMAGE call, the
+ old-style draw_* functils will be called!
+ Note: draw_slice is still mandatory, for per-slice rendering!
+ VOCTRL_RESET - reset the video device
+ This is sent on seeking and similar and is useful if you are
+ using a device which prebuffers frames that need to flush them
+ before refilling audio/video buffers.
+ VOCTRL_PAUSE
+ VOCTRL_RESUME
+ VOCTRL_GUISUPPORT
+ return true only if driver supports co-operation with
+ MPlayer's GUI (not yet used by GUI)
+ VOCTRL_SET_EQUALIZER
+ set the video equalizer to the given values
+ two arguments are provided: item and value
+ item is a string, the possible values are (currently):
+ brightness, contrast, saturation, hue
+ VOCTRL_GET_EQUALIZER
+ get the current video equalizer values
+ two arguments are provided: item and value
+ ite