path: root/video/out/vo_opengl_old.c
Commit message (author, date; files changed, lines -/+)

* gl_common: move things used by vo_opengl_old.c only to vo_opengl_old.c (wm4, 2013-01-13; 1 file, -0/+1241)

  Having this in gl_common is confusing.

* mp_image: remove mp_image.bpp (wm4, 2013-01-13; 1 file, -1/+5)

  This field contained the "average" bit depth per pixel. It serves no purpose anymore. Remove it. Only vo_opengl_old still uses this in order to allocate a buffer that is shared between all planes.

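  For illustration only (hypothetical names, not the actual mpv code), this is the kind of allocation pattern an averaged bits-per-pixel value enables: one buffer sized to cover all planes at once.

      #include <stdint.h>
      #include <stdlib.h>

      /* Hypothetical sketch: size one buffer shared by all planes from an
       * "average bpp" value, e.g. 12 for 8-bit 4:2:0 YUV (8 + 2 + 2). */
      static uint8_t *alloc_shared_plane_buffer(int w, int h, int avg_bpp)
      {
          size_t size = (size_t)w * h * avg_bpp / 8;  /* covers every plane */
          return malloc(size);
      }
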
* video: remove img_format compat hacks (wm4, 2013-01-13; 1 file, -23/+26)

  Remove the strange things the old mp_image_setfmt() code did to the image format parameters. This includes setting chroma shift to 31 for gray (Y8) formats and more.

  Y8 + vo_opengl_old didn't actually work for unknown reasons (regression in this branch). Fix this. The difference is that Y8 is now interpreted as gray RGB (LUMINANCE texture) instead of involving YUV (and levels) conversion.

  Get rid of distinguishing RGB and BGR. There wasn't really any good reason for this.

  Remove mp_get_chroma_shift() and IMGFMT_IS_YUVP16*(). mp_imgfmt_desc gives the same information and more.

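  As a rough sketch of what "gray RGB via a LUMINANCE texture" means in legacy OpenGL (this is not the mpv upload path, only the idea): the Y8 plane is uploaded directly, with no YUV matrix or levels conversion involved.

      #include <GL/gl.h>

      /* Sketch: upload an 8-bit gray (Y8) image as a plain LUMINANCE texture. */
      static void upload_y8(GLuint tex, int w, int h, const void *pixels)
      {
          glBindTexture(GL_TEXTURE_2D, tex);
          glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0,
                       GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
          glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      }
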
* video: decouple internal pixel formats from FourCCs (wm4, 2013-01-13; 1 file, -2/+2)

  mplayer's video chain traditionally used FourCCs for pixel formats. For example, it used IMGFMT_YV12 for 4:2:0 YUV, which was defined to the string 'YV12' interpreted as unsigned int. Additionally, it used to encode information into the numeric values of some formats. The RGB formats had their bit depth and endian encoded into the least significant byte. Extended planar formats (420P10 etc.) had chroma shift, endian, and component bit depth encoded. (This has been removed in recent commits.)

  Replace the FourCC mess with a simple enum. Remove all the redundant formats like YV12/I420/IYUV. Replace some image format names by something more intuitive, most importantly IMGFMT_YV12 -> IMGFMT_420P.

  Add img_fourcc.h, which contains the old IDs for code that actually uses FourCCs. Change the way demuxers that output raw video identify the video format: they set either MP_FOURCC_RAWVIDEO or MP_FOURCC_IMGFMT to request the rawvideo decoder, and sh_video->imgfmt specifies the pixel format. Like the previous hack, this is supposed to avoid the need for a complete codecs.cfg entry per format, or other lookup tables. (Note that the RGB raw video FourCCs mostly rely on ffmpeg's mappings for NUT raw video, but this is still considered better than adding a raw video decoder - even if trivial, it would be full of annoying lookup tables.)

  The TV code has not been tested.

  Some corrective changes regarding endian and other image format flags creep in.

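  A hypothetical illustration of the difference (these are not the real mpv definitions): the old scheme packed four characters, and sometimes depth/endian bits, into the format ID itself, while the new scheme is just an opaque enum value.

      #include <stdint.h>

      /* old style: the format ID doubles as a FourCC */
      #define MY_FOURCC(a, b, c, d) \
          ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
           ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))
      #define OLD_IMGFMT_YV12 MY_FOURCC('Y', 'V', '1', '2')

      /* new style: plain enum, no packed metadata */
      enum my_imgfmt {
          MY_IMGFMT_NONE = 0,
          MY_IMGFMT_420P,        /* replaces YV12/I420/IYUV */
          MY_IMGFMT_RGB24,
      };
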
* video: cleanup: replace old mp_image function names (wm4, 2013-01-13; 1 file, -2/+2)

  mp_image_alloc() also changes argument order compared to alloc_mpi(). The format now comes first, then width/height.

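  A call-site sketch of the new argument order; the commented-out old call is reconstructed from the message above and may not match the historical alloc_mpi() signature exactly.

      #include "video/img_format.h"
      #include "video/mp_image.h"

      static struct mp_image *get_frame(int w, int h)
      {
          /* old: struct mp_image *img = alloc_mpi(w, h, IMGFMT_420P); */
          struct mp_image *img = mp_image_alloc(IMGFMT_420P, w, h);
          return img;
      }
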
* video: remove things related to old DR code (wm4, 2013-01-13; 1 file, -37/+36)

  Remove mp_image.width/height. The w/h members are the ones to use. width/height were used internally by vf_get_image(), and sometimes for other purposes.

  Remove some image flags, most of which are now useless or completely unused. This includes VFCAP_ACCEPT_STRIDE: the vf_expand insertion in vf.c does nothing.

  Remove some other unused mp_image fields.

  Some rather messy changes in vo_opengl[_old] to get rid of legacy mp_image flags and fields. This is left from when vo_gl supported DR.

* mp_image: require using mp_image_set_size() for setting w/h (wm4, 2013-01-13; 1 file, -5/+2)

  Setting the size of a mp_image must be done with mp_image_set_size() now. Do this to guarantee that the redundant fields (like chroma_width) are updated consistently. Replacing the redundant fields by function calls would probably be better, but there are too many uses of them, and it would be a bit less convenient.

  Most code actually called mp_image_setfmt(), which did this as well. This commit just makes things a bit more explicit.

  Warning: the video filter chain still sets up mp_images manually, and vf_get_image() is not updated.

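  A simplified, hypothetical version of what such a setter has to keep in sync; the real mp_image_set_size() derives the chroma shifts from the format descriptor and updates more than shown here.

      struct my_image {
          int w, h;
          int chroma_x_shift, chroma_y_shift;
          int chroma_width, chroma_height;   /* redundant, derived from w/h */
      };

      static void my_image_set_size(struct my_image *img, int w, int h)
      {
          img->w = w;
          img->h = h;
          /* keep the derived fields consistent with the new size */
          img->chroma_width  = w >> img->chroma_x_shift;
          img->chroma_height = h >> img->chroma_y_shift;
      }
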
* video: remove slice based filtering and video output (wm4, 2013-01-13; 1 file, -27/+1)

  Slices allowed filtering or drawing video in horizontal bands or blocks. This allowed working on the video in smaller units. In theory, this could bring a performance win by lowering cache pressure, as you didn't have to keep the whole video frame in cache while filtering, only the slice.

  In practice, the slice code path was barely used, for the following reasons:
  - Multithreaded decoding with ffmpeg didn't use slices. The ffmpeg slice callback was disabled, because it can be called from another thread, and the mplayer video chain is not thread-safe.
  - There was nothing that would turn "full" images into appropriate slices, so slices were rarely used.
  - Most filters didn't actually support slices.

  On the other hand, supporting slices led to code duplication and more complex code in general. I made some experiments and didn't find any actual measurable performance improvements when using slices. Even ffmpeg removed slice-based filtering from libavfilter in favor of simpler code.

  The most broken thing about the slice code path is that slices can't be queued, like it is done for images in vo.c.

* video/out: replace VOCTRL_QUERY_FORMAT with vo_driver.query_format (wm4, 2013-01-13; 1 file, -2/+1)

* video/out: make draw_image mandatory, remove VOCTRL_DRAW_IMAGE (wm4, 2013-01-13; 1 file, -5/+2)

  Remove VOCTRL_DRAW_IMAGE and always set vo_driver.draw_image in VOs. Make draw_image mandatory: change some VOs (like vo_x11) to support it, and remove the image-to-slices fallback in vf_vo.

  Remove vo_driver.is_new. This member indicated whether draw_image is supported unconditionally, which is now always the case.

  draw_image_pts is a hack until the video filter chain is changed to include the PTS as a field in mp_image. Then vo_vdpau and vo_lavc will be changed to use draw_image.

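  Taken together with the previous entry, a VO now supplies callbacks instead of answering VOCTRL_QUERY_FORMAT/VOCTRL_DRAW_IMAGE. The following is only a sketch of that shape; the struct layout, return convention, and omitted callbacks are assumptions, not the literal mpv definitions.

      static int query_format(struct vo *vo, uint32_t format)
      {
          /* report whether this VO can display the given image format */
          return format == IMGFMT_420P;
      }

      static void draw_image(struct vo *vo, struct mp_image *mpi)
      {
          /* copy or upload the frame; it is shown on the next flip_page */
      }

      const struct vo_driver video_out_example = {
          .query_format = query_format,
          .draw_image   = draw_image,
          /* .preinit, .config, .flip_page, .uninit, ... omitted */
      };
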
* vo_opengl_old: reject 9-15 bit formats if textures have less than 16 bit (wm4, 2012-12-28; 1 file, -1/+19)

  For 9-15 bit material, cutting off the lower bits leads to significant quality reduction, because these formats leave the most significant bits unused (e.g. 10 bit padded to 16 bit, transferred as 8 bit -> only 2 bits left). 16 bit formats still can be played like this, as cutting the lower bits merely reduces quality in this case.

  This problem was encountered with the following GPU/driver combination:
    OpenGL vendor string: Intel Open Source Technology Center
    OpenGL renderer string: Mesa DRI Intel(R) 915GM x86/MMX/SSE2
    OpenGL version string: 1.4 Mesa 9.0.1

  It appears 16 bit support is rather common on GPUs, so testing the actual texture depth wasn't needed until now. (There are some other Mesa GPU/driver combinations which support 16 bit only when using RG textures instead of LUMINANCE_ALPHA. This is due to OpenGL driver bugs.)

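  One way to probe the depth the driver actually stores, using a legacy-GL proxy texture; this is a sketch of the kind of check the commit describes, not the literal mpv code. If the reported size is below 16, 9-15 bit formats would be rejected, since too few significant bits survive the transfer.

      #include <GL/gl.h>

      /* Ask the driver how many bits a LUMINANCE16 texture really keeps. */
      static int luminance_texture_bits(void)
      {
          GLint bits = 0;
          glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_LUMINANCE16, 64, 64, 0,
                       GL_LUMINANCE, GL_UNSIGNED_SHORT, NULL);
          glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                                   GL_TEXTURE_LUMINANCE_SIZE, &bits);
          return bits;   /* e.g. 8 if the driver silently downgrades */
      }
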
* options, vo_x11: remove -zoom option, make it default (wm4, 2012-11-16; 1 file, -1/+1)

  The -zoom option enabled scaling with vo_x11. Remove the -zoom option, and make its behavior default. Since vo_x11 has to use libswscale for colorspace conversion anyway, which doesn't do actual extra scaling when vo_x11 is run in windowed mode, there should be no speed difference with this change.

  The code removed from vf_scale attempted to scale the video to d_width/d_height, which matters for anamorphic video and the --xy option only. vo_x11 can handle these natively. The only case for which the removed vf_scale code could matter is encoding with vo_lavc, but since that didn't set VOFLAG_SWSCALE, nothing actually changes.

* Rename directories, move files (step 2 of 2) (wm4, 2012-11-12; 1 file, -6/+6)

  Finish renaming directories and moving files. Adjust all include statements to make the previous commit compile.

  The two commits are separate, because git is bad at tracking renames and content changes at the same time.

  Also take this as an opportunity to remove the separation between "common" and "mplayer" sources in the Makefile. ("common" used to be shared between mplayer and mencoder.)

* Rename directories, move files (step 1 of 2) (does not compile) (wm4, 2012-11-12; 1 file, -0/+1166)

  This drops the silly lib prefixes, and attempts to organize the tree in a more logical way. Make the top-level directory less cluttered as well.

  Renames the following directories:
    libaf -> audio/filter
    libao2 -> audio/out
    libvo -> video/out
    libmpdemux -> demux

  Split libmpcodecs:
    vf* -> video/filter
    vd*, dec_video.* -> video/decode
    mp_image*, img_format*, ... -> video/
    ad*, dec_audio.* -> audio/decode

  libaf/format.* is moved to audio/ - this is similar to how mp_image.* is located in video/.

  Move most top-level .c/.h files to core. (talloc.c/.h is left on top-level, because it's external.) Park some of the more annoying files in compat/. Some of these are relics from the time mplayer used ffmpeg internals.

  sub/ is not split, because it's too much of a mess (subtitle code is mixed with OSD display and rendering). Maybe the organization of core is not ideal: it mixes playback core (like mplayer.c) and utility helpers (like bstr.c/h). Should the need arise, the playback core will be moved somewhere else, while core contains all helper and common code.