path: root/video
Commit message | Author | Date | Files | Lines
* video: approximate AVI timestamps via DTS handling | wm4 | 2016-02-11 | 4 | -54/+12
  Until now (and in MPlayer traditionally), AVI timestamps were handled with a timestamp FIFO. AVI timestamps are essentially just strictly increasing frame numbers and are not reordered like normal timestamps.

  Limiting the FIFO is required because frames can be dropped. To make it worse, frame dropping can't be distinguished from the decoder not returning output because it is increasing the buffering required for B-frames. ("Measuring" the buffering at playback start seems like an interesting idea, but won't work as the buffering could be increased mid-playback.) Another problem is skipped frames (packets with data, but which do not contain a video frame).

  Besides dropped and skipped frames, there is the problem that we can't always know the delay. External decoders like MMAL are not going to tell us. (And later perhaps others, like direct VideoToolbox usage.)

  In general, this works poorly enough that I prefer the solution of passing through AVI timestamps as DTS. This is slightly incorrect, because most decoders treat DTS as MPEG-style timestamps, which already include a B-frame delay, and thus will be shifted by a few frames. This means there will be a problem with A/V sync in some situations.

  Note that the FFmpeg AVI demuxer shifts timestamps by an additional amount (which increases after the first seek!?!?), which makes the situation worse. It works well with VfW-muxed Matroska files, though. On RPI, the first X timestamps are broken until the MMAL decoder "locks on".
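
  A minimal sketch of the pass-through idea described above, with hypothetical structure and variable names (mpv's real demuxer/decoder plumbing differs):

      // Hypothetical sketch: instead of queuing AVI timestamps in a FIFO and
      // popping one per decoded frame, attach the frame-number based timestamp
      // to the packet as DTS so it travels through the decoder like any DTS.
      #include <stdint.h>

      #define NOPTS (-1e300)            // stand-in for mpv's MP_NOPTS_VALUE

      struct packet_sketch {
          double pts, dts;
      };

      static void tag_avi_packet(struct packet_sketch *pkt,
                                 int64_t frame_number, double fps)
      {
          // AVI "timestamps" are just strictly increasing frame numbers.
          pkt->dts = frame_number / fps;
          pkt->pts = NOPTS;             // ordering is reconstructed from DTS
      }
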
* Enable building the opengl-cb video renderer on Android | Jan Ekström | 2016-02-10 | 2 | -0/+26
  * Add Android-specific OpenGL ES feature and checks
  * Add missing GL_* symbols for Android (list gathered by Ilya Zhuravlev <whatever@xyz.is>)

* player: fix crash if no video decoder can be initialized | wm4 | 2016-02-10 | 1 | -0/+2
  Caused by the recent refactoring for complex filters.
* vo_opengl_cb: also do not block when drawing nothing | wm4 | 2016-02-09 | 1 | -1/+1
  The ctx->redrawing field signals whether flip_page() should block. Do not block if a black frame (i.e. nothing) is to be rendered. Also, frame==NULL can never happen.
* image_writer: take care of prediction_method deprecation | wm4 | 2016-02-09 | 1 | -1/+3
  The field was recently deprecated, and you're supposed to set the private codec option instead. Not sure if this really works as intended.

* vo_opengl: vdpau: call glVDPAUFiniNV only if initialized | wm4 | 2016-02-08 | 1 | -6/+8
  This is "more correct". See #2798.
* wayland: set fs mode on every configure | Alexander Preisinger | 2016-02-07 | 1 | -5/+6
  Check and set the fullscreen mode on every surface configure event. This prevents GNOME from resizing mpv's fullscreen window to a smaller size.
* video/decode/dxva2.c: GUID_NULL conflicts | kwkam | 2016-02-06 | 1 | -1/+1
  GUID_NULL is defined in <ks.h>; gcc 6.0 refuses to link the executable because of that.

  Signed-off-by: wm4 <wm4@nowhere>
* vd_lavc: fix use after free in some hwdecs | Kevin Mitchell | 2016-02-06 | 1 | -8/+3
  fd339e3f53996efd2dae9525990da433d1e1bf89 introduced a regression that caused a segfault while uninitializing the dxva2 decoder (and possibly vdpau too). The problem was that it freed the avctx earlier, before calling the backend-specific uninit which referenced it.

  Revert some of the changes of that commit, and avoid calling flush by checking whether the codec is open instead. (Based on a PR by Kevin Mitchell.)

  Signed-off-by: wm4 <wm4@nowhere>
* build: make libavfilter mandatory | wm4 | 2016-02-05 | 1 | -2/+0
  The complex filter support that will be added makes much more complex use of libavfilter, and I'm not going to bother with adding hacks to keep libavfilter optional.

* vo_rpi: add geometry handling | Uros Vampl | 2016-02-05 | 1 | -10/+59
  This makes it possible to set video size and position using the --geometry and/or --autofit options. It's also possible to switch between fullscreen/non-fullscreen playback during runtime.

* vd_lavc: avoid calling flush on an unopened AVCodecContext | wm4 | 2016-02-05 | 1 | -6/+9
  It can be "dangerous". In particular, the decoder might have failed to initialize, and is now in a broken state. avcodec_flush_buffers() is not expected to be called in this state, and could trigger undefined behavior.
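
  A minimal illustration of the guard described here, assuming a plain libavcodec context rather than mpv's wrapper structs:

      #include <libavcodec/avcodec.h>

      static void flush_if_open(AVCodecContext *avctx)
      {
          // avcodec_flush_buffers() is only valid on a codec that was
          // successfully opened; skip it for a half-initialized context.
          if (avctx && avcodec_is_open(avctx))
              avcodec_flush_buffers(avctx);
      }
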
* video: remove AVI timestamps for dropped frames | wm4 | 2016-02-04 | 1 | -1/+5
  Might possibly improve A/V sync, although this is at best approximate. (AVI is just fucked.)
* vd_lavc: remove redundant best_csp field | wm4 | 2016-02-03 | 2 | -15/+3
  And some other simplifications.

* vd_lavc: force microsecond timestamps on RPI | wm4 | 2016-02-03 | 2 | -3/+9
  Avoids "problems". In particular, it makes MMAL output a NOPTS timestamp if the input timestamp was NOPTS. Don't do it for other decoders.

  Ideally, we will at some point in the future switch to integer fractions for timestamps at least up until the filter layer. But this would be a larger change, and for now I'd prefer keeping the not-rounded demuxer timestamps (if we have them).

* w32_common: switch to UniformResourceLocatorW | wm4 | 2016-02-02 | 1 | -3/+5
  This is the "unicode" version of it. It appears Firefox uses it now? I'm not sure if we still need to support the old variant, but hopefully not.

  Fixes #2782.
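
  For illustration, registering the wide-character URL drop format on Windows amounts to something like this (the surrounding IDataObject/drag&drop plumbing is omitted):

      #include <windows.h>

      // "UniformResourceLocatorW" is the wide-character variant of the old
      // "UniformResourceLocator" clipboard/drop format (CFSTR_INETURLW in the
      // Windows SDK headers).
      static UINT register_url_format(void)
      {
          return RegisterClipboardFormatW(L"UniformResourceLocatorW");
      }
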
* audio/video: merge decoder return values | wm4 | 2016-02-01 | 2 | -23/+18
  Will be helpful for the coming filter support. I planned on merging audio/video decoding, but this will have to wait a bit longer, so only remove the duplicate status codes.

* vo_opengl: hwdec: use IDs for API, and log which backend is used | wm4 | 2016-02-01 | 7 | -17/+30
  Since there can be multiple backends for a single API (vaapi can use GLX or EGL), not logging the exact backend name is annoying. So add it. At the same time, there is no need to duplicate the name as used by the --hwdec options, so replace it with using the numeric hwdec API ID.

* x11: stop waiting for MapNotify when WM_STATE changes | Tracerneo | 2016-01-31 | 1 | -1/+3
  Signed-off-by: wm4 <wm4@nowhere>

* vd_lavc: release surfaces before destroying decoder | wm4 | 2016-01-30 | 1 | -4/+3
  Commit b53cb8de added a delay queue for decoded frames. This is supposed to be used with copy-back decoders like dxva2-copy and vaapi-copy.

  Surfaces returned by them can't be referenced after uninitializing the decoders, so they have to be released before destroying the decoder. Move the flush_all() call above decoder uninit accordingly.

  Also move the destruction of the AVFrame used for decoding (just for being defensive - normally it doesn't hold any reference).

* vd_lavc: allow switching between hw/sw decoding any time | wm4 | 2016-01-29 | 2 | -21/+34
  We just need to provide an entrypoint for it, and move the main init code to a separate function. This gets rid of the messy video chain full reinit in command.c, which completely destroyed and recreated the video state for the purpose of mid-stream hw/sw switching.
* vd_lavc: simplify an aspect of hwdec fallback | wm4 | 2016-01-29 | 2 | -10/+5
  Don't give the "software_fallback_decoder" field special meaning. Always set it, and rename it to "decoder". Whether hw decoding is used is determined by the "hwdec" field already.

* video: fix broken-PTS fallback determination | wm4 | 2016-01-29 | 1 | -11/+6
  This code tries to deal with broken PTS timestamps, but since commit 271cabe6 it didn't always overwrite the previous timestamp as it should have. This mattered only if there were broken timestamps in the video stream.

  Also remove the pointless prev_codec_pts variables, since the decoder doesn't overwrite these fields anymore.
* rpi: add VC-1 support | wm4 | 2016-01-28 | 1 | -0/+1
  Oops, this was forgotten earlier. Enables automatic use of the VC-1 hardware decoder on RPI.

* mp_image: copy dts as part of mp_image attributes too | wm4 | 2016-01-28 | 1 | -0/+1
  Fixes DTS handling with certain container formats broken in commit b53cb8de (when using vaapi-copy or dxva2-copy).

* rpi: add mpeg-4 decoding support | wm4 | 2016-01-27 | 1 | -0/+1

* vo_opengl: do chroma merging in integer conversion stage | wm4 | 2016-01-27 | 1 | -3/+13
  This is a huge win when playing yuv420p10 on ANGLE - the 2 conversion stages for planes 1 and 2 and the chroma merging stage are all merged into one.

* vo_opengl: add precision qualifier to usampler2D on ANGLE | wm4 | 2016-01-27 | 1 | -1/+1
  GLES requires this. Some more common sampler types have default precisions, but not usampler2D. Newer ANGLE builds verify this more strictly than older builds, so this wasn't caught before.

  Fixes #2761.
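
  mpv assembles its GLSL as C strings; the required change boils down to qualifying the sampler declaration, e.g. (a sketch, not the literal generated code):

      // GLSL ES 3.00 defines no default precision for usampler2D, so the
      // declaration needs an explicit precision qualifier:
      static const char *usampler_decl =
          "uniform highp usampler2D texture0;\n";
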
* vo_opengl: replace tscale-interpolates-only with interpolation-threshold | wm4 | 2016-01-27 | 2 | -9/+12
  The previous approach was too naive, and could e.g. ruin playback if scheduling switches between 1 and 2 vsyncs per frame.
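
  One plausible shape of such a threshold check, with illustrative names and semantics (not mpv's actual scheduler code):

      #include <math.h>
      #include <stdbool.h>

      // Only interpolate when the display/video rate ratio is clearly not a
      // whole number; near-integer ratios are better served by plain vsync
      // resampling, and flipping between the two mid-playback is what the old
      // tscale-interpolates-only switch could not express.
      static bool want_interpolation(double display_fps, double video_fps,
                                     double threshold /* e.g. 0.0001 */)
      {
          double ratio = display_fps / video_fps;
          double deviation = fabs(ratio - floor(ratio + 0.5));
          return deviation > threshold;
      }
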
* vo_opengl: 10 bit support with ANGLE | wm4 | 2016-01-26 | 3 | -10/+120
  GLES does not support high bit depth fixed point textures for unknown reasons, so direct 10 bit input is not possible. But we can still use integer textures, which are supported by GLES 3.0. These store integer data just like the standard fixed point textures, except they are not normalized on sampling. They also don't support bilinear filtering, and require a special sampler ("usampler2D").

  While these texture formats enable us to shuffle the data to the GPU, they're rather impractical with the requirements mentioned above and our current architecture. One problem is that most code assumes it can always use bilinear scaling (even if bilinear is never used when using appropriate scale/cscale options). Another is that we don't have any concept of running a function on a texture in a uniform way.

  So for now, run a simple conversion step through a FBO. The FBO will use the rgba16f format normally, which gives enough bits for 10 bit, and will at least gracefully degrade with higher depth input. This is bound to be much slower than a more "direct" method, but at least it works and is simple to implement.

  The odd change of function call order in init_video() is to properly disable "dumb mode" (no FBO use) if these texture formats are in use.
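
  A sketch of the conversion pass this describes, as the sort of GLSL ES fragment shader mpv would generate (the scale factor and names are illustrative):

      static const char *convert_frag =
          "#version 300 es\n"
          "precision highp float;\n"
          "uniform highp usampler2D tex;\n"   // integer texture, not normalized
          "in vec2 texcoord;\n"
          "out vec4 color;\n"
          "void main() {\n"
          "    uvec4 v = texture(tex, texcoord);\n"
          // For 10-bit content the samples are 0..1023; real code derives the
          // divisor from the effective bit depth.
          "    color = vec4(v) / 1023.0;\n"
          "}\n";
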
* vo_opengl: actually reset use_normalized_range field | wm4 | 2016-01-26 | 1 | -3/+3
  This was never reset - absolutely can't be right. If the renderer somehow switches back to another codepath, it certainly has to be reset. Maybe this was hard to hit, as the normalization is going to be idempotent in simpler cases (like rendering RGBA input).

  Also get rid of the "merged" variable.
* vo_opengl: default to rgba16f FBOs on ANGLE | wm4 | 2016-01-26 | 1 | -2/+5
  Although it has only 1 bit more precision than rgb10_a2, it was reported to improve the visual quality.
* vaapi: lower number of allocated surfaces again | wm4 | 2016-01-26 | 1 | -1/+1
  Commit b53cb8de increased this by the number of additionally delayed surfaces. But since this is only enabled in copy-back mode (which is what process_image is about), the other additional surfaces accounted for the direct rendering case can be ignored.

* vo_opengl: add tscale-interpolates-only sub-option | wm4 | 2016-01-25 | 2 | -1/+8

* vo_opengl: default scaler-resizes-only sub-option to yes | wm4 | 2016-01-25 | 1 | -0/+2
  Often requested. The main argument, that prominent scalers like sharpen change the image even if no scaling happens, disappeared anyway. ("sharpen", unsharp masking, is neither prominent nor a scaler anymore. This is an artifact from MPlayer, which fuses unsharp masking with bilinear scaling in order to make it single-pass, or so.)

* vd_lavc: delay images before reading them back | wm4 | 2016-01-25 | 4 | -9/+52
  Facilitates hardware pipelining in particular with nvidia/dxva.

* video: cleanup pts/dts passing between decoder components | wm4 | 2016-01-25 | 4 | -11/+18
  Instead of using semi-public codec_pts/codec_dts fields in struct dec_video, pass them via mp_image fields.
* vo_opengl: rename custom shader entrypoint from sample to sample_pixel | wm4 | 2016-01-25 | 1 | -3/+19
  "sample" is a reserved identifier, at least in GLES. Suggestions for a better name than "sample_pixel" are still welcome.

  Fixes #2733.
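
  For illustration, a user shader body would then look roughly like this; the hook signature shown is an assumption based on the old "sample" convention, so check the documentation of the mpv version in use:

      // "sample" is a reserved word in GLES, hence the renamed entrypoint.
      static const char *user_shader =
          "vec4 sample_pixel(sampler2D tex, vec2 pos, vec2 tex_size) {\n"
          "    return texture(tex, pos);  // identity pass-through\n"
          "}\n";
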
* vo_opengl: vdpau: better handling of preemption recovery | wm4 | 2016-01-25 | 1 | -1/+1
  If recovery from preemption is done successfully, continue normally. Only fail if it's preempted during init.
* vdpau: force driver to report preemption early | wm4 | 2016-01-25 | 3 | -3/+21
  Another fix for the crazy and insane nvidia preemption behavior.

  This time, the situation is that we are using vo_opengl with vdpau interop, and that vdpau got preempted in the background while mpv was sitting idly. This can be e.g. reproduced by using:

    --force-window=immediate --idle --hwdec=vdpau

  and switching VTs. Then after switching back, load a video file.

  This will not let mp_vdpau_handle_preemption() perform preemption recovery, simply because it will do so only once vdp_decoder_create() has been called. There are some other API calls which trigger preemption, but many don't.

  Due to the way the libavcodec API works, vdp_decoder_create() is way too late. It does so when get_format returns. It notices creating the decoder fails, and continues calling get_format without the vdpau format. We could perhaps force it to reinit again (by adding a call to vdpau.c, that checks for preemption, and sets hwdec_request_reinit), but this seems too much of a mess.

  Solve it by calling an API function in mp_vdpau_handle_preemption() that empirically does trigger preemption: output_surface_put_bits_native(). This call is useless, and in fact should be doing nothing (empty update VdpRect). There's the slight chance that in theory it will slow down operation, but in practice it's bound to be harmless. It's the likely cheapest and simplest API call I've found that can trigger the fallback this way. (The driver is closed source, so it was up to trial & error.)

  Also, when initializing decoding, allow initial preemption recovery, which is needed to pass the test mentioned above.
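
  Roughly, the probe boils down to a do-nothing put_bits call whose status is checked for preemption; this sketch uses the public VDPAU types but hypothetical surrounding names:

      #include <stdint.h>
      #include <vdpau/vdpau.h>

      // An empty-rectangle upload to an existing output surface does no visible
      // work, but is one of the calls that makes the driver report preemption.
      static int display_preempted(VdpOutputSurfacePutBitsNative *put_bits,
                                   VdpOutputSurface surface)
      {
          const uint8_t dummy[4] = {0};
          const void *const data[1] = {dummy};
          uint32_t pitches[1] = {4};
          VdpRect rect = {0, 0, 0, 0};
          return put_bits(surface, data, pitches, &rect)
                 == VDP_STATUS_DISPLAY_PREEMPTED;
      }
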
* video: remove some useless old RGB formats | wm4 | 2016-01-25 | 6 | -62/+1
  Some VOs had support for these - remove them. Typically, these formats will have only some use in cases where using RGB software conversion with libswscale is faster than letting the VO/GPU do it (i.e. almost never).

  For the sake of testing this case, keep IMGFMT_RGB565. This is the least messy format, because it has no padding/alpha bits with unknown semantics.

  Note that decoding to these formats still works. We'll let libswscale repack the data to whatever the VO in use can take.

* af_lavfi, vf_lavfi: fix compilation on Libav | wm4 | 2016-01-22 | 1 | -0/+1
  It has no avfilter_graph_send_command().

* vo_opengl: vaapi: don't expect EGL exts. to be in common ext. string | wm4 | 2016-01-22 | 1 | -2/+6

* command: add vf-command command | wm4 | 2016-01-22 | 3 | -0/+22

* player: fix some oversights in video refactoring | wm4 | 2016-01-22 | 1 | -5/+0
  vo_chain_uninit() isn't supposed to care much about the decoder (although decoders and outputs still go strictly together, so there is not much of an actual difference now). Also unset track.d_video correctly. Remove a stale declaration from dec_video.h as well.

* vo_opengl: vaapi: reorganize platform entrypoints as table | wm4 | 2016-01-21 | 1 | -15/+20
* x11: get *current* XRandR screen configuration | Nils Schneider | 2016-01-20 | 1 | -1/+1
  Only request the current screen configuration instead of polling for new screens, too. We're not interested in detecting any new screens as we're merely enumerating what is currently connected and configured.

  On some hardware (like mine) calling XRRGetScreenResources will stall X11 for about 10 to 20 seconds. This has annoyed me for a few months now and almost made me switch to VLC ;)

  Signed-off-by: wm4 <wm4@nowhere>
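
  The change is essentially a single call: query the cached configuration instead of forcing a probe, e.g.:

      #include <X11/Xlib.h>
      #include <X11/extensions/Xrandr.h>

      // XRRGetScreenResources() may re-probe outputs and can stall the X server
      // for seconds on some hardware; the *Current variant only returns what is
      // already known, which is all that is needed for enumeration here.
      static XRRScreenResources *get_screen_resources(Display *dpy, Window root)
      {
          return XRRGetScreenResourcesCurrent(dpy, root);
          /* previously: XRRGetScreenResources(dpy, root); */
      }
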
* vo_opengl: add KMS/DRM VAAPI hardware decoding interop | wm4 | 2016-01-20 | 2 | -0/+21
  Just requires glueing it together with Bloat Super Glue (tm).

* Change 3 more files to LGPL | wm4 | 2016-01-20 | 2 | -14/+14

* vaapi: fix compilation on older FFmpeg/Libav | wm4 | 2016-01-20 | 1 | -1/+1
  They don't define FF_PROFILE_VP9_0.

  Fixes #2737.

* filter_kernels.h: adjust the license | wm4 | 2016-01-19 | 1 | -11/+5
  Make it consistent with filter_kernels.c.

  See #2688.

* Change GPL/LGPL dual-licensed files to LGPL | wm4 | 2016-01-19 | 33 | -392/+231
  Do this to make the license situation less confusing. This change should be of no consequence, since LGPL is compatible with GPL anyway, and making it LGPL-only does not restrict the use with GPL code. Additionally, the wording implies that this is allowed, and that we can just remove the GPL part.

* Relicense some non-MPlayer source files to LGPL 2.1 or later | wm4 | 2016-01-19 | 19 | -133/+133
  This covers source files which were added in mplayer2 and mpv times only, and where all code is covered by LGPL relicensing agreements. There are probably more files to which this applies, but I'm being conservative here.

  A file named ao_sdl.c exists in MPlayer too, but the mpv one is a complete rewrite, and was added some time after the original ao_sdl.c was removed. The same applies to vo_sdl.c, for which the SDL2 API is radically different in addition (MPlayer supports SDL 1.2 only).

  common.c contains only code written by me. But common.h is a strange case: although it originally was named mp_common.h and exists in MPlayer too, by now it contains only definitions written by uau and me. The exceptions are the CONTROL_ defines - thus not changing the license of common.h yet.

  codec_tags.c once contained large tables generated from MPlayer's codecs.conf, but all of these tables were removed.

  From demux_playlist.c I'm removing a code fragment from someone who was not asked; this probably could be done later (see commit 15dccc37).

  misc.c is a bit complicated to reason about (it was split off mplayer.c and thus contains random functions out of this file), but actually all functions have been added post-MPlayer. Except get_relative_time(), which was written by uau, but looks similar to 3 different versions of something similar in each of the Unix/win32/OSX timer source files. I'm not sure what that means in regards to copyright, so I've just moved it into another still-GPL source file for now.

  screenshot.c once had some minor parts of MPlayer's vf_screenshot.c, but they're all gone.

* vo_drm: fix CRTC usage | rr- | 2016-01-18 | 1 | -1/+1

* vd_lavc: feed A53_CC side data packets into the demuxer for eia_608 decoding | Aman Gupta | 2016-01-18 | 1 | -0/+11

* image_writer: fix writing flipped images as jpg | wm4 | 2016-01-17 | 1 | -1/+1
  next_scanline is usually an unsigned int.

  Fixes #2635 (again).

* video: refactor: disentangle decoding/filtering some more | wm4 | 2016-01-16 | 2 | -17/+112
  This moves some code related to decoding from video.c to dec_video.c, and also removes some accesses to dec_video.c from the filtering code.

  dec_video.ch is starting to make sense, and simply returns video frames from a demuxer stream. The API exposed is also somewhat intended to be easily changeable to move decoding to a separate thread, if we ever want this (due to libavcodec already being threaded, I don't see much of a reason, but it might still be helpful).
* cocoa: get fps only from displaylink | Stefano Pigozzi | 2016-01-14 | 1 | -8/+9
  In my tests, CGDisplayModeGetRefreshRate returns 24.0 even though the nominal rate is set to 24000/1001. This is obviously not good for video.
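
  Reading the rate from the display link keeps the exact fraction; with the CoreVideo C API this is roughly (error handling trimmed):

      #include <CoreVideo/CoreVideo.h>

      // The nominal refresh period comes back as a rational CVTime (e.g.
      // 1001/24000 s), so the fps is timeScale/timeValue rather than the
      // rounded value CGDisplayModeGetRefreshRate() may report.
      static double displaylink_fps(CVDisplayLinkRef link)
      {
          CVTime t = CVDisplayLinkGetNominalOutputVideoRefreshPeriod(link);
          if ((t.flags & kCVTimeIsIndefinite) || t.timeValue == 0)
              return 0;  // rate unknown
          return (double)t.timeScale / t.timeValue;
      }
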
* cocoa: add an observer for screenmode change | Stefano Pigozzi | 2016-01-14 | 1 | -0/+26
* video: fix interactively changing aspect ratio | wm4 | 2016-01-14 | 2 | -0/+6
  The aspect ratio calculations are cached (mainly so that aspect ratio related messages are not logged on every frame). The cache is not cleared anymore when video filters are reconfigured, but changing the video-aspect-ratio property relied on it. Make it explicit.

  Fixes #2714.

* video: decouple filtering/decoding slightly more | wm4 | 2016-01-14 | 2 | -6/+0
  Lots of noise to remove the vfilter/vo fields from dec_video.

  From now on, video filtering and output will still be done together, summarized under struct vo_chain. There is the question where exactly the vf_chain should go in such a decoupled architecture. The end goal is being able to place a "complex" filter between video decoders and output (which will culminate in natural integration of A->V filters for libavfilter audio visualizations).

  The vf_chain is still useful for "final" processing, such as format conversions and deinterlacing. Also, there's only 1 VO and 1 --vf option. So having 1 vf_chain for a VO seems ideal, since otherwise there would be no natural way to handle all these existing options.