path: root/video
Commit message (Author, Date, Files changed, Lines removed/added)

* wayland: update remaining legacy VOCTRL usage to options (Philip Langdale, 2019-12-02, 2 files, -31/+27)
  The remaining legacy VOCTRLs are for the fullscreen and border properties. For fullscreen, this is largely just replacing the private state field with the vo option, but there are small semantic differences that we need to be careful of. For the border setting it's trivial, as we don't have external mechanisms for changing the state, but I also can't test it, as I'm not using a compositor that supports it.

* wayland: update Maximize and Minimize handling to use new options (Philip Langdale, 2019-12-01, 3 files, -26/+43)
  I wanted to get this done quickly as I introduced the new VOCTRL behaviour for minimize and maximize and it was immediately made legacy, so best to purge it before anyone gets confused. I did not sort out fullscreen as that's more involved and not something I've educated myself about yet. But I did replace the VOCTRL_FULLSCREEN usage with the new option change mechanism as that seemed simple enough.

* vf_gpu: render subtitles (wm4, 2019-11-30, 3 files, -9/+22)
  Pretty annoying affair. The vo_gpu code could of course not trigger rendering from filters yet, so it needed to be extended. Also, this uses some icky stuff made for vf_sub (and this was the reason I marked vf_sub as deprecated), so everything is terrible.

* vo_gpu: opengl: add hack for ancient Mesa/GLX (wm4, 2019-11-30, 1 file, -23/+47)
  glx.h recursively includes gl.h, and there is no way to prevent this. Old Mesa defines some GL symbols, but not all which mpv needs. In particular, one user who was too lazy to update his ancient Ubuntu and preferred to bother us with obscure bug reports had Mesa headers which did not define GL 3.2, so GLsync was not defined.
  All in all, I still think providing the GL API definitions ourselves was a good idea; just GLX should have been isolated better. But isolating GLX now is too much effort. Not sure why I'm bothering with this at all.
  Fixes: #7201 (unconfirmed)

* vf_gpu: add video filter using vo_gpu's renderer (wm4, 2019-11-29, 1 file, -0/+364)
  Probably pretty useless in this form (see: the wall of warnings), but someone wanted this. I think this should be useful to perform some automated tests, maybe.
  Fixes: #7194

* vo_gpu: opengl: do not free "GL" sub-allocations (wm4, 2019-11-29, 1 file, -1/+1)
  This function always expects the GL struct pointer to be a talloc allocation. So far so bad. But the terrible thing is that _lots_ of code in mpv didn't quite get this (including the code which introduced the way it is used this way). For example, in context_glx.c you see this:

      struct priv {
          GL gl;
          ...

  GL is not a talloc allocation, but since it's at the start of a talloc allocation, it works anyway. So far so bad. But the really terrible thing is that mpgl_load_functions2() calls talloc_free_children() on the GL pointer, which means that all of priv's sub-allocations get freed as well. This would be unintentional and could create dangling pointers. And this happens at about a dozen callers. I'm amazed it didn't break anywhere yet.
  Removing this anti-pattern by making GL "implicitly" a talloc allocation would be too much effort at this point. So just manually free the only allocation that the function attached to GL.

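A minimal, self-contained illustration of the hazard described above, written against the system talloc library (mpv ships its own TA-based talloc wrapper with the same calls); the GL typedef and the priv layout are stand-ins, not mpv's real definitions. Build with -ltalloc.

      #include <talloc.h>
      #include <stdio.h>

      typedef struct GL { int dummy; } GL;   /* stand-in for mpv's GL struct */

      struct priv {
          GL gl;                 /* embedded at offset 0, not its own talloc chunk */
          char *other_state;     /* talloc child of priv */
      };

      int main(void)
      {
          struct priv *p = talloc_zero(NULL, struct priv);
          p->other_state = talloc_strdup(p, "some state");

          GL *gl = &p->gl;       /* same address as p, so talloc treats it as p */

          /* What the old code effectively did: this frees every talloc child
           * of priv (other_state now dangles), even though the intent was
           * only to drop GL's own sub-allocations. */
          talloc_free_children(gl);

          printf("p->other_state now dangles\n");
          talloc_free(p);
          return 0;
      }
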
* x11: implement unminimization (wm4, 2019-11-29, 1 file, -1/+5)
  This appears to work with IceWM.

* x11: handle maximize/minimize with new option stuff (wm4, 2019-11-29, 1 file, -43/+31)
  Should restore full functionality. The initial state setting is a bit shoddy (instead of setting the properties before map, we use the WM commands to change it after, so you will see the normal window state for a moment; the WM commands do not work on unmapped windows, so fixing this would require more code).

* command: change window-minimized/window-maximized to options (wm4, 2019-11-29, 1 file, -4/+3)
  Unfortunately, this breaks window state reporting for all VOs which supported it. This can be fixed later (for x11 in the next commit).

* x11: add change notification for --on-all-workspaces (wm4, 2019-11-29, 1 file, -0/+18)
  Not particularly important and nobody asked for this, but demonstrates how such things can be easily done now.

* x11: handle some more options with new option stuff (wm4, 2019-11-29, 1 file, -15/+14)

* x11: use new option stuff to implement fullscreen (wm4, 2019-11-29, 4 files, -14/+24)
  - remove VOCTRL_FULLSCREEN and VOCTRL_GET_FULLSCREEN
  - have your own m_config_cache for the fullscreen option (vo->opts_cache cannot be used because you lose per-option change notifications, and it'd be a mess anyway)
  - use VOCTRL_VO_OPTS_CHANGED to update it (it's used for convenience)
  - when updating it, check for the fullscreen option (wasn't sure how to do it best; currently, it compares the raw option pointers, but this could be changed)
  - do not send VO_EVENT_FULLSCREEN_STATE on FS change
  - instead write the option on FS change (assign in opt. struct + m_config_cache_write_opt)

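A rough, generic sketch of the update pattern the list above describes: a private cache of the option group, per-option change detection by comparing raw option pointers, and writing the option back when the window state changes externally. The mpv helpers named in the commit (m_config_cache, VOCTRL_VO_OPTS_CHANGED, m_config_cache_write_opt) play the roles of the hypothetical cache code below; none of this is mpv's actual implementation.

      #include <stdbool.h>
      #include <stdio.h>

      struct vo_opts { bool fullscreen; bool border; };

      /* Hypothetical stand-in for mpv's m_config_cache: a private copy of the
       * option group plus the last values this consumer has seen. */
      struct opt_cache { struct vo_opts opts, seen; };

      /* Per-option change detection: return a pointer identifying the changed
       * option, or NULL when caught up. */
      static void *next_changed(struct opt_cache *c)
      {
          if (c->opts.fullscreen != c->seen.fullscreen) {
              c->seen.fullscreen = c->opts.fullscreen;
              return &c->opts.fullscreen;
          }
          return NULL;
      }

      /* Would run on the VOCTRL_VO_OPTS_CHANGED-style notification. */
      static void handle_opts_changed(struct opt_cache *c)
      {
          void *opt;
          while ((opt = next_changed(c))) {
              if (opt == &c->opts.fullscreen)   /* compare raw option pointers */
                  printf("apply fullscreen=%d to the window\n", c->opts.fullscreen);
          }
      }

      int main(void)
      {
          struct opt_cache c = {0};
          c.opts.fullscreen = true;   /* e.g. the user ran "set fullscreen yes" */
          handle_opts_changed(&c);

          /* WM toggled the state behind our back: write it back into the option
           * (the m_config_cache_write_opt step) instead of sending an event. */
          c.opts.fullscreen = false;
          c.seen.fullscreen = false;
          return 0;
      }
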
* x11: implement minimize and maximize related VOCTRLs (Philip Langdale, 2019-11-29, 1 file, -0/+40)
  This allows the pseudo client side decorations to be used under x11, which might be desirable when running in border=no mode.

* wayland: implement minimize and maximize related VOCTRLs (Philip Langdale, 2019-11-29, 1 file, -0/+27)
  We primarily care about pseudo-decorations for wayland, where the compositor may not support server-side decorations. So let's implement the minimize and maximize commands and return the maximized window state.

* command: add `window-maximized` and make `window-minimized` settable (Philip Langdale, 2019-11-29, 1 file, -1/+5)
  If we want to implement window pseudo-decorations via OSC, we need a way to tell the vo to minimize and maximize the window. Today, we have minimized as a read-only property, and no property for maximized. Let's make minimized settable and add a maximized property to go with it.
  In turn, that requires us to add VOCTRLs for minimizing or maximizing a window, and an additional WIN_STATE to indicate a maximized window.

* wayland: restore window geometry after un-maximize (Philip Langdale, 2019-11-29, 1 file, -3/+2)
  At least with gnome-shell (I know, I know), the compositor does not provide the old window size when leaving the maximized state. Instead, we get a toplevel_config event with a 0x0 size and no additional states. Today, we already save the window geometry to restore it when leaving the fullscreen state, so we just need a small change for it to kick in for leaving the maximized state. If I read this correctly, we'll still respect the size passed by a compositor that actually provides the old size.

* wayland: make the edge grab zone width user configurable (Philip Langdale, 2019-11-29, 2 files, -5/+8)
  Rather than hard-coding the edge grab zone width, we can make it user configurable. It seems worthwhile to have separate configs for pointer and touch usage as the defaults should be different, and a user might have both input methods in use.

* wayland: add grab zone for resizing window with mouse (Philip Langdale, 2019-11-29, 2 files, -40/+54)
  Today, we support resizing wayland windows when we detect a touch event in a defined grab zone. As part of implementing pseudo-decorations, we should have equivalent functionality for mouse input. And if we detect support for actual decorations we will not activate the grab zone as the decorations will provide this.

* x11_common: don't use vo->opts directly (wm4, 2019-11-27, 2 files, -25/+26)
  Use x11->opts instead of vo->opts. This doesn't matter currently, and x11->opts is actually set to vo->opts. However, there's a chance that either option access changes, or that the way backends integrate with struct vo changes. This is just a preemptive change to make this less of a mess, and it's generally a good idea to reduce accesses to struct vo anyway.

* command: shuffle some crap around (wm4, 2019-11-25, 1 file, -1/+2)
  This is preparation to get rid of the option-to-property bridge (mp_on_set_option). This is a pretty insane thing that redirects accesses to options to properties. It was needed in the ever ongoing transition from something to... something else.
  A good example for the need of this bridge is applying profiles at runtime. This obviously goes through the config parser, but should also make all changes effective, for which traditionally the property layer is used.
  There isn't much left that needs this bridge. This commit changes a bunch of options (which also have a property implementation) to use option change notifications instead. Many of the properties are still left, but perform unrelated functions like OSD formatting.
  This should be mostly compatible. There may be some subtle behavior changes. For example, "hwdec" and "record-file" do not check for changes anymore before applying them, so writing the current value to them suddenly does something, while it was ignored before.
  DVB changes untested, but should work.

* vo_gpu: fix infinite scaler reinit spam (Niklas Haas, 2019-11-23, 1 file, -8/+9)
  Handling the window with this function makes no sense, since windows and kernels are not the same thing and don't share the same option list. The only reason it's done is to make sure the char* points at the static string rather than the dynamically allocated one, which we can do manually in this function. Rewrite a bit for clarity/quality.

* video/out/bitmap_packer: Avoid empty initializer list (Michael Forney, 2019-11-18, 1 file, -1/+1)

* video/out/vo_tct: Use octal escape sequence instead of non-standard \e (Michael Forney, 2019-11-18, 1 file, -9/+9)

* video/out/gpu: Remove stray top-level ';' (Michael Forney, 2019-11-18, 2 files, -2/+2)

* vo_gpu: hwdec_cuda: Reduce message level of errors while probing (Philip Langdale, 2019-11-17, 2 files, -5/+7)
  We should only be printing errors that occur when not probing, to avoid creating the impression that something is wrong - and errors during probing aren't a problem.

* vo_gpu: context_glx: Add X11 native resource (Philip Langdale, 2019-11-16, 1 file, -0/+2)
  Surprisingly, we've managed to get this far without context_glx ever adding the X11 display as a native resource. But with the recent change to attempt to enable vdpau when using EGL, the hwdec now requires the display to be added. So let's add it.

* wayland: use eglGetPlatformDisplay() (Dudemanguy, 2019-11-16, 1 file, -1/+2)
  See aacc194. All the same logic applies to Wayland. In fact, we already require EGL 1.5 for wayland anyway, so it's better to do it right.

* x11: require EGL 1.5 and use eglGetPlatformDisplay() (wm4, 2019-11-16, 1 file, -6/+2)
  eglGetDisplay() is a broken API, since it takes a windowing-specific argument, yet is supposed to work for multiple APIs at the same time. On Linux, it can take both an X11 "Display" and a "wl_display". Obviously there is no way to specify what kind of display the argument is (it's just a void*).
  Mesa has _eglNativePlatformDetectNativeDisplay, which does funny stuff to try to guess the display type, including trying to call mincore() to determine whether the pointer can be accessed at all. I guess this recently accidentally broke (as a bug), but on the other hand, maybe it's time to do this properly.
  The fix is using eglGetPlatformDisplay(). This requires EGL 1.5, plus Mesa needs to support the associated platform extension (EGL_KHR_platform_x11). Since I see no reasonable way to do this in a compatible way, just require that EGL 1.5 is available. The problem is that EGL 1.4 seems to require you to create a display to query the EGL version and extensions, and you have a chicken-and-egg problem. It's very stupid. Maybe you could jump through some more hoops to get something compatible, but fuck that.
  Users on "too old" Mesa will fall back to GLX (which we keep around for a regrettable company known by the name of Nvidia). I think Wayland and GBM should do the same. They're sufficiently bleeding-edge that you can expect them to have EGL 1.5. On the other hand, the cursed RPI code will have to stay with eglGetDisplay().
  Speculative fix for #7154.
  (Rant about EGL follows. Actually I deleted it.)

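For reference, a minimal self-contained example of the EGL 1.5 path described above: eglGetPlatformDisplay() takes an explicit platform enum instead of the ambiguous void* of eglGetDisplay(), and the version check after eglInitialize() mirrors the "require EGL 1.5" policy. Error handling is reduced to the essentials.

      #include <EGL/egl.h>
      #include <EGL/eglext.h>   /* EGL_PLATFORM_X11_KHR (EGL_KHR_platform_x11) */
      #include <X11/Xlib.h>
      #include <stdio.h>

      int main(void)
      {
          Display *x11 = XOpenDisplay(NULL);
          if (!x11)
              return 1;

          /* The platform is stated explicitly, so EGL does not have to guess
           * what kind of pointer it was handed. */
          EGLDisplay dpy = eglGetPlatformDisplay(EGL_PLATFORM_X11_KHR, x11, NULL);
          if (dpy == EGL_NO_DISPLAY)
              return 1;

          EGLint major = 0, minor = 0;
          if (!eglInitialize(dpy, &major, &minor))
              return 1;

          /* Version can only be queried once a display is initialized,
           * hence the chicken-and-egg complaint in the commit message. */
          if (major < 1 || (major == 1 && minor < 5)) {
              fprintf(stderr, "EGL %d.%d is too old, need 1.5\n", major, minor);
              eglTerminate(dpy);
              return 1;
          }

          printf("EGL %d.%d initialized via eglGetPlatformDisplay()\n", major, minor);
          eglTerminate(dpy);
          XCloseDisplay(x11);
          return 0;
      }
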
* vo_gpu: sync duplicated condition on peak computation (wm4, 2019-11-16, 1 file, -0/+2)
  pass_color_map() (in video_shaders.c) and pass_colormanage() (video.c) both duplicate the condition on whether to do peak computation. Peak computation requires a compute shader, so if the duplicated conditions don't match, video_shaders.c will generate a compute shader, but video.c will try to run it as fragment shader. This leads to a "blue screen". This can be reproduced by playing a HDTV video with --target-peak=99.
  It's not clear how to fix this. Should pass_tone_map() be only invoked if mp_trc_is_hdr() == true (what pass_colormanage() uses to decide whether to enable peak computation), or should pass_colormanage() just tell pass_color_map() to skip peak computation? Decide for the latter, as it's more robust. Even if not correct, at least it gets rid of the blue shit.
  Fixes: #7149

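A tiny generic sketch of the shape of the chosen fix (compute the condition in one place and pass the decision down, so shader type and dispatch always agree); the names below are illustrative, not vo_gpu's actual functions.

      #include <stdbool.h>
      #include <stdio.h>

      struct frame_params { bool src_is_hdr; bool user_peak_override; };

      /* Single place that decides whether peak detection runs. */
      static bool want_peak_compute(const struct frame_params *p)
      {
          return p->src_is_hdr || p->user_peak_override;
      }

      /* The shader generator no longer re-derives the condition; it only does
       * what the caller asked for. */
      static void generate_color_map_pass(const struct frame_params *p, bool compute_peak)
      {
          (void)p;
          printf("generating %s shader\n", compute_peak ? "compute" : "fragment");
      }

      int main(void)
      {
          struct frame_params p = { .src_is_hdr = false, .user_peak_override = true };
          bool peak = want_peak_compute(&p);
          generate_color_map_pass(&p, peak);   /* dispatched the same way it was built */
          return 0;
      }
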
* wayland: use hidpi-window-scale option (dudemanguy, 2019-11-12, 1 file, -0/+2)

* build: fix compilation conditions for vaapi interop inits (Philip Sequeira, 2019-11-10, 1 file, -2/+2)
  This makes the condition for including each init match the condition for compiling the file that defines it. It's possible to e.g. HAVE_GL and HAVE_VAAPI without HAVE_VAAPI_EGL, which resulted in "undefined reference to `vaapi_gl_init'" with the old code.

* vo_gpu: yuv alpha is always full range (wm4, 2019-11-09, 1 file, -8/+6)
  Probably. It's not like these pixel formats are formally specified - FFmpeg added them because _some_ file format or decoder supports it, and while that format/codec may define it precisely, the pixel format is sort of disconnected and just a FFmpeg thing.
  In any case, the yuva sample I had at hand uses the full range the component data type can provide. The old code used the same "shifted" range as for Y/U/V components, which must have been wrong.
  This will not work correctly for packed YUVA formats, but fortunately they matter even less.

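For context, a small self-contained illustration of the two normalizations: limited ("shifted") range scales an 8-bit luma-style component from [16, 235], while full range uses the whole [0, 255] of the data type. The commit's point is that the alpha plane should get the second treatment. The formulas are the standard limited-range scaling; the code is illustrative, not mpv's shader logic.

      #include <stdio.h>

      /* Limited ("TV"/shifted) range for an 8-bit luma-like component:
       * 16 maps to 0.0 and 235 maps to 1.0. */
      static double norm_limited8(int v)
      {
          return (v - 16) / 219.0;
      }

      /* Full range: 0 maps to 0.0 and 255 maps to 1.0, which is what the
       * alpha component of yuva formats should use. */
      static double norm_full8(int v)
      {
          return v / 255.0;
      }

      int main(void)
      {
          int alpha = 255;   /* fully opaque in the sample data */
          printf("treated as limited range: %.4f\n", norm_limited8(alpha)); /* ~1.0913, out of range */
          printf("treated as full range:    %.4f\n", norm_full8(alpha));    /* 1.0000 */
          return 0;
      }
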
* vo_gpu: context_x11egl: check eglGetConfigAttrib() for errors (wm4, 2019-11-08, 1 file, -1/+4)
  Not sure why it assumes that it always succeeds (although generally it won't fail).

* img_format: remove some unneeded alpha flag handling (wm4, 2019-11-08, 2 files, -6/+0)
  Don't know what this was for, but the result doesn't change.

* test: add dumping of img_format metadata (wm4, 2019-11-08, 2 files, -97/+2)
  This is fragile enough that it warrants getting "monitored". This takes the commented test program code from img_format.c, makes it output to a text file, and then compares it to a "ref" file stored in git.
  Originally, I wanted to do the comparison etc. in a shell or Python script. But why not do it in C. So mpv calls /usr/bin/diff as a sub-process now.
  This test will start producing different output if FFmpeg adds new pixel formats or pixel format flags, or if mpv adds new IMGFMT (either aliases to FFmpeg formats or own formats). That is unavoidable, and requires manual inspection of the results, and then updating the ref file.
  The changes in the non-test code are to guarantee that the format ID conversion functions only translate between valid IDs.

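A bare-bones sketch of the "dump to a text file, then diff against a ref file checked into git" approach, done in plain C; the file names and the use of system() are illustrative, not how mpv's test harness actually invokes /usr/bin/diff.

      #include <stdio.h>
      #include <stdlib.h>

      /* Stand-in for dumping whatever metadata the test cares about. */
      static void dump_metadata(FILE *out)
      {
          fprintf(out, "format=yuv420p planes=3 chroma=1:1\n");
          fprintf(out, "format=rgb24   planes=1 chroma=0:0\n");
      }

      int main(void)
      {
          FILE *out = fopen("img_formats.txt", "w");
          if (!out)
              return 1;
          dump_metadata(out);
          fclose(out);

          /* Compare against the reference file stored in the repository;
           * a non-zero exit status means the dump changed and needs manual
           * inspection (and possibly an update of the ref file). */
          int ret = system("/usr/bin/diff -u ref/img_formats.txt img_formats.txt");
          return ret == 0 ? 0 : 1;
      }
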
* vo_gpu: vdpau actually works under EGL (wm4, 2019-11-07, 1 file, -4/+2)
  The use of glXGetCurrentDisplay() restricted this to the GLX backend. But actually it works under EGL as well. Removing the GLX-specific call and using the general mpv-internal method to get the X "Display" makes it work in mpv.
  I didn't know this. Nvidia didn't list this as extension in the EGL context when I still used their GPUs.
  Note that this might in theory break use of vdpau in some libmpv clients using the render API. But only if MPV_RENDER_PARAM_X11_DISPLAY is not used, and they relied on mpv using glXGetCurrentDisplay(). EGL does not provide such an API, and hwdec_vaapi.c also uses what hwdec_vdpau.c uses now. Considering that vaapi is preferable these days, it's not bad at all if these clients get "broken". They can be easily fixed by passing the display to mpv correctly.

* DOCS/contribute.md, zimg: remove 2 instances of an extraneous "s" (wm4, 2019-11-07, 1 file, -1/+1)

* vo_gpu: unconditionally clear framebuffer on start of frame (wm4, 2019-11-06, 1 file, -5/+3)
  For some reason, the first frame displayed on X11 with amdgpu and OpenGL will be garbled. This is especially visible if the player starts, displays a frame, but then still takes a while to properly start playback. With --interpolation, the behavior somehow changes (usually gets worse).
  I'm not sure what exactly is going on, and the code in video.c is way too abstruse. Maybe there is some slight possibility that a frame with uncleared contents gets displayed, which somehow also corrupts another frame that is displayed immediately after that.
  If clear is unconditionally run, this somehow doesn't happen, and you see a video frame. By any logic this shouldn't happen: a video frame should always overwrite the background. So I can't exclude that this isn't some sort of driver bug, or at least very obscure interaction.
  Clearing should be practically free anyway, so always do it.
  Fixes: #7105

* img_format: remove some unused format flags (wm4, 2019-11-03, 2 files, -25/+2)
  They were used at some point, but then fell into disuse. In general, these old flags are all a bit fuzzy, so it's a good idea to remove them as much as possible.
  The comment about MP_IMGFLAG_PAL isn't true anymore. The old meaning was deprecated at some point, and the flag was removed from "pseudo paletted" formats. I think mpv at one point changed its own flag from AV_PIX_FMT_FLAG_PSEUDOPAL to AV_PIX_FMT_FLAG_PAL, when the former was deprecated, and it became unnecessary to allocate a palette for non-paletted formats. (The one who deprecated it in FFmpeg was me, if you wonder.)
  MP_IMGFLAG_PLANAR was used in command.c; use a relatively similar flag as replacement.

* vo_x11: accept zimg formats (wm4, 2019-11-03, 1 file, -1/+2)
  This is slightly helpful for testing, and otherwise useless and without consequence. I'm not using the correct output format and using IMGFMT_RGB0 as placeholder. This doesn't matter currently, as both sws and zimg support this as output (and support any input for it). I'm doing this because it's surprisingly tricky to get the correct output format at this point, without digging deeper into x11 shit or refactoring parts of the VO. I don't care enough about this.

* sws_utils: remove some unnecessary sws bug work around (wm4, 2019-11-03, 1 file, -11/+0)
  Seems like this was needed in 2012. The comment indicates the bug was fixed in ffmpeg git, so it's long gone.

* vd_lavc: don't keep packets for fallbacks if errors are tolerated (wm4, 2019-11-02, 1 file, -1/+3)
  The user can raise the number of tolerated hardware decoding errors. On the other hand, we have a static limit on packets that are "saved" for fallback handling (and that's a good idea to avoid unbounded memory usage). In this case, it could happen that the start of a file was fine after a fallback, but after that buffered amount of data, it would suddenly skip. It's more useful to skip buffering entirely if the number of tolerated decoding errors exceeds the fixed buffer.
  (And also, I'm sure nobody gives a shit about this feature.)

* vd_lavc: fix prepare_decoding() failure modes (wm4, 2019-11-02, 1 file, -9/+14)
  prepare_decoding() returned a bool that was supposed to tell whether decoding could work, or if something was fucked. After recent changes to the decoder loop, this did not work anymore, and caused an endless loop.
  Redo it, so it makes more sense. avctx being NULL (software fallback initialization failed) now signals EOF. hwdec_failed needs to be handled on send_packet() only, where it probably never happens anyway.
  (Who was the idiot who made libavcodec have two entrypoints for decoding? Oh right, it was me. PEBKAC.)

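The "two entrypoints" being grumbled about are libavcodec's avcodec_send_packet()/avcodec_receive_frame() pair, which these vd_lavc commits reshape mpv's decode loop around. For reference, the standard shape of that loop (plain FFmpeg API, not mpv's wrapper code) looks roughly like this:

      #include <libavcodec/avcodec.h>

      /* Feed one packet and drain all frames it produces.
       * Returns 0 on success, <0 on a real error. */
      static int decode_packet(AVCodecContext *avctx, const AVPacket *pkt, AVFrame *frame)
      {
          int ret = avcodec_send_packet(avctx, pkt);   /* pkt == NULL flushes */
          if (ret == AVERROR(EAGAIN)) {
              /* Decoder is full: the caller has to drain frames and then
               * resend this packet (omitted to keep the sketch short). */
          } else if (ret < 0) {
              return ret;                              /* packet was rejected */
          }

          while (1) {
              ret = avcodec_receive_frame(avctx, frame);
              if (ret == AVERROR(EAGAIN))
                  return 0;       /* needs more input; send the next packet */
              if (ret == AVERROR_EOF)
                  return 0;       /* fully drained after a flush */
              if (ret < 0)
                  return ret;     /* decoding error */

              /* ... hand the frame to the rest of the pipeline here ... */
              av_frame_unref(frame);
          }
      }
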
* vd_lavc: mention hw decoding if decoding fails in hwdec mode (wm4, 2019-11-02, 1 file, -1/+2)
  Just so the user knows. Provides some context.

* vd_lavc: simplify fallback handling for full stream hw decoder (wm4, 2019-11-02, 1 file, -20/+18)
  Shovel the code around to make the data flow slightly simpler (?). At least there's only one send_packet function now.
  The old code had the problem that send_packet() could be called even if there were queued packets; due to sending the queued packets in the receive_frame function, this should not happen anymore (the code checking for this case in send_packet should normally never be called).
  Untested with actual full stream hw decoders (none available here); I created a test case by making hwaccel decoding fail.

* vd_lavc: signal packet consumed in drop-all case (wm4, 2019-11-02, 1 file, -1/+1)
  This is just a very special code path. This probably got stuck, now that the previous commit returned the EAGAIN properly. Untested.

* vd_lavc: change incorrect bool return type to int (wm4, 2019-11-02, 1 file, -1/+1)
  Forgotten in commit 5d5fdb7. This failed to return the error code properly. In particular, if the decoder rejected the packet, this was not properly detected. Normally, this mattered only in specific cases.
  Fixes: #7115

* zimg: support subsampled chroma with non-aligned image sizes (wm4, 2019-11-02, 1 file, -2/+9)

* zimg: stop using GBRP for RGB (wm4, 2019-11-02, 1 file, -23/+28)
  Instead, use a YUV planar format. It doesn't matter, since we use the format only internally and for "management" purposes. We're only interested in the physical layout, not what colorspace FFmpeg "forcibly" associates with it.
  Also get rid of using the old and slightly sketchy mp_imgfmt_find() function. Yep, the IMGFMT_RGB30 now "constructs" the planar format, instead of using a pixfmt constant. Slightly inconvenient, tricky, and fragile, but I like it, so bugger off.
  This whole thing gets rid of some of the strange plane permutations that were needed earlier.

* zimg: correct RGB30 order (probably) (wm4, 2019-11-02, 1 file, -1/+1)
  According to the definition of the GL format, and the definition in img_format.h, and the actual output by vo_gpu, the order of components was probably wrong. It's exceedingly likely that the vo_drm format (for which this was originally written) has the same layout, so this was probably a bug from when the zimg wrapper code was refactored.

* video: mess with the filter chain to enable zimg IMGFMT_RGB30 output (wm4, 2019-11-02, 1 file, -0/+1)
  This was too hardcoded to libswscale. In particular, IMGFMT_RGB30 output is only possible with the zimg wrapper, so the context needs to be taken into account (since this depends on the --sws-allow-zimg option dynamically).
  This is still slightly risky, because zimg currently will still fall back to swscale in some cases, such as when it refuses to initialize the particular color conversion that is requested. f_autoconvert.c could actually handle this better, but I'm too fucking lazy right now, and nobody cares anyway, so go away, OK?

* vo_gpu: opengl: add support for IMGFMT_RGB30 (wm4, 2019-11-02, 1 file, -0/+11)
  This integrates it as "special" format, with no alpha component, as the equivalent IMGFMT_RGB30 isn't meant to contain any. Nothing can produce this format in the video chain yet, so the next commits are needed to make this actually work.

* zimg: make --zimg-fast=yes default (wm4, 2019-11-02, 1 file)