path: root/video/mp_image.c
Commit history (subject, author, date, files changed, lines -/+):
* mp_image: align stride to multiple of texel size (Niklas Haas, 2019-04-21, 1 file, -0/+3)
  This helps with compatibility and/or performance in particular for oddly-sized formats like rgb24. We use a loop to avoid having to calculate the lcm (or waste bytes in the extremely common case of the byte size and the stride align having shared factors).
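  As a rough illustration of the loop being described (this is not the actual mp_image.c code; the function name and types are made up for the sketch), repeatedly bumping an already power-of-two-aligned stride by the base alignment reaches a common multiple without ever computing the lcm:

      #include <stddef.h>

      /* Align "stride" so it is a multiple of both "stride_align" (assumed to be
       * a power of two) and "texel_size", without calculating an lcm. */
      static size_t align_stride(size_t stride, size_t texel_size, size_t stride_align)
      {
          size_t aligned = (stride + stride_align - 1) & ~(stride_align - 1);
          while (aligned % texel_size)
              aligned += stride_align;   /* stops once a common multiple is reached */
          return aligned;
      }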
* mp_image: strip all HDR peak information from SDR clips (Niklas Haas, 2018-09-05, 1 file, -0/+6)
  By overriding it with 1.0 (aka SDR). This prevents blowing up on mistagged clips. Fixes #6111.
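  A minimal sketch of that override, assuming mpv's mp_colorspace fields and the mp_trc_is_hdr() helper from csputils (treat the exact names as assumptions, not the literal patch):

      #include "video/csputils.h"   /* assumed mpv header: mp_colorspace, mp_trc_is_hdr() */

      /* If the transfer curve is not an HDR one, any tagged HDR peak is a mistag,
       * so force the signal peak back to 1.0, i.e. SDR reference white. */
      static void strip_sdr_peak(struct mp_colorspace *csp)
      {
          if (!mp_trc_is_hdr(csp->gamma))
              csp->sig_peak = 1.0;
      }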
* video: remove internal stereo_out flag (wm4, 2018-04-29, 1 file, -10/+6)
  Also rename stereo3d to stereo_in. The only real change is that the vo_gpu OSD code now uses the actual stereo 3D mode, instead of the --video-stereo-mode value. (Why does this vo_gpu code even exist?)
* mp_image: fixup a simple 10L in ref_buffer (Jan Ekström, 2018-04-21, 1 file, -1/+1)
  We didn't want to set the pointer to zero, but the value that the pointer was pointing towards.
* video: pass through container fps to filters (wm4, 2018-04-19, 1 file, -0/+1)
  This means vf_vapoursynth doesn't need a hack to work around the filter code, and libavfilter filters now actually get the frame_rate field on input pads set.
  The libavfilter doxygen says the frame_rate field is only to be set if the frame rate is known to be constant, and uses the word "must" (which probably means they really mean it?) - but ffmpeg.c sets the field to mere guesses anyway, and it looks like this normally won't lead to problems.
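  For context, this is roughly how a container fps reaches a libavfilter input pad through the buffer source API (generic FFmpeg usage, not mpv's exact wiring; buffersrc_ctx and container_fps are placeholders):

      #include <errno.h>
      #include <libavfilter/buffersrc.h>
      #include <libavutil/error.h>
      #include <libavutil/mem.h>
      #include <libavutil/rational.h>

      /* Hand a (presumed constant) container frame rate to a buffersrc input pad. */
      static int set_input_frame_rate(AVFilterContext *buffersrc_ctx, double container_fps)
      {
          AVBufferSrcParameters *par = av_buffersrc_parameters_alloc();
          if (!par)
              return AVERROR(ENOMEM);
          par->frame_rate = av_d2q(container_fps, 1000000);  /* only if fps is constant */
          int ret = av_buffersrc_parameters_set(buffersrc_ctx, par);
          av_free(par);
          return ret;
      }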
* video: remove libavutil PSEUDOPAL stuff (wm4, 2018-04-03, 1 file, -2/+1)
  Not needed anymore with newest libavutil.
* mp_image: fix UB with certain callers like vf_vdpaupp (wm4, 2018-03-15, 1 file, -0/+4)
  vf_vdpaupp crashed on certain files (with --hwdec=vdpau --deinterlace). This happened for example with mpeg2 files, which for some reason typically contain some AVFrame side data.
  It turns out the last change in 55c88fdb8f1a9269 was not quite clean, and forgot the special cases in mp_image_new_dummy_ref(). This function is supposed to copy all metadata from the argument passed, except buffer refs. But there were new buffer refs that were not cleared properly. Also, the ff_side_data pointer must be cleared, or the new mp_image would try to free it on destruction.
  The bottom line is that mp_image_new_dummy_ref() is a pretty bad idea, and I suppose all callers with non-NULL arguments should be changed to create a blank mp_image, and copy frame properties as needed (this includes callers of mp_image_new_custom_ref()).
  Fixes #5630.
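  A hedged sketch of the fix being described, with field names assumed from struct mp_image of that era (not the actual diff): a dummy ref copies every metadata field, then clears everything it must not own.

      #include "video/mp_image.h"   /* assumed: struct mp_image, MP_MAX_PLANES */

      static void make_dummy_ref(struct mp_image *dst, const struct mp_image *src)
      {
          *dst = *src;                  /* copy all metadata fields */
          for (int n = 0; n < MP_MAX_PLANES; n++)
              dst->bufs[n] = NULL;      /* don't take over AVBufferRef ownership */
          dst->hwctx = NULL;
          dst->icc_profile = NULL;
          dst->a53_cc = NULL;
          dst->ff_side_data = NULL;     /* would otherwise be freed twice on destroy */
          dst->num_ff_side_data = 0;
      }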
* mp_image: replace rude function with less rude FFmpeg upstream function (wm4, 2018-03-03, 1 file, -2/+4)
  This is new, thus a dependency bump is required.
* mp_image: make ref error handling slightly readable (wm4, 2018-03-03, 1 file, -10/+9)
  I think this is slightly more readable than this repeated "fail |= !".
* mp_image: pass through unknown AVFrame side data (wm4, 2018-03-03, 1 file, -1/+34)
  Useful for libavfilter. Somewhat risky, because we can't ensure the consistency of the unknown side data (but this is a general problem with side data, and libavfilter filters will usually get it wrong too _if_ there are conflict cases).
  Fixes #5569.
* mp_image: fix subtle side data memory leaks (wm4, 2018-03-03, 1 file, -2/+2)
  We must not create new references here, because mp_image_new_ref() is called later, and actually creates new references (including doing actual error checking). Blame C, not me.
* mp_image: preserve AVFrame closed captions data (wm4, 2018-01-30, 1 file, -0/+6)
  This is preparation for a change in vd_lavc.c: it should not have to access the demuxer (to pass along closed captions), so the idea is to make them part of mp_image, and to let the layer above vd_lavc propagate the buffer.
  Don't bother with preserving them for mp_image->AVFrame, because we don't need this.
* mp_image: factor buffer referencing (wm4, 2018-01-30, 1 file, -17/+15)
  Reduce the trivial but still annoying code duplication in mp_image_new_ref(), which has to create new buffer references and deal with possible failure of creating them.
  The tricky part is that if creating a reference fails, we must set the target to NULL, so that unreferencing the failed new mp_image reference does not release the buffer references of the original mp_image. For the same reason, the code can't jump to error handling when it can't create a new reference, and has to set a flag instead.
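  The pattern reads roughly like this (a sketch with assumed field names, not the literal mp_image.c code): every slot is nulled before the attempt, so a failed, partially-built copy can be unreferenced without touching the source's buffers.

      #include <stdbool.h>
      #include <libavutil/buffer.h>
      #include "video/mp_image.h"   /* assumed: struct mp_image, MP_MAX_PLANES */

      /* Duplicate all buffer references; returns false if any allocation failed. */
      static bool ref_buffers(struct mp_image *dst, const struct mp_image *src)
      {
          bool ok = true;
          for (int n = 0; n < MP_MAX_PLANES; n++) {
              dst->bufs[n] = NULL;
              if (src->bufs[n]) {
                  dst->bufs[n] = av_buffer_ref(src->bufs[n]);
                  if (!dst->bufs[n])
                      ok = false;   /* remember the failure, keep going */
              }
          }
          return ok;
      }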
* video: change some remaining vo_opengl mentions to vo_gpu (Akemi, 2018-01-20, 1 file, -1/+1)
* mp_image: fix some metadata loss with conversion from/to AVFrame (wm4, 2018-01-18, 1 file, -2/+14)
  This fixes that AVFrames passing through libavfilter (such as with --lavfi-complex) implicitly stripped some fields.
  I'm not actually sure what to do with the mp_image_params.color.light field here (what happens if the colorspace changed?) - there is no equivalent in AVFrame or FFmpeg at all.
  It did not affect the old --vf code, because it doesn't allow libavfilter to change the metadata.
  Also log the .light field in verbose mode.
* video: avoid some unnecessary vf.h includes (wm4, 2018-01-18, 1 file, -2/+0)
* vd_lavc, mp_image: remove weird mpv specific palette constant (wm4, 2017-12-01, 1 file, -3/+3)
  Was for times when we were trying to be less dependent on libav* I guess.
* vd_lavc: move display mastering data stuff to mp_image (wm4, 2017-10-30, 1 file, -0/+19)
  This is where it should be. It only wasn't because of an old libavcodec bug that returned the side data only on every IDR. This required some sort of caching, which is now dropped. (mp_image wouldn't have been able to do this kind of caching, because this code is stateless.) We don't support these old libavcodec versions anymore, which is why this is not needed anymore.
  Also move initialization of rotation/stereo stuff to dec_video.c.
* Bump libav* API use (wm4, 2017-10-30, 1 file, -9/+2)
  (Not tested on Windows and OSX.)
* video: make previously added hwdec params mechanism more generic (wm4, 2017-10-16, 1 file, -9/+9)
  The mechanism introduced in b135af6842bf assumed AVHWFramesContext would be enough. Apparently it's not - the intended use with Rockchip (not Rokchip btw.) requires accessing actual frame data in order to access the AVDRMFrameDescriptor struct.
  Just pass the entire mp_image to the new function. This is more flexible, although it slightly worries me that it will be less reusable for things which require setting up mp_image_params before any real frames are processed (such as filters).
* video: properly pass through ICC data (wm4, 2017-10-16, 1 file, -2/+24)
  The same should happen with any other side data that matters to mpv, otherwise filters will drop it. (No, don't try to argue that mpv should use AVFrame. That won't work.)
  ffmpeg_garbage() is copy&paste from frame_new_side_data() in FFmpeg (roughly feed201849b8f91), because it's not public API. The name reflects my opinion about FFmpeg's API.
  In mp_image_to_av_frame(), change the too-fragile *new_ref = (struct mp_image){0}; into explicitly zeroing out the fields that are "transferred" to the created AVFrame.
* mp_image: merge AVFrame conversion functions (wm4, 2017-10-16, 1 file, -38/+29)
  Merge mp_image_copy_fields_to_av_frame() into mp_image_from_av_frame(), same for the other direction. There isn't any good reason to keep them separate, and the refcounting handling makes it only more awkward.
* video: add mp_image_params.hw_flags and add an example (wm4, 2017-10-16, 1 file, -0/+8)
  It seems this will be useful for Rockchip DRM hwcontext integration.
  DRM hwcontexts have additional internal structure which can be different depending on the decoder, and which is not part of the generic hwcontext API. Rockchip has 1 layer, which EGL interop happens to translate to an RGB texture, while VAAPI (mapped as DRM hwcontext) will use multiple layers. Both will use sw_format=nv12, and thus are indistinguishable on the mp_image_params level. But this is needed to initialize the EGL mapping and the vo_gpu video renderer correctly. We hope that the layer count is enough to tell whether EGL will translate the data to an RGB texture (vs. 2 textures resembling raw nv12 data). For that we introduce MP_IMAGE_HW_FLAG_OPAQUE.
  This commit adds the flag, infrastructure to set it, and an "example" for D3D11.
  The D3D11 addition is quite useless at this point. But later we want to get rid of d3d11_update_image_attribs() anyway, while we still need a way to force d3d11vpp filter insertion, so maybe it has some justification (who knows). In any case it makes testing this easier.
  Obviously it also adds some basic support for triggering the opaque format for decoding, which will use a driver-specific format, but which is not supported in shaders. The opaque flag is not used to determine whether d3d11vpp needs to be inserted, though.
* mp_image: select an explicit fallback for chroma location (wm4, 2017-10-16, 1 file, -0/+7)
  If the chroma location is missing, vo_gpu will use centered chroma. Select a better chroma location by default: normally, it will always be MPEG video chroma location. If full levels are used, use JPEG chroma location, because that sort of sounds like it could make sense as it might coincide with JPEG being decoded.
  See e.g. #4804.
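  A minimal sketch of that fallback (enum and field names assumed from mpv's mp_image_params/csputils, not the exact commit):

      #include "video/mp_image.h"   /* assumed: mp_image_params, MP_CHROMA_*, MP_CSP_LEVELS_PC */

      static void set_fallback_chroma_location(struct mp_image_params *params)
      {
          if (params->chroma_location == MP_CHROMA_AUTO) {
              params->chroma_location = params->color.levels == MP_CSP_LEVELS_PC
                                      ? MP_CHROMA_CENTER   /* JPEG-style centered chroma */
                                      : MP_CHROMA_LEFT;    /* MPEG video siting */
          }
      }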
* video: drop old D3D11/DXVA2 support (wm4, 2017-09-26, 1 file, -32/+0)
  Now you need FFmpeg git, or something.
  This also gets rid of the last real use of gpu_memcpy(). libavutil does that itself. (vaapi.c still used it, but it was essentially unused, because the code path isn't really in use anymore. It wasn't even included due to the d3d-hwaccel dependency in wscript.)
* build: add preliminary LGPL mode (wm4, 2017-09-21, 1 file, -9/+7)
  See "Copyright" file for caveats.
  This changes the remaining "almost LGPL" files to LGPL, because we think that the conditions the author set for these were finally fulfilled.
* mp_image: don't guess colorspace params in mp_image_copy_attributes() (wm4, 2017-09-19, 1 file, -8/+12)
  This is "wrong", because you might want mp_image_copy_attributes() to preserve the information that the colorspace parameters are unknown. This is important for hwdec -copy modes, which call this function before fix_image_params() and mp_colorspace_merge() are called.
  Instead, just wipe the colorspace attributes if the pixel format changes in an apparently incompatible way. Use mp_image_params_guess_csp() logic for this and factor that into its own function. mp_image_set_attributes() attempts to do something similar, so change that in the same way.
  Also, mp_image_params_guess_csp() just returned if the imgfmt was invalid or unset - just remove that part, because it annoyingly doesn't fit into the new code, and had little reason to exist to begin with. (Probably.)
* mp_image: always copy pixel aspect ratio (wm4, 2017-09-19, 1 file, -4/+2)
  I see no reason not to do this. I think the check comes from the time when mp_image stored the image aspect ratio, instead of the pixel aspect ratio, where the logic might have made more sense.
* mp_image: always copy color attributes on hw download (wm4, 2017-09-19, 1 file, -9/+2)
  It was noticed that -copy hwdec modes typically dropped the chroma_location field. This happened because the attributes on hw download are copied with mp_image_copy_attributes(), which tries to copy these parameters only if src and dst were both YUV (in an attempt to copy parameters only if it makes sense).
  But hardware formats did not have the YUV flag set (anymore?), and code shouldn't attempt to check the flag in this way anyway. Drop the check, and always copy the whole color metadata struct. There is a call to mp_image_params_guess_csp() below, which tries to unset nonsense metadata if it was copied from a YUV format to RGB. This function would also do the right thing for hw formats (although for the cited bug only the software case matters).
  Fixes #4804.
* mp_image: include config.h directly (James Ross-Gowan, 2017-08-26, 1 file, -0/+1)
  This is needed for HAVE_SSE4_INTRINSICS. config.h used to be included as a transitive dependency of vf.h, but the include statement was removed from vf.h in 8f2ccba71bb4.
  Also silence an unused variable warning that was introduced in the same commit.
* video: add metadata handling for spherical video (wm4, 2017-08-21, 1 file, -1/+27)
  This adds handling of spherical video metadata: retrieving it from demux_lavf and demux_mkv, passing it through filters, and adjusting it with vf_format. This does not include support for rendering this type of video.
  We don't expect we need/want to support the other projection types like cube maps, so we don't include that for now. They can be added later as needed.
  Also raise the maximum sizes of stringified image params, since they can get really long.
* vd_lavc: decode embedded ICC profiles (Niklas Haas, 2017-08-03, 1 file, -0/+14)
  Since these need to be refcounted, we throw them directly into struct mp_image instead of being part of mp_colorspace. Even though they would semantically make more sense in mp_colorspace, having them there is really awkward because mp_colorspace is passed around and stored a lot, and this way their lifetime is exactly tied to the lifetime of the mp_image associated with it.
* mp_image: expose some image allocation code as helpers, refactor (wm4, 2017-07-23, 1 file, -20/+123)
  Refactor the image allocation code, and expose part of it as helper code. This aims towards allowing callers to easily allocate mp_image references from custom-allocated linear buffers. This is exposing only as much as what should be actually required.
* mp_image: use new code for determining RGB/XYZ exceptions (wm4, 2017-06-30, 1 file, -3/+5)
  Slightly cleaner, possibly slightly more correct. (The last case should be dead code now. In general, we can't know the implied colorspace from a AV_PIX_FMT, at least not if FFmpeg adds a new one.)
* video: get rid of swapped packed YUV (wm4, 2017-06-30, 1 file, -3/+1)
  Another legacy annoyance. The only place where packed YUV is still important is slightly older Apple hardware or drivers, which require it for efficient hardware decoding.
* mp_image: infer correct HLG sig_peak (Niklas Haas, 2017-06-27, 1 file, -4/+9)
  For HLG, due to the usage of a reference OOTF configured for 1000 cd/m², the default sig_peak of =nom_peak was suboptimal. We can go down to 1000/100 (=10.0), since that's the true dynamic range of the output signal after it passes through the OOTF.
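  The arithmetic behind the new default, written out (peaks are expressed in units of SDR reference white, taken as 100 cd/m²):

      /* HLG's reference OOTF is defined for a 1000 cd/m2 display, so the signal
       * peak relative to 100 cd/m2 reference white is 1000 / 100. */
      static const double hlg_default_sig_peak = 1000.0 / 100.0;   /* = 10.0 */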
* vo_opengl: implement support for OOTFs and non-display referred content (Niklas Haas, 2017-06-18, 1 file, -0/+11)
  This introduces yet another mp_colorspace member, an enum `light` (for lack of a better name) which basically tells us whether we're dealing with scene-referred or display-referred light, but also a bit more metadata (in which way is the scene-referred light expected to be mapped to the display?).
  The addition of this parameter accomplishes two goals:
  1. Allows us to actually support HLG more-or-less correctly [1]
  2. Allows people playing back direct “camera” content (e.g. v-log or s-log2) to treat it as scene-referred instead of display-referred
  [1] Even better would be to use the display-referred OOTF instead of the idealized OOTF, but this would require either native HLG support in LittleCMS (unlikely) or more communication between lcms.c and video_shaders.c than I'm remotely comfortable with.
  That being said, in principle we could switch our usage of the BT.1886 EOTF to the BT.709 OETF instead and treat BT.709 content as being scene-referred under application of the 709+1886 OOTF; which moves that particular conversion from the 3dlut to the shader code; but also allows a) users like UliZappe to turn it off and b) supporting the full HLG OOTF in the same framework. But I think I prefer things as they are right now.
* video: refactor HDR implementation (Niklas Haas, 2017-06-18, 1 file, -8/+4)
  List of changes:
  1. Kill nom_peak, since it's a pointless non-field that stores nothing of value and is _always_ derived from ref_white anyway.
  2. Kill ref_white/--target-brightness, because the only case it really existed for (PQ) actually doesn't need to be this general: According to ITU-R BT.2100, PQ *always* assumes a reference monitor with a white point of 100 cd/m².
  3. Improve documentation and comments surrounding this stuff.
  4. Clean up some of the code in general. Move stuff where it belongs.
* mp_image: change license to LGPL (almost) (wm4, 2017-06-16, 1 file, -6/+3)
  Since michael was somewhat involved in it, wait with the actual license change until the core is relicensed. Thus mark it as "Almost LGPL.".
  The worrisome part about mp_image.c is that it was created by cehoyos (who disagreed with LGPL) in commit f2dee327b2797. But it turns out it was a patch by someone else (who agreed with LGPL). For some reason, the patch was actually slightly modified by cehoyos for no reason (messed with the include statements), so we mess them back, just to be sure.
  Other than this, there were some commits that added support for new IMGFMTs over the years. Some of these were by people we didn't ask or we didn't get permission from. But since the original mp_image code was replaced by more generic code using FFmpeg pixdesc, none of these changes are left anyway.
  One additional change by cehoyos (115bfb976270) has been removed as well (when "direct rendering" was dropped from the filter chain).
* mp_image: refuse to convert frames of unknown format to AVFrame (wm4, 2017-06-08, 1 file, -0/+2)
  This could happen with some "special" hwaccel formats that exist in mpv, but not libavutil.
* mp_image: for hwaccel, use underlying fmt in mp_image_params_guess_csp() (wm4, 2017-02-21, 1 file, -1/+2)
  If imgfmt is a hwaccel format, hw_subfmt will contain the CPU equivalent of the data stored in the hw frames. Strictly speaking, not doing this was a bug, but since hwaccel formats were tagged with MP_IMGFLAG_YUV, it didn't have much of an impact.
* lavfi: use mp_image to store the filter pad format (wm4, 2017-02-20, 1 file, -0/+11)
  Preparation for enabling hw filters. mp_image_params can't have an AVHWFramesContext reference (because it can't hold any allocations, and isn't meant to hold "active" data in the first place). So just use an mp_image. It has all real data removed, because that would essentially leak 1 frame once the decoder or renderer don't need it anymore.
* mp_image: use AVFrame.opaque_ref to pass through mpv-only fields (wm4, 2017-02-13, 1 file, -0/+20)
  We can do this now, which means we can pass a mp_image through libavfilter without loss. Currently, this affects relatively obscure fields only.
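  AVFrame.opaque_ref is a real, refcounted FFmpeg field; a generic sketch of the round trip looks roughly like this (the wrapped struct and its contents are hypothetical, not mpv's actual layout):

      #include <errno.h>
      #include <string.h>
      #include <libavutil/buffer.h>
      #include <libavutil/error.h>
      #include <libavutil/frame.h>

      /* Hypothetical container for fields only the wrapping application cares about. */
      struct extra_fields { int some_app_only_flag; };

      static int attach_extra(AVFrame *frame, const struct extra_fields *extra)
      {
          frame->opaque_ref = av_buffer_alloc(sizeof(*extra));
          if (!frame->opaque_ref)
              return AVERROR(ENOMEM);
          memcpy(frame->opaque_ref->data, extra, sizeof(*extra));
          return 0;
      }

      static void read_back_extra(const AVFrame *frame, struct extra_fields *out)
      {
          if (frame->opaque_ref && frame->opaque_ref->size >= sizeof(*out))
              memcpy(out, frame->opaque_ref->data, sizeof(*out));
      }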
* vd_lavc, mp_image: remove code duplication for AVFrame<->mp_image (wm4, 2017-01-12, 1 file, -0/+14)
  Mostly affects conversion of the colorimetric parameters.
  Not changing AV_FRAME_DATA_MASTERING_DISPLAY_METADATA handling - that's too messy, as decoders typically output it for keyframes only, and would require weird caching that can't even be done on the level of the frame rewrapping functions.
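  The colorimetric mapping in question boils down to a handful of field translations; a sketch (the helper and field names on the mpv side are assumptions based on csputils.h, while the AVFrame fields are real):

      #include <libavutil/frame.h>
      #include "video/csputils.h"   /* assumed: avcol_*_to_mp_* helpers */
      #include "video/mp_image.h"   /* assumed: struct mp_image */

      static void copy_av_colorimetry(struct mp_image *dst, const AVFrame *src)
      {
          dst->params.color.space     = avcol_spc_to_mp_csp(src->colorspace);
          dst->params.color.levels    = avcol_range_to_mp_csp_levels(src->color_range);
          dst->params.color.primaries = avcol_pri_to_mp_csp_prim(src->color_primaries);
          dst->params.color.gamma     = avcol_trc_to_mp_csp_trc(src->color_trc);
      }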
* vaapi: properly set hw_subfmt field with new decode API (wm4, 2017-01-12, 1 file, -0/+6)
  This fixes direct rendering with hwdec_vaegl.c.
  The code duplication between update_image_params() and mp_image_copy_fields_from_av_frame() is quite annoying, but will have to be resolved in another commit.
* video: use demuxer-signaled duration for last video frame (wm4, 2016-12-21, 1 file, -0/+1)
  Helps with gif, probably does unwanted things with other formats.
  This doesn't handle --end quite correctly, but this could be added later.
  Fixes #3924.
* Remove compatibility things (wm4, 2016-12-07, 1 file, -4/+0)
  Possible with bumped FFmpeg/Libav. These are just the simple cases.
* mp_image: dump all mp_colorspace members in verbose logging (wm4, 2016-11-08, 1 file, -1/+7)
  Also extend the default buffer size for formatting this string, because it can get too damn long.
* mp_image: fix clearing to black with p010 format (wm4, 2016-09-29, 1 file, -1/+1)
  Using vf_expand (which uses mp_image_clear()) with p010 cleared chroma to green instead.
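  A generic illustration of why p010 is special here (not mpv's mp_image_clear()): the chroma plane holds interleaved 10-bit Cb/Cr in the high bits of 16-bit words, so "black" needs the neutral value 0x8000 rather than 0, and an all-zero chroma plane renders as green.

      #include <stddef.h>
      #include <stdint.h>

      /* Fill one row of a p010 CbCr plane with neutral chroma. */
      static void p010_clear_chroma_row(uint16_t *row, size_t chroma_width)
      {
          for (size_t x = 0; x < chroma_width; x++) {
              row[2 * x + 0] = 0x8000;   /* Cb */
              row[2 * x + 1] = 0x8000;   /* Cr */
          }
      }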
* video: change hw_subfmt meaning (wm4, 2016-07-15, 1 file, -1/+1)
  The hw_subfmt field roughly corresponds to the field AVHWFramesContext.sw_format in ffmpeg. The ffmpeg one is of the type AVPixelFormat (instead of the underlying hardware format), so it's a good idea to switch to this too for preparation.
  Now the hw_subfmt field is an mp_imgfmt instead of an opaque/API-specific number. VDPAU and Direct3D11 already used mp_imgfmt, but Videotoolbox and VAAPI had to be switched.
  One somewhat user-visible change is that the verbose log will now always show the hw_subfmt as image format, instead of as a nonsensical number.
  (In the end it would be good if we could switch to AVHWFramesContext completely, but the upstream API is incomplete and doesn't cover Direct3D11 and Videotoolbox.)
* vo_opengl: generalize HDR tone mapping mechanism (Niklas Haas, 2016-07-03, 1 file, -3/+4)
  This involves multiple changes:
  1. Brightness metadata is split into nominal peak and signal peak. For a quick and dirty explanation: nominal peak is the brightest value that your color space can represent (i.e. the brightness of an encoded 1.0), and signal peak is the brightest value that actually occurs in the video (i.e. the brightest thing that's displayed).
  2. vo_opengl uses a new decision logic to figure out the right nom_peak and sig_peak for all situations. It also does a better job of picking the right target gamut/colorspace to use for the OSD. (Which still is and still should be treated as sRGB). This change in logic also fixes #3293 en passant.
  3. Since it was growing rapidly, the logic for auto-guessing / inferring the right colorimetry configuration (in pass_colormanage) was split from the logic for actually performing the adaptation (now pass_color_map).
  Right now, the new logic doesn't do a whole lot since HDR metadata is still ignored (but not for long).
* mp_image: split colorimetry metadata into its own struct (Niklas Haas, 2016-07-03, 1 file, -55/+52)
  This has two reasons:
  1. I tend to add new fields to this metadata, and every time I've done so I've consistently forgotten to update all of the dozens of places in which this colorimetry metadata might end up getting used. While most usages don't really care about most of the