author     wm4 <wm4@nowhere>    2019-07-15 03:06:47 +0200
committer  wm4 <wm4@nowhere>    2019-09-19 20:37:05 +0200
commit     e1157cb6e8191f98b60813dc91342e3baf577d92 (patch)
tree       4e60d1bc521848f1249856b15edd327370d21a91 /video/decode
parent     36dd2348a1e118b44fcccd3c664af7c5f960b6bb (diff)
video: generally try to align image data on 64 bytes
Generally, using x86 SIMD efficiently (or crash-free) requires aligning all data on boundaries of 16, 32, or 64 bytes (depending on the instruction set used): 64 bytes are needed for AVX-512, 32 for old AVX, 16 for SSE. Both FFmpeg and zimg usually require aligned data for this reason.

FFmpeg is very unclear about alignment. Yes, it requires you to align data pointers and strides. No, it doesn't tell you how much, except sometimes (libavcodec has a legacy-looking avcodec_align_dimensions2() API function, which requires a heavy-weight AVCodecContext as argument). Sometimes, FFmpeg will take a shit on YOUR and ITS OWN alignment. For example, vf_crop will randomly reduce the alignment of data pointers, depending on the crop parameters. On the other hand, some libavfilter filters or libavcodec encoders may randomly crash if they get the wrong alignment. I have no idea how this thing works at all.

FFmpeg usually doesn't seem to signal alignment internally anywhere, and usually leaves it to av_malloc() etc. to allocate with proper alignment. libavutil/mem.c currently has an ALIGN define, which is set to 64 if FFmpeg is built with AVX-512 support, or as low as 16 if built without any AVX support. The really funny thing is that a normal FFmpeg build will e.g. align tiny string allocations to 64 bytes, even if the machine does not support AVX at all.

For zimg use (in a later commit), we also want guaranteed alignment. Modern x86 should actually not be much slower at unaligned accesses, but that doesn't help. zimg's dumb intrinsic code apparently randomly chooses between aligned and unaligned accesses (depending on the compiler, I guess), and on some CPUs these can even cause crashes. So just treat the requirement to align as a fact of life.

All this means that we should probably make sure our own allocations are 64 bytes aligned. This still doesn't guarantee alignment in all cases, but it's slightly better than before.

This also makes me wonder whether we should always override libavcodec's buffer pool, just so we have a guaranteed alignment. Currently, we only do that if --vd-lavc-dr is used (and if that actually works). On the other hand, it always uses DR on my machine, so who cares.
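As a rough illustration of what the 64-byte target means in practice (this is not mpv's allocation code; IMG_ALIGN and the buffer sizes below are made up for the example), here is a self-contained C sketch that rounds a stride up to the alignment and allocates a plane on a 64-byte boundary:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define IMG_ALIGN 64  /* hypothetical constant standing in for the 64-byte target */

int main(void)
{
    /* Round the row size up to a multiple of the alignment
     * (the same idea as FFmpeg's FFALIGN macro). */
    size_t width_bytes = 1000;
    size_t stride = (width_bytes + IMG_ALIGN - 1) & ~(size_t)(IMG_ALIGN - 1);

    /* Allocate the plane so the base pointer, and with an aligned stride
     * every row start, sits on a 64-byte boundary. */
    size_t height = 500;
    void *plane = NULL;
    if (posix_memalign(&plane, IMG_ALIGN, stride * height) != 0)
        return 1;

    printf("stride=%zu, base %% 64 = %zu\n", stride,
           (size_t)((uintptr_t)plane % IMG_ALIGN));

    free(plane);
    return 0;
}

With IMG_ALIGN at 64 this prints a stride of 1024 and a remainder of 0; an SSE-only machine would get away with 16, which is why simply picking the largest requirement everywhere is the path of least resistance.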
Diffstat (limited to 'video/decode')
-rw-r--r--   video/decode/vd_lavc.c   2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/video/decode/vd_lavc.c b/video/decode/vd_lavc.c
index e07eecc2d2..6c356e2e01 100644
--- a/video/decode/vd_lavc.c
+++ b/video/decode/vd_lavc.c
@@ -860,7 +860,7 @@ static int get_buffer2_direct(AVCodecContext *avctx, AVFrame *pic, int flags)
     // We assume that different alignments are just different power-of-2s.
     // Thus, a higher alignment always satisfies a lower alignment.
-    int stride_align = 0;
+    int stride_align = MP_IMAGE_BYTE_ALIGN;
     for (int n = 0; n < AV_NUM_DATA_POINTERS; n++)
         stride_align = MPMAX(stride_align, linesize_align[n]);
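Not part of the patch, but for context: the changed line seeds stride_align with a 64-byte minimum before the loop raises it to the largest per-plane value libavcodec reports. A tiny self-contained sketch of that max-then-round-up pattern (the linesize_align values are invented, and MAX/ALIGN_UP merely stand in for mpv's MPMAX and its stride rounding):

#include <stdio.h>

#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
    int stride_align = 64;                    /* assuming MP_IMAGE_BYTE_ALIGN resolves to the commit's 64-byte target */
    int linesize_align[4] = {32, 16, 16, 1};  /* made-up per-plane requirements */

    /* Power-of-two alignments nest, so the maximum satisfies all of them. */
    for (int n = 0; n < 4; n++)
        stride_align = MAX(stride_align, linesize_align[n]);

    /* A stride rounded up to that maximum is valid for every requirement. */
    int stride = 1000;
    printf("stride_align=%d, aligned stride=%d\n",
           stride_align, ALIGN_UP(stride, stride_align));
    return 0;
}

Seeding with 0, as the old code did, meant the final alignment was only whatever libavcodec happened to report; seeding with the 64-byte constant guarantees at least that much regardless.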