```
$ cat gep-vector.ll
define i32* @bitcast_vec_to_array_gep(<7 x i32>* %x, i64 %y, i64 %z) {
  %arr_ptr = bitcast <7 x i32>* %x to [7 x i32]*
  %gep = getelementptr [7 x i32], [7 x i32]* %arr_ptr, i64 %y, i64 %z
  ret i32* %gep
}

$ opt -instcombine gep-vector.ll -S -o -
define i32* @bitcast_vec_to_array_gep(<7 x i32>* %x, i64 %y, i64 %z) {
  %gep = getelementptr <7 x i32>, <7 x i32>* %x, i64 %y, i64 %z
  ret i32* %gep
}
```
This transform is incorrect because DataLayout::getTypeAllocSize(<7 x i32>) and getTypeAllocSize([7 x i32]) may differ: the vector's alloc size is padded up to its alignment (32 bytes here), while the array occupies exactly 28 bytes, so the first GEP index now steps by the wrong stride. This can be double-checked by emitting assembly before and after the optimization: https://godbolt.org/z/xB-p5u

Before optimization: rax = rdi + 28 * rsi + 4 * rdx
After instcombine: rax = rdi + 32 * rsi + 4 * rdx
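For reference, the two alloc sizes can also be queried directly through the C++ API. This is a minimal standalone sketch (my own, not part of the report), written against a recent LLVM; the data layout string is an assumed typical x86-64 layout, and the numbers depend on the target:

```cpp
// Compare DataLayout alloc sizes of <7 x i32> and [7 x i32].
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  LLVMContext Ctx;
  // Assumed x86-64-style data layout string.
  DataLayout DL("e-m:e-i64:64-f80:128-n8:16:32:64-S128");
  Type *I32 = Type::getInt32Ty(Ctx);
  Type *Vec = FixedVectorType::get(I32, 7); // <7 x i32>, padded to 32 bytes
  Type *Arr = ArrayType::get(I32, 7);       // [7 x i32], exactly 28 bytes
  outs() << "vector: " << DL.getTypeAllocSize(Vec).getFixedValue() << "\n"
         << "array:  " << DL.getTypeAllocSize(Arr).getFixedValue() << "\n";
  return 0;
}
```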
The test was excerpted from test/Transforms/InstCombine/gep-vector.ll.
I agree this is a bug.
I introduced the bug here: https://reviews.llvm.org/D44833 Using weird types in the tests helped expose it. In the common case where we have a power-of-2 vector/array and the alloc-sizes are the same, I think the transform should still be ok. I'll post a patch for review.
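To make the safe case concrete, the shape of the guard is roughly this; a hedged sketch with a hypothetical name, not the actual patch:

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Type.h"

// Hypothetical helper (name is mine): folding a GEP over [N x T]* into a
// GEP over <N x T>* is only sound if both types have the same alloc size,
// so the outer GEP index keeps the same byte stride.
static bool gepSrcDstAllocSizesMatch(const llvm::DataLayout &DL,
                                     llvm::Type *ArrTy, llvm::Type *VecTy) {
  return DL.getTypeAllocSize(ArrTy) == DL.getTypeAllocSize(VecTy);
}
```

With <7 x i32> vs [7 x i32] this returns false (32 vs 28 bytes), so the transform would be skipped; for a power-of-2 case such as <8 x i32> vs [8 x i32], both alloc sizes are 32 bytes and the fold stays valid.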
Patch posted for review: https://reviews.llvm.org/D71771
Should be fixed with: https://reviews.llvm.org/rG79c7fa31f3aa