Alive2: https://alive2.llvm.org/ce/z/A3FMem

define i64 @src(i32 %a0) {
  %a = ashr i32 %a0, 16 ; all valid 16 -> 31
  %b = trunc i32 %a to i16
  %c = sext i16 %b to i64
  ret i64 %c
}

define i64 @tgt(i32 %a0) {
  %a = ashr i32 %a0, 16
  %c = sext i32 %a to i64
  ret i64 %c
}

https://c.godbolt.org/z/zh7ffM
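For anyone skimming: the same equivalence can be sanity-checked in plain C (the src/tgt names below just mirror the IR above; note that `>>` on a signed operand is implementation-defined in C, but every mainstream compiler implements it as an arithmetic shift):

```c
#include <stdint.h>

/* src: ashr i32, trunc to i16, sext to i64 */
int64_t src(int32_t a0) {
    int32_t a = a0 >> 16;   /* ashr (arithmetic shift on mainstream compilers) */
    int16_t b = (int16_t)a; /* trunc: lossless, since a fits [-32768, 32767] */
    return (int64_t)b;      /* sext */
}

/* tgt: the trunc/sext pair folds away because ashr by 16 already
   leaves the value sign-extended from bit 15 */
int64_t tgt(int32_t a0) {
    int32_t a = a0 >> 16;
    return (int64_t)a;
}
```

The key point is that `ashr i32 %x, 16` produces a value in [-32768, 32767], so the trunc to i16 drops only redundant sign copies.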
Same is true for lshr: https://alive2.llvm.org/ce/z/NFowLr
Err, submitted too soon.

Same is true for lshr:
https://alive2.llvm.org/ce/z/NFowLr
https://c.godbolt.org/z/7q545n

I think we could use ComputeNumSignBits() here. I'll try to take a look.

Same for zext, but we get that already:
https://c.godbolt.org/z/6GqxTM
https://alive2.llvm.org/ce/z/289tfg
https://alive2.llvm.org/ce/z/GWh-uX
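A quick C sketch of why the lshr variant is foldable too (hypothetical src/tgt names; this shows only the case where the shift amount equals the number of truncated-away bits, so lshr and ashr select the same low bits before the trunc):

```c
#include <stdint.h>

/* lshr version: the trunc to i16 keeps bits 16..31 of a0, and those
   bits are identical whether we shifted logically or arithmetically */
int64_t src(int32_t a0) {
    uint32_t a = (uint32_t)a0 >> 16; /* lshr */
    int16_t b = (int16_t)a;          /* trunc to i16 */
    return (int64_t)b;               /* sext */
}

/* equivalent ashr form, which then folds with sext as in the
   original ashr pattern */
int64_t tgt(int32_t a0) {
    int32_t a = a0 >> 16;            /* ashr (arithmetic on mainstream compilers) */
    return (int64_t)a;
}
```

So the lshr case reduces to the ashr case whenever the trunc discards exactly the bits where the two shifts differ.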
@lebedev.ri Are you still looking at this?
(In reply to Simon Pilgrim from comment #3)
> @lebedev.ri Are you still looking at this?

Not really. If someone wants to pick this up, feel free to.
There are a number of patches needed here:
* a general ComputeNumSignBits fold
* a fold for that specific IR, to handle non-splat vectors
* a fold for the case where the first op is lshr
* ???
* do we need something similar for zero-extension?
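To make the ComputeNumSignBits condition concrete, here is a small standalone C model (this is NOT LLVM's implementation, just a hypothetical helper that computes, for a concrete i32 value, how many top bits are copies of the sign bit, and checks when trunc-to-i16-then-sext is a no-op):

```c
#include <stdint.h>

/* Hypothetical model of what ComputeNumSignBits reports for a concrete
   value: the number of leading bits (including the sign bit itself)
   that all equal the sign bit. Signed >> is implementation-defined in
   C, but arithmetic on all mainstream compilers. */
unsigned num_sign_bits_i32(int32_t v) {
    unsigned n = 1;
    while (n < 32 && ((v >> (31 - n)) & 1) == ((v >> 31) & 1))
        n++;
    return n;
}

/* sext(trunc i32 %x to i16) == %x requires the value to fit in i16
   signed, i.e. at least 32 - 16 + 1 = 17 sign bits. Any `ashr i32 %x, 16`
   result satisfies this, which is why the fold in this report is valid. */
int trunc16_sext_is_noop(int32_t v) {
    return num_sign_bits_i32(v) >= 17;
}
```

The general patch would check exactly this kind of condition via ComputeNumSignBits instead of pattern-matching the specific ashr-by-16 shape, which is what makes it cover non-splat vectors and other shift amounts.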
Looking at the ComputeNumSignBits pattern - tests: https://reviews.llvm.org/rG337854270023
https://reviews.llvm.org/D103617
Filed the 'lshr' pattern as bug 50575. The example pasted in the description is transformed into that (ashr -> lshr).

But the bug title and the example in the godbolt link in the description are different and should be fixed with:
https://reviews.llvm.org/rGb865eead7657

So I think it's ok to close this one, but feel free to reopen or file new bugs to track other items.
Cheers Sanjay!