==== arm.c ====
char add(char b) { return b + 1; }

==== arm.ll ====
target datalayout = "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64"
target triple = "armv4t--"

; Function Attrs: minsize norecurse nounwind optsize readnone
define arm_aapcscc zeroext i8 @add(i8 zeroext %b) {
entry:
  %conv = zext i8 %b to i32
  %add = add nuw nsw i32 %conv, 1
  %conv1 = trunc i32 %add to i8
  ret i8 %conv1
}

==== arm.s ====
add:                                    @ @add
	.fnstart
@ BB#0:                                 @ %entry
	add	r0, r0, #1
	and	r0, r0, #255
	bx	lr

$ llc -march=arm arm.ll -enable-ipra=true -print-regusage
add Clobbered Registers: R0 R1 R0_R1

I think it should be:

$ llc -march=arm arm.ll -enable-ipra=true -print-regusage
add Clobbered Registers: R0 R0_R1
I was misunderstanding the regmask; sorry about that. The current implementation is correct.
I don't think the current implementation is correct, because I have a very simple example on X86 where only CL is clobbered, yet CH is also marked as clobbered:

target triple = "x86_64--"

define i8 @main(i8 %X) {
  %inc = add i8 %X, 1
  %inc2 = mul i8 %inc, 5
  ret i8 %inc2
}

For the above LLVM IR, the generated X86 code is as follows:

main:                                   # @main
	.cfi_startproc
# BB#0:
	movb	$5, %cl
	movl	%edi, %eax
	mulb	%cl
	addb	$5, %al
	retq

Here it is clear that only CL, CX, ECX, and RCX should be marked as clobbered, but the current implementation marks CH too. Here is the review request that should fix this: https://reviews.llvm.org/D22400
Fixed by commit https://reviews.llvm.org/rL276235.