The condition a<=x<=b can be transformed into (unsigned)x-a<=b-a. However, if
a=INT_MIN then this "optimization" leads to suboptimal code:

#include <limits.h>
int bad (int n) {
  return ((unsigned)n-INT_MIN<=-1-INT_MIN);
}

->

define i32 @bad(i32 %n) {
entry:
        %tmp2 = xor i32 %n, -2147483648         ; <i32> [#uses=1]
        icmp sgt i32 %tmp2, -1          ; <i1>:0 [#uses=1]
        zext i1 %0 to i32               ; <i32>:0 [#uses=1]
        ret i32 %0
}

This would be better as: "return n<0;"
I.e., this example shows that replacing ult with slt (and presumably vice versa) can be advantageous, but this is not exploited.
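For concreteness, here is a small standalone C check of both the general trick and the degenerate case. This is a sketch, not part of the original report: the helper names in_range and good and the test values are mine; bad matches the function above.

#include <assert.h>
#include <limits.h>

/* General trick: a <= x && x <= b  <=>  (unsigned)x - a <= (unsigned)b - a. */
static int in_range(int x, int a, int b) {
    return (unsigned)x - (unsigned)a <= (unsigned)b - (unsigned)a;
}

/* Degenerate case a == INT_MIN: subtracting INT_MIN is an xor with the sign
 * bit, so the whole test collapses to a sign check. */
static int bad(int n)  { return (unsigned)n - INT_MIN <= -1 - INT_MIN; }
static int good(int n) { return n < 0; }

int main(void) {
    int tests[] = { INT_MIN, INT_MIN + 1, -6, -5, 0, 7, 8, INT_MAX };
    for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; i++) {
        assert(in_range(tests[i], -5, 7) == (-5 <= tests[i] && tests[i] <= 7));
        assert(bad(tests[i]) == good(tests[i]));
    }
    return 0;
}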
I forgot to mention where this comes from: the gcc -> llvm switch conversion code lowers wide case ranges to explicit conditional branches. The test that x is in the range lo .. hi is emitted as "x-lo ule hi-lo". If lo is INT_MIN then this results in suboptimal code, as explained in this PR. I could have checked for this case explicitly in the switch conversion code, but I didn't want to do so because it is really a job for the optimizers: they should be able to handle this case. Oddly enough, if you emit the test as "(x>=lo)&&(x<=hi)" then it gets simplified optimally.
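To make the two lowerings concrete, here is roughly what the emitted range tests amount to in C. This is my sketch, not the actual switch conversion code: the function names are mine, and in the real lowering lo and hi are compile-time constants rather than parameters.

static int range_test_sub(int x, int lo, int hi) {
    /* "x - lo ule hi - lo": one subtraction plus one unsigned compare.  When
     * lo == INT_MIN the subtraction is an xor with the sign bit, which the
     * optimizers failed to clean up at the time of this report. */
    return (unsigned)x - (unsigned)lo <= (unsigned)hi - (unsigned)lo;
}

static int range_test_cmp(int x, int lo, int hi) {
    /* "(x >= lo) && (x <= hi)": with lo == INT_MIN the first comparison is
     * trivially true and folds away, so this form simplifies optimally. */
    return x >= lo && x <= hi;
}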
Fixed, testcase here: Transforms/InstCombine/xor2.ll:test[01]

Patch here:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20070402/046701.html

This handles the general case of a xor constant, not just this specific one.

-Chris
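For reference, the arithmetic identity behind the sign-bit instance of this fold (my summary of why the xor can be absorbed into the comparison, not the patch itself): xoring a value with the sign bit maps signed order onto unsigned order, so "(x ^ SIGN) ult c" is the same as "x slt (c ^ SIGN)". A quick exhaustive check over a 16-bit analogue:

#include <assert.h>
#include <stdint.h>

int main(void) {
    /* (x ^ SIGN) u< c  <=>  x s< (c ^ SIGN), checked for all 16-bit x and a
     * sample of constants c (step 257, so both 0x0000 and 0xFFFF are hit).
     * Assumes the usual two's-complement conversion to int16_t. */
    const uint16_t SIGN = 0x8000;
    for (uint32_t x = 0; x <= 0xFFFF; x++)
        for (uint32_t c = 0; c <= 0xFFFF; c += 257)
            assert(((uint16_t)(x ^ SIGN) < (uint16_t)c)
                   == ((int16_t)x < (int16_t)(c ^ SIGN)));
    return 0;
}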