```c
int f(unsigned int a)
{
    return __builtin_popcount(a >> (CHAR_BIT * sizeof(a) - 1));
}
```

This can be optimized to:

```c
return (a >> (__CHAR_BIT__ * sizeof(a) - 1));
```

This transformation is done by GCC, but not by LLVM.

See also the comparison here: https://godbolt.org/z/1Kz5qf
assigned to @rotateright
We miss both of these:

```llvm
define i32 @pop_signbit(i32 %x) {
  %b = lshr i32 %x, 31
  %r = tail call i32 @llvm.ctpop.i32(i32 %b), !range !0
  ret i32 %r
}

define i32 @pop_lowbit(i32 %x) {
  %b = and i32 %x, 1
  %r = tail call i32 @llvm.ctpop.i32(i32 %b), !range !0
  ret i32 %r
}
```
We might be able to rely on the range metadata. I'll take a look.
Metadata doesn't provide what we need. I used a ValueTracking call instead. Should be fixed with: https://reviews.llvm.org/rG236c4524a7cd
It looks like GCC still does more, so please open a new bug if more popcount optimizations are expected/needed.