Make better use of conditional unreachable's in LLVM #1182

Closed
llvmbot opened this issue Jun 19, 2006 · 16 comments
Labels
bugzilla Issues migrated from bugzilla

Comments

@llvmbot
Collaborator

llvmbot commented Jun 19, 2006

Bugzilla Link 810
Resolution FIXED
Resolved on Sep 03, 2020 07:29
Version trunk
OS All
Blocks llvm/llvm-bugzilla-archive#3100
Reporter LLVM Bugzilla Contributor
CC @atrick,@mkurdej,@dwblaikie,@fhahn,@yuanfang-chen,@irishrover

Extended Description

Hi,

I'd like to open a discussion on adding assert/assume intrinsics to
LLVM. Here I will describe the syntax, semantics, and applications
of both.

Those statements should have the following syntax:

assert/assume ( Boolean_variable, String )

"Boolean_variable" represents a condition.

The only purpose of "String" is to classify assertions and assumptions
into classes, so that, for example, a certain class of assertions can be
enabled or disabled during code generation. For maximal generality,
I'd suggest leaving "String" unspecified for now. Users can come up
with identifiers like "Range_check" or similar.

The semantics of the statements are:

ASSERT - if the condition evaluates to TRUE, the statement is a NOP,
otherwise the assertion terminates the program. Code generators should
be able to enable/disable assertions, but are also allowed to completely
ignore them.

ASSUME - is always a NOP, but code generators might generate code
that issues a warning if the condition evaluates to FALSE. Assumptions
represent conditions that "should" hold, but are not guaranteed to.
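[Editor's sketch] A minimal C emulation of these semantics; the macro names, the DISABLE_ASSERTS switch, and the divide example are illustrative assumptions, not part of the proposal:

```c
#include <stdio.h>
#include <stdlib.h>

/* ASSERT: NOP when the condition holds, otherwise terminates the
 * program; can be compiled out wholesale, mirroring "code generators
 * should be able to enable/disable assertions". */
#ifdef DISABLE_ASSERTS
#define ASSERT(cond, class) ((void)0)
#else
#define ASSERT(cond, class) \
    do { if (!(cond)) { fprintf(stderr, "assert failed: %s\n", (class)); abort(); } } while (0)
#endif

/* ASSUME: semantically always a NOP; this sketch merely warns when the
 * condition is violated, as the proposal permits. */
#define ASSUME(cond, class) \
    do { if (!(cond)) fprintf(stderr, "assumption violated: %s\n", (class)); } while (0)

int divide(int a, int b) {
    ASSERT(b != 0, "Range_check");     /* hard failure if violated */
    ASSUME(a >= b, "Loop Invariant");  /* warning only */
    return a / b;
}
```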

Applications:

ASSERT - debugging, code reliability, can represent both user provided
assertions (invariants, pre/post-conditions) and automatically inserted
checks, like range checking.

ASSUME - primarily as a tool for communication between passes. Some
possible applications include:

  • language independent attribute mechanism
    [ like assume(restricted(ptr_x), "Restricted Pointer") ]

  • passing information through different stages of the compilation
    framework, for example, a loop analysis pass might store an inferred
    loop invariant as assume(x + y < 200, "Loop Invariant").

  • static checking. For details I'd suggest papers from Dijkstra, Greg
    Nelson, Rustan Leino.
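[Editor's note] LLVM did eventually gain an assume intrinsic, llvm.assume, which takes only an i1 and has no string class argument; the loop-invariant bullet above might be expressed with it roughly as:

```llvm
; Sketch: assume(x + y < 200, "Loop Invariant") in llvm.assume form
; (the string class argument has no counterpart and is dropped).
%sum  = add i32 %x, %y
%cond = icmp slt i32 %sum, 200
call void @llvm.assume(i1 %cond)
```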

It should be noted that ASSUMEs have application specific meaning, and
should not be touched by passes that do not use them (something like dbg
intrinsics).

Looking forward to your comments,

Domagoj Babic

@lattner
Collaborator

lattner commented Jun 19, 2006

Basic question about "ASSERT": why bother? LLVM already has the ability to express "if this condition isn't
true, abort", you need no intrinsic for that.

Assume is useful as an intrinsic, and could conceptually be useful for conveying information from the
front-end to the optimizers. However, using it this way would require a bunch of new predicates (e.g.
'restricted', which isn't defined here), and it would seriously bloat the code.

Why do you need assume for loop invariants? Those are trivial to compute.

-Chris

@llvmbot
Collaborator Author

llvmbot commented Jun 21, 2006

The advantage of having an explicit ASSERT statement is that it can be used by
automatic code-annotation tools that infer assertions. Also, it's handy to have
the capability of switching certain classes of assertions on/off during code
generation. You just can't get the same functionality with plain test-exit blocks.

Here's an example: let's assume that you're compiling some kernel (linux/or
whatever) for debugging. And, you have a tool that inserts lock checks (or
access to blocking calls while the kernel lock is locked, or similar). Now, if
you insert test-exit blocks, you need to recompile pretty much everything if you
want to check something else. If you could disable inserted assertions at code
generation time, you would need just to run the back-end again.

As for loop invariants, I disagree. Can you suggest an algorithm to find an
invariant of the following loop:
while (y1 != y2) {
while (y1 > y2) { y1=y1-y2; y4=y4+y3; }
while (y2 > y1) { y2=y2-y1; y3=y3+y4; }
}

I can't, and I'll happily pay a beer to anyone who can.

Additional predicates can be introduced when and if needed. Currently there's
no way to pass certain additional info from the source to the bytecode level.
For example, it would be useful to know that a certain pointer is expected to be
in user or kernel space. We can start with plain boolean variables.

Furthermore, LLVM is lacking a language-independent attribute system. Assumes
would solve that problem too.

And, I wonder whether that statement about code blowup is true. How many
predicates would one need in the worst case? 10-20 for Linux kernel?

@lattner
Collaborator

lattner commented Jun 21, 2006

I still don't understand your comments about assert. Any transformation tool that can insert an assert
attribute can insert code that calls abort.

I don't understand your comment about the loop invariant. There are no loop invariants in that loop, at
least not in the classical sense. What do you mean?

And, I wonder whether that statement about code blowup is true. How many
predicates would one need in the worst case? 10-20 for Linux kernel?

Code blowup isn't a matter of how many unique predicates you need, it's a question of how many
assert/assumes you're inserting. In the case of user/kernel, you need at least one intrinsic for every
pointer variable.

Furthermore, LLVM is lacking a language-independent attribute system. Assumes
would solve that problem too.

This is a feature, not a bug. :)

-Chris

@llvmbot
Collaborator Author

llvmbot commented Jun 21, 2006

I still don't understand your comments about assert. Any transformation tool
that can insert an assert
attribute can insert code that calls abort.

Yes, and everyone can use GCC as a front-end. :-)

The fact that something can be done doesn't mean that it should be done
that way.

I don't understand your comment about the loop invariant. There are no loop
invariants in that loop, at
least not in the classical sense. What do you mean?

There is an invariant, y1=GCD(x1,x2) && y3+y4=LCM(x1,x2). The point is
that you can use assumes to pass information from one pass (which might
be very non-trivial, like inferring loop invariants) to another
through byte-code annotation.
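[Editor's sketch] The claimed invariant can be checked mechanically. The initialization y1=x1, y2=x2, y3=x2, y4=0 is an assumption, since the loop as posted omits it; with it, y1*y3 + y2*y4 == x1*x2 holds throughout, so at exit y1 is the GCD and y3+y4 the LCM:

```c
#include <assert.h>

/* Runs the loop from the earlier comment; on exit *gcd == GCD(x1, x2)
 * and *lcm == LCM(x1, x2), for x1, x2 > 0. */
void gcd_lcm(int x1, int x2, int *gcd, int *lcm) {
    int y1 = x1, y2 = x2, y3 = x2, y4 = 0;  /* assumed initialization */
    while (y1 != y2) {
        while (y1 > y2) { y1 = y1 - y2; y4 = y4 + y3; }
        while (y2 > y1) { y2 = y2 - y1; y3 = y3 + y4; }
        assert(y1 * y3 + y2 * y4 == x1 * x2);  /* the invariant */
    }
    *gcd = y1;
    *lcm = y3 + y4;
}
```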

And, I wonder whether that statement about code blowup is true. How many
predicates would one need in the worst case? 10-20 for Linux kernel?

Code blowup isn't a matter of how many unique predicates you need, it's a
question of how many assert/assumes you're inserting. In the case of
user/kernel, you need at least one intrinsic for every pointer variable.

The code can be annotated with assumes/asserts but doesn't have to be.
In many circumstances adding one assert/assume per pointer is negligible
cost. There is a good reason why user and kernel attributes are
pushed into the kernel source. A good number of pointers is already
attributed, and I don't see anyone complaining.

Furthermore, LLVM is lacking a language-independent attribute system. Assumes
would solve that problem too.

This is a feature, not a bug. :)

Right, not a bug, but a lacking feature.

Domagoj

@lattner
Collaborator

lattner commented Jun 21, 2006

Yes, and everyone can use GCC as a front-end. :-)

I have no idea what this refers to.

The fact that something can be done doesn't mean that it should be done
that way.

Agreed, what's your point?

There is an invariant, y1=GCD(x1,x2) && y3+y4=LCM(x1,x2). The point is
that you can use assumes to pass information from one pass (which might
be very non-trivial, like inferring loop invariants) to another
through byte-code annotation.

Please please please start using standard terminology. LLVM already has a mechanism for doing this,
the Pass infrastructure. We already handle alias analysis and other high-level analyses with it, it works
well, and it would handle this better than your annotation system.

The code can be annotated with assumes/asserts but doesn't have to be.
In many circumstances adding one assert/assume per pointer is negligible
cost. There is a good reason why user and kernel attributes are
pushed into the kernel source. A good number of pointers is already
attributed, and I don't see anyone complaining.

You are conflating the ideas of annotations in the source code (which I don't care about) with the size of
the LLVM IR. Adding an intrinsic for every pointer ssa variable for the entire linux kernel would add
millions of intrinsics. This would dramatically slow down the compiler and have other major effects.
Yes, this is possibly tolerable to some people, but this does not feel like the right way to do this.

This is a feature, not a bug. :)
Right, not a bug, but a lacking feature.

No it isn't. Please see the extensive discussions in the llvmdev mailing list on this topic.

-Chris

@llvmbot
Collaborator Author

llvmbot commented Jun 21, 2006

The fact that something can be done doesn't mean that it should be done
that way.

Agreed, what's your point?

The point is that having an ASSERT intrinsic is a better way to handle
assertions than inserting test-exit blocks, and ASSERTs can be easily
enabled/disabled.

There is an invariant, y1=GCD(x1,x2) && y3+y4=LCM(x1,x2). The point is
that you can use assumes to pass information from one pass (which might
be very non-trivial, like inferring loop invariants) to another
through byte-code annotation.
Please please please start using standard terminology. LLVM already has a
mechanism for doing this, the Pass infrastructure. We already handle alias
analysis and other high-level analyses with it, it works well, and it would
handle this better than your annotation system.

I'm using standard terminology. You're missing the point. The Pass
infrastructure is powerful, but it's much handier to have assert/assume
in the bytecode. This solution transcends the problems with pass
ordering, dependencies between Module and Function passes, etc.

You are conflating the ideas of annotations in the source code (which I don't
care about) with the size of
the LLVM IR. Adding an intrinsic for every pointer ssa variable for the
entire linux kernel would add
millions of intrinsics. This would dramatically slow down the compiler and
have other major effects.
Yes, this is possibly tolerable to some people, but this does not feel like
the right way to do this.

Can you propose an alternative way to get that information into LLVM
form somehow? Or do you strongly believe that propagating such
language dependencies into LLVM is inappropriate?

BTW, you would have at most the number of predicates that is equal to
the number of attributes in the code. Mostly function parameters are
attributed, so that theory about millions of intrinsics just doesn't
hold.

No it isn't. Please see the extensive discussions in the llvmdev mailing list
on this topic.

Which topic exactly?

Domagoj

@lattner
Collaborator

lattner commented Jun 21, 2006

The point is that having an ASSERT intrinsic is a better way to handle
assertions than inserting test-exit blocks, and ASSERTs can be easily
enabled/disabled.

While I understand that this is your opinion, you haven't said anything that backs that up. Real code
can be enabled/disabled, specialized away, etc.

Pass infrastructure is powerful, but it's much handier to have assert/assume
in the bytecode. This solution transcends the problems with pass
ordering, dependencies between Module and Function passes, etc.

Not true. This doesn't have pass ordering problems because it's not powerful enough to express such
things. Dependencies between module passes and functions passes are an implementation detail of
the pass manager, also something that your representation just isn't powerful enough to run into.

Reading/writing pass results into .bc files would have exactly the same expressiveness as bolting on
new intrinsics.

Can you propose an alternative way to get that information into LLVM
form somehow?

Extend the IR.

Or you strongly believe that propagating such
language-dependencies into LLVM is inappropriate?

You're missing that this is inherently a language extension that you're talking about here. Why not
admit it and treat it like it is?

BTW, you would have at most the number of predicates that is equal to
the number of attributes in the code. Mostly function parameters are
attributed, so that theory about millions of intrinsics just doesn't
hold.

Incorrect. First, programmers annotate non-SSA variables. Once in SSA form, one non-SSA variable can
become many SSA variables. Further, many of these annotations can be hidden from the program in
typedefs and other forms, or derived pointers are inferred from base pointers. At the LLVM level all of
this is explicit.

Further, to illustrate that this is a language extension, LLVM transformations need to know what these
things are and what they mean. A trivial example, in the spirit above, is the SSA promotion pass. It
needs to know how to turn intrinsics talking about "non-SSA variables" into N intrinsics talking about
SSA variables. This is an example of how annotations are not the panacea people think they are.

Which topic exactly?

If you look, you'll find it. I would find it no quicker than you could. It has been discussed extensively
multiple times in the past.

-Chris

@llvmbot
Collaborator Author

llvmbot commented Jun 21, 2006

The point is that having an ASSERT intrinsic is a better way to handle
assertions than inserting test-exit blocks, and ASSERTs can be easily
enabled/disabled.

While I understand that this is your opinion, you haven't said anything that
backs that up. Real code
can be enabled/disabled, specialized away, etc.

I just don't see how handling an arbitrary amount of code and figuring
out what represents an assertion, what doesn't, and what type of assertion
it is, is simpler or more convenient than having a single
intrinsic/instruction/whatever...

Pass infrastructure is powerful, but it's much handier to have assert/assume
in the bytecode. This solution transcends the problems with pass
ordering, dependencies between Module and Function passes, etc.

Not true. This doesn't have pass ordering problems because it's not powerful
enough to express such
things. Dependencies between module passes and functions passes are an
implementation detail of
the pass manager, also something that your representation just isn't powerful
enough to run into.

Reading/writing pass results into .bc files would have exactly the same
expressiveness as bolting on
new intrinsics.

Being simple is a feature, not a bug; that's why asserts/assumes are
such a handy concept, though certainly not meant for describing
dependencies between passes :-D

Can you propose an alternative way to get that information into LLVM
form somehow?

Extend the IR.
You're missing that this is inherently a language extension that you're
talking about here. Why not
admit it and treat it like it is?

Look, I don't really care that much how assertions are supported.
For my purposes I might just as well insert calls to externally
defined functions. However, I think a more general solution would be
useful. As for assumes, I see no other way to get the info I'd like
to have through the front-end (restrict, user, kernel pointers).

BTW, you would have at most the number of predicates that is equal to
the number of attributes in the code. Mostly function parameters are
attributed, so that theory about millions of intrinsics just doesn't
hold.

Incorrect. First, programmers annotate non-ssa variables. Once in SSA form,
one non-ssa variable can
become many SSA variables. Further, many of these annotations can be hidden
from the program in
type defs and other forms, or derived pointers are inferred from base
pointers. At the LLVM level all of
this is explicit.

All true, but I simply don't see those millions of intrinsics.

Which topic exactly?

If you look, you'll find it. I would find it no quicker than you could. It
has been discussed extensively
multiple times in the past.

Let me rephrase:

No it isn't. Please see the extensive discussions in the llvmdev mailing list
on this topic.

What are you talking about? What should I look up? Discussions on
intrinsics/asserts/assumes/IR? What?

Domagoj

@lattner
Collaborator

lattner commented Jun 16, 2009

A simpler way to do this:
http://nondot.org/sabre/LLVMNotes/BuiltinUnreachable.txt

@llvmbot
Collaborator Author

llvmbot commented Nov 10, 2009

A simpler way to do this:
http://nondot.org/sabre/LLVMNotes/BuiltinUnreachable.txt

Would it not be more compact to use an undef label? That is:

br i1 %condition, label %labelIfTrue, label undef

rather than

br i1 %condition, label %labelIfTrue, label %labelIfFalse
labelIfFalse:
unreachable
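[Editor's sketch] The undef-label / unreachable-successor idea above is essentially what C front-ends expose through __builtin_unreachable(); a common assume-style macro built on it (the ASSUME name and the lookup example are illustrative):

```c
/* If cond is false, control reaches __builtin_unreachable(), so the
 * optimizer may treat cond as always true. Violating the condition at
 * run time is undefined behavior; callers must guarantee it holds. */
#define ASSUME(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)

static const int table[4] = { 10, 20, 30, 40 };

int lookup(int i) {
    ASSUME(i >= 0 && i < 4);  /* lets the optimizer drop range checks */
    return table[i];
}
```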

@dwblaikie
Collaborator

Assigning this to myself, in the absence of anyone else looking at it.

This kind of functionality blocks a few possible optimizations in code that
could be produced by clang (see llvm/llvm-bugzilla-archive#3100 for an example) & I'd like to see it
work.

Currently this functionality is blocked by the SimplifyCFG pass that runs early
on in LLVM's pipeline. The pass removes any unreachable blocks and the
conditions that branch to them. This means the fact expressed by the branch's
condition is not available to later optimizations.

Nick Lewycky apparently experimented with solving this bug, leaving unreachable
blocks in the CFG until a later pass but had some regressions with such a
change. If I can get a hold of his experiments I'll see if I can address those
issues.

@dwblaikie
Collaborator

Here's a few details that haven't been captured in the bug but were discussed on the mailing list, etc:

Mailing list (2/2/2010)
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20100201/thread.html#95476
Discussing Nick Lewycky's last attempt at a patch for this issue, delaying unreachable block removal until the CodeGenPrep pass (rather than up front in SimplifyCFG where it currently resides).

IRC (10/30/2011)
...(boiling down some of the facts Nick & I discussed on IRC today)
11:01 < nicholas> close; instcombine is too slow. i'm suggesting that we run instsimplify after unreachable removal, as a late cleanup pass.

From the last email in the thread, Jakob mentioned: "I would like this patch better if you would implement Chris' suggestion of DCE'ing unreachable blocks with conditions that are too gross for the optimizer. As it stands there can be arbitrarily complex instructions feeding into the condition."

Though I can't find any mention of the comment by Chris that Jakob is alluding to. On the one hand, I can see the merit in only working with sufficiently simple expressions and killing anything too complicated up-front/early; on the other, I'd really like to see what code would be like if we could benefit from arbitrary 'assertions' through this mechanism. I'm just not sure it'll be possible without hurting other optimizations.

I'll try it out by updating Nick's patch and adding his suggestion of using InstSimplify after removing unreachables as a way to see if that gets the code back on track. I'll update this bug with an updated patch and any progress/details.

@llvmbot
Collaborator Author

llvmbot commented Oct 24, 2013

This old bug seems to cover both the original rationale for creating 'unreachable' and more recent efforts to let LLVM make better use of 'unreachable' for optimization purposes.

The following simple example demonstrates the issue.
clang/llvm optimizes away the second comparison in test1 but not the one in test2. This is in contrast to GCC and MSVC, which both optimize away the comparison in test2 (where the equivalent MSVC code uses the __assume intrinsic).

void f1();
void f2();

void abort() __attribute__((noreturn));

void test1(int x) {
  if (x < 0) abort();
  // the following comparison is optimized away
  if (x < 0) f1(); else f2();
}

void test2(int x) {
  if (x < 0) __builtin_unreachable();
  // the following comparison is NOT optimized away
  if (x < 0) f1(); else f2();
}
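[Editor's sketch] A runnable variant of the example, with return values standing in for the calls to f1()/f2() and abort renamed to avoid redeclaring the library function (all names illustrative); in both functions the x < 0 arm is foldable by an optimizer that exploits the guard:

```c
#include <stdlib.h>

/* my_abort stands in for the example's noreturn abort() declaration. */
void my_abort(void) __attribute__((noreturn));
void my_abort(void) { abort(); }

/* test1-style guard: a call to a noreturn function */
int guarded_abort(int x) {
    if (x < 0) my_abort();
    return (x < 0) ? 1 : 2;  /* the x < 0 arm can fold away */
}

/* test2-style guard: __builtin_unreachable() */
int guarded_unreachable(int x) {
    if (x < 0) __builtin_unreachable();
    return (x < 0) ? 1 : 2;  /* likewise foldable */
}
```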

@fhahn
Contributor

fhahn commented Feb 14, 2020

I think this has been addressed in general.

The example code mentioned in the previous comment is optimised as expected, I think: https://godbolt.org/z/39cdJa

For test1 we generate a single comparison, and test2 is optimised to just call f2().

Given that, I think we can close this issue. Apologies if I missed something from the extensive discussion in the issue.

@mkurdej
Member

mkurdej commented Sep 2, 2020

It seems that the current trunk version of clang (as of 2020-09-02) does not optimize away the call to f1 in test2 (nb. clang 9 and 10 did optimize it). See: https://godbolt.org/z/YK13j1.
I'm not sure if this issue should be reopened or a new one created...

@fhahn
Contributor

fhahn commented Sep 3, 2020

It seems that the current trunk version of clang (as of 2020-09-02) does not
optimize away the call to f1 in test2 (nb. clang 9 and 10 did optimize it).
See: https://godbolt.org/z/YK13j1.
I'm not sure if this issue should be reopened or a new one created...

That is unfortunate! Given that it is a recent regression, it is probably better to create a new issue, as it will require tracking down where/when we regressed again.

@llvmbot llvmbot transferred this issue from llvm/llvm-bugzilla-archive Dec 3, 2021
This issue was closed.