Open LLVM Projects
This document is meant to be a sort of "big TODO list" for LLVM. Each
project in this document is something that would be useful for LLVM to have, and
would also be a great way to get familiar with the system. Some of these
projects are small and self-contained and may be implemented in a couple of
days; others are larger. Several of these projects may lead to interesting
research projects in their own right. In any case, we welcome all
contributions.
If you are thinking about tackling one of these projects, please send a mail
to the LLVM
Developer's mailing list, so that we know the project is being worked on.
Additionally, this is a good way to get more information about a specific project
or to suggest other projects to add to this page.
The projects on this page are open-ended. More specific projects are
filed as unassigned enhancements in the
LLVM bug tracker. See the list of currently outstanding issues if you wish to help improve LLVM.
In addition to hacking on the main LLVM project, LLVM has several subprojects,
including Clang and VMKit. If you are interested in working on these, please
see their "Open projects" page:
Improvements to the current infrastructure are always very welcome and tend
to be fairly straightforward to implement. Here are some of the key areas that
can use improvement...
The LLVM bug tracker occasionally
has "code-cleanup" bugs filed in it.
Taking one of these and fixing it is a good way to get your feet wet in the
LLVM code and discover how some of its components work. Some of these include
some major IR redesign work, which is high-impact because it can simplify a lot
of things in the optimizer.
Additionally, there are performance problems in LLVM that need to be
fixed. These are marked with the slow-compile keyword. Use
this Bugzilla query
to find them.
The llvm-test testsuite is
a large collection of programs we use for nightly testing of generated code
performance, compile times, correctness, etc. Having a large testsuite gives
us a lot of coverage of programs and enables us to spot and improve any
problem areas in the compiler.
One extremely useful task, which does not require in-depth knowledge of
compilers, would be to extend our testsuite to include new programs and benchmarks.
In particular, we are interested in cpu-intensive programs that have few
library dependencies, produce some output that can be used for correctness
testing, and that are redistributable in source form. Many different programs
are suitable; for example, see this list for some ideas.
We are always looking for new testcases and benchmarks for use with LLVM. In
particular, it is useful to try compiling your favorite C source code with LLVM.
If it doesn't compile, try to figure out why or report it to the llvm-bugs list. If you
get the program to compile, it would be extremely useful to convert the build
system to be compatible with the LLVM Programs testsuite so that we can check it
into SVN and the automated tester can use it to track progress of the
compiler. When testing code, try running it with a variety of optimizations, and with
all the back-ends: CBE, llc, and lli.
Find benchmarks either using our test results or on your own,
where LLVM code generators do not produce optimal code or simply where another
compiler produces better code. Try to minimize the test case that demonstrates
the issue. Then, either submit a
bug with your testcase and the code that LLVM produces vs. the code that it
should produce, or even better, see if you can improve the code
generator and submit a patch. The basic idea is that it's generally quite easy
for us to fix performance problems if we know about them, but we generally don't
have the resources to go finding out why performance is bad.
The LNT perf database has some nice features, like moving-average detection,
standard deviations, variations, etc. But the report page puts too much emphasis
on individual variation, where noise can be higher than signal.
The first part of the project would be to create an analysis tool that would
track moving averages and report:
- If the current result is higher/lower than the previous moving average by
more than (configurable) S standard deviations
- If the current moving average deviates from the overall average by more than
S standard deviations
- If the last A moving averages are in constant increase/decrease of more
than P percent
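The first analysis above can be sketched in a few lines. The window size, the threshold S, and the function name here are illustrative choices for this sketch, not part of LNT:

```python
from statistics import mean, stdev

def flag_regression(samples, window=5, s=2.0):
    """Return True when the newest sample deviates from the moving
    average of the previous `window` samples by more than `s` standard
    deviations of that window (hypothetical thresholding scheme)."""
    if len(samples) < window + 1:
        return False                     # not enough history yet
    prev = samples[-window - 1:-1]       # the window before the newest sample
    ma, sd = mean(prev), stdev(prev)
    # Guard sd == 0: a perfectly flat window has no usable noise estimate.
    return sd > 0 and abs(samples[-1] - ma) > s * sd
```

A real tool would also track the direction of the change (regression vs. improvement) and feed the verdict to the dashboard described in the second part.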
The second part would be to create a web page which would show all related
benchmarks (possibly configurable, like a dashboard) and show the basic statistics
with red/yellow/green colour codes to show status and links to more detailed
analysis of each benchmark.
A possible third part would be the ability to automatically cross-reference
different builds, so that if you group them by architecture/compiler/number
of CPUs, this automated tool would understand which changes are more common
to one particular group.
- Completely rewrite bugpoint. In addition to being a mess, bugpoint suffers
from a number of problems where it will "lose" a bug when reducing. It should
be rewritten from scratch to solve these and other problems.
- Add support for
transactions to the PassManager for improved bugpoint.
- Improve bugpoint to
support running tests in parallel on MP machines.
- Add MC assembler/disassembler and JIT support to the SPARC port.
- Move more optimizations out of the -instcombine pass and into
InstructionSimplify. The optimizations that should be moved are those that
do not create new instructions, for example turning sub i32 %x, 0
into %x. Many passes use InstructionSimplify to clean up code as
they go, so making it smarter can result in improvements all over the place.
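As a sketch of the contract involved (the names here are hypothetical, not LLVM's actual C++ API): an InstructionSimplify-style fold may only return an already-existing value, never build a new instruction:

```python
def simplify(op, lhs, rhs):
    """Return an existing operand if the instruction folds to one,
    else None -- mirroring the rule that InstructionSimplify never
    creates new instructions (illustrative sketch, not LLVM code)."""
    if op == "sub" and rhs == 0:
        return lhs          # sub %x, 0 -> %x
    if op == "mul" and rhs == 1:
        return lhs          # mul %x, 1 -> %x
    if op == "and" and lhs == rhs:
        return lhs          # and %x, %x -> %x
    return None             # no fold known; caller keeps the instruction
```

A fold like `add %x, %x -> shl %x, 1` would *not* belong here, because it requires materializing a new instruction; that kind of transform stays in -instcombine.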
Sometimes creating new things is more fun than improving existing things.
These projects tend to be more involved and perhaps require more work, but can
also be very rewarding.
We have a strong base for development of
both pointer analysis based optimizations as well as pointer analyses
themselves. It seems natural to want to take advantage of this:
- The globals mod/ref pass basically does really simple and cheap
bottom-up context-sensitive alias analysis. Keeping it simple and cheap
is really important, but there are simple things that we could do to
better capture the effects of functions that access pointer
arguments. This can be really important for C++ methods, which spend
lots of time accessing pointers off 'this'.
- The alias analysis API supports the getModRefBehavior method, which
allows the implementation to give a detailed analysis of the functions.
For example, we could implement full knowledge
of printf/scanf side effects, which would be useful. This feature is in
place but not being used for anything right now.
- We need some way to reason about errno. Consider a loop like this:
x += sqrt(loopinvariant);
We'd like to transform this into:
t = sqrt(loopinvariant);
x += t;
This transformation is safe, because the value of errno isn't
otherwise changed in the loop and the exit value of errno from the
loop is the same. We currently can't do this, because sqrt clobbers
errno, so it isn't "readonly" or "readnone" and we don't have a good
way to model this.
The hard part of this project is figuring out how to describe errno
in the optimizer: each libc seems to #define errno to something
different. Maybe the solution is to have a __builtin_errno_addr() or
something and change the sys headers to use it.
- There are lots of ways to optimize out and improve handling of
memcpy/memset.
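The errno reasoning above can be illustrated with a toy legality check. The effect classes and the function below are invented for illustration; they are not LLVM's actual model:

```python
# Hypothetical effect summaries for some library calls: "readnone" calls
# can always be hoisted, "errno" calls write only errno, "unknown" calls
# pin surrounding code in place.
EFFECTS = {"sqrt": "errno", "sin": "errno", "printf": "unknown"}

def can_hoist(call, loop_reads_errno, loop_has_other_errno_writes):
    """A loop-invariant call that clobbers only errno can be hoisted out
    of a loop when the loop neither reads errno nor writes it elsewhere:
    the exit value of errno is then the same either way. This is a sketch
    of the reasoning in the text, not an LLVM pass."""
    eff = EFFECTS.get(call, "unknown")
    if eff == "readnone":
        return True
    if eff == "errno":
        return not loop_reads_errno and not loop_has_other_errno_writes
    return False
```

Under this model the `x += sqrt(loopinvariant)` loop hoists exactly when nothing else in the loop touches errno, which matches the transformation described above.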
We now have a unified infrastructure for writing profile-guided
transformations, which will work either at offline-compile-time or in the JIT,
but we don't have many transformations. We would welcome new profile-guided
transformations as well as improvements to the current profiling system.
Ideas for profile-guided transformations:
- Superblock formation (with many optimizations)
- Loop unrolling/peeling
- Profile directed inlining
- Code layout
Improvements to the existing support:
- The current block and edge profiling code that gets inserted is very simple
and inefficient. Through the use of control-dependence information, many fewer
counters could be inserted into the code. Also, if the execution count of a
loop is known to be a compile-time or runtime constant, all of the counters in
the loop could be avoided.
- You could implement one of the "static profiling" algorithms which analyze a
piece of code and make educated guesses about the relative execution frequencies
of various parts of the code.
- You could add path profiling support, or adapt the existing LLVM path
profiling code to work with the generic profiling interfaces.
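As one illustration of how "many fewer counters" is possible: the classic observation behind efficient edge profiling (Knuth; Ball-Larus) is that only edges outside a spanning tree of the CFG need counters, since tree-edge counts follow from flow conservation. A toy sketch, treating the CFG as undirected via union-find:

```python
def edges_needing_counters(nodes, edges):
    """Return the edges that need instrumentation: those closing a cycle
    in a spanning tree of the CFG. Counts for the remaining (tree) edges
    can be recovered afterwards by flow conservation at each node.
    Sketch only; real profilers work on the directed CFG plus a virtual
    exit->entry edge."""
    parent = {n: n for n in nodes}
    def find(n):                          # union-find with path halving
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    instrumented = []
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            instrumented.append((a, b))   # cycle edge: needs a counter
        else:
            parent[ra] = rb               # tree edge: count is derivable
    return instrumented
```

On a diamond CFG (A branches to B and C, both rejoin at D), this instruments a single edge instead of all four.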
LLVM aggressively optimizes for performance, but does not yet optimize for code size.
With a new ARM backend, there is increasing interest in using LLVM for embedded systems
where code size is more of an issue.
Someone interested in working on implementing code compaction in LLVM might want to read
this article, describing using
link-time optimizations for code size optimization.
- Implement a Loop Dependence Analysis Infrastructure
- Design some way to represent and query dependence analysis
- Value range propagation pass
- More fun with loops:
- Type inference (aka. devirtualization)
- Value assertions (also PR810).
- Merge the delay slot filling logic that is duplicated in (at least)
the Sparc and Mips backends into a single target-independent pass.
Likewise, the branch shortening logic in several targets should be merged
together into one pass.
- Implement 'stack slot coloring' to allocate two frame indexes to the same
stack offset if their live ranges don't overlap. This can reuse a bunch of
analysis machinery from LiveIntervals. Making the stack smaller is good
for cache use and very important on targets where loads have limited
displacement like ppc, thumb, mips, sparc, etc. This should be done as
a pass before prolog epilog insertion. This is now done for register
allocator temporaries, but not for allocas.
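The core check is a simple interval problem: two frame indexes may share a stack offset exactly when their live ranges do not overlap. A greedy sketch of that idea (not LLVM's LiveIntervals machinery):

```python
def color_stack_slots(live_ranges):
    """Greedily assign frame indexes to stack slots, reusing a slot
    whenever the candidate's half-open live range [start, end) does not
    overlap any range already placed there. Illustrative sketch."""
    slots = []        # slots[i] = list of live ranges assigned to slot i
    assignment = {}   # frame index -> slot number
    for idx, (start, end) in sorted(live_ranges.items(), key=lambda kv: kv[1]):
        for slot, ranges in enumerate(slots):
            # No overlap with anything already in this slot?
            if all(end <= s or e <= start for s, e in ranges):
                ranges.append((start, end))
                assignment[idx] = slot
                break
        else:
            slots.append([(start, end)])  # open a fresh slot
            assignment[idx] = len(slots) - 1
    return assignment
```

For example, frame indexes live over [0,4) and [4,8) share one slot, while one live over [2,6) conflicts with both and gets its own, shrinking three slots to two.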
- Implement 'shrink wrapping', which is the intelligent placement of callee
saved register save/restores. Right now PrologEpilogInsertion always saves
every (modified) callee save reg in the prolog and restores it in the
epilog. However, some paths through a function (e.g. an early exit) may
not use all regs. Sinking the save down the CFG avoids useless work on
these paths. Work has started on this, please inquire on llvmdev.
- Implement interprocedural register allocation. The CallGraphSCCPass can be
used to implement a bottom-up analysis that will determine the *actual*
registers clobbered by a function. Use the pass to fine tune register usage
in callers based on *actual* registers used by the callee.
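The bottom-up computation a CallGraphSCCPass would perform can be sketched directly; the data layout here is invented for illustration, and recursion/SCCs are ignored:

```python
def clobbered_sets(direct, calls, order):
    """Compute the *actual* registers clobbered by each function:
    its own register writes plus everything its callees clobber.
    `order` must list callees before callers (bottom-up over the
    call graph). Sketch; a real pass would collapse SCCs first."""
    result = {}
    for f in order:
        regs = set(direct.get(f, ()))           # registers f writes itself
        for callee in calls.get(f, ()):
            regs |= result.get(callee, set())   # inherit callee clobbers
        result[f] = regs
    return result
```

A caller could then avoid saving registers that, per this summary, the callee never actually touches, rather than assuming the full ABI clobber set.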
- Add support for 16-bit x86 assembly and real mode to the assembler and
disassembler, for use by BIOS code. This includes both 16-bit instruction
encodings as well as privileged instructions (lgdt, lldt, ltr, lmsw, clts,
invd, invlpg, wbinvd, hlt, rdmsr, wrmsr, rdpmc, rdtsc) and the control and
debug registers.
- Port the Bigloo
Scheme compiler, from Manuel Serrano at INRIA Sophia-Antipolis, to
output LLVM bytecode. It seems that it can already output .NET
bytecode, JVM bytecode, and C, so LLVM would ostensibly be another good
target.
- Write a new frontend for some other language (Java? OCaml? Forth?)
- Random test vector generator: Use a C grammar to generate random C code,
run it through llvm-gcc, then run a random set of passes on it using opt.
Try to crash opt. When
opt crashes, use bugpoint to reduce the
test case and post it to a website or mailing list. Repeat ad infinitum.
- Add sandbox features to the Interpreter: catch invalid memory accesses,
potentially unsafe operations (access via arbitrary memory pointer) etc.
- Port Valgrind to use LLVM code generation
and optimization passes instead of its own.
- Write an LLVM IR-level debugger (extend the Interpreter?)
- Write an LLVM Superoptimizer. It would be interesting to take ideas from
this superoptimizer for x86:
paper #1 and paper #2 and adapt them to run on LLVM code.
It would seem that operating on LLVM code would save a lot of time
because its semantics are much simpler than x86. The cost of operating
on LLVM is that target-specific tricks would be missed.
The outcome would be a new LLVM pass that subsumes at least the
instruction combiner, and probably a few other passes as well. Benefits
would include not missing cases missed by the current combiner and also
more easily adapting to changes in the LLVM IR.
All previous superoptimizers have worked on linear sequences of code.
It would seem much better to operate on small subgraphs of the program
dependence graph.
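The brute-force core of any superoptimizer is exhaustive enumeration with an equivalence check. A toy sketch over unary integer ops, using testing rather than proof for equivalence (real superoptimizers would verify the winner):

```python
from itertools import product

def superopt(target, ops, max_len=2, tests=range(-4, 5)):
    """Enumerate instruction sequences of increasing length and return
    the shortest one matching `target` on all test inputs, or None.
    `ops` maps op names to unary functions; everything here is a toy."""
    for n in range(1, max_len + 1):
        for seq in product(ops.items(), repeat=n):
            def run(x, seq=seq):          # execute the candidate sequence
                for _, fn in seq:
                    x = fn(x)
                return x
            if all(run(x) == target(x) for x in tests):
                return [name for name, _ in seq]
    return None
```

For instance, given only `neg` and `inc`, it discovers that bitwise NOT (`~x`, i.e. `-x - 1`) is `inc` followed by `neg`. Operating on LLVM IR instead of x86 would shrink the search space, at the cost of missing target-specific tricks, exactly the trade-off described above.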
LLVM Compiler Infrastructure
Last modified: $Date: 2009/12/16 09:03:23 $