2016 LLVM Cauldron
- What: The inaugural LLVM Cauldron
- When: September 8th, 2016
- Where: Hebden Bridge Town Hall - Hebden Bridge, Yorkshire, UK
We are pleased to announce the first LLVM Cauldron, to be held on Thursday
September 8th 2016 in Hebden Bridge, UK. This is the day before the GNU Tools
Cauldron being held at the same venue, so we hope to take advantage of the
high concentration of compiler enthusiasts and to encourage lots of
cross-pollination between communities.
This will be a one-day conference with a single talks track and a space for
breakout sessions, birds of a feather, and potentially tutorials. The meeting
is free to attend and open to anyone, whether a hobbyist, from academia, or from
industry, and regardless of previous experience with LLVM.
The conference keynote will be delivered by Andy Wingo, maintainer of the Guile Scheme implementation
and a committer to V8 and SpiderMonkey. His talk will focus on compilation using
persistent data structures, a technique used in Guile's intermediate representation.
Much like the LLVM Developers' meetings, we
believe the event will be of particular interest to the following people:
- Active developers of projects under the LLVM umbrella
(LLVM core, Clang, LLDB, libc++, compiler-rt, klee, dragonegg, lld, etc).
- Anyone interested in using these as part of another project.
- Compiler, programming language, and runtime enthusiasts.
- Those interested in using compiler and toolchain technology in novel
and interesting ways.
This inaugural LLVM Cauldron will form part of the Wuthering Bytes technology festival in
Hebden Bridge in the UK. Situated at the heart of the UK's
"Northern Powerhouse", Hebden Bridge is roughly equidistant from Liverpool,
Manchester, Sheffield, Leeds and Bradford. The Wuthering Bytes festival is a
10-day celebration of all things technological that are free and open source.
We are incredibly grateful to our event sponsors:
The LLVM Cauldron is organised by:
- Alex Bradbury
- Ed Jones
- Judith Jones
- Diana Picus
- Konstantina Mitropoulou
Please contact email@example.com if you have any issues or questions.
The full schedule of talks:
- Keynote: Optimizing with persistent data structures (Andy Wingo)
- LLV8: an LLVM-based third-tier JIT compiler for the V8 JavaScript engine (Dmitry Melnik)
- Using LLVM and Sulong for Language C Extensions (Chris Seaton)
- Scheduler for in-order processors - what's present and what's missing in LLVM? (Javed Absar)
- An Altruistic Processor - AAP (Simon Cook and Ed Jones)
- Syntax macros - a case study in extending Clang (Norman Rink)
- Accelerating Python code with Numba and LLVM (Graham Markall)
- How to add a new target to LLD (Peter Smith)
- Speeding up query execution in PostgreSQL using LLVM JIT (Dmitry Melnik)
- MyStorm (Ken Boak)
- Can we improve the experience of first-time LLVM contributors? (Alex Bradbury)
- AAPSim - Building a simulator using LLVM MC (Simon Cook)
- Generating Python & Ruby bindings from C++ (Jonathan B Coe)
We're thrilled that Andy Wingo will be presenting the
keynote on compilation with persistent data structures.
- Title: Optimizing with persistent data structures
- Abstract: Is there life beyond phi variables and basic blocks?
Andy will report on his experience using a new intermediate representation
for compiler middle-ends, "CPS soup". The CPS soup language represents
programs using Clojure-inspired maps, allowing optimizations to be neatly
expressed as functions from one graph to another. Using these persistent
data structures also means that the source program doesn't change while the
residual program is being created, eliminating one class of problems that
the optimizer writer has to keep in mind. Together we will look at some
example transformations from an expressiveness as well as a performance
point of view, and we will also cover some advantages which a traditional
SSA graph maintains over CPS soup.
- Bio: Andy loves compilers. He has co-maintained the Guile
implementation of Scheme since 2009, and has also worked on the V8 and
SpiderMonkey JavaScript engines. He works at Igalia, a consultancy
out of Spain that is organized as a
worker-owned cooperative. After all these years, Andy's favorite
compilers paper is still "Lambda, the ultimate GOTO".
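As a rough illustration of the idea in the abstract (this is my own Python sketch, not Wingo's Guile code), a program graph can be held in a read-only map from labels to terms, and an optimization then becomes a pure function from one graph to another, leaving the source graph untouched while the residual graph is built:

```python
from types import MappingProxyType

def program(**terms):
    # Read-only view of the graph: a stand-in for a persistent map.
    return MappingProxyType(dict(terms))

def fold_constants(graph):
    # An optimization as a function from one graph to another.
    out = {}
    for label, term in graph.items():
        if term[0] == "add" and all(isinstance(x, int) for x in term[1:]):
            out[label] = ("const", term[1] + term[2])
        else:
            out[label] = term
    return MappingProxyType(out)

g0 = program(k0=("add", 2, 3), k1=("call", "f", "k0"))
g1 = fold_constants(g0)
print(g0["k0"], g1["k0"])  # source unchanged: ('add', 2, 3) ('const', 5)
```

The point the abstract makes falls out directly: because `g0` cannot be mutated, the optimizer writer never has to worry about the source program changing underneath the transformation.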
Our main track talks consist of:
Scheduler for in-order processors - what's present and what's missing in LLVM?
Javed Absar - ARM Ltd
This talk will focus on developing instruction-scheduling
machine-models in LLVM for deeply-pipelined, in-order, multiple-issue
processors. While LLVM provides a neat Tablegen based interface for describing
the pipeline behavior of both in-order and out-of-order processors, during the
modelling phases we discovered that some hardware aspects of target processors
were much harder to model. The talk will first cover the features that were
relatively easy to capture such as issue-width, latency, forwarding and
resource-cycles. Then we will delve into aspects that turned out to be much
harder or more contrived to model, such as constraints on instruction issue,
in-stage forwarding, and variadic instructions. The talk will then focus on
potential solutions to those problems.
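Two of the "easy to capture" properties the abstract lists, issue width and latency, can be shown in a toy model (my own illustration, not LLVM's machine scheduler) of an in-order, dual-issue core:

```python
def schedule(insts, issue_width=2):
    """Issue instructions strictly in program order on an in-order core.
    Each instruction is (name, dest, srcs, latency)."""
    ready_at = {}                      # register -> cycle its value is ready
    cycle, issued_this_cycle = 0, 0
    issue_cycles = []
    for name, dest, srcs, latency in insts:
        earliest = max([ready_at.get(s, 0) for s in srcs] + [cycle])
        if earliest > cycle or issued_this_cycle == issue_width:
            # Stall for operands, or the issue slots this cycle are full.
            cycle, issued_this_cycle = max(earliest, cycle + 1), 0
        ready_at[dest] = cycle + latency
        issued_this_cycle += 1
        issue_cycles.append((name, cycle))
    return issue_cycles

prog = [("mul", "r0", ["r1", "r2"], 3),
        ("add", "r3", ["r0"], 1),   # stalls on the mul's 3-cycle latency
        ("sub", "r4", ["r5"], 1)]   # independent, but in-order: waits too
print(schedule(prog))               # [('mul', 0), ('add', 3), ('sub', 3)]
```

The `sub` is ready at cycle 0 but still issues at cycle 3, which is exactly the in-order behavior a good scheduler tries to hide by reordering instructions before emission.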
An Altruistic Processor (AAP)
Simon Cook and Ed Jones - Embecosm
An Altruistic Processor (AAP) is an open hardware 16-bit architecture for
deeply embedded applications running on small FPGA devices. It is somewhat
configurable, with a minimum of four 16-bit general registers, at least one
word-addressed code space of up to 16 Mwords, and at least one byte-addressed
data space of up to 64 kbytes.
The primary role of AAP is as a testbench and vehicle for improvement for LLVM.
It encapsulates features found in a wide range of popular embedded processors
which traditionally cause difficulties for compiler writers.
In this talk we will discuss the motivation behind this project, our experience
implementing and testing LLVM for this architecture, the architectural features
unique to our processor and how these interact with LLVM. We will then go on to
explain our experiences implementing AAPSim, an instruction set
simulator based on LLVM MC, and how we have used this for compiler regression
testing.
LLV8: an LLVM-based third-tier JIT compiler for V8
Dmitry Melnik - ISP RAS
LLV8 adds an LLVM-based third-tier JIT compiler to the V8 JavaScript engine.
The main idea is to lower V8's IR into LLVM bitcode,
and then use LLVM for optimization and code generation. The starting point is
V8's second-tier JIT, because it's a speculative JIT and its IR is already
"static enough" to allow such a transformation to be done effectively.
We'll discuss the challenges of using the LLVM JIT to compile a dynamic language.
Much effort was spent on making LLVM-generated code interact properly with V8
(relocations, safepoints, stack layout) and on supporting on-stack replacement
(implementation of OSR entry and deoptimization). We'll also discuss the
interaction of V8 and LLVM optimizations.
This project is still a work in progress. It currently supports a substantial
subset of JavaScript. On selected tests we achieve up to a 2x speedup, but most
tests suffer from increased compilation-time overhead. On average, performance
is almost on par with the original V8 when measured after "warmup".
The project is open source, and is available at https://github.com/ispras/llv8.
Using LLVM and Sulong for Language C Extensions
Chris Seaton - Oracle Labs
Many dynamic languages support extensions
written in C, either for speed or to create interfaces to native libraries.
Ironically, these extensions can hold back performance of the languages
themselves because the native interfaces expose implementation details about
how the language was first implemented, such as the layout of data structures.
In JRuby+Truffle, an implementation of Ruby, we are using the Sulong LLVM
bitcode interpreter to run C extensions on the JVM. By combining LLVM's static
optimizations with dynamic compilation, Sulong is fast, but Sulong also gives
us a powerful new tool - it allows us to abstract from normal C semantics and
to appear to provide the same native API while actually mapping it to our own
alternative data structures and implementation.
We'll demonstrate Sulong and
how we're using it to implement Ruby C extensions.
Syntax macros - a case study in extending Clang
Norman Rink - TU Dresden
Besides being an incredibly successful production compiler, LLVM/Clang has also
enabled an enormous number of academic research projects that are centred on
adding language extensions to C/C++. However, since Clang is not primarily
designed as a research platform, adding language extensions is a tedious task
and work often has to be repeated for new projects. In this talk I will discuss
an implementation of generic syntax macros and the challenges encountered in
extending Clang. Special emphasis will be put on the generality of these
challenges. This talk will be of interest to anyone who (a) would like to know
more about the structure of Clang or (b) has already thought about extending it.
Accelerating Python code with Numba and LLVM
Graham Markall - Embecosm
Numba is an LLVM-based JIT compiler for Python that is focused on numerical
computing and delivers execution speed on a par with languages more
traditionally used in scientific computing, such as Fortran and C++. This talk
discusses the implementation of Numba, in particular how to go from an
interpreted, dynamically-typed language to a compiled, statically-typed
implementation. There will be some discussion of the CUDA and HSA backends and
how the differences between LLVM versions in these backends are managed.
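To illustrate the step the abstract describes, going from a dynamically-typed function to statically-known types, here is a deliberately simplified, hypothetical type-inference pass (my own sketch; Numba's real inference works on Python bytecode and is far more general):

```python
import ast

def infer_types(source, arg_types):
    """Toy forward type inference: seed the arguments with known types,
    then propagate through assignments of binary operations."""
    func = ast.parse(source).body[0]
    env = dict(zip((a.arg for a in func.args.args), arg_types))
    for node in ast.walk(func):
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.BinOp):
            def ty(n):
                if isinstance(n, ast.Name):
                    return env[n.id]
                if isinstance(n, ast.Constant):
                    return type(n.value).__name__
            lt, rt = ty(node.value.left), ty(node.value.right)
            # float "wins" over int, mirroring numeric promotion
            env[node.targets[0].id] = "float" if "float" in (lt, rt) else lt
    return env

src = "def f(a, b):\n    c = a * b\n    d = c + 1.0\n    return d\n"
print(infer_types(src, ["int", "int"]))
# {'a': 'int', 'b': 'int', 'c': 'int', 'd': 'float'}
```

Once every value has a concrete type like this, a statically-typed IR (LLVM IR, in Numba's case) can be emitted instead of interpreting dynamically-typed bytecode.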
How to add a new target to LLD
Peter Smith - Linaro
Drawing on recent experience of adding ARM support to LLD,
we'll look at the steps involved in adding a new target to
LLD. Starting with a brief introduction to LLD's functionality and design, we
will go through the customisation points where target-specific logic can be
added, and will touch on how to add functionality where your target
needs individual attention.
This will be of general interest to people wanting to know a bit more about
LLD's current status and capabilities, and especially interesting for those
wanting to know how a linker is implemented.
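One piece of target-specific logic every linker port must supply is relocation processing. As a hedged sketch (my own Python illustration, not LLD's C++ code), here is how a 32-bit PC-relative ELF-style relocation, computed as S + A - P, would be applied to a section's bytes:

```python
import struct

def apply_rel32(section, offset, symbol_addr, section_addr):
    """Resolve a 32-bit PC-relative relocation: value = S + A - P."""
    addend = struct.unpack_from("<i", section, offset)[0]  # implicit addend A
    place = section_addr + offset                          # P: address patched
    value = symbol_addr + addend - place                   # S + A - P
    struct.pack_into("<i", section, offset, value)

code = bytearray(8)
struct.pack_into("<i", code, 4, -4)   # hypothetical call with addend -4
apply_rel32(code, 4, symbol_addr=0x2000, section_addr=0x1000)
print(hex(struct.unpack_from("<i", code, 4)[0]))  # 0x2000 - 4 - 0x1004 = 0xff8
```

A target port mostly consists of a table of such relocation kinds plus the target's range and alignment checks, which is where the "customisation points" the talk mentions come in.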