2022 European LLVM Developers' Meeting

About

The LLVM Developers' Meeting is a twice-yearly gathering of the entire LLVM Project community. The conference is organized by the LLVM Foundation and many volunteers within the LLVM community. Developers and users of LLVM, Clang, and related subprojects will enjoy attending interesting talks, impromptu discussions, and networking with the many members of our community. Whether you are new to the LLVM project or a long-time member, there is something for every attendee.

To see the agenda, speakers, and register, please visit the Event Site.

What can you expect at an LLVM Developers' Meeting?

Technical Talks
These 20-30 minute talks cover topics ranging from core infrastructure to projects built on LLVM's infrastructure. Attendees will take away technical information pertinent to their own projects or of general interest.
Tutorials
Tutorials are 45-50 minute sessions that dive deep into a technical topic. Expect in-depth examples and explanations.
Lightning Talks
These fast 5-minute talks give you a taste of a project or topic. Attendees will hear about a wide range of topics and will probably leave wanting to learn more.
Quick Talks
These 10-minute talks dive a bit deeper into a topic than a Lightning Talk, but not as deep as a Technical Talk.
Student Technical Talks
Graduate or Undergraduate students present their work using LLVM.
Panels
Panel sessions are guided discussions about a specific topic. The panel consists of ~3 developers who discuss a topic through prepared questions from a moderator. The audience is also given the opportunity to ask questions of the panel.

What types of people attend?

  • Active developers of projects under the LLVM umbrella (LLVM core, Clang, LLDB, libc++, compiler-rt, flang, lld, MLIR, etc.).
  • Anyone interested in using these as part of another project.
  • Students and researchers.
  • Compiler, programming language, and runtime enthusiasts.
  • Those interested in using compiler and toolchain technology in novel and interesting ways.

The LLVM Developers' Meeting strives to be the best conference to meet other LLVM developers and users.

For future announcements or questions, please ask on the LLVM Discourse forums in the Community - EuroLLVM category.

Program (Slides & Videos)

Keynotes

MCA Daemon: Hybrid Throughput Analysis Beyond Basic Blocks [ Video ] [ Slides ]
Min-Yih Hsu, University of California, Irvine

Estimating instruction-level throughput (for example, predicting the cycle counts) is critical for many applications that rely on tightly calculated and accurate timing bounds. In this talk, we will present a new throughput analysis tool, MCA Daemon (MCAD). It is built on top of LLVM MCA and combines the advantages of both static and dynamic throughput analyses, providing a powerful, fast, and easy-to-use tool that scales up with large-scale programs in the real world.

Finding Missed Optimizations Through the Lens of Dead Code Elimination [ Video ] [ Slides ]
Theodoros Theodoridis, ETH Zurich

Compilers are foundational software development tools and incorporate increasingly sophisticated optimizations. Due to their complexity, it is difficult to systematically identify opportunities for improving them. Indeed, the automatic discovery of missed optimizations has been an important and significant challenge. We tackle this challenge by introducing a novel, effective approach that, in a simple and general manner, automatically identifies a wide range of missed optimizations. Our core insight is to leverage dead code elimination (DCE) to both analyze how well compilers optimize code and identify missed optimizations: (1) insert "optimization markers" in the basic blocks of a given program, (2) compute the program's live/dead basic blocks using the "optimization markers", and (3) identify missed optimizations from how well compilers eliminate dead blocks. We have implemented and open-sourced our approach in our tool DEAD. DEAD can automatically find missed optimizations and regressions and generate minimal test cases. We reported over a hundred such bugs in LLVM and GCC, most of which have already been confirmed or fixed, demonstrating our work's strong practical utility.
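The three-step marker workflow described above can be sketched in a handful of lines. This is a toy model for illustration only, not the real DEAD tool: the block names, the marker scheme, and the two "optimizers" (simple condition-folding predicates) are all invented here.

```python
# Toy sketch of the DCE-marker idea (assumption: simplified model, not DEAD).
# Step 1: insert a unique marker into each basic block.
# Step 2: run two "optimizers" and see which markers survive.
# Step 3: a marker removed by one optimizer but kept by the other flags a
#         potential missed optimization in the weaker one.

def insert_markers(blocks):
    """Step 1: tag every basic block with a unique marker id."""
    return {name: f"DCEMarker{i}" for i, name in enumerate(blocks)}

def surviving_markers(blocks, markers, can_prove_dead):
    """Step 2: keep markers only for blocks the optimizer fails to remove."""
    return {markers[b] for b, cond in blocks.items() if not can_prove_dead(cond)}

# Each block is guarded by a condition; a block is dead iff its guard is
# statically false.
blocks = {"entry": "true", "if_x_lt_0": "x*x < 0", "loop": "n > 0"}

weak   = lambda cond: cond == "false"                # folds only literal 'false'
strong = lambda cond: cond in ("false", "x*x < 0")   # also knows x*x >= 0

markers = insert_markers(blocks)
# Step 3: markers the weak optimizer keeps but the strong one eliminates.
missed = surviving_markers(blocks, markers, weak) - \
         surviving_markers(blocks, markers, strong)
print(missed)  # {'DCEMarker1'} -- the provably-dead 'x*x < 0' block
```

The real tool performs this comparison on C programs across compilers and compiler versions, then reduces the surviving cases to minimal bug reports.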

Tutorials

Precise Polyhedral Analyses For MLIR using the FPL Presburger Library [ Video ] [ Slides ]
Arjun Pitchanathan, University of Edinburgh

Since March 2022, MLIR has shipped a Presburger library, FPL, that provides native support for a full set of polyhedral analysis operators. This functionality has already been deployed in the loop fusion pass in the Affine dialect and has also been used to enable better dependence analysis in CIRCT. In this tutorial, we demonstrate several case studies showing how to use FPL's Presburger arithmetic functionality in MLIR. Reasoning precisely about sets of integers enables accurate analytical cache models and brings powerful transformations to loop optimizers for ML and HPC, formal verification, and hardware design. Despite many efforts to use Presburger arithmetic in LLVM, its use has thus far been confined to optional extensions like Polly due to the need for external Presburger libraries (e.g., isl) that were not part of the core compiler toolchain. In the course of developing FPL we worked closely with the LLVM community to make FPL and Presburger arithmetic available in the MLIR upstream repositories. In this talk, we give a detailed walkthrough and demonstrate how they can be used. Our objective is to overcome the longstanding 'vendor lock-in' and inflexibility of polyhedral toolchains by working with the LLVM community to provide targeted analyses that enhance the native components of the LLVM ecosystem.
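To give a flavor of the kind of question Presburger arithmetic answers, here is a tiny dependence-analysis check done by brute-force enumeration. This is purely illustrative: FPL reasons symbolically over parametric integer sets, whereas the sketch below enumerates a small fixed range, and the access patterns are made up.

```python
# Dependence analysis by enumeration (assumption: illustrative only; FPL
# answers the same question symbolically, for arbitrary/parametric bounds).
N = 8

# In a loop `for i in range(N): a[i+1] = f(a[i])`:
writes = {(i, i + 1) for i in range(N)}   # iteration i writes a[i+1]
reads  = {(j, j)     for j in range(N)}   # iteration j reads  a[j]

# A (flow) dependence exists from iteration i to a later iteration j
# whenever i writes the element that j reads.
dep = {(i, j) for (i, w) in writes for (j, r) in reads if w == r and i < j}

print(sorted(dep)[:3])  # [(0, 1), (1, 2), (2, 3)] -- each iteration feeds the next
```

A Presburger library represents `writes`, `reads`, and `dep` as integer sets and relations constrained by affine (in)equalities, so the same conclusion holds for any `N` without enumeration.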

Technical Talks

Prototyping a Compiler for Homomorphic Encryption Using MLIR [ Video ] [ Slides ]
Juneyoung Lee, CryptoLab

In this talk, we introduce a prototype of a compiler for homomorphic encryption using MLIR. Homomorphic encryption is an encryption scheme in cryptography that provides a set of operations on encrypted data. Implementations of HE operations typically contain many loops over large arrays representing polynomials. Successfully applying loop optimizations can significantly boost the performance of the operations. Our prototype can compile decryption/encryption, and the generated code is up to 40% faster when run multi-threaded than the C++ implementation written using Intel HEXL.
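As a flavor of what "operations on encrypted data" means, here is a deliberately trivial additively homomorphic scheme (a one-time pad over a modulus). This is not the scheme the talk targets and offers no real security; it only illustrates the homomorphic property that sums of ciphertexts decrypt to sums of plaintexts.

```python
# Toy additively homomorphic "encryption" (assumption: illustrative only;
# real HE schemes like those behind Intel HEXL are far more involved).
import random

MOD = 2**16  # toy plaintext/ciphertext modulus

def keygen():
    return random.randrange(MOD)

def encrypt(m, k):
    return (m + k) % MOD

def decrypt(c, k):
    return (c - k) % MOD

# Homomorphic addition: add ciphertexts directly, never decrypting the
# operands; decrypting the result with the summed keys yields the sum.
k1, k2 = keygen(), keygen()
c = (encrypt(20, k1) + encrypt(22, k2)) % MOD
print(decrypt(c, (k1 + k2) % MOD))  # 42
```

In practical schemes each ciphertext is a pair of large polynomials, which is why the hot loops the abstract mentions iterate over big coefficient arrays.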

Lightweight Instrumentation using Debug Information [ Video ] [ Slides ]
Ellis Hoag & Kyungwoo Lee, Meta

Profile-Guided Optimization (PGO) has been shown to be useful not only in CPU-bound scenarios, but also in the mobile space, where app size is a dominating issue. Collecting profiles on size-constrained devices is challenging because the instrumented binary can double in size. Recently, we introduced Lightweight Instrumentation, which greatly reduces the instrumented binary's size overhead by using debug info. In this talk we will describe how we reduced this overhead, how to create a minimal instrumented binary for function-entry coverage only, and our future plans in this space.
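Function-entry coverage, the minimal form of instrumentation mentioned above, can be modeled as one boolean flag per function that is set on entry. The decorator-based sketch below is a hypothetical illustration of that idea, not how LLVM's instrumentation is actually implemented (which works at the IR level).

```python
# Toy model of function-entry coverage (assumption: illustration only).
# One flag per function; the "instrumentation" flips it on first entry.
entered = {}

def instrument(fn):
    entered[fn.__name__] = False
    def wrapper(*args, **kwargs):
        entered[fn.__name__] = True   # the only runtime cost: one store
        return fn(*args, **kwargs)
    return wrapper

@instrument
def hot():
    return 1

@instrument
def cold():
    return 2

hot()
print(entered)  # {'hot': True, 'cold': False}
```

The appeal for size-constrained targets is that a single flag per function is vastly cheaper than full edge or block counters.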

Custom benefit-driven inliner in Falcon JIT [ Video ] [ Slides ]
Artur Pilipenko, Azul

This talk continues a series of technical talks about the internals of Azul's Falcon compiler. Inlining is a critically important compiler optimization. It is especially important in Java because of the extensive use of object-oriented abstractions. Introducing a custom downstream inliner in Falcon enabled great performance and compile-time improvements. In this session, we will give an overview of the inliner we implemented. We will talk about benefit-driven inlining heuristics, prioritization, a combination of top-down and bottom-up traversal orders, and clustering.

LLD for Mach-O: The Journey [ Video ] [ Slides ]
Jez Ng, Meta

It's been more than two years since we started working on the Mach-O back-end of LLD. We can now successfully link a wide range of large programs, including Chromium as well as LLVM itself. LLD links programs roughly 2x faster than ld64, greatly improving the developer experience on large projects. In this talk, we'll go over some of the challenges we've faced, the reasoning behind the design decisions we've made, as well as our future plans for the linker.

How to write a new compiler driver? The LLVM Flang perspective. [ Video ] [ Slides ]
Andrzej Warzynski, Arm

When LLVM Flang (then "F18") was merged into LLVM as a sub-project in 2020, it had no compiler driver. Two years later, Flang enjoys a driver that integrates all components of the Flang sub-project, can generate executables, and shares its driver logic with Clang. In this presentation I will walk you through our journey.

Developing an LLVM backend for the KV3 Kalray VLIW core [ Video ] [ Slides ]
Cyril Six, Kalray

Kalray is a semiconductor company. We design and produce a manycore architecture with 6-issue VLIW cores. We started writing a backend for our VLIW 3 years ago. This talk will describe our architecture and relate our experience in the development of its LLVM backend, including the challenges we faced, and a comparison between GCC and LLVM generated code on a few examples.

Hardware loops in the IPU backend [ Video ] [ Slides ]
Janek van Oirschot, Graphcore

Though still relatively rare, more and more of today's architectures implement their own flavor of hardware loops: a set of instructions designed to aid the workhorse of computational algorithms, the loop. Like other hardware-loop-enabled architectures, Graphcore's IPU has several hardware loop constructs of its own. This talk will explore the IPU's hardware loops: their use, functionality, constraints, and lowering pipeline within LLVM.
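The payoff of hardware loops can be illustrated with a toy instruction-count model: a software loop pays a decrement and a compare-and-branch on every iteration, while a hardware loop pays a single setup instruction and then executes only the body. The model below is an invented simplification, not the IPU's actual ISA or cost model.

```python
# Toy instruction-count model for hardware loops (assumption: simplified
# illustration; real hardware-loop semantics and costs vary per target).

def software_loop(n, body_len):
    # per iteration: body + counter decrement + compare-and-branch
    return n * (body_len + 2)

def hardware_loop(n, body_len):
    # one loop-setup instruction; the hardware then handles the count
    # and the back-edge, so only the body repeats
    return 1 + n * body_len

print(software_loop(100, 3), hardware_loop(100, 3))  # 500 301
```

For short loop bodies the per-iteration branch overhead dominates, which is exactly where hardware loops help most.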

Experiences of OS distributions using LLVM as their main toolchain [ Video ] [ Slides ]
Bernhard Rosenkränzer, Huawei Open Source Technology Center

A report from the toolchain maintainer of two OS distributions, targeting eight different processor architectures, that have picked LLVM components (clang/clang++, lld, lldb, and in one case also libc++) as their main toolchain. What is going well? Where are other options still doing better? And more...

Faust audio Domain Specific Language and LLVM [ Video ] [ Slides ]
Stéphane Letz, GRAME

The talk will briefly present the Faust audio Domain Specific Language for sound synthesis and audio processing, then show how LLVM technology allows programmers to rapidly prototype and test their audio DSP programs, share the same code between several environments, and discover the best set of Faust compiler options to produce the fastest executable.

Implicitly discovered, explicitly built Clang modules [ Video ] [ Slides ]
Jan Svoboda, Apple

This presentation covers the transition from implicit builds of Clang modules to a new system based on explicit modules. The aim is to improve scheduling, reliability, and performance by making the build system aware of Clang modules. This talk describes the basic structure of such a build, the communication channel between the build system and the compiler, and new Clang features that enable this system.

Introduction to the IPU graph compiler and the use of LLVM [ Video ] [ Slides ]
David Bozier, Graphcore

The IPU is a completely new kind of massively parallel processor, co-designed from the ground up with the Poplar® SDK, to accelerate machine intelligence. The compute and memory architecture are designed for AI scale-out. The hardware is developed together with the software, delivering a platform that is easy to use and excels at real-world applications. In this talk we will provide an overview of our IPU processor, the Poplar graph framework library and how it utilizes Clang and LLVM to enable users to write highly optimized C++ compute kernels that run on the IPU device.

ez-clang C++ REPL for bare metal embedded devices [ Video ] [ Slides ]
Stefan Gränitz

"ez-clang is an experimental Clang-based cross-compiler with a remote-JIT backend targeting very low-resource embedded devices. Compilation, linking and memory management all run on the host machine. Thus, the RPC endpoint on the device is very simple and only takes few kilobytes of flash memory. Right now, ez-clang supports 32-bit ARMv7-m Cortex devices (i.e. Arduino Due and QEMU LM3S811). Please find public previews on https://echtzeit.dev/ez-clang I want to give a live demo of the current development state, an overview of the compiler pipeline based on an example and present the firmware ABI. A binary distribution of ez-clang, sources for two reference firmwares and the RPC interface documentation will be published on https://github.com/echtzeit-dev/ez-clang Give it a try and hack with it on your own hardware! I am looking forward to discuss details from all technical layers and hear your opinions about upcoming development goals!"

SCEV-Based debuginfo salvaging in Loop Strength Reduction [ Video ] [ Slides ]
Chris Jackson, Graphcore

A discussion of how Scalar Evolution has been used to improve debuginfo retention in the Loop Strength Reduction pass by translating SCEVs for optimised-out locations and induction variables into DWARF expressions. This work was enabled by the addition of variadic dbg.value intrinsics, which allow references to multiple locations. This means that a DWARF program can combine results from multiple SCEVs that refer to multiple locations.

Quick talks

How to Make Hardware with Maths: An Introduction to CIRCT's Scheduling Infrastructure [ Video ] [ Slides ]
Julian Oppermann, Technical University of Darmstadt

The LLVM incubator project CIRCT aims to provide an MLIR-based foundation for the next generation of modular hardware design tools. Scheduling is a common concern in this domain, for example in high-level synthesis (HLS) flows that build tailored, synchronous microarchitectures from untimed dataflow graphs. This talk gives a gentle introduction to CIRCT's scheduling abstractions and presents the currently available infrastructure—including extensible problem models, ready-to-use scheduler implementations and support for external solvers. We discuss current users of the infrastructure and outline future plans for this recent addition to the project.
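As a taste of the scheduling problems involved, here is a minimal ASAP (as-soon-as-possible) scheduler over a dependence graph with operator latencies. The operation names and latencies below are made up, and CIRCT's actual problem models and scheduler APIs are considerably richer than this sketch.

```python
# Minimal ASAP scheduler for an untimed dataflow graph (assumption: toy
# model of the kind of problem CIRCT's scheduling infrastructure solves).

def asap_schedule(latency, deps):
    """Return the earliest start cycle of each op given its dependences."""
    start = {}
    def visit(op):
        if op not in start:
            # an op may start once all its predecessors have finished
            start[op] = max((visit(d) + latency[d] for d in deps.get(op, [])),
                            default=0)
        return start[op]
    for op in latency:
        visit(op)
    return start

latency = {"load_a": 2, "load_b": 2, "mul": 3, "store": 1}
deps = {"mul": ["load_a", "load_b"], "store": ["mul"]}

print(asap_schedule(latency, deps))
# loads start at cycle 0, mul at cycle 2, store at cycle 5
```

Real HLS scheduling adds resource constraints, initiation intervals for pipelining, and objective functions, which is where the extensible problem models and external solvers come in.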

Improving debug locations for variables in memory [ Video ] [ Slides ]
Orlando Cazalet-Hyams, SN Systems (Sony Interactive Entertainment)

LLVM generates suboptimal debug variable locations for variables in memory in optimised code. Due to a lack of information in the compiler, it uses a heuristic to decide whether to issue locations with low availability or locations that may be incorrect. We’ve been prototyping a new debug intrinsic which enables LLVM to make smarter decisions for these variables by connecting stores and source assignment markers. In this talk I will briefly outline the problem with the existing system, how the new system works - including how existing passes are affected - and discuss the accuracy and coverage improvements we've found so far.

LLVM-MOS 6502 Backend: Having a Blast in the Past [ Video ] [ Slides ]
Daniel Thornburgh, Google

LLVM-MOS is an out-of-tree Clang and LLVM backend for the MOS Technology 6502, the CPU behind the NES, Atari 2600/8-bit, BBC Micro, Commodore 64, and many more beloved devices. LLVM-MOS converts freestanding C/C++ into fairly efficient 6502 machine code, despite the 6502’s limited and heterogeneous registers, lack of stack-relative addressing modes, and 256-byte stack. This talk will explore the grab-bag of tricks that hoodwinked LLVM into supporting an almost 50-year-old architecture: "imaginary" registers, instruction set regularization, whole-program static stack allocation, and, of course, lots and lots of pseudo-instructions.
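Whole-program static stack allocation, one of the tricks mentioned above, can be sketched as follows: when the call graph contains no recursion, every function's frame can be placed at a fixed memory address, and functions that can never be live at the same time may share addresses. The toy below assumes a tree-shaped call graph; the real analysis handles general non-recursive call graphs.

```python
# Toy whole-program static stack allocation (assumption: simplified model
# of the LLVM-MOS technique; requires a non-recursive, tree-shaped call graph).

def assign_frames(frame_size, calls, root, base=0):
    """Place each function's frame at a fixed address below its caller's."""
    addr = {root: base}
    for callee in calls.get(root, []):
        # every callee's frame starts right after the caller's frame;
        # sibling callees are never live together, so they share addresses
        addr.update(assign_frames(frame_size, calls, callee,
                                  base + frame_size[root]))
    return addr

frame_size = {"main": 4, "draw": 8, "sound": 8}
calls = {"main": ["draw", "sound"]}

print(assign_frames(frame_size, calls, "main"))
# {'main': 0, 'draw': 4, 'sound': 4} -- draw and sound overlap safely
```

On a CPU like the 6502, with a 256-byte hardware stack and no cheap stack-relative addressing, turning dynamic frames into absolute addresses like this is a major win.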

Lightning Talks

LLVM office hours: addressing LLVM engagement and contribution barriers [ Video ] [ Slides ]
Kristof Beyls, Arm

As part of registering for the 2021 LLVM dev meeting, participants were asked a few questions about how the LLVM community could increase engagement and contributions. Of the 450 people who replied, the top three issues mentioned were: "sometimes people aren't receiving detailed enough feedback on their proposals"; "people are worried to come across as an idiot when asking a question on the mailing list/on record"; and "people cannot find where to start; where to find documentation; etc." These were discussed in the community.o workshop at the 2021 LLVM dev meeting, and a summary of that discussion was presented by Adelina Chalmers as a keynote session; see 2021 LLVM Dev Mtg "Deconstructing the Myth: Only real coders contribute to LLVM!? - Takeaways". One of the solutions suggested to help address these top identified barriers is introducing the concept of "office hours". We have taken some small steps since then to make office hours a reality. In this lightning talk, I will discuss which issues office hours aim to address; how both newcomers and experienced contributors can get a lot of value out of them; where we are in implementing the concept; and how you can help make office hours as effective as possible.

llsoftsecbook: an open source book on low-level software security for compiler developers [ Video ] [ Slides ]
Kristof Beyls, Arm

Many compiler engineers work on security hardening features and many of them feel their work would benefit from a better understanding of attacks and hardening techniques. Therefore, we recently started an open source book titled "Low Level Software Security for Compiler developers" at https://github.com/llsoftsec/llsoftsecbook/. It aims to help compiler developers improve their knowledge about security hardening; ultimately leading to more innovation and better implementations of security features.

Exploring Clang/LLVM optimization on programming horror [ Video ] [ Slides ]
Matthieu Dubet

Exploring how Clang/LLVM manages to transform a linear-time algorithm into a constant-time one.
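A canonical example of this class of transformation (an assumption on our part about which program the talk uses) is a counting loop that the optimizer's induction-variable analysis rewrites into a closed-form expression:

```python
# The loop as written (linear time) versus what the optimizer derives
# (constant time). Hypothetical example of the transformation described.

def linear(n):
    total = 0
    for i in range(1, n + 1):   # O(n): n additions at runtime
        total += i
    return total

def closed_form(n):
    return n * (n + 1) // 2     # O(1): the closed form the compiler finds

print(linear(1000), closed_form(1000))  # 500500 500500
```

Clang performs the analogous rewrite on C code via scalar evolution, replacing the loop entirely with arithmetic on the trip count.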

Flang update [ Video ] [ Slides ]
Kiran Chandramohan, Arm

F18/Flang was accepted as the Fortran frontend of LLVM in 2019, and the project joined the monorepo in 2020. Development of a large chunk of code dealing with the lowering of the parse tree to FIR (Fortran IR) continued in a fork of the llvm-project. In the last year, a significant effort went into upstreaming this fork. Most of this code is now upstreamed, and all development has shifted to the llvm-project monorepo. Much development effort has also gone into adding a runtime and lowering for Fortran 95. In parallel, great progress has been made on adding a driver and support for OpenMP 1.1+ and OpenACC. In this talk, I will summarize the development activities of the last year, the current status, and future work.

Student Talks

Using link-time call graph embedding to streamline compiler-assisted instrumentation selection [ Video ] [ Slides ]
Sebastian Kreutzer, TU Darmstadt

CaPI is an instrumentation tool that combines static analysis with user-direction to create low-overhead configurations for accurate performance measurements of scientific applications. We present a prototype implementation of an improved instrumentation toolchain for CaPI that generates a whole-program call graph at link-time and embeds it into the binary. We combine this approach with a dynamic instrumentation method based on XRay.

Automated Batching and Differentiation of Scalar Code in Enzyme [ Video ] [ Slides ]
Tim Gymnich, Technical University of Munich

Derivatives are the key to many important problems in computing, such as machine learning and optimization. Building on the Enzyme compiler plugin for automatic differentiation, we add forward-mode automatic differentiation, batching, and the emission of vectorization-ready IR for arbitrary scalar code to unlock significant performance boosts.
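Forward-mode automatic differentiation, which this work adds to Enzyme, can be modeled with dual numbers: every value carries a derivative component that is propagated through each arithmetic operation. The sketch below is a toy at the Python level; Enzyme itself performs the equivalent transformation on LLVM IR.

```python
# Forward-mode AD via dual numbers (assumption: a toy model of what forward
# mode computes; Enzyme differentiates LLVM IR, not Python objects).

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot        # value and derivative
    def __add__(self, o):
        return Dual(self.val + o.val, self.dot + o.dot)
    def __mul__(self, o):
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)

def f(x):                 # f(x) = x*x + x
    return x * x + x

x = Dual(3.0, 1.0)        # seed dx/dx = 1 to differentiate w.r.t. x
y = f(x)
print(y.val, y.dot)       # 12.0 7.0  (f(3) = 12, f'(3) = 2*3 + 1 = 7)
```

Batching amounts to carrying a vector of derivative components per value instead of a single one, which naturally yields the vectorization-ready IR the abstract mentions.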

Extending Sulong (an LLVM bitcode runtime) for cross-language interoperability between C++/Swift and Java, JavaScript or Python [ Video ] [ Slides ]
Christoph Pichler, Johannes Kepler University, Linz

Sulong is an execution engine for LLVM bitcode and is part of GraalVM, a polyglot virtual machine that can execute programs written in multiple programming languages. Besides advanced tooling (e.g., debugging, monitoring and profiling), GraalVM supports cross-language interoperability as well, which includes languages that can be compiled to LLVM bitcode, such as Swift and C++. Although Sulong runs LLVM bitcode within GraalVM, the implemented interoperability concept also takes the corresponding source language (C++/Swift) semantics into account (e.g., where to apply dynamic binding). In this talk, we will show that Swift/C++ code can be used to treat objects from different languages the same way as Swift/C++ objects, and vice versa. Moreover, we will demonstrate how to use object-oriented concepts (such as interfaces and information hiding) across those languages.

Code of Conduct

The LLVM Foundation is dedicated to providing an inclusive and safe experience for everyone. We do not tolerate harassment of participants in any form. By registering for this event, we expect you to have read and agreed to the LLVM Code of Conduct.

Contact

To contact the organizer, email Tanya Lattner.

Sponsors

Thank you to our Diamond, Platinum, and Gold sponsors and our corporate supporters!