The LLVM Compiler Infrastructure
Google Summer of Code - 2016
Project / Student / Mentors

Better Alias Analysis By Default
Student: Jia Chen
Mentors: Hal Finkel, George Burgess IV

Capture Tracking Improvements
Student: Scott Egerton
Mentors: Nuno P. Lopes, Mehdi Amini

Enabling LLVM's self-hosted modules builds using libstdc++
Student: Bianca-Cristina Cristescu
Mentor: Vassil Vassilev

Finding and analysing copy-pasted code with clang
Student: Raphael Isemann
Mentor: Vassil Vassilev

Improvement of vectorization process in Polly
Student: Roman Gareev
Mentor: Tobias Grosser

Interprocedural Register Allocation in LLVM
Student: Vivek Pandya
Mentors: Mehdi Amini, Hal Finkel

Polly as an Analysis Pass in LLVM
Student: Utpal Bora
Mentor: Johannes Doerfert

SAFECode's Memory Policy Hardening
Student: Zhengyang Liu
Mentor: John Criswell

Enabling Polyhedral Optimizations in Julia (funded by Julia)
Student: Matthias Reisinger
Mentors: Tobias Grosser, Tim Holy, Jameson Nash
Project Details

Better Alias Analysis By Default

The cfl-aa pass, implemented by George Burgess IV in GSoC 2014, is a fast, precise, interprocedural alias analysis that overcomes many deficiencies of the alias analyses currently used in LLVM. It is also easily extensible with field-, flow-, and context-sensitivity. However, the pass is not enabled in today's LLVM build because of (1) various self-hosting miscompilation bugs and (2) insufficient tuning for the existing optimization passes that use it. The goal of this GSoC project is to bring cfl-aa to a usable state and make it a good complement to, if not a replacement for, the existing alias analysis pipeline.

Capture Tracking Improvements

The capture tracking analysis is currently inefficient, and inaccurate in some cases. It could be improved in a number of ways, as Philip Reames outlined on the mailing list. I would like to use this opportunity to take my previous experience with LLVM and apply it to other areas of the project.

Enabling LLVM's self-hosted modules builds using libstdc++

A Module System for C++ is on its way into the C++ standard. The current implementation, although fairly stable, still has a few bugs in its C++ support. The most common source of these bugs is the semantic merging of C++ entities. Currently, the method for ensuring no regressions is a buildbot for libc++ that builds LLVM in modules self-hosted mode; its main purpose is to find bugs in clang's implementation and guard ongoing development against regressions. Since the Module System is meant to be generic, this project aims to improve its stability and coverage by finding as many issues as possible. One approach is to add a buildbot for libstdc++: this would change the merging model for the modules and thereby expose issues that would not be observed with libc++. The choice of libstdc++ is motivated by its wide use on Unix and, more importantly, by the benefits it will bring to supporting modules in third-party projects that rely on libstdc++.

Finding and analysing copy-pasted code with clang

Copy-pasted code is dangerous because it introduces bugs and makes projects harder to maintain. This proposal is about creating tools for finding copy-pasted code and reporting bugs caused by this practice. These tools include a checker for clang's static analyzer that analyses a single translation unit and a standalone tool that performs a project-wide analysis.

Improvement of vectorization process in Polly

Polly can perform classical loop transformations, exploit OpenMP-level parallelism, and expose SIMDization opportunities. However, due to the lack of a machine-specific performance model and missing optimizations, these transformations sometimes lead to compile-time and execution-time regressions, and the generated code can be at least one order of magnitude slower than the corresponding vendor implementations. The goal of the project is to reduce these regressions by implementing optimizations aimed at producing code competitive with the best BLAS implementations, and by avoiding vectorization of loops when it is not profitable for the target architecture. This could be a step toward turning Polly into an optimization pass used in standard -O3 optimization.

Interprocedural Register Allocation in LLVM

The objective of this project is to implement a simple interprocedural register allocation scheme that attempts to minimize register spill code by propagating register usage information through the program call graph. By examining the register usage information at each call site, the intraprocedural register allocator can avoid assigning registers already used in the called routines, thus minimizing spill code.
A stretch goal for this project is a link-time register allocator, in which register allocation is deferred until the code is linked so that allocation can be optimized across module boundaries.

Draft proposal

Reporting interval: Weekly

Personal website/blog:

Polly as an Analysis Pass in LLVM

The polyhedral framework provides an exact dependence analysis, which is more powerful than conventional dependence testing algorithms. LLVM mainline currently lacks a powerful dependence analysis framework, while the dependence analysis in Polly (a high-level data-locality optimizer based on the polyhedral framework) is suitable for many LLVM transformation passes, such as loop vectorization, loop versioning, modulo scheduling, and loop nest optimization. I want to provide an API for Polly so that its precise dependence analysis can be used as an analysis pass within LLVM's transformation passes.


Reporting interval: Weekly

Personal website/blog: Utpal Bora

SAFECode's Memory Policy Hardening

Monolithic kernels like Linux do not provide a hardening mechanism for kernel modules' memory accesses; modules in Linux can do almost anything. Arbitrary writes and reads may cause system crashes, information leaks, and even rootkit injection. There is a great need for a memory hardening mechanism that limits the behavior of a kernel module. This project will enhance 'Baggy Bounds with Accurate Checking' (BBAC). By adding information to a memory object's padding area, we can perform various safety checks with limited overhead. I will mainly focus on providing runtime access policy hardening. This work will prevent most illegal memory accesses efficiently.


Reporting interval: Weekly

Personal website/blog: Zhengyang Liu

Enabling Polyhedral Optimizations in Julia

Julia is a dynamic programming language that, over the past few years, has gained interest in the open-source community, especially in the field of scientific computing. Julia programs are executed by a virtual machine that translates the source code, at run time, to machine code based on the LLVM compiler framework. LLVM provides a variety of analysis and transformation capabilities that are leveraged to optimize programs and enable efficient execution. More recently, LLVM was extended with a new optimization framework, Polly, which supports automatic parallelization and data-locality optimizations based on the polyhedral model. Polly can speed up compute kernels significantly, especially in dense linear algebra and iterative stencil computations. In the course of this project I plan to integrate Polly into Julia to enable polyhedral optimizations for Julia programs.


Reporting interval: Weekly

Personal website/blog: