LLVM 4.0.0: PassBuilder.h
//===- Parsing, selection, and construction of pass pipelines --*- C++ -*--===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
/// \file
///
/// Interfaces for registering analysis passes, producing common pass manager
/// configurations, and parsing of pass pipelines.
///
//===----------------------------------------------------------------------===//

#ifndef LLVM_PASSES_PASSBUILDER_H
#define LLVM_PASSES_PASSBUILDER_H

#include "llvm/ADT/Optional.h"
#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/Scalar/LoopPassManager.h"
#include <vector>

namespace llvm {
class StringRef;
class AAManager;
class TargetMachine;

/// \brief This class provides access to building LLVM's passes.
///
/// Its members provide the baseline state available to passes during their
/// construction. The \c PassRegistry.def file specifies how to construct all
/// of the built-in passes, and those may reference these members during
/// construction.
class PassBuilder {
  TargetMachine *TM;

public:
  /// \brief LLVM-provided high-level optimization levels.
  ///
  /// This enumerates the LLVM-provided high-level optimization levels. Each
  /// level has a specific goal and rationale.
  enum OptimizationLevel {
    /// Disable as many optimizations as possible. This doesn't completely
    /// disable the optimizer in all cases; for example, always_inline
    /// functions can be required to be inlined for correctness.
    O0,

    /// Optimize quickly without destroying debuggability.
    ///
    /// FIXME: The current and historical behavior of this level does *not*
    /// agree with this goal, but we would like to move toward this goal in the
    /// future.
    ///
    /// This level is tuned to produce a result from the optimizer as quickly
    /// as possible and to avoid destroying debuggability. This tends to result
    /// in a very good development mode where the compiled code will be
    /// immediately executed as part of testing. As a consequence, where
    /// possible, we would like to produce efficient-to-execute code, but not
    /// if it significantly slows down compilation or would prevent even basic
    /// debugging of the resulting binary.
    ///
    /// As an example, complex loop transformations such as versioning,
    /// vectorization, or fusion might not make sense here due to the degree to
    /// which the executed code would differ from the source code, and the
    /// potential compile time cost.
    O1,

    /// Optimize for fast execution as much as possible without triggering
    /// significant incremental compile time or code size growth.
    ///
    /// The key idea is that optimizations at this level should "pay for
    /// themselves". So if an optimization increases compile time by 5% or
    /// increases code size by 5% for a particular benchmark, that benchmark
    /// should also be one which sees a 5% runtime improvement. If the compile
    /// time or code size penalties happen on average across a diverse range of
    /// LLVM users' benchmarks, then the improvements should as well.
    ///
    /// No matter what, the compile time must not grow superlinearly with the
    /// size of the input to LLVM, so that users can control the runtime of
    /// the optimizer in this mode.
    ///
    /// This is expected to be a good default optimization level for the vast
    /// majority of users.
    O2,

    /// Optimize for fast execution as much as possible.
    ///
    /// This mode is significantly more aggressive in trading off compile time
    /// and code size to get execution time improvements. The core idea is that
    /// this mode should include any optimization that helps execution time on
    /// balance across a diverse collection of benchmarks, even if it increases
    /// code size or compile time for some benchmarks without corresponding
    /// improvements to execution time.
    ///
    /// Despite being willing to trade more compile time off to get improved
    /// execution time, this mode still tries to avoid superlinear growth in
    /// order to make even significantly slower compile times at least scale
    /// reasonably. This does not preclude very substantial constant factor
    /// costs though.
    O3,

    /// Similar to \c O2 but tries to optimize for small code size instead of
    /// fast execution without triggering significant incremental execution
    /// time slowdowns.
    ///
    /// The logic here is exactly the same as \c O2, but with code size and
    /// execution time metrics swapped.
    ///
    /// A consequence of the different core goal is that this should in general
    /// produce substantially smaller executables that still run in
    /// a reasonable amount of time.
    Os,

    /// A very specialized mode that will optimize for code size at any and all
    /// costs.
    ///
    /// This is useful primarily when there are absolute size limitations and
    /// any effort taken to reduce the size is worth it regardless of the
    /// execution time impact. You should expect this level to produce rather
    /// slow, but very small, code.
    Oz
  };

  explicit PassBuilder(TargetMachine *TM = nullptr) : TM(TM) {}

  /// \brief Cross register the analysis managers through their proxies.
  ///
  /// This is an interface that can be used to cross register each
  /// AnalysisManager with all the other analysis managers.
  void crossRegisterProxies(LoopAnalysisManager &LAM,
                            FunctionAnalysisManager &FAM,
                            CGSCCAnalysisManager &CGAM,
                            ModuleAnalysisManager &MAM);

  /// \brief Registers all available module analysis passes.
  ///
  /// This is an interface that can be used to populate a \c
  /// ModuleAnalysisManager with all registered module analyses. Callers can
  /// still manually register any additional analyses. Callers can also
  /// pre-register analyses and this will not override those.
  void registerModuleAnalyses(ModuleAnalysisManager &MAM);

  /// \brief Registers all available CGSCC analysis passes.
  ///
  /// This is an interface that can be used to populate a \c
  /// CGSCCAnalysisManager with all registered CGSCC analyses. Callers can
  /// still manually register any additional analyses. Callers can also
  /// pre-register analyses and this will not override those.
  void registerCGSCCAnalyses(CGSCCAnalysisManager &CGAM);

  /// \brief Registers all available function analysis passes.
  ///
  /// This is an interface that can be used to populate a \c
  /// FunctionAnalysisManager with all registered function analyses. Callers
  /// can still manually register any additional analyses. Callers can also
  /// pre-register analyses and this will not override those.
  void registerFunctionAnalyses(FunctionAnalysisManager &FAM);

  /// \brief Registers all available loop analysis passes.
  ///
  /// This is an interface that can be used to populate a \c
  /// LoopAnalysisManager with all registered loop analyses. Callers can still
  /// manually register any additional analyses.
  void registerLoopAnalyses(LoopAnalysisManager &LAM);

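  // Example (illustrative sketch, not part of the original header): the
  // typical way the registration hooks above are wired together before
  // building or parsing any pipeline. The analysis manager types come from
  // the pass manager headers included at the top of this file.
  //
  //   PassBuilder PB;
  //   LoopAnalysisManager LAM;
  //   FunctionAnalysisManager FAM;
  //   CGSCCAnalysisManager CGAM;
  //   ModuleAnalysisManager MAM;
  //   PB.registerModuleAnalyses(MAM);
  //   PB.registerCGSCCAnalyses(CGAM);
  //   PB.registerFunctionAnalyses(FAM);
  //   PB.registerLoopAnalyses(LAM);
  //   PB.crossRegisterProxies(LAM, FAM, CGAM, MAM);
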
  /// Construct the core LLVM function canonicalization and simplification
  /// pipeline.
  ///
  /// This is a long pipeline and uses most of the per-function optimization
  /// passes in LLVM to canonicalize and simplify the IR. It is suitable to run
  /// repeatedly over the IR and is not expected to destroy important
  /// information about the semantics of the IR.
  ///
  /// Note that \p Level cannot be `O0` here. The pipelines produced are
  /// only intended for use when attempting to optimize code. If frontends
  /// require some transformations for semantic reasons, they should explicitly
  /// build them.
  FunctionPassManager
  buildFunctionSimplificationPipeline(OptimizationLevel Level,
                                      bool DebugLogging = false);

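  // Example (illustrative sketch): running the simplification pipeline over a
  // single function F, assuming PB and FAM were set up as in the registration
  // sketch above.
  //
  //   FunctionPassManager FPM =
  //       PB.buildFunctionSimplificationPipeline(PassBuilder::O2);
  //   FPM.run(F, FAM);
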
  /// Build a per-module default optimization pipeline.
  ///
  /// This provides a good default optimization pipeline for per-module
  /// optimization and code generation without any link-time optimization. It
  /// typically corresponds to the frontend "-O[123]" options for optimization
  /// levels \c O1, \c O2 and \c O3 respectively.
  ///
  /// Note that \p Level cannot be `O0` here. The pipelines produced are
  /// only intended for use when attempting to optimize code. If frontends
  /// require some transformations for semantic reasons, they should explicitly
  /// build them.
  ModulePassManager buildPerModuleDefaultPipeline(OptimizationLevel Level,
                                                  bool DebugLogging = false);

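  // Example (illustrative sketch): building and running the default O2-style
  // module pipeline over a module M, assuming the analysis managers were
  // registered and cross-registered as sketched above.
  //
  //   ModulePassManager MPM =
  //       PB.buildPerModuleDefaultPipeline(PassBuilder::O2);
  //   MPM.run(M, MAM);
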
  /// Build a pre-link, LTO-targeting default optimization pipeline to a pass
  /// manager.
  ///
  /// This adds the pre-link optimizations tuned to work well with a later LTO
  /// run. It works to minimize the IR which needs to be analyzed without
  /// making irreversible decisions which could be made better during the LTO
  /// run.
  ///
  /// Note that \p Level cannot be `O0` here. The pipelines produced are
  /// only intended for use when attempting to optimize code. If frontends
  /// require some transformations for semantic reasons, they should explicitly
  /// build them.
  ModulePassManager buildLTOPreLinkDefaultPipeline(OptimizationLevel Level,
                                                   bool DebugLogging = false);

  /// Build an LTO default optimization pipeline to a pass manager.
  ///
  /// This provides a good default optimization pipeline for link-time
  /// optimization and code generation. It is particularly tuned to fit well
  /// when IR coming into the LTO phase was first run through \c
  /// buildLTOPreLinkDefaultPipeline, and the two coordinate closely.
  ///
  /// Note that \p Level cannot be `O0` here. The pipelines produced are
  /// only intended for use when attempting to optimize code. If frontends
  /// require some transformations for semantic reasons, they should explicitly
  /// build them.
  ModulePassManager buildLTODefaultPipeline(OptimizationLevel Level,
                                            bool DebugLogging = false);

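  // Example (illustrative sketch): splitting optimization across an LTO
  // compile step and a link step. M and MergedM are illustrative names for a
  // per-TU module and the linked module; the analysis managers are assumed to
  // be set up as sketched above.
  //
  //   // Compile step, run on each input module:
  //   ModulePassManager PreLinkMPM =
  //       PB.buildLTOPreLinkDefaultPipeline(PassBuilder::O2);
  //   PreLinkMPM.run(M, MAM);
  //
  //   // Link step, run on the merged module:
  //   ModulePassManager LTOMPM = PB.buildLTODefaultPipeline(PassBuilder::O2);
  //   LTOMPM.run(MergedM, MAM);
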
  /// Build the default `AAManager` with the default alias analysis pipeline
  /// registered.
  AAManager buildDefaultAAPipeline();

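  // Example (illustrative sketch): registering the default alias analysis
  // pipeline with the function analysis manager so that function passes can
  // query the resulting AAManager.
  //
  //   FAM.registerPass([&] { return PB.buildDefaultAAPipeline(); });
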
  /// \brief Parse a textual pass pipeline description into a \c
  /// ModulePassManager.
  ///
  /// The format of the textual pass pipeline description looks something like:
  ///
  ///   module(function(instcombine,sroa),dce,cgscc(inliner,function(...)),...)
  ///
  /// Pass managers have ()s describing the nesting structure of passes. All
  /// passes are comma separated. As a special shortcut, if the very first pass
  /// is not a module pass (as a module pass manager is), this will
  /// automatically form the shortest stack of pass managers that allow
  /// inserting that first pass. So, assuming function passes 'fpassN', CGSCC
  /// passes 'cgpassN', and loop passes 'lpassN', all of these are valid:
  ///
  ///   fpass1,fpass2,fpass3
  ///   cgpass1,cgpass2,cgpass3
  ///   lpass1,lpass2,lpass3
  ///
  /// And they are equivalent to the following (resp.):
  ///
  ///   module(function(fpass1,fpass2,fpass3))
  ///   module(cgscc(cgpass1,cgpass2,cgpass3))
  ///   module(function(loop(lpass1,lpass2,lpass3)))
  ///
  /// This shortcut is especially useful for debugging and testing small pass
  /// combinations. Note that these shortcuts don't introduce any other magic.
  /// If the passes in the sequence are not all the exact same kind of pass, it
  /// will be an error. You cannot mix different levels implicitly; you must
  /// explicitly form a pass manager in which to nest passes.
  bool parsePassPipeline(ModulePassManager &MPM, StringRef PipelineText,
                         bool VerifyEachPass = true, bool DebugLogging = false);

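  // Example (illustrative sketch): parsing a textual pipeline into a module
  // pass manager and running it. The pass names must be registered in
  // PassRegistry.def; "instcombine" and "sroa" are used here as examples.
  //
  //   ModulePassManager MPM;
  //   if (PB.parsePassPipeline(MPM, "function(instcombine,sroa)"))
  //     MPM.run(M, MAM);
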
  /// Parse a textual alias analysis pipeline into the provided AA manager.
  ///
  /// The format of the textual AA pipeline is a comma separated list of AA
  /// pass names:
  ///
  ///   basic-aa,globals-aa,...
  ///
  /// The AA manager is set up such that the provided alias analyses are tried
  /// in the order specified. See the \c AAManager documentation for details
  /// about the logic used. This routine just provides the textual mapping
  /// between AA names and the analyses to register with the manager.
  ///
  /// Returns false if the text cannot be parsed cleanly. The specific state of
  /// the \p AA manager is unspecified if such an error is encountered and this
  /// returns false.
  bool parseAAPipeline(AAManager &AA, StringRef PipelineText);

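  // Example (illustrative sketch): building an AAManager from a textual
  // description and handing it to the function analysis manager; "basic-aa"
  // and "globals-aa" are example AA pass names from PassRegistry.def.
  //
  //   AAManager AA;
  //   if (PB.parseAAPipeline(AA, "basic-aa,globals-aa"))
  //     FAM.registerPass([&AA] { return std::move(AA); });
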
private:
  /// A struct to capture parsed pass pipeline names.
  struct PipelineElement {
    StringRef Name;
    std::vector<PipelineElement> InnerPipeline;
  };

  static Optional<std::vector<PipelineElement>>
  parsePipelineText(StringRef Text);

  bool parseModulePass(ModulePassManager &MPM, const PipelineElement &E,
                       bool VerifyEachPass, bool DebugLogging);
  bool parseCGSCCPass(CGSCCPassManager &CGPM, const PipelineElement &E,
                      bool VerifyEachPass, bool DebugLogging);
  bool parseFunctionPass(FunctionPassManager &FPM, const PipelineElement &E,
                         bool VerifyEachPass, bool DebugLogging);
  bool parseLoopPass(LoopPassManager &LPM, const PipelineElement &E,
                     bool VerifyEachPass, bool DebugLogging);
  bool parseAAPassName(AAManager &AA, StringRef Name);

  bool parseLoopPassPipeline(LoopPassManager &LPM,
                             ArrayRef<PipelineElement> Pipeline,
                             bool VerifyEachPass, bool DebugLogging);
  bool parseFunctionPassPipeline(FunctionPassManager &FPM,
                                 ArrayRef<PipelineElement> Pipeline,
                                 bool VerifyEachPass, bool DebugLogging);
  bool parseCGSCCPassPipeline(CGSCCPassManager &CGPM,
                              ArrayRef<PipelineElement> Pipeline,
                              bool VerifyEachPass, bool DebugLogging);
  bool parseModulePassPipeline(ModulePassManager &MPM,
                               ArrayRef<PipelineElement> Pipeline,
                               bool VerifyEachPass, bool DebugLogging);
};
}

#endif