LLVM 17.0.0git
CGSCCPassManager.h File Reference

This header provides classes for managing passes over SCCs of the call graph. More...

#include "llvm/ADT/MapVector.h"
#include "llvm/Analysis/LazyCallGraph.h"
#include "llvm/IR/PassManager.h"
#include "llvm/IR/ValueHandle.h"
#include "llvm/Support/raw_ostream.h"
#include <cassert>
#include <utility>

Go to the source code of this file.

Classes

struct  llvm::RequireAnalysisPass< AnalysisT, LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &, CGSCCUpdateResult & >
An explicit specialization of the require analysis template pass. More...

class  llvm::CGSCCAnalysisManagerModuleProxy::Result
We need a specialized result for the CGSCCAnalysisManagerModuleProxy so it can have access to the call graph in order to walk all the SCCs when invalidating things. More...

struct  llvm::CGSCCUpdateResult
Support structure for SCC passes to communicate updates to the call graph back to the CGSCC pass manager infrastructure. More...

class  llvm::ModuleToPostOrderCGSCCPassAdaptor
The core module pass which does a post-order walk of the SCCs and runs a CGSCC pass over each one. More...

class  llvm::FunctionAnalysisManagerCGSCCProxy
A proxy from a FunctionAnalysisManager to an SCC. More...

class  llvm::FunctionAnalysisManagerCGSCCProxy::Result

class  llvm::CGSCCToFunctionPassAdaptor
Adaptor that maps from an SCC to its functions. More...

class  llvm::ShouldNotRunFunctionPassesAnalysis

struct  llvm::ShouldNotRunFunctionPassesAnalysis::Result

class  llvm::DevirtSCCRepeatedPass
A helper that repeats an SCC pass each time an indirect call is refined to a direct call by that pass. More...

Namespaces

namespace  llvm
The llvm namespace, which contains all of the LLVM APIs.

Macros

#define DEBUG_TYPE   "cgscc"

Typedefs

using llvm::CGSCCAnalysisManager = AnalysisManager< LazyCallGraph::SCC, LazyCallGraph & >
The CGSCC analysis manager.

using llvm::CGSCCPassManager = PassManager< LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &, CGSCCUpdateResult & >
The CGSCC pass manager.

using llvm::CGSCCAnalysisManagerModuleProxy = InnerAnalysisManagerProxy< CGSCCAnalysisManager, Module >
A proxy from a CGSCCAnalysisManager to a Module.

using llvm::ModuleAnalysisManagerCGSCCProxy = OuterAnalysisManagerProxy< ModuleAnalysisManager, LazyCallGraph::SCC, LazyCallGraph & >
A proxy from a ModuleAnalysisManager to an SCC.

using llvm::CGSCCAnalysisManagerFunctionProxy = OuterAnalysisManagerProxy< CGSCCAnalysisManager, Function >
A proxy from a CGSCCAnalysisManager to a Function.

Functions

template<typename CGSCCPassT >
ModuleToPostOrderCGSCCPassAdaptor llvm::createModuleToPostOrderCGSCCPassAdaptor (CGSCCPassT &&Pass)
A function to deduce a CGSCC pass type and wrap it in the templated adaptor.

LazyCallGraph::SCC & llvm::updateCGAndAnalysisManagerForFunctionPass (LazyCallGraph &G, LazyCallGraph::SCC &C, LazyCallGraph::Node &N, CGSCCAnalysisManager &AM, CGSCCUpdateResult &UR, FunctionAnalysisManager &FAM)
Helper to update the call graph after running a function pass.

LazyCallGraph::SCC & llvm::updateCGAndAnalysisManagerForCGSCCPass (LazyCallGraph &G, LazyCallGraph::SCC &C, LazyCallGraph::Node &N, CGSCCAnalysisManager &AM, CGSCCUpdateResult &UR, FunctionAnalysisManager &FAM)
Helper to update the call graph after running a CGSCC pass.

template<typename FunctionPassT >
CGSCCToFunctionPassAdaptor llvm::createCGSCCToFunctionPassAdaptor (FunctionPassT &&Pass, bool EagerlyInvalidate=false, bool NoRerun=false)
A function to deduce a function pass type and wrap it in the templated adaptor.

template<typename CGSCCPassT >
DevirtSCCRepeatedPass llvm::createDevirtSCCRepeatedPass (CGSCCPassT &&Pass, int MaxIterations)
A function to deduce a CGSCC pass type and wrap it in the templated adaptor.

Detailed Description

This header provides classes for managing passes over SCCs of the call graph.

These passes form an important component of LLVM's interprocedural optimizations. Because they operate on the SCCs of the call graph, and they traverse the graph in post-order, they can effectively do pair-wise interprocedural optimizations for all call edges in the program while incrementally refining it and improving the context of these pair-wise optimizations. At each call site edge, the callee has already been optimized as much as is possible. This in turn allows very accurate analysis of it for IPO.
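The post-order discipline can be sketched with a small self-contained example (plain C++, not LLVM's LazyCallGraph API; all names here are illustrative): Tarjan's algorithm emits the SCCs of a directed graph in reverse topological order of the condensation, so every callee SCC is produced before any of its caller SCCs, which is exactly the visitation order the CGSCC manager wants.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Minimal sketch, not LLVM's implementation: adjacency list where
// G[caller] lists the callees of that function.
using Graph = std::vector<std::vector<int>>;

// Returns the SCCs in post-order: each SCC appears after every SCC it
// calls into, so "callees are optimized before their callers".
std::vector<std::vector<int>> postOrderSCCs(const Graph &G) {
  int N = static_cast<int>(G.size()), Counter = 0;
  std::vector<int> Index(N, -1), Low(N, 0), Stack;
  std::vector<bool> OnStack(N, false);
  std::vector<std::vector<int>> SCCs;
  std::function<void(int)> DFS = [&](int V) {
    Index[V] = Low[V] = Counter++;
    Stack.push_back(V);
    OnStack[V] = true;
    for (int W : G[V]) {
      if (Index[W] < 0) {
        DFS(W);
        Low[V] = std::min(Low[V], Low[W]);
      } else if (OnStack[W]) {
        Low[V] = std::min(Low[V], Index[W]);
      }
    }
    if (Low[V] == Index[V]) { // V is the root of a finished SCC.
      std::vector<int> SCC;
      int W;
      do {
        W = Stack.back();
        Stack.pop_back();
        OnStack[W] = false;
        SCC.push_back(W);
      } while (W != V);
      SCCs.push_back(SCC); // Emitted callee-first: post-order.
    }
  };
  for (int V = 0; V < N; ++V)
    if (Index[V] < 0)
      DFS(V);
  return SCCs;
}
```

For a graph where function 0 calls 1, 1 and 2 call each other, and 2 calls 3, this yields {3}, then {1, 2}, then {0}: by the time the pass reaches the mutually recursive pair, its callee 3 has already been visited.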

A secondary, more general goal is to be able to isolate optimization on unrelated parts of the IR module. This is useful to ensure our optimizations are principled and don't miss opportunities where refinement of one part of the module influences transformations in another part of the module. But this is also useful if we want to parallelize the optimizations across common large module graph shapes, which tend to be very wide and have large regions of unrelated cliques.

To satisfy these goals, we use the LazyCallGraph, which provides two graphs nested inside each other (and built lazily from the bottom-up): the call graph proper, and a reference graph. The reference graph is a superset of the call graph and is a conservative approximation of what could become the call graph through scalar or CGSCC transforms. Using this allows us to ensure we optimize functions before they are introduced into the call graph by devirtualization or other techniques, and thus ensures that subsequent pair-wise interprocedural optimizations observe the optimized form of these functions. The (potentially transitive) reference reachability used by the reference graph is a conservative approximation that still allows us to have independent regions of the graph.
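The superset relationship can be illustrated with a toy model (hypothetical types and names, not LLVM's LazyCallGraph API): taking a function's address only adds a reference edge, and devirtualization may later promote such a reference into a real call edge, but it can never introduce a call edge the reference graph lacks.

```cpp
#include <cassert>
#include <set>
#include <utility>

// Toy model of the two nested graphs; all names here are illustrative.
struct TwoLevelGraph {
  using Edge = std::pair<int, int>; // (from function, to function)
  std::set<Edge> RefEdges;  // Conservative: every possible future call.
  std::set<Edge> CallEdges; // Actual direct calls today.

  void addCall(int From, int To) {
    CallEdges.insert({From, To});
    RefEdges.insert({From, To}); // Every call is also a reference.
  }
  // E.g. the function's address is stored in a global table.
  void addRef(int From, int To) { RefEdges.insert({From, To}); }

  // Refining an indirect reference into a direct call stays inside the
  // reference graph, so the superset invariant is preserved.
  void devirtualize(int From, int To) {
    assert(RefEdges.count({From, To}) && "can only refine existing refs");
    CallEdges.insert({From, To});
  }

  bool callGraphIsSubsetOfRefGraph() const {
    for (const Edge &E : CallEdges)
      if (!RefEdges.count(E))
        return false;
    return true;
  }
};
```

Because the reference graph already anticipates every edge devirtualization could create, walking its SCCs in post-order guarantees the callee of a future direct call has been optimized before the caller is visited.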

FIXME: There is one major drawback of the reference graph: in its naive form it is quadratic because it contains a distinct edge for each (potentially indirect) reference, even if they all go through some common global table of function pointers. This can be fixed in a number of ways that essentially preserve enough of the normalization. While this isn't expected to completely preclude using the reference graph, it will need to be addressed.

All of these issues are made substantially more complex in the face of mutations to the call graph while optimization passes are being run. When mutations to the call graph occur we want to achieve two different things:

• We need to update the call graph in-flight and invalidate analyses cached on entities in the graph. Because of the cache-based analysis design of the pass manager, it is essential to have stable identities for the elements of the IR that passes traverse, and to invalidate any analyses cached on these elements as the mutations take place.
• We want to preserve the incremental and post-order traversal of the graph even as it is refined and mutated. This means we want optimization to observe the most refined form of the call graph and to do so in post-order.

To address this, the CGSCC manager uses both worklists that can be expanded by passes which transform the IR and invalidation tests that skip entries which have become dead. This extra data is provided to every SCC pass so that it can carefully update the manager's traversal as the call graph mutates.
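A sketch of that worklist discipline (hypothetical names such as UpdateResult and runWorklist, not the real CGSCCUpdateResult API): passes may push refined SCCs onto the worklist as they mutate the graph, and the driver skips entries a mutation has made dead rather than visiting stale state.

```cpp
#include <deque>
#include <functional>
#include <set>
#include <vector>

// Illustrative stand-in for the update structure handed to every pass.
struct UpdateResult {
  std::deque<int> Worklist;      // SCCs still to visit; front is next.
  std::set<int> InvalidatedSCCs; // Entries that died after being enqueued.
};

// Drives `Pass` over every live worklist entry; the pass receives the
// update structure so it can enqueue or invalidate SCCs as it transforms
// the IR.
std::vector<int>
runWorklist(UpdateResult &UR,
            const std::function<void(int, UpdateResult &)> &Pass) {
  std::vector<int> Visited;
  while (!UR.Worklist.empty()) {
    int C = UR.Worklist.front();
    UR.Worklist.pop_front();
    if (UR.InvalidatedSCCs.count(C))
      continue; // Dead: an earlier pass deleted or merged this SCC.
    Visited.push_back(C);
    Pass(C, UR);
  }
  return Visited;
}
```

A pass that, while visiting SCC 1, invalidates SCC 2 and enqueues a newly formed SCC 4 results in the driver visiting 1, 3, 4: the stale entry is skipped and the new one is picked up without restarting the walk.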

We also provide support for running function passes within the CGSCC walk, and there we provide automatic updating of the call graph, including the pass manager's traversal of it, to reflect call graph changes that fall out naturally from scalar transformations.

The patterns used to ensure post-order visitation of the fully refined graph are:

1) Sink toward the "bottom" as the graph is refined. This means that any iteration continues in some valid post-order sequence after the mutation has altered the structure.

2) Enqueue in post-order, including the current entity. If the current entity's shape changes, it and everything after it in post-order needs to be visited to observe that shape.
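Pattern (2) can be shown with a toy helper (hypothetical, not the real infrastructure): when a mutation splits the SCC currently being visited, the resulting fragments, assumed here to be given already in post-order with the fragment containing the current function last, are pushed onto the front of the worklist so that iteration resumes with them before anything already behind them.

```cpp
#include <deque>
#include <vector>

// Illustrative helper: re-enqueue the fragments of a split SCC so the walk
// continues in a valid post-order sequence.  Pushing in reverse makes the
// front of the worklist read in post-order, and because the fragment holding
// the current function is included, the current entity is revisited and its
// new shape observed.
void enqueueSplit(std::deque<int> &Worklist,
                  const std::vector<int> &FragmentsInPostOrder) {
  for (auto It = FragmentsInPostOrder.rbegin();
       It != FragmentsInPostOrder.rend(); ++It)
    Worklist.push_front(*It);
}
```

If the worklist held SCC 9 and the current SCC splits into fragments 4 and 5 (in post-order), the worklist becomes 4, 5, 9: both fragments are visited, in order, before the walk moves on.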

Definition in file CGSCCPassManager.h.

◆ DEBUG_TYPE

 #define DEBUG_TYPE   "cgscc"

Definition at line 109 of file CGSCCPassManager.h.