LLVM 6.0.0svn
CGSCCPassManager.h File Reference

This header provides classes for managing passes over SCCs of the call graph. More...

#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/PriorityWorklist.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/LazyCallGraph.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/PassManager.h"
#include "llvm/IR/ValueHandle.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"
#include <algorithm>
#include <cassert>
#include <utility>



Classes

struct  llvm::RequireAnalysisPass< AnalysisT, LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &, CGSCCUpdateResult & >
 An explicit specialization of the require analysis template pass. More...
class  llvm::CGSCCAnalysisManagerModuleProxy::Result<>
 We need a specialized result for the CGSCCAnalysisManagerModuleProxy so it can have access to the call graph in order to walk all the SCCs when invalidating things. More...
struct  llvm::CGSCCUpdateResult
 Support structure for SCC passes to communicate updates to the call graph back to the CGSCC pass manager infrastructure. More...
class  llvm::ModuleToPostOrderCGSCCPassAdaptor< CGSCCPassT >
 The core module pass which does a post-order walk of the SCCs and runs a CGSCC pass over each one. More...
class  llvm::FunctionAnalysisManagerCGSCCProxy
 A proxy from a FunctionAnalysisManager to an SCC. More...
class  llvm::FunctionAnalysisManagerCGSCCProxy::Result
class  llvm::CGSCCToFunctionPassAdaptor< FunctionPassT >
 Adaptor that maps from an SCC to its functions. More...
class  llvm::DevirtSCCRepeatedPass< PassT >
 A helper that repeats an SCC pass each time an indirect call is refined to a direct call by that pass. More...




Macros

#define DEBUG_TYPE   "cgscc"


Typedefs

using llvm::CGSCCAnalysisManager = AnalysisManager< LazyCallGraph::SCC, LazyCallGraph & >
 The CGSCC analysis manager. More...
using llvm::CGSCCPassManager = PassManager< LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &, CGSCCUpdateResult & >
 The CGSCC pass manager. More...
using llvm::CGSCCAnalysisManagerModuleProxy = InnerAnalysisManagerProxy< CGSCCAnalysisManager, Module >
 A proxy from a CGSCCAnalysisManager to a Module. More...
using llvm::ModuleAnalysisManagerCGSCCProxy = OuterAnalysisManagerProxy< ModuleAnalysisManager, LazyCallGraph::SCC, LazyCallGraph & >
 A proxy from a ModuleAnalysisManager to an SCC. More...
using llvm::CGSCCAnalysisManagerFunctionProxy = OuterAnalysisManagerProxy< CGSCCAnalysisManager, Function >
 A proxy from a CGSCCAnalysisManager to a Function. More...


Functions

template<typename CGSCCPassT >
ModuleToPostOrderCGSCCPassAdaptor< CGSCCPassT > llvm::createModuleToPostOrderCGSCCPassAdaptor (CGSCCPassT Pass)
 A function to deduce a function pass type and wrap it in the templated adaptor. More...
LazyCallGraph::SCC & llvm::updateCGAndAnalysisManagerForFunctionPass (LazyCallGraph &G, LazyCallGraph::SCC &C, LazyCallGraph::Node &N, CGSCCAnalysisManager &AM, CGSCCUpdateResult &UR)
 Helper to update the call graph after running a function pass. More...
template<typename FunctionPassT >
CGSCCToFunctionPassAdaptor< FunctionPassT > llvm::createCGSCCToFunctionPassAdaptor (FunctionPassT Pass)
 A function to deduce a function pass type and wrap it in the templated adaptor. More...
template<typename PassT >
DevirtSCCRepeatedPass< PassT > llvm::createDevirtSCCRepeatedPass (PassT Pass, int MaxIterations)
 A function to deduce a function pass type and wrap it in the templated adaptor. More...

Detailed Description

This header provides classes for managing passes over SCCs of the call graph.

These passes form an important component of LLVM's interprocedural optimizations. Because they operate on the SCCs of the call graph, and they traverse the graph in post-order, they can effectively do pair-wise interprocedural optimizations for all call edges in the program while incrementally refining it and improving the context of these pair-wise optimizations. At each call site edge, the callee has already been optimized as much as is possible. This in turn allows very accurate analysis of it for IPO.

A secondary, more general goal is to be able to isolate optimization on unrelated parts of the IR module. This is useful to ensure our optimizations are principled and don't miss opportunities where refinement of one part of the module influences transformations in another part of the module. But this is also useful if we want to parallelize the optimizations across common large module graph shapes, which tend to be very wide and have large regions of unrelated cliques.

To satisfy these goals, we use the LazyCallGraph, which provides two graphs nested inside each other (and built lazily from the bottom up): the call graph proper, and a reference graph. The reference graph is a superset of the call graph and is a conservative approximation of what could become the call graph through scalar or CGSCC transforms. Using this allows us to ensure we optimize functions before they are introduced into the call graph by devirtualization or other techniques, and thus ensures that subsequent pair-wise interprocedural optimizations observe the optimized form of these functions. The (potentially transitive) reference reachability used by the reference graph is a conservative approximation that still allows us to have independent regions of the graph.
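The superset invariant can be stated concretely. This is a minimal sketch using hypothetical function names and plain edge sets rather than LazyCallGraph's actual types:

```cpp
#include <set>
#include <string>
#include <utility>

// Edges are (source, target) pairs over hypothetical function names.
using Edge = std::pair<std::string, std::string>;
using Graph = std::set<Edge>;

// The invariant the reference graph maintains: every call edge is also
// a reference edge, so the reference graph conservatively
// over-approximates any call graph that transforms could later produce.
bool containsAllEdges(const Graph &Super, const Graph &Sub) {
  for (const Edge &E : Sub)
    if (!Super.count(E))
      return false;
  return true;
}
```

For example, if `a` directly calls `b` and also stores the address of `c` into a table, the call graph has only the edge (a, b) while the reference graph has (a, b) and (a, c). If a later pass turns the indirect use of `c` into a direct call, the new call edge was already a reference edge, so the post-order walk over reference reachability had already visited `c` before `a`.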

FIXME: There is one major drawback of the reference graph: in its naive form it is quadratic, because it contains a distinct edge for each (potentially indirect) reference, even if they all go through some common global table of function pointers. This can be fixed in a number of ways that essentially preserve enough of the normalization. While it isn't expected to completely preclude the usability of this approach, it will need to be addressed.

All of these issues are made substantially more complex in the face of mutations to the call graph while optimization passes are being run. When mutations to the call graph occur we want to achieve two different things:

To address this, the CGSCC manager uses worklists that can be expanded by passes which transform the IR, and provides invalidation tests to skip entries that become dead. This extra data is provided to every SCC pass so that it can carefully update the manager's traversal as the call graph mutates.

We also provide support for running function passes within the CGSCC walk, and there we provide automatic update of the call graph, including of the pass manager's traversal, to reflect call graph changes that fall out naturally as part of scalar transformations.
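The adaptor idea can be sketched schematically. The types below are hypothetical simplified stand-ins, not LLVM's; the real CGSCCToFunctionPassAdaptor also threads analysis managers through the walk and applies call-graph updates after each function pass:

```cpp
#include <string>
#include <utility>
#include <vector>

// Simplified stand-in for LLVM's SCC type: just a list of function
// names.
struct SCC {
  std::vector<std::string> Functions;
};

// Generic adaptor in the spirit of CGSCCToFunctionPassAdaptor: lift a
// pass over single functions into a pass over a whole SCC by running it
// on every member function. (The real adaptor also propagates analysis
// invalidation and call-graph updates, omitted here.)
template <typename FunctionPassT> struct SCCToFunctionAdaptor {
  FunctionPassT Pass;
  void run(SCC &C) {
    for (std::string &F : C.Functions)
      Pass.run(F);
  }
};

// Deduce the function pass type and wrap it, mirroring the
// createCGSCCToFunctionPassAdaptor helper's shape.
template <typename FunctionPassT>
SCCToFunctionAdaptor<FunctionPassT> makeAdaptor(FunctionPassT P) {
  return {std::move(P)};
}

// A trivial "function pass" that records the functions it visited.
struct RecordPass {
  std::vector<std::string> Visited;
  void run(std::string &F) { Visited.push_back(F); }
};
```

The helper-function shape mirrors how the real `create*Adaptor` functions let callers avoid spelling out the template argument.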

The patterns used to ensure post-order visitation of the fully refined graph are:

1) Sink toward the "bottom" as the graph is refined. This means that any iteration continues in some valid post-order sequence after the mutation has altered the structure.

2) Enqueue in post-order, including the current entity. If the current entity's shape changes, it and everything after it in post-order needs to be visited to observe that shape.

Definition in file CGSCCPassManager.h.

Macro Definition Documentation


#define DEBUG_TYPE   "cgscc"

Definition at line 115 of file CGSCCPassManager.h.