LLVM
6.0.0svn

Shared implementation for block frequency analysis.
#include "llvm/Analysis/BlockFrequencyInfo.h"
Public Member Functions  
BlockFrequencyInfoImpl ()=default  
const FunctionT *  getFunction () const 
void  calculate (const FunctionT &F, const BranchProbabilityInfoT &BPI, const LoopInfoT &LI) 
BlockFrequency  getBlockFreq (const BlockT *BB) const 
Optional< uint64_t >  getBlockProfileCount (const Function &F, const BlockT *BB) const 
Optional< uint64_t >  getProfileCountFromFreq (const Function &F, uint64_t Freq) const 
void  setBlockFreq (const BlockT *BB, uint64_t Freq) 
Scaled64  getFloatingBlockFreq (const BlockT *BB) const 
const BranchProbabilityInfoT &  getBPI () const 
raw_ostream &  print (raw_ostream &OS) const override 
Print the frequencies for the current function.
raw_ostream &  printBlockFreq (raw_ostream &OS, const BlockT *BB) const 
Friends  
struct  bfi_detail::BlockEdgesAdder< BT > 
Shared implementation for block frequency analysis.
This is a shared implementation of BlockFrequencyInfo and MachineBlockFrequencyInfo, and calculates the relative frequencies of blocks.
LoopInfo defines a loop as a "nontrivial" SCC dominated by a single block, which is called the header. A given loop, L, can have subloops, which are loops within the subgraph of L that exclude its header. (A "trivial" SCC consists of a single block that does not have a self-edge.)
In addition to loops, this algorithm has limited support for irreducible SCCs, which are SCCs with multiple entry blocks. Irreducible SCCs are discovered on the fly, and modelled as loops with multiple headers.
The headers of an irreducible sub-SCC consist of its entry blocks and all nodes that are targets of a backedge within it (excluding backedges within true subloops). Block frequency calculations act as if a block is inserted that intercepts all the edges to the headers. All backedges and entries point to this block. Its successors are the headers, which split the frequency evenly.
This algorithm leverages BlockMass and ScaledNumber to maintain precision, separates mass distribution from loop scaling, and dithers to eliminate probability mass loss.
The implementation is split between BlockFrequencyInfoImpl, which knows the type of graph being modelled (BasicBlock vs. MachineBasicBlock), and BlockFrequencyInfoImplBase, which doesn't. The base class uses BlockNode, a wrapper around a uint32_t. BlockNode is numbered from 0 in reverse post-order. This gives two advantages: it's easy to compare the relative ordering of two nodes, and maps keyed on BlockT can be represented by vectors.
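The BlockNode idea above can be sketched in a few lines. This is a simplified illustration, not LLVM's actual definitions; the NodeMap helper is made up here to show how an RPO index turns per-block maps into flat vectors:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch: a node is just its reverse-post-order index, so ordering is a
// plain integer comparison and an invalid node is a sentinel value.
struct BlockNode {
  uint32_t Index = UINT32_MAX; // UINT32_MAX marks an invalid node
  bool isValid() const { return Index != UINT32_MAX; }
  bool operator<(BlockNode Other) const { return Index < Other.Index; }
};

// A "map" keyed on BlockNode, represented by a vector indexed by Node.Index.
template <class T> struct NodeMap {
  std::vector<T> Data;
  explicit NodeMap(size_t NumNodes) : Data(NumNodes) {}
  T &operator[](BlockNode N) { return Data[N.Index]; }
};
```

Because nodes are dense indices starting at 0, the vector-backed map needs no hashing and no pointer keys, which is the advantage the paragraph above describes.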
This algorithm is O(V+E), unless there is irreducible control flow, in which case it's O(V*E) in the worst case.
These are the main stages:
0. Reverse post-order traversal (initializeRPOT()).
Run a single post-order traversal and save it (in reverse) in RPOT. All other stages make use of this ordering. Save a lookup from BlockT to BlockNode (the index into RPOT) in Nodes.
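Stage 0 can be sketched as below. This is a hedged, self-contained illustration, not LLVM's code: blocks are plain ints, and the names (initializeRPOT, RPOT, Nodes) merely mirror the description above:

```cpp
#include <functional>
#include <unordered_map>
#include <vector>

using Graph = std::vector<std::vector<int>>; // adjacency list; entry block is 0

// One depth-first post-order traversal, saved in reverse as RPOT, plus a
// lookup from each block to its RPOT index (the "BlockNode").
void initializeRPOT(const Graph &Succs, std::vector<int> &RPOT,
                    std::unordered_map<int, unsigned> &Nodes) {
  std::vector<bool> Visited(Succs.size(), false);
  std::vector<int> PostOrder;
  std::function<void(int)> DFS = [&](int BB) {
    Visited[BB] = true;
    for (int S : Succs[BB])
      if (!Visited[S])
        DFS(S);
    PostOrder.push_back(BB); // a block is finished after all its successors
  };
  DFS(0);
  RPOT.assign(PostOrder.rbegin(), PostOrder.rend()); // reverse post-order
  for (unsigned I = 0; I < RPOT.size(); ++I)
    Nodes[RPOT[I]] = I; // BlockT -> BlockNode (index into RPOT)
}
```

For a diamond CFG (0 branches to 1 and 2, both reaching 3), the entry block comes out first and the exit last, which is the property the later mass-distribution stages rely on.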
1. Loop initialization (initializeLoops()).
Translate LoopInfo/MachineLoopInfo into a form suitable for the rest of the algorithm. In particular, store the immediate members of each loop in reverse postorder.
2. Calculate mass and scale in loops (computeMassInLoops()).
For each loop (bottom-up), distribute mass through the DAG resulting from ignoring backedges and treating subloops as a single pseudo-node. Track the backedge mass distributed to the loop header, and use it to calculate the loop scale (number of loop iterations). Immediate members that represent subloops will already have been visited and packaged into a pseudo-node.
Distributing mass in a loop is a reverse-post-order traversal through the loop. Start by assigning full mass to the loop header. For each node in the loop:
- Fetch and categorize the weight distribution for its successors. If this is a packaged subloop, the weight distribution is stored in LoopData::Exits. Otherwise, fetch it from BranchProbabilityInfo.
- Each successor is categorized as Weight::Local, a local edge within the current loop, Weight::Backedge, a backedge to the loop header, or Weight::Exit, any successor outside the loop. The weight, the successor, and its category are stored in Distribution. There can be multiple edges to each successor.
- If there's a backedge to a non-header, there's an irreducible SCC. The usual flow is temporarily aborted: computeIrreducibleMass() finds the irreducible SCCs within the loop, packages them up, and restarts the flow.
- Normalize the distribution: scale weights down so that their sum is 32 bits, and coalesce multiple edges to the same node.
- Distribute the mass accordingly, dithering to minimize mass loss, as described in distributeMass().
In the case of irreducible loops, instead of a single loop header there will be several. The computation of backedge masses is similar, but instead of a single backedge mass there will be one backedge mass per loop header. In these cases, each backedge will carry a mass proportional to the edge weights along the corresponding path.
At the end of propagation, the full mass assigned to the loop will be distributed among the loop headers proportionally according to the mass flowing through their backedges.
Finally, calculate the loop scale from the accumulated backedge mass.
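The relationship between backedge mass and loop scale can be sketched with doubles standing in for LLVM's fixed-point BlockMass and ScaledNumber types. This is an illustration of the idea only; the function name and the cap value are invented for the sketch:

```cpp
// The mass the backedges return to the header(s) determines the loop
// scale, i.e. the expected iteration count as a geometric-series sum.
double loopScaleFromBackedgeMass(double BackedgeMass) {
  // Mass that escapes the loop per conceptual iteration.
  double ExitMass = 1.0 - BackedgeMass;
  // Guard against a loop whose backedge mass rounds to "never exits".
  const double MinExitMass = 1.0 / 4096.0; // arbitrary cap for this sketch
  if (ExitMass < MinExitMass)
    ExitMass = MinExitMass;
  return 1.0 / ExitMass; // sum of the geometric series 1 + B + B^2 + ...
}
```

A header whose backedge carries mass 0.75 (i.e. a 75% chance of looping) yields a scale of 4, matching the intuition that the loop body runs about four times per entry.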
3. Distribute mass in the function (computeMassInFunction()).
Finally, distribute mass through the DAG resulting from packaging all loops in the function. This uses the same algorithm as distributing mass in a loop, except that there are no exit or backedge edges.
4. Unpackage loops (unwrapLoops()).
Initialize each block's frequency to a floating-point representation of its mass.
Visit loops top-down, scaling the frequencies of each loop's immediate members by its pseudo-node's frequency.
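The unwrap step can be sketched as follows; the data shapes here (PackagedLoop, a flat frequency vector) are assumptions for the illustration, not LLVM's structures:

```cpp
#include <vector>

struct PackagedLoop {
  double PseudoNodeFreq;    // frequency assigned to the loop's pseudo-node
  std::vector<int> Members; // immediate members (blocks and subloop nodes)
};

// Loops must arrive top-down: a parent loop is unwrapped before its
// subloops, so a subloop's pseudo-node frequency is already scaled by the
// time the subloop itself is visited.
void unwrapLoops(std::vector<double> &Freq,
                 const std::vector<PackagedLoop> &TopDown) {
  for (const PackagedLoop &L : TopDown)
    for (int M : L.Members)
      Freq[M] *= L.PseudoNodeFreq;
}
```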
5. Convert frequencies to a 64-bit range (finalizeMetrics()).
Using the min and max frequencies as a guide, translate floating-point frequencies to an appropriate range in uint64_t.
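A deliberately simplistic sketch of this conversion, using doubles in place of ScaledNumber: scale so the coldest block maps to 1. The real pass also bounds the scaling so the hottest block does not overflow uint64_t, which this sketch omits:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Map floating-point frequencies into uint64_t, using the minimum
// frequency as the guide: the smallest frequency becomes 1 and the rest
// scale linearly.
std::vector<uint64_t> toIntegerFreqs(const std::vector<double> &Float) {
  double Min = *std::min_element(Float.begin(), Float.end());
  std::vector<uint64_t> Out;
  for (double F : Float)
    Out.push_back(static_cast<uint64_t>(F / Min)); // Min itself maps to 1
  return Out;
}
```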
It has some known flaws.
The model of irreducible control flow is a rough approximation.
Modelling irreducible control flow exactly involves setting up and solving a group of infinite geometric series. Such precision is unlikely to be worthwhile, since most of our algorithms give up on irreducible control flow anyway.
Nevertheless, we might find that we need to get closer. Here's a sort of TODO list for the model with diminishing returns, to be completed as necessary.
Definition at line 32 of file BlockFrequencyInfo.h.

llvm::BlockFrequencyInfoImpl< BT >::BlockFrequencyInfoImpl ()  [default]
void llvm::BlockFrequencyInfoImpl< BT >::calculate (const FunctionT &F, const BranchProbabilityInfoT &BPI, const LoopInfoT &LI)
Definition at line 1006 of file BlockFrequencyInfoImpl.h.

BlockFrequency llvm::BlockFrequencyInfoImpl< BT >::getBlockFreq (const BlockT *BB) const  [inline]
Definition at line 962 of file BlockFrequencyInfoImpl.h.

Optional< uint64_t > llvm::BlockFrequencyInfoImpl< BT >::getBlockProfileCount (const Function &F, const BlockT *BB) const  [inline]
Definition at line 966 of file BlockFrequencyInfoImpl.h.

const BranchProbabilityInfoT & llvm::BlockFrequencyInfoImpl< BT >::getBPI () const  [inline]
Definition at line 982 of file BlockFrequencyInfoImpl.h.

Scaled64 llvm::BlockFrequencyInfoImpl< BT >::getFloatingBlockFreq (const BlockT *BB) const  [inline]
Definition at line 978 of file BlockFrequencyInfoImpl.h.

const FunctionT * llvm::BlockFrequencyInfoImpl< BT >::getFunction () const  [inline]
Definition at line 955 of file BlockFrequencyInfoImpl.h.

Optional< uint64_t > llvm::BlockFrequencyInfoImpl< BT >::getProfileCountFromFreq (const Function &F, uint64_t Freq) const  [inline]
Definition at line 971 of file BlockFrequencyInfoImpl.h.

raw_ostream & llvm::BlockFrequencyInfoImpl< BT >::print (raw_ostream &OS) const  [override virtual]
Print the frequencies for the current function.
Prints the frequencies for the blocks in the current function.
Blocks are printed in the natural iteration order of the function, rather than reverse postorder. This provides two advantages: writing analyze tests is easier (since blocks come out in source order), and even unreachable blocks are printed.
BlockFrequencyInfoImplBase::print() only knows reverse postorder, so we need to override it here.
Reimplemented from llvm::BlockFrequencyInfoImplBase.
Definition at line 1276 of file BlockFrequencyInfoImpl.h.

raw_ostream & llvm::BlockFrequencyInfoImpl< BT >::printBlockFreq (raw_ostream &OS, const BlockT *BB) const  [inline]
Definition at line 1000 of file BlockFrequencyInfoImpl.h.
void llvm::BlockFrequencyInfoImpl< BT >::setBlockFreq (const BlockT *BB, uint64_t Freq)
Definition at line 1034 of file BlockFrequencyInfoImpl.h.

friend struct bfi_detail::BlockEdgesAdder< BT >  [friend]
Definition at line 836 of file BlockFrequencyInfoImpl.h.