InferAddressSpaces.cpp
1//===- InferAddressSpaces.cpp ---------------------------------------------===//
2//
3// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4// See https://llvm.org/LICENSE.txt for license information.
5// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6//
7//===----------------------------------------------------------------------===//
8//
9// CUDA C/C++ includes memory space designation as variable type qualifiers (such
10// as __global__ and __shared__). Knowing the space of a memory access allows
11// CUDA compilers to emit faster PTX loads and stores. For example, a load from
12// shared memory can be translated to `ld.shared` which is roughly 10% faster
13// than a generic `ld` on an NVIDIA Tesla K40c.
14//
15// Unfortunately, type qualifiers only apply to variable declarations, so CUDA
16// compilers must infer the memory space of an address expression from
17// type-qualified variables.
18//
19// LLVM IR uses non-zero (so-called specific) address spaces to represent memory
20// spaces (e.g. addrspace(3) means shared memory). The Clang frontend
21// places only type-qualified variables in specific address spaces, and then
22// conservatively `addrspacecast`s each type-qualified variable to addrspace(0)
23// (the so-called generic address space) for other instructions to use.
24//
25// For example, Clang translates the following CUDA code
26// __shared__ float a[10];
27// float v = a[i];
28// to
29// %0 = addrspacecast [10 x float] addrspace(3)* @a to [10 x float]*
30// %1 = gep [10 x float], [10 x float]* %0, i64 0, i64 %i
31// %v = load float, float* %1 ; emits ld.f32
32// @a is in addrspace(3) since it's type-qualified, but its use from %1 is
33// redirected to %0 (the generic version of @a).
34//
35// The optimization implemented in this file propagates specific address spaces
36// from type-qualified variable declarations to their users. For example, it
37// optimizes the above IR to
38// %1 = gep [10 x float] addrspace(3)* @a, i64 0, i64 %i
39// %v = load float addrspace(3)* %1 ; emits ld.shared.f32
40// propagating the addrspace(3) from @a to %1. As a result, the NVPTX
41// codegen is able to emit ld.shared.f32 for %v.
42//
43// Address space inference works in two steps. First, it uses a data-flow
44// analysis to prove that as many generic pointers as possible point into only
45// one specific address space. In the above example, it can prove that %1 only
46// points to addrspace(3). This algorithm was published in
47// CUDA: Compiling and optimizing for a GPU platform
48// Chakrabarti, Grover, Aarts, Kong, Kudlur, Lin, Marathe, Murphy, Wang
49// ICCS 2012
50//
51// Then, address space inference replaces all refinable generic pointers with
52// equivalent specific pointers.
53//
54// The major challenge of implementing this optimization is handling PHINodes,
55// which may create loops in the data flow graph. This brings two complications.
56//
57// First, the data flow analysis in Step 1 needs to be circular. For example,
58// %generic.input = addrspacecast float addrspace(3)* %input to float*
59// loop:
60// %y = phi [ %generic.input, %y2 ]
61// %y2 = getelementptr %y, 1
62// %v = load %y2
63// br ..., label %loop, ...
64// proving %y specific requires proving both %generic.input and %y2 specific,
65// but proving %y2 specific circles back to %y. To address this complication,
66// the data flow analysis operates on a lattice:
67// uninitialized > specific address spaces > generic.
68// All address expressions (our implementation only considers phi, bitcast,
69// addrspacecast, and getelementptr) start with the uninitialized address space.
70// The monotone transfer function moves the address space of a pointer down a
71// lattice path from uninitialized to specific and then to generic. A join
72// operation of two different specific address spaces pushes the expression down
73// to the generic address space. The analysis completes once it reaches a fixed
74// point.
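// As an illustration of the join (derived from the lattice above, not from the
// paper), with addrspace(1) and addrspace(3) as two different specific spaces:
//   join(uninitialized, 3) = 3
//   join(3, 3)             = 3
//   join(1, 3)             = generic
//   join(generic, any)     = generic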
75//
76// Second, IR rewriting in Step 2 also needs to be circular. For example,
77// converting %y to addrspace(3) requires the compiler to know the converted
78// %y2, but converting %y2 needs the converted %y. To address this complication,
79// we break these cycles using "poison" placeholders. When converting an
80// instruction `I` to a new address space, if its operand `Op` is not converted
81// yet, we let `I` temporarily use `poison` and fix all the uses later.
82// For instance, our algorithm first converts %y to
83// %y' = phi float addrspace(3)* [ %input, poison ]
84// Then, it converts %y2 to
85// %y2' = getelementptr %y', 1
86// Finally, it fixes the poison in %y' so that
87// %y' = phi float addrspace(3)* [ %input, %y2' ]
88//
89//===----------------------------------------------------------------------===//
90
91#include "llvm/Transforms/Scalar/InferAddressSpaces.h"
92#include "llvm/ADT/ArrayRef.h"
93#include "llvm/ADT/DenseMap.h"
94#include "llvm/ADT/DenseSet.h"
95#include "llvm/ADT/SetVector.h"
100#include "llvm/IR/BasicBlock.h"
101#include "llvm/IR/Constant.h"
102#include "llvm/IR/Constants.h"
103#include "llvm/IR/Dominators.h"
104#include "llvm/IR/Function.h"
105#include "llvm/IR/IRBuilder.h"
106#include "llvm/IR/InstIterator.h"
107#include "llvm/IR/Instruction.h"
108#include "llvm/IR/Instructions.h"
110#include "llvm/IR/Intrinsics.h"
111#include "llvm/IR/LLVMContext.h"
112#include "llvm/IR/Operator.h"
113#include "llvm/IR/PassManager.h"
114#include "llvm/IR/Type.h"
115#include "llvm/IR/Use.h"
116#include "llvm/IR/User.h"
117#include "llvm/IR/Value.h"
118#include "llvm/IR/ValueHandle.h"
120#include "llvm/Pass.h"
121#include "llvm/Support/Casting.h"
123#include "llvm/Support/Debug.h"
129#include <cassert>
130#include <iterator>
131#include <limits>
132#include <utility>
133#include <vector>
134
135#define DEBUG_TYPE "infer-address-spaces"
136
137using namespace llvm;
138
140 "assume-default-is-flat-addrspace", cl::init(false), cl::ReallyHidden,
141 cl::desc("The default address space is assumed as the flat address space. "
142 "This is mainly for test purpose."));
143
144static const unsigned UninitializedAddressSpace =
145 std::numeric_limits<unsigned>::max();
146
147namespace {
148
149using ValueToAddrSpaceMapTy = DenseMap<const Value *, unsigned>;
150// Different from ValueToAddrSpaceMapTy, where a new addrspace is inferred on
151// the *def* of a value, PredicatedAddrSpaceMapTy is a map where a new
152// addrspace is inferred on the *use* of a pointer. This map is introduced to
153// infer addrspace from the addrspace predicate assumptions built from the
154// assume intrinsic. In that scenario, only specific uses (under a valid
155// assumption context) could be inferred with a new addrspace.
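// For example (AMDGPU-style; the predicate intrinsic is target specific), a
// use of %p dominated by
//   %is.shared = call i1 @llvm.amdgcn.is.shared(ptr %p)
//   call void @llvm.assume(i1 %is.shared)
// may be rewritten to use an addrspace(3) pointer, even though the definition
// of %p itself remains generic.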
156using PredicatedAddrSpaceMapTy =
157 DenseMap<std::pair<const Value *, const Value *>, unsigned>;
158using PostorderStackTy = llvm::SmallVector<PointerIntPair<Value *, 1, bool>, 4>;
159
160class InferAddressSpaces : public FunctionPass {
161 unsigned FlatAddrSpace = 0;
162
163public:
164 static char ID;
165
166 InferAddressSpaces()
167 : FunctionPass(ID), FlatAddrSpace(UninitializedAddressSpace) {
168 initializeInferAddressSpacesPass(*PassRegistry::getPassRegistry());
169 }
170 InferAddressSpaces(unsigned AS) : FunctionPass(ID), FlatAddrSpace(AS) {
171 initializeInferAddressSpacesPass(*PassRegistry::getPassRegistry());
172 }
173
174 void getAnalysisUsage(AnalysisUsage &AU) const override {
175 AU.setPreservesCFG();
176 AU.addPreserved<DominatorTreeWrapperPass>();
177 AU.addRequired<AssumptionCacheTracker>();
178 AU.addRequired<TargetTransformInfoWrapperPass>();
179 }
180
181 bool runOnFunction(Function &F) override;
182};
183
184class InferAddressSpacesImpl {
185 AssumptionCache &AC;
186 Function *F = nullptr;
187 const DominatorTree *DT = nullptr;
188 const TargetTransformInfo *TTI = nullptr;
189 const DataLayout *DL = nullptr;
190
191 /// Target-specific address space whose uses should be replaced if
192 /// possible.
193 unsigned FlatAddrSpace = 0;
194
195 // Tries to update the address space of V. Returns true if V is updated
196 // and false otherwise.
197 bool updateAddressSpace(const Value &V,
198 ValueToAddrSpaceMapTy &InferredAddrSpace,
199 PredicatedAddrSpaceMapTy &PredicatedAS) const;
200
201 // Tries to infer the specific address space of each address expression in
202 // Postorder.
203 void inferAddressSpaces(ArrayRef<WeakTrackingVH> Postorder,
204 ValueToAddrSpaceMapTy &InferredAddrSpace,
205 PredicatedAddrSpaceMapTy &PredicatedAS) const;
206
207 bool isSafeToCastConstAddrSpace(Constant *C, unsigned NewAS) const;
208
209 Value *cloneInstructionWithNewAddressSpace(
210 Instruction *I, unsigned NewAddrSpace,
211 const ValueToValueMapTy &ValueWithNewAddrSpace,
212 const PredicatedAddrSpaceMapTy &PredicatedAS,
213 SmallVectorImpl<const Use *> *PoisonUsesToFix) const;
214
215 void performPointerReplacement(
216 Value *V, Value *NewV, Use &U, ValueToValueMapTy &ValueWithNewAddrSpace,
217 SmallVectorImpl<Instruction *> &DeadInstructions) const;
218
219 // Changes the flat address expressions in function F to point to specific
220 // address spaces if InferredAddrSpace says so. Postorder is the postorder of
221 // all flat expressions in the use-def graph of function F.
222 bool rewriteWithNewAddressSpaces(
223 ArrayRef<WeakTrackingVH> Postorder,
224 const ValueToAddrSpaceMapTy &InferredAddrSpace,
225 const PredicatedAddrSpaceMapTy &PredicatedAS) const;
226
227 void appendsFlatAddressExpressionToPostorderStack(
228 Value *V, PostorderStackTy &PostorderStack,
229 DenseSet<Value *> &Visited) const;
230
231 bool rewriteIntrinsicOperands(IntrinsicInst *II, Value *OldV,
232 Value *NewV) const;
233 void collectRewritableIntrinsicOperands(IntrinsicInst *II,
234 PostorderStackTy &PostorderStack,
235 DenseSet<Value *> &Visited) const;
236
237 std::vector<WeakTrackingVH> collectFlatAddressExpressions(Function &F) const;
238
239 Value *cloneValueWithNewAddressSpace(
240 Value *V, unsigned NewAddrSpace,
241 const ValueToValueMapTy &ValueWithNewAddrSpace,
242 const PredicatedAddrSpaceMapTy &PredicatedAS,
243 SmallVectorImpl<const Use *> *PoisonUsesToFix) const;
244 unsigned joinAddressSpaces(unsigned AS1, unsigned AS2) const;
245
246 unsigned getPredicatedAddrSpace(const Value &PtrV,
247 const Value *UserCtx) const;
248
249public:
250 InferAddressSpacesImpl(AssumptionCache &AC, const DominatorTree *DT,
251 const TargetTransformInfo *TTI, unsigned FlatAddrSpace)
252 : AC(AC), DT(DT), TTI(TTI), FlatAddrSpace(FlatAddrSpace) {}
253 bool run(Function &F);
254};
255
256} // end anonymous namespace
257
258char InferAddressSpaces::ID = 0;
259
260INITIALIZE_PASS_BEGIN(InferAddressSpaces, DEBUG_TYPE, "Infer address spaces",
261 false, false)
262INITIALIZE_PASS_DEPENDENCY(AssumptionCacheTracker)
263INITIALIZE_PASS_DEPENDENCY(TargetTransformInfoWrapperPass)
264INITIALIZE_PASS_END(InferAddressSpaces, DEBUG_TYPE, "Infer address spaces",
265 false, false)
266
267static Type *getPtrOrVecOfPtrsWithNewAS(Type *Ty, unsigned NewAddrSpace) {
268 assert(Ty->isPtrOrPtrVectorTy());
269 PointerType *NPT = PointerType::get(Ty->getContext(), NewAddrSpace);
270 return Ty->getWithNewType(NPT);
271}
272
273// Check whether this is a no-op pointer bitcast expressed as a pair of
274// `ptrtoint`/`inttoptr`, which is used because there is no no-op pointer
275// bitcast between different address spaces.
276static bool isNoopPtrIntCastPair(const Operator *I2P, const DataLayout &DL,
277 const TargetTransformInfo *TTI) {
278 assert(I2P->getOpcode() == Instruction::IntToPtr);
279 auto *P2I = dyn_cast<Operator>(I2P->getOperand(0));
280 if (!P2I || P2I->getOpcode() != Instruction::PtrToInt)
281 return false;
282 // Check it's really safe to treat that pair of `ptrtoint`/`inttoptr` as a
283 // no-op cast. Besides checking both of them are no-op casts, as the
284 // reinterpreted pointer may be used in other pointer arithmetic, we also
285 // need to double-check that through the target-specific hook. That ensures
286 // the underlying target also agrees that's a no-op address space cast and
287 // pointer bits are preserved.
288 // The current IR spec doesn't have clear rules on address space casts,
289 // especially a clear definition for pointer bits in non-default address
290 // spaces. It would be undefined if that pointer is dereferenced after an
291// invalid reinterpret cast. Also, because the current spec does not clearly
292// define the meaning of pointer bits in non-default address spaces, pointer
293// arithmetic may also be undefined after an invalid pointer reinterpret cast.
294 // However, as we confirm through the target hooks that it's a no-op
295 // addrspacecast, it doesn't matter since the bits should be the same.
296 unsigned P2IOp0AS = P2I->getOperand(0)->getType()->getPointerAddressSpace();
297 unsigned I2PAS = I2P->getType()->getPointerAddressSpace();
298 return CastInst::isNoopCast(Instruction::CastOps(I2P->getOpcode()),
299 I2P->getOperand(0)->getType(), I2P->getType(),
300 DL) &&
301 CastInst::isNoopCast(Instruction::CastOps(P2I->getOpcode()),
302 P2I->getOperand(0)->getType(), P2I->getType(),
303 DL) &&
304 (P2IOp0AS == I2PAS || TTI->isNoopAddrSpaceCast(P2IOp0AS, I2PAS));
305}
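// For illustration (assuming the target reports the cast between addrspace(1)
// and the flat address space as a no-op with preserved pointer bits), the pair
//   %i = ptrtoint ptr addrspace(1) %p to i64
//   %q = inttoptr i64 %i to ptr
// is treated as a no-op reinterpretation of %p and can participate in address
// space inference just like an addrspacecast.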
306
307// Returns true if V is an address expression.
308// TODO: Currently, we only consider:
309// - arguments
310// - phi, bitcast, addrspacecast, and getelementptr operators
311static bool isAddressExpression(const Value &V, const DataLayout &DL,
312 const TargetTransformInfo *TTI) {
313
314 if (const Argument *Arg = dyn_cast<Argument>(&V))
315 return Arg->getType()->isPointerTy() &&
316 TTI->getAssumedAddrSpace(&V) != UninitializedAddressSpace;
317
318 const Operator *Op = dyn_cast<Operator>(&V);
319 if (!Op)
320 return false;
321
322 switch (Op->getOpcode()) {
323 case Instruction::PHI:
324 assert(Op->getType()->isPtrOrPtrVectorTy());
325 return true;
326 case Instruction::BitCast:
327 case Instruction::AddrSpaceCast:
328 case Instruction::GetElementPtr:
329 return true;
330 case Instruction::Select:
331 return Op->getType()->isPtrOrPtrVectorTy();
332 case Instruction::Call: {
333 const IntrinsicInst *II = dyn_cast<IntrinsicInst>(&V);
334 return II && II->getIntrinsicID() == Intrinsic::ptrmask;
335 }
336 case Instruction::IntToPtr:
337 return isNoopPtrIntCastPair(Op, DL, TTI);
338 default:
339 // That value is an address expression if it has an assumed address space.
340 return TTI->getAssumedAddrSpace(&V) != UninitializedAddressSpace;
341 }
342}
343
344// Returns the pointer operands of V.
345//
346// Precondition: V is an address expression.
347static SmallVector<Value *, 2> getPointerOperands(const Value &V,
348 const DataLayout &DL,
349 const TargetTransformInfo *TTI) {
350 if (isa<Argument>(&V))
351 return {};
352
353 const Operator &Op = cast<Operator>(V);
354 switch (Op.getOpcode()) {
355 case Instruction::PHI: {
356 auto IncomingValues = cast<PHINode>(Op).incoming_values();
357 return {IncomingValues.begin(), IncomingValues.end()};
358 }
359 case Instruction::BitCast:
360 case Instruction::AddrSpaceCast:
361 case Instruction::GetElementPtr:
362 return {Op.getOperand(0)};
363 case Instruction::Select:
364 return {Op.getOperand(1), Op.getOperand(2)};
365 case Instruction::Call: {
366 const IntrinsicInst &II = cast<IntrinsicInst>(Op);
367 assert(II.getIntrinsicID() == Intrinsic::ptrmask &&
368 "unexpected intrinsic call");
369 return {II.getArgOperand(0)};
370 }
371 case Instruction::IntToPtr: {
372 assert(isNoopPtrIntCastPair(&Op, DL, TTI));
373 auto *P2I = cast<Operator>(Op.getOperand(0));
374 return {P2I->getOperand(0)};
375 }
376 default:
377 llvm_unreachable("Unexpected instruction type.");
378 }
379}
380
381bool InferAddressSpacesImpl::rewriteIntrinsicOperands(IntrinsicInst *II,
382 Value *OldV,
383 Value *NewV) const {
384 Module *M = II->getParent()->getParent()->getParent();
385 Intrinsic::ID IID = II->getIntrinsicID();
386 switch (IID) {
387 case Intrinsic::objectsize:
388 case Intrinsic::masked_load: {
389 Type *DestTy = II->getType();
390 Type *SrcTy = NewV->getType();
391 Function *NewDecl =
392 Intrinsic::getOrInsertDeclaration(M, IID, {DestTy, SrcTy});
393 II->setArgOperand(0, NewV);
394 II->setCalledFunction(NewDecl);
395 return true;
396 }
397 case Intrinsic::ptrmask:
398 // This is handled as an address expression, not as a memory-accessing use.
399 return false;
400 case Intrinsic::masked_gather: {
401 Type *RetTy = II->getType();
402 Type *NewPtrTy = NewV->getType();
403 Function *NewDecl =
404 Intrinsic::getOrInsertDeclaration(M, IID, {RetTy, NewPtrTy});
405 II->setArgOperand(0, NewV);
406 II->setCalledFunction(NewDecl);
407 return true;
408 }
409 case Intrinsic::masked_store:
410 case Intrinsic::masked_scatter: {
411 Type *ValueTy = II->getOperand(0)->getType();
412 Type *NewPtrTy = NewV->getType();
413 Function *NewDecl = Intrinsic::getOrInsertDeclaration(
414 M, II->getIntrinsicID(), {ValueTy, NewPtrTy});
415 II->setArgOperand(1, NewV);
416 II->setCalledFunction(NewDecl);
417 return true;
418 }
419 case Intrinsic::prefetch:
420 case Intrinsic::is_constant: {
421 Function *NewDecl = Intrinsic::getOrInsertDeclaration(
422 M, II->getIntrinsicID(), {NewV->getType()});
423 II->setArgOperand(0, NewV);
424 II->setCalledFunction(NewDecl);
425 return true;
426 }
427 case Intrinsic::fake_use: {
428 II->replaceUsesOfWith(OldV, NewV);
429 return true;
430 }
431 case Intrinsic::lifetime_start:
432 case Intrinsic::lifetime_end: {
433 // Always force lifetime markers to work directly on the alloca.
434 NewV = NewV->stripPointerCasts();
435 Function *NewDecl = Intrinsic::getOrInsertDeclaration(
436 M, II->getIntrinsicID(), {NewV->getType()});
437 II->setArgOperand(0, NewV);
438 II->setCalledFunction(NewDecl);
439 return true;
440 }
441 default: {
442 Value *Rewrite = TTI->rewriteIntrinsicWithAddressSpace(II, OldV, NewV);
443 if (!Rewrite)
444 return false;
445 if (Rewrite != II)
446 II->replaceAllUsesWith(Rewrite);
447 return true;
448 }
449 }
450}
451
452void InferAddressSpacesImpl::collectRewritableIntrinsicOperands(
453 IntrinsicInst *II, PostorderStackTy &PostorderStack,
454 DenseSet<Value *> &Visited) const {
455 auto IID = II->getIntrinsicID();
456 switch (IID) {
457 case Intrinsic::ptrmask:
458 case Intrinsic::objectsize:
459 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(0),
460 PostorderStack, Visited);
461 break;
462 case Intrinsic::is_constant: {
463 Value *Ptr = II->getArgOperand(0);
464 if (Ptr->getType()->isPtrOrPtrVectorTy()) {
465 appendsFlatAddressExpressionToPostorderStack(Ptr, PostorderStack,
466 Visited);
467 }
468
469 break;
470 }
471 case Intrinsic::masked_load:
472 case Intrinsic::masked_gather:
473 case Intrinsic::prefetch:
474 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(0),
475 PostorderStack, Visited);
476 break;
477 case Intrinsic::masked_store:
478 case Intrinsic::masked_scatter:
479 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(1),
480 PostorderStack, Visited);
481 break;
482 case Intrinsic::fake_use: {
483 for (Value *Op : II->operands()) {
484 if (Op->getType()->isPtrOrPtrVectorTy()) {
485 appendsFlatAddressExpressionToPostorderStack(Op, PostorderStack,
486 Visited);
487 }
488 }
489
490 break;
491 }
492 case Intrinsic::lifetime_start:
493 case Intrinsic::lifetime_end: {
494 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(0),
495 PostorderStack, Visited);
496 break;
497 }
498 default:
499 SmallVector<int, 2> OpIndexes;
500 if (TTI->collectFlatAddressOperands(OpIndexes, IID)) {
501 for (int Idx : OpIndexes) {
502 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(Idx),
503 PostorderStack, Visited);
504 }
505 }
506 break;
507 }
508}
509
511// If V is an unvisited flat address expression, appends V to PostorderStack
512// and marks it as visited.
513void InferAddressSpacesImpl::appendsFlatAddressExpressionToPostorderStack(
514 Value *V, PostorderStackTy &PostorderStack,
515 DenseSet<Value *> &Visited) const {
516 assert(V->getType()->isPtrOrPtrVectorTy());
517
518 // Generic addressing expressions may be hidden in nested constant
519 // expressions.
520 if (ConstantExpr *CE = dyn_cast<ConstantExpr>(V)) {
521 // TODO: Look in non-address parts, like icmp operands.
522 if (isAddressExpression(*CE, *DL, TTI) && Visited.insert(CE).second)
523 PostorderStack.emplace_back(CE, false);
524
525 return;
526 }
527
528 if (V->getType()->getPointerAddressSpace() == FlatAddrSpace &&
529 isAddressExpression(*V, *DL, TTI)) {
530 if (Visited.insert(V).second) {
531 PostorderStack.emplace_back(V, false);
532
533 if (auto *Op = dyn_cast<Operator>(V))
534 for (auto &O : Op->operands())
535 if (ConstantExpr *CE = dyn_cast<ConstantExpr>(O))
536 if (isAddressExpression(*CE, *DL, TTI) && Visited.insert(CE).second)
537 PostorderStack.emplace_back(CE, false);
538 }
539 }
540}
541
542// Returns all flat address expressions in function F. The elements are ordered
543// in postorder.
544std::vector<WeakTrackingVH>
545InferAddressSpacesImpl::collectFlatAddressExpressions(Function &F) const {
546 // This function implements a non-recursive postorder traversal of a partial
547 // use-def graph of function F.
548 PostorderStackTy PostorderStack;
549 // The set of visited expressions.
550 DenseSet<Value *> Visited;
551
552 auto PushPtrOperand = [&](Value *Ptr) {
553 appendsFlatAddressExpressionToPostorderStack(Ptr, PostorderStack, Visited);
554 };
555
556 // Look at operations that may be interesting to accelerate by moving to a
557 // known address space. We primarily aim at loads and stores, but pure
558 // addressing calculations may also be faster.
559 for (Instruction &I : instructions(F)) {
560 if (auto *GEP = dyn_cast<GetElementPtrInst>(&I)) {
561 PushPtrOperand(GEP->getPointerOperand());
562 } else if (auto *LI = dyn_cast<LoadInst>(&I))
563 PushPtrOperand(LI->getPointerOperand());
564 else if (auto *SI = dyn_cast<StoreInst>(&I))
565 PushPtrOperand(SI->getPointerOperand());
566 else if (auto *RMW = dyn_cast<AtomicRMWInst>(&I))
567 PushPtrOperand(RMW->getPointerOperand());
568 else if (auto *CmpX = dyn_cast<AtomicCmpXchgInst>(&I))
569 PushPtrOperand(CmpX->getPointerOperand());
570 else if (auto *MI = dyn_cast<MemIntrinsic>(&I)) {
571 // For memset/memcpy/memmove, any pointer operand can be replaced.
572 PushPtrOperand(MI->getRawDest());
573
574 // Handle 2nd operand for memcpy/memmove.
575 if (auto *MTI = dyn_cast<MemTransferInst>(MI))
576 PushPtrOperand(MTI->getRawSource());
577 } else if (auto *II = dyn_cast<IntrinsicInst>(&I))
578 collectRewritableIntrinsicOperands(II, PostorderStack, Visited);
579 else if (ICmpInst *Cmp = dyn_cast<ICmpInst>(&I)) {
580 if (Cmp->getOperand(0)->getType()->isPtrOrPtrVectorTy()) {
581 PushPtrOperand(Cmp->getOperand(0));
582 PushPtrOperand(Cmp->getOperand(1));
583 }
584 } else if (auto *ASC = dyn_cast<AddrSpaceCastInst>(&I)) {
585 PushPtrOperand(ASC->getPointerOperand());
586 } else if (auto *I2P = dyn_cast<IntToPtrInst>(&I)) {
587 if (isNoopPtrIntCastPair(cast<Operator>(I2P), *DL, TTI))
588 PushPtrOperand(cast<Operator>(I2P->getOperand(0))->getOperand(0));
589 } else if (auto *RI = dyn_cast<ReturnInst>(&I)) {
590 if (auto *RV = RI->getReturnValue();
591 RV && RV->getType()->isPtrOrPtrVectorTy())
592 PushPtrOperand(RV);
593 }
594 }
595
596 std::vector<WeakTrackingVH> Postorder; // The resultant postorder.
597 while (!PostorderStack.empty()) {
598 Value *TopVal = PostorderStack.back().getPointer();
599 // If the operands of the expression on the top are already explored,
600 // adds that expression to the resultant postorder.
601 if (PostorderStack.back().getInt()) {
602 if (TopVal->getType()->getPointerAddressSpace() == FlatAddrSpace)
603 Postorder.push_back(TopVal);
604 PostorderStack.pop_back();
605 continue;
606 }
607 // Otherwise, adds its operands to the stack and explores them.
608 PostorderStack.back().setInt(true);
609 // Skip values with an assumed address space.
610 if (TTI->getAssumedAddrSpace(TopVal) == UninitializedAddressSpace) {
611 for (Value *PtrOperand : getPointerOperands(*TopVal, *DL, TTI)) {
612 appendsFlatAddressExpressionToPostorderStack(PtrOperand, PostorderStack,
613 Visited);
614 }
615 }
616 }
617 return Postorder;
618}
619
620// A helper function for cloneInstructionWithNewAddressSpace. Returns the clone
621// of OperandUse.get() in the new address space. If the clone is not ready yet,
622// returns poison in the new address space as a placeholder.
623static Value *operandWithNewAddressSpaceOrCreatePoison(
624 const Use &OperandUse, unsigned NewAddrSpace,
625 const ValueToValueMapTy &ValueWithNewAddrSpace,
626 const PredicatedAddrSpaceMapTy &PredicatedAS,
627 SmallVectorImpl<const Use *> *PoisonUsesToFix) {
628 Value *Operand = OperandUse.get();
629
630 Type *NewPtrTy = getPtrOrVecOfPtrsWithNewAS(Operand->getType(), NewAddrSpace);
631
632 if (Constant *C = dyn_cast<Constant>(Operand))
633 return ConstantExpr::getAddrSpaceCast(C, NewPtrTy);
634
635 if (Value *NewOperand = ValueWithNewAddrSpace.lookup(Operand))
636 return NewOperand;
637
638 Instruction *Inst = cast<Instruction>(OperandUse.getUser());
639 auto I = PredicatedAS.find(std::make_pair(Inst, Operand));
640 if (I != PredicatedAS.end()) {
641 // Insert an addrspacecast on that operand before the user.
642 unsigned NewAS = I->second;
643 Type *NewPtrTy = getPtrOrVecOfPtrsWithNewAS(Operand->getType(), NewAS);
644 auto *NewI = new AddrSpaceCastInst(Operand, NewPtrTy);
645 NewI->insertBefore(Inst->getIterator());
646 NewI->setDebugLoc(Inst->getDebugLoc());
647 return NewI;
648 }
649
650 PoisonUsesToFix->push_back(&OperandUse);
651 return PoisonValue::get(NewPtrTy);
652}
653
654// Returns a clone of `I` with its operands converted to those specified in
655// ValueWithNewAddrSpace. Due to potential cycles in the data flow graph, an
656// operand whose address space needs to be modified might not exist in
657// ValueWithNewAddrSpace. In that case, this function uses poison as a placeholder
658// operand and adds that use to PoisonUsesToFix so that the caller can fix them later.
659//
660// Note that we do not necessarily clone `I`, e.g., if it is an addrspacecast
661// from a pointer whose type already matches. Therefore, this function returns a
662// Value* instead of an Instruction*.
663//
664// This may also return nullptr in the case the instruction could not be
665// rewritten.
666Value *InferAddressSpacesImpl::cloneInstructionWithNewAddressSpace(
667 Instruction *I, unsigned NewAddrSpace,
668 const ValueToValueMapTy &ValueWithNewAddrSpace,
669 const PredicatedAddrSpaceMapTy &PredicatedAS,
670 SmallVectorImpl<const Use *> *PoisonUsesToFix) const {
671 Type *NewPtrType = getPtrOrVecOfPtrsWithNewAS(I->getType(), NewAddrSpace);
672
673 if (I->getOpcode() == Instruction::AddrSpaceCast) {
674 Value *Src = I->getOperand(0);
675 // Because `I` is flat, the source address space must be specific.
676 // Therefore, the inferred address space must be the source space, according
677 // to our algorithm.
678 assert(Src->getType()->getPointerAddressSpace() == NewAddrSpace);
679 return Src;
680 }
681
682 if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(I)) {
683 // The called function operand of a call is itself a pointer-typed
684 // argument, so specially handle calls early.
685 assert(II->getIntrinsicID() == Intrinsic::ptrmask);
686 Value *NewPtr = operandWithNewAddressSpaceOrCreatePoison(
687 II->getArgOperandUse(0), NewAddrSpace, ValueWithNewAddrSpace,
688 PredicatedAS, PoisonUsesToFix);
689 Value *Rewrite =
690 TTI->rewriteIntrinsicWithAddressSpace(II, II->getArgOperand(0), NewPtr);
691 if (Rewrite) {
692 assert(Rewrite != II && "cannot modify this pointer operation in place");
693 return Rewrite;
694 }
695
696 return nullptr;
697 }
698
699 unsigned AS = TTI->getAssumedAddrSpace(I);
700 if (AS != UninitializedAddressSpace) {
701 // For the assumed address space, insert an `addrspacecast` to make that
702 // explicit.
703 Type *NewPtrTy = getPtrOrVecOfPtrsWithNewAS(I->getType(), AS);
704 auto *NewI = new AddrSpaceCastInst(I, NewPtrTy);
705 NewI->insertAfter(I->getIterator());
706 NewI->setDebugLoc(I->getDebugLoc());
707 return NewI;
708 }
709
710 // Computes the converted pointer operands.
711 SmallVector<Value *, 4> NewPointerOperands;
712 for (const Use &OperandUse : I->operands()) {
713 if (!OperandUse.get()->getType()->isPtrOrPtrVectorTy())
714 NewPointerOperands.push_back(nullptr);
715 else
716 NewPointerOperands.push_back(operandWithNewAddressSpaceOrCreatePoison(
717 OperandUse, NewAddrSpace, ValueWithNewAddrSpace, PredicatedAS,
718 PoisonUsesToFix));
719 }
720
721 switch (I->getOpcode()) {
722 case Instruction::BitCast:
723 return new BitCastInst(NewPointerOperands[0], NewPtrType);
724 case Instruction::PHI: {
725 assert(I->getType()->isPtrOrPtrVectorTy());
726 PHINode *PHI = cast<PHINode>(I);
727 PHINode *NewPHI = PHINode::Create(NewPtrType, PHI->getNumIncomingValues());
728 for (unsigned Index = 0; Index < PHI->getNumIncomingValues(); ++Index) {
729 unsigned OperandNo = PHINode::getOperandNumForIncomingValue(Index);
730 NewPHI->addIncoming(NewPointerOperands[OperandNo],
731 PHI->getIncomingBlock(Index));
732 }
733 return NewPHI;
734 }
735 case Instruction::GetElementPtr: {
736 GetElementPtrInst *GEP = cast<GetElementPtrInst>(I);
737 GetElementPtrInst *NewGEP = GetElementPtrInst::Create(
738 GEP->getSourceElementType(), NewPointerOperands[0],
739 SmallVector<Value *, 4>(GEP->indices()));
740 NewGEP->setIsInBounds(GEP->isInBounds());
741 return NewGEP;
742 }
743 case Instruction::Select:
744 assert(I->getType()->isPtrOrPtrVectorTy());
745 return SelectInst::Create(I->getOperand(0), NewPointerOperands[1],
746 NewPointerOperands[2], "", nullptr, I);
747 case Instruction::IntToPtr: {
748 assert(isNoopPtrIntCastPair(cast<Operator>(I), *DL, TTI));
749 Value *Src = cast<Operator>(I->getOperand(0))->getOperand(0);
750 if (Src->getType() == NewPtrType)
751 return Src;
752
753 // If we had a no-op inttoptr/ptrtoint pair, we may still have inferred a
754 // source address space from a generic pointer source, and we need to insert
755 // a cast back.
756 return new AddrSpaceCastInst(Src, NewPtrType);
757 }
758 default:
759 llvm_unreachable("Unexpected opcode");
760 }
761}
762
763// Similar to cloneInstructionWithNewAddressSpace, returns a clone of the
764// constant expression `CE` with its operands replaced as specified in
765// ValueWithNewAddrSpace.
766static Value *cloneConstantExprWithNewAddressSpace(
767 ConstantExpr *CE, unsigned NewAddrSpace,
768 const ValueToValueMapTy &ValueWithNewAddrSpace, const DataLayout *DL,
769 const TargetTransformInfo *TTI) {
770 Type *TargetType =
771 CE->getType()->isPtrOrPtrVectorTy()
772 ? getPtrOrVecOfPtrsWithNewAS(CE->getType(), NewAddrSpace)
773 : CE->getType();
774
775 if (CE->getOpcode() == Instruction::AddrSpaceCast) {
776 // Because CE is flat, the source address space must be specific.
777 // Therefore, the inferred address space must be the source space according
778 // to our algorithm.
779 assert(CE->getOperand(0)->getType()->getPointerAddressSpace() ==
780 NewAddrSpace);
781 return CE->getOperand(0);
782 }
783
784 if (CE->getOpcode() == Instruction::BitCast) {
785 if (Value *NewOperand = ValueWithNewAddrSpace.lookup(CE->getOperand(0)))
786 return ConstantExpr::getBitCast(cast<Constant>(NewOperand), TargetType);
787 return ConstantExpr::getAddrSpaceCast(CE, TargetType);
788 }
789
790 if (CE->getOpcode() == Instruction::IntToPtr) {
791 assert(isNoopPtrIntCastPair(cast<Operator>(CE), *DL, TTI));
792 Constant *Src = cast<ConstantExpr>(CE->getOperand(0))->getOperand(0);
793 assert(Src->getType()->getPointerAddressSpace() == NewAddrSpace);
794 return Src;
795 }
796
797 // Computes the operands of the new constant expression.
798 bool IsNew = false;
799 SmallVector<Constant *, 4> NewOperands;
800 for (unsigned Index = 0; Index < CE->getNumOperands(); ++Index) {
801 Constant *Operand = CE->getOperand(Index);
802 // If the address space of `Operand` needs to be modified, the new operand
803 // with the new address space should already be in ValueWithNewAddrSpace
804 // because (1) the constant expressions we consider (i.e. addrspacecast,
805 // bitcast, and getelementptr) do not incur cycles in the data flow graph
806 // and (2) this function is called on constant expressions in postorder.
807 if (Value *NewOperand = ValueWithNewAddrSpace.lookup(Operand)) {
808 IsNew = true;
809 NewOperands.push_back(cast<Constant>(NewOperand));
810 continue;
811 }
812 if (auto *CExpr = dyn_cast<ConstantExpr>(Operand))
813 if (Value *NewOperand = cloneConstantExprWithNewAddressSpace(
814 CExpr, NewAddrSpace, ValueWithNewAddrSpace, DL, TTI)) {
815 IsNew = true;
816 NewOperands.push_back(cast<Constant>(NewOperand));
817 continue;
818 }
819 // Otherwise, reuses the old operand.
820 NewOperands.push_back(Operand);
821 }
822
823 // If !IsNew, we will replace the Value with itself. However, replaced values
824 // are assumed to be wrapped in an addrspacecast later, so drop it now.
825 if (!IsNew)
826 return nullptr;
827
828 if (CE->getOpcode() == Instruction::GetElementPtr) {
829 // Needs to specify the source type while constructing a getelementptr
830 // constant expression.
831 return CE->getWithOperands(NewOperands, TargetType, /*OnlyIfReduced=*/false,
832 cast<GEPOperator>(CE)->getSourceElementType());
833 }
834
835 return CE->getWithOperands(NewOperands, TargetType);
836}
837
838// Returns a clone of the value `V`, with its operands replaced as specified in
839// ValueWithNewAddrSpace. This function is called on every flat address
840// expression whose address space needs to be modified, in postorder.
841//
842// See cloneInstructionWithNewAddressSpace for the meaning of PoisonUsesToFix.
843Value *InferAddressSpacesImpl::cloneValueWithNewAddressSpace(
844 Value *V, unsigned NewAddrSpace,
845 const ValueToValueMapTy &ValueWithNewAddrSpace,
846 const PredicatedAddrSpaceMapTy &PredicatedAS,
847 SmallVectorImpl<const Use *> *PoisonUsesToFix) const {
848 // All values in Postorder are flat address expressions.
849 assert(V->getType()->getPointerAddressSpace() == FlatAddrSpace &&
850 isAddressExpression(*V, *DL, TTI));
851
852 if (auto *Arg = dyn_cast<Argument>(V)) {
853 // Arguments are address space casted in the function body, as we do not
854 // want to change the function signature.
855 Function *F = Arg->getParent();
856 BasicBlock::iterator Insert = F->getEntryBlock().getFirstNonPHIIt();
857
858 Type *NewPtrTy = PointerType::get(Arg->getContext(), NewAddrSpace);
859 auto *NewI = new AddrSpaceCastInst(Arg, NewPtrTy);
860 NewI->insertBefore(Insert);
861 return NewI;
862 }
863
864 if (Instruction *I = dyn_cast<Instruction>(V)) {
865 Value *NewV = cloneInstructionWithNewAddressSpace(
866 I, NewAddrSpace, ValueWithNewAddrSpace, PredicatedAS, PoisonUsesToFix);
867 if (Instruction *NewI = dyn_cast_or_null<Instruction>(NewV)) {
868 if (NewI->getParent() == nullptr) {
869 NewI->insertBefore(I->getIterator());
870 NewI->takeName(I);
871 NewI->setDebugLoc(I->getDebugLoc());
872 }
873 }
874 return NewV;
875 }
876
877 return cloneConstantExprWithNewAddressSpace(
878 cast<ConstantExpr>(V), NewAddrSpace, ValueWithNewAddrSpace, DL, TTI);
879}
880
881// Defines the join operation on the address space lattice (see the file header
882// comments).
883unsigned InferAddressSpacesImpl::joinAddressSpaces(unsigned AS1,
884 unsigned AS2) const {
885 if (AS1 == FlatAddrSpace || AS2 == FlatAddrSpace)
886 return FlatAddrSpace;
887
888 if (AS1 == UninitializedAddressSpace)
889 return AS2;
890 if (AS2 == UninitializedAddressSpace)
891 return AS1;
892
893 // The join of two different specific address spaces is flat.
894 return (AS1 == AS2) ? AS1 : FlatAddrSpace;
895}
896
897bool InferAddressSpacesImpl::run(Function &CurFn) {
898 F = &CurFn;
899 DL = &F->getDataLayout();
900
901 if (AssumeDefaultIsFlatAddressSpace)
902 FlatAddrSpace = 0;
903
904 if (FlatAddrSpace == UninitializedAddressSpace) {
905 FlatAddrSpace = TTI->getFlatAddressSpace();
906 if (FlatAddrSpace == UninitializedAddressSpace)
907 return false;
908 }
909
910 // Collects all flat address expressions in postorder.
911 std::vector<WeakTrackingVH> Postorder = collectFlatAddressExpressions(*F);
912
913 // Runs a data-flow analysis to refine the address spaces of every expression
914 // in Postorder.
915 ValueToAddrSpaceMapTy InferredAddrSpace;
916 PredicatedAddrSpaceMapTy PredicatedAS;
917 inferAddressSpaces(Postorder, InferredAddrSpace, PredicatedAS);
918
919 // Changes the address spaces of the flat address expressions that are
920 // inferred to point to a specific address space.
921 return rewriteWithNewAddressSpaces(Postorder, InferredAddrSpace,
922 PredicatedAS);
923}
924
925// Constants need to be tracked through RAUW to handle cases with nested
926// constant expressions, so wrap values in WeakTrackingVH.
927void InferAddressSpacesImpl::inferAddressSpaces(
928 ArrayRef<WeakTrackingVH> Postorder,
929 ValueToAddrSpaceMapTy &InferredAddrSpace,
930 PredicatedAddrSpaceMapTy &PredicatedAS) const {
931 SetVector<Value *> Worklist(llvm::from_range, Postorder);
932 // Initially, all expressions are in the uninitialized address space.
933 for (Value *V : Postorder)
934 InferredAddrSpace[V] = UninitializedAddressSpace;
935
936 while (!Worklist.empty()) {
937 Value *V = Worklist.pop_back_val();
938
939 // Try to update the address space of V according to the address
940 // spaces of its operands.
941 if (!updateAddressSpace(*V, InferredAddrSpace, PredicatedAS))
942 continue;
943
944 for (Value *User : V->users()) {
945 // Skip if User is already in the worklist.
946 if (Worklist.count(User))
947 continue;
948
949 auto Pos = InferredAddrSpace.find(User);
950 // Our algorithm only updates the address spaces of flat address
951 // expressions, which are those in InferredAddrSpace.
952 if (Pos == InferredAddrSpace.end())
953 continue;
954
955 // Function updateAddressSpace moves the address space down a lattice
956 // path. Therefore, nothing to do if User is already inferred as flat (the
957 // bottom element in the lattice).
958 if (Pos->second == FlatAddrSpace)
959 continue;
960
961 Worklist.insert(User);
962 }
963 }
964}
965
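// Returns the address space that can be assumed for Ptr at the given use
// context UserCtx, based on a dominating llvm.assume whose predicate the
// target recognizes (see TargetTransformInfo::getPredicatedAddrSpace), or
// UninitializedAddressSpace if no such assumption applies.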
966unsigned
967InferAddressSpacesImpl::getPredicatedAddrSpace(const Value &Ptr,
968 const Value *UserCtx) const {
969 const Instruction *UserCtxI = dyn_cast<Instruction>(UserCtx);
970 if (!UserCtxI)
971 return UninitializedAddressSpace;
972
972
973 const Value *StrippedPtr = Ptr.stripInBoundsOffsets();
974 for (auto &AssumeVH : AC.assumptionsFor(StrippedPtr)) {
975 if (!AssumeVH)
976 continue;
977 CallInst *CI = cast<CallInst>(AssumeVH);
978 if (!isValidAssumeForContext(CI, UserCtxI, DT))
979 continue;
980
981 const Value *Ptr;
982 unsigned AS;
983 std::tie(Ptr, AS) = TTI->getPredicatedAddrSpace(CI->getArgOperand(0));
984 if (Ptr)
985 return AS;
986 }
987
988 return UninitializedAddressSpace;
989}
990
991bool InferAddressSpacesImpl::updateAddressSpace(
992 const Value &V, ValueToAddrSpaceMapTy &InferredAddrSpace,
993 PredicatedAddrSpaceMapTy &PredicatedAS) const {
994 assert(InferredAddrSpace.count(&V));
995
996 LLVM_DEBUG(dbgs() << "Updating the address space of\n " << V << '\n');
997
998 // The new inferred address space equals the join of the address spaces
999 // of all its pointer operands.
1000 unsigned NewAS = UninitializedAddressSpace;
1001
1002 // isAddressExpression should guarantee that V is an operator or an argument.
1003 assert(isa<Operator>(V) || isa<Argument>(V));
1004
1005 unsigned AS = TTI->getAssumedAddrSpace(&V);
1006 if (AS != UninitializedAddressSpace) {
1007 // Use the assumed address space directly.
1008 NewAS = AS;
1009 } else {
1010 // Otherwise, infer the address space from its pointer operands.
1011 SmallVector<Constant *, 2> ConstantPtrOps;
1012 for (Value *PtrOperand : getPointerOperands(V, *DL, TTI)) {
1013 auto I = InferredAddrSpace.find(PtrOperand);
1014 unsigned OperandAS;
1015 if (I == InferredAddrSpace.end()) {
1016 OperandAS = PtrOperand->getType()->getPointerAddressSpace();
1017 if (auto *C = dyn_cast<Constant>(PtrOperand);
1018 C && OperandAS == FlatAddrSpace) {
1019 // Defer joining the address space of constant pointer operands.
1020 ConstantPtrOps.push_back(C);
1021 continue;
1022 }
1023 if (OperandAS == FlatAddrSpace) {
1024 // Check AC for assumption dominating V.
1025 unsigned AS = getPredicatedAddrSpace(*PtrOperand, &V);
1026 if (AS != UninitializedAddressSpace) {
1028 << " deduce operand AS from the predicate addrspace "
1029 << AS << '\n');
1030 OperandAS = AS;
1031 // Record this use with the predicated AS.
1032 PredicatedAS[std::make_pair(&V, PtrOperand)] = OperandAS;
1033 }
1034 }
1035 } else
1036 OperandAS = I->second;
1037
1038 // join(flat, *) = flat. So we can break if NewAS is already flat.
1039 NewAS = joinAddressSpaces(NewAS, OperandAS);
1040 if (NewAS == FlatAddrSpace)
1041 break;
1042 }
1043 if (NewAS != FlatAddrSpace && NewAS != UninitializedAddressSpace) {
1044 if (any_of(ConstantPtrOps, [=](Constant *C) {
1045 return !isSafeToCastConstAddrSpace(C, NewAS);
1046 }))
1047 NewAS = FlatAddrSpace;
1048 }
1049 }
1050
1051 unsigned OldAS = InferredAddrSpace.lookup(&V);
1052 assert(OldAS != FlatAddrSpace);
1053 if (OldAS == NewAS)
1054 return false;
1055
1056 // If any updates are made, its users are added to the worklist because
1057 // their address spaces may also need to be updated.
1058 LLVM_DEBUG(dbgs() << " to " << NewAS << '\n');
1059 InferredAddrSpace[&V] = NewAS;
1060 return true;
1061}
1062
1063/// Replace operand \p OpIdx in \p Inst, if the value is the same as \p OldVal
1064/// with \p NewVal.
1065static bool replaceOperandIfSame(Instruction *Inst, unsigned OpIdx,
1066 Value *OldVal, Value *NewVal) {
1067 Use &U = Inst->getOperandUse(OpIdx);
1068 if (U.get() == OldVal) {
1069 U.set(NewVal);
1070 return true;
1071 }
1072
1073 return false;
1074}
1075
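// Replaces the pointer operand of a simple memory instruction (load, store,
// atomicrmw, cmpxchg) with NewV, unless the access is volatile and the target
// reports no volatile variant for the new address space.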
1076template <typename InstrType>
1077static bool replaceSimplePointerUse(const TargetTransformInfo &TTI,
1078 InstrType *MemInstr, unsigned AddrSpace,
1079 Value *OldV, Value *NewV) {
1080 if (!MemInstr->isVolatile() || TTI.hasVolatileVariant(MemInstr, AddrSpace)) {
1081 return replaceOperandIfSame(MemInstr, InstrType::getPointerOperandIndex(),
1082 OldV, NewV);
1083 }
1084
1085 return false;
1086}
1087
1088/// If \p OldV is used as the pointer operand of a compatible memory operation
1089/// \p Inst, replaces the pointer operand with NewV.
1090///
1091/// This covers memory instructions with a single pointer operand that can have
1092/// its address space changed by simply mutating the use to a new value.
1093///
1094/// \returns true if the user replacement was made.
1095static bool replaceIfSimplePointerUse(const TargetTransformInfo &TTI,
1096 User *Inst, unsigned AddrSpace,
1097 Value *OldV, Value *NewV) {
1098 if (auto *LI = dyn_cast<LoadInst>(Inst))
1099 return replaceSimplePointerUse(TTI, LI, AddrSpace, OldV, NewV);
1100
1101 if (auto *SI = dyn_cast<StoreInst>(Inst))
1102 return replaceSimplePointerUse(TTI, SI, AddrSpace, OldV, NewV);
1103
1104 if (auto *RMW = dyn_cast<AtomicRMWInst>(Inst))
1105 return replaceSimplePointerUse(TTI, RMW, AddrSpace, OldV, NewV);
1106
1107 if (auto *CmpX = dyn_cast<AtomicCmpXchgInst>(Inst))
1108 return replaceSimplePointerUse(TTI, CmpX, AddrSpace, OldV, NewV);
1109
1110 return false;
1111}
1112
1113/// Update memory intrinsic uses that require more complex processing than
1114/// simple memory instructions. These require re-mangling and may have multiple
1115/// pointer operands.
1116static bool handleMemIntrinsicPtrUse(MemIntrinsic *MI, Value *OldV,
1117 Value *NewV) {
1118 IRBuilder<> B(MI);
1119 if (auto *MSI = dyn_cast<MemSetInst>(MI)) {
1120 B.CreateMemSet(NewV, MSI->getValue(), MSI->getLength(), MSI->getDestAlign(),
1121 false, // isVolatile
1122 MI->getAAMetadata());
1123 } else if (auto *MTI = dyn_cast<MemTransferInst>(MI)) {
1124 Value *Src = MTI->getRawSource();
1125 Value *Dest = MTI->getRawDest();
1126
1127 // Be careful in case this is a self-to-self copy.
1128 if (Src == OldV)
1129 Src = NewV;
1130
1131 if (Dest == OldV)
1132 Dest = NewV;
1133
1134 if (auto *MCI = dyn_cast<MemCpyInst>(MTI)) {
1135 if (MCI->isForceInlined())
1136 B.CreateMemCpyInline(Dest, MTI->getDestAlign(), Src,
1137 MTI->getSourceAlign(), MTI->getLength(),
1138 false, // isVolatile
1139 MI->getAAMetadata());
1140 else
1141 B.CreateMemCpy(Dest, MTI->getDestAlign(), Src, MTI->getSourceAlign(),
1142 MTI->getLength(),
1143 false, // isVolatile
1144 MI->getAAMetadata());
1145 } else {
1146 assert(isa<MemMoveInst>(MTI));
1147 B.CreateMemMove(Dest, MTI->getDestAlign(), Src, MTI->getSourceAlign(),
1148 MTI->getLength(),
1149 false, // isVolatile
1150 MI->getAAMetadata());
1151 }
1152 } else
1153 llvm_unreachable("unhandled MemIntrinsic");
1154
1155 MI->eraseFromParent();
1156 return true;
1157}
1158
1159// \returns true if it is OK to change the address space of constant \p C with
1160// a ConstantExpr addrspacecast.
1161bool InferAddressSpacesImpl::isSafeToCastConstAddrSpace(Constant *C,
1162 unsigned NewAS) const {
1163 assert(NewAS != UninitializedAddressSpace);
1164
1165 unsigned SrcAS = C->getType()->getPointerAddressSpace();
1166 if (SrcAS == NewAS || isa<UndefValue>(C))
1167 return true;
1168
1169 // Prevent illegal casts between different non-flat address spaces.
1170 if (SrcAS != FlatAddrSpace && NewAS != FlatAddrSpace)
1171 return false;
1172
1173 if (isa<ConstantPointerNull>(C))
1174 return true;
1175
1176 if (auto *Op = dyn_cast<Operator>(C)) {
1177 // If we already have a constant addrspacecast, it should be safe to cast it
1178 // off.
1179 if (Op->getOpcode() == Instruction::AddrSpaceCast)
1180 return isSafeToCastConstAddrSpace(cast<Constant>(Op->getOperand(0)),
1181 NewAS);
1182
1183 if (Op->getOpcode() == Instruction::IntToPtr &&
1184 Op->getType()->getPointerAddressSpace() == FlatAddrSpace)
1185 return true;
1186 }
1187
1188 return false;
1189}
1190
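// Advances the use iterator past all remaining uses belonging to the same
// user, so each user of a value is visited only once even if it references the
// value in several operands.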
1191static Value::use_iterator skipToNextUser(Value::use_iterator I,
1192 Value::use_iterator End) {
1193 User *CurUser = I->getUser();
1194 ++I;
1195
1196 while (I != End && I->getUser() == CurUser)
1197 ++I;
1198
1199 return I;
1200}
1201
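// Rewrites a single use U of V so that it uses NewV instead, dispatching on
// the kind of user: simple memory operations, memory intrinsics, other
// intrinsics, icmp, and addrspacecast are handled specially; any remaining use
// is given an addrspacecast of NewV back to the original flat type.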
1202void InferAddressSpacesImpl::performPointerReplacement(
1203 Value *V, Value *NewV, Use &U, ValueToValueMapTy &ValueWithNewAddrSpace,
1204 SmallVectorImpl<Instruction *> &DeadInstructions) const {
1205
1206 User *CurUser = U.getUser();
1207
1208 unsigned AddrSpace = V->getType()->getPointerAddressSpace();
1209 if (replaceIfSimplePointerUse(*TTI, CurUser, AddrSpace, V, NewV))
1210 return;
1211
1212 // Skip if the current user is the new value itself.
1213 if (CurUser == NewV)
1214 return;
1215
1216 auto *CurUserI = dyn_cast<Instruction>(CurUser);
1217 if (!CurUserI || CurUserI->getFunction() != F)
1218 return;
1219
1220 // Handle more complex cases like intrinsics that need to be remangled.
1221 if (auto *MI = dyn_cast<MemIntrinsic>(CurUser)) {
1222 if (!MI->isVolatile() && handleMemIntrinsicPtrUse(MI, V, NewV))
1223 return;
1224 }
1225
1226 if (auto *II = dyn_cast<IntrinsicInst>(CurUser)) {
1227 if (rewriteIntrinsicOperands(II, V, NewV))
1228 return;
1229 }
1230
1231 if (ICmpInst *Cmp = dyn_cast<ICmpInst>(CurUserI)) {
1232 // If we can infer that both pointers are in the same addrspace,
1233 // transform e.g.
1234 // %cmp = icmp eq float* %p, %q
1235 // into
1236 // %cmp = icmp eq float addrspace(3)* %new_p, %new_q
1237
1238 unsigned NewAS = NewV->getType()->getPointerAddressSpace();
1239 int SrcIdx = U.getOperandNo();
1240 int OtherIdx = (SrcIdx == 0) ? 1 : 0;
1241 Value *OtherSrc = Cmp->getOperand(OtherIdx);
1242
1243 if (Value *OtherNewV = ValueWithNewAddrSpace.lookup(OtherSrc)) {
1244 if (OtherNewV->getType()->getPointerAddressSpace() == NewAS) {
1245 Cmp->setOperand(OtherIdx, OtherNewV);
1246 Cmp->setOperand(SrcIdx, NewV);
1247 return;
1248 }
1249 }
1250
1251 // Even if the type mismatches, we can cast the constant.
1252 if (auto *KOtherSrc = dyn_cast<Constant>(OtherSrc)) {
1253 if (isSafeToCastConstAddrSpace(KOtherSrc, NewAS)) {
1254 Cmp->setOperand(SrcIdx, NewV);
1255 Cmp->setOperand(OtherIdx, ConstantExpr::getAddrSpaceCast(
1256 KOtherSrc, NewV->getType()));
1257 return;
1258 }
1259 }
1260 }
1261
1262 if (AddrSpaceCastInst *ASC = dyn_cast<AddrSpaceCastInst>(CurUserI)) {
1263 unsigned NewAS = NewV->getType()->getPointerAddressSpace();
1264 if (ASC->getDestAddressSpace() == NewAS) {
1265 ASC->replaceAllUsesWith(NewV);
1266 DeadInstructions.push_back(ASC);
1267 return;
1268 }
1269 }
1270
1271 // Otherwise, replaces the use with flat(NewV).
1272 if (isa<Instruction>(V) || isa<Instruction>(NewV)) {
1273 // Don't create a copy of the original addrspacecast.
1274 if (U == V && isa<AddrSpaceCastInst>(V))
1275 return;
1276
1277 // Insert the addrspacecast after NewV.
1278 BasicBlock::iterator InsertPos;
1279 if (Instruction *NewVInst = dyn_cast<Instruction>(NewV))
1280 InsertPos = std::next(NewVInst->getIterator());
1281 else
1282 InsertPos = std::next(cast<Instruction>(V)->getIterator());
1283
1284 while (isa<PHINode>(InsertPos))
1285 ++InsertPos;
1286 // This instruction may contain multiple uses of V, update them all.
1287 CurUser->replaceUsesOfWith(
1288 V, new AddrSpaceCastInst(NewV, V->getType(), "", InsertPos));
1289 } else {
1290 CurUserI->replaceUsesOfWith(
1291 V, ConstantExpr::getAddrSpaceCast(cast<Constant>(NewV), V->getType()));
1292 }
1293}
1294
1295bool InferAddressSpacesImpl::rewriteWithNewAddressSpaces(
1296 ArrayRef<WeakTrackingVH> Postorder,
1297 const ValueToAddrSpaceMapTy &InferredAddrSpace,
1298 const PredicatedAddrSpaceMapTy &PredicatedAS) const {
1299 // For each address expression to be modified, creates a clone of it with its
1300 // pointer operands converted to the new address space. Since the pointer
1301 // operands are converted, the clone is naturally in the new address space by
1302 // construction.
1303 ValueToValueMapTy ValueWithNewAddrSpace;
1304 SmallVector<const Use *, 32> PoisonUsesToFix;
1305 for (Value *V : Postorder) {
1306 unsigned NewAddrSpace = InferredAddrSpace.lookup(V);
1307
1308 // In some degenerate cases (e.g. invalid IR in unreachable code), we may
1309 // not even infer the value to have its original address space.
1310 if (NewAddrSpace == UninitializedAddressSpace)
1311 continue;
1312
1313 if (V->getType()->getPointerAddressSpace() != NewAddrSpace) {
1314 Value *New =
1315 cloneValueWithNewAddressSpace(V, NewAddrSpace, ValueWithNewAddrSpace,
1316 PredicatedAS, &PoisonUsesToFix);
1317 if (New)
1318 ValueWithNewAddrSpace[V] = New;
1319 }
1320 }
1321
1322 if (ValueWithNewAddrSpace.empty())
1323 return false;
1324
1325 // Fixes all the poison uses generated by cloneInstructionWithNewAddressSpace.
1326 for (const Use *PoisonUse : PoisonUsesToFix) {
1327 User *V = PoisonUse->getUser();
1328 User *NewV = cast_or_null<User>(ValueWithNewAddrSpace.lookup(V));
1329 if (!NewV)
1330 continue;
1331
1332 unsigned OperandNo = PoisonUse->getOperandNo();
1333 assert(isa<PoisonValue>(NewV->getOperand(OperandNo)));
1334 NewV->setOperand(OperandNo, ValueWithNewAddrSpace.lookup(PoisonUse->get()));
1335 }
1336
1337 SmallVector<Instruction *, 16> DeadInstructions;
1338 ValueToValueMapTy VMap;
1339 ValueMapper VMapper(VMap, RF_NoModuleLevelChanges | RF_IgnoreMissingLocals);
1340
1341 // Replaces the uses of the old address expressions with the new ones.
1342 for (const WeakTrackingVH &WVH : Postorder) {
1343 assert(WVH && "value was unexpectedly deleted");
1344 Value *V = WVH;
1345 Value *NewV = ValueWithNewAddrSpace.lookup(V);
1346 if (NewV == nullptr)
1347 continue;
1348
1349 LLVM_DEBUG(dbgs() << "Replacing the uses of " << *V << "\n with\n "
1350 << *NewV << '\n');
1351
1352 if (Constant *C = dyn_cast<Constant>(V)) {
1353 Constant *Replace =
1354 ConstantExpr::getAddrSpaceCast(cast<Constant>(NewV), C->getType());
1355 if (C != Replace) {
1356 LLVM_DEBUG(dbgs() << "Inserting replacement const cast: " << Replace
1357 << ": " << *Replace << '\n');
1358 SmallVector<User *, 16> WorkList;
1359 for (User *U : make_early_inc_range(C->users())) {
1360 if (auto *I = dyn_cast<Instruction>(U)) {
1361 if (I->getFunction() == F)
1362 I->replaceUsesOfWith(C, Replace);
1363 } else {
1364 WorkList.append(U->user_begin(), U->user_end());
1365 }
1366 }
1367 if (!WorkList.empty()) {
1368 VMap[C] = Replace;
1369 DenseSet<User *> Visited{WorkList.begin(), WorkList.end()};
1370 while (!WorkList.empty()) {
1371 User *U = WorkList.pop_back_val();
1372 if (auto *I = dyn_cast<Instruction>(U)) {
1373 if (I->getFunction() == F)
1374 VMapper.remapInstruction(*I);
1375 continue;
1376 }
1377 for (User *U2 : U->users())
1378 if (Visited.insert(U2).second)
1379 WorkList.push_back(U2);
1380 }
1381 }
1382 V = Replace;
1383 }
1384 }
1385
1386 Value::use_iterator I, E, Next;
1387 for (I = V->use_begin(), E = V->use_end(); I != E;) {
1388 Use &U = *I;
1389
1390 // Some users may see the same pointer operand in multiple operands. Skip
1391 // to the next instruction.
1392 I = skipToNextUser(I, E);
1393
1394 performPointerReplacement(V, NewV, U, ValueWithNewAddrSpace,
1395 DeadInstructions);
1396 }
1397
1398 if (V->use_empty()) {
1399 if (Instruction *I = dyn_cast<Instruction>(V))
1400 DeadInstructions.push_back(I);
1401 }
1402 }
1403
1404 for (Instruction *I : DeadInstructions)
1406
1407 return true;
1408}
1409
1410bool InferAddressSpaces::runOnFunction(Function &F) {
1411 if (skipFunction(F))
1412 return false;
1413
1414 auto *DTWP = getAnalysisIfAvailable<DominatorTreeWrapperPass>();
1415 DominatorTree *DT = DTWP ? &DTWP->getDomTree() : nullptr;
1416 return InferAddressSpacesImpl(
1417 getAnalysis<AssumptionCacheTracker>().getAssumptionCache(F), DT,
1418 &getAnalysis<TargetTransformInfoWrapperPass>().getTTI(F),
1419 FlatAddrSpace)
1420 .run(F);
1421}
1422
1423FunctionPass *llvm::createInferAddressSpacesPass(unsigned AddressSpace) {
1424 return new InferAddressSpaces(AddressSpace);
1425}
1426
1427InferAddressSpacesPass::InferAddressSpacesPass()
1428 : FlatAddrSpace(UninitializedAddressSpace) {}
1429InferAddressSpacesPass::InferAddressSpacesPass(unsigned AddressSpace)
1430 : FlatAddrSpace(AddressSpace) {}
1431
1432PreservedAnalyses InferAddressSpacesPass::run(Function &F,
1433 FunctionAnalysisManager &AM) {
1434 bool Changed =
1435 InferAddressSpacesImpl(AM.getResult<AssumptionAnalysis>(F),
1436 AM.getCachedResult<DominatorTreeAnalysis>(F),
1437 &AM.getResult<TargetIRAnalysis>(F), FlatAddrSpace)
1438 .run(F);
1439 if (Changed) {
1440 PreservedAnalyses PA;
1441 PA.preserveSet<CFGAnalyses>();
1442 PA.preserve<DominatorTreeAnalysis>();
1443 return PA;
1444 }
1445 return PreservedAnalyses::all();
1446}
assert(UImm &&(UImm !=~static_cast< T >(0)) &&"Invalid immediate!")
Rewrite undef for PHI
MachineBasicBlock MachineBasicBlock::iterator DebugLoc DL
Expand Atomic instructions
static GCRegistry::Add< CoreCLRGC > E("coreclr", "CoreCLR-compatible GC")
static GCRegistry::Add< OcamlGC > B("ocaml", "ocaml 3.10-compatible GC")
This file contains the declarations for the subclasses of Constant, which represent the different fla...
This file defines the DenseMap class.
This file defines the DenseSet and SmallDenseSet classes.
static bool runOnFunction(Function &F, bool PostInlining)
#define DEBUG_TYPE
Hexagon Common GEP
IRTranslator LLVM IR MI
This header defines various interfaces for pass management in LLVM.
This defines the Use class.
static bool replaceIfSimplePointerUse(const TargetTransformInfo &TTI, User *Inst, unsigned AddrSpace, Value *OldV, Value *NewV)
If OldV is used as the pointer operand of a compatible memory operation Inst, replaces the pointer op...
static bool replaceOperandIfSame(Instruction *Inst, unsigned OpIdx, Value *OldVal, Value *NewVal)
Replace operand OpIdx in Inst, if the value is the same as OldVal with NewVal.
static cl::opt< bool > AssumeDefaultIsFlatAddressSpace("assume-default-is-flat-addrspace", cl::init(false), cl::ReallyHidden, cl::desc("The default address space is assumed as the flat address space. " "This is mainly for test purpose."))
static bool isNoopPtrIntCastPair(const Operator *I2P, const DataLayout &DL, const TargetTransformInfo *TTI)
static bool isAddressExpression(const Value &V, const DataLayout &DL, const TargetTransformInfo *TTI)
static bool handleMemIntrinsicPtrUse(MemIntrinsic *MI, Value *OldV, Value *NewV)
Update memory intrinsic uses that require more complex processing than simple memory instructions.
static SmallVector< Value *, 2 > getPointerOperands(const Value &V, const DataLayout &DL, const TargetTransformInfo *TTI)
static Value * operandWithNewAddressSpaceOrCreatePoison(const Use &OperandUse, unsigned NewAddrSpace, const ValueToValueMapTy &ValueWithNewAddrSpace, const PredicatedAddrSpaceMapTy &PredicatedAS, SmallVectorImpl< const Use * > *PoisonUsesToFix)
static Value * cloneConstantExprWithNewAddressSpace(ConstantExpr *CE, unsigned NewAddrSpace, const ValueToValueMapTy &ValueWithNewAddrSpace, const DataLayout *DL, const TargetTransformInfo *TTI)
static Value::use_iterator skipToNextUser(Value::use_iterator I, Value::use_iterator End)
Infer address static false Type * getPtrOrVecOfPtrsWithNewAS(Type *Ty, unsigned NewAddrSpace)
static bool replaceSimplePointerUse(const TargetTransformInfo &TTI, InstrType *MemInstr, unsigned AddrSpace, Value *OldV, Value *NewV)
static const unsigned UninitializedAddressSpace
#define F(x, y, z)
Definition MD5.cpp:55
#define I(x, y, z)
Definition MD5.cpp:58
Machine Check Debug Module
MachineInstr unsigned OpIdx
uint64_t IntrinsicInst * II
#define INITIALIZE_PASS_DEPENDENCY(depName)
Definition PassSupport.h:42
#define INITIALIZE_PASS_END(passName, arg, name, cfg, analysis)
Definition PassSupport.h:44
#define INITIALIZE_PASS_BEGIN(passName, arg, name, cfg, analysis)
Definition PassSupport.h:39
This file implements a set that has insertion order iteration characteristics.
This file defines the SmallVector class.
#define LLVM_DEBUG(...)
Definition Debug.h:114
This pass exposes codegen information to IR-level passes.
This class represents a conversion between pointers from one address space to another.
PassT::Result * getCachedResult(IRUnitT &IR) const
Get the cached result of an analysis pass for a given IR unit.
PassT::Result & getResult(IRUnitT &IR, ExtraArgTs... ExtraArgs)
Get the result of an analysis pass for a given IR unit.
AnalysisUsage & addRequired()
AnalysisUsage & addPreserved()
Add the specified Pass class to the set of analyses preserved by this pass.
LLVM_ABI void setPreservesCFG()
This function should be called by the pass, iff they do not:
Definition Pass.cpp:270
This class represents an incoming formal argument to a Function.
Definition Argument.h:32
A function analysis which provides an AssumptionCache.
An immutable pass that tracks lazily created AssumptionCache objects.
MutableArrayRef< ResultElem > assumptionsFor(const Value *V)
Access the list of assumptions which affect this value.
InstListType::iterator iterator
Instruction iterators...
Definition BasicBlock.h:170
Represents analyses that only rely on functions' control flow.
Definition Analysis.h:73
Value * getArgOperand(unsigned i) const
static LLVM_ABI bool isNoopCast(Instruction::CastOps Opcode, Type *SrcTy, Type *DstTy, const DataLayout &DL)
A no-op cast is one that can be effected without changing any bits.
A constant value that is initialized with an expression using other constant values.
Definition Constants.h:1120
static LLVM_ABI Constant * getAddrSpaceCast(Constant *C, Type *Ty, bool OnlyIfReduced=false)
static LLVM_ABI Constant * getBitCast(Constant *C, Type *Ty, bool OnlyIfReduced=false)
This is an important base class in LLVM.
Definition Constant.h:43
A parsed version of the target data layout string, and methods for querying it.
Definition DataLayout.h:63
Analysis pass which computes a DominatorTree.
Definition Dominators.h:284
FunctionPass class - This class is used to implement most global optimizations.
Definition Pass.h:314
static GetElementPtrInst * Create(Type *PointeeType, Value *Ptr, ArrayRef< Value * > IdxList, const Twine &NameStr="", InsertPosition InsertBefore=nullptr)
LLVM_ABI void setIsInBounds(bool b=true)
Set or clear the inbounds flag on this GEP instruction.
This provides a uniform API for creating instructions and inserting them into a basic block: either a...
Definition IRBuilder.h:2780
const DebugLoc & getDebugLoc() const
Return the debug location for this node as a DebugLoc.
A wrapper class for inspecting calls to intrinsic functions.
This is the common base class for memset/memcpy/memmove.
This is a utility class that provides an abstraction for the common functionality between Instruction...
Definition Operator.h:33
unsigned getOpcode() const
Return the opcode for this Instruction or ConstantExpr.
Definition Operator.h:43
void addIncoming(Value *V, BasicBlock *BB)
Add an incoming value to the end of the PHI list.
static unsigned getOperandNumForIncomingValue(unsigned i)
static PHINode * Create(Type *Ty, unsigned NumReservedValues, const Twine &NameStr="", InsertPosition InsertBefore=nullptr)
Constructors - NumReservedValues is a hint for the number of incoming edges that this phi node will h...
static LLVM_ABI PassRegistry * getPassRegistry()
getPassRegistry - Access the global registry object, which is automatically initialized at applicatio...
static LLVM_ABI PointerType * get(Type *ElementType, unsigned AddressSpace)
This constructs a pointer to an object of the specified type in a numbered address space.
static LLVM_ABI PoisonValue * get(Type *T)
Static factory methods - Return a 'poison' object of the specified type.
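In this pass's context, a poison value of the rewritten pointer type can act as a temporary placeholder for an operand whose new-address-space clone has not been created yet, which is what the "...OrCreatePoison" helper's name refers to. A minimal sketch, with Ctx and NewAS assumed:

#include "llvm/IR/Constants.h"
#include "llvm/IR/DerivedTypes.h"
using namespace llvm;

// Illustrative: build a placeholder of type `ptr addrspace(NewAS)` that can
// later be replaced once the real operand has been cloned.
static Value *makePoisonPlaceholder(LLVMContext &Ctx, unsigned NewAS) {
  PointerType *NewPtrTy = PointerType::get(Ctx, NewAS);
  return PoisonValue::get(NewPtrTy);
}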
A set of analyses that are preserved following a run of a transformation pass.
Definition Analysis.h:112
static PreservedAnalyses all()
Construct a special preserved set that preserves all passes.
Definition Analysis.h:118
PreservedAnalyses & preserveSet()
Mark an analysis set as preserved.
Definition Analysis.h:151
PreservedAnalyses & preserve()
Mark an analysis as preserved.
Definition Analysis.h:132
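The pass rewrites pointer operands but never changes the CFG, so its new-pass-manager entry point can report preservation in the usual way. A sketch of the common idiom, not necessarily this pass's exact body:

#include "llvm/IR/Analysis.h"
#include "llvm/IR/PassManager.h"
using namespace llvm;

// Illustrative: report that only non-CFG analyses may have been invalidated.
static PreservedAnalyses reportPreserved(bool Changed) {
  if (!Changed)
    return PreservedAnalyses::all();
  PreservedAnalyses PA;
  PA.preserveSet<CFGAnalyses>();
  return PA;
}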
static SelectInst * Create(Value *C, Value *S1, Value *S2, const Twine &NameStr="", InsertPosition InsertBefore=nullptr, Instruction *MDFrom=nullptr)
This class consists of common code factored out of the SmallVector class to reduce code duplication b...
void append(ItTy in_start, ItTy in_end)
Add the specified range to the end of the SmallVector.
void push_back(const T &Elt)
This is a 'vector' (really, a variable-sized array), optimized for the case when the array is small.
Analysis pass providing the TargetTransformInfo.
Wrapper pass for TargetTransformInfo.
This pass provides access to the codegen interfaces that are needed for IR-level transformations.
LLVM_ABI unsigned getAssumedAddrSpace(const Value *V) const
LLVM_ABI std::pair< const Value *, unsigned > getPredicatedAddrSpace(const Value *V) const
LLVM_ABI bool collectFlatAddressOperands(SmallVectorImpl< int > &OpIndexes, Intrinsic::ID IID) const
Return any intrinsic address operand indexes which may be rewritten if they use a flat address space ...
LLVM_ABI Value * rewriteIntrinsicWithAddressSpace(IntrinsicInst *II, Value *OldV, Value *NewV) const
Rewrite intrinsic call II such that OldV will be replaced with NewV, which has a different address sp...
LLVM_ABI unsigned getFlatAddressSpace() const
Returns the address space ID for a target's 'flat' address space.
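These TargetTransformInfo hooks are how the target tells an IR pass which address space is "flat" and what may be assumed about particular values. A hedged sketch of querying them up front; FAM and F are the usual function analysis manager and Function parameters, and treating ~0u as "no flat address space" is an assumption consistent with the hook's default behavior:

#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/PassManager.h"
using namespace llvm;

// Illustrative: bail out early when the target does not expose a flat
// (generic) address space, otherwise ask what may be assumed about a value.
static bool worthRunning(Function &F, FunctionAnalysisManager &FAM, const Value *Ptr) {
  const TargetTransformInfo &TTI = FAM.getResult<TargetIRAnalysis>(F);
  unsigned FlatAS = TTI.getFlatAddressSpace();
  if (FlatAS == ~0u) // assumed sentinel for "no flat address space"
    return false;
  unsigned AssumedAS = TTI.getAssumedAddrSpace(Ptr); // target-specific hint
  (void)AssumedAS;
  return true;
}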
The instances of the Type class are immutable: once they are created, they are never changed.
Definition Type.h:45
LLVM_ABI unsigned getPointerAddressSpace() const
Get the address space of this pointer or pointer vector type.
A Use represents the edge between a Value definition and its users.
Definition Use.h:35
User * getUser() const
Returns the User that contains this Use.
Definition Use.h:61
Value * get() const
Definition Use.h:55
LLVM_ABI bool replaceUsesOfWith(Value *From, Value *To)
Replace uses of one Value with another.
Definition User.cpp:21
const Use & getOperandUse(unsigned i) const
Definition User.h:245
void setOperand(unsigned i, Value *Val)
Definition User.h:237
Value * getOperand(unsigned i) const
Definition User.h:232
ValueT lookup(const KeyT &Val) const
lookup - Return the entry for the specified key, or a default constructed value if no such entry exis...
Definition ValueMap.h:169
bool empty() const
Definition ValueMap.h:143
LLVM Value Representation.
Definition Value.h:75
Type * getType() const
All values are typed, get the type of this value.
Definition Value.h:256
use_iterator_impl< Use > use_iterator
Definition Value.h:353
LLVM_ABI const Value * stripPointerCasts() const
Strip off pointer casts, all-zero GEPs and address space casts.
Definition Value.cpp:701
std::pair< iterator, bool > insert(const ValueT &V)
Definition DenseSet.h:194
self_iterator getIterator()
Definition ilist_node.h:130
Changed
#define llvm_unreachable(msg)
Marks that the current location is not supposed to be reachable.
unsigned ID
LLVM IR allows arbitrary numbers to be used as calling convention identifiers.
Definition CallingConv.h:24
@ C
The default llvm calling convention, compatible with C.
Definition CallingConv.h:34
InstrType
This represents what is and is not supported when finding similarity in Instructions.
LLVM_ABI Function * getOrInsertDeclaration(Module *M, ID id, ArrayRef< Type * > Tys={})
Look up the Function declaration of the intrinsic id in the Module M.
initializer< Ty > init(const Ty &Val)
PointerTypeMap run(const Module &M)
Compute the PointerTypeMap for the module M.
@ User
could "use" a pointer
NodeAddr< UseNode * > Use
Definition RDFGraph.h:385
friend class Instruction
Iterator for Instructions in a BasicBlock.
Definition BasicBlock.h:73
This is an optimization pass for GlobalISel generic memory operations.
FunctionAddr VTableAddr Value
Definition InstrProf.h:137
LLVM_ABI bool isValidAssumeForContext(const Instruction *I, const Instruction *CxtI, const DominatorTree *DT=nullptr, bool AllowEphemerals=false)
Return true if it is valid to use the assumptions provided by an assume intrinsic,...
LLVM_ABI bool RecursivelyDeleteTriviallyDeadInstructions(Value *V, const TargetLibraryInfo *TLI=nullptr, MemorySSAUpdater *MSSAU=nullptr, std::function< void(Value *)> AboutToDeleteCallback=std::function< void(Value *)>())
If the specified value is a trivially dead instruction, delete it.
Definition Local.cpp:533
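Once a generic pointer has been replaced by its specific-address-space clone, the original instruction chain frequently becomes dead. A cleanup along these lines, sketched with the public Local.h helpers, removes it:

#include "llvm/Transforms/Utils/Local.h"
using namespace llvm;

// Illustrative: delete the now-unused generic-pointer computation, together
// with any operands that become trivially dead as a result.
static void cleanupDeadGenericPtr(Value *OldV) {
  if (auto *I = dyn_cast<Instruction>(OldV))
    if (isInstructionTriviallyDead(I))
      RecursivelyDeleteTriviallyDeadInstructions(I);
}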
LLVM_ABI void initializeInferAddressSpacesPass(PassRegistry &)
decltype(auto) dyn_cast(const From &Val)
dyn_cast<X> - Return the argument parameter cast to the specified type.
Definition Casting.h:649
constexpr from_range_t from_range
iterator_range< early_inc_iterator_impl< detail::IterOfRange< RangeT > > > make_early_inc_range(RangeT &&Range)
Make a range that does early increment to allow mutation of the underlying range without disrupting i...
Definition STLExtras.h:634
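make_early_inc_range is the idiom a use-rewriting pass relies on when it mutates a value's use list while iterating over it, since plain use iteration would be invalidated by the mutation. A minimal hedged sketch:

#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Value.h"
using namespace llvm;

// Illustrative: redirect every use of OldV held by an instruction to NewV.
// The early-increment range advances past each Use before the loop body can
// invalidate it.
static void redirectUses(Value *OldV, Value *NewV) {
  for (Use &U : make_early_inc_range(OldV->uses()))
    if (isa<Instruction>(U.getUser()))
      U.set(NewV);
}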
auto cast_or_null(const Y &Val)
Definition Casting.h:720
auto dyn_cast_or_null(const Y &Val)
Definition Casting.h:759
bool any_of(R &&range, UnaryPredicate P)
Provide wrappers to std::any_of which take ranges instead of having to pass begin/end explicitly.
Definition STLExtras.h:1712
@ RF_IgnoreMissingLocals
If this flag is set, the remapper ignores missing function-local entries (Argument,...
Definition ValueMapper.h:98
@ RF_NoModuleLevelChanges
If this flag is set, the remapper knows that only local values within a function (such as an instruct...
Definition ValueMapper.h:80
LLVM_ABI raw_ostream & dbgs()
dbgs() - This returns a reference to a raw_ostream for debugging messages.
Definition Debug.cpp:207
class LLVM_GSL_OWNER SmallVector
Forward declaration of SmallVector so that calculateSmallVectorDefaultInlinedElements can reference s...
bool isa(const From &Val)
isa<X> - Return true if the parameter to the template is an instance of one of the template type argu...
Definition Casting.h:548
TargetTransformInfo TTI
FunctionAddr VTableAddr Next
Definition InstrProf.h:141
DWARFExpression::Operation Op
ArrayRef(const T &OneElt) -> ArrayRef< T >
ValueMap< const Value *, WeakTrackingVH > ValueToValueMapTy
LLVM_ABI FunctionPass * createInferAddressSpacesPass(unsigned AddressSpace=~0u)
decltype(auto) cast(const From &Val)
cast<X> - Return the argument parameter cast to the specified type.
Definition Casting.h:565
AnalysisManager< Function > FunctionAnalysisManager
Convenience typedef for the Function analysis manager.
PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM)