InferAddressSpaces.cpp
1//===- InferAddressSpaces.cpp - -------------------------------------------===//
2//
3// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4// See https://llvm.org/LICENSE.txt for license information.
5// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6//
7//===----------------------------------------------------------------------===//
8//
9// CUDA C/C++ includes memory space designation as variable type qualifiers (such
10// as __global__ and __shared__). Knowing the space of a memory access allows
11// CUDA compilers to emit faster PTX loads and stores. For example, a load from
12// shared memory can be translated to `ld.shared` which is roughly 10% faster
13// than a generic `ld` on an NVIDIA Tesla K40c.
14//
15// Unfortunately, type qualifiers only apply to variable declarations, so CUDA
16// compilers must infer the memory space of an address expression from
17// type-qualified variables.
18//
19// LLVM IR uses non-zero (so-called) specific address spaces to represent memory
20// spaces (e.g. addrspace(3) means shared memory). The Clang frontend
21// places only type-qualified variables in specific address spaces, and then
22// conservatively `addrspacecast`s each type-qualified variable to addrspace(0)
23// (the so-called generic address space) for other instructions to use.
24//
25// For example, Clang translates the following CUDA code
26// __shared__ float a[10];
27// float v = a[i];
28// to
29// %0 = addrspacecast [10 x float] addrspace(3)* @a to [10 x float]*
30// %1 = gep [10 x float], [10 x float]* %0, i64 0, i64 %i
31// %v = load float, float* %1 ; emits ld.f32
32// @a is in addrspace(3) since it's type-qualified, but its use from %1 is
33// redirected to %0 (the generic version of @a).
34//
35// The optimization implemented in this file propagates specific address spaces
36// from type-qualified variable declarations to their users. For example, it
37// optimizes the above IR to
38// %1 = gep [10 x float] addrspace(3)* @a, i64 0, i64 %i
39// %v = load float addrspace(3)* %1 ; emits ld.shared.f32
40// propagating the addrspace(3) from @a to %1. As a result, the NVPTX
41// codegen is able to emit ld.shared.f32 for %v.
42//
43// Address space inference works in two steps. First, it uses a data-flow
44// analysis to infer as many generic pointers as possible to point to only one
45// specific address space. In the above example, it can prove that %1 only
46// points to addrspace(3). This algorithm was published in
47// CUDA: Compiling and optimizing for a GPU platform
48// Chakrabarti, Grover, Aarts, Kong, Kudlur, Lin, Marathe, Murphy, Wang
49// ICCS 2012
50//
51// Then, address space inference replaces all refinable generic pointers with
52// equivalent specific pointers.
53//
54// The major challenge of implementing this optimization is handling PHINodes,
55// which may create loops in the data flow graph. This brings two complications.
56//
57// First, the data flow analysis in Step 1 needs to be circular. For example,
58// %generic.input = addrspacecast float addrspace(3)* %input to float*
59// loop:
60// %y = phi [ %generic.input, %y2 ]
61// %y2 = getelementptr %y, 1
62// %v = load %y2
63// br ..., label %loop, ...
64// proving %y specific requires proving both %generic.input and %y2 specific,
65// but proving %y2 specific circles back to %y. To address this complication,
66// the data flow analysis operates on a lattice:
67// uninitialized > specific address spaces > generic.
68// All address expressions (our implementation only considers phi, bitcast,
69// addrspacecast, and getelementptr) start with the uninitialized address space.
70// The monotone transfer function moves the address space of a pointer down a
71// lattice path from uninitialized to specific and then to generic. A join
72// operation of two different specific address spaces pushes the expression down
73// to the generic address space. The analysis completes once it reaches a fixed
74// point.
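//
// As an illustration (assuming addrspace(0) is the generic space and
// addrspace(1)/addrspace(3) are specific spaces), the join behaves like:
//   join(uninitialized, 3) = 3
//   join(3, 3)             = 3
//   join(1, 3)             = 0   (two different specific spaces -> generic)
//   join(0, anything)      = 0   (generic absorbs everything)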
75//
76// Second, IR rewriting in Step 2 also needs to be circular. For example,
77// converting %y to addrspace(3) requires the compiler to know the converted
78// %y2, but converting %y2 needs the converted %y. To address this complication,
79// we break these cycles using "poison" placeholders. When converting an
80// instruction `I` to a new address space, if its operand `Op` is not converted
81// yet, we let `I` temporarily use `poison` and fix all the uses later.
82// For instance, our algorithm first converts %y to
83// %y' = phi float addrspace(3)* [ %input, poison ]
84// Then, it converts %y2 to
85// %y2' = getelementptr %y', 1
86// Finally, it fixes the poison in %y' so that
87// %y' = phi float addrspace(3)* [ %input, %y2' ]
88//
89//===----------------------------------------------------------------------===//
90
92#include "llvm/ADT/ArrayRef.h"
93#include "llvm/ADT/DenseMap.h"
94#include "llvm/ADT/DenseSet.h"
95#include "llvm/ADT/SetVector.h"
100#include "llvm/IR/BasicBlock.h"
101#include "llvm/IR/Constant.h"
102#include "llvm/IR/Constants.h"
103#include "llvm/IR/Dominators.h"
104#include "llvm/IR/Function.h"
105#include "llvm/IR/IRBuilder.h"
106#include "llvm/IR/InstIterator.h"
107#include "llvm/IR/Instruction.h"
108#include "llvm/IR/Instructions.h"
110#include "llvm/IR/Intrinsics.h"
111#include "llvm/IR/LLVMContext.h"
112#include "llvm/IR/Operator.h"
113#include "llvm/IR/PassManager.h"
114#include "llvm/IR/Type.h"
115#include "llvm/IR/Use.h"
116#include "llvm/IR/User.h"
117#include "llvm/IR/Value.h"
118#include "llvm/IR/ValueHandle.h"
120#include "llvm/Pass.h"
121#include "llvm/Support/Casting.h"
123#include "llvm/Support/Debug.h"
129#include <cassert>
130#include <iterator>
131#include <limits>
132#include <utility>
133#include <vector>
134
135#define DEBUG_TYPE "infer-address-spaces"
136
137using namespace llvm;
138
140 "assume-default-is-flat-addrspace", cl::init(false), cl::ReallyHidden,
141 cl::desc("The default address space is assumed as the flat address space. "
142 "This is mainly for test purpose."));
143
144static const unsigned UninitializedAddressSpace =
145 std::numeric_limits<unsigned>::max();
146
147namespace {
148
149using ValueToAddrSpaceMapTy = DenseMap<const Value *, unsigned>;
150// Different from ValueToAddrSpaceMapTy, where a new addrspace is inferred on
151// the *def* of a value, PredicatedAddrSpaceMapTy is a map where a new
152// addrspace is inferred on the *use* of a pointer. This map is introduced to
153// infer addrspace from an addrspace predicate assumption built from an
154// assume intrinsic. In that scenario, only specific uses (under a valid
155// assumption context) can be inferred to have a new addrspace.
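// As an illustration (the exact predicate intrinsics are target-specific;
// this sketch assumes AMDGPU's llvm.amdgcn.is.shared):
//   %is.shared = call i1 @llvm.amdgcn.is.shared(ptr %p)
//   call void @llvm.assume(i1 %is.shared)
//   %v = load float, ptr %p
// Only the load's use of %p, which is dominated by the assume, may be treated
// as addrspace(3); other uses of %p keep the flat address space.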
156using PredicatedAddrSpaceMapTy =
157 DenseMap<std::pair<const Value *, const Value *>, unsigned>;
158using PostorderStackTy = llvm::SmallVector<PointerIntPair<Value *, 1, bool>, 4>;
159
160class InferAddressSpaces : public FunctionPass {
161 unsigned FlatAddrSpace = 0;
162
163public:
164 static char ID;
165
166 InferAddressSpaces()
167 : FunctionPass(ID), FlatAddrSpace(UninitializedAddressSpace) {
168 initializeInferAddressSpacesPass(*PassRegistry::getPassRegistry());
169 }
170 InferAddressSpaces(unsigned AS) : FunctionPass(ID), FlatAddrSpace(AS) {
171 initializeInferAddressSpacesPass(*PassRegistry::getPassRegistry());
172 }
173
174 void getAnalysisUsage(AnalysisUsage &AU) const override {
175 AU.setPreservesCFG();
176 AU.addPreserved<DominatorTreeWrapperPass>();
177 AU.addRequired<AssumptionCacheTracker>();
178 AU.addRequired<TargetTransformInfoWrapperPass>();
179 }
180
181 bool runOnFunction(Function &F) override;
182};
183
184class InferAddressSpacesImpl {
185 AssumptionCache &AC;
186 Function *F = nullptr;
187 const DominatorTree *DT = nullptr;
188 const TargetTransformInfo *TTI = nullptr;
189 const DataLayout *DL = nullptr;
190
191 /// Target-specific address space whose uses should be replaced if
192 /// possible.
193 unsigned FlatAddrSpace = 0;
194
195 // Try to update the address space of V. Returns true if V is updated and
196 // false otherwise.
197 bool updateAddressSpace(const Value &V,
198 ValueToAddrSpaceMapTy &InferredAddrSpace,
199 PredicatedAddrSpaceMapTy &PredicatedAS) const;
200
201 // Tries to infer the specific address space of each address expression in
202 // Postorder.
203 void inferAddressSpaces(ArrayRef<WeakTrackingVH> Postorder,
204 ValueToAddrSpaceMapTy &InferredAddrSpace,
205 PredicatedAddrSpaceMapTy &PredicatedAS) const;
206
207 bool isSafeToCastConstAddrSpace(Constant *C, unsigned NewAS) const;
208
209 Value *cloneInstructionWithNewAddressSpace(
210 Instruction *I, unsigned NewAddrSpace,
211 const ValueToValueMapTy &ValueWithNewAddrSpace,
212 const PredicatedAddrSpaceMapTy &PredicatedAS,
213 SmallVectorImpl<const Use *> *PoisonUsesToFix) const;
214
215 void performPointerReplacement(
216 Value *V, Value *NewV, Use &U, ValueToValueMapTy &ValueWithNewAddrSpace,
217 SmallVectorImpl<Instruction *> &DeadInstructions) const;
218
219 // Changes the flat address expressions in function F to point to specific
220 // address spaces if InferredAddrSpace says so. Postorder is the postorder of
221 // all flat expressions in the use-def graph of function F.
222 bool rewriteWithNewAddressSpaces(
223 ArrayRef<WeakTrackingVH> Postorder,
224 const ValueToAddrSpaceMapTy &InferredAddrSpace,
225 const PredicatedAddrSpaceMapTy &PredicatedAS) const;
226
227 void appendsFlatAddressExpressionToPostorderStack(
228 Value *V, PostorderStackTy &PostorderStack,
229 DenseSet<Value *> &Visited) const;
230
231 bool rewriteIntrinsicOperands(IntrinsicInst *II, Value *OldV,
232 Value *NewV) const;
233 void collectRewritableIntrinsicOperands(IntrinsicInst *II,
234 PostorderStackTy &PostorderStack,
235 DenseSet<Value *> &Visited) const;
236
237 std::vector<WeakTrackingVH> collectFlatAddressExpressions(Function &F) const;
238
239 Value *cloneValueWithNewAddressSpace(
240 Value *V, unsigned NewAddrSpace,
241 const ValueToValueMapTy &ValueWithNewAddrSpace,
242 const PredicatedAddrSpaceMapTy &PredicatedAS,
243 SmallVectorImpl<const Use *> *PoisonUsesToFix) const;
244 unsigned joinAddressSpaces(unsigned AS1, unsigned AS2) const;
245
246 unsigned getPredicatedAddrSpace(const Value &PtrV,
247 const Value *UserCtx) const;
248
249public:
250 InferAddressSpacesImpl(AssumptionCache &AC, const DominatorTree *DT,
251 const TargetTransformInfo *TTI, unsigned FlatAddrSpace)
252 : AC(AC), DT(DT), TTI(TTI), FlatAddrSpace(FlatAddrSpace) {}
253 bool run(Function &F);
254};
255
256} // end anonymous namespace
257
258char InferAddressSpaces::ID = 0;
259
260INITIALIZE_PASS_BEGIN(InferAddressSpaces, DEBUG_TYPE, "Infer address spaces",
261 false, false)
262INITIALIZE_PASS_DEPENDENCY(AssumptionCacheTracker)
263INITIALIZE_PASS_DEPENDENCY(TargetTransformInfoWrapperPass)
264INITIALIZE_PASS_END(InferAddressSpaces, DEBUG_TYPE, "Infer address spaces",
265 false, false)
266
267static Type *getPtrOrVecOfPtrsWithNewAS(Type *Ty, unsigned NewAddrSpace) {
268 assert(Ty->isPtrOrPtrVectorTy());
269 PointerType *NPT = PointerType::get(Ty->getContext(), NewAddrSpace);
270 return Ty->getWithNewType(NPT);
271}
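
// For example, with NewAddrSpace == 3, a plain `ptr` becomes
// `ptr addrspace(3)` and a `<2 x ptr>` becomes `<2 x ptr addrspace(3)>`.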
272
273// Check whether this is a no-op pointer bitcast, expressed as a pair of
274// `ptrtoint`/`inttoptr`, which is used because there is no no-op pointer
275// bitcast between different address spaces.
276static bool isNoopPtrIntCastPair(const Operator *I2P, const DataLayout &DL,
277 const TargetTransformInfo *TTI) {
278 assert(I2P->getOpcode() == Instruction::IntToPtr);
279 auto *P2I = dyn_cast<Operator>(I2P->getOperand(0));
280 if (!P2I || P2I->getOpcode() != Instruction::PtrToInt)
281 return false;
282 // Check it's really safe to treat that pair of `ptrtoint`/`inttoptr` as a
283 // no-op cast. Besides checking both of them are no-op casts, as the
284 // reinterpreted pointer may be used in other pointer arithmetic, we also
285 // need to double-check that through the target-specific hook. That ensures
286 // the underlying target also agrees that's a no-op address space cast and
287 // pointer bits are preserved.
288 // The current IR spec doesn't have clear rules on address space casts,
289 // especially a clear definition for pointer bits in non-default address
290 // spaces. It would be undefined if that pointer is dereferenced after an
291 // invalid reinterpret cast. Also, because the meaning of pointer bits in
292 // non-default address spaces is unclear in the current spec, pointer
293 // arithmetic may also be undefined after an invalid pointer reinterpret cast.
294 // However, as we confirm through the target hooks that it's a no-op
295 // addrspacecast, it doesn't matter since the bits should be the same.
296 unsigned P2IOp0AS = P2I->getOperand(0)->getType()->getPointerAddressSpace();
297 unsigned I2PAS = I2P->getType()->getPointerAddressSpace();
298 return CastInst::isNoopCast(Instruction::CastOps(I2P->getOpcode()),
299 I2P->getOperand(0)->getType(), I2P->getType(),
300 DL) &&
301 CastInst::isNoopCast(Instruction::CastOps(P2I->getOpcode()),
302 P2I->getOperand(0)->getType(), P2I->getType(),
303 DL) &&
304 (P2IOp0AS == I2PAS || TTI->isNoopAddrSpaceCast(P2IOp0AS, I2PAS));
305}
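
// A sketch of the pattern this recognizes (assuming 64-bit pointers and a
// target that reports addrspace(3) -> addrspace(0) as a no-op cast):
//   %i = ptrtoint ptr addrspace(3) %p to i64
//   %q = inttoptr i64 %i to ptr
// Here %q can be treated like `addrspacecast ptr addrspace(3) %p to ptr`.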
306
307// Returns true if V is an address expression.
308// TODO: Currently, we only consider:
309// - arguments
310// - phi, bitcast, addrspacecast, getelementptr, select, and ptrmask operators
311static bool isAddressExpression(const Value &V, const DataLayout &DL,
312 const TargetTransformInfo *TTI) {
313
314 if (const Argument *Arg = dyn_cast<Argument>(&V))
315 return Arg->getType()->isPointerTy() &&
316 TTI->getAssumedAddrSpace(&V) != UninitializedAddressSpace;
317
318 const Operator *Op = dyn_cast<Operator>(&V);
319 if (!Op)
320 return false;
321
322 switch (Op->getOpcode()) {
323 case Instruction::PHI:
324 assert(Op->getType()->isPtrOrPtrVectorTy());
325 return true;
326 case Instruction::BitCast:
327 case Instruction::AddrSpaceCast:
328 case Instruction::GetElementPtr:
329 return true;
330 case Instruction::Select:
331 return Op->getType()->isPtrOrPtrVectorTy();
332 case Instruction::Call: {
333 const IntrinsicInst *II = dyn_cast<IntrinsicInst>(&V);
334 return II && II->getIntrinsicID() == Intrinsic::ptrmask;
335 }
336 case Instruction::IntToPtr:
337 return isNoopPtrIntCastPair(Op, DL, TTI);
338 default:
339 // That value is an address expression if it has an assumed address space.
340 return TTI->getAssumedAddrSpace(&V) != UninitializedAddressSpace;
341 }
342}
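
// For example, `getelementptr i8, ptr %q, i64 4`, `select i1 %c, ptr %a, ptr %b`
// and `call ptr @llvm.ptrmask.p0.i64(ptr %q, i64 -16)` are address expressions,
// while a plain `load ptr, ptr %q` usually is not (unless the target assumes an
// address space for it).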
343
344// Returns the pointer operands of V.
345//
346// Precondition: V is an address expression.
347static SmallVector<Value *, 2>
348getPointerOperands(const Value &V, const DataLayout &DL,
349 const TargetTransformInfo *TTI) {
350 if (isa<Argument>(&V))
351 return {};
352
353 const Operator &Op = cast<Operator>(V);
354 switch (Op.getOpcode()) {
355 case Instruction::PHI: {
356 auto IncomingValues = cast<PHINode>(Op).incoming_values();
357 return {IncomingValues.begin(), IncomingValues.end()};
358 }
359 case Instruction::BitCast:
360 case Instruction::AddrSpaceCast:
361 case Instruction::GetElementPtr:
362 return {Op.getOperand(0)};
363 case Instruction::Select:
364 return {Op.getOperand(1), Op.getOperand(2)};
365 case Instruction::Call: {
366 const IntrinsicInst &II = cast<IntrinsicInst>(Op);
367 assert(II.getIntrinsicID() == Intrinsic::ptrmask &&
368 "unexpected intrinsic call");
369 return {II.getArgOperand(0)};
370 }
371 case Instruction::IntToPtr: {
373 auto *P2I = cast<Operator>(Op.getOperand(0));
374 return {P2I->getOperand(0)};
375 }
376 default:
377 llvm_unreachable("Unexpected instruction type.");
378 }
379}
380
381bool InferAddressSpacesImpl::rewriteIntrinsicOperands(IntrinsicInst *II,
382 Value *OldV,
383 Value *NewV) const {
384 Module *M = II->getParent()->getParent()->getParent();
385 Intrinsic::ID IID = II->getIntrinsicID();
386 switch (IID) {
387 case Intrinsic::objectsize:
388 case Intrinsic::masked_load: {
389 Type *DestTy = II->getType();
390 Type *SrcTy = NewV->getType();
391 Function *NewDecl =
392 Intrinsic::getOrInsertDeclaration(M, IID, {DestTy, SrcTy});
393 II->setArgOperand(0, NewV);
394 II->setCalledFunction(NewDecl);
395 return true;
396 }
397 case Intrinsic::ptrmask:
398 // This is handled as an address expression, not as a use memory operation.
399 return false;
400 case Intrinsic::masked_gather: {
401 Type *RetTy = II->getType();
402 Type *NewPtrTy = NewV->getType();
403 Function *NewDecl =
404 Intrinsic::getOrInsertDeclaration(M, IID, {RetTy, NewPtrTy});
405 II->setArgOperand(0, NewV);
406 II->setCalledFunction(NewDecl);
407 return true;
408 }
409 case Intrinsic::masked_store:
410 case Intrinsic::masked_scatter: {
411 Type *ValueTy = II->getOperand(0)->getType();
412 Type *NewPtrTy = NewV->getType();
413 Function *NewDecl = Intrinsic::getOrInsertDeclaration(
414 M, II->getIntrinsicID(), {ValueTy, NewPtrTy});
415 II->setArgOperand(1, NewV);
416 II->setCalledFunction(NewDecl);
417 return true;
418 }
419 case Intrinsic::prefetch:
420 case Intrinsic::is_constant: {
421 Function *NewDecl = Intrinsic::getOrInsertDeclaration(
422 M, II->getIntrinsicID(), {NewV->getType()});
423 II->setArgOperand(0, NewV);
424 II->setCalledFunction(NewDecl);
425 return true;
426 }
427 case Intrinsic::fake_use: {
428 II->replaceUsesOfWith(OldV, NewV);
429 return true;
430 }
431 case Intrinsic::lifetime_start:
432 case Intrinsic::lifetime_end: {
433 // Always force lifetime markers to work directly on the alloca.
434 NewV = NewV->stripPointerCasts();
435 Function *NewDecl = Intrinsic::getOrInsertDeclaration(
436 M, II->getIntrinsicID(), {NewV->getType()});
437 II->setArgOperand(0, NewV);
438 II->setCalledFunction(NewDecl);
439 return true;
440 }
441 default: {
442 Value *Rewrite = TTI->rewriteIntrinsicWithAddressSpace(II, OldV, NewV);
443 if (!Rewrite)
444 return false;
445 if (Rewrite != II)
446 II->replaceAllUsesWith(Rewrite);
447 return true;
448 }
449 }
450}
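
// As an illustration (types and names are only an example), rewriting the
// pointer operand of a masked load from the flat space to addrspace(3)
// remangles the intrinsic:
//   %v = call <4 x float> @llvm.masked.load.v4f32.p0(ptr %p, i32 4, <4 x i1> %m, <4 x float> %pt)
// becomes
//   %v = call <4 x float> @llvm.masked.load.v4f32.p3(ptr addrspace(3) %q, i32 4, <4 x i1> %m, <4 x float> %pt)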
451
452void InferAddressSpacesImpl::collectRewritableIntrinsicOperands(
453 IntrinsicInst *II, PostorderStackTy &PostorderStack,
454 DenseSet<Value *> &Visited) const {
455 auto IID = II->getIntrinsicID();
456 switch (IID) {
457 case Intrinsic::ptrmask:
458 case Intrinsic::objectsize:
459 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(0),
460 PostorderStack, Visited);
461 break;
462 case Intrinsic::is_constant: {
463 Value *Ptr = II->getArgOperand(0);
464 if (Ptr->getType()->isPtrOrPtrVectorTy()) {
465 appendsFlatAddressExpressionToPostorderStack(Ptr, PostorderStack,
466 Visited);
467 }
468
469 break;
470 }
471 case Intrinsic::masked_load:
472 case Intrinsic::masked_gather:
473 case Intrinsic::prefetch:
474 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(0),
475 PostorderStack, Visited);
476 break;
477 case Intrinsic::masked_store:
478 case Intrinsic::masked_scatter:
479 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(1),
480 PostorderStack, Visited);
481 break;
482 case Intrinsic::fake_use: {
483 for (Value *Op : II->operands()) {
484 if (Op->getType()->isPtrOrPtrVectorTy()) {
485 appendsFlatAddressExpressionToPostorderStack(Op, PostorderStack,
486 Visited);
487 }
488 }
489
490 break;
491 }
492 case Intrinsic::lifetime_start:
493 case Intrinsic::lifetime_end: {
494 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(0),
495 PostorderStack, Visited);
496 break;
497 }
498 default:
499 SmallVector<int, 2> OpIndexes;
500 if (TTI->collectFlatAddressOperands(OpIndexes, IID)) {
501 for (int Idx : OpIndexes) {
502 appendsFlatAddressExpressionToPostorderStack(II->getArgOperand(Idx),
503 PostorderStack, Visited);
504 }
505 }
506 break;
507 }
508}
509
510// If V is an unvisited flat address expression (i.e. it has not been added
511// to PostorderStack before), appends V to PostorderStack and marks it as
512// visited.
513void InferAddressSpacesImpl::appendsFlatAddressExpressionToPostorderStack(
514 Value *V, PostorderStackTy &PostorderStack,
515 DenseSet<Value *> &Visited) const {
516 assert(V->getType()->isPtrOrPtrVectorTy());
517
518 // Generic addressing expressions may be hidden in nested constant
519 // expressions.
520 if (ConstantExpr *CE = dyn_cast<ConstantExpr>(V)) {
521 // TODO: Look in non-address parts, like icmp operands.
522 if (isAddressExpression(*CE, *DL, TTI) && Visited.insert(CE).second)
523 PostorderStack.emplace_back(CE, false);
524
525 return;
526 }
527
528 if (V->getType()->getPointerAddressSpace() == FlatAddrSpace &&
529 isAddressExpression(*V, *DL, TTI)) {
530 if (Visited.insert(V).second) {
531 PostorderStack.emplace_back(V, false);
532
533 if (auto *Op = dyn_cast<Operator>(V))
534 for (auto &O : Op->operands())
535 if (ConstantExpr *CE = dyn_cast<ConstantExpr>(O))
536 if (isAddressExpression(*CE, *DL, TTI) && Visited.insert(CE).second)
537 PostorderStack.emplace_back(CE, false);
538 }
539 }
540}
541
542// Returns all flat address expressions in function F. The elements are ordered
543// in postorder.
544std::vector<WeakTrackingVH>
545InferAddressSpacesImpl::collectFlatAddressExpressions(Function &F) const {
546 // This function implements a non-recursive postorder traversal of a partial
547 // use-def graph of function F.
548 PostorderStackTy PostorderStack;
549 // The set of visited expressions.
550 DenseSet<Value *> Visited;
551
552 auto PushPtrOperand = [&](Value *Ptr) {
553 appendsFlatAddressExpressionToPostorderStack(Ptr, PostorderStack, Visited);
554 };
555
556 // Look at operations that may be worth accelerating by moving them to a
557 // known address space. We mainly aim at loads and stores, but pure
558 // addressing calculations may also be faster.
559 for (Instruction &I : instructions(F)) {
560 if (auto *GEP = dyn_cast<GetElementPtrInst>(&I)) {
561 PushPtrOperand(GEP->getPointerOperand());
562 } else if (auto *LI = dyn_cast<LoadInst>(&I))
563 PushPtrOperand(LI->getPointerOperand());
564 else if (auto *SI = dyn_cast<StoreInst>(&I))
565 PushPtrOperand(SI->getPointerOperand());
566 else if (auto *RMW = dyn_cast<AtomicRMWInst>(&I))
567 PushPtrOperand(RMW->getPointerOperand());
568 else if (auto *CmpX = dyn_cast<AtomicCmpXchgInst>(&I))
569 PushPtrOperand(CmpX->getPointerOperand());
570 else if (auto *MI = dyn_cast<MemIntrinsic>(&I)) {
571 // For memset/memcpy/memmove, any pointer operand can be replaced.
572 PushPtrOperand(MI->getRawDest());
573
574 // Handle 2nd operand for memcpy/memmove.
575 if (auto *MTI = dyn_cast<MemTransferInst>(MI))
576 PushPtrOperand(MTI->getRawSource());
577 } else if (auto *II = dyn_cast<IntrinsicInst>(&I))
578 collectRewritableIntrinsicOperands(II, PostorderStack, Visited);
579 else if (ICmpInst *Cmp = dyn_cast<ICmpInst>(&I)) {
580 if (Cmp->getOperand(0)->getType()->isPtrOrPtrVectorTy()) {
581 PushPtrOperand(Cmp->getOperand(0));
582 PushPtrOperand(Cmp->getOperand(1));
583 }
584 } else if (auto *ASC = dyn_cast<AddrSpaceCastInst>(&I)) {
585 PushPtrOperand(ASC->getPointerOperand());
586 } else if (auto *I2P = dyn_cast<IntToPtrInst>(&I)) {
587 if (isNoopPtrIntCastPair(cast<Operator>(I2P), *DL, TTI))
588 PushPtrOperand(cast<Operator>(I2P->getOperand(0))->getOperand(0));
589 } else if (auto *RI = dyn_cast<ReturnInst>(&I)) {
590 if (auto *RV = RI->getReturnValue();
591 RV && RV->getType()->isPtrOrPtrVectorTy())
592 PushPtrOperand(RV);
593 }
594 }
595
596 std::vector<WeakTrackingVH> Postorder; // The resultant postorder.
597 while (!PostorderStack.empty()) {
598 Value *TopVal = PostorderStack.back().getPointer();
599 // If the operands of the expression on the top are already explored,
600 // adds that expression to the resultant postorder.
601 if (PostorderStack.back().getInt()) {
602 if (TopVal->getType()->getPointerAddressSpace() == FlatAddrSpace)
603 Postorder.push_back(TopVal);
604 PostorderStack.pop_back();
605 continue;
606 }
607 // Otherwise, adds its operands to the stack and explores them.
608 PostorderStack.back().setInt(true);
609 // Skip values with an assumed address space.
610 if (TTI->getAssumedAddrSpace(TopVal) == UninitializedAddressSpace) {
611 for (Value *PtrOperand : getPointerOperands(*TopVal, *DL, TTI)) {
612 appendsFlatAddressExpressionToPostorderStack(PtrOperand, PostorderStack,
613 Visited);
614 }
615 }
616 }
617 return Postorder;
618}
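
// For the IR in the file header, the returned postorder would roughly be
// { the addrspacecast of @a, the getelementptr that uses it }, i.e. operands
// always appear before their users.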
619
620// Inserts an addrspacecast for a phi node operand, handling the proper
621// insertion position based on the operand type.
622static Value *phiNodeOperandWithNewAddressSpace(AddrSpaceCastInst *NewI,
623 Value *Operand) {
624 auto InsertBefore = [NewI](auto It) {
625 NewI->insertBefore(It);
626 NewI->setDebugLoc(It->getDebugLoc());
627 return NewI;
628 };
629
630 if (auto *Arg = dyn_cast<Argument>(Operand)) {
631 // For arguments, insert the cast at the beginning of entry block.
632 // Consider inserting at the dominating block for better placement.
633 Function *F = Arg->getParent();
634 auto InsertI = F->getEntryBlock().getFirstNonPHIIt();
635 return InsertBefore(InsertI);
636 }
637
638 // No check for Constant here, as constants are already handled.
639 assert(isa<Instruction>(Operand));
640
641 Instruction *OpInst = cast<Instruction>(Operand);
642 if (LLVM_UNLIKELY(OpInst->getOpcode() == Instruction::PHI)) {
643 // If the operand is defined by another PHI node, insert after the first
644 // non-PHI instruction at the corresponding basic block.
645 auto InsertI = OpInst->getParent()->getFirstNonPHIIt();
646 return InsertBefore(InsertI);
647 }
648
649 // Otherwise, insert immediately after the operand definition.
650 NewI->insertAfter(OpInst->getIterator());
651 NewI->setDebugLoc(OpInst->getDebugLoc());
652 return NewI;
653}
654
655// A helper function for cloneInstructionWithNewAddressSpace. Returns the clone
656// of OperandUse.get() in the new address space. If the clone is not ready yet,
657// returns poison in the new address space as a placeholder.
658static Value *operandWithNewAddressSpaceOrCreatePoison(
659 const Use &OperandUse, unsigned NewAddrSpace,
660 const ValueToValueMapTy &ValueWithNewAddrSpace,
661 const PredicatedAddrSpaceMapTy &PredicatedAS,
662 SmallVectorImpl<const Use *> *PoisonUsesToFix) {
663 Value *Operand = OperandUse.get();
664
665 Type *NewPtrTy = getPtrOrVecOfPtrsWithNewAS(Operand->getType(), NewAddrSpace);
666
667 if (Constant *C = dyn_cast<Constant>(Operand))
668 return ConstantExpr::getAddrSpaceCast(C, NewPtrTy);
669
670 if (Value *NewOperand = ValueWithNewAddrSpace.lookup(Operand))
671 return NewOperand;
672
673 Instruction *Inst = cast<Instruction>(OperandUse.getUser());
674 auto I = PredicatedAS.find(std::make_pair(Inst, Operand));
675 if (I != PredicatedAS.end()) {
676 // Insert an addrspacecast on that operand before the user.
677 unsigned NewAS = I->second;
678 Type *NewPtrTy = getPtrOrVecOfPtrsWithNewAS(Operand->getType(), NewAS);
679 auto *NewI = new AddrSpaceCastInst(Operand, NewPtrTy);
680
681 if (LLVM_UNLIKELY(Inst->getOpcode() == Instruction::PHI))
682 return phiNodeOperandWithNewAddressSpace(NewI, Operand);
683
684 NewI->insertBefore(Inst->getIterator());
685 NewI->setDebugLoc(Inst->getDebugLoc());
686 return NewI;
687 }
688
689 PoisonUsesToFix->push_back(&OperandUse);
690 return PoisonValue::get(NewPtrTy);
691}
692
693// Returns a clone of `I` with its operands converted to those specified in
694// ValueWithNewAddrSpace. Due to potential cycles in the data flow graph, an
695// operand whose address space needs to be modified might not exist in
696// ValueWithNewAddrSpace. In that case, uses poison as a placeholder operand and
697// adds that operand use to PoisonUsesToFix so that the caller can fix them later.
698//
699// Note that we do not necessarily clone `I`, e.g., if it is an addrspacecast
700// from a pointer whose type already matches. Therefore, this function returns a
701// Value* instead of an Instruction*.
702//
703// This may also return nullptr in the case the instruction could not be
704// rewritten.
705Value *InferAddressSpacesImpl::cloneInstructionWithNewAddressSpace(
706 Instruction *I, unsigned NewAddrSpace,
707 const ValueToValueMapTy &ValueWithNewAddrSpace,
708 const PredicatedAddrSpaceMapTy &PredicatedAS,
709 SmallVectorImpl<const Use *> *PoisonUsesToFix) const {
710 Type *NewPtrType = getPtrOrVecOfPtrsWithNewAS(I->getType(), NewAddrSpace);
711
712 if (I->getOpcode() == Instruction::AddrSpaceCast) {
713 Value *Src = I->getOperand(0);
714 // Because `I` is flat, the source address space must be specific.
715 // Therefore, the inferred address space must be the source space, according
716 // to our algorithm.
717 assert(Src->getType()->getPointerAddressSpace() == NewAddrSpace);
718 return Src;
719 }
720
721 if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(I)) {
722 // Technically the pointer is passed as a typed argument of the intrinsic, so
723 // specially handle calls early.
724 assert(II->getIntrinsicID() == Intrinsic::ptrmask);
725 Value *NewPtr = operandWithNewAddressSpaceOrCreatePoison(
726 II->getArgOperandUse(0), NewAddrSpace, ValueWithNewAddrSpace,
727 PredicatedAS, PoisonUsesToFix);
728 Value *Rewrite =
729 TTI->rewriteIntrinsicWithAddressSpace(II, II->getArgOperand(0), NewPtr);
730 if (Rewrite) {
731 assert(Rewrite != II && "cannot modify this pointer operation in place");
732 return Rewrite;
733 }
734
735 return nullptr;
736 }
737
738 unsigned AS = TTI->getAssumedAddrSpace(I);
739 if (AS != UninitializedAddressSpace) {
740 // For the assumed address space, insert an `addrspacecast` to make that
741 // explicit.
742 Type *NewPtrTy = getPtrOrVecOfPtrsWithNewAS(I->getType(), AS);
743 auto *NewI = new AddrSpaceCastInst(I, NewPtrTy);
744 NewI->insertAfter(I->getIterator());
745 NewI->setDebugLoc(I->getDebugLoc());
746 return NewI;
747 }
748
749 // Computes the converted pointer operands.
750 SmallVector<Value *, 4> NewPointerOperands;
751 for (const Use &OperandUse : I->operands()) {
752 if (!OperandUse.get()->getType()->isPtrOrPtrVectorTy())
753 NewPointerOperands.push_back(nullptr);
754 else
755 NewPointerOperands.push_back(operandWithNewAddressSpaceOrCreatePoison(
756 OperandUse, NewAddrSpace, ValueWithNewAddrSpace, PredicatedAS,
757 PoisonUsesToFix));
758 }
759
760 switch (I->getOpcode()) {
761 case Instruction::BitCast:
762 return new BitCastInst(NewPointerOperands[0], NewPtrType);
763 case Instruction::PHI: {
764 assert(I->getType()->isPtrOrPtrVectorTy());
765 PHINode *PHI = cast<PHINode>(I);
766 PHINode *NewPHI = PHINode::Create(NewPtrType, PHI->getNumIncomingValues());
767 for (unsigned Index = 0; Index < PHI->getNumIncomingValues(); ++Index) {
768 unsigned OperandNo = PHINode::getOperandNumForIncomingValue(Index);
769 NewPHI->addIncoming(NewPointerOperands[OperandNo],
770 PHI->getIncomingBlock(Index));
771 }
772 return NewPHI;
773 }
774 case Instruction::GetElementPtr: {
775 GetElementPtrInst *GEP = cast<GetElementPtrInst>(I);
776 GetElementPtrInst *NewGEP = GetElementPtrInst::Create(
777 GEP->getSourceElementType(), NewPointerOperands[0],
778 SmallVector<Value *, 4>(GEP->indices()));
779 NewGEP->setIsInBounds(GEP->isInBounds());
780 return NewGEP;
781 }
782 case Instruction::Select:
783 assert(I->getType()->isPtrOrPtrVectorTy());
784 return SelectInst::Create(I->getOperand(0), NewPointerOperands[1],
785 NewPointerOperands[2], "", nullptr, I);
786 case Instruction::IntToPtr: {
788 Value *Src = cast<Operator>(I->getOperand(0))->getOperand(0);
789 if (Src->getType() == NewPtrType)
790 return Src;
791
792 // If we had a no-op inttoptr/ptrtoint pair, we may still have inferred a
793 // source address space from a generic pointer source and need to insert a
794 // cast back.
795 return new AddrSpaceCastInst(Src, NewPtrType);
796 }
797 default:
798 llvm_unreachable("Unexpected opcode");
799 }
800}
801
802// Similar to cloneInstructionWithNewAddressSpace, returns a clone of the
803// constant expression `CE` with its operands replaced as specified in
804// ValueWithNewAddrSpace.
805static Value *cloneConstantExprWithNewAddressSpace(
806 ConstantExpr *CE, unsigned NewAddrSpace,
807 const ValueToValueMapTy &ValueWithNewAddrSpace, const DataLayout *DL,
808 const TargetTransformInfo *TTI) {
809 Type *TargetType =
810 CE->getType()->isPtrOrPtrVectorTy()
811 ? getPtrOrVecOfPtrsWithNewAS(CE->getType(), NewAddrSpace)
812 : CE->getType();
813
814 if (CE->getOpcode() == Instruction::AddrSpaceCast) {
815 // Because CE is flat, the source address space must be specific.
816 // Therefore, the inferred address space must be the source space according
817 // to our algorithm.
818 assert(CE->getOperand(0)->getType()->getPointerAddressSpace() ==
819 NewAddrSpace);
820 return CE->getOperand(0);
821 }
822
823 if (CE->getOpcode() == Instruction::BitCast) {
824 if (Value *NewOperand = ValueWithNewAddrSpace.lookup(CE->getOperand(0)))
825 return ConstantExpr::getBitCast(cast<Constant>(NewOperand), TargetType);
826 return ConstantExpr::getAddrSpaceCast(CE, TargetType);
827 }
828
829 if (CE->getOpcode() == Instruction::IntToPtr) {
831 Constant *Src = cast<ConstantExpr>(CE->getOperand(0))->getOperand(0);
832 assert(Src->getType()->getPointerAddressSpace() == NewAddrSpace);
833 return Src;
834 }
835
836 // Computes the operands of the new constant expression.
837 bool IsNew = false;
838 SmallVector<Constant *, 4> NewOperands;
839 for (unsigned Index = 0; Index < CE->getNumOperands(); ++Index) {
840 Constant *Operand = CE->getOperand(Index);
841 // If the address space of `Operand` needs to be modified, the new operand
842 // with the new address space should already be in ValueWithNewAddrSpace
843 // because (1) the constant expressions we consider (i.e. addrspacecast,
844 // bitcast, and getelementptr) do not incur cycles in the data flow graph
845 // and (2) this function is called on constant expressions in postorder.
846 if (Value *NewOperand = ValueWithNewAddrSpace.lookup(Operand)) {
847 IsNew = true;
848 NewOperands.push_back(cast<Constant>(NewOperand));
849 continue;
850 }
851 if (auto *CExpr = dyn_cast<ConstantExpr>(Operand))
852 if (Value *NewOperand = cloneConstantExprWithNewAddressSpace(
853 CExpr, NewAddrSpace, ValueWithNewAddrSpace, DL, TTI)) {
854 IsNew = true;
855 NewOperands.push_back(cast<Constant>(NewOperand));
856 continue;
857 }
858 // Otherwise, reuses the old operand.
859 NewOperands.push_back(Operand);
860 }
861
862 // If !IsNew, we will replace the Value with itself. However, replaced values
863 // are assumed to be wrapped in an addrspacecast later, so drop it now.
864 if (!IsNew)
865 return nullptr;
866
867 if (CE->getOpcode() == Instruction::GetElementPtr) {
868 // Needs to specify the source type while constructing a getelementptr
869 // constant expression.
870 return CE->getWithOperands(NewOperands, TargetType, /*OnlyIfReduced=*/false,
871 cast<GEPOperator>(CE)->getSourceElementType());
872 }
873
874 return CE->getWithOperands(NewOperands, TargetType);
875}
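
// A sketch of the rewrite for a constant expression, reusing @a from the file
// header (@a lives in addrspace(3)):
//   getelementptr inbounds ([10 x float], ptr addrspacecast (ptr addrspace(3) @a to ptr), i64 0, i64 1)
// is rebuilt as
//   getelementptr inbounds ([10 x float], ptr addrspace(3) @a, i64 0, i64 1)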
876
877// Returns a clone of the value `V`, with its operands replaced as specified in
878// ValueWithNewAddrSpace. This function is called on every flat address
879// expression whose address space needs to be modified, in postorder.
880//
881// See cloneInstructionWithNewAddressSpace for the meaning of PoisonUsesToFix.
882Value *InferAddressSpacesImpl::cloneValueWithNewAddressSpace(
883 Value *V, unsigned NewAddrSpace,
884 const ValueToValueMapTy &ValueWithNewAddrSpace,
885 const PredicatedAddrSpaceMapTy &PredicatedAS,
886 SmallVectorImpl<const Use *> *PoisonUsesToFix) const {
887 // All values in Postorder are flat address expressions.
888 assert(V->getType()->getPointerAddressSpace() == FlatAddrSpace &&
889 isAddressExpression(*V, *DL, TTI));
890
891 if (auto *Arg = dyn_cast<Argument>(V)) {
892 // Arguments are address space casted in the function body, as we do not
893 // want to change the function signature.
894 Function *F = Arg->getParent();
895 BasicBlock::iterator Insert = F->getEntryBlock().getFirstNonPHIIt();
896
897 Type *NewPtrTy = PointerType::get(Arg->getContext(), NewAddrSpace);
898 auto *NewI = new AddrSpaceCastInst(Arg, NewPtrTy);
899 NewI->insertBefore(Insert);
900 return NewI;
901 }
902
903 if (Instruction *I = dyn_cast<Instruction>(V)) {
904 Value *NewV = cloneInstructionWithNewAddressSpace(
905 I, NewAddrSpace, ValueWithNewAddrSpace, PredicatedAS, PoisonUsesToFix);
906 if (Instruction *NewI = dyn_cast_or_null<Instruction>(NewV)) {
907 if (NewI->getParent() == nullptr) {
908 NewI->insertBefore(I->getIterator());
909 NewI->takeName(I);
910 NewI->setDebugLoc(I->getDebugLoc());
911 }
912 }
913 return NewV;
914 }
915
916 return cloneConstantExprWithNewAddressSpace(
917 cast<ConstantExpr>(V), NewAddrSpace, ValueWithNewAddrSpace, DL, TTI);
918}
919
920// Defines the join operation on the address space lattice (see the file header
921// comments).
922unsigned InferAddressSpacesImpl::joinAddressSpaces(unsigned AS1,
923 unsigned AS2) const {
924 if (AS1 == FlatAddrSpace || AS2 == FlatAddrSpace)
925 return FlatAddrSpace;
926
927 if (AS1 == UninitializedAddressSpace)
928 return AS2;
929 if (AS2 == UninitializedAddressSpace)
930 return AS1;
931
932 // The join of two different specific address spaces is flat.
933 return (AS1 == AS2) ? AS1 : FlatAddrSpace;
934}
935
936bool InferAddressSpacesImpl::run(Function &CurFn) {
937 F = &CurFn;
938 DL = &F->getDataLayout();
939
940 if (AssumeDefaultIsFlatAddressSpace)
941 FlatAddrSpace = 0;
942
943 if (FlatAddrSpace == UninitializedAddressSpace) {
944 FlatAddrSpace = TTI->getFlatAddressSpace();
945 if (FlatAddrSpace == UninitializedAddressSpace)
946 return false;
947 }
948
949 // Collects all flat address expressions in postorder.
950 std::vector<WeakTrackingVH> Postorder = collectFlatAddressExpressions(*F);
951
952 // Runs a data-flow analysis to refine the address spaces of every expression
953 // in Postorder.
954 ValueToAddrSpaceMapTy InferredAddrSpace;
955 PredicatedAddrSpaceMapTy PredicatedAS;
956 inferAddressSpaces(Postorder, InferredAddrSpace, PredicatedAS);
957
958 // Changes the address spaces of the flat address expressions that are inferred
959 // to point to a specific address space.
960 return rewriteWithNewAddressSpaces(Postorder, InferredAddrSpace,
961 PredicatedAS);
962}
963
964// Constants need to be tracked through RAUW to handle cases with nested
965// constant expressions, so wrap values in WeakTrackingVH.
966void InferAddressSpacesImpl::inferAddressSpaces(
967 ArrayRef<WeakTrackingVH> Postorder,
968 ValueToAddrSpaceMapTy &InferredAddrSpace,
969 PredicatedAddrSpaceMapTy &PredicatedAS) const {
970 SetVector<Value *> Worklist(llvm::from_range, Postorder);
971 // Initially, all expressions are in the uninitialized address space.
972 for (Value *V : Postorder)
973 InferredAddrSpace[V] = UninitializedAddressSpace;
974
975 while (!Worklist.empty()) {
976 Value *V = Worklist.pop_back_val();
977
978 // Try to update the address space of the stack top according to the
979 // address spaces of its operands.
980 if (!updateAddressSpace(*V, InferredAddrSpace, PredicatedAS))
981 continue;
982
983 for (Value *User : V->users()) {
984 // Skip if User is already in the worklist.
985 if (Worklist.count(User))
986 continue;
987
988 auto Pos = InferredAddrSpace.find(User);
989 // Our algorithm only updates the address spaces of flat address
990 // expressions, which are those in InferredAddrSpace.
991 if (Pos == InferredAddrSpace.end())
992 continue;
993
994 // Function updateAddressSpace moves the address space down a lattice
995 // path. Therefore, nothing to do if User is already inferred as flat (the
996 // bottom element in the lattice).
997 if (Pos->second == FlatAddrSpace)
998 continue;
999
1000 Worklist.insert(User);
1001 }
1002 }
1003}
1004
1005unsigned
1006InferAddressSpacesImpl::getPredicatedAddrSpace(const Value &Ptr,
1007 const Value *UserCtx) const {
1008 const Instruction *UserCtxI = dyn_cast<Instruction>(UserCtx);
1009 if (!UserCtxI)
1010 return UninitializedAddressSpace;
1011
1012 const Value *StrippedPtr = Ptr.stripInBoundsOffsets();
1013 for (auto &AssumeVH : AC.assumptionsFor(StrippedPtr)) {
1014 if (!AssumeVH)
1015 continue;
1016 CallInst *CI = cast<CallInst>(AssumeVH);
1017 if (!isValidAssumeForContext(CI, UserCtxI, DT))
1018 continue;
1019
1020 const Value *Ptr;
1021 unsigned AS;
1022 std::tie(Ptr, AS) = TTI->getPredicatedAddrSpace(CI->getArgOperand(0));
1023 if (Ptr)
1024 return AS;
1025 }
1026
1027 return UninitializedAddressSpace;
1028}
1029
1030bool InferAddressSpacesImpl::updateAddressSpace(
1031 const Value &V, ValueToAddrSpaceMapTy &InferredAddrSpace,
1032 PredicatedAddrSpaceMapTy &PredicatedAS) const {
1033 assert(InferredAddrSpace.count(&V));
1034
1035 LLVM_DEBUG(dbgs() << "Updating the address space of\n " << V << '\n');
1036
1037 // The new inferred address space equals the join of the address spaces
1038 // of all its pointer operands.
1039 unsigned NewAS = UninitializedAddressSpace;
1040
1041 // isAddressExpression should guarantee that V is an operator or an argument.
1043
1044 unsigned AS = TTI->getAssumedAddrSpace(&V);
1045 if (AS != UninitializedAddressSpace) {
1046 // Use the assumed address space directly.
1047 NewAS = AS;
1048 } else {
1049 // Otherwise, infer the address space from its pointer operands.
1050 SmallVector<Constant *, 2> ConstantPtrOps;
1051 for (Value *PtrOperand : getPointerOperands(V, *DL, TTI)) {
1052 auto I = InferredAddrSpace.find(PtrOperand);
1053 unsigned OperandAS;
1054 if (I == InferredAddrSpace.end()) {
1055 OperandAS = PtrOperand->getType()->getPointerAddressSpace();
1056 if (auto *C = dyn_cast<Constant>(PtrOperand);
1057 C && OperandAS == FlatAddrSpace) {
1058 // Defer joining the address space of constant pointer operands.
1059 ConstantPtrOps.push_back(C);
1060 continue;
1061 }
1062 if (OperandAS == FlatAddrSpace) {
1063 // Check AC for assumption dominating V.
1064 unsigned AS = getPredicatedAddrSpace(*PtrOperand, &V);
1065 if (AS != UninitializedAddressSpace) {
1067 << " deduce operand AS from the predicate addrspace "
1068 << AS << '\n');
1069 OperandAS = AS;
1070 // Record this use with the predicated AS.
1071 PredicatedAS[std::make_pair(&V, PtrOperand)] = OperandAS;
1072 }
1073 }
1074 } else
1075 OperandAS = I->second;
1076
1077 // join(flat, *) = flat. So we can break if NewAS is already flat.
1078 NewAS = joinAddressSpaces(NewAS, OperandAS);
1079 if (NewAS == FlatAddrSpace)
1080 break;
1081 }
1082 if (NewAS != FlatAddrSpace && NewAS != UninitializedAddressSpace) {
1083 if (any_of(ConstantPtrOps, [=](Constant *C) {
1084 return !isSafeToCastConstAddrSpace(C, NewAS);
1085 }))
1086 NewAS = FlatAddrSpace;
1087 }
1088 }
1089
1090 unsigned OldAS = InferredAddrSpace.lookup(&V);
1091 assert(OldAS != FlatAddrSpace);
1092 if (OldAS == NewAS)
1093 return false;
1094
1095 // If any update is made, add its users to the worklist because their
1096 // address spaces may also need to be updated.
1097 LLVM_DEBUG(dbgs() << " to " << NewAS << '\n');
1098 InferredAddrSpace[&V] = NewAS;
1099 return true;
1100}
1101
1102/// Replace operand \p OpIdx in \p Inst, if the value is the same as \p OldVal
1103/// with \p NewVal.
1104static bool replaceOperandIfSame(Instruction *Inst, unsigned OpIdx,
1105 Value *OldVal, Value *NewVal) {
1106 Use &U = Inst->getOperandUse(OpIdx);
1107 if (U.get() == OldVal) {
1108 U.set(NewVal);
1109 return true;
1110 }
1111
1112 return false;
1113}
1114
1115template <typename InstrType>
1117 InstrType *MemInstr, unsigned AddrSpace,
1118 Value *OldV, Value *NewV) {
1119 if (!MemInstr->isVolatile() || TTI.hasVolatileVariant(MemInstr, AddrSpace)) {
1120 return replaceOperandIfSame(MemInstr, InstrType::getPointerOperandIndex(),
1121 OldV, NewV);
1122 }
1123
1124 return false;
1125}
1126
1127/// If \p OldV is used as the pointer operand of a compatible memory operation
1128/// \p Inst, replaces the pointer operand with NewV.
1129///
1130/// This covers memory instructions with a single pointer operand that can have
1131/// its address space changed by simply mutating the use to a new value.
1132///
1133/// \returns true if the user replacement was made.
1135 User *Inst, unsigned AddrSpace,
1136 Value *OldV, Value *NewV) {
1137 if (auto *LI = dyn_cast<LoadInst>(Inst))
1138 return replaceSimplePointerUse(TTI, LI, AddrSpace, OldV, NewV);
1139
1140 if (auto *SI = dyn_cast<StoreInst>(Inst))
1141 return replaceSimplePointerUse(TTI, SI, AddrSpace, OldV, NewV);
1142
1143 if (auto *RMW = dyn_cast<AtomicRMWInst>(Inst))
1144 return replaceSimplePointerUse(TTI, RMW, AddrSpace, OldV, NewV);
1145
1146 if (auto *CmpX = dyn_cast<AtomicCmpXchgInst>(Inst))
1147 return replaceSimplePointerUse(TTI, CmpX, AddrSpace, OldV, NewV);
1148
1149 return false;
1150}
1151
1152/// Update memory intrinsic uses that require more complex processing than
1153/// simple memory instructions. These require re-mangling and may have multiple
1154/// pointer operands.
1156 Value *NewV) {
1157 IRBuilder<> B(MI);
1158 if (auto *MSI = dyn_cast<MemSetInst>(MI)) {
1159 B.CreateMemSet(NewV, MSI->getValue(), MSI->getLength(), MSI->getDestAlign(),
1160 false, // isVolatile
1161 MI->getAAMetadata());
1162 } else if (auto *MTI = dyn_cast<MemTransferInst>(MI)) {
1163 Value *Src = MTI->getRawSource();
1164 Value *Dest = MTI->getRawDest();
1165
1166 // Be careful in case this is a self-to-self copy.
1167 if (Src == OldV)
1168 Src = NewV;
1169
1170 if (Dest == OldV)
1171 Dest = NewV;
1172
1173 if (auto *MCI = dyn_cast<MemCpyInst>(MTI)) {
1174 if (MCI->isForceInlined())
1175 B.CreateMemCpyInline(Dest, MTI->getDestAlign(), Src,
1176 MTI->getSourceAlign(), MTI->getLength(),
1177 false, // isVolatile
1178 MI->getAAMetadata());
1179 else
1180 B.CreateMemCpy(Dest, MTI->getDestAlign(), Src, MTI->getSourceAlign(),
1181 MTI->getLength(),
1182 false, // isVolatile
1183 MI->getAAMetadata());
1184 } else {
1186 B.CreateMemMove(Dest, MTI->getDestAlign(), Src, MTI->getSourceAlign(),
1187 MTI->getLength(),
1188 false, // isVolatile
1189 MI->getAAMetadata());
1190 }
1191 } else
1192 llvm_unreachable("unhandled MemIntrinsic");
1193
1194 MI->eraseFromParent();
1195 return true;
1196}
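
// For instance (a sketch only), a memcpy whose destination was inferred to be
// in addrspace(3) is recreated with a remangled intrinsic:
//   call void @llvm.memcpy.p0.p0.i64(ptr align 4 %dst, ptr align 4 %src, i64 %n, i1 false)
// becomes
//   call void @llvm.memcpy.p3.p0.i64(ptr addrspace(3) align 4 %dst.3, ptr align 4 %src, i64 %n, i1 false)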
1197
1198// \returns true if it is OK to change the address space of constant \p C with
1199// a ConstantExpr addrspacecast.
1200bool InferAddressSpacesImpl::isSafeToCastConstAddrSpace(Constant *C,
1201 unsigned NewAS) const {
1203
1204 unsigned SrcAS = C->getType()->getPointerAddressSpace();
1205 if (SrcAS == NewAS || isa<UndefValue>(C))
1206 return true;
1207
1208 // Prevent illegal casts between different non-flat address spaces.
1209 if (SrcAS != FlatAddrSpace && NewAS != FlatAddrSpace)
1210 return false;
1211
1212 if (isa<ConstantPointerNull>(C))
1213 return true;
1214
1215 if (auto *Op = dyn_cast<Operator>(C)) {
1216 // If we already have a constant addrspacecast, it should be safe to cast it
1217 // off.
1218 if (Op->getOpcode() == Instruction::AddrSpaceCast)
1219 return isSafeToCastConstAddrSpace(cast<Constant>(Op->getOperand(0)),
1220 NewAS);
1221
1222 if (Op->getOpcode() == Instruction::IntToPtr &&
1223 Op->getType()->getPointerAddressSpace() == FlatAddrSpace)
1224 return true;
1225 }
1226
1227 return false;
1228}
1229
1230static Value::use_iterator skipToNextUser(Value::use_iterator I,
1231 Value::use_iterator End) {
1232 User *CurUser = I->getUser();
1233 ++I;
1234
1235 while (I != End && I->getUser() == CurUser)
1236 ++I;
1237
1238 return I;
1239}
1240
1241void InferAddressSpacesImpl::performPointerReplacement(
1242 Value *V, Value *NewV, Use &U, ValueToValueMapTy &ValueWithNewAddrSpace,
1243 SmallVectorImpl<Instruction *> &DeadInstructions) const {
1244
1245 User *CurUser = U.getUser();
1246
1247 unsigned AddrSpace = V->getType()->getPointerAddressSpace();
1248 if (replaceIfSimplePointerUse(*TTI, CurUser, AddrSpace, V, NewV))
1249 return;
1250
1251 // Skip if the current user is the new value itself.
1252 if (CurUser == NewV)
1253 return;
1254
1255 auto *CurUserI = dyn_cast<Instruction>(CurUser);
1256 if (!CurUserI || CurUserI->getFunction() != F)
1257 return;
1258
1259 // Handle more complex cases like intrinsic that need to be remangled.
1260 if (auto *MI = dyn_cast<MemIntrinsic>(CurUser)) {
1261 if (!MI->isVolatile() && handleMemIntrinsicPtrUse(MI, V, NewV))
1262 return;
1263 }
1264
1265 if (auto *II = dyn_cast<IntrinsicInst>(CurUser)) {
1266 if (rewriteIntrinsicOperands(II, V, NewV))
1267 return;
1268 }
1269
1270 if (ICmpInst *Cmp = dyn_cast<ICmpInst>(CurUserI)) {
1271 // If we can infer that both pointers are in the same addrspace,
1272 // transform e.g.
1273 // %cmp = icmp eq float* %p, %q
1274 // into
1275 // %cmp = icmp eq float addrspace(3)* %new_p, %new_q
1276
1277 unsigned NewAS = NewV->getType()->getPointerAddressSpace();
1278 int SrcIdx = U.getOperandNo();
1279 int OtherIdx = (SrcIdx == 0) ? 1 : 0;
1280 Value *OtherSrc = Cmp->getOperand(OtherIdx);
1281
1282 if (Value *OtherNewV = ValueWithNewAddrSpace.lookup(OtherSrc)) {
1283 if (OtherNewV->getType()->getPointerAddressSpace() == NewAS) {
1284 Cmp->setOperand(OtherIdx, OtherNewV);
1285 Cmp->setOperand(SrcIdx, NewV);
1286 return;
1287 }
1288 }
1289
1290 // Even if the type mismatches, we can cast the constant.
1291 if (auto *KOtherSrc = dyn_cast<Constant>(OtherSrc)) {
1292 if (isSafeToCastConstAddrSpace(KOtherSrc, NewAS)) {
1293 Cmp->setOperand(SrcIdx, NewV);
1294 Cmp->setOperand(OtherIdx, ConstantExpr::getAddrSpaceCast(
1295 KOtherSrc, NewV->getType()));
1296 return;
1297 }
1298 }
1299 }
1300
1301 if (AddrSpaceCastInst *ASC = dyn_cast<AddrSpaceCastInst>(CurUserI)) {
1302 unsigned NewAS = NewV->getType()->getPointerAddressSpace();
1303 if (ASC->getDestAddressSpace() == NewAS) {
1304 ASC->replaceAllUsesWith(NewV);
1305 DeadInstructions.push_back(ASC);
1306 return;
1307 }
1308 }
1309
1310 // Otherwise, replaces the use with flat(NewV).
1311 if (isa<Instruction>(V) || isa<Instruction>(NewV)) {
1312 // Don't create a copy of the original addrspacecast.
1313 if (U == V && isa<AddrSpaceCastInst>(V))
1314 return;
1315
1316 // Insert the addrspacecast after NewV.
1317 BasicBlock::iterator InsertPos;
1318 if (Instruction *NewVInst = dyn_cast<Instruction>(NewV))
1319 InsertPos = std::next(NewVInst->getIterator());
1320 else
1321 InsertPos = std::next(cast<Instruction>(V)->getIterator());
1322
1323 while (isa<PHINode>(InsertPos))
1324 ++InsertPos;
1325 // This instruction may contain multiple uses of V, update them all.
1326 CurUser->replaceUsesOfWith(
1327 V, new AddrSpaceCastInst(NewV, V->getType(), "", InsertPos));
1328 } else {
1329 CurUserI->replaceUsesOfWith(
1330 V, ConstantExpr::getAddrSpaceCast(cast<Constant>(NewV), V->getType()));
1331 }
1332}
1333
1334bool InferAddressSpacesImpl::rewriteWithNewAddressSpaces(
1335 ArrayRef<WeakTrackingVH> Postorder,
1336 const ValueToAddrSpaceMapTy &InferredAddrSpace,
1337 const PredicatedAddrSpaceMapTy &PredicatedAS) const {
1338 // For each address expression to be modified, creates a clone of it with its
1339 // pointer operands converted to the new address space. Since the pointer
1340 // operands are converted, the clone is naturally in the new address space by
1341 // construction.
1342 ValueToValueMapTy ValueWithNewAddrSpace;
1343 SmallVector<const Use *, 32> PoisonUsesToFix;
1344 for (Value *V : Postorder) {
1345 unsigned NewAddrSpace = InferredAddrSpace.lookup(V);
1346
1347 // In some degenerate cases (e.g. invalid IR in unreachable code), we may
1348 // not even infer the value to have its original address space.
1349 if (NewAddrSpace == UninitializedAddressSpace)
1350 continue;
1351
1352 if (V->getType()->getPointerAddressSpace() != NewAddrSpace) {
1353 Value *New =
1354 cloneValueWithNewAddressSpace(V, NewAddrSpace, ValueWithNewAddrSpace,
1355 PredicatedAS, &PoisonUsesToFix);
1356 if (New)
1357 ValueWithNewAddrSpace[V] = New;
1358 }
1359 }
1360
1361 if (ValueWithNewAddrSpace.empty())
1362 return false;
1363
1364 // Fixes all the poison uses generated by cloneInstructionWithNewAddressSpace.
1365 for (const Use *PoisonUse : PoisonUsesToFix) {
1366 User *V = PoisonUse->getUser();
1367 User *NewV = cast_or_null<User>(ValueWithNewAddrSpace.lookup(V));
1368 if (!NewV)
1369 continue;
1370
1371 unsigned OperandNo = PoisonUse->getOperandNo();
1372 assert(isa<PoisonValue>(NewV->getOperand(OperandNo)));
1373 NewV->setOperand(OperandNo, ValueWithNewAddrSpace.lookup(PoisonUse->get()));
1374 }
1375
1376 SmallVector<Instruction *, 16> DeadInstructions;
1377 ValueToValueMapTy VMap;
1378 ValueMapper VMapper(VMap, RF_NoModuleLevelChanges | RF_IgnoreMissingLocals);
1379
1380 // Replaces the uses of the old address expressions with the new ones.
1381 for (const WeakTrackingVH &WVH : Postorder) {
1382 assert(WVH && "value was unexpectedly deleted");
1383 Value *V = WVH;
1384 Value *NewV = ValueWithNewAddrSpace.lookup(V);
1385 if (NewV == nullptr)
1386 continue;
1387
1388 LLVM_DEBUG(dbgs() << "Replacing the uses of " << *V << "\n with\n "
1389 << *NewV << '\n');
1390
1391 if (Constant *C = dyn_cast<Constant>(V)) {
1392 Constant *Replace =
1393 ConstantExpr::getAddrSpaceCast(cast<Constant>(NewV), C->getType());
1394 if (C != Replace) {
1395 LLVM_DEBUG(dbgs() << "Inserting replacement const cast: " << Replace
1396 << ": " << *Replace << '\n');
1397 SmallVector<User *, 16> WorkList;
1398 for (User *U : make_early_inc_range(C->users())) {
1399 if (auto *I = dyn_cast<Instruction>(U)) {
1400 if (I->getFunction() == F)
1401 I->replaceUsesOfWith(C, Replace);
1402 } else {
1403 WorkList.append(U->user_begin(), U->user_end());
1404 }
1405 }
1406 if (!WorkList.empty()) {
1407 VMap[C] = Replace;
1408 DenseSet<User *> Visited{WorkList.begin(), WorkList.end()};
1409 while (!WorkList.empty()) {
1410 User *U = WorkList.pop_back_val();
1411 if (auto *I = dyn_cast<Instruction>(U)) {
1412 if (I->getFunction() == F)
1413 VMapper.remapInstruction(*I);
1414 continue;
1415 }
1416 for (User *U2 : U->users())
1417 if (Visited.insert(U2).second)
1418 WorkList.push_back(U2);
1419 }
1420 }
1421 V = Replace;
1422 }
1423 }
1424
1425 Value::use_iterator I, E, Next;
1426 for (I = V->use_begin(), E = V->use_end(); I != E;) {
1427 Use &U = *I;
1428
1429 // Some users may see the same pointer operand in multiple operands. Skip
1430 // to the next instruction.
1431 I = skipToNextUser(I, E);
1432
1433 performPointerReplacement(V, NewV, U, ValueWithNewAddrSpace,
1434 DeadInstructions);
1435 }
1436
1437 if (V->use_empty()) {
1438 if (Instruction *I = dyn_cast<Instruction>(V))
1439 DeadInstructions.push_back(I);
1440 }
1441 }
1442
1443 for (Instruction *I : DeadInstructions)
1444 RecursivelyDeleteTriviallyDeadInstructions(I);
1445
1446 return true;
1447}
1448
1449bool InferAddressSpaces::runOnFunction(Function &F) {
1450 if (skipFunction(F))
1451 return false;
1452
1453 auto *DTWP = getAnalysisIfAvailable<DominatorTreeWrapperPass>();
1454 DominatorTree *DT = DTWP ? &DTWP->getDomTree() : nullptr;
1455 return InferAddressSpacesImpl(
1456 getAnalysis<AssumptionCacheTracker>().getAssumptionCache(F), DT,
1457 &getAnalysis<TargetTransformInfoWrapperPass>().getTTI(F),
1458 FlatAddrSpace)
1459 .run(F);
1460}
1461
1462FunctionPass *llvm::createInferAddressSpacesPass(unsigned AddressSpace) {
1463 return new InferAddressSpaces(AddressSpace);
1464}
1465
1466InferAddressSpacesPass::InferAddressSpacesPass()
1467 : FlatAddrSpace(UninitializedAddressSpace) {}
1468InferAddressSpacesPass::InferAddressSpacesPass(unsigned AddressSpace)
1469 : FlatAddrSpace(AddressSpace) {}
1470
1471PreservedAnalyses InferAddressSpacesPass::run(Function &F,
1472 FunctionAnalysisManager &AM) {
1473 bool Changed =
1474 InferAddressSpacesImpl(AM.getResult<AssumptionAnalysis>(F),
1475 AM.getCachedResult<DominatorTreeAnalysis>(F),
1476 &AM.getResult<TargetIRAnalysis>(F), FlatAddrSpace)
1477 .run(F);
1478 if (Changed) {
1479 PreservedAnalyses PA;
1480 PA.preserveSet<CFGAnalyses>();
1481 PA.preserve<DominatorTreeAnalysis>();
1482 return PA;
1483 }
1484 return PreservedAnalyses::all();
1485}
assert(UImm &&(UImm !=~static_cast< T >(0)) &&"Invalid immediate!")
Rewrite undef for PHI
MachineBasicBlock MachineBasicBlock::iterator DebugLoc DL
Expand Atomic instructions
static GCRegistry::Add< CoreCLRGC > E("coreclr", "CoreCLR-compatible GC")
static GCRegistry::Add< OcamlGC > B("ocaml", "ocaml 3.10-compatible GC")
#define LLVM_UNLIKELY(EXPR)
Definition Compiler.h:336
This file contains the declarations for the subclasses of Constant, which represent the different fla...
This file defines the DenseMap class.
This file defines the DenseSet and SmallDenseSet classes.
static bool runOnFunction(Function &F, bool PostInlining)
#define DEBUG_TYPE
Hexagon Common GEP
IRTranslator LLVM IR MI
This header defines various interfaces for pass management in LLVM.
This defines the Use class.
static bool replaceIfSimplePointerUse(const TargetTransformInfo &TTI, User *Inst, unsigned AddrSpace, Value *OldV, Value *NewV)
If OldV is used as the pointer operand of a compatible memory operation Inst, replaces the pointer op...
static bool replaceOperandIfSame(Instruction *Inst, unsigned OpIdx, Value *OldVal, Value *NewVal)
Replace operand OpIdx in Inst, if the value is the same as OldVal with NewVal.
static cl::opt< bool > AssumeDefaultIsFlatAddressSpace("assume-default-is-flat-addrspace", cl::init(false), cl::ReallyHidden, cl::desc("The default address space is assumed as the flat address space. " "This is mainly for test purpose."))
static bool isNoopPtrIntCastPair(const Operator *I2P, const DataLayout &DL, const TargetTransformInfo *TTI)
static Value * phiNodeOperandWithNewAddressSpace(AddrSpaceCastInst *NewI, Value *Operand)
static bool isAddressExpression(const Value &V, const DataLayout &DL, const TargetTransformInfo *TTI)
static bool handleMemIntrinsicPtrUse(MemIntrinsic *MI, Value *OldV, Value *NewV)
Update memory intrinsic uses that require more complex processing than simple memory instructions.
static SmallVector< Value *, 2 > getPointerOperands(const Value &V, const DataLayout &DL, const TargetTransformInfo *TTI)
static Value * operandWithNewAddressSpaceOrCreatePoison(const Use &OperandUse, unsigned NewAddrSpace, const ValueToValueMapTy &ValueWithNewAddrSpace, const PredicatedAddrSpaceMapTy &PredicatedAS, SmallVectorImpl< const Use * > *PoisonUsesToFix)
static Value * cloneConstantExprWithNewAddressSpace(ConstantExpr *CE, unsigned NewAddrSpace, const ValueToValueMapTy &ValueWithNewAddrSpace, const DataLayout *DL, const TargetTransformInfo *TTI)
static Value::use_iterator skipToNextUser(Value::use_iterator I, Value::use_iterator End)
Infer address static false Type * getPtrOrVecOfPtrsWithNewAS(Type *Ty, unsigned NewAddrSpace)
static bool replaceSimplePointerUse(const TargetTransformInfo &TTI, InstrType *MemInstr, unsigned AddrSpace, Value *OldV, Value *NewV)
static const unsigned UninitializedAddressSpace
#define F(x, y, z)
Definition MD5.cpp:54
#define I(x, y, z)
Definition MD5.cpp:57
Machine Check Debug Module
MachineInstr unsigned OpIdx
uint64_t IntrinsicInst * II
#define INITIALIZE_PASS_DEPENDENCY(depName)
Definition PassSupport.h:42
#define INITIALIZE_PASS_END(passName, arg, name, cfg, analysis)
Definition PassSupport.h:44
#define INITIALIZE_PASS_BEGIN(passName, arg, name, cfg, analysis)
Definition PassSupport.h:39
This file implements a set that has insertion order iteration characteristics.
This file defines the SmallVector class.
#define LLVM_DEBUG(...)
Definition Debug.h:114
This pass exposes codegen information to IR-level passes.
This class represents a conversion between pointers from one address space to another.
PassT::Result * getCachedResult(IRUnitT &IR) const
Get the cached result of an analysis pass for a given IR unit.
PassT::Result & getResult(IRUnitT &IR, ExtraArgTs... ExtraArgs)
Get the result of an analysis pass for a given IR unit.
AnalysisUsage & addRequired()
AnalysisUsage & addPreserved()
Add the specified Pass class to the set of analyses preserved by this pass.
LLVM_ABI void setPreservesCFG()
This function should be called by a pass iff it does not modify the CFG (e.g. by adding or removing basic blocks, or changing terminator instructions).
Definition Pass.cpp:270
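A typical getAnalysisUsage override combining the three calls above; the concrete analyses named here are assumptions for illustration, not taken from the listing:

void getAnalysisUsage(AnalysisUsage &AU) const override {
  AU.setPreservesCFG();                              // the pass only rewrites instructions
  AU.addRequired<AssumptionCacheTracker>();          // analyses the pass consumes
  AU.addRequired<TargetTransformInfoWrapperPass>();
  AU.addPreserved<DominatorTreeWrapperPass>();       // analyses left intact
}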
This class represents an incoming formal argument to a Function.
Definition Argument.h:32
A function analysis which provides an AssumptionCache.
An immutable pass that tracks lazily created AssumptionCache objects.
MutableArrayRef< ResultElem > assumptionsFor(const Value *V)
Access the list of assumptions which affect this value.
InstListType::iterator iterator
Instruction iterators...
Definition BasicBlock.h:170
Represents analyses that only rely on functions' control flow.
Definition Analysis.h:73
Value * getArgOperand(unsigned i) const
static LLVM_ABI bool isNoopCast(Instruction::CastOps Opcode, Type *SrcTy, Type *DstTy, const DataLayout &DL)
A no-op cast is one that can be effected without changing any bits.
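Whether a ptrtoint or inttoptr changes any bits depends on the pointer and integer widths recorded in the DataLayout. A hedged usage sketch (SrcTy, DstTy, and DL are hypothetical):

bool PtrToIntIsNoop =
    CastInst::isNoopCast(Instruction::PtrToInt, SrcTy, DstTy, DL);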
A constant value that is initialized with an expression using other constant values.
Definition Constants.h:1120
static LLVM_ABI Constant * getAddrSpaceCast(Constant *C, Type *Ty, bool OnlyIfReduced=false)
static LLVM_ABI Constant * getBitCast(Constant *C, Type *Ty, bool OnlyIfReduced=false)
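Illustrative use of the constant-expression factory above: casting a global that lives in a specific address space to a flat pointer type (GV and FlatAS are hypothetical names):

PointerType *FlatPtrTy = PointerType::get(GV->getValueType(), FlatAS);
Constant *FlatGV = ConstantExpr::getAddrSpaceCast(GV, FlatPtrTy);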
This is an important base class in LLVM.
Definition Constant.h:43
A parsed version of the target data layout string, and methods for querying it.
Definition DataLayout.h:63
Analysis pass which computes a DominatorTree.
Definition Dominators.h:283
FunctionPass class - This class is used to implement most global optimizations.
Definition Pass.h:314
static GetElementPtrInst * Create(Type *PointeeType, Value *Ptr, ArrayRef< Value * > IdxList, const Twine &NameStr="", InsertPosition InsertBefore=nullptr)
LLVM_ABI void setIsInBounds(bool b=true)
Set or clear the inbounds flag on this GEP instruction.
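A sketch of recreating a GEP over a pointer that has been rewritten into a new address space, keeping the inbounds flag; NewPtr and InsertPt are hypothetical, and the indices are copied from the old GEP:

SmallVector<Value *, 4> Indices(GEP->idx_begin(), GEP->idx_end());
GetElementPtrInst *NewGEP = GetElementPtrInst::Create(
    GEP->getSourceElementType(), NewPtr, Indices, GEP->getName(), InsertPt);
NewGEP->setIsInBounds(GEP->isInBounds());   // preserve the flag from the original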
This provides a uniform API for creating instructions and inserting them into a basic block: either at the end of a BasicBlock, or at a specific iterator location.
Definition IRBuilder.h:2788
const DebugLoc & getDebugLoc() const
Return the debug location for this node as a DebugLoc.
LLVM_ABI void insertBefore(InstListType::iterator InsertPos)
Insert an unlinked instruction into a basic block immediately before the specified position.
unsigned getOpcode() const
Returns a member of one of the enums like Instruction::Add.
void setDebugLoc(DebugLoc Loc)
Set the debug location information for this instruction.
LLVM_ABI void insertAfter(Instruction *InsertPos)
Insert an unlinked instruction into a basic block immediately after the specified instruction.
A wrapper class for inspecting calls to intrinsic functions.
This is the common base class for memset/memcpy/memmove.
This is a utility class that provides an abstraction for the common functionality between Instructions and ConstantExprs.
Definition Operator.h:33
unsigned getOpcode() const
Return the opcode for this Instruction or ConstantExpr.
Definition Operator.h:43
void addIncoming(Value *V, BasicBlock *BB)
Add an incoming value to the end of the PHI list.
static unsigned getOperandNumForIncomingValue(unsigned i)
static PHINode * Create(Type *Ty, unsigned NumReservedValues, const Twine &NameStr="", InsertPosition InsertBefore=nullptr)
Constructors - NumReservedValues is a hint for the number of incoming edges that this phi node will have (use 0 if you really have no idea).
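A sketch of building a replacement PHI whose pointer type carries an inferred address space and copying over its incoming edges (NewPtrTy, NewIncoming, and PHI are hypothetical names):

PHINode *NewPHI = PHINode::Create(NewPtrTy, PHI->getNumIncomingValues(),
                                  PHI->getName(), PHI->getIterator());
for (unsigned I = 0, E = PHI->getNumIncomingValues(); I != E; ++I)
  NewPHI->addIncoming(NewIncoming[I], PHI->getIncomingBlock(I));  // same predecessor blocks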
static LLVM_ABI PassRegistry * getPassRegistry()
getPassRegistry - Access the global registry object, which is automatically initialized at application startup.
static LLVM_ABI PointerType * get(Type *ElementType, unsigned AddressSpace)
This constructs a pointer to an object of the specified type in a numbered address space.
static LLVM_ABI PoisonValue * get(Type *T)
Static factory methods - Return a 'poison' object of the specified type.
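These two factories are enough to create a typed placeholder for an operand whose rewritten value is not known yet (Int8Ty and SharedAS are hypothetical):

PointerType *SharedPtrTy = PointerType::get(Int8Ty, SharedAS);
Value *Placeholder = PoisonValue::get(SharedPtrTy);   // fixed up in a later step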
A set of analyses that are preserved following a run of a transformation pass.
Definition Analysis.h:112
static PreservedAnalyses all()
Construct a special preserved set that preserves all passes.
Definition Analysis.h:118
PreservedAnalyses & preserveSet()
Mark an analysis set as preserved.
Definition Analysis.h:151
PreservedAnalyses & preserve()
Mark an analysis as preserved.
Definition Analysis.h:132
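The usual way these pieces combine at the end of a function pass run, assuming the pass only rewrites instructions and leaves the CFG alone (Changed is a hypothetical flag):

if (!Changed)
  return PreservedAnalyses::all();
PreservedAnalyses PA;
PA.preserveSet<CFGAnalyses>();          // everything that only depends on the CFG
PA.preserve<DominatorTreeAnalysis>();
return PA;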
static SelectInst * Create(Value *C, Value *S1, Value *S2, const Twine &NameStr="", InsertPosition InsertBefore=nullptr, const Instruction *MDFrom=nullptr)
This class consists of common code factored out of the SmallVector class to reduce code duplication based on the SmallVector 'N' template parameter.
void append(ItTy in_start, ItTy in_end)
Add the specified range to the end of the SmallVector.
void push_back(const T &Elt)
This is a 'vector' (really, a variable-sized array), optimized for the case when the array is small.
Analysis pass providing the TargetTransformInfo.
Wrapper pass for TargetTransformInfo.
This pass provides access to the codegen interfaces that are needed for IR-level transformations.
LLVM_ABI unsigned getAssumedAddrSpace(const Value *V) const
LLVM_ABI std::pair< const Value *, unsigned > getPredicatedAddrSpace(const Value *V) const
LLVM_ABI bool collectFlatAddressOperands(SmallVectorImpl< int > &OpIndexes, Intrinsic::ID IID) const
Return any intrinsic address operand indexes which may be rewritten if they use a flat address space pointer.
LLVM_ABI Value * rewriteIntrinsicWithAddressSpace(IntrinsicInst *II, Value *OldV, Value *NewV) const
Rewrite intrinsic call II such that OldV will be replaced with NewV, which has a different address space.
LLVM_ABI unsigned getFlatAddressSpace() const
Returns the address space ID for a target's 'flat' address space.
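A hedged sketch of how a transform might consult these hooks; UninitializedAddressSpace is the sentinel declared earlier in this listing, TTI is a TargetTransformInfo reference, and Ptr is a hypothetical pointer value:

unsigned FlatAS = TTI.getFlatAddressSpace();
if (FlatAS == UninitializedAddressSpace)
  return false;                                        // target has no flat address space
unsigned AssumedAS = TTI.getAssumedAddrSpace(Ptr);     // e.g. target-known pointer origins
auto [AssumeI, PredAS] = TTI.getPredicatedAddrSpace(Ptr); // address space guarded by an assume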
The instances of the Type class are immutable: once they are created, they are never changed.
Definition Type.h:45
LLVM_ABI unsigned getPointerAddressSpace() const
Get the address space of this pointer or pointer vector type.
bool isPtrOrPtrVectorTy() const
Return true if this is a pointer type or a vector of pointer types.
Definition Type.h:270
A Use represents the edge between a Value definition and its users.
Definition Use.h:35
User * getUser() const
Returns the User that contains this Use.
Definition Use.h:61
Value * get() const
Definition Use.h:55
const Use & getOperandUse(unsigned i) const
Definition User.h:245
void setOperand(unsigned i, Value *Val)
Definition User.h:237
LLVM_ABI bool replaceUsesOfWith(Value *From, Value *To)
Replace uses of one Value with another.
Definition User.cpp:24
Value * getOperand(unsigned i) const
Definition User.h:232
ValueT lookup(const KeyT &Val) const
lookup - Return the entry for the specified key, or a default constructed value if no such entry exists.
Definition ValueMap.h:167
bool empty() const
Definition ValueMap.h:143
LLVM Value Representation.
Definition Value.h:75
Type * getType() const
All values are typed, get the type of this value.
Definition Value.h:256
LLVM_ABI const Value * stripInBoundsOffsets(function_ref< void(const Value *)> Func=[](const Value *) {}) const
Strip off pointer casts and inbounds GEPs.
Definition Value.cpp:812
use_iterator_impl< Use > use_iterator
Definition Value.h:353
LLVM_ABI const Value * stripPointerCasts() const
Strip off pointer casts, all-zero GEPs and address space casts.
Definition Value.cpp:701
std::pair< iterator, bool > insert(const ValueT &V)
Definition DenseSet.h:202
const ParentTy * getParent() const
Definition ilist_node.h:34
self_iterator getIterator()
Definition ilist_node.h:123
#define llvm_unreachable(msg)
Marks that the current location is not supposed to be reachable.
unsigned ID
LLVM IR allows arbitrary numbers to be used as calling convention identifiers.
Definition CallingConv.h:24
@ C
The default llvm calling convention, compatible with C.
Definition CallingConv.h:34
InstrType
This represents what is and is not supported when finding similarity in Instructions.
LLVM_ABI Function * getOrInsertDeclaration(Module *M, ID id, ArrayRef< Type * > Tys={})
Look up the Function declaration of the intrinsic id in the Module M.
initializer< Ty > init(const Ty &Val)
friend class Instruction
Iterator for Instructions in a BasicBlock.
Definition BasicBlock.h:73
This is an optimization pass for GlobalISel generic memory operations.
LLVM_ABI bool isValidAssumeForContext(const Instruction *I, const Instruction *CxtI, const DominatorTree *DT=nullptr, bool AllowEphemerals=false)
Return true if it is valid to use the assumptions provided by an assume intrinsic, I, at the point in the control-flow identified by the context instruction, CxtI.
LLVM_ABI bool RecursivelyDeleteTriviallyDeadInstructions(Value *V, const TargetLibraryInfo *TLI=nullptr, MemorySSAUpdater *MSSAU=nullptr, std::function< void(Value *)> AboutToDeleteCallback=std::function< void(Value *)>())
If the specified value is a trivially dead instruction, delete it.
Definition Local.cpp:533
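Typical cleanup after pointer rewriting: once every use of the old value has been redirected, the old address expression is often dead and can be removed together with any now-dead operands (OldV is hypothetical):

if (auto *OldI = dyn_cast<Instruction>(OldV))
  RecursivelyDeleteTriviallyDeadInstructions(OldI);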
LLVM_ABI void initializeInferAddressSpacesPass(PassRegistry &)
decltype(auto) dyn_cast(const From &Val)
dyn_cast<X> - Return the argument parameter cast to the specified type.
Definition Casting.h:643
constexpr from_range_t from_range
iterator_range< early_inc_iterator_impl< detail::IterOfRange< RangeT > > > make_early_inc_range(RangeT &&Range)
Make a range that does early increment to allow mutation of the underlying range without disrupting iteration.
Definition STLExtras.h:632
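Rewriting uses while walking a use list invalidates a plain use_iterator at the point of mutation; the early-increment adaptor is the standard workaround (OldV and NewV are hypothetical):

for (Use &U : make_early_inc_range(OldV->uses()))
  U.set(NewV);   // safe: the iterator was advanced before the use was modified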
auto cast_or_null(const Y &Val)
Definition Casting.h:714
auto dyn_cast_or_null(const Y &Val)
Definition Casting.h:753
bool any_of(R &&range, UnaryPredicate P)
Provide wrappers to std::any_of which take ranges instead of having to pass begin/end explicitly.
Definition STLExtras.h:1732
@ RF_IgnoreMissingLocals
If this flag is set, the remapper ignores missing function-local entries (Argument, Instruction, BasicBlock) that are not in the value map.
Definition ValueMapper.h:98
@ RF_NoModuleLevelChanges
If this flag is set, the remapper knows that only local values within a function (such as an instruction or argument) are mapped, not global values like functions and global metadata.
Definition ValueMapper.h:80
LLVM_ABI raw_ostream & dbgs()
dbgs() - This returns a reference to a raw_ostream for debugging messages.
Definition Debug.cpp:207
class LLVM_GSL_OWNER SmallVector
Forward declaration of SmallVector so that calculateSmallVectorDefaultInlinedElements can reference sizeof(SmallVector<T, 0>).
bool isa(const From &Val)
isa<X> - Return true if the parameter to the template is an instance of one of the template type arguments.
Definition Casting.h:547
ArrayRef(const T &OneElt) -> ArrayRef< T >
ValueMap< const Value *, WeakTrackingVH > ValueToValueMapTy
LLVM_ABI FunctionPass * createInferAddressSpacesPass(unsigned AddressSpace=~0u)
decltype(auto) cast(const From &Val)
cast<X> - Return the argument parameter cast to the specified type.
Definition Casting.h:559
AnalysisManager< Function > FunctionAnalysisManager
Convenience typedef for the Function analysis manager.
PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM)
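How the pass is typically scheduled, either through the new pass manager's run entry point above or via the legacy factory createInferAddressSpacesPass (FPM and LegacyPM are hypothetical pass manager objects; the pass class is declared in llvm/Transforms/Scalar/InferAddressSpaces.h):

FunctionPassManager FPM;
FPM.addPass(InferAddressSpacesPass());          // new pass manager
// or, with the legacy pass manager:
LegacyPM.add(createInferAddressSpacesPass());   // legacy pass manager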