LLVM 20.0.0git
llvm::SITargetLowering Class Reference (final)

#include "Target/AMDGPU/SIISelLowering.h"

Inheritance diagram for llvm::SITargetLowering: [inheritance graph omitted]

Public Member Functions

MVT getRegisterTypeForCallingConv (LLVMContext &Context, CallingConv::ID CC, EVT VT) const override
 Certain combinations of ABIs, Targets and features require that types are legal for some operations and not for other operations.
 
unsigned getNumRegistersForCallingConv (LLVMContext &Context, CallingConv::ID CC, EVT VT) const override
 Certain targets require unusual breakdowns of certain types.
 
unsigned getVectorTypeBreakdownForCallingConv (LLVMContext &Context, CallingConv::ID CC, EVT VT, EVT &IntermediateVT, unsigned &NumIntermediates, MVT &RegisterVT) const override
 Certain targets such as MIPS require that some types such as vectors are always broken down into scalars in some contexts.
 
bool shouldEmitFixup (const GlobalValue *GV) const
 
bool shouldEmitGOTReloc (const GlobalValue *GV) const
 
bool shouldEmitPCReloc (const GlobalValue *GV) const
 
bool shouldUseLDSConstAddress (const GlobalValue *GV) const
 
bool shouldExpandVectorDynExt (SDNode *N) const
 
 SITargetLowering (const TargetMachine &tm, const GCNSubtarget &STI)
 
const GCNSubtarget * getSubtarget () const
 
ArrayRef< MCPhysReg > getRoundingControlRegisters () const override
 Returns a 0 terminated array of rounding control registers that can be attached to a strict FP call.
 
bool isFPExtFoldable (const SelectionDAG &DAG, unsigned Opcode, EVT DestVT, EVT SrcVT) const override
 Return true if an fpext operation input to an Opcode operation is free (for instance, because half-precision floating-point numbers are implicitly extended to float-precision) for an FMA instruction.
 
bool isFPExtFoldable (const MachineInstr &MI, unsigned Opcode, LLT DestTy, LLT SrcTy) const override
 Return true if an fpext operation input to an Opcode operation is free (for instance, because half-precision floating-point numbers are implicitly extended to float-precision) for an FMA instruction.
 
bool isShuffleMaskLegal (ArrayRef< int >, EVT) const override
 Targets can use this to indicate that they only support some VECTOR_SHUFFLE operations, those with specific masks.
 
MVT getPointerTy (const DataLayout &DL, unsigned AS) const override
 Map address space 7 to MVT::v5i32 because that's its in-memory representation.
 
MVT getPointerMemTy (const DataLayout &DL, unsigned AS) const override
 Similarly, the in-memory representation of a p7 is {p8, i32}, aka v8i32 when padding is added.
 
bool getTgtMemIntrinsic (IntrinsicInfo &, const CallInst &, MachineFunction &MF, unsigned IntrinsicID) const override
 Given an intrinsic, checks if on the target the intrinsic will need to map to a MemIntrinsicNode (touches memory).
 
void CollectTargetIntrinsicOperands (const CallInst &I, SmallVectorImpl< SDValue > &Ops, SelectionDAG &DAG) const override
 
bool getAddrModeArguments (IntrinsicInst *, SmallVectorImpl< Value * > &, Type *&) const override
 CodeGenPrepare sinks address calculations into the same BB as Load/Store instructions reading the address.
 
bool isLegalFlatAddressingMode (const AddrMode &AM, unsigned AddrSpace) const
 
bool isLegalGlobalAddressingMode (const AddrMode &AM) const
 
bool isLegalAddressingMode (const DataLayout &DL, const AddrMode &AM, Type *Ty, unsigned AS, Instruction *I=nullptr) const override
 Return true if the addressing mode represented by AM is legal for this target, for a load/store of the specified type.
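
The AddrMode being checked is TargetLowering's canonical decomposition BaseGV + BaseOffs + BaseReg + Scale*ScaleReg. Below is a minimal, self-contained sketch of such a legality check, using a stand-in struct and an illustrative 13-bit signed offset range; it is a model of the hook's contract, not AMDGPU's actual addressing rules.

    #include <cstdint>
    #include <iostream>

    // Simplified model of TargetLowering::AddrMode:
    // the address form is BaseGV + BaseOffs + BaseReg + Scale*ScaleReg.
    struct AddrModeModel {
      bool HasBaseGV;    // stands in for a GlobalValue* base
      int64_t BaseOffs;  // constant offset
      bool HasBaseReg;   // base register present
      int64_t Scale;     // scale applied to an index register (0 = none)
    };

    // Hypothetical legality check: accept [reg + imm] where the immediate
    // fits a 13-bit signed field, and reject scaled indices and GV bases.
    // The ranges are illustrative, not a real target's rules.
    bool isLegalAddrModeModel(const AddrModeModel &AM) {
      if (AM.HasBaseGV || AM.Scale != 0)
        return false;
      return AM.HasBaseReg && AM.BaseOffs >= -4096 && AM.BaseOffs < 4096;
    }

    int main() {
      std::cout << isLegalAddrModeModel({false, 40, true, 0}) << '\n';      // 1
      std::cout << isLegalAddrModeModel({false, 40, true, 4}) << '\n';      // 0: scaled index
      std::cout << isLegalAddrModeModel({false, 1 << 20, true, 0}) << '\n'; // 0: offset too wide
    }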
 
bool canMergeStoresTo (unsigned AS, EVT MemVT, const MachineFunction &MF) const override
 Returns true if it's reasonable to merge stores to MemVT size.
 
bool allowsMisalignedMemoryAccessesImpl (unsigned Size, unsigned AddrSpace, Align Alignment, MachineMemOperand::Flags Flags=MachineMemOperand::MONone, unsigned *IsFast=nullptr) const
 
bool allowsMisalignedMemoryAccesses (LLT Ty, unsigned AddrSpace, Align Alignment, MachineMemOperand::Flags Flags=MachineMemOperand::MONone, unsigned *IsFast=nullptr) const override
 LLT handling variant.
 
bool allowsMisalignedMemoryAccesses (EVT VT, unsigned AS, Align Alignment, MachineMemOperand::Flags Flags=MachineMemOperand::MONone, unsigned *IsFast=nullptr) const override
 Determine if the target supports unaligned memory accesses.
 
EVT getOptimalMemOpType (const MemOp &Op, const AttributeList &FuncAttributes) const override
 Returns the target specific optimal type for load and store operations as a result of memset, memcpy, and memmove lowering.
 
bool isMemOpHasNoClobberedMemOperand (const SDNode *N) const
 
bool isFreeAddrSpaceCast (unsigned SrcAS, unsigned DestAS) const override
 Returns true if a cast from SrcAS to DestAS is "cheap", such that e.g. we are happy to sink it into basic blocks.
 
TargetLoweringBase::LegalizeTypeAction getPreferredVectorAction (MVT VT) const override
 Return the preferred vector type legalization action.
 
bool shouldConvertConstantLoadToIntImm (const APInt &Imm, Type *Ty) const override
 Return true if it is beneficial to convert a load of a constant to just the constant itself.
 
bool isExtractSubvectorCheap (EVT ResVT, EVT SrcVT, unsigned Index) const override
 Return true if EXTRACT_SUBVECTOR is cheap for extracting this result type from this source type with this index.
 
bool isTypeDesirableForOp (unsigned Op, EVT VT) const override
 Return true if the target has native support for the specified value type and it is 'desirable' to use the type for the given node type.
 
bool isOffsetFoldingLegal (const GlobalAddressSDNode *GA) const override
 Return true if folding a constant offset with the given GlobalAddress is legal.
 
unsigned combineRepeatedFPDivisors () const override
 Indicate whether this target prefers to combine FDIVs with the same divisor.
 
bool supportSplitCSR (MachineFunction *MF) const override
 Return true if the target supports that a subset of CSRs for the given machine function is handled explicitly via copies.
 
void initializeSplitCSR (MachineBasicBlock *Entry) const override
 Perform necessary initialization to handle a subset of CSRs explicitly via copies.
 
void insertCopiesSplitCSR (MachineBasicBlock *Entry, const SmallVectorImpl< MachineBasicBlock * > &Exits) const override
 Insert explicit copies in entry and exit blocks.
 
SDValue LowerFormalArguments (SDValue Chain, CallingConv::ID CallConv, bool isVarArg, const SmallVectorImpl< ISD::InputArg > &Ins, const SDLoc &DL, SelectionDAG &DAG, SmallVectorImpl< SDValue > &InVals) const override
 This hook must be implemented to lower the incoming (formal) arguments, described by the Ins array, into the specified DAG.
 
bool CanLowerReturn (CallingConv::ID CallConv, MachineFunction &MF, bool isVarArg, const SmallVectorImpl< ISD::OutputArg > &Outs, LLVMContext &Context) const override
 This hook should be implemented to check whether the return values described by the Outs array can fit into the return registers.
 
SDValue LowerReturn (SDValue Chain, CallingConv::ID CallConv, bool IsVarArg, const SmallVectorImpl< ISD::OutputArg > &Outs, const SmallVectorImpl< SDValue > &OutVals, const SDLoc &DL, SelectionDAG &DAG) const override
 This hook must be implemented to lower outgoing return values, described by the Outs array, into the specified DAG.
 
void passSpecialInputs (CallLoweringInfo &CLI, CCState &CCInfo, const SIMachineFunctionInfo &Info, SmallVectorImpl< std::pair< unsigned, SDValue > > &RegsToPass, SmallVectorImpl< SDValue > &MemOpChains, SDValue Chain) const
 
SDValue LowerCallResult (SDValue Chain, SDValue InGlue, CallingConv::ID CallConv, bool isVarArg, const SmallVectorImpl< ISD::InputArg > &Ins, const SDLoc &DL, SelectionDAG &DAG, SmallVectorImpl< SDValue > &InVals, bool isThisReturn, SDValue ThisVal) const
 
bool mayBeEmittedAsTailCall (const CallInst *) const override
 Return true if the target may be able to emit the call instruction as a tail call.
 
bool isEligibleForTailCallOptimization (SDValue Callee, CallingConv::ID CalleeCC, bool isVarArg, const SmallVectorImpl< ISD::OutputArg > &Outs, const SmallVectorImpl< SDValue > &OutVals, const SmallVectorImpl< ISD::InputArg > &Ins, SelectionDAG &DAG) const
 
SDValue LowerCall (CallLoweringInfo &CLI, SmallVectorImpl< SDValue > &InVals) const override
 This hook must be implemented to lower calls into the specified DAG.
 
SDValue lowerDYNAMIC_STACKALLOCImpl (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerDYNAMIC_STACKALLOC (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerSTACKSAVE (SDValue Op, SelectionDAG &DAG) const
 
SDValue lowerGET_ROUNDING (SDValue Op, SelectionDAG &DAG) const
 
SDValue lowerSET_ROUNDING (SDValue Op, SelectionDAG &DAG) const
 
SDValue lowerPREFETCH (SDValue Op, SelectionDAG &DAG) const
 
SDValue lowerFP_EXTEND (SDValue Op, SelectionDAG &DAG) const
 
SDValue lowerGET_FPENV (SDValue Op, SelectionDAG &DAG) const
 
SDValue lowerSET_FPENV (SDValue Op, SelectionDAG &DAG) const
 
Register getRegisterByName (const char *RegName, LLT VT, const MachineFunction &MF) const override
 Return the register ID of the name passed in.
 
MachineBasicBlock * splitKillBlock (MachineInstr &MI, MachineBasicBlock *BB) const
 
void bundleInstWithWaitcnt (MachineInstr &MI) const
 Insert MI into a BUNDLE with an S_WAITCNT 0 immediately following it.
 
MachineBasicBlock * emitGWSMemViolTestLoop (MachineInstr &MI, MachineBasicBlock *BB) const
 
MachineBasicBlock * EmitInstrWithCustomInserter (MachineInstr &MI, MachineBasicBlock *BB) const override
 This method should be implemented by targets that mark instructions with the 'usesCustomInserter' flag.
 
bool enableAggressiveFMAFusion (EVT VT) const override
 Return true if target always benefits from combining into FMA for a given value type.
 
bool enableAggressiveFMAFusion (LLT Ty) const override
 Return true if target always benefits from combining into FMA for a given value type.
 
EVT getSetCCResultType (const DataLayout &DL, LLVMContext &Context, EVT VT) const override
 Return the ValueType of the result of SETCC operations.
 
MVT getScalarShiftAmountTy (const DataLayout &, EVT) const override
 Return the type to use for a scalar shift opcode, given the shifted amount type.
 
LLT getPreferredShiftAmountTy (LLT Ty) const override
 Return the preferred type to use for a shift opcode, given the shifted amount type is ShiftValueTy.
 
bool isFMAFasterThanFMulAndFAdd (const MachineFunction &MF, EVT VT) const override
 Return true if an FMA operation is faster than a pair of fmul and fadd instructions.
 
bool isFMAFasterThanFMulAndFAdd (const MachineFunction &MF, const LLT Ty) const override
 Return true if an FMA operation is faster than a pair of fmul and fadd instructions.
 
bool isFMADLegal (const SelectionDAG &DAG, const SDNode *N) const override
 Returns true if N can be combined with another node to form an ISD::FMAD.
 
bool isFMADLegal (const MachineInstr &MI, const LLT Ty) const override
 Returns true if MI can be combined with another instruction to form TargetOpcode::G_FMAD.
 
SDValue splitUnaryVectorOp (SDValue Op, SelectionDAG &DAG) const
 
SDValue splitBinaryVectorOp (SDValue Op, SelectionDAG &DAG) const
 
SDValue splitTernaryVectorOp (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerOperation (SDValue Op, SelectionDAG &DAG) const override
 This callback is invoked for operations that are unsupported by the target, which are registered to use 'custom' lowering, and whose defined values are all legal.
 
void ReplaceNodeResults (SDNode *N, SmallVectorImpl< SDValue > &Results, SelectionDAG &DAG) const override
 This callback is invoked when a node result type is illegal for the target, and the operation was registered to use 'custom' lowering for that result type.
 
SDValue PerformDAGCombine (SDNode *N, DAGCombinerInfo &DCI) const override
 This method will be invoked for all target nodes and for any target-independent nodes that the target has registered to invoke it for.
 
SDNode * PostISelFolding (MachineSDNode *N, SelectionDAG &DAG) const override
 Fold the instructions after selecting them.
 
void AddMemOpInit (MachineInstr &MI) const
 
void AdjustInstrPostInstrSelection (MachineInstr &MI, SDNode *Node) const override
 Assign the register class depending on the number of bits set in the writemask.
 
SDNode * legalizeTargetIndependentNode (SDNode *Node, SelectionDAG &DAG) const
 Legalize target independent instructions (e.g. INSERT_SUBREG).
 
MachineSDNode * wrapAddr64Rsrc (SelectionDAG &DAG, const SDLoc &DL, SDValue Ptr) const
 
MachineSDNode * buildRSRC (SelectionDAG &DAG, const SDLoc &DL, SDValue Ptr, uint32_t RsrcDword1, uint64_t RsrcDword2And3) const
 Return a resource descriptor with the 'Add TID' bit enabled. The TID (Thread ID) is multiplied by the stride value (bits [61:48] of the resource descriptor) to create an offset, which is added to the resource pointer.
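
A small sketch of the offset arithmetic described above: the stride is a 14-bit field at bits [61:48] of the descriptor, and the hardware adds TID * stride to the resource pointer. Only the field placement named in the description is modeled; the rest of the descriptor layout is not shown.

    #include <cstdint>
    #include <cassert>

    // Per the description above, the stride occupies bits [61:48] of the
    // 128-bit resource descriptor; with the low 64 bits held in `loDwords`,
    // that is a 14-bit field at bit 48.
    uint64_t extractStride(uint64_t loDwords) {
      return (loDwords >> 48) & 0x3FFF; // bits 61:48
    }

    // With the 'Add TID' bit enabled, the hardware adds TID * stride to the
    // resource pointer; this models that arithmetic.
    uint64_t effectiveOffset(uint64_t loDwords, uint64_t tid) {
      return tid * extractStride(loDwords);
    }

    int main() {
      uint64_t lo = (uint64_t)16 << 48;      // stride = 16 bytes
      assert(extractStride(lo) == 16);
      assert(effectiveOffset(lo, 5) == 80);  // thread 5 -> byte offset 80
    }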
 
std::pair< unsigned, const TargetRegisterClass * > getRegForInlineAsmConstraint (const TargetRegisterInfo *TRI, StringRef Constraint, MVT VT) const override
 Given a physical register constraint (e.g. {edx}), return the register number and the register class for the register.
 
ConstraintType getConstraintType (StringRef Constraint) const override
 Given a constraint, return the type of constraint it is for this target.
 
void LowerAsmOperandForConstraint (SDValue Op, StringRef Constraint, std::vector< SDValue > &Ops, SelectionDAG &DAG) const override
 Lower the specified operand into the Ops vector.
 
bool getAsmOperandConstVal (SDValue Op, uint64_t &Val) const
 
bool checkAsmConstraintVal (SDValue Op, StringRef Constraint, uint64_t Val) const
 
bool checkAsmConstraintValA (SDValue Op, uint64_t Val, unsigned MaxSize=64) const
 
SDValue copyToM0 (SelectionDAG &DAG, SDValue Chain, const SDLoc &DL, SDValue V) const
 
void finalizeLowering (MachineFunction &MF) const override
 Execute target specific actions to finalize target lowering.
 
void computeKnownBitsForTargetNode (const SDValue Op, KnownBits &Known, const APInt &DemandedElts, const SelectionDAG &DAG, unsigned Depth=0) const override
 Determine which of the bits specified in Mask are known to be either zero or one and return them in the KnownZero and KnownOne bitsets.
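
To illustrate the KnownZero/KnownOne contract, here is a minimal stand-in for KnownBits together with the transfer function for AND; a target override computes masks of this kind for its custom nodes. This is a model of the invariant, not LLVM's KnownBits API.

    #include <cstdint>
    #include <cassert>

    // Minimal stand-in for llvm::KnownBits: Zero has a bit set where the
    // value is known 0, One where it is known 1 (never both for one bit).
    struct Known {
      uint64_t Zero = 0, One = 0;
    };

    // Known-bits transfer function for AND: a result bit is known 0 if it
    // is known 0 in either operand, known 1 only if known 1 in both.
    Known knownAnd(Known A, Known B) {
      return {A.Zero | B.Zero, A.One & B.One};
    }

    int main() {
      Known A{0xFF00, 0x00F0};  // high byte known 0, bits 7:4 known 1
      Known B{0x000F, 0x0F00};  // low nibble known 0, bits 11:8 known 1
      Known R = knownAnd(A, B);
      assert(R.Zero == 0xFF0F); // known 0 wherever either side is 0
      assert(R.One == 0x0000);  // no bit is known 1 in both operands
    }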
 
void computeKnownBitsForFrameIndex (int FrameIdx, KnownBits &Known, const MachineFunction &MF) const override
 Determine which of the bits of FrameIndex FIOp are known to be 0.
 
void computeKnownBitsForTargetInstr (GISelKnownBits &Analysis, Register R, KnownBits &Known, const APInt &DemandedElts, const MachineRegisterInfo &MRI, unsigned Depth=0) const override
 Determine which of the bits specified in Mask are known to be either zero or one and return them in the KnownZero/KnownOne bitsets.
 
Align computeKnownAlignForTargetInstr (GISelKnownBits &Analysis, Register R, const MachineRegisterInfo &MRI, unsigned Depth=0) const override
 Determine the known alignment for the pointer value R.
 
bool isSDNodeSourceOfDivergence (const SDNode *N, FunctionLoweringInfo *FLI, UniformityInfo *UA) const override
 
bool hasMemSDNodeUser (SDNode *N) const
 
bool isReassocProfitable (SelectionDAG &DAG, SDValue N0, SDValue N1) const override
 
bool isReassocProfitable (MachineRegisterInfo &MRI, Register N0, Register N1) const override
 
bool isCanonicalized (SelectionDAG &DAG, SDValue Op, unsigned MaxDepth=5) const
 
bool isCanonicalized (Register Reg, const MachineFunction &MF, unsigned MaxDepth=5) const
 
bool denormalsEnabledForType (const SelectionDAG &DAG, EVT VT) const
 
bool denormalsEnabledForType (LLT Ty, const MachineFunction &MF) const
 
bool checkForPhysRegDependency (SDNode *Def, SDNode *User, unsigned Op, const TargetRegisterInfo *TRI, const TargetInstrInfo *TII, unsigned &PhysReg, int &Cost) const override
 Allows the target to handle physreg-carried dependency in target-specific way.
 
bool isKnownNeverNaNForTargetNode (SDValue Op, const SelectionDAG &DAG, bool SNaN=false, unsigned Depth=0) const override
 If SNaN is false, returns true if Op is known to never be any NaN. If SNaN is true, returns true if Op is known to never be a signaling NaN.
 
AtomicExpansionKind shouldExpandAtomicRMWInIR (AtomicRMWInst *) const override
 Returns how the IR-level AtomicExpand pass should expand the given AtomicRMW, if at all.
 
AtomicExpansionKind shouldExpandAtomicLoadInIR (LoadInst *LI) const override
 Returns how the given (atomic) load should be expanded by the IR-level AtomicExpand pass.
 
AtomicExpansionKind shouldExpandAtomicStoreInIR (StoreInst *SI) const override
 Returns how the given (atomic) store should be expanded by the IR-level AtomicExpand pass.
 
AtomicExpansionKind shouldExpandAtomicCmpXchgInIR (AtomicCmpXchgInst *AI) const override
 Returns how the given atomic cmpxchg should be expanded by the IR-level AtomicExpand pass.
 
void emitExpandAtomicAddrSpacePredicate (Instruction *AI) const
 
void emitExpandAtomicRMW (AtomicRMWInst *AI) const override
 Perform an atomicrmw expansion in a target-specific way.
 
void emitExpandAtomicCmpXchg (AtomicCmpXchgInst *CI) const override
 Perform a cmpxchg expansion using a target-specific method.
 
LoadInst * lowerIdempotentRMWIntoFencedLoad (AtomicRMWInst *AI) const override
 On some platforms, an AtomicRMW that never actually modifies the value (such as fetch_add of 0) can be turned into a fence followed by an atomic load.
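
A C++-level illustration of the equivalence this hook exploits (not the IR the pass emits): a fetch_add of 0 returns the current value without modifying it, and the same ordering can come from a fence followed by an atomic load.

    #include <atomic>
    #include <cassert>

    std::atomic<int> Counter{42};

    // An idempotent RMW: fetch_add of 0 returns the current value without
    // changing it, but still carries seq_cst ordering.
    int viaRMW() { return Counter.fetch_add(0, std::memory_order_seq_cst); }

    // The rewrite this hook enables: a fence followed by a plain atomic
    // load provides the ordering without a read-modify-write operation.
    int viaFencedLoad() {
      std::atomic_thread_fence(std::memory_order_seq_cst);
      return Counter.load(std::memory_order_seq_cst);
    }

    int main() { assert(viaRMW() == viaFencedLoad()); }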
 
const TargetRegisterClass * getRegClassFor (MVT VT, bool isDivergent) const override
 Return the register class that should be used for the specified value type.
 
bool requiresUniformRegister (MachineFunction &MF, const Value *V) const override
 Allows target to decide about the register class of the specific value that is live outside the defining block.
 
Align getPrefLoopAlignment (MachineLoop *ML) const override
 Return the preferred loop alignment.
 
void allocateHSAUserSGPRs (CCState &CCInfo, MachineFunction &MF, const SIRegisterInfo &TRI, SIMachineFunctionInfo &Info) const
 
void allocatePreloadKernArgSGPRs (CCState &CCInfo, SmallVectorImpl< CCValAssign > &ArgLocs, const SmallVectorImpl< ISD::InputArg > &Ins, MachineFunction &MF, const SIRegisterInfo &TRI, SIMachineFunctionInfo &Info) const
 
void allocateLDSKernelId (CCState &CCInfo, MachineFunction &MF, const SIRegisterInfo &TRI, SIMachineFunctionInfo &Info) const
 
void allocateSystemSGPRs (CCState &CCInfo, MachineFunction &MF, SIMachineFunctionInfo &Info, CallingConv::ID CallConv, bool IsShader) const
 
void allocateSpecialEntryInputVGPRs (CCState &CCInfo, MachineFunction &MF, const SIRegisterInfo &TRI, SIMachineFunctionInfo &Info) const
 
void allocateSpecialInputSGPRs (CCState &CCInfo, MachineFunction &MF, const SIRegisterInfo &TRI, SIMachineFunctionInfo &Info) const
 
void allocateSpecialInputVGPRs (CCState &CCInfo, MachineFunction &MF, const SIRegisterInfo &TRI, SIMachineFunctionInfo &Info) const
 Allocate implicit function VGPR arguments at the end of allocated user arguments.
 
void allocateSpecialInputVGPRsFixed (CCState &CCInfo, MachineFunction &MF, const SIRegisterInfo &TRI, SIMachineFunctionInfo &Info) const
 Allocate implicit function VGPR arguments in fixed registers.
 
MachineMemOperand::Flags getTargetMMOFlags (const Instruction &I) const override
 This callback is used to inspect load/store instructions and add target-specific MachineMemOperand flags to them.
 
- Public Member Functions inherited from llvm::AMDGPUTargetLowering
 AMDGPUTargetLowering (const TargetMachine &TM, const AMDGPUSubtarget &STI)
 
bool mayIgnoreSignedZero (SDValue Op) const
 
bool isFAbsFree (EVT VT) const override
 Return true if an fabs operation is free to the point where it is never worthwhile to replace it with a bitwise operation.
 
bool isFNegFree (EVT VT) const override
 Return true if an fneg operation is free to the point where it is never worthwhile to replace it with a bitwise operation.
 
bool isTruncateFree (EVT Src, EVT Dest) const override
 
bool isTruncateFree (Type *Src, Type *Dest) const override
 Return true if it's free to truncate a value of type FromTy to type ToTy.
 
bool isZExtFree (Type *Src, Type *Dest) const override
 Return true if any actual instruction that defines a value of type FromTy implicitly zero-extends the value to ToTy in the result register.
 
bool isZExtFree (EVT Src, EVT Dest) const override
 
SDValue getNegatedExpression (SDValue Op, SelectionDAG &DAG, bool LegalOperations, bool ForCodeSize, NegatibleCost &Cost, unsigned Depth) const override
 Return the newly negated expression if the cost is not expensive and set the cost in Cost to indicate that if it is cheaper or neutral to do the negation.
 
bool isNarrowingProfitable (SDNode *N, EVT SrcVT, EVT DestVT) const override
 Return true if it's profitable to narrow operations of type SrcVT to DestVT.
 
bool isDesirableToCommuteWithShift (const SDNode *N, CombineLevel Level) const override
 Return true if it is profitable to move this shift by a constant amount through its operand, adjusting any immediate operands as necessary to preserve semantics.
 
EVT getTypeForExtReturn (LLVMContext &Context, EVT VT, ISD::NodeType ExtendKind) const override
 Return the type that should be used to zero or sign extend a zeroext/signext integer return value.
 
MVT getVectorIdxTy (const DataLayout &) const override
 Returns the type to be used for the index operand of: ISD::INSERT_VECTOR_ELT, ISD::EXTRACT_VECTOR_ELT, ISD::INSERT_SUBVECTOR, and ISD::EXTRACT_SUBVECTOR.
 
bool isSelectSupported (SelectSupportKind) const override
 
bool isFPImmLegal (const APFloat &Imm, EVT VT, bool ForCodeSize) const override
 Returns true if the target can instruction select the specified FP immediate natively.
 
bool ShouldShrinkFPConstant (EVT VT) const override
 If true, then instruction selection should seek to shrink the FP constant of the specified type to a smaller type in order to save space and / or reduce runtime.
 
bool shouldReduceLoadWidth (SDNode *Load, ISD::LoadExtType ExtType, EVT ExtVT) const override
 Return true if it is profitable to reduce a load to a smaller type.
 
bool isLoadBitCastBeneficial (EVT, EVT, const SelectionDAG &DAG, const MachineMemOperand &MMO) const final
 Return true if the following transform is beneficial: fold (conv (load x)) -> (load (conv*)x). On architectures that don't natively support some vector loads efficiently, casting the load to a smaller vector of larger types and loading is more efficient; however, this can be undone by optimizations in the dag combiner.
 
bool storeOfVectorConstantIsCheap (bool IsZero, EVT MemVT, unsigned NumElem, unsigned AS) const override
 Return true if it is expected to be cheaper to do a store of vector constant with the given size and type for the address space than to store the individual scalar element constants.
 
bool aggressivelyPreferBuildVectorSources (EVT VecVT) const override
 
bool isCheapToSpeculateCttz (Type *Ty) const override
 Return true if it is cheap to speculate a call to intrinsic cttz.
 
bool isCheapToSpeculateCtlz (Type *Ty) const override
 Return true if it is cheap to speculate a call to intrinsic ctlz.
 
bool isSDNodeAlwaysUniform (const SDNode *N) const override
 
AtomicExpansionKind shouldCastAtomicLoadInIR (LoadInst *LI) const override
 Returns how the given (atomic) load should be cast by the IR-level AtomicExpand pass.
 
AtomicExpansionKind shouldCastAtomicStoreInIR (StoreInst *SI) const override
 Returns how the given (atomic) store should be cast by the IR-level AtomicExpand pass.
 
AtomicExpansionKind shouldCastAtomicRMWIInIR (AtomicRMWInst *) const override
 Returns how the given atomicrmw should be cast by the IR-level AtomicExpand pass.
 
SDValue LowerReturn (SDValue Chain, CallingConv::ID CallConv, bool isVarArg, const SmallVectorImpl< ISD::OutputArg > &Outs, const SmallVectorImpl< SDValue > &OutVals, const SDLoc &DL, SelectionDAG &DAG) const override
 This hook must be implemented to lower outgoing return values, described by the Outs array, into the specified DAG.
 
SDValue addTokenForArgument (SDValue Chain, SelectionDAG &DAG, MachineFrameInfo &MFI, int ClobberedFI) const
 
SDValue lowerUnhandledCall (CallLoweringInfo &CLI, SmallVectorImpl< SDValue > &InVals, StringRef Reason) const
 
SDValue LowerCall (CallLoweringInfo &CLI, SmallVectorImpl< SDValue > &InVals) const override
 This hook must be implemented to lower calls into the specified DAG.
 
SDValue LowerDYNAMIC_STACKALLOC (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerOperation (SDValue Op, SelectionDAG &DAG) const override
 This callback is invoked for operations that are unsupported by the target, which are registered to use 'custom' lowering, and whose defined values are all legal.
 
SDValue PerformDAGCombine (SDNode *N, DAGCombinerInfo &DCI) const override
 This method will be invoked for all target nodes and for any target-independent nodes that the target has registered to invoke it for.
 
void ReplaceNodeResults (SDNode *N, SmallVectorImpl< SDValue > &Results, SelectionDAG &DAG) const override
 This callback is invoked when a node result type is illegal for the target, and the operation was registered to use 'custom' lowering for that result type.
 
SDValue combineFMinMaxLegacyImpl (const SDLoc &DL, EVT VT, SDValue LHS, SDValue RHS, SDValue True, SDValue False, SDValue CC, DAGCombinerInfo &DCI) const
 
SDValue combineFMinMaxLegacy (const SDLoc &DL, EVT VT, SDValue LHS, SDValue RHS, SDValue True, SDValue False, SDValue CC, DAGCombinerInfo &DCI) const
 Generate Min/Max node.
 
const char * getTargetNodeName (unsigned Opcode) const override
 This method returns the name of a target specific DAG node.
 
bool mergeStoresAfterLegalization (EVT) const override
 Allow store merging for the specified type after legalization in addition to before legalization.
 
bool isFsqrtCheap (SDValue Operand, SelectionDAG &DAG) const override
 Return true if SQRT(X) shouldn't be replaced with X*RSQRT(X).
 
SDValue getSqrtEstimate (SDValue Operand, SelectionDAG &DAG, int Enabled, int &RefinementSteps, bool &UseOneConstNR, bool Reciprocal) const override
 Hooks for building estimates in place of slower divisions and square roots.
 
SDValue getRecipEstimate (SDValue Operand, SelectionDAG &DAG, int Enabled, int &RefinementSteps) const override
 Return a reciprocal estimate value for the input operand.
 
virtual SDNode * PostISelFolding (MachineSDNode *N, SelectionDAG &DAG) const =0
 
void computeKnownBitsForTargetNode (const SDValue Op, KnownBits &Known, const APInt &DemandedElts, const SelectionDAG &DAG, unsigned Depth=0) const override
 Determine which of the bits specified in Mask are known to be either zero or one and return them in the KnownZero and KnownOne bitsets.
 
unsigned ComputeNumSignBitsForTargetNode (SDValue Op, const APInt &DemandedElts, const SelectionDAG &DAG, unsigned Depth=0) const override
 This method can be implemented by targets that want to expose additional information about sign bits to the DAG Combiner.
 
unsigned computeNumSignBitsForTargetInstr (GISelKnownBits &Analysis, Register R, const APInt &DemandedElts, const MachineRegisterInfo &MRI, unsigned Depth=0) const override
 This method can be implemented by targets that want to expose additional information about sign bits to GlobalISel combiners.
 
bool isKnownNeverNaNForTargetNode (SDValue Op, const SelectionDAG &DAG, bool SNaN=false, unsigned Depth=0) const override
 If SNaN is false, returns true if Op is known to never be any NaN. If SNaN is true, returns true if Op is known to never be a signaling NaN.
 
bool isReassocProfitable (MachineRegisterInfo &MRI, Register N0, Register N1) const override
 
SDValue CreateLiveInRegister (SelectionDAG &DAG, const TargetRegisterClass *RC, Register Reg, EVT VT, const SDLoc &SL, bool RawReg=false) const
 Helper function that adds Reg to the LiveIn list of the DAG's MachineFunction.
 
SDValue CreateLiveInRegister (SelectionDAG &DAG, const TargetRegisterClass *RC, Register Reg, EVT VT) const
 
SDValue CreateLiveInRegisterRaw (SelectionDAG &DAG, const TargetRegisterClass *RC, Register Reg, EVT VT) const
 
SDValue loadStackInputValue (SelectionDAG &DAG, EVT VT, const SDLoc &SL, int64_t Offset) const
 Similar to CreateLiveInRegister, except the value may be loaded from a stack slot rather than passed in a register.
 
SDValue storeStackInputValue (SelectionDAG &DAG, const SDLoc &SL, SDValue Chain, SDValue ArgVal, int64_t Offset) const
 
SDValue loadInputValue (SelectionDAG &DAG, const TargetRegisterClass *RC, EVT VT, const SDLoc &SL, const ArgDescriptor &Arg) const
 
uint32_t getImplicitParameterOffset (const MachineFunction &MF, const ImplicitParameter Param) const
 Helper function that returns the byte offset of the given type of implicit parameter.
 
uint32_t getImplicitParameterOffset (const uint64_t ExplicitKernArgSize, const ImplicitParameter Param) const
 
MVT getFenceOperandTy (const DataLayout &DL) const override
 Return the type for operands of fence.
 
- Public Member Functions inherited from llvm::TargetLowering
 TargetLowering (const TargetLowering &)=delete
 
TargetLowering & operator= (const TargetLowering &)=delete
 
 TargetLowering (const TargetMachine &TM)
 NOTE: The TargetMachine owns TLOF.
 
bool isPositionIndependent () const
 
virtual bool isSDNodeSourceOfDivergence (const SDNode *N, FunctionLoweringInfo *FLI, UniformityInfo *UA) const
 
virtual bool isReassocProfitable (SelectionDAG &DAG, SDValue N0, SDValue N1) const
 
virtual bool isReassocProfitable (MachineRegisterInfo &MRI, Register N0, Register N1) const
 
virtual bool isSDNodeAlwaysUniform (const SDNode *N) const
 
virtual bool getPreIndexedAddressParts (SDNode *, SDValue &, SDValue &, ISD::MemIndexedMode &, SelectionDAG &) const
 Returns true by value, base pointer and offset pointer and addressing mode by reference if the node's address can be legally represented as pre-indexed load / store address.
 
virtual bool getPostIndexedAddressParts (SDNode *, SDNode *, SDValue &, SDValue &, ISD::MemIndexedMode &, SelectionDAG &) const
 Returns true by value, base pointer and offset pointer and addressing mode by reference if this node can be combined with a load / store to form a post-indexed load / store.
 
virtual bool isIndexingLegal (MachineInstr &MI, Register Base, Register Offset, bool IsPre, MachineRegisterInfo &MRI) const
 Returns true if the specified base+offset is a legal indexed addressing mode for this target.
 
virtual unsigned getJumpTableEncoding () const
 Return the entry encoding for a jump table in the current function.
 
virtual MVT getJumpTableRegTy (const DataLayout &DL) const
 
virtual const MCExpr * LowerCustomJumpTableEntry (const MachineJumpTableInfo *, const MachineBasicBlock *, unsigned, MCContext &) const
 
virtual SDValue getPICJumpTableRelocBase (SDValue Table, SelectionDAG &DAG) const
 Returns relocation base for the given PIC jumptable.
 
virtual const MCExpr * getPICJumpTableRelocBaseExpr (const MachineFunction *MF, unsigned JTI, MCContext &Ctx) const
 This returns the relocation base for the given PIC jumptable, the same as getPICJumpTableRelocBase, but as an MCExpr.
 
virtual bool isOffsetFoldingLegal (const GlobalAddressSDNode *GA) const
 Return true if folding a constant offset with the given GlobalAddress is legal.
 
virtual bool isInlineAsmTargetBranch (const SmallVectorImpl< StringRef > &AsmStrs, unsigned OpNo) const
 On x86, return true if the operand with index OpNo is a CALL or JUMP instruction, which can use either a memory constraint or an address constraint.
 
bool isInTailCallPosition (SelectionDAG &DAG, SDNode *Node, SDValue &Chain) const
 Check whether a given call node is in tail position within its function.
 
void softenSetCCOperands (SelectionDAG &DAG, EVT VT, SDValue &NewLHS, SDValue &NewRHS, ISD::CondCode &CCCode, const SDLoc &DL, const SDValue OldLHS, const SDValue OldRHS) const
 Soften the operands of a comparison.
 
void softenSetCCOperands (SelectionDAG &DAG, EVT VT, SDValue &NewLHS, SDValue &NewRHS, ISD::CondCode &CCCode, const SDLoc &DL, const SDValue OldLHS, const SDValue OldRHS, SDValue &Chain, bool IsSignaling=false) const
 
virtual SDValue visitMaskedLoad (SelectionDAG &DAG, const SDLoc &DL, SDValue Chain, MachineMemOperand *MMO, SDValue &NewLoad, SDValue Ptr, SDValue PassThru, SDValue Mask) const
 
virtual SDValue visitMaskedStore (SelectionDAG &DAG, const SDLoc &DL, SDValue Chain, MachineMemOperand *MMO, SDValue Ptr, SDValue Val, SDValue Mask) const
 
std::pair< SDValue, SDValue > makeLibCall (SelectionDAG &DAG, RTLIB::Libcall LC, EVT RetVT, ArrayRef< SDValue > Ops, MakeLibCallOptions CallOptions, const SDLoc &dl, SDValue Chain=SDValue()) const
 Returns a pair of (return value, chain).
 
bool parametersInCSRMatch (const MachineRegisterInfo &MRI, const uint32_t *CallerPreservedMask, const SmallVectorImpl< CCValAssign > &ArgLocs, const SmallVectorImpl< SDValue > &OutVals) const
 Check whether parameters to a call that are passed in callee saved registers are the same as from the calling function.
 
virtual bool findOptimalMemOpLowering (std::vector< EVT > &MemOps, unsigned Limit, const MemOp &Op, unsigned DstAS, unsigned SrcAS, const AttributeList &FuncAttributes) const
 Determines the optimal series of memory ops to replace the memset / memcpy.
 
bool ShrinkDemandedConstant (SDValue Op, const APInt &DemandedBits, const APInt &DemandedElts, TargetLoweringOpt &TLO) const
 Check to see if the specified operand of the specified instruction is a constant integer.
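
A sketch of the observation behind shrinking: constant bits outside the demanded mask are dead, so the constant can be replaced by its intersection with the demanded bits without changing any demanded result bit. (LLVM additionally weighs which replacement is cheapest to materialize; that part is not modeled here.)

    #include <cstdint>
    #include <cassert>

    // If only the bits in Demanded are used downstream, any constant bits
    // outside Demanded are dead, so C may be shrunk to C & Demanded.
    uint64_t shrinkConstant(uint64_t C, uint64_t Demanded) {
      return C & Demanded;
    }

    int main() {
      const uint64_t C = 0xFFFF00FF, Demanded = 0x0000FFFF;
      const uint64_t NewC = shrinkConstant(C, Demanded); // 0x00FF
      for (uint64_t X : {0x0ull, 0x12345678ull, ~0ull}) {
        // On the demanded bits, x & C and x & NewC agree.
        assert(((X & C) & Demanded) == ((X & NewC) & Demanded));
      }
    }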
 
bool ShrinkDemandedConstant (SDValue Op, const APInt &DemandedBits, TargetLoweringOpt &TLO) const
 Helper wrapper around ShrinkDemandedConstant, demanding all elements.
 
virtual bool targetShrinkDemandedConstant (SDValue Op, const APInt &DemandedBits, const APInt &DemandedElts, TargetLoweringOpt &TLO) const
 
bool ShrinkDemandedOp (SDValue Op, unsigned BitWidth, const APInt &DemandedBits, TargetLoweringOpt &TLO) const
 Convert x+y to (VT)((SmallVT)x+(SmallVT)y) if the casts are free.
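
A quick check of why this narrowing is sound for addition: carries only propagate upward, so truncating before or after the add yields the same low bits.

    #include <cstdint>
    #include <cassert>

    // (uint16_t)(x + y) == (uint16_t)((uint16_t)x + (uint16_t)y): addition
    // only propagates information upward, so narrowing before the add is
    // safe when only the low bits are demanded.
    int main() {
      for (uint32_t X : {0u, 0xFFFFu, 0xDEADBEEFu})
        for (uint32_t Y : {1u, 0x8000u, 0xFFFFFFFFu}) {
          uint16_t Wide = static_cast<uint16_t>(X + Y);
          uint16_t Narrow = static_cast<uint16_t>(
              static_cast<uint16_t>(X) + static_cast<uint16_t>(Y));
          assert(Wide == Narrow);
        }
    }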
 
bool SimplifyDemandedBits (SDValue Op, const APInt &DemandedBits, const APInt &DemandedElts, KnownBits &Known, TargetLoweringOpt &TLO, unsigned Depth=0, bool AssumeSingleUse=false) const
 Look at Op. At this point, we know that only the DemandedBits bits of the result of Op are ever used downstream.
 
bool SimplifyDemandedBits (SDValue Op, const APInt &DemandedBits, KnownBits &Known, TargetLoweringOpt &TLO, unsigned Depth=0, bool AssumeSingleUse=false) const
 Helper wrapper around SimplifyDemandedBits, demanding all elements.
 
bool SimplifyDemandedBits (SDValue Op, const APInt &DemandedBits, DAGCombinerInfo &DCI) const
 Helper wrapper around SimplifyDemandedBits.
 
bool SimplifyDemandedBits (SDValue Op, const APInt &DemandedBits, const APInt &DemandedElts, DAGCombinerInfo &DCI) const
 Helper wrapper around SimplifyDemandedBits.
 
SDValue SimplifyMultipleUseDemandedBits (SDValue Op, const APInt &DemandedBits, const APInt &DemandedElts, SelectionDAG &DAG, unsigned Depth=0) const
 More limited version of SimplifyDemandedBits that can be used to "look through" ops that don't contribute to the DemandedBits/DemandedElts - bitwise ops etc.
 
SDValue SimplifyMultipleUseDemandedBits (SDValue Op, const APInt &DemandedBits, SelectionDAG &DAG, unsigned Depth=0) const
 Helper wrapper around SimplifyMultipleUseDemandedBits, demanding all elements.
 
SDValue SimplifyMultipleUseDemandedVectorElts (SDValue Op, const APInt &DemandedElts, SelectionDAG &DAG, unsigned Depth=0) const
 Helper wrapper around SimplifyMultipleUseDemandedBits, demanding all bits from only some vector elements.
 
bool SimplifyDemandedVectorElts (SDValue Op, const APInt &DemandedEltMask, APInt &KnownUndef, APInt &KnownZero, TargetLoweringOpt &TLO, unsigned Depth=0, bool AssumeSingleUse=false) const
 Look at Vector Op. At this point, we know that only the DemandedElts elements of the result of Op are ever used downstream.
 
bool SimplifyDemandedVectorElts (SDValue Op, const APInt &DemandedElts, DAGCombinerInfo &DCI) const
 Helper wrapper around SimplifyDemandedVectorElts.
 
virtual bool shouldSimplifyDemandedVectorElts (SDValue Op, const TargetLoweringOpt &TLO) const
 Return true if the target supports simplifying demanded vector elements by converting them to undefs.
 
virtual void computeKnownBitsForTargetNode (const SDValue Op, KnownBits &Known, const APInt &DemandedElts, const SelectionDAG &DAG, unsigned Depth=0) const
 Determine which of the bits specified in Mask are known to be either zero or one and return them in the KnownZero/KnownOne bitsets.
 
virtual void computeKnownBitsForTargetInstr (GISelKnownBits &Analysis, Register R, KnownBits &Known, const APInt &DemandedElts, const MachineRegisterInfo &MRI, unsigned Depth=0) const
 Determine which of the bits specified in Mask are known to be either zero or one and return them in the KnownZero/KnownOne bitsets.
 
virtual Align computeKnownAlignForTargetInstr (GISelKnownBits &Analysis, Register R, const MachineRegisterInfo &MRI, unsigned Depth=0) const
 Determine the known alignment for the pointer value R.
 
virtual void computeKnownBitsForFrameIndex (int FIOp, KnownBits &Known, const MachineFunction &MF) const
 Determine which of the bits of FrameIndex FIOp are known to be 0.
 
virtual unsigned ComputeNumSignBitsForTargetNode (SDValue Op, const APInt &DemandedElts, const SelectionDAG &DAG, unsigned Depth=0) const
 This method can be implemented by targets that want to expose additional information about sign bits to the DAG Combiner.
 
virtual unsigned computeNumSignBitsForTargetInstr (GISelKnownBits &Analysis, Register R, const APInt &DemandedElts, const MachineRegisterInfo &MRI, unsigned Depth=0) const
 This method can be implemented by targets that want to expose additional information about sign bits to GlobalISel combiners.
 
virtual bool SimplifyDemandedVectorEltsForTargetNode (SDValue Op, const APInt &DemandedElts, APInt &KnownUndef, APInt &KnownZero, TargetLoweringOpt &TLO, unsigned Depth=0) const
 Attempt to simplify any target nodes based on the demanded vector elements, returning true on success.
 
virtual bool SimplifyDemandedBitsForTargetNode (SDValue Op, const APInt &DemandedBits, const APInt &DemandedElts, KnownBits &Known, TargetLoweringOpt &TLO, unsigned Depth=0) const
 Attempt to simplify any target nodes based on the demanded bits/elts, returning true on success.
 
virtual SDValue SimplifyMultipleUseDemandedBitsForTargetNode (SDValue Op, const APInt &DemandedBits, const APInt &DemandedElts, SelectionDAG &DAG, unsigned Depth) const
 More limited version of SimplifyDemandedBits that can be used to "look through" ops that don't contribute to the DemandedBits/DemandedElts - bitwise ops etc.
 
virtual bool isGuaranteedNotToBeUndefOrPoisonForTargetNode (SDValue Op, const APInt &DemandedElts, const SelectionDAG &DAG, bool PoisonOnly, unsigned Depth) const
 Return true if this function can prove that Op is never poison and, if PoisonOnly is false, does not have undef bits.
 
virtual bool canCreateUndefOrPoisonForTargetNode (SDValue Op, const APInt &DemandedElts, const SelectionDAG &DAG, bool PoisonOnly, bool ConsiderFlags, unsigned Depth) const
 Return true if Op can create undef or poison from non-undef & non-poison operands.
 
SDValue buildLegalVectorShuffle (EVT VT, const SDLoc &DL, SDValue N0, SDValue N1, MutableArrayRef< int > Mask, SelectionDAG &DAG) const
 Tries to build a legal vector shuffle using the provided parameters or equivalent variations.
 
virtual const Constant * getTargetConstantFromLoad (LoadSDNode *LD) const
 This method returns the constant pool value that will be loaded by LD.
 
virtual bool isKnownNeverNaNForTargetNode (SDValue Op, const SelectionDAG &DAG, bool SNaN=false, unsigned Depth=0) const
 If SNaN is false, returns true if Op is known to never be any NaN. If SNaN is true, returns true if Op is known to never be a signaling NaN.
 
virtual bool isSplatValueForTargetNode (SDValue Op, const APInt &DemandedElts, APInt &UndefElts, const SelectionDAG &DAG, unsigned Depth=0) const
 Return true if vector Op has the same value across all DemandedElts, indicating any elements which may be undef in the output UndefElts.
 
virtual bool isTargetCanonicalConstantNode (SDValue Op) const
 Returns true if the given Opc is considered a canonical constant for the target, which should not be transformed back into a BUILD_VECTOR.
 
bool isConstTrueVal (SDValue N) const
 Return true if N is a constant or constant vector equal to the true value from getBooleanContents().
 
bool isConstFalseVal (SDValue N) const
 Return true if N is a constant or constant vector equal to the false value from getBooleanContents().
 
bool isExtendedTrueVal (const ConstantSDNode *N, EVT VT, bool SExt) const
 Return true if N is a true value when extended to VT.
 
SDValue SimplifySetCC (EVT VT, SDValue N0, SDValue N1, ISD::CondCode Cond, bool foldBooleans, DAGCombinerInfo &DCI, const SDLoc &dl) const
 Try to simplify a setcc built with the specified operands and cc.
 
virtual SDValue unwrapAddress (SDValue N) const
 
virtual bool isGAPlusOffset (SDNode *N, const GlobalValue *&GA, int64_t &Offset) const
 Returns true (and the GlobalValue and the offset) if the node is a GlobalAddress + offset.
 
virtual SDValue PerformDAGCombine (SDNode *N, DAGCombinerInfo &DCI) const
 This method will be invoked for all target nodes and for any target-independent nodes that the target has registered to invoke it for.
 
virtual bool isDesirableToCommuteWithShift (const SDNode *N, CombineLevel Level) const
 Return true if it is profitable to move this shift by a constant amount through its operand, adjusting any immediate operands as necessary to preserve semantics.
 
virtual bool isDesirableToCommuteWithShift (const MachineInstr &MI, bool IsAfterLegal) const
 GlobalISel - return true if it is profitable to move this shift by a constant amount through its operand, adjusting any immediate operands as necessary to preserve semantics.
 
virtual bool isDesirableToPullExtFromShl (const MachineInstr &MI) const
 GlobalISel - return true if it's profitable to perform the combine: shl ([sza]ext x), y => zext (shl x, y)
 
virtual AndOrSETCCFoldKind isDesirableToCombineLogicOpOfSETCC (const SDNode *LogicOp, const SDNode *SETCC0, const SDNode *SETCC1) const
 
virtual bool isDesirableToCommuteXorWithShift (const SDNode *N) const
 Return true if it is profitable to combine an XOR of a logical shift to create a logical shift of NOT.
 
virtual bool isTypeDesirableForOp (unsigned, EVT VT) const
 Return true if the target has native support for the specified value type and it is 'desirable' to use the type for the given node type.
 
virtual bool isDesirableToTransformToIntegerOp (unsigned, EVT) const
 Return true if it is profitable for the dag combiner to transform a floating point op of the specified opcode to an equivalent op of an integer type.
 
virtual bool IsDesirableToPromoteOp (SDValue, EVT &) const
 This method queries the target whether it is beneficial for the dag combiner to promote the specified node.
 
virtual bool supportSwiftError () const
 Return true if the target supports swifterror attribute.
 
virtual bool supportSplitCSR (MachineFunction *MF) const
 Return true if the target supports that a subset of CSRs for the given machine function is handled explicitly via copies.
 
virtual bool supportKCFIBundles () const
 Return true if the target supports kcfi operand bundles.
 
virtual bool supportPtrAuthBundles () const
 Return true if the target supports ptrauth operand bundles.
 
virtual void initializeSplitCSR (MachineBasicBlock *Entry) const
 Perform necessary initialization to handle a subset of CSRs explicitly via copies.
 
virtual void insertCopiesSplitCSR (MachineBasicBlock *Entry, const SmallVectorImpl< MachineBasicBlock * > &Exits) const
 Insert explicit copies in entry and exit blocks.
 
virtual SDValue getNegatedExpression (SDValue Op, SelectionDAG &DAG, bool LegalOps, bool OptForSize, NegatibleCost &Cost, unsigned Depth=0) const
 Return the newly negated expression if the cost is not expensive, and set the cost in Cost to indicate whether it is cheaper or neutral to do the negation.
 
SDValue getCheaperOrNeutralNegatedExpression (SDValue Op, SelectionDAG &DAG, bool LegalOps, bool OptForSize, const NegatibleCost CostThreshold=NegatibleCost::Neutral, unsigned Depth=0) const
 
SDValue getCheaperNegatedExpression (SDValue Op, SelectionDAG &DAG, bool LegalOps, bool OptForSize, unsigned Depth=0) const
 This is the helper function to return the newly negated expression only when the cost is cheaper.
 
SDValue getNegatedExpression (SDValue Op, SelectionDAG &DAG, bool LegalOps, bool OptForSize, unsigned Depth=0) const
 This is the helper function to return the newly negated expression if the cost is not expensive.
 
virtual bool splitValueIntoRegisterParts (SelectionDAG &DAG, const SDLoc &DL, SDValue Val, SDValue *Parts, unsigned NumParts, MVT PartVT, std::optional< CallingConv::ID > CC) const
 Target-specific splitting of values into parts that fit a register storing a legal type.
 
virtual bool checkForPhysRegDependency (SDNode *Def, SDNode *User, unsigned Op, const TargetRegisterInfo *TRI, const TargetInstrInfo *TII, unsigned &PhysReg, int &Cost) const
 Allows the target to handle physreg-carried dependency in target-specific way.
 
virtual SDValue joinRegisterPartsIntoValue (SelectionDAG &DAG, const SDLoc &DL, const SDValue *Parts, unsigned NumParts, MVT PartVT, EVT ValueVT, std::optional< CallingConv::ID > CC) const
 Target-specific combining of register parts into its original value.
 
virtual SDValue LowerFormalArguments (SDValue, CallingConv::ID, bool, const SmallVectorImpl< ISD::InputArg > &, const SDLoc &, SelectionDAG &, SmallVectorImpl< SDValue > &) const
 This hook must be implemented to lower the incoming (formal) arguments, described by the Ins array, into the specified DAG.
 
std::pair< SDValue, SDValue > LowerCallTo (CallLoweringInfo &CLI) const
 This function lowers an abstract call to a function into an actual call.
 
virtual SDValue LowerCall (CallLoweringInfo &, SmallVectorImpl< SDValue > &) const
 This hook must be implemented to lower calls into the specified DAG.
 
virtual void HandleByVal (CCState *, unsigned &, Align) const
 Target-specific cleanup for formal ByVal parameters.
 
virtual bool CanLowerReturn (CallingConv::ID, MachineFunction &, bool, const SmallVectorImpl< ISD::OutputArg > &, LLVMContext &) const
 This hook should be implemented to check whether the return values described by the Outs array can fit into the return registers.
 
virtual SDValue LowerReturn (SDValue, CallingConv::ID, bool, const SmallVectorImpl< ISD::OutputArg > &, const SmallVectorImpl< SDValue > &, const SDLoc &, SelectionDAG &) const
 This hook must be implemented to lower outgoing return values, described by the Outs array, into the specified DAG.
 
virtual bool isUsedByReturnOnly (SDNode *, SDValue &) const
 Return true if result of the specified node is used by a return node only.
 
virtual bool mayBeEmittedAsTailCall (const CallInst *) const
 Return true if the target may be able to emit the call instruction as a tail call.
 
virtual Register getRegisterByName (const char *RegName, LLT Ty, const MachineFunction &MF) const
 Return the register ID of the name passed in.
 
virtual EVT getTypeForExtReturn (LLVMContext &Context, EVT VT, ISD::NodeType) const
 Return the type that should be used to zero or sign extend a zeroext/signext integer return value.
 
virtual bool functionArgumentNeedsConsecutiveRegisters (Type *Ty, CallingConv::ID CallConv, bool isVarArg, const DataLayout &DL) const
 For some targets, an LLVM struct type must be broken down into multiple simple types, but the calling convention specifies that the entire struct must be passed in a block of consecutive registers.
 
virtual bool shouldSplitFunctionArgumentsAsLittleEndian (const DataLayout &DL) const
 For most targets, an LLVM type must be broken down into multiple smaller types.
 
virtual const MCPhysReg * getScratchRegisters (CallingConv::ID CC) const
 Returns a 0 terminated array of registers that can be safely used as scratch registers.
 
virtual ArrayRef< MCPhysReg > getRoundingControlRegisters () const
 Returns a 0 terminated array of rounding control registers that can be attached to a strict FP call.
 
virtual SDValue prepareVolatileOrAtomicLoad (SDValue Chain, const SDLoc &DL, SelectionDAG &DAG) const
 This callback is used to prepare for a volatile or atomic load.
 
virtual void LowerOperationWrapper (SDNode *N, SmallVectorImpl< SDValue > &Results, SelectionDAG &DAG) const
 This callback is invoked by the type legalizer to legalize nodes with an illegal operand type but legal result types.
 
virtual SDValue LowerOperation (SDValue Op, SelectionDAG &DAG) const
 This callback is invoked for operations that are unsupported by the target, which are registered to use 'custom' lowering, and whose defined values are all legal.
 
virtual void ReplaceNodeResults (SDNode *, SmallVectorImpl< SDValue > &, SelectionDAG &) const
 This callback is invoked when a node result type is illegal for the target, and the operation was registered to use 'custom' lowering for that result type.
 
virtual const char * getTargetNodeName (unsigned Opcode) const
 This method returns the name of a target specific DAG node.
 
virtual FastISel * createFastISel (FunctionLoweringInfo &, const TargetLibraryInfo *) const
 This method returns a target specific FastISel object, or null if the target does not support "fast" ISel.
 
bool verifyReturnAddressArgumentIsConstant (SDValue Op, SelectionDAG &DAG) const
 
virtual void verifyTargetSDNode (const SDNode *N) const
 Check the given SDNode. Aborts if it is invalid.
 
virtual bool ExpandInlineAsm (CallInst *) const
 This hook allows the target to expand an inline asm call to be explicit llvm code if it wants to.
 
virtual AsmOperandInfoVector ParseConstraints (const DataLayout &DL, const TargetRegisterInfo *TRI, const CallBase &Call) const
 Split up the constraint string from the inline assembly value into the specific constraints and their prefixes, and also tie in the associated operand values.
 
virtual ConstraintWeight getMultipleConstraintMatchWeight (AsmOperandInfo &info, int maIndex) const
 Examine constraint type and operand type and determine a weight value.
 
virtual ConstraintWeight getSingleConstraintMatchWeight (AsmOperandInfo &info, const char *constraint) const
 Examine constraint string and operand type and determine a weight value.
 
virtual void ComputeConstraintToUse (AsmOperandInfo &OpInfo, SDValue Op, SelectionDAG *DAG=nullptr) const
 Determines the constraint code and constraint type to use for the specific AsmOperandInfo, setting OpInfo.ConstraintCode and OpInfo.ConstraintType.
 
virtual ConstraintType getConstraintType (StringRef Constraint) const
 Given a constraint, return the type of constraint it is for this target.
 
ConstraintGroup getConstraintPreferences (AsmOperandInfo &OpInfo) const
 Given an OpInfo with list of constraints codes as strings, return a sorted Vector of pairs of constraint codes and their types in priority of what we'd prefer to lower them as.
 
virtual std::pair< unsigned, const TargetRegisterClass * > getRegForInlineAsmConstraint (const TargetRegisterInfo *TRI, StringRef Constraint, MVT VT) const
 Given a physical register constraint (e.g. {edx}), return the register number and the register class for the register.
 
virtual InlineAsm::ConstraintCode getInlineAsmMemConstraint (StringRef ConstraintCode) const
 
virtual const char * LowerXConstraint (EVT ConstraintVT) const
 Try to replace an X constraint, which matches anything, with another that has more specific requirements based on the type of the corresponding operand.
 
virtual void LowerAsmOperandForConstraint (SDValue Op, StringRef Constraint, std::vector< SDValue > &Ops, SelectionDAG &DAG) const
 Lower the specified operand into the Ops vector.
 
virtual SDValue LowerAsmOutputForConstraint (SDValue &Chain, SDValue &Glue, const SDLoc &DL, const AsmOperandInfo &OpInfo, SelectionDAG &DAG) const
 
virtual void CollectTargetIntrinsicOperands (const CallInst &I, SmallVectorImpl< SDValue > &Ops, SelectionDAG &DAG) const
 
SDValue BuildSDIV (SDNode *N, SelectionDAG &DAG, bool IsAfterLegalization, bool IsAfterLegalTypes, SmallVectorImpl< SDNode * > &Created) const
 Given an ISD::SDIV node expressing a divide by constant, return a DAG expression to select that will generate the same value by multiplying by a magic number.
 
SDValue BuildUDIV (SDNode *N, SelectionDAG &DAG, bool IsAfterLegalization, bool IsAfterLegalTypes, SmallVectorImpl< SDNode * > &Created) const
 Given an ISD::UDIV node expressing a divide by constant, return a DAG expression to select that will generate the same value by multiplying by a magic number.
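
A worked instance of the multiply-by-magic-number idea for unsigned division: for a 32-bit dividend and divisor 3, m = ceil(2^33 / 3) = 0xAAAAAAAB gives an exact quotient via one widening multiply and a shift. The constant and shift are specific to this divisor; BuildUDIV derives them per divisor.

    #include <cstdint>
    #include <cassert>

    // Unsigned divide-by-constant via multiply: for d = 3 and a 32-bit n,
    // m = ceil(2^33 / 3) = 0xAAAAAAAB satisfies (n * m) >> 33 == n / 3
    // for every uint32_t n (classic magic-number division).
    uint32_t udiv3(uint32_t N) {
      return static_cast<uint32_t>(
          (static_cast<uint64_t>(N) * 0xAAAAAAABull) >> 33);
    }

    int main() {
      for (uint32_t N : {0u, 1u, 2u, 3u, 100u, 0xFFFFFFFEu, 0xFFFFFFFFu})
        assert(udiv3(N) == N / 3);
    }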
 
SDValue buildSDIVPow2WithCMov (SDNode *N, const APInt &Divisor, SelectionDAG &DAG, SmallVectorImpl< SDNode * > &Created) const
 Build sdiv by power-of-2 with conditional move instructions. Ref: "Hacker's Delight" by Henry Warren, 10-1. If conditional move/branch is preferred, we lower sdiv x, +/-2**k into:
   bgez x, label
   add x, x, 2**k-1
 label:
   sra res, x, k
   neg res, res (when the divisor is negative)
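
The same rounding fix-up can be written branchlessly; this sketch replaces the bgez/conditional move of the sequence above with a sign mask, which makes the adjustment easy to verify against C++'s truncating division.

    #include <cstdint>
    #include <cassert>

    // sdiv by 2^k rounds toward zero, but an arithmetic shift rounds toward
    // negative infinity; adding 2^k - 1 to negative inputs first (the
    // "add x, x, 2**k-1" in the sequence above) fixes the rounding.
    int32_t sdivPow2(int32_t X, unsigned K) {
      int32_t Bias = (X >> 31) & ((1 << K) - 1); // 2^k-1 if X < 0, else 0
      return (X + Bias) >> K;                    // assumes arithmetic >>
    }

    int main() {
      for (int32_t X : {-9, -8, -1, 0, 1, 7, 8, 1000})
        assert(sdivPow2(X, 3) == X / 8);
    }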
 
virtual SDValue BuildSDIVPow2 (SDNode *N, const APInt &Divisor, SelectionDAG &DAG, SmallVectorImpl< SDNode * > &Created) const
 Targets may override this function to provide custom SDIV lowering for power-of-2 denominators.
 
virtual SDValue BuildSREMPow2 (SDNode *N, const APInt &Divisor, SelectionDAG &DAG, SmallVectorImpl< SDNode * > &Created) const
 Targets may override this function to provide custom SREM lowering for power-of-2 denominators.
 
virtual unsigned combineRepeatedFPDivisors () const
 Indicate whether this target prefers to combine FDIVs with the same divisor.
 
virtual SDValue getSqrtEstimate (SDValue Operand, SelectionDAG &DAG, int Enabled, int &RefinementSteps, bool &UseOneConstNR, bool Reciprocal) const
 Hooks for building estimates in place of slower divisions and square roots.
 
SDValue createSelectForFMINNUM_FMAXNUM (SDNode *Node, SelectionDAG &DAG) const
 Try to convert the fminnum/fmaxnum to a compare/select sequence.
 
virtual SDValue getRecipEstimate (SDValue Operand, SelectionDAG &DAG, int Enabled, int &RefinementSteps) const
 Return a reciprocal estimate value for the input operand.
 
virtual SDValue getSqrtInputTest (SDValue Operand, SelectionDAG &DAG, const DenormalMode &Mode) const
 Return a target-dependent comparison result if the input operand is suitable for use with a square root estimate calculation.
 
virtual SDValue getSqrtResultForDenormInput (SDValue Operand, SelectionDAG &DAG) const
 Return a target-dependent result if the input operand is not suitable for use with a square root estimate calculation.
 
bool expandMUL_LOHI (unsigned Opcode, EVT VT, const SDLoc &dl, SDValue LHS, SDValue RHS, SmallVectorImpl< SDValue > &Result, EVT HiLoVT, SelectionDAG &DAG, MulExpansionKind Kind, SDValue LL=SDValue(), SDValue LH=SDValue(), SDValue RL=SDValue(), SDValue RH=SDValue()) const
 Expand a MUL or [US]MUL_LOHI of n-bit values into two or four nodes, respectively, each computing an n/2-bit part of the result.
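
The n/2-bit decomposition is ordinary schoolbook multiplication. Here is a 32x32->64 version built from 16-bit parts, checked against the native multiply; the node expansion applies the same identity at the register width.

    #include <cstdint>
    #include <cassert>

    // Schoolbook expansion of a 32x32->64 multiply out of 16-bit parts,
    // mirroring how MUL_LOHI splits an n-bit multiply into n/2-bit pieces:
    // x*y = xl*yl + ((xl*yh + xh*yl) << 16) + (xh*yh << 32).
    uint64_t mulLoHi32(uint32_t X, uint32_t Y) {
      uint32_t XL = X & 0xFFFF, XH = X >> 16;
      uint32_t YL = Y & 0xFFFF, YH = Y >> 16;
      uint64_t LL = (uint64_t)XL * YL, LH = (uint64_t)XL * YH;
      uint64_t HL = (uint64_t)XH * YL, HH = (uint64_t)XH * YH;
      return LL + ((LH + HL) << 16) + (HH << 32);
    }

    int main() {
      for (uint32_t X : {0u, 3u, 0xFFFFu, 0x12345678u, 0xFFFFFFFFu})
        for (uint32_t Y : {1u, 0x10000u, 0xDEADBEEFu})
          assert(mulLoHi32(X, Y) == (uint64_t)X * Y);
    }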
 
bool expandMUL (SDNode *N, SDValue &Lo, SDValue &Hi, EVT HiLoVT, SelectionDAG &DAG, MulExpansionKind Kind, SDValue LL=SDValue(), SDValue LH=SDValue(), SDValue RL=SDValue(), SDValue RH=SDValue()) const
 Expand a MUL into two nodes.
 
bool expandDIVREMByConstant (SDNode *N, SmallVectorImpl< SDValue > &Result, EVT HiLoVT, SelectionDAG &DAG, SDValue LL=SDValue(), SDValue LH=SDValue()) const
 Attempt to expand an n-bit div/rem/divrem by constant using a n/2-bit urem by constant and other arithmetic ops.
 
SDValue expandFunnelShift (SDNode *N, SelectionDAG &DAG) const
 Expand funnel shift.
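
A scalar model of the funnel-shift semantics being expanded: shift the Hi:Lo concatenation left by S modulo the width and keep the high word. The S % W == 0 case must be special-cased, since a full-width shift is undefined in C++.

    #include <cstdint>
    #include <cassert>

    // fshl(Hi, Lo, S): concatenate Hi:Lo, shift left by S (mod width),
    // take the high word.
    uint32_t fshl32(uint32_t Hi, uint32_t Lo, uint32_t S) {
      S &= 31;
      return S == 0 ? Hi : (Hi << S) | (Lo >> (32 - S));
    }

    int main() {
      assert(fshl32(0x12345678, 0x9ABCDEF0, 8) == 0x3456789A);
      assert(fshl32(0x12345678, 0x9ABCDEF0, 0) == 0x12345678);
      // A rotate is a funnel shift with both inputs equal.
      assert(fshl32(0x80000001, 0x80000001, 1) == 0x00000003);
    }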
 
SDValue expandROT (SDNode *N, bool AllowVectorOps, SelectionDAG &DAG) const
 Expand rotations.
 
void expandShiftParts (SDNode *N, SDValue &Lo, SDValue &Hi, SelectionDAG &DAG) const
 Expand shift-by-parts.
 
bool expandFP_TO_SINT (SDNode *N, SDValue &Result, SelectionDAG &DAG) const
 Expand float(f32) to SINT(i64) conversion.
 
bool expandFP_TO_UINT (SDNode *N, SDValue &Result, SDValue &Chain, SelectionDAG &DAG) const
 Expand float to UINT conversion.
 
bool expandUINT_TO_FP (SDNode *N, SDValue &Result, SDValue &Chain, SelectionDAG &DAG) const
 Expand UINT(i64) to double(f64) conversion.
 
SDValue expandFMINNUM_FMAXNUM (SDNode *N, SelectionDAG &DAG) const
 Expand fminnum/fmaxnum into fminnum_ieee/fmaxnum_ieee with quieted inputs.
 
SDValue expandFMINIMUM_FMAXIMUM (SDNode *N, SelectionDAG &DAG) const
 Expand fminimum/fmaximum into multiple comparisons with selects.
 
SDValue expandFMINIMUMNUM_FMAXIMUMNUM (SDNode *N, SelectionDAG &DAG) const
 Expand fminimumnum/fmaximumnum into multiple comparisons with selects.
 
SDValue expandFP_TO_INT_SAT (SDNode *N, SelectionDAG &DAG) const
 Expand FP_TO_[US]INT_SAT into FP_TO_[US]INT and selects or min/max.
 
SDValue expandRoundInexactToOdd (EVT ResultVT, SDValue Op, const SDLoc &DL, SelectionDAG &DAG) const
 Truncate Op to ResultVT.
 
SDValue expandFP_ROUND (SDNode *Node, SelectionDAG &DAG) const
 Expand round(fp) to fp conversion.
 
SDValue expandIS_FPCLASS (EVT ResultVT, SDValue Op, FPClassTest Test, SDNodeFlags Flags, const SDLoc &DL, SelectionDAG &DAG) const
 Expand check for floating point class.
 
SDValue expandCTPOP (SDNode *N, SelectionDAG &DAG) const
 Expand CTPOP nodes.
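
A scalar version of the standard branch-free CTPOP expansion (pairwise sums, then nibble and byte accumulation); the masks below are the classic constants such expansions use, applied lane-wise in the vector case.

    #include <cstdint>
    #include <cassert>

    // Tree-reduction CTPOP: sum adjacent 1-bit fields, then 2-bit fields,
    // then 4-bit fields, then accumulate the four bytes with a multiply.
    uint32_t ctpop32(uint32_t V) {
      V = V - ((V >> 1) & 0x55555555);                // 2-bit sums
      V = (V & 0x33333333) + ((V >> 2) & 0x33333333); // 4-bit sums
      V = (V + (V >> 4)) & 0x0F0F0F0F;                // 8-bit sums
      return (V * 0x01010101) >> 24;                  // add the four bytes
    }

    int main() {
      assert(ctpop32(0) == 0);
      assert(ctpop32(0xFFFFFFFF) == 32);
      assert(ctpop32(0x80000001) == 2);
    }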
 
SDValue expandVPCTPOP (SDNode *N, SelectionDAG &DAG) const
 Expand VP_CTPOP nodes.
 
SDValue expandCTLZ (SDNode *N, SelectionDAG &DAG) const
 Expand CTLZ/CTLZ_ZERO_UNDEF nodes.
 
SDValue expandVPCTLZ (SDNode *N, SelectionDAG &DAG) const
 Expand VP_CTLZ/VP_CTLZ_ZERO_UNDEF nodes.
 
SDValue CTTZTableLookup (SDNode *N, SelectionDAG &DAG, const SDLoc &DL, EVT VT, SDValue Op, unsigned NumBitsPerElt) const
 Expand CTTZ via Table Lookup.
 
SDValue expandCTTZ (SDNode *N, SelectionDAG &DAG) const
 Expand CTTZ/CTTZ_ZERO_UNDEF nodes.
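 
 When CTPOP is legal, CTTZ reduces to one mask plus a population count; a C++ sketch (ctpop32 as sketched under expandCTPOP above):
 
    #include <cstdint>
    
    uint32_t ctpop32(uint32_t X);  // see the expandCTPOP sketch
    
    uint32_t cttz32(uint32_t X) {
      // ~X & (X - 1) sets exactly the bits below the lowest set bit of X
      // (all 32 of them when X == 0), so counting them gives the answer.
      return ctpop32(~X & (X - 1));
    }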
 
SDValue expandVPCTTZ (SDNode *N, SelectionDAG &DAG) const
 Expand VP_CTTZ/VP_CTTZ_ZERO_UNDEF nodes.
 
SDValue expandVPCTTZElements (SDNode *N, SelectionDAG &DAG) const
 Expand VP_CTTZ_ELTS/VP_CTTZ_ELTS_ZERO_UNDEF nodes.
 
SDValue expandABS (SDNode *N, SelectionDAG &DAG, bool IsNegative=false) const
 Expand ABS nodes.
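 
 Without a native ABS, the expansion uses the branch-free sign-smear idiom; a 32-bit C++ sketch:
 
    #include <cstdint>
    
    int32_t abs32(int32_t X) {
      int32_t Sign = X >> 31;    // arithmetic shift: 0 or -1
      return (X ^ Sign) - Sign;  // conditional negate without a branch
    }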
 
SDValue expandABD (SDNode *N, SelectionDAG &DAG) const
 Expand ABDS/ABDU nodes.
 
SDValue expandAVG (SDNode *N, SelectionDAG &DAG) const
 Expand vector/scalar AVGCEILS/AVGCEILU/AVGFLOORS/AVGFLOORU nodes.
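 
 The averaging nodes have overflow-free expansions that avoid a wider add; a C++ sketch of the unsigned floor and ceiling forms:
 
    #include <cstdint>
    
    uint32_t avgflooru(uint32_t A, uint32_t B) { return (A & B) + ((A ^ B) >> 1); }
    uint32_t avgceilu(uint32_t A, uint32_t B)  { return (A | B) - ((A ^ B) >> 1); }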
 
SDValue expandBSWAP (SDNode *N, SelectionDAG &DAG) const
 Expand BSWAP nodes.
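 
 Byte swap expands to masked shifts when no native instruction exists; a 32-bit C++ sketch:
 
    #include <cstdint>
    
    uint32_t bswap32(uint32_t X) {
      return (X << 24) | ((X & 0xFF00) << 8) |
             ((X >> 8) & 0xFF00) | (X >> 24);
    }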
 
SDValue expandVPBSWAP (SDNode *N, SelectionDAG &DAG) const
 Expand VP_BSWAP nodes.
 
SDValue expandBITREVERSE (SDNode *N, SelectionDAG &DAG) const
 Expand BITREVERSE nodes.
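 
 Bit reversal expands to a logarithmic series of mask-and-shift swaps; a 32-bit C++ sketch:
 
    #include <cstdint>
    
    uint32_t bitreverse32(uint32_t X) {
      X = ((X & 0x55555555) << 1) | ((X >> 1) & 0x55555555);  // swap adjacent bits
      X = ((X & 0x33333333) << 2) | ((X >> 2) & 0x33333333);  // swap bit pairs
      X = ((X & 0x0F0F0F0F) << 4) | ((X >> 4) & 0x0F0F0F0F);  // swap nibbles
      X = ((X & 0x00FF00FF) << 8) | ((X >> 8) & 0x00FF00FF);  // swap bytes
      return (X << 16) | (X >> 16);                           // swap half-words
    }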
 
SDValue expandVPBITREVERSE (SDNode *N, SelectionDAG &DAG) const
 Expand VP_BITREVERSE nodes.
 
std::pair< SDValue, SDValue > scalarizeVectorLoad (LoadSDNode *LD, SelectionDAG &DAG) const
 Turn load of vector type into a load of the individual elements.
 
SDValue scalarizeVectorStore (StoreSDNode *ST, SelectionDAG &DAG) const
 
std::pair< SDValue, SDValue > expandUnalignedLoad (LoadSDNode *LD, SelectionDAG &DAG) const
 Expands an unaligned load to 2 half-size loads for an integer, and possibly more for vectors.
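 
 On a little-endian target the two half loads are recombined with a shift and an OR; a C++ sketch of the shape of the expansion (the real code also handles big-endian layouts and vectors):
 
    #include <cstdint>
    #include <cstring>
    
    uint32_t loadUnaligned32LE(const unsigned char *P) {
      uint16_t Lo, Hi;
      std::memcpy(&Lo, P, 2);      // first half-size load
      std::memcpy(&Hi, P + 2, 2);  // second half-size load
      return (uint32_t)Lo | ((uint32_t)Hi << 16);
    }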
 
SDValue expandUnalignedStore (StoreSDNode *ST, SelectionDAG &DAG) const
 Expands an unaligned store to 2 half-size stores for integer values, and possibly more for vectors.
 
SDValue IncrementMemoryAddress (SDValue Addr, SDValue Mask, const SDLoc &DL, EVT DataVT, SelectionDAG &DAG, bool IsCompressedMemory) const
 Increments memory address Addr according to the type of the value DataVT that should be stored.
 
SDValue getVectorElementPointer (SelectionDAG &DAG, SDValue VecPtr, EVT VecVT, SDValue Index) const
 Get a pointer to vector element Index located in memory for a vector of type VecVT starting at a base address of VecPtr.
 
SDValue getVectorSubVecPointer (SelectionDAG &DAG, SDValue VecPtr, EVT VecVT, EVT SubVecVT, SDValue Index) const
 Get a pointer to a sub-vector of type SubVecVT at index Index located in memory for a vector of type VecVT starting at a base address of VecPtr.
 
SDValue expandIntMINMAX (SDNode *Node, SelectionDAG &DAG) const
 Method for building the DAG expansion of ISD::[US][MIN|MAX].
 
SDValue expandAddSubSat (SDNode *Node, SelectionDAG &DAG) const
 Method for building the DAG expansion of ISD::[US][ADD|SUB]SAT.
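 
 The unsigned saturating add has a compact branch-free expansion; a 32-bit C++ sketch:
 
    #include <cstdint>
    
    uint32_t uaddsat32(uint32_t A, uint32_t B) {
      uint32_t Sum = A + B;
      // On overflow the wrapped sum is smaller than either operand; OR-ing in
      // the resulting all-ones mask saturates the result to UINT32_MAX.
      return Sum | -(uint32_t)(Sum < A);
    }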
 
SDValue expandCMP (SDNode *Node, SelectionDAG &DAG) const
 Method for building the DAG expansion of ISD::[US]CMP.
 
SDValue expandShlSat (SDNode *Node, SelectionDAG &DAG) const
 Method for building the DAG expansion of ISD::[US]SHLSAT.
 
SDValue expandFixedPointMul (SDNode *Node, SelectionDAG &DAG) const
 Method for building the DAG expansion of ISD::[U|S]MULFIX[SAT].
 
SDValue expandFixedPointDiv (unsigned Opcode, const SDLoc &dl, SDValue LHS, SDValue RHS, unsigned Scale, SelectionDAG &DAG) const
 Method for building the DAG expansion of ISD::[US]DIVFIX[SAT].
 
void expandUADDSUBO (SDNode *Node, SDValue &Result, SDValue &Overflow, SelectionDAG &DAG) const
 Method for building the DAG expansion of ISD::U(ADD|SUB)O.
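 
 For unsigned addition, the overflow flag is exactly the "sum wrapped below an operand" test; a C++ sketch:
 
    #include <cstdint>
    
    struct UAddO { uint32_t Result; bool Overflow; };
    
    UAddO uaddo32(uint32_t A, uint32_t B) {
      uint32_t Sum = A + B;
      return {Sum, Sum < A};  // carry out iff the add wrapped
    }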
 
void expandSADDSUBO (SDNode *Node, SDValue &Result, SDValue &Overflow, SelectionDAG &DAG) const
 Method for building the DAG expansion of ISD::S(ADD|SUB)O.
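 
 Signed addition overflows only when both operands share a sign that the result does not; a C++ sketch of the idiom:
 
    #include <cstdint>
    
    bool saddo32(int32_t A, int32_t B, int32_t &Result) {
      uint32_t Sum = (uint32_t)A + (uint32_t)B;  // wrap-safe add
      Result = (int32_t)Sum;
      // Same-sign operands with a differing-sign result means overflow.
      return ((~(A ^ B)) & (A ^ (int32_t)Sum)) < 0;
    }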
 
bool expandMULO (SDNode *Node, SDValue &Result, SDValue &Overflow, SelectionDAG &DAG) const
 Method for building the DAG expansion of ISD::[US]MULO.
 
void forceExpandWideMUL (SelectionDAG &DAG, const SDLoc &dl, bool Signed, EVT WideVT, const SDValue LL, const SDValue LH, const SDValue RL, const SDValue RH, SDValue &Lo, SDValue &Hi) const
 Unconditionally expand a MUL into either a libcall or brute force via a wide multiplication.
 
void forceExpandWideMUL (SelectionDAG &DAG, const SDLoc &dl, bool Signed, const SDValue LHS, const SDValue RHS, SDValue &Lo, SDValue &Hi) const
 Same as above, but creates the upper halves of each operand by sign/zero-extending the operands.
 
SDValue expandVecReduce (SDNode *Node, SelectionDAG &DAG) const
 Expand a VECREDUCE_* into an explicit calculation.
 
SDValue expandVecReduceSeq (SDNode *Node, SelectionDAG &DAG) const
 Expand a VECREDUCE_SEQ_* into an explicit ordered calculation.
 
bool expandREM (SDNode *Node, SDValue &Result, SelectionDAG &DAG) const
 Expand an SREM or UREM using SDIV/UDIV or SDIVREM/UDIVREM, if legal.
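 
 When only the division is legal, the remainder is recovered by back-multiplying; a one-line C++ sketch:
 
    #include <cstdint>
    
    uint32_t urem_via_udiv(uint32_t X, uint32_t Y) {
      return X - (X / Y) * Y;  // the form UREM/SREM expand to
    }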
 
SDValue expandVectorSplice (SDNode *Node, SelectionDAG &DAG) const
 Method for building the DAG expansion of ISD::VECTOR_SPLICE.
 
SDValue expandVECTOR_COMPRESS (SDNode *Node, SelectionDAG &DAG) const
 Expand VECTOR_COMPRESS into a sequence that extracts each element, conditionally stores it to a stack temporary while advancing the store position, and finally reloads the resulting vector.
 
bool LegalizeSetCCCondCode (SelectionDAG &DAG, EVT VT, SDValue &LHS, SDValue &RHS, SDValue &CC, SDValue Mask, SDValue EVL, bool &NeedInvert, const SDLoc &dl, SDValue &Chain, bool IsSignaling=false) const
 Legalize a SETCC or VP_SETCC with given LHS and RHS and condition code CC on the current target.
 
virtual MachineBasicBlock * EmitInstrWithCustomInserter (MachineInstr &MI, MachineBasicBlock *MBB) const
 This method should be implemented by targets that mark instructions with the 'usesCustomInserter' flag.
 
virtual void AdjustInstrPostInstrSelection (MachineInstr &MI, SDNode *Node) const
 This method should be implemented by targets that mark instructions with the 'hasPostISelHook' flag.
 
virtual bool useLoadStackGuardNode (const Module &M) const
 If this function returns true, SelectionDAGBuilder emits a LOAD_STACK_GUARD node when it is lowering Intrinsic::stackprotector.
 
virtual SDValue emitStackGuardXorFP (SelectionDAG &DAG, SDValue Val, const SDLoc &DL) const
 
virtual SDValue LowerToTLSEmulatedModel (const GlobalAddressSDNode *GA, SelectionDAG &DAG) const
 Lower TLS global address SDNode for target independent emulated TLS model.
 
virtual SDValue expandIndirectJTBranch (const SDLoc &dl, SDValue Value, SDValue Addr, int JTI, SelectionDAG &DAG) const
 Expands target specific indirect branch for the case of JumpTable expansion.
 
SDValue lowerCmpEqZeroToCtlzSrl (SDValue Op, SelectionDAG &DAG) const
 
virtual bool isXAndYEqZeroPreferableToXAndYEqY (ISD::CondCode, EVT) const
 
SDValue expandVectorNaryOpBySplitting (SDNode *Node, SelectionDAG &DAG) const
 
- Public Member Functions inherited from llvm::TargetLoweringBase
virtual void markLibCallAttributes (MachineFunction *MF, unsigned CC, ArgListTy &Args) const
 
 TargetLoweringBase (const TargetMachine &TM)
 NOTE: The TargetMachine owns TLOF.
 
 TargetLoweringBase (const TargetLoweringBase &)=delete
 
TargetLoweringBase & operator= (const TargetLoweringBase &)=delete
 
virtual ~TargetLoweringBase ()=default
 
bool isStrictFPEnabled () const
 Return true if the target supports strict float operations.
 
const TargetMachine & getTargetMachine () const
 
virtual bool useSoftFloat () const
 
virtual MVT getPointerTy (const DataLayout &DL, uint32_t AS=0) const
 Return the pointer type for the given address space, defaults to the pointer type from the data layout.
 
virtual MVT getPointerMemTy (const DataLayout &DL, uint32_t AS=0) const
 Return the in-memory pointer type for the given address space, defaults to the pointer type from the data layout.
 
MVT getFrameIndexTy (const DataLayout &DL) const
 Return the type for frame index, which is determined by the alloca address space specified through the data layout.
 
MVT getProgramPointerTy (const DataLayout &DL) const
 Return the type for code pointers, which is determined by the program address space specified through the data layout.
 
virtual MVT getFenceOperandTy (const DataLayout &DL) const
 Return the type for operands of fence.
 
virtual MVT getScalarShiftAmountTy (const DataLayout &, EVT) const
 Return the type to use for a scalar shift opcode, given the shifted amount type.
 
EVT getShiftAmountTy (EVT LHSTy, const DataLayout &DL) const
 Returns the type for the shift amount of a shift opcode.
 
virtual LLVM_READONLY LLT getPreferredShiftAmountTy (LLT ShiftValueTy) const
 Return the preferred type to use for a shift opcode, given the shifted amount type is ShiftValueTy.
 
virtual MVT getVectorIdxTy (const DataLayout &DL) const
 Returns the type to be used for the index operand of: ISD::INSERT_VECTOR_ELT, ISD::EXTRACT_VECTOR_ELT, ISD::INSERT_SUBVECTOR, and ISD::EXTRACT_SUBVECTOR.
 
virtual MVT getVPExplicitVectorLengthTy () const
 Returns the type to be used for the EVL/AVL operand of VP nodes: ISD::VP_ADD, ISD::VP_SUB, etc.
 
virtual MachineMemOperand::Flags getTargetMMOFlags (const Instruction &I) const
 This callback is used to inspect load/store instructions and add target-specific MachineMemOperand flags to them.
 
virtual MachineMemOperand::Flags getTargetMMOFlags (const MemSDNode &Node) const
 This callback is used to inspect load/store SDNode.
 
MachineMemOperand::Flags getLoadMemOperandFlags (const LoadInst &LI, const DataLayout &DL, AssumptionCache *AC=nullptr, const TargetLibraryInfo *LibInfo=nullptr) const
 
MachineMemOperand::Flags getStoreMemOperandFlags (const StoreInst &SI, const DataLayout &DL) const
 
MachineMemOperand::Flags getAtomicMemOperandFlags (const Instruction &AI, const DataLayout &DL) const
 
virtual bool isSelectSupported (SelectSupportKind) const
 
virtual bool shouldExpandPartialReductionIntrinsic (const IntrinsicInst *I) const
 Return true if the @llvm.experimental.vector.partial.reduce.* intrinsic should be expanded using generic code in SelectionDAGBuilder.
 
virtual bool shouldExpandGetActiveLaneMask (EVT VT, EVT OpVT) const
 Return true if the @llvm.get.active.lane.mask intrinsic should be expanded using generic code in SelectionDAGBuilder.
 
virtual bool shouldExpandGetVectorLength (EVT CountVT, unsigned VF, bool IsScalable) const
 
virtual bool shouldExpandCttzElements (EVT VT) const
 Return true if the @llvm.experimental.cttz.elts intrinsic should be expanded using generic code in SelectionDAGBuilder.
 
unsigned getBitWidthForCttzElements (Type *RetTy, ElementCount EC, bool ZeroIsPoison, const ConstantRange *VScaleRange) const
 Return the minimum number of bits required to hold the maximum possible number of trailing zero vector elements.
 
virtual bool shouldExpandVectorMatch (EVT VT, unsigned SearchSize) const
 Return true if the @llvm.experimental.vector.match intrinsic should be expanded for vector type VT and search size SearchSize using generic code in SelectionDAGBuilder.
 
virtual bool shouldReassociateReduction (unsigned RedOpc, EVT VT) const
 
virtual bool reduceSelectOfFPConstantLoads (EVT CmpOpVT) const
 Return true if it is profitable to convert a select of FP constants into a constant pool load whose address depends on the select condition.
 
bool hasMultipleConditionRegisters () const
 Return true if multiple condition registers are available.
 
bool hasExtractBitsInsn () const
 Return true if the target has BitExtract instructions.
 
virtual TargetLoweringBase::LegalizeTypeAction getPreferredVectorAction (MVT VT) const
 Return the preferred vector type legalization action.
 
virtual bool softPromoteHalfType () const
 
virtual bool useFPRegsForHalfType () const
 
virtual bool shouldExpandBuildVectorWithShuffles (EVT, unsigned DefinedValues) const
 
virtual bool isIntDivCheap (EVT VT, AttributeList Attr) const
 Return true if integer divide is usually cheaper than a sequence of several shifts, adds, and multiplies for this target.
 
virtual bool hasStandaloneRem (EVT VT) const
 Return true if the target can handle a standalone remainder operation.
 
virtual bool isFsqrtCheap (SDValue X, SelectionDAG &DAG) const
 Return true if SQRT(X) shouldn't be replaced with X*RSQRT(X).
 
int getRecipEstimateSqrtEnabled (EVT VT, MachineFunction &MF) const
 Return a ReciprocalEstimate enum value for a square root of the given type based on the function's attributes.
 
int getRecipEstimateDivEnabled (EVT VT, MachineFunction &MF) const
 Return a ReciprocalEstimate enum value for a division of the given type based on the function's attributes.
 
int getSqrtRefinementSteps (EVT VT, MachineFunction &MF) const
 Return the refinement step count for a square root of the given type based on the function's attributes.
 
int getDivRefinementSteps (EVT VT, MachineFunction &MF) const
 Return the refinement step count for a division of the given type based on the function's attributes.
 
bool isSlowDivBypassed () const
 Returns true if target has indicated at least one type should be bypassed.
 
const DenseMap< unsigned int, unsigned int > & getBypassSlowDivWidths () const
 Returns map of slow types for division or remainder with corresponding fast types.
 
virtual bool isVScaleKnownToBeAPowerOfTwo () const
 Return true only if vscale must be a power of two.
 
bool isJumpExpensive () const
 Return true if Flow Control is an expensive operation that should be avoided.
 
virtual CondMergingParams getJumpConditionMergingParams (Instruction::BinaryOps, const Value *, const Value *) const
 
bool isPredictableSelectExpensive () const
 Return true if selects are only cheaper than branches if the branch is unlikely to be predicted right.
 
virtual bool fallBackToDAGISel (const Instruction &Inst) const
 
virtual bool isLoadBitCastBeneficial (EVT LoadVT, EVT BitcastVT, const SelectionDAG &DAG, const MachineMemOperand &MMO) const
 Return true if the following transform is beneficial: fold (conv (load x)) -> (load (conv*)x). On architectures that don't natively support some vector loads efficiently, casting the load to a smaller vector of larger types and loading it is more efficient; however, this can be undone by optimizations in the DAG combiner.
 
virtual bool isStoreBitCastBeneficial (EVT StoreVT, EVT BitcastVT, const SelectionDAG &DAG, const MachineMemOperand &MMO) const
 Return true if the following transform is beneficial: (store (y (conv x)), y*) -> (store x, (x*))
 
virtual bool storeOfVectorConstantIsCheap (bool IsZero, EVT MemVT, unsigned NumElem, unsigned AddrSpace) const
 Return true if it is expected to be cheaper to do a store of vector constant with the given size and type for the address space than to store the individual scalar element constants.
 
virtual bool mergeStoresAfterLegalization (EVT MemVT) const
 Allow store merging for the specified type after legalization in addition to before legalization.
 
virtual bool canMergeStoresTo (unsigned AS, EVT MemVT, const MachineFunction &MF) const
 Returns true if it is reasonable to merge stores to MemVT size.
 
virtual bool isCheapToSpeculateCttz (Type *Ty) const
 Return true if it is cheap to speculate a call to intrinsic cttz.
 
virtual bool isCheapToSpeculateCtlz (Type *Ty) const
 Return true if it is cheap to speculate a call to intrinsic ctlz.
 
virtual bool isCtlzFast () const
 Return true if ctlz instruction is fast.
 
virtual bool isCtpopFast (EVT VT) const
 Return true if ctpop instruction is fast.
 
virtual unsigned getCustomCtpopCost (EVT VT, ISD::CondCode Cond) const
 Return the maximum number of "x & (x - 1)" operations that can be done instead of deferring to a custom CTPOP.
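 
 Each "x & (x - 1)" step clears the lowest set bit, so a bounded comparison of a population count needs only that many steps; a C++ sketch for ctpop(x) <= 2:
 
    #include <cstdint>
    
    bool ctpopAtMost2(uint32_t X) {
      X &= X - 1;      // clear the lowest set bit
      X &= X - 1;      // and the next one
      return X == 0;   // true iff at most two bits were set
    }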
 
virtual bool isEqualityCmpFoldedWithSignedCmp () const
 Return true if instruction generated for equality comparison is folded with instruction generated for signed comparison.
 
virtual bool preferZeroCompareBranch () const
 Return true if the heuristic to prefer icmp eq zero should be used in code gen prepare.
 
virtual bool isMultiStoresCheaperThanBitsMerge (EVT LTy, EVT HTy) const
 Return true if it is cheaper to split the store of a merged int val from a pair of smaller values into multiple stores.
 
virtual bool isMaskAndCmp0FoldingBeneficial (const Instruction &AndI) const
 Return true if the target supports combining a chain like '%and = and %val, #mask; %cmp = icmp eq %and, 0' into a single machine instruction of the form 'test %register, #mask'.
 
virtual bool areTwoSDNodeTargetMMOFlagsMergeable (const MemSDNode &NodeX, const MemSDNode &NodeY) const
 Return true if it is valid to merge the TargetMMOFlags in two SDNodes.
 
virtual bool convertSetCCLogicToBitwiseLogic (EVT VT) const
 Use bitwise logic to make pairs of compares more efficient.
 
virtual MVT hasFastEqualityCompare (unsigned NumBits) const
 Return the preferred operand type if the target has a quick way to compare integer values of the given size.
 
virtual bool hasAndNotCompare (SDValue Y) const
 Return true if the target should transform: (X & Y) == Y -> (~X & Y) == 0; (X & Y) != Y -> (~X & Y) != 0.
 
virtual bool hasAndNot (SDValue X) const
 Return true if the target has a bitwise and-not operation: X = ~A & B This can be used to simplify select or other instructions.
 
virtual bool hasBitTest (SDValue X, SDValue Y) const
 Return true if the target has a bit-test instruction: (X & (1 << Y)) ==/!= 0 This knowledge can be used to prevent breaking the pattern, or creating it if it could be recognized.
 
virtual bool shouldFoldMaskToVariableShiftPair (SDValue X) const
 There are two ways to clear extreme bits (either low or high): Mask: x & (-1 << y) (the instcombine canonical form), or Shifts: x >> y << y. Return true if the variant with 2 variable shifts is preferred.
 
virtual bool shouldFoldConstantShiftPairToMask (const SDNode *N, CombineLevel Level) const
 Return true if it is profitable to fold a pair of shifts into a mask.
 
virtual bool shouldTransformSignedTruncationCheck (EVT XVT, unsigned KeptBits) const
 Should we transform the IR-optimal check for whether the given truncation down into KeptBits would be truncating or not: (add x, (1 << (KeptBits-1))) srccond (1 << KeptBits) into its more traditional form: ((x << C) a>> C) dstcond x. Return true if we should transform.
 
virtual bool shouldProduceAndByConstByHoistingConstFromShiftsLHSOfAnd (SDValue X, ConstantSDNode *XC, ConstantSDNode *CC, SDValue Y, unsigned OldShiftOpcode, unsigned NewShiftOpcode, SelectionDAG &DAG) const
 Given the pattern (X & (C l>>/<< Y)) ==/!= 0 return true if it should be transformed into: ((X <</l>> Y) & C) ==/!= 0 WARNING: if 'X' is a constant, the fold may deadlock! FIXME: we could avoid passing XC, but we can't use isConstOrConstSplat() here because it can end up being not linked in.
 
virtual bool optimizeFMulOrFDivAsShiftAddBitcast (SDNode *N, SDValue FPConst, SDValue IntPow2) const
 
virtual unsigned preferedOpcodeForCmpEqPiecesOfOperand (EVT VT, unsigned ShiftOpc, bool MayTransformRotate, const APInt &ShiftOrRotateAmt, const std::optional< APInt > &AndMask) const
 
virtual bool preferIncOfAddToSubOfNot (EVT VT) const
 These two forms are equivalent: 'sub y, (xor x, -1)' and 'add (add x, 1), y'. The variant with two adds is IR-canonical.
 
virtual bool preferABDSToABSWithNSW (EVT VT) const
 
virtual bool preferScalarizeSplat (SDNode *N) const
 
virtual bool preferSextInRegOfTruncate (EVT TruncVT, EVT VT, EVT ExtVT) const
 
bool enableExtLdPromotion () const
 Return true if the target wants to use the optimization that turns ext(promotableInst1(...(promotableInstN(load)))) into promotedInst1(...(promotedInstN(ext(load)))).
 
virtual bool canCombineStoreAndExtract (Type *VectorTy, Value *Idx, unsigned &Cost) const
 Return true if the target can combine store(extractelement VectorTy, Idx).
 
virtual bool shallExtractConstSplatVectorElementToStore (Type *VectorTy, unsigned ElemSizeInBits, unsigned &Index) const
 Return true if the target shall perform extract vector element and store given that the vector is known to be splat of constant.
 
virtual bool shouldSplatInsEltVarIndex (EVT) const
 Return true if inserting a scalar into a variable element of an undef vector is more efficiently handled by splatting the scalar instead.
 
virtual bool enableAggressiveFMAFusion (EVT VT) const
 Return true if target always benefits from combining into FMA for a given value type.
 
virtual bool enableAggressiveFMAFusion (LLT Ty) const
 Return true if target always benefits from combining into FMA for a given value type.
 
virtual EVT getSetCCResultType (const DataLayout &DL, LLVMContext &Context, EVT VT) const
 Return the ValueType of the result of SETCC operations.
 
virtual MVT::SimpleValueType getCmpLibcallReturnType () const
 Return the ValueType for comparison libcalls.
 
BooleanContent getBooleanContents (bool isVec, bool isFloat) const
 For targets without i1 registers, this gives the nature of the high-bits of boolean values held in types wider than i1.
 
BooleanContent getBooleanContents (EVT Type) const
 
SDValue promoteTargetBoolean (SelectionDAG &DAG, SDValue Bool, EVT ValVT) const
 Promote the given target boolean to a target boolean of the given type.
 
Sched::Preference getSchedulingPreference () const
 Return target scheduling preference.
 
virtual Sched::Preference getSchedulingPreference (SDNode *) const
 Some schedulers, e.g. hybrid, can switch to different scheduling heuristics for different nodes; this returns the preference for the given node.
 
virtual const TargetRegisterClass * getRegClassFor (MVT VT, bool isDivergent=false) const
 Return the register class that should be used for the specified value type.
 
virtual bool requiresUniformRegister (MachineFunction &MF, const Value *) const
 Allows target to decide about the register class of the specific value that is live outside the defining block.
 
virtual const TargetRegisterClass * getRepRegClassFor (MVT VT) const
 Return the 'representative' register class for the specified value type.
 
virtual uint8_t getRepRegClassCostFor (MVT VT) const
 Return the cost of the 'representative' register class for the specified value type.
 
virtual ShiftLegalizationStrategy preferredShiftLegalizationStrategy (SelectionDAG &DAG, SDNode *N, unsigned ExpansionFactor) const
 
bool isTypeLegal (EVT VT) const
 Return true if the target has native support for the specified value type.
 
const ValueTypeActionImpl & getValueTypeActions () const
 
LegalizeKind getTypeConversion (LLVMContext &Context, EVT VT) const
 Return pair that represents the legalization kind (first) that needs to happen to EVT (second) in order to type-legalize it.
 
LegalizeTypeAction getTypeAction (LLVMContext &Context, EVT VT) const
 Return how we should legalize values of this type, either it is already legal (return 'Legal') or we need to promote it to a larger type (return 'Promote'), or we need to expand it into multiple registers of smaller integer type (return 'Expand').
 
LegalizeTypeAction getTypeAction (MVT VT) const
 
virtual EVT getTypeToTransformTo (LLVMContext &Context, EVT VT) const
 For types supported by the target, this is an identity function.
 
EVT getTypeToExpandTo (LLVMContext &Context, EVT VT) const
 For types supported by the target, this is an identity function.
 
unsigned getVectorTypeBreakdown (LLVMContext &Context, EVT VT, EVT &IntermediateVT, unsigned &NumIntermediates, MVT &RegisterVT) const
 Vector types are broken down into some number of legal first class types.
 
virtual unsigned getVectorTypeBreakdownForCallingConv (LLVMContext &Context, CallingConv::ID CC, EVT VT, EVT &IntermediateVT, unsigned &NumIntermediates, MVT &RegisterVT) const
 Certain targets such as MIPS require that some types such as vectors are always broken down into scalars in some contexts.
 
virtual bool getTgtMemIntrinsic (IntrinsicInfo &, const CallInst &, MachineFunction &, unsigned) const
 Given an intrinsic, checks if on the target the intrinsic will need to map to a MemIntrinsicNode (touches memory).
 
virtual bool isFPImmLegal (const APFloat &, EVT, bool ForCodeSize=false) const
 Returns true if the target can instruction select the specified FP immediate natively.
 
virtual bool isShuffleMaskLegal (ArrayRef< int >, EVT) const
 Targets can use this to indicate that they only support some VECTOR_SHUFFLE operations, those with specific masks.
 
virtual bool canOpTrap (unsigned Op, EVT VT) const
 Returns true if the operation can trap for the value type.
 
virtual bool isVectorClearMaskLegal (ArrayRef< int >, EVT) const
 Similar to isShuffleMaskLegal.
 
virtual LegalizeAction getCustomOperationAction (SDNode &Op) const
 How to legalize this custom operation?
 
LegalizeAction getOperationAction (unsigned Op, EVT VT) const
 Return how this operation should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
virtual bool isSupportedFixedPointOperation (unsigned Op, EVT VT, unsigned Scale) const
 Custom method defined by each target to indicate if an operation which may require a scale is supported natively by the target.
 
LegalizeAction getFixedPointOperationAction (unsigned Op, EVT VT, unsigned Scale) const
 Some fixed point operations may be natively supported by the target but only for specific scales.
 
LegalizeAction getStrictFPOperationAction (unsigned Op, EVT VT) const
 
bool isOperationLegalOrCustom (unsigned Op, EVT VT, bool LegalOnly=false) const
 Return true if the specified operation is legal on this target or can be made legal with custom lowering.
 
bool isOperationLegalOrPromote (unsigned Op, EVT VT, bool LegalOnly=false) const
 Return true if the specified operation is legal on this target or can be made legal using promotion.
 
bool isOperationLegalOrCustomOrPromote (unsigned Op, EVT VT, bool LegalOnly=false) const
 Return true if the specified operation is legal on this target or can be made legal with custom lowering or using promotion.
 
bool isOperationCustom (unsigned Op, EVT VT) const
 Return true if the operation uses custom lowering, regardless of whether the type is legal or not.
 
virtual bool areJTsAllowed (const Function *Fn) const
 Return true if lowering to a jump table is allowed.
 
bool rangeFitsInWord (const APInt &Low, const APInt &High, const DataLayout &DL) const
 Check whether the range [Low,High] fits in a machine word.
 
virtual bool isSuitableForJumpTable (const SwitchInst *SI, uint64_t NumCases, uint64_t Range, ProfileSummaryInfo *PSI, BlockFrequencyInfo *BFI) const
 Return true if lowering to a jump table is suitable for a set of case clusters which may contain NumCases cases and span a range of Range values.
 
virtual MVT getPreferredSwitchConditionType (LLVMContext &Context, EVT ConditionVT) const
 Returns preferred type for switch condition.
 
bool isSuitableForBitTests (unsigned NumDests, unsigned NumCmps, const APInt &Low, const APInt &High, const DataLayout &DL) const
 Return true if lowering to a bit test is suitable for a set of case clusters which contains NumDests unique destinations, Low and High as its lowest and highest case values, and expects NumCmps case value comparisons.
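 
 Bit-test lowering collapses a case cluster into a single mask membership check; a C++ sketch with hypothetical case values {1, 3, 7}:
 
    #include <cstdint>
    
    bool inCluster(uint32_t V) {
      const uint32_t Mask = (1u << 1) | (1u << 3) | (1u << 7);
      return V < 32 && ((Mask >> V) & 1);  // one shift and one AND per cluster
    }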
 
bool isOperationExpand (unsigned Op, EVT VT) const
 Return true if the specified operation is illegal on this target or unlikely to be made legal with custom lowering.
 
bool isOperationLegal (unsigned Op, EVT VT) const
 Return true if the specified operation is legal on this target.
 
LegalizeAction getLoadExtAction (unsigned ExtType, EVT ValVT, EVT MemVT) const
 Return how this load with extension should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isLoadExtLegal (unsigned ExtType, EVT ValVT, EVT MemVT) const
 Return true if the specified load with extension is legal on this target.
 
bool isLoadExtLegalOrCustom (unsigned ExtType, EVT ValVT, EVT MemVT) const
 Return true if the specified load with extension is legal or custom on this target.
 
LegalizeAction getAtomicLoadExtAction (unsigned ExtType, EVT ValVT, EVT MemVT) const
 Same as getLoadExtAction, but for atomic loads.
 
bool isAtomicLoadExtLegal (unsigned ExtType, EVT ValVT, EVT MemVT) const
 Return true if the specified atomic load with extension is legal on this target.
 
LegalizeAction getTruncStoreAction (EVT ValVT, EVT MemVT) const
 Return how this store with truncation should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isTruncStoreLegal (EVT ValVT, EVT MemVT) const
 Return true if the specified store with truncation is legal on this target.
 
bool isTruncStoreLegalOrCustom (EVT ValVT, EVT MemVT) const
 Return true if the specified store with truncation has a solution on this target.
 
virtual bool canCombineTruncStore (EVT ValVT, EVT MemVT, bool LegalOnly) const
 
LegalizeAction getIndexedLoadAction (unsigned IdxMode, MVT VT) const
 Return how the indexed load should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isIndexedLoadLegal (unsigned IdxMode, EVT VT) const
 Return true if the specified indexed load is legal on this target.
 
LegalizeAction getIndexedStoreAction (unsigned IdxMode, MVT VT) const
 Return how the indexed store should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isIndexedStoreLegal (unsigned IdxMode, EVT VT) const
 Return true if the specified indexed store is legal on this target.
 
LegalizeAction getIndexedMaskedLoadAction (unsigned IdxMode, MVT VT) const
 Return how the indexed masked load should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isIndexedMaskedLoadLegal (unsigned IdxMode, EVT VT) const
 Return true if the specified indexed masked load is legal on this target.
 
LegalizeAction getIndexedMaskedStoreAction (unsigned IdxMode, MVT VT) const
 Return how the indexed masked store should be treated: either it is legal, needs to be promoted to a larger size, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isIndexedMaskedStoreLegal (unsigned IdxMode, EVT VT) const
 Return true if the specified indexed masked store is legal on this target.
 
virtual bool shouldExtendGSIndex (EVT VT, EVT &EltTy) const
 Returns true if the index type for a masked gather/scatter requires extending.
 
virtual bool shouldRemoveExtendFromGSIndex (SDValue Extend, EVT DataVT) const
 
virtual bool isLegalScaleForGatherScatter (uint64_t Scale, uint64_t ElemSize) const
 
LegalizeAction getCondCodeAction (ISD::CondCode CC, MVT VT) const
 Return how the condition code should be treated: either it is legal, needs to be expanded to some other code sequence, or the target has a custom expander for it.
 
bool isCondCodeLegal (ISD::CondCode CC, MVT VT) const
 Return true if the specified condition code is legal for a comparison of the specified types on this target.
 
bool isCondCodeLegalOrCustom (ISD::CondCode CC, MVT VT) const
 Return true if the specified condition code is legal or custom for a comparison of the specified types on this target.
 
MVT getTypeToPromoteTo (unsigned Op, MVT VT) const
 If the action for this operation is to promote, this method returns the ValueType to promote to.
 
virtual EVT getAsmOperandValueType (const DataLayout &DL, Type *Ty, bool AllowUnknown=false) const
 
EVT getValueType (const DataLayout &DL, Type *Ty, bool AllowUnknown=false) const
 Return the EVT corresponding to this LLVM type.
 
EVT getMemValueType (const DataLayout &DL, Type *Ty, bool AllowUnknown=false) const
 
MVT getSimpleValueType (const DataLayout &DL, Type *Ty, bool AllowUnknown=false) const
 Return the MVT corresponding to this LLVM type. See getValueType.
 
virtual Align getByValTypeAlignment (Type *Ty, const DataLayout &DL) const
 Returns the desired alignment for ByVal or InAlloca aggregate function arguments in the caller parameter area.
 
MVT getRegisterType (MVT VT) const
 Return the type of registers that this ValueType will eventually require.
 
MVT getRegisterType (LLVMContext &Context, EVT VT) const
 Return the type of registers that this ValueType will eventually require.
 
virtual unsigned getNumRegisters (LLVMContext &Context, EVT VT, std::optional< MVT > RegisterVT=std::nullopt) const
 Return the number of registers that this ValueType will eventually require.
 
virtual MVT getRegisterTypeForCallingConv (LLVMContext &Context, CallingConv::ID CC, EVT VT) const
 Certain combinations of ABIs, Targets and features require that types are legal for some operations and not for other operations.
 
virtual unsigned getNumRegistersForCallingConv (LLVMContext &Context, CallingConv::ID CC, EVT VT) const
 Certain targets require unusual breakdowns of certain types.
 
virtual Align getABIAlignmentForCallingConv (Type *ArgTy, const DataLayout &DL) const
 Certain targets have context sensitive alignment requirements, where one type has the alignment requirement of another type.
 
virtual bool ShouldShrinkFPConstant (EVT) const
 If true, then instruction selection should seek to shrink the FP constant of the specified type to a smaller type in order to save space and / or reduce runtime.
 
virtual bool shouldReduceLoadWidth (SDNode *Load, ISD::LoadExtType ExtTy, EVT NewVT) const
 Return true if it is profitable to reduce a load to a smaller type.
 
virtual bool shouldRemoveRedundantExtend (SDValue Op) const
 Return true (the default) if it is profitable to remove a sext_inreg(x) where the sext is redundant, and use x directly.
 
bool isPaddedAtMostSignificantBitsWhenStored (EVT VT) const
 Indicates if any padding is guaranteed to go at the most significant bits when storing the type to memory and the type size isn't equal to the store size.
 
bool hasBigEndianPartOrdering (EVT VT, const DataLayout &DL) const
 When splitting a value of the specified type into parts, does the Lo or Hi part come first? This usually follows the endianness, except for ppcf128, where the Hi part always comes first.
 
bool hasTargetDAGCombine (ISD::NodeType NT) const
 If true, the target has custom DAG combine transformations that it can perform for the specified node.
 
unsigned getGatherAllAliasesMaxDepth () const
 
virtual unsigned getVaListSizeInBits (const DataLayout &DL) const
 Returns the size of the platform's va_list object.
 
unsigned getMaxStoresPerMemset (bool OptSize) const
 Get maximum # of store operations permitted for llvm.memset.
 
unsigned getMaxStoresPerMemcpy (bool OptSize) const
 Get maximum # of store operations permitted for llvm.memcpy.
 
virtual unsigned getMaxGluedStoresPerMemcpy () const
 Get maximum # of store operations to be glued together.
 
unsigned getMaxExpandSizeMemcmp (bool OptSize) const
 Get maximum # of load operations permitted for memcmp.
 
unsigned getMaxStoresPerMemmove (bool OptSize) const
 Get maximum # of store operations permitted for llvm.memmove.
 
virtual bool allowsMisalignedMemoryAccesses (EVT, unsigned AddrSpace=0, Align Alignment=Align(1), MachineMemOperand::Flags Flags=MachineMemOperand::MONone, unsigned *=nullptr) const
 Determine if the target supports unaligned memory accesses.
 
virtual bool allowsMisalignedMemoryAccesses (LLT, unsigned AddrSpace=0, Align Alignment=Align(1), MachineMemOperand::Flags Flags=MachineMemOperand::MONone, unsigned *=nullptr) const
 LLT handling variant.
 
bool allowsMemoryAccessForAlignment (LLVMContext &Context, const DataLayout &DL, EVT VT, unsigned AddrSpace=0, Align Alignment=Align(1), MachineMemOperand::Flags Flags=MachineMemOperand::MONone, unsigned *Fast=nullptr) const
 This function returns true if the memory access is aligned or if the target allows this specific unaligned memory access.
 
bool allowsMemoryAccessForAlignment (LLVMContext &Context, const DataLayout &DL, EVT VT, const MachineMemOperand &MMO, unsigned *Fast=nullptr) const
 Return true if the memory access of this type is aligned or if the target allows this specific unaligned access for the given MachineMemOperand.
 
virtual bool allowsMemoryAccess (LLVMContext &Context, const DataLayout &DL, EVT VT, unsigned AddrSpace=0, Align Alignment=Align(1), MachineMemOperand::Flags Flags=MachineMemOperand::MONone, unsigned *Fast=nullptr) const
 Return true if the target supports a memory access of this type for the given address space and alignment.
 
bool allowsMemoryAccess (LLVMContext &Context, const DataLayout &DL, EVT VT, const MachineMemOperand &MMO, unsigned *Fast=nullptr) const
 Return true if the target supports a memory access of this type for the given MachineMemOperand.
 
bool allowsMemoryAccess (LLVMContext &Context, const DataLayout &DL, LLT Ty, const MachineMemOperand &MMO, unsigned *Fast=nullptr) const
 LLT handling variant.
 
virtual EVT getOptimalMemOpType (const MemOp &Op, const AttributeList &) const
 Returns the target specific optimal type for load and store operations as a result of memset, memcpy, and memmove lowering.
 
virtual LLT getOptimalMemOpLLT (const MemOp &Op, const AttributeList &) const
 LLT returning variant.
 
virtual bool isSafeMemOpType (MVT) const
 Returns true if it's safe to use load / store of the specified type to expand memcpy / memset inline.
 
virtual unsigned getMinimumJumpTableEntries () const
 Return lower limit for number of blocks in a jump table.
 
unsigned getMinimumJumpTableDensity (bool OptForSize) const
 Return lower limit of the density in a jump table.
 
unsigned getMaximumJumpTableSize () const
 Return upper limit for number of entries in a jump table.
 
virtual bool isJumpTableRelative () const
 
Register getStackPointerRegisterToSaveRestore () const
 If a physical register, this specifies the register that llvm.stacksave/llvm.stackrestore should save and restore.
 
virtual Register getExceptionPointerRegister (const Constant *PersonalityFn) const
 If a physical register, this returns the register that receives the exception address on entry to an EH pad.
 
virtual Register getExceptionSelectorRegister (const Constant *PersonalityFn) const
 If a physical register, this returns the register that receives the exception typeid on entry to a landing pad.
 
virtual bool needsFixedCatchObjects () const
 
Align getMinStackArgumentAlignment () const
 Return the minimum stack alignment of an argument.
 
Align getMinFunctionAlignment () const
 Return the minimum function alignment.
 
Align getPrefFunctionAlignment () const
 Return the preferred function alignment.
 
virtual Align getPrefLoopAlignment (MachineLoop *ML=nullptr) const
 Return the preferred loop alignment.
 
virtual unsigned getMaxPermittedBytesForAlignment (MachineBasicBlock *MBB) const
 Return the maximum amount of bytes allowed to be emitted when padding for alignment.
 
virtual bool alignLoopsWithOptSize () const
 Should loops be aligned even when the function is marked OptSize (but not MinSize).
 
virtual Value * getIRStackGuard (IRBuilderBase &IRB) const
 If the target has a standard location for the stack protector guard, returns the address of that location.
 
virtual void insertSSPDeclarations (Module &M) const
 Inserts necessary declarations for SSP (stack protection) purpose.
 
virtual Value * getSDagStackGuard (const Module &M) const
 Return the variable that's previously inserted by insertSSPDeclarations, if any, otherwise return nullptr.
 
virtual bool useStackGuardXorFP () const
 If this function returns true, stack protection checks should XOR the frame pointer (or whichever pointer is used to address locals) into the stack guard value before checking it.
 
virtual Function * getSSPStackGuardCheck (const Module &M) const
 If the target has a standard stack protection check function that performs validation and error handling, returns the function.
 
virtual Value * getSafeStackPointerLocation (IRBuilderBase &IRB) const
 Returns the target-specific address of the unsafe stack pointer.
 
virtual bool hasStackProbeSymbol (const MachineFunction &MF) const
 Returns true if stack probing is done through a called symbol for this function.
 
virtual bool hasInlineStackProbe (const MachineFunction &MF) const
 
virtual StringRef getStackProbeSymbolName (const MachineFunction &MF) const
 
virtual bool isFreeAddrSpaceCast (unsigned SrcAS, unsigned DestAS) const
 Returns true if a cast from SrcAS to DestAS is "cheap", such that e.g. we are happy to sink it into basic blocks.
 
virtual bool shouldAlignPointerArgs (CallInst *, unsigned &, Align &) const
 Return true if the pointer arguments to CI should be aligned by aligning the object whose address is being passed.
 
virtual void emitAtomicCmpXchgNoStoreLLBalance (IRBuilderBase &Builder) const
 
virtual bool shouldSignExtendTypeInLibCall (Type *Ty, bool IsSigned) const
 Returns true if arguments should be sign-extended in lib calls.
 
virtual bool shouldExtendTypeInLibCall (EVT Type) const
 Returns true if arguments should be extended in lib calls.
 
virtual AtomicExpansionKind shouldExpandAtomicLoadInIR (LoadInst *LI) const
 Returns how the given (atomic) load should be expanded by the IR-level AtomicExpand pass.
 
virtual AtomicExpansionKind shouldCastAtomicLoadInIR (LoadInst *LI) const
 Returns how the given (atomic) load should be cast by the IR-level AtomicExpand pass.
 
virtual AtomicExpansionKind shouldExpandAtomicStoreInIR (StoreInst *SI) const
 Returns how the given (atomic) store should be expanded by the IR-level AtomicExpand pass into.
 
virtual AtomicExpansionKind shouldCastAtomicStoreInIR (StoreInst *SI) const
 Returns how the given (atomic) store should be cast by the IR-level AtomicExpand pass into.
 
virtual AtomicExpansionKind shouldExpandAtomicCmpXchgInIR (AtomicCmpXchgInst *AI) const
 Returns how the given atomic cmpxchg should be expanded by the IR-level AtomicExpand pass.
 
virtual AtomicExpansionKind shouldExpandAtomicRMWInIR (AtomicRMWInst *RMW) const
 Returns how the IR-level AtomicExpand pass should expand the given AtomicRMW, if at all.
 
virtual AtomicExpansionKind shouldCastAtomicRMWIInIR (AtomicRMWInst *RMWI) const
 Returns how the given atomicrmw should be cast by the IR-level AtomicExpand pass.
 
virtual LoadInst * lowerIdempotentRMWIntoFencedLoad (AtomicRMWInst *RMWI) const
 On some platforms, an AtomicRMW that never actually modifies the value (such as fetch_add of 0) can be turned into a fence followed by an atomic load.
 
virtual ISD::NodeType getExtendForAtomicOps () const
 Returns how the platform's atomic operations are extended (ZERO_EXTEND, SIGN_EXTEND, or ANY_EXTEND).
 
virtual ISD::NodeType getExtendForAtomicCmpSwapArg () const
 Returns how the platform's atomic compare and swap expects its comparison value to be extended (ZERO_EXTEND, SIGN_EXTEND, or ANY_EXTEND).
 
virtual bool shouldNormalizeToSelectSequence (LLVMContext &Context, EVT VT) const
 Returns true if we should normalize select(N0&N1, X, Y) => select(N0, select(N1, X, Y), Y) and select(N0|N1, X, Y) => select(N0, X, select(N1, X, Y)) if it is likely that it saves us from materializing N0 and N1 in an integer register.
 
virtual bool isProfitableToCombineMinNumMaxNum (EVT VT) const
 
virtual bool convertSelectOfConstantsToMath (EVT VT) const
 Return true if a select of constants (select Cond, C1, C2) should be transformed into simple math ops with the condition value.
 
virtual bool decomposeMulByConstant (LLVMContext &Context, EVT VT, SDValue C) const
 Return true if it is profitable to transform an integer multiplication-by-constant into simpler operations like shifts and adds.
 
virtual bool isMulAddWithConstProfitable (SDValue AddNode, SDValue ConstNode) const
 Return true if it may be profitable to transform (mul (add x, c1), c2) -> (add (mul x, c2), c1*c2).
 
virtual bool shouldUseStrictFP_TO_INT (EVT FpVT, EVT IntVT, bool IsSigned) const
 Return true if it is more correct/profitable to use strict FP_TO_INT conversion operations - canonicalizing the FP source value instead of converting all cases and then selecting based on value.
 
bool isBeneficialToExpandPowI (int64_t Exponent, bool OptForSize) const
 Return true if it is beneficial to expand an @llvm.powi.* intrinsic.
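 
 When expansion is chosen, a constant-exponent powi becomes a square-and-multiply chain using O(log n) multiplies; a C++ sketch:
 
    #include <cstdint>
    
    double powi(double X, int64_t N) {
      uint64_t E = N < 0 ? 0 - (uint64_t)N : (uint64_t)N;  // |N|, wrap-safe
      double R = 1.0;
      while (E) {
        if (E & 1) R *= X;  // fold in the current square when the bit is set
        X *= X;
        E >>= 1;
      }
      return N < 0 ? 1.0 / R : R;
    }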
 
virtual bool getAddrModeArguments (IntrinsicInst *, SmallVectorImpl< Value * > &, Type *&) const
 CodeGenPrepare sinks address calculations into the same BB as Load/Store instructions reading the address.
 
virtual bool isLegalAddressingMode (const DataLayout &DL, const AddrMode &AM, Type *Ty, unsigned AddrSpace, Instruction *I=nullptr) const
 Return true if the addressing mode represented by AM is legal for this target, for a load/store of the specified type.
 
virtual bool addressingModeSupportsTLS (const GlobalValue &) const
 Returns true if the target's addressing mode can target thread local storage (TLS).
 
virtual int64_t getPreferredLargeGEPBaseOffset (int64_t MinOffset, int64_t MaxOffset) const
 Return the preferred common base offset.
 
virtual bool isLegalICmpImmediate (int64_t) const
 Return true if the specified immediate is legal icmp immediate, that is the target has icmp instructions which can compare a register against the immediate without having to materialize the immediate into a register.
 
virtual bool isLegalAddImmediate (int64_t) const
 Return true if the specified immediate is legal add immediate, that is the target has add instructions which can add a register with the immediate without having to materialize the immediate into a register.
 
virtual bool isLegalAddScalableImmediate (int64_t) const
 Return true if adding the specified scalable immediate is legal, that is the target has add instructions which can add a register with the immediate (multiplied by vscale) without having to materialize the immediate into a register.
 
virtual bool isLegalStoreImmediate (int64_t Value) const
 Return true if the specified immediate is legal for the value input of a store instruction.
 
virtual Type * shouldConvertSplatType (ShuffleVectorInst *SVI) const
 Given a shuffle vector SVI representing a vector splat, return a new scalar type of size equal to SVI's scalar type if the new type is more profitable.
 
virtual bool shouldConvertPhiType (Type *From, Type *To) const
 Given a set of interconnected phis of type 'From' that are loaded/stored or bitcast to type 'To', return true if the set should be converted to 'To'.
 
virtual bool isCommutativeBinOp (unsigned Opcode) const
 Returns true if the opcode is a commutative binary operation.
 
virtual bool isBinOp (unsigned Opcode) const
 Return true if the node is a math/logic binary operator.
 
virtual bool isTruncateFree (Type *FromTy, Type *ToTy) const
 Return true if it's free to truncate a value of type FromTy to type ToTy.
 
virtual bool allowTruncateForTailCall (Type *FromTy, Type *ToTy) const
 Return true if a truncation from FromTy to ToTy is permitted when deciding whether a call is in tail position.
 
virtual bool isTruncateFree (EVT FromVT, EVT ToVT) const
 
virtual bool isTruncateFree (LLT FromTy, LLT ToTy, LLVMContext &Ctx) const
 
virtual bool isTruncateFree (SDValue Val, EVT VT2) const
 Return true if truncating the specific node Val to type VT2 is free.
 
virtual bool isProfitableToHoist (Instruction *I) const
 
bool isExtFree (const Instruction *I) const
 Return true if the extension represented by I is free.
 
bool isExtLoad (const LoadInst *Load, const Instruction *Ext, const DataLayout &DL) const
 Return true if Load and Ext can form an ExtLoad.
 
virtual bool isZExtFree (Type *FromTy, Type *ToTy) const
 Return true if any actual instruction that defines a value of type FromTy implicitly zero-extends the value to ToTy in the result register.
 
virtual bool isZExtFree (EVT FromTy, EVT ToTy) const
 
virtual bool isZExtFree (LLT FromTy, LLT ToTy, LLVMContext &Ctx) const
 
virtual bool isZExtFree (SDValue Val, EVT VT2) const
 Return true if zero-extending the specific node Val to type VT2 is free (either because it's implicitly zero-extended such as ARM ldrb / ldrh or because it's folded such as X86 zero-extending loads).
 
virtual bool isSExtCheaperThanZExt (EVT FromTy, EVT ToTy) const
 Return true if sign-extension from FromTy to ToTy is cheaper than zero-extension.
 
virtual bool signExtendConstant (const ConstantInt *C) const
 Return true if this constant should be sign extended when promoting to a larger type.
 
virtual bool optimizeExtendOrTruncateConversion (Instruction *I, Loop *L, const TargetTransformInfo &TTI) const
 Try to optimize extending or truncating conversion instructions (like zext, trunc, fptoui, uitofp) for the target.
 
virtual bool hasPairedLoad (EVT, Align &) const
 Return true if the target supports paired loads and can combine two values of type LoadedType loaded next to each other in memory.
 
virtual bool hasVectorBlend () const
 Return true if the target has a vector blend instruction.
 
virtual unsigned getMaxSupportedInterleaveFactor () const
 Get the maximum supported factor for interleaved memory accesses.
 
virtual bool lowerInterleavedLoad (LoadInst *LI, ArrayRef< ShuffleVectorInst * > Shuffles, ArrayRef< unsigned > Indices, unsigned Factor) const
 Lower an interleaved load to target specific intrinsics.
 
virtual bool lowerInterleavedStore (StoreInst *SI, ShuffleVectorInst *SVI, unsigned Factor) const
 Lower an interleaved store to target specific intrinsics.
 
virtual bool lowerDeinterleaveIntrinsicToLoad (IntrinsicInst *DI, LoadInst *LI, SmallVectorImpl< Instruction * > &DeadInsts) const
 Lower a deinterleave intrinsic to a target specific load intrinsic.
 
virtual bool lowerInterleaveIntrinsicToStore (IntrinsicInst *II, StoreInst *SI, SmallVectorImpl< Instruction * > &DeadInsts) const
 Lower an interleave intrinsic to a target specific store intrinsic.
 
virtual bool isFPExtFree (EVT DestVT, EVT SrcVT) const
 Return true if an fpext operation is free (for instance, because single-precision floating-point numbers are implicitly extended to double-precision).
 
virtual bool isFPExtFoldable (const MachineInstr &MI, unsigned Opcode, LLT DestTy, LLT SrcTy) const
 Return true if an fpext operation input to an Opcode operation is free (for instance, because half-precision floating-point numbers are implicitly extended to float-precision) for an FMA instruction.
 
virtual bool isFPExtFoldable (const SelectionDAG &DAG, unsigned Opcode, EVT DestVT, EVT SrcVT) const
 Return true if an fpext operation input to an Opcode operation is free (for instance, because half-precision floating-point numbers are implicitly extended to float-precision) for an FMA instruction.
 
virtual bool isVectorLoadExtDesirable (SDValue ExtVal) const
 Return true if folding a vector load into ExtVal (a sign, zero, or any extend node) is profitable.
 
virtual bool isFNegFree (EVT VT) const
 Return true if an fneg operation is free to the point where it is never worthwhile to replace it with a bitwise operation.
 
virtual bool isFAbsFree (EVT VT) const
 Return true if an fabs operation is free to the point where it is never worthwhile to replace it with a bitwise operation.
 
virtual bool isFMAFasterThanFMulAndFAdd (const MachineFunction &MF, EVT) const
 Return true if an FMA operation is faster than a pair of fmul and fadd instructions.
 
virtual bool isFMAFasterThanFMulAndFAdd (const MachineFunction &MF, LLT) const
 Return true if an FMA operation is faster than a pair of fmul and fadd instructions.
 
virtual bool isFMAFasterThanFMulAndFAdd (const Function &F, Type *) const
 IR version.
 
virtual bool isFMADLegal (const MachineInstr &MI, LLT Ty) const
 Returns true if MI can be combined with another instruction to form TargetOpcode::G_FMAD.
 
virtual bool isFMADLegal (const SelectionDAG &DAG, const SDNode *N) const
 Returns true if N can be combined with another node to form an ISD::FMAD.
 
virtual bool generateFMAsInMachineCombiner (EVT VT, CodeGenOptLevel OptLevel) const
 
virtual bool isNarrowingProfitable (SDNode *N, EVT SrcVT, EVT DestVT) const
 Return true if it's profitable to narrow operations of type SrcVT to DestVT.
 
virtual bool shouldFoldSelectWithIdentityConstant (unsigned BinOpcode, EVT VT) const
 Return true if pulling a binary operation into a select with an identity constant is profitable.
 
virtual bool shouldConvertConstantLoadToIntImm (const APInt &Imm, Type *Ty) const
 Return true if it is beneficial to convert a load of a constant to just the constant itself.
 
virtual bool isExtractSubvectorCheap (EVT ResVT, EVT SrcVT, unsigned Index) const
 Return true if EXTRACT_SUBVECTOR is cheap for extracting this result type from this source type with this index.
 
virtual bool shouldScalarizeBinop (SDValue VecOp) const
 Try to convert an extract element of a vector binary operation into an extract element followed by a scalar operation.
 
virtual bool isExtractVecEltCheap (EVT VT, unsigned Index) const
 Return true if extraction of a scalar element from the given vector type at the given index is cheap.
 
virtual bool shouldFormOverflowOp (unsigned Opcode, EVT VT, bool MathUsed) const
 Try to convert math with an overflow comparison into the corresponding DAG node operation.
 
virtual bool aggressivelyPreferBuildVectorSources (EVT VecVT) const
 
virtual bool shouldConsiderGEPOffsetSplit () const
 
virtual bool shouldAvoidTransformToShift (EVT VT, unsigned Amount) const
 Return true if creating a shift of the type by the given amount is not profitable.
 
virtual bool shouldFoldSelectWithSingleBitTest (EVT VT, const APInt &AndMask) const
 
virtual bool shouldKeepZExtForFP16Conv () const
 Does this target require the clearing of high-order bits in a register passed to the fp16 to fp conversion library function.
 
virtual bool shouldConvertFpToSat (unsigned Op, EVT FPVT, EVT VT) const
 Should we generate fp_to_si_sat and fp_to_ui_sat from type FPVT to type VT from min(max(fptoi)) saturation patterns.
 
virtual bool shouldExpandCmpUsingSelects (EVT VT) const
 Should we expand [US]CMP nodes using two selects and two compares, or by doing arithmetic on boolean types.
 
virtual bool isComplexDeinterleavingSupported () const
 Does this target support complex deinterleaving.
 
virtual bool isComplexDeinterleavingOperationSupported (ComplexDeinterleavingOperation Operation, Type *Ty) const
 Does this target support complex deinterleaving with the given operation and type.
 
virtual Value * createComplexDeinterleavingIR (IRBuilderBase &B, ComplexDeinterleavingOperation OperationType, ComplexDeinterleavingRotation Rotation, Value *InputA, Value *InputB, Value *Accumulator=nullptr) const
 Create the IR node for the given complex deinterleaving operation.
 
void setLibcallName (RTLIB::Libcall Call, const char *Name)
 Rename the default libcall routine name for the specified libcall.
 
void setLibcallName (ArrayRef< RTLIB::Libcall > Calls, const char *Name)
 
const char * getLibcallName (RTLIB::Libcall Call) const
 Get the libcall routine name for the specified libcall.
 
void setCmpLibcallCC (RTLIB::Libcall Call, ISD::CondCode CC)
 Override the default CondCode to be used to test the result of the comparison libcall against zero.
 
ISD::CondCode getCmpLibcallCC (RTLIB::Libcall Call) const
 Get the CondCode that's to be used to test the result of the comparison libcall against zero.
 
void setLibcallCallingConv (RTLIB::Libcall Call, CallingConv::ID CC)
 Set the CallingConv that should be used for the specified libcall.
 
CallingConv::ID getLibcallCallingConv (RTLIB::Libcall Call) const
 Get the CallingConv that should be used for the specified libcall.
 
virtual void finalizeLowering (MachineFunction &MF) const
 Execute target specific actions to finalize target lowering.
 
virtual bool shouldLocalize (const MachineInstr &MI, const TargetTransformInfo *TTI) const
 Check whether or not MI needs to be moved close to its uses.
 
int InstructionOpcodeToISD (unsigned Opcode) const
 Get the ISD node that corresponds to the Instruction class opcode.
 
unsigned getMaxAtomicSizeInBitsSupported () const
 Returns the maximum atomic operation size (in bits) supported by the backend.
 
unsigned getMaxDivRemBitWidthSupported () const
 Returns the size in bits of the maximum div/rem the backend supports.
 
unsigned getMaxLargeFPConvertBitWidthSupported () const
 Returns the size in bits of the largest fp convert the backend supports.
 
unsigned getMinCmpXchgSizeInBits () const
 Returns the size of the smallest cmpxchg or ll/sc instruction the backend supports.
 
bool supportsUnalignedAtomics () const
 Whether the target supports unaligned atomic operations.
 
virtual bool shouldInsertFencesForAtomic (const Instruction *I) const
 Whether AtomicExpandPass should automatically insert fences and reduce ordering for this atomic.
 
virtual bool shouldInsertTrailingFenceForAtomicStore (const Instruction *I) const
 Whether AtomicExpandPass should automatically insert a trailing fence without reducing the ordering for this atomic.
 
virtual Value * emitLoadLinked (IRBuilderBase &Builder, Type *ValueTy, Value *Addr, AtomicOrdering Ord) const
 Perform a load-linked operation on Addr, returning a "Value *" with the corresponding pointee type.
 
virtual Value * emitStoreConditional (IRBuilderBase &Builder, Value *Val, Value *Addr, AtomicOrdering Ord) const
 Perform a store-conditional operation to Addr.
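 
 The AtomicExpand pass wraps the values these two hooks produce in a retry loop; a conceptual C++ sketch, where loadLinked and storeConditional are hypothetical stand-ins for the target intrinsics the hooks emit:
 
    #include <cstdint>
    
    uint32_t loadLinked(volatile uint32_t *Addr);                // hypothetical
    bool storeConditional(volatile uint32_t *Addr, uint32_t V);  // hypothetical
    
    uint32_t atomicAddViaLLSC(volatile uint32_t *Addr, uint32_t Incr) {
      uint32_t Old;
      do {
        Old = loadLinked(Addr);                       // emitLoadLinked
      } while (!storeConditional(Addr, Old + Incr));  // emitStoreConditional
      return Old;  // atomicrmw yields the old value
    }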
 
virtual Value * emitMaskedAtomicRMWIntrinsic (IRBuilderBase &Builder, AtomicRMWInst *AI, Value *AlignedAddr, Value *Incr, Value *Mask, Value *ShiftAmt, AtomicOrdering Ord) const
 Perform a masked atomicrmw using a target-specific intrinsic.
 
virtual void emitBitTestAtomicRMWIntrinsic (AtomicRMWInst *AI) const
 Perform a bit test atomicrmw using a target-specific intrinsic.
 
virtual void emitCmpArithAtomicRMWIntrinsic (AtomicRMWInst *AI) const
 Perform an atomicrmw whose result is only used by comparison, using a target-specific intrinsic.
 
virtual Value * emitMaskedAtomicCmpXchgIntrinsic (IRBuilderBase &Builder, AtomicCmpXchgInst *CI, Value *AlignedAddr, Value *CmpVal, Value *NewVal, Value *Mask, AtomicOrdering Ord) const
 Perform a masked cmpxchg using a target-specific intrinsic.
 
virtual MachineInstr * EmitKCFICheck (MachineBasicBlock &MBB, MachineBasicBlock::instr_iterator &MBBI, const TargetInstrInfo *TII) const
 
virtual Instruction * emitLeadingFence (IRBuilderBase &Builder, Instruction *Inst, AtomicOrdering Ord) const
 Inserts in the IR a target-specific intrinsic specifying a fence.
 
virtual Instruction * emitTrailingFence (IRBuilderBase &Builder, Instruction *Inst, AtomicOrdering Ord) const
 

Static Public Member Functions

static bool shouldExpandVectorDynExt (unsigned EltSize, unsigned NumElem, bool IsDivergentIdx, const GCNSubtarget *Subtarget)
 Check if EXTRACT_VECTOR_ELT/INSERT_VECTOR_ELT (<n x e>, var-idx) should be expanded into a set of cmp/select instructions.
 
static bool isNonGlobalAddrSpace (unsigned AS)
 
- Static Public Member Functions inherited from llvm::AMDGPUTargetLowering
static unsigned numBitsUnsigned (SDValue Op, SelectionDAG &DAG)
 
static unsigned numBitsSigned (SDValue Op, SelectionDAG &DAG)
 
static SDValue stripBitcast (SDValue Val)
 
static bool shouldFoldFNegIntoSrc (SDNode *FNeg, SDValue FNegSrc)
 
static bool allUsesHaveSourceMods (const SDNode *N, unsigned CostThreshold=4)
 
static CCAssignFn * CCAssignFnForCall (CallingConv::ID CC, bool IsVarArg)
 Selects the correct CCAssignFn for a given CallingConvention value.
 
static CCAssignFn * CCAssignFnForReturn (CallingConv::ID CC, bool IsVarArg)
 
- Static Public Member Functions inherited from llvm::TargetLoweringBase
static ISD::NodeType getExtendForContent (BooleanContent Content)
 

Additional Inherited Members

- Public Types inherited from llvm::AMDGPUTargetLowering
enum  ImplicitParameter { FIRST_IMPLICIT , PRIVATE_BASE , SHARED_BASE , QUEUE_PTR }
 
- Public Types inherited from llvm::TargetLowering
enum  ConstraintType {
  C_Register , C_RegisterClass , C_Memory , C_Address ,
  C_Immediate , C_Other , C_Unknown
}
 
enum  ConstraintWeight {
  CW_Invalid = -1 , CW_Okay = 0 , CW_Good = 1 , CW_Better = 2 ,
  CW_Best = 3 , CW_SpecificReg = CW_Okay , CW_Register = CW_Good , CW_Memory = CW_Better ,
  CW_Constant = CW_Best , CW_Default = CW_Okay
}
 
using AsmOperandInfoVector = std::vector< AsmOperandInfo >
 
using ConstraintPair = std::pair< StringRef, TargetLowering::ConstraintType >
 
using ConstraintGroup = SmallVector< ConstraintPair >
 
- Public Types inherited from llvm::TargetLoweringBase
enum  LegalizeAction : uint8_t {
  Legal , Promote , Expand , LibCall ,
  Custom
}
 This enum indicates whether operations are valid for a target, and if not, what action should be used to make them valid.
 
enum  LegalizeTypeAction : uint8_t {
  TypeLegal , TypePromoteInteger , TypeExpandInteger , TypeSoftenFloat ,
  TypeExpandFloat , TypeScalarizeVector , TypeSplitVector , TypeWidenVector ,
  TypePromoteFloat , TypeSoftPromoteHalf , TypeScalarizeScalableVector
}
 This enum indicates whether types are legal for a target, and if not, what action should be used to make them valid.
 
enum  BooleanContent { UndefinedBooleanContent , ZeroOrOneBooleanContent , ZeroOrNegativeOneBooleanContent }
 Enum that describes how the target represents true/false values.
 
enum  SelectSupportKind { ScalarValSelect , ScalarCondVectorVal , VectorMaskSelect }
 Enum that describes what type of support for selects the target has.
 
enum class  AtomicExpansionKind {
  None , CastToInteger , LLSC , LLOnly ,
  CmpXChg , MaskedIntrinsic , BitTestIntrinsic , CmpArithIntrinsic ,
  Expand , NotAtomic
}
 Enum that specifies what an atomic load/AtomicRMWInst is expanded to, if at all.
 
enum class  MulExpansionKind { Always , OnlyLegalOrCustom }
 Enum that specifies when a multiplication should be expanded.
 
enum class  NegatibleCost { Cheaper = 0 , Neutral = 1 , Expensive = 2 }
 Enum that specifies when a float negation is beneficial.
 
enum  AndOrSETCCFoldKind : uint8_t { None = 0 , AddAnd = 1 , NotAnd = 2 , ABS = 4 }
 Enum of different potentially desirable ways to fold (and/or (setcc ...), (setcc ...)).
 
enum  ReciprocalEstimate : int { Unspecified = -1 , Disabled = 0 , Enabled = 1 }
 Reciprocal estimate status values used by the functions below.
 
enum class  ShiftLegalizationStrategy { ExpandToParts , ExpandThroughStack , LowerToLibcall }
 Return the preferred strategy to legalize this SHIFT instruction, with ExpansionFactor being the recursion depth - how many expansions are needed.
 
using LegalizeKind = std::pair< LegalizeTypeAction, EVT >
 LegalizeKind holds the legalization kind that needs to happen to EVT in order to type-legalize it.
 
using ArgListTy = std::vector< ArgListEntry >
 
- Protected Member Functions inherited from llvm::AMDGPUTargetLowering
SDValue LowerEXTRACT_SUBVECTOR (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerCONCAT_VECTORS (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerFREM (SDValue Op, SelectionDAG &DAG) const
 Lower FREM as x - trunc(x / y) * y.
 
SDValue LowerFCEIL (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerFTRUNC (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerFRINT (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerFNEARBYINT (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerFROUNDEVEN (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerFROUND (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerFFLOOR (SDValue Op, SelectionDAG &DAG) const
 
SDValue getIsLtSmallestNormal (SelectionDAG &DAG, SDValue Op, SDNodeFlags Flags) const
 
SDValue getIsFinite (SelectionDAG &DAG, SDValue Op, SDNodeFlags Flags) const
 
std::pair< SDValue, SDValue > getScaledLogInput (SelectionDAG &DAG, const SDLoc SL, SDValue Op, SDNodeFlags Flags) const
 If denormal handling is required, return the scaled input to FLOG2, and the check for denormal range.
 
SDValue LowerFLOG2 (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerFLOGCommon (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerFLOG10 (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerFLOGUnsafe (SDValue Op, const SDLoc &SL, SelectionDAG &DAG, bool IsLog10, SDNodeFlags Flags) const
 
SDValue lowerFEXP2 (SDValue Op, SelectionDAG &DAG) const
 
SDValue lowerFEXPUnsafe (SDValue Op, const SDLoc &SL, SelectionDAG &DAG, SDNodeFlags Flags) const
 
SDValue lowerFEXP10Unsafe (SDValue Op, const SDLoc &SL, SelectionDAG &DAG, SDNodeFlags Flags) const
 Emit approx-funcs appropriate lowering for exp10.
 
SDValue lowerFEXP (SDValue Op, SelectionDAG &DAG) const
 
SDValue lowerCTLZResults (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerCTLZ_CTTZ (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerINT_TO_FP32 (SDValue Op, SelectionDAG &DAG, bool Signed) const
 
SDValue LowerINT_TO_FP64 (SDValue Op, SelectionDAG &DAG, bool Signed) const
 
SDValue LowerUINT_TO_FP (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerSINT_TO_FP (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerFP_TO_INT64 (SDValue Op, SelectionDAG &DAG, bool Signed) const
 
SDValue LowerFP_TO_FP16 (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerFP_TO_INT (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerSIGN_EXTEND_INREG (SDValue Op, SelectionDAG &DAG) const
 
bool shouldCombineMemoryType (EVT VT) const
 
SDValue performLoadCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performStoreCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performAssertSZExtCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performIntrinsicWOChainCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue splitBinaryBitConstantOpImpl (DAGCombinerInfo &DCI, const SDLoc &SL, unsigned Opc, SDValue LHS, uint32_t ValLo, uint32_t ValHi) const
 Split the 64-bit value LHS into two 32-bit components, and perform the binary operation Opc to it with the corresponding constant operands.
 
SDValue performShlCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performSraCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performSrlCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performTruncateCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performMulCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performMulLoHiCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performMulhsCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performMulhuCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performCtlz_CttzCombine (const SDLoc &SL, SDValue Cond, SDValue LHS, SDValue RHS, DAGCombinerInfo &DCI) const
 
SDValue foldFreeOpFromSelect (TargetLowering::DAGCombinerInfo &DCI, SDValue N) const
 
SDValue performSelectCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
TargetLowering::NegatibleCost getConstantNegateCost (const ConstantFPSDNode *C) const
 
bool isConstantCostlierToNegate (SDValue N) const
 
bool isConstantCheaperToNegate (SDValue N) const
 
SDValue performFNegCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performFAbsCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
SDValue performRcpCombine (SDNode *N, DAGCombinerInfo &DCI) const
 
virtual SDValue LowerGlobalAddress (AMDGPUMachineFunction *MFI, SDValue Op, SelectionDAG &DAG) const
 
std::pair< SDValue, SDValue > split64BitValue (SDValue Op, SelectionDAG &DAG) const
 Return 64-bit value Op as two 32-bit integers.
 
SDValue getLoHalf64 (SDValue Op, SelectionDAG &DAG) const
 
SDValue getHiHalf64 (SDValue Op, SelectionDAG &DAG) const
 
std::pair< EVT, EVT > getSplitDestVTs (const EVT &VT, SelectionDAG &DAG) const
 Split a vector type into two parts.
 
std::pair< SDValue, SDValue > splitVector (const SDValue &N, const SDLoc &DL, const EVT &LoVT, const EVT &HighVT, SelectionDAG &DAG) const
 Split a vector value into two parts of types LoVT and HiVT.
 
SDValue SplitVectorLoad (SDValue Op, SelectionDAG &DAG) const
 Split a vector load into 2 loads of half the vector.
 
SDValue WidenOrSplitVectorLoad (SDValue Op, SelectionDAG &DAG) const
 Widen a suitably aligned v3 load.
 
SDValue SplitVectorStore (SDValue Op, SelectionDAG &DAG) const
 Split a vector store into 2 stores of half the vector.
 
SDValue LowerSTORE (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerSDIVREM (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerUDIVREM (SDValue Op, SelectionDAG &DAG) const
 
SDValue LowerDIVREM24 (SDValue Op, SelectionDAG &DAG, bool sign) const
 
void LowerUDIVREM64 (SDValue Op, SelectionDAG &DAG, SmallVectorImpl< SDValue > &Results) const
 
void analyzeFormalArgumentsCompute (CCState &State, const SmallVectorImpl< ISD::InputArg > &Ins) const
 The SelectionDAGBuilder will automatically promote function arguments with illegal types.
 
- Protected Member Functions inherited from llvm::TargetLoweringBase
void initActions ()
 Initialize all of the actions to default values.
 
Value * getDefaultSafeStackPointerLocation (IRBuilderBase &IRB, bool UseTLS) const
 
void setBooleanContents (BooleanContent Ty)
 Specify how the target extends the result of integer and floating point boolean values from i1 to a wider type.
 
void setBooleanContents (BooleanContent IntTy, BooleanContent FloatTy)
 Specify how the target extends the result of integer and floating point boolean values from i1 to a wider type.
 
void setBooleanVectorContents (BooleanContent Ty)
 Specify how the target extends the result of a vector boolean value from a vector of i1 to a wider type.
 
void setSchedulingPreference (Sched::Preference Pref)
 Specify the target scheduling preference.
 
void setMinimumJumpTableEntries (unsigned Val)
 Indicate the minimum number of blocks to generate jump tables.
 
void setMaximumJumpTableSize (unsigned)
 Indicate the maximum number of entries in jump tables.
 
void setStackPointerRegisterToSaveRestore (Register R)
 If set to a physical register, this specifies the register that llvm.stacksave/llvm.stackrestore should save and restore.
 
void setHasMultipleConditionRegisters (bool hasManyRegs=true)
 Tells the code generator that the target has multiple (allocatable) condition registers that can be used to store the results of comparisons for use by selects and conditional branches.
 
void setHasExtractBitsInsn (bool hasExtractInsn=true)
 Tells the code generator that the target has BitExtract instructions.
 
void setJumpIsExpensive (bool isExpensive=true)
 Tells the code generator not to expand logic operations on comparison predicates into separate sequences that increase the amount of flow control.
 
void addBypassSlowDiv (unsigned int SlowBitWidth, unsigned int FastBitWidth)
 Tells the code generator which bitwidths to bypass.
 
void addRegisterClass (MVT VT, const TargetRegisterClass *RC)
 Add the specified register class as an available regclass for the specified value type.
 
virtual std::pair< const TargetRegisterClass *, uint8_t > findRepresentativeClass (const TargetRegisterInfo *TRI, MVT VT) const
 Return the largest legal super-reg register class of the register class for the specified type and its associated "cost".
 
void computeRegisterProperties (const TargetRegisterInfo *TRI)
 Once all of the register classes are added, this allows us to compute derived properties we expose.
 
void setOperationAction (unsigned Op, MVT VT, LegalizeAction Action)
 Indicate that the specified operation does not work with the specified type and indicate what to do about it.
 
void setOperationAction (ArrayRef< unsigned > Ops, MVT VT, LegalizeAction Action)
 
void setOperationAction (ArrayRef< unsigned > Ops, ArrayRef< MVT > VTs, LegalizeAction Action)
 
void setLoadExtAction (unsigned ExtType, MVT ValVT, MVT MemVT, LegalizeAction Action)
 Indicate that the specified load with extension does not work with the specified type and indicate what to do about it.
 
void setLoadExtAction (ArrayRef< unsigned > ExtTypes, MVT ValVT, MVT MemVT, LegalizeAction Action)
 
void setLoadExtAction (ArrayRef< unsigned > ExtTypes, MVT ValVT, ArrayRef< MVT > MemVTs, LegalizeAction Action)
 
void setAtomicLoadExtAction (unsigned ExtType, MVT ValVT, MVT MemVT, LegalizeAction Action)
 Let target indicate that an extending atomic load of the specified type is legal.
 
void setAtomicLoadExtAction (ArrayRef< unsigned > ExtTypes, MVT ValVT, MVT MemVT, LegalizeAction Action)
 
void setAtomicLoadExtAction (ArrayRef< unsigned > ExtTypes, MVT ValVT, ArrayRef< MVT > MemVTs, LegalizeAction Action)
 
void setTruncStoreAction (MVT ValVT, MVT MemVT, LegalizeAction Action)
 Indicate that the specified truncating store does not work with the specified type and indicate what to do about it.
 
void setIndexedLoadAction (ArrayRef< unsigned > IdxModes, MVT VT, LegalizeAction Action)
 Indicate that the specified indexed load does or does not work with the specified type and indicate what to do about it.
 
void setIndexedLoadAction (ArrayRef< unsigned > IdxModes, ArrayRef< MVT > VTs, LegalizeAction Action)
 
void setIndexedStoreAction (ArrayRef< unsigned > IdxModes, MVT VT, LegalizeAction Action)
 Indicate that the specified indexed store does or does not work with the specified type and indicate what to do about it.
 
void setIndexedStoreAction (ArrayRef< unsigned > IdxModes, ArrayRef< MVT > VTs, LegalizeAction Action)
 
void setIndexedMaskedLoadAction (unsigned IdxMode, MVT VT, LegalizeAction Action)
 Indicate that the specified indexed masked load does or does not work with the specified type and indicate what to do about it.
 
void setIndexedMaskedStoreAction (unsigned IdxMode, MVT VT, LegalizeAction Action)
 Indicate that the specified indexed masked store does or does not work with the specified type and indicate what to do about it.
 
void setCondCodeAction (ArrayRef< ISD::CondCode > CCs, MVT VT, LegalizeAction Action)
 Indicate that the specified condition code is or isn't supported on the target and indicate what to do about it.
 
void setCondCodeAction (ArrayRef< ISD::CondCode > CCs, ArrayRef< MVT > VTs, LegalizeAction Action)
 
void AddPromotedToType (unsigned Opc, MVT OrigVT, MVT DestVT)
 If Opc/OrigVT is specified as being promoted, the promotion code defaults to trying a larger integer/fp until it can find one that works.
 
void setOperationPromotedToType (unsigned Opc, MVT OrigVT, MVT DestVT)
 Convenience method to set an operation to Promote and specify the type in a single call.
 
void setOperationPromotedToType (ArrayRef< unsigned > Ops, MVT OrigVT, MVT DestVT)
 
void setTargetDAGCombine (ArrayRef< ISD::NodeType > NTs)
 Targets should invoke this method for each target independent node that they want to provide a custom DAG combiner for by implementing the PerformDAGCombine virtual method.
 
void setMinFunctionAlignment (Align Alignment)
 Set the target's minimum function alignment.
 
void setPrefFunctionAlignment (Align Alignment)
 Set the target's preferred function alignment.
 
void setPrefLoopAlignment (Align Alignment)
 Set the target's preferred loop alignment.
 
void setMaxBytesForAlignment (unsigned MaxBytes)
 
void setMinStackArgumentAlignment (Align Alignment)
 Set the minimum stack alignment of an argument.
 
void setMaxAtomicSizeInBitsSupported (unsigned SizeInBits)
 Set the maximum atomic operation size supported by the backend.
 
void setMaxDivRemBitWidthSupported (unsigned SizeInBits)
 Set the size in bits of the maximum div/rem the backend supports.
 
void setMaxLargeFPConvertBitWidthSupported (unsigned SizeInBits)
 Set the size in bits of the maximum fp convert the backend supports.
 
void setMinCmpXchgSizeInBits (unsigned SizeInBits)
 Sets the minimum cmpxchg or ll/sc size supported by the backend.
 
void setSupportsUnalignedAtomics (bool UnalignedSupported)
 Sets whether unaligned atomic operations are supported.
 
virtual bool isExtFreeImpl (const Instruction *I) const
 Return true if the extension represented by I is free.
 
bool isLegalRC (const TargetRegisterInfo &TRI, const TargetRegisterClass &RC) const
 Return true if the value types that can be represented by the specified register class are all legal.
 
MachineBasicBlock * emitPatchPoint (MachineInstr &MI, MachineBasicBlock *MBB) const
 Replace/modify any TargetFrameIndex operands with a target-dependent sequence of memory operands that is recognized by PrologEpilogInserter.
 
- Static Protected Member Functions inherited from llvm::AMDGPUTargetLowering
static bool allowApproxFunc (const SelectionDAG &DAG, SDNodeFlags Flags)
 
static bool needsDenormHandlingF32 (const SelectionDAG &DAG, SDValue Src, SDNodeFlags Flags)
 
static EVT getEquivalentMemType (LLVMContext &Context, EVT VT)
 
- Protected Attributes inherited from llvm::TargetLoweringBase
unsigned GatherAllAliasesMaxDepth
 Depth that GatherAllAliases should continue looking for chain dependencies when trying to find a more preferable chain.
 
unsigned MaxStoresPerMemset
 Specify maximum number of store instructions per memset call.
 
unsigned MaxStoresPerMemsetOptSize
 Likewise for functions with the OptSize attribute.
 
unsigned MaxStoresPerMemcpy
 Specify maximum number of store instructions per memcpy call.
 
unsigned MaxStoresPerMemcpyOptSize
 Likewise for functions with the OptSize attribute.
 
unsigned MaxGluedStoresPerMemcpy = 0
 Specify max number of store instructions to glue in inlined memcpy.
 
unsigned MaxLoadsPerMemcmp
 Specify maximum number of load instructions per memcmp call.
 
unsigned MaxLoadsPerMemcmpOptSize
 Likewise for functions with the OptSize attribute.
 
unsigned MaxStoresPerMemmove
 Specify maximum number of store instructions per memmove call.
 
unsigned MaxStoresPerMemmoveOptSize
 Likewise for functions with the OptSize attribute.
 
bool PredictableSelectIsExpensive
 Tells the code generator that select is more expensive than a branch if the branch is usually predicted right.
 
bool EnableExtLdPromotion
 
bool IsStrictFPEnabled
 

Detailed Description

Definition at line 31 of file SIISelLowering.h.

Constructor & Destructor Documentation

◆ SITargetLowering()

SITargetLowering::SITargetLowering ( const TargetMachine &tm,
const GCNSubtarget &STI 
)

Definition at line 84 of file SIISelLowering.cpp.

References llvm::ISD::ABS, llvm::ISD::ADD, llvm::TargetLoweringBase::AddPromotedToType(), llvm::TargetLoweringBase::addRegisterClass(), llvm::ISD::ADDRSPACECAST, llvm::ISD::AND, llvm::ISD::ANY_EXTEND, llvm::ISD::ATOMIC_CMP_SWAP, llvm::ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS, llvm::ISD::ATOMIC_LOAD, llvm::ISD::ATOMIC_LOAD_ADD, llvm::ISD::ATOMIC_LOAD_AND, llvm::ISD::ATOMIC_LOAD_FADD, llvm::ISD::ATOMIC_LOAD_FMAX, llvm::ISD::ATOMIC_LOAD_FMIN, llvm::ISD::ATOMIC_LOAD_MAX, llvm::ISD::ATOMIC_LOAD_MIN, llvm::ISD::ATOMIC_LOAD_NAND, llvm::ISD::ATOMIC_LOAD_OR, llvm::ISD::ATOMIC_LOAD_SUB, llvm::ISD::ATOMIC_LOAD_UDEC_WRAP, llvm::ISD::ATOMIC_LOAD_UINC_WRAP, llvm::ISD::ATOMIC_LOAD_UMAX, llvm::ISD::ATOMIC_LOAD_UMIN, llvm::ISD::ATOMIC_LOAD_XOR, llvm::ISD::ATOMIC_STORE, llvm::ISD::ATOMIC_SWAP, llvm::ISD::BF16_TO_FP, llvm::ISD::BITCAST, llvm::ISD::BITREVERSE, llvm::ISD::BR_CC, llvm::ISD::BRCOND, llvm::ISD::BSWAP, llvm::ISD::BUILD_VECTOR, llvm::ISD::BUILTIN_OP_END, llvm::TargetLoweringBase::computeRegisterProperties(), llvm::ISD::CONCAT_VECTORS, llvm::ISD::Constant, llvm::ISD::ConstantFP, llvm::ISD::CTLZ, llvm::ISD::CTLZ_ZERO_UNDEF, llvm::ISD::CTPOP, llvm::ISD::CTTZ, llvm::ISD::CTTZ_ZERO_UNDEF, llvm::TargetLoweringBase::Custom, llvm::ISD::DEBUGTRAP, llvm::TargetLoweringBase::Expand, llvm::ISD::EXTRACT_SUBVECTOR, llvm::ISD::EXTRACT_VECTOR_ELT, llvm::ISD::FABS, llvm::ISD::FADD, llvm::ISD::FCANONICALIZE, llvm::ISD::FCBRT, llvm::ISD::FCEIL, llvm::ISD::FCOPYSIGN, llvm::ISD::FCOS, llvm::ISD::FDIV, llvm::ISD::FEXP, llvm::ISD::FEXP10, llvm::ISD::FEXP2, llvm::ISD::FFLOOR, llvm::ISD::FFREXP, llvm::ISD::FLDEXP, llvm::ISD::FLOG, llvm::ISD::FLOG10, llvm::ISD::FLOG2, llvm::ISD::FMA, llvm::ISD::FMAD, llvm::ISD::FMAXIMUM, llvm::ISD::FMAXIMUMNUM, llvm::ISD::FMAXNUM, llvm::ISD::FMAXNUM_IEEE, llvm::ISD::FMINIMUM, llvm::ISD::FMINIMUMNUM, llvm::ISD::FMINNUM, llvm::ISD::FMINNUM_IEEE, llvm::ISD::FMUL, llvm::ISD::FNEARBYINT, llvm::ISD::FNEG, llvm::ISD::FP16_TO_FP, llvm::ISD::FP_EXTEND, llvm::ISD::FP_ROUND, llvm::ISD::FP_TO_BF16, llvm::ISD::FP_TO_FP16, llvm::ISD::FP_TO_SINT, llvm::ISD::FP_TO_UINT, llvm::ISD::FPOW, llvm::ISD::FPOWI, llvm::ISD::FREM, llvm::ISD::FRINT, llvm::ISD::FROUND, llvm::ISD::FROUNDEVEN, llvm::ISD::FSHR, llvm::ISD::FSIN, llvm::ISD::FSQRT, llvm::ISD::FSUB, llvm::ISD::FTRUNC, llvm::ISD::GET_FPENV, llvm::ISD::GET_FPMODE, llvm::ISD::GET_ROUNDING, llvm::GCNSubtarget::getGeneration(), llvm::GCNSubtarget::getRegisterInfo(), llvm::AMDGPUSubtarget::GFX11, llvm::ISD::GlobalAddress, llvm::AMDGPUSubtarget::has16BitInsts(), llvm::GCNSubtarget::hasAddNoCarry(), llvm::GCNSubtarget::hasBCNT(), llvm::AMDGPUSubtarget::hasBF16ConversionInsts(), llvm::GCNSubtarget::hasBFE(), llvm::GCNSubtarget::hasBFI(), llvm::AMDGPUSubtarget::hasCvtPkF16F32Inst(), llvm::GCNSubtarget::hasFFBH(), llvm::GCNSubtarget::hasFFBL(), llvm::GCNSubtarget::hasIEEEMinMax(), llvm::GCNSubtarget::hasIntClamp(), llvm::GCNSubtarget::hasMad64_32(), llvm::GCNSubtarget::hasMadF16(), llvm::AMDGPUSubtarget::hasMadMacF32Insts(), llvm::GCNSubtarget::hasMed3_16(), llvm::GCNSubtarget::hasMinimum3Maximum3F32(), llvm::GCNSubtarget::hasMinimum3Maximum3PKF16(), llvm::GCNSubtarget::hasPackedFP32Ops(), llvm::GCNSubtarget::hasPrefetch(), llvm::GCNSubtarget::hasScalarSMulU64(), llvm::GCNSubtarget::hasSMemRealTime(), llvm::AMDGPUSubtarget::hasVOP3PInsts(), llvm::GCNSubtarget::haveRoundOpsF64(), llvm::ISD::INSERT_SUBVECTOR, llvm::ISD::INSERT_VECTOR_ELT, llvm::ISD::INTRINSIC_VOID, llvm::ISD::INTRINSIC_W_CHAIN, llvm::ISD::INTRINSIC_WO_CHAIN, llvm::ISD::IS_FPCLASS, 
llvm::TargetLoweringBase::isTypeLegal(), llvm::IRSimilarity::Legal, llvm::ISD::LOAD, llvm::ISD::MUL, llvm::ISD::OR, llvm::ISD::PREFETCH, llvm::TargetLoweringBase::Promote, llvm::ISD::READCYCLECOUNTER, llvm::ISD::READSTEADYCOUNTER, llvm::Sched::RegPressure, llvm::ISD::ROTL, llvm::ISD::ROTR, llvm::ISD::SADDSAT, llvm::ISD::SCALAR_TO_VECTOR, llvm::ISD::SDIV, llvm::ISD::SELECT, llvm::ISD::SELECT_CC, llvm::ISD::SET_FPENV, llvm::ISD::SET_ROUNDING, llvm::TargetLoweringBase::setBooleanContents(), llvm::TargetLoweringBase::setBooleanVectorContents(), llvm::ISD::SETCC, llvm::TargetLoweringBase::setHasExtractBitsInsn(), llvm::TargetLoweringBase::setOperationAction(), llvm::TargetLoweringBase::setSchedulingPreference(), llvm::TargetLoweringBase::setStackPointerRegisterToSaveRestore(), llvm::TargetLoweringBase::setTargetDAGCombine(), llvm::TargetLoweringBase::setTruncStoreAction(), llvm::ISD::SHL, llvm::ISD::SHL_PARTS, llvm::ISD::SIGN_EXTEND, llvm::ISD::SIGN_EXTEND_INREG, llvm::ISD::SINT_TO_FP, llvm::ISD::SMAX, llvm::ISD::SMIN, llvm::ISD::SMUL_LOHI, llvm::ISD::SMULO, llvm::ISD::SRA, llvm::ISD::SRA_PARTS, llvm::ISD::SREM, llvm::ISD::SRL, llvm::ISD::SRL_PARTS, llvm::ISD::SSUBSAT, llvm::ISD::STACKSAVE, llvm::ISD::STORE, llvm::ISD::STRICT_FLDEXP, llvm::ISD::STRICT_FP_EXTEND, llvm::ISD::STRICT_FP_ROUND, llvm::ISD::SUB, llvm::ISD::TRAP, TRI, llvm::ISD::TRUNCATE, llvm::ISD::UADDO, llvm::ISD::UADDO_CARRY, llvm::ISD::UADDSAT, llvm::ISD::UDIV, llvm::ISD::UINT_TO_FP, llvm::ISD::UMAX, llvm::ISD::UMIN, llvm::ISD::UMUL_LOHI, llvm::ISD::UMULO, llvm::ISD::UNDEF, llvm::ISD::UREM, llvm::AMDGPUSubtarget::useRealTrue16Insts(), llvm::ISD::USUBO, llvm::ISD::USUBO_CARRY, llvm::ISD::USUBSAT, llvm::ISD::VECTOR_SHUFFLE, llvm::ISD::XOR, llvm::ISD::ZERO_EXTEND, and llvm::TargetLoweringBase::ZeroOrOneBooleanContent.

Member Function Documentation

◆ AddMemOpInit()

void SITargetLowering::AddMemOpInit ( MachineInstr &MI) const

◆ AdjustInstrPostInstrSelection()

void SITargetLowering::AdjustInstrPostInstrSelection ( MachineInstr &MI,
SDNode *Node 
) const
override virtual

Assign the register class depending on the number of bits set in the writemask.

Reimplemented from llvm::TargetLowering.

Definition at line 15516 of file SIISelLowering.cpp.

References llvm::MachineFunction::getInfo(), llvm::GCNSubtarget::getInstrInfo(), llvm::AMDGPU::getNamedOperandIdx(), llvm::MachineFunction::getRegInfo(), llvm::GCNSubtarget::getRegisterInfo(), getSubtarget(), I, Info, MI, MRI, TII, and TRI.

◆ allocateHSAUserSGPRs()

void SITargetLowering::allocateHSAUserSGPRs ( CCState &CCInfo,
MachineFunction &MF,
const SIRegisterInfo &TRI,
SIMachineFunctionInfo &Info 
) const

◆ allocateLDSKernelId()

void SITargetLowering::allocateLDSKernelId ( CCState &CCInfo,
MachineFunction &MF,
const SIRegisterInfo &TRI,
SIMachineFunctionInfo &Info 
) const

◆ allocatePreloadKernArgSGPRs()

void SITargetLowering::allocatePreloadKernArgSGPRs ( CCState &CCInfo,
SmallVectorImpl< CCValAssign > &ArgLocs,
const SmallVectorImpl< ISD::InputArg > &Ins,
MachineFunction &MF,
const SIRegisterInfo &TRI,
SIMachineFunctionInfo &Info 
) const

◆ allocateSpecialEntryInputVGPRs()

void SITargetLowering::allocateSpecialEntryInputVGPRs ( CCState &CCInfo,
MachineFunction &MF,
const SIRegisterInfo &TRI,
SIMachineFunctionInfo &Info 
) const

◆ allocateSpecialInputSGPRs()

void SITargetLowering::allocateSpecialInputSGPRs ( CCState &CCInfo,
MachineFunction &MF,
const SIRegisterInfo &TRI,
SIMachineFunctionInfo &Info 
) const

◆ allocateSpecialInputVGPRs()

void SITargetLowering::allocateSpecialInputVGPRs ( CCState &CCInfo,
MachineFunction &MF,
const SIRegisterInfo &TRI,
SIMachineFunctionInfo &Info 
) const

Allocate implicit function VGPR arguments at the end of allocated user arguments.

Definition at line 2360 of file SIISelLowering.cpp.

References allocateVGPR32Input(), and Info.

◆ allocateSpecialInputVGPRsFixed()

void SITargetLowering::allocateSpecialInputVGPRsFixed ( CCState &CCInfo,
MachineFunction &MF,
const SIRegisterInfo &TRI,
SIMachineFunctionInfo &Info 
) const

Allocate implicit function VGPR arguments in fixed registers.

Definition at line 2381 of file SIISelLowering.cpp.

References llvm::CCState::AllocateReg(), llvm::ArgDescriptor::createRegister(), Info, and llvm::report_fatal_error().

Referenced by llvm::AMDGPUCallLowering::lowerFormalArguments(), and LowerFormalArguments().

◆ allocateSystemSGPRs()

void SITargetLowering::allocateSystemSGPRs ( CCState &CCInfo,
MachineFunction &MF,
SIMachineFunctionInfo &Info,
CallingConv::ID CallConv,
bool IsShader 
) const

◆ allowsMisalignedMemoryAccesses() [1/2]

bool SITargetLowering::allowsMisalignedMemoryAccesses ( EVT ,
unsigned AddrSpace,
Align Alignment,
MachineMemOperand::Flags Flags = MachineMemOperand::MONone,
unsigned * = nullptr 
) const
override virtual

Determine if the target supports unaligned memory accesses.

This function returns true if the target allows unaligned memory accesses of the specified type in the given address space. If true, it also returns the relative speed of the unaligned memory access in the last argument by reference. The higher the speed number, the faster the operation is compared to a number returned by another such call. This is used, for example, in situations where an array copy/move/set is converted to a sequence of store operations. Its use helps to ensure that such replacements don't generate code that causes an alignment error (trap) on the target machine.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1881 of file SIISelLowering.cpp.

References allowsMisalignedMemoryAccessesImpl(), and llvm::EVT::getSizeInBits().
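
A usage sketch, assuming TLI refers to this lowering object and the AMDGPU address-space constants are available; the optional out-parameter receives the relative speed:

  // Ask whether a misaligned v2i32 access in global memory is allowed,
  // and how fast it is relative to other such queries (higher is faster).
  unsigned Speed = 0;
  bool Allowed = TLI.allowsMisalignedMemoryAccesses(
      MVT::v2i32, AMDGPUAS::GLOBAL_ADDRESS, Align(4),
      MachineMemOperand::MONone, &Speed);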

◆ allowsMisalignedMemoryAccesses() [2/2]

bool llvm::SITargetLowering::allowsMisalignedMemoryAccesses ( LLT ,
unsigned AddrSpace,
Align Alignment,
MachineMemOperand::Flags Flags = MachineMemOperand::MONone,
unsigned * = nullptr 
) const
inline override virtual

LLT handling variant.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 333 of file SIISelLowering.h.

References allowsMisalignedMemoryAccessesImpl(), and llvm::LLT::getSizeInBits().

◆ allowsMisalignedMemoryAccessesImpl()

bool SITargetLowering::allowsMisalignedMemoryAccessesImpl ( unsigned Size,
unsigned AddrSpace,
Align Alignment,
MachineMemOperand::Flags Flags = MachineMemOperand::MONone,
unsigned *IsFast = nullptr 
) const

◆ buildRSRC()

MachineSDNode * SITargetLowering::buildRSRC ( SelectionDAG &DAG,
const SDLoc &DL,
SDValue Ptr,
uint32_t RsrcDword1,
uint64_t RsrcDword2And3 
) const

Return a resource descriptor with the 'Add TID' bit enabled. The TID (Thread ID) is multiplied by the stride value (bits [61:48] of the resource descriptor) to create an offset, which is added to the resource pointer.

Definition at line 15637 of file SIISelLowering.cpp.

References buildSMovImm32(), DL, llvm::SelectionDAG::getConstant(), llvm::SelectionDAG::getMachineNode(), llvm::SelectionDAG::getTargetConstant(), llvm::SelectionDAG::getTargetExtractSubreg(), and Ptr.

◆ bundleInstWithWaitcnt()

void SITargetLowering::bundleInstWithWaitcnt ( MachineInstr &MI) const

Insert MI into a BUNDLE with an S_WAITCNT 0 immediately following it.

Definition at line 4457 of file SIISelLowering.cpp.

References llvm::MachineInstrBuilder::addImm(), llvm::MIBundleBuilder::begin(), llvm::BuildMI(), llvm::finalizeBundle(), llvm::GCNSubtarget::getInstrInfo(), getSubtarget(), I, MBB, MI, and TII.

Referenced by emitGWSMemViolTestLoop(), and EmitInstrWithCustomInserter().

◆ CanLowerReturn()

bool SITargetLowering::CanLowerReturn ( CallingConv::ID ,
MachineFunction & ,
bool ,
const SmallVectorImpl< ISD::OutputArg > & ,
LLVMContext & 
) const
override virtual

This hook should be implemented to check whether the return values described by the Outs array can fit into the return registers.

If false is returned, an sret-demotion is performed.

Reimplemented from llvm::TargetLowering.

Definition at line 3147 of file SIISelLowering.cpp.

References llvm::AMDGPUTargetLowering::CCAssignFnForReturn(), llvm::CCState::CheckReturn(), llvm::GCNSubtarget::getMaxNumVGPRs(), llvm::CCState::isAllocated(), and llvm::AMDGPU::isEntryFunctionCC().
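
A sketch of the common implementation pattern (parameter names are illustrative; it assumes a CCAssignFnForReturn helper like the one referenced above):

  bool CanLowerReturn(CallingConv::ID CallConv, MachineFunction &MF,
                      bool IsVarArg,
                      const SmallVectorImpl<ISD::OutputArg> &Outs,
                      LLVMContext &Context) const override {
    // Run the return values through the calling convention; if they all
    // receive register assignments, no sret-demotion is needed.
    SmallVector<CCValAssign, 16> RVLocs;
    CCState CCInfo(CallConv, IsVarArg, MF, RVLocs, Context);
    return CCInfo.CheckReturn(Outs, CCAssignFnForReturn(CallConv, IsVarArg));
  }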

◆ canMergeStoresTo()

bool SITargetLowering::canMergeStoresTo ( unsigned AS,
EVT MemVT,
const MachineFunction &MF 
) const
override virtual

◆ checkAsmConstraintVal()

bool SITargetLowering::checkAsmConstraintVal ( SDValue  Op,
StringRef  Constraint,
uint64_t  Val 
) const

◆ checkAsmConstraintValA()

bool SITargetLowering::checkAsmConstraintValA ( SDValue  Op,
uint64_t  Val,
unsigned  MaxSize = 64 
) const

◆ checkForPhysRegDependency()

bool SITargetLowering::checkForPhysRegDependency ( SDNode *Def,
SDNode *User,
unsigned Op,
const TargetRegisterInfo *TRI,
const TargetInstrInfo *TII,
unsigned &PhysReg,
int &Cost 
) const
override virtual

Allows the target to handle physreg-carried dependency in target-specific way.

Used from the ScheduleDAGSDNodes to decide whether to add the edge to the dependency graph. Def - input: Selection DAG node defining the physical register. User - input: Selection DAG node using the physical register. Op - input: Number of the User operand. PhysReg - inout: set to the physical register if the edge is necessary, unchanged otherwise. Cost - inout: physical register copy cost. Returns 'true' if the edge is necessary, 'false' otherwise.

Reimplemented from llvm::TargetLowering.

Definition at line 16915 of file SIISelLowering.cpp.

References llvm::ISD::CopyToReg, llvm::TargetRegisterClass::getCopyCost(), llvm::SDNode::getMachineOpcode(), llvm::User::getOperand(), II, TII, and TRI.

◆ CollectTargetIntrinsicOperands()

void SITargetLowering::CollectTargetIntrinsicOperands ( const CallInst &I,
SmallVectorImpl< SDValue > &Ops,
SelectionDAG &DAG 
) const
override virtual

◆ combineRepeatedFPDivisors()

unsigned llvm::SITargetLowering::combineRepeatedFPDivisors ( ) const
inline override virtual

Indicate whether this target prefers to combine FDIVs with the same divisor.

If the transform should never be done, return zero. If the transform should be done, return the minimum number of divisor uses that must exist.

Reimplemented from llvm::TargetLowering.

Definition at line 370 of file SIISelLowering.h.
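
As an illustration (a sketch, not SI's actual threshold): returning 2 asks the combiner to rewrite repeated divisions such as x/d and y/d into r = 1.0/d followed by x*r and y*r once the divisor has at least two uses.

  unsigned combineRepeatedFPDivisors() const override {
    return 2; // combine once a divisor has at least two uses
  }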

◆ computeKnownAlignForTargetInstr()

Align SITargetLowering::computeKnownAlignForTargetInstr ( GISelKnownBits &Analysis,
Register R,
const MachineRegisterInfo &MRI,
unsigned Depth = 0 
) const
override virtual

Determine the known alignment for the pointer value R.

This can typically be inferred from the number of known low zero bits. However, for a pointer with a non-integral address space, the alignment value may be independent of the known low bits.

Reimplemented from llvm::TargetLowering.

Definition at line 16180 of file SIISelLowering.cpp.

References llvm::Intrinsic::getAttributes(), llvm::Function::getContext(), llvm::MachineFunction::getFunction(), llvm::GISelKnownBits::getMachineFunction(), MI, and MRI.

◆ computeKnownBitsForFrameIndex()

void SITargetLowering::computeKnownBitsForFrameIndex ( int FIOp,
KnownBits &Known,
const MachineFunction &MF 
) const
override virtual

Determine which of the bits of FrameIndex FIOp are known to be 0.

Default implementation computes low bits based on alignment information. This should preserve known bits passed into it.

Reimplemented from llvm::TargetLowering.

Definition at line 16088 of file SIISelLowering.cpp.

References llvm::TargetLowering::computeKnownBitsForFrameIndex(), getSubtarget(), llvm::APInt::setHighBits(), and llvm::KnownBits::Zero.

◆ computeKnownBitsForTargetInstr()

void SITargetLowering::computeKnownBitsForTargetInstr ( GISelKnownBits &Analysis,
Register R,
KnownBits &Known,
const APInt &DemandedElts,
const MachineRegisterInfo &MRI,
unsigned Depth = 0 
) const
override virtual

Determine which of the bits specified in Mask are known to be either zero or one and return them in the KnownZero/KnownOne bitsets.

The DemandedElts argument allows us to only collect the known bits that are shared by the requested vector elements. This is for GISel.

Reimplemented from llvm::TargetLowering.

Definition at line 16105 of file SIISelLowering.cpp.

References llvm::KnownBits::add(), llvm::GISelKnownBits::computeKnownBitsImpl(), llvm::countl_zero(), llvm::Depth, getSubtarget(), llvm::KnownBits::isUnknown(), knownBitsForWorkitemID(), MI, MRI, llvm::KnownBits::One, llvm::APInt::setBitsFrom(), llvm::APInt::setHighBits(), and llvm::KnownBits::Zero.
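
A minimal contract sketch; MyTarget::WORKITEM_ID is a hypothetical opcode used only for illustration:

  void computeKnownBitsForTargetInstr(GISelKnownBits &Analysis, Register R,
                                      KnownBits &Known,
                                      const APInt &DemandedElts,
                                      const MachineRegisterInfo &MRI,
                                      unsigned Depth) const override {
    // If the defining instruction is known to produce a value < 1024,
    // report that all bits from 10 upward are zero.
    const MachineInstr *MI = MRI.getVRegDef(R);
    if (MI && MI->getOpcode() == MyTarget::WORKITEM_ID) // hypothetical opcode
      Known.Zero.setBitsFrom(10);
  }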

◆ computeKnownBitsForTargetNode()

void SITargetLowering::computeKnownBitsForTargetNode ( const SDValue Op,
KnownBits &Known,
const APInt &DemandedElts,
const SelectionDAG &DAG,
unsigned Depth = 0 
) const
override virtual

◆ copyToM0()

SDValue SITargetLowering::copyToM0 ( SelectionDAG &DAG,
SDValue Chain,
const SDLoc &DL,
SDValue V 
) const

Definition at line 7783 of file SIISelLowering.cpp.

References DL, llvm::SelectionDAG::getMachineNode(), and llvm::M0().

◆ denormalsEnabledForType() [1/2]

bool SITargetLowering::denormalsEnabledForType ( const SelectionDAG &DAG,
EVT VT 
) const

◆ denormalsEnabledForType() [2/2]

bool SITargetLowering::denormalsEnabledForType ( LLT Ty,
const MachineFunction &MF 
) const

◆ emitExpandAtomicAddrSpacePredicate()

void SITargetLowering::emitExpandAtomicAddrSpacePredicate ( Instruction *AI) const

TODO: Only need to check private, then emit flat-known-not private (no need for shared block, or cast to global).

Definition at line 16940 of file SIISelLowering.cpp.

References llvm::PHINode::addIncoming(), Addr, llvm::buildAtomicRMWValue(), llvm::buildCmpXchgValue(), llvm::Instruction::clone(), llvm::BasicBlock::Create(), llvm::IRBuilderBase::CreateAddrSpaceCast(), llvm::IRBuilderBase::CreateAlignedLoad(), llvm::IRBuilderBase::CreateAlignedStore(), llvm::IRBuilderBase::CreateBr(), llvm::IRBuilderBase::CreateCondBr(), llvm::IRBuilderBase::CreateInsertValue(), llvm::IRBuilderBase::CreateIntrinsic(), llvm::IRBuilderBase::CreatePHI(), llvm::MDBuilder::createRange(), llvm::BasicBlock::end(), F, llvm::AtomicRMWInst::FAdd, llvm::PointerType::get(), llvm::PoisonValue::get(), llvm::AtomicCmpXchgInst::getAlign(), llvm::AtomicCmpXchgInst::getCompareOperand(), llvm::IRBuilderBase::getContext(), llvm::IRBuilderBase::GetInsertBlock(), llvm::IRBuilderBase::GetInsertPoint(), llvm::AtomicCmpXchgInst::getNewValOperand(), llvm::User::getOperand(), llvm::User::getOperandUse(), llvm::BasicBlock::getParent(), llvm::AtomicCmpXchgInst::getPointerOperandIndex(), llvm::AtomicRMWInst::getPointerOperandIndex(), llvm::Value::getType(), llvm::AMDGPUAS::GLOBAL_ADDRESS, llvm::GCNSubtarget::hasAtomicFaddInsts(), llvm::Instruction::insertInto(), llvm_unreachable, llvm::AMDGPUAS::LOCAL_ADDRESS, llvm::AMDGPUAS::PRIVATE_ADDRESS, llvm::Instruction::removeFromParent(), llvm::Value::replaceAllUsesWith(), llvm::Use::set(), llvm::IRBuilderBase::SetInsertPoint(), llvm::Instruction::setMetadata(), llvm::BasicBlock::splitBasicBlock(), llvm::Value::takeName(), and llvm::Value::use_empty().

Referenced by emitExpandAtomicCmpXchg(), and emitExpandAtomicRMW().

◆ emitExpandAtomicCmpXchg()

void SITargetLowering::emitExpandAtomicCmpXchg ( AtomicCmpXchgInst *CI) const
override virtual

Perform a cmpxchg expansion using a target-specific method.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 17155 of file SIISelLowering.cpp.

References emitExpandAtomicAddrSpacePredicate().

◆ emitExpandAtomicRMW()

void SITargetLowering::emitExpandAtomicRMW ( AtomicRMWInst *AI) const
override virtual

Perform an atomicrmw expansion in a target-specific way.

This is expected to be called when masked atomicrmw and bit test atomicrmw don't work, and the target supports another way to lower atomicrmw.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 17131 of file SIISelLowering.cpp.

References llvm::AtomicRMWInst::Add, emitExpandAtomicAddrSpacePredicate(), llvm::AMDGPUAS::FLAT_ADDRESS, llvm::AtomicRMWInst::getOperation(), llvm::AtomicRMWInst::getPointerAddressSpace(), llvm::AtomicRMWInst::getValOperand(), llvm::AtomicRMWInst::Or, llvm::AtomicRMWInst::setOperation(), llvm::AtomicRMWInst::Sub, and llvm::AtomicRMWInst::Xor.

◆ emitGWSMemViolTestLoop()

MachineBasicBlock * SITargetLowering::emitGWSMemViolTestLoop ( MachineInstr &MI,
MachineBasicBlock *BB 
) const

◆ EmitInstrWithCustomInserter()

MachineBasicBlock * SITargetLowering::EmitInstrWithCustomInserter ( MachineInstr &MI,
MachineBasicBlock *MBB 
) const
override virtual

This method should be implemented by targets that mark instructions with the 'usesCustomInserter' flag.

These instructions are special in various ways, which require special support to insert. The specified MachineInstr is created but not inserted into any basic blocks, and this method is called to expand it into a sequence of instructions, potentially also creating new basic blocks and control flow. As long as the returned basic block is different (i.e., we created a new one), the custom inserter is free to modify the rest of MBB.

Reimplemented from llvm::TargetLowering.

Definition at line 5036 of file SIISelLowering.cpp.

References llvm::Add, llvm::MachineInstrBuilder::add(), llvm::MachineInstrBuilder::addImm(), llvm::MachineInstrBuilder::addMBB(), AddMemOpInit(), llvm::MachineInstrBuilder::addReg(), llvm::MachineBasicBlock::addSuccessor(), llvm::Triple::AMDHSA, llvm::Triple::AMDPAL, assert(), llvm::BuildMI(), bundleInstWithWaitcnt(), llvm::MachineInstrBuilder::cloneMemRefs(), llvm::MachineOperand::CreateImm(), llvm::MachineFunction::CreateMachineBasicBlock(), llvm::RegState::Dead, llvm::AMDGPU::EncodingFields< Fields >::decode(), llvm::RegState::Define, DL, emitGWSMemViolTestLoop(), emitIndirectDst(), emitIndirectSrc(), llvm::TargetLowering::EmitInstrWithCustomInserter(), llvm::MachineBasicBlock::end(), llvm::AMDGPU::Hwreg::FP_DENORM_MASK, llvm::AMDGPU::Hwreg::FP_ROUND_MASK, llvm::MachineFunction::getInfo(), llvm::GCNSubtarget::getInstrInfo(), llvm::AMDGPUMachineFunction::getLDSSize(), llvm::MachineInstr::getOperand(), llvm::MachineBasicBlock::getParent(), llvm::MachineOperand::getReg(), llvm::MachineFunction::getRegInfo(), llvm::MachineFunction::getSubtarget(), getSubtarget(), llvm::TargetLoweringBase::getTargetMachine(), llvm::AMDGPU::getVOPe64(), llvm::GCNSubtarget::hasGWSAutoReplay(), llvm::GCNSubtarget::hasPrivEnabledTrap2NopBug(), llvm::GCNSubtarget::hasScalarAddSub64(), llvm::GCNSubtarget::hasShaderCyclesHiLoRegisters(), I, llvm::AMDGPU::Hwreg::ID_MODE, llvm::RegState::Implicit, llvm::RegState::ImplicitDefine, Info, llvm::MachineOperand::isReg(), llvm::RegState::Kill, lowerWaveReduce(), MI, MRI, llvm::Offset, llvm::MachineFunction::push_back(), llvm::MachineOperand::setIsUndef(), llvm::MachineOperand::setReg(), llvm::MachineBasicBlock::splitAt(), splitKillBlock(), llvm::MachineBasicBlock::succ_empty(), TII, and TRI.
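
The dispatch usually follows this shape; the pseudo-opcode and helper below are hypothetical, not SI's, and unknown opcodes fall back to the base class:

  MachineBasicBlock *
  EmitInstrWithCustomInserter(MachineInstr &MI,
                              MachineBasicBlock *BB) const override {
    switch (MI.getOpcode()) {
    case MyTarget::PSEUDO_TEST_LOOP: // hypothetical pseudo-instruction
      return emitTestLoop(MI, BB);   // hypothetical helper; may create blocks
    default:
      return TargetLowering::EmitInstrWithCustomInserter(MI, BB);
    }
  }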

◆ enableAggressiveFMAFusion() [1/2]

bool SITargetLowering::enableAggressiveFMAFusion ( EVT  VT) const
override virtual

Return true if target always benefits from combining into FMA for a given value type.

This must typically return false on targets where FMA takes more cycles to execute than FADD.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 5632 of file SIISelLowering.cpp.
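
An override sketch (not SI's actual policy) for a target where FMA is never slower than FADD on these types:

  bool enableAggressiveFMAFusion(EVT VT) const override {
    // Always fuse fadd(fmul(a, b), c) into fma for f32/f64.
    return VT == MVT::f32 || VT == MVT::f64;
  }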

◆ enableAggressiveFMAFusion() [2/2]

bool SITargetLowering::enableAggressiveFMAFusion ( LLT  Ty) const
override virtual

Return true if target always benefits from combining into FMA for a given value type.

This must typically return false on targets where FMA takes more cycles to execute than FADD.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 5643 of file SIISelLowering.cpp.

◆ finalizeLowering()

void SITargetLowering::finalizeLowering ( MachineFunction &MF) const
override virtual

Execute target specific actions to finalize target lowering.

This is used to set extra flags in MachineFrameInfo and to freeze the set of reserved registers. The default implementation just freezes the set of reserved registers.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 15993 of file SIISelLowering.cpp.

References assert(), llvm::MachineFunction::empty(), llvm::TargetLoweringBase::finalizeLowering(), getAlignedAGPRClassID(), llvm::TargetRegisterClass::getID(), llvm::MachineFunction::getInfo(), llvm::MachineFunction::getRegInfo(), llvm::GCNSubtarget::getRegisterInfo(), llvm::MachineFunction::getSubtarget(), llvm::TargetLoweringBase::getTargetMachine(), I, llvm::Register::index2VirtReg(), Info, MBB, MI, MRI, reservePrivateMemoryRegs(), TII, and TRI.

◆ getAddrModeArguments()

bool SITargetLowering::getAddrModeArguments ( IntrinsicInst * ,
SmallVectorImpl< Value * > & ,
Type *& 
) const
override virtual

CodeGenPrepare sinks address calculations into the same BB as Load/Store instructions reading the address.

This allows as much computation as possible to be done in the address mode for that operand. This hook also lets targets indicate when the same should be done for intrinsics that load or store.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1488 of file SIISelLowering.cpp.

References II, Ptr, and llvm::SmallVectorTemplateBase< T, bool >::push_back().
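
A contract sketch with a hypothetical intrinsic (SI's real intrinsic list lives in the definition cited above):

  bool getAddrModeArguments(IntrinsicInst *II, SmallVectorImpl<Value *> &Ops,
                            Type *&AccessTy) const override {
    switch (II->getIntrinsicID()) {
    case Intrinsic::mytarget_load_special: { // hypothetical intrinsic
      Value *Ptr = II->getArgOperand(0);
      AccessTy = II->getType(); // type the intrinsic loads
      Ops.push_back(Ptr);       // address operand worth sinking
      return true;
    }
    default:
      return false;
    }
  }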

◆ getAsmOperandConstVal()

bool SITargetLowering::getAsmOperandConstVal ( SDValue Op,
uint64_t &Val 
) const

◆ getConstraintType()

SITargetLowering::ConstraintType SITargetLowering::getConstraintType ( StringRef  Constraint) const
override virtual

Given a constraint, return the type of constraint it is for this target.

Reimplemented from llvm::TargetLowering.

Definition at line 15806 of file SIISelLowering.cpp.

References llvm::TargetLowering::C_Other, llvm::TargetLowering::C_RegisterClass, llvm::TargetLowering::getConstraintType(), isImmConstraint(), and llvm::StringRef::size().

◆ getNumRegistersForCallingConv()

unsigned SITargetLowering::getNumRegistersForCallingConv ( LLVMContext &Context,
CallingConv::ID CC,
EVT VT 
) const
override virtual

Certain targets require unusual breakdowns of certain types.

For MIPS, this occurs when a vector type is used, as vectors are passed through the integer register set.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1058 of file SIISelLowering.cpp.

References llvm::CallingConv::AMDGPU_KERNEL, CC, llvm::TargetLoweringBase::getNumRegistersForCallingConv(), llvm::EVT::getScalarType(), llvm::EVT::getSizeInBits(), llvm::EVT::getVectorNumElements(), llvm::AMDGPUSubtarget::has16BitInsts(), llvm::EVT::isVector(), and Size.

Referenced by adjustInliningThresholdUsingCallee().

◆ getOptimalMemOpType()

EVT SITargetLowering::getOptimalMemOpType ( const MemOp &Op,
const AttributeList & 
) const
override virtual

Returns the target specific optimal type for load and store operations as a result of memset, memcpy, and memmove lowering.

It returns EVT::Other if the type should be determined using generic target-independent logic.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1888 of file SIISelLowering.cpp.

◆ getPointerMemTy()

MVT SITargetLowering::getPointerMemTy ( const DataLayout &DL,
unsigned AS 
) const
override

Similarly, the in-memory representation of a p7 is {p8, i32}, aka v8i32 when padding is added.

The in-memory representation of a p9 is {p8, i32, i32}, which is also v8i32 with padding.

Definition at line 1188 of file SIISelLowering.cpp.

References llvm::AMDGPUAS::BUFFER_FAT_POINTER, llvm::AMDGPUAS::BUFFER_STRIDED_POINTER, DL, and llvm::TargetLoweringBase::getPointerMemTy().

◆ getPointerTy()

MVT SITargetLowering::getPointerTy ( const DataLayout &DL,
unsigned AS 
) const
override

Map address space 7 to MVT::v5i32 because that's its in-memory representation.

This return value is vector-typed because there is no MVT::i160 and it is not clear if one can be added. While this could cause issues during codegen, these address space 7 pointers will be rewritten away by then. Therefore, we can return MVT::v5i32 in order to allow pre-codegen passes that query TargetTransformInfo, often for cost modeling, to work.

Definition at line 1176 of file SIISelLowering.cpp.

References llvm::AMDGPUAS::BUFFER_FAT_POINTER, llvm::AMDGPUAS::BUFFER_STRIDED_POINTER, DL, and llvm::TargetLoweringBase::getPointerTy().
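
Usage sketch, assuming DL is an AMDGPU DataLayout and TLI refers to this lowering object:

  // Pre-codegen queries (e.g. cost modeling) for a buffer fat pointer
  // get a vector type back, per the note above.
  MVT FatPtrVT = TLI.getPointerTy(DL, AMDGPUAS::BUFFER_FAT_POINTER);
  // FatPtrVT is MVT::v5i32: 160 bits, since there is no MVT::i160.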

◆ getPreferredShiftAmountTy()

LLT SITargetLowering::getPreferredShiftAmountTy ( LLT  ShiftValueTy) const
override virtual

Return the preferred type to use for a shift opcode, given the shifted amount type is ShiftValueTy.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 5659 of file SIISelLowering.cpp.

References llvm::LLT::changeElementSize(), llvm::LLT::getScalarSizeInBits(), and llvm::AMDGPUSubtarget::has16BitInsts().

◆ getPreferredVectorAction()

TargetLoweringBase::LegalizeTypeAction SITargetLowering::getPreferredVectorAction ( MVT  VT) const
override virtual

◆ getPrefLoopAlignment()

Align SITargetLowering::getPrefLoopAlignment ( MachineLoop *ML) const
override virtual

◆ getRegClassFor()

const TargetRegisterClass * SITargetLowering::getRegClassFor ( MVT  VT,
bool  isDivergent 
) const
override virtual

Return the register class that should be used for the specified value type.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 16779 of file SIISelLowering.cpp.

References llvm::TargetLoweringBase::getRegClassFor(), llvm::GCNSubtarget::getRegisterInfo(), llvm::GCNSubtarget::isWave64(), and TRI.

Referenced by PostISelFolding().

◆ getRegForInlineAsmConstraint()

std::pair< unsigned, const TargetRegisterClass * > SITargetLowering::getRegForInlineAsmConstraint ( const TargetRegisterInfo *TRI,
StringRef Constraint,
MVT VT 
) const
override virtual

Given a physical register constraint (e.g.

{edx}), return the register number and the register class for the register.

Given a register class constraint, like 'r', if this corresponds directly to an LLVM register class, return a register of 0 and the register class pointer.

This should only be used for C_Register constraints. On error, this returns a register number of 0 and a null register class pointer.

Reimplemented from llvm::TargetLowering.

Definition at line 15672 of file SIISelLowering.cpp.

References llvm::BitWidth, llvm::StringRef::data(), End, llvm::StringRef::ends_with(), llvm::Failed(), llvm::TargetLowering::getRegForInlineAsmConstraint(), llvm::TargetRegisterClass::getRegister(), llvm::SIRegisterInfo::getSGPRClassForBitWidth(), llvm::MVT::getSizeInBits(), llvm::GCNSubtarget::hasMAIInsts(), Idx, llvm::SIRegisterInfo::isAGPRClass(), llvm::SIRegisterInfo::isSGPRClass(), llvm::TargetLoweringBase::isTypeLegal(), llvm::MVT::isVector(), llvm::SIRegisterInfo::isVGPRClass(), RegName, llvm::MVT::SimpleTy, llvm::StringRef::size(), llvm::StringRef::starts_with(), and TRI.

Referenced by llvm::GCNTTIImpl::isInlineAsmSourceOfDivergence(), and requiresUniformRegister().
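
Usage sketch with an illustrative constraint and type; AMDGPU's "s" is a register-class constraint, so a 0 register plus a class pointer comes back:

  auto [Reg, RC] = TLI.getRegForInlineAsmConstraint(TRI, "s", MVT::i32);
  // Reg == 0 and RC is an SGPR register class for a pure class constraint.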

◆ getRegisterByName()

Register SITargetLowering::getRegisterByName ( const char *RegName,
LLT Ty,
const MachineFunction &MF 
) const
override virtual

Return the register ID of the name passed in.

Used by named register global variables extension. There is no target-independent behaviour so the default action is to bail.

Reimplemented from llvm::TargetLowering.

Definition at line 4357 of file SIISelLowering.cpp.

References llvm::StringSwitch< T, R >::Case(), llvm::StringSwitch< T, R >::Default(), llvm::GCNSubtarget::getRegisterInfo(), llvm::LLT::getSizeInBits(), llvm::GCNSubtarget::hasFlatScrRegister(), llvm_unreachable, RegName, and llvm::report_fatal_error().
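
Usage sketch for the named-register extension behind llvm.read_register; it assumes "exec" is among the names the StringSwitch above accepts and that the requested size matches:

  Register R = TLI.getRegisterByName("exec", LLT::scalar(64), MF);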

◆ getRegisterTypeForCallingConv()

MVT SITargetLowering::getRegisterTypeForCallingConv ( LLVMContext &Context,
CallingConv::ID CC,
EVT VT 
) const
override virtual

Certain combinations of ABIs, Targets and features require that types are legal for some operations and not for other operations.

For MIPS all vector types must be passed through the integer register set.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1029 of file SIISelLowering.cpp.

References llvm::CallingConv::AMDGPU_KERNEL, CC, llvm::TargetLoweringBase::getRegisterTypeForCallingConv(), llvm::EVT::getScalarType(), llvm::EVT::getSimpleVT(), llvm::EVT::getSizeInBits(), llvm::AMDGPUSubtarget::has16BitInsts(), llvm::EVT::isInteger(), llvm::EVT::isVector(), and Size.

◆ getRoundingControlRegisters()

ArrayRef< MCPhysReg > SITargetLowering::getRoundingControlRegisters ( ) const
override virtual

Returns a 0 terminated array of rounding control registers that can be attached into strict FP call.

Reimplemented from llvm::TargetLowering.

Definition at line 990 of file SIISelLowering.cpp.

◆ getScalarShiftAmountTy()

MVT SITargetLowering::getScalarShiftAmountTy ( const DataLayout &DL,
EVT 
) const
override virtual

Return the type to use for a scalar shift opcode, given the shifted amount type.

Targets should return a legal type if the input type is legal. Targets can return a type that is too small if the input type is illegal.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 5653 of file SIISelLowering.cpp.

◆ getSetCCResultType()

EVT SITargetLowering::getSetCCResultType ( const DataLayout &DL,
LLVMContext &Context,
EVT VT 
) const
override virtual

Return the ValueType of the result of SETCC operations.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 5645 of file SIISelLowering.cpp.

References llvm::EVT::getVectorNumElements(), llvm::EVT::getVectorVT(), and llvm::EVT::isVector().

◆ getSubtarget()

const GCNSubtarget * SITargetLowering::getSubtarget ( ) const

◆ getTargetMMOFlags()

MachineMemOperand::Flags SITargetLowering::getTargetMMOFlags ( const Instruction &I) const
override virtual

This callback is used to inspect load/store instructions and add target-specific MachineMemOperand flags to them.

The default implementation does nothing.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 16905 of file SIISelLowering.cpp.

References I, llvm::MOLastUse, llvm::MONoClobber, and llvm::MachineMemOperand::MONone.

◆ getTgtMemIntrinsic()

bool SITargetLowering::getTgtMemIntrinsic ( IntrinsicInfo & ,
const CallInst & ,
MachineFunction & ,
unsigned 
) const
override virtual

Given an intrinsic, checks if on the target the intrinsic will need to map to a MemIntrinsicNode (touches memory).

If this is the case, it returns true and stores the intrinsic information into the IntrinsicInfo that was passed to the function.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1197 of file SIISelLowering.cpp.

References llvm::CallBase::arg_size(), llvm::AMDGPUAS::BUFFER_RESOURCE, llvm::MemoryEffectsBase< LocationEnum >::doesNotAccessMemory(), llvm::AMDGPU::MIMGBaseOpcodeInfo::Gather4, llvm::CallBase::getArgOperand(), llvm::Intrinsic::getAttributes(), llvm::Value::getContext(), llvm::MachineFunction::getDataLayout(), llvm::SIMachineFunctionInfo::getGWSPSV(), llvm::AMDGPU::getImageDimIntrinsicInfo(), llvm::MachineFunction::getInfo(), llvm::EVT::getIntegerVT(), llvm::AttributeList::getMemoryEffects(), llvm::AMDGPU::getMIMGBaseOpcodeInfo(), llvm::User::getOperand(), llvm::TargetLoweringBase::getTargetMachine(), llvm::Value::getType(), llvm::TargetLoweringBase::getValueType(), llvm::MVT::getVT(), llvm::Instruction::hasMetadata(), Info, Intr, llvm::ISD::INTRINSIC_VOID, llvm::ISD::INTRINSIC_W_CHAIN, llvm::Type::isVoidTy(), llvm::ConstantInt::isZero(), llvm::AMDGPU::lookupRsrcIntrinsic(), memVTFromLoadIntrData(), memVTFromLoadIntrReturn(), llvm::MachineMemOperand::MODereferenceable, llvm::MachineMemOperand::MOInvariant, llvm::MachineMemOperand::MOLoad, llvm::MachineMemOperand::MONone, llvm::MachineMemOperand::MOStore, llvm::MachineMemOperand::MOVolatile, llvm::AMDGPU::MIMGBaseOpcodeInfo::NoReturn, llvm::MemoryEffectsBase< LocationEnum >::onlyReadsMemory(), llvm::MemoryEffectsBase< LocationEnum >::onlyWritesMemory(), llvm::popcount(), llvm::AMDGPUAS::STREAMOUT_REGISTER, and llvm::AMDGPU::CPol::VOLATILE.
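
A sketch of filling in the IntrinsicInfo for a hypothetical memory-touching intrinsic; field names follow TargetLowering::IntrinsicInfo:

  bool getTgtMemIntrinsic(IntrinsicInfo &Info, const CallInst &CI,
                          MachineFunction &MF,
                          unsigned IntrinsicID) const override {
    if (IntrinsicID == Intrinsic::mytarget_atomic_load) { // hypothetical
      Info.opc = ISD::INTRINSIC_W_CHAIN; // has a chain, produces a value
      Info.memVT = MVT::i32;             // memory VT actually accessed
      Info.ptrVal = CI.getArgOperand(0); // IR value of the address
      Info.align = Align(4);
      Info.flags = MachineMemOperand::MOLoad | MachineMemOperand::MOVolatile;
      return true;
    }
    return false;
  }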

◆ getVectorTypeBreakdownForCallingConv()

unsigned SITargetLowering::getVectorTypeBreakdownForCallingConv ( LLVMContext &Context,
CallingConv::ID CC,
EVT VT,
EVT &IntermediateVT,
unsigned &NumIntermediates,
MVT &RegisterVT 
) const
override virtual

Certain targets such as MIPS require that some types such as vectors are always broken down into scalars in some contexts.

This occurs even if the vector type is legal.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1084 of file SIISelLowering.cpp.

References llvm::CallingConv::AMDGPU_KERNEL, CC, llvm::EVT::getScalarType(), llvm::EVT::getSimpleVT(), llvm::EVT::getSizeInBits(), llvm::EVT::getVectorNumElements(), llvm::TargetLoweringBase::getVectorTypeBreakdownForCallingConv(), llvm::AMDGPUSubtarget::has16BitInsts(), llvm::EVT::isInteger(), llvm::EVT::isVector(), and Size.

◆ hasMemSDNodeUser()

bool SITargetLowering::hasMemSDNodeUser ( SDNode *N) const

◆ initializeSplitCSR()

void SITargetLowering::initializeSplitCSR ( MachineBasicBlock *Entry) const
override virtual

Perform necessary initialization to handle a subset of CSRs explicitly via copies.

This function is called at the beginning of instruction selection.

Reimplemented from llvm::TargetLowering.

Definition at line 2759 of file SIISelLowering.cpp.

◆ insertCopiesSplitCSR()

void SITargetLowering::insertCopiesSplitCSR ( MachineBasicBlock *  Entry,
const SmallVectorImpl< MachineBasicBlock * > &  Exits 
) const
override virtual

Insert explicit copies in entry and exit blocks.

We copy a subset of CSRs to virtual registers in the entry block, and copy them back to physical registers in the exit blocks. This function is called at the end of instruction selection.

Reimplemented from llvm::TargetLowering.

Definition at line 2761 of file SIISelLowering.cpp.

References llvm::MachineInstrBuilder::addReg(), llvm::BuildMI(), contains(), llvm::GCNSubtarget::getInstrInfo(), llvm::GCNSubtarget::getRegisterInfo(), getSubtarget(), I, llvm_unreachable, MBBI, MRI, TII, and TRI.

◆ isCanonicalized() [1/2]

bool SITargetLowering::isCanonicalized ( Register  Reg,
const MachineFunction &  MF,
unsigned  MaxDepth = 5 
) const

◆ isCanonicalized() [2/2]

bool SITargetLowering::isCanonicalized ( SelectionDAG &  DAG,
SDValue  Op,
unsigned  MaxDepth = 5 
) const

Definition at line 12800 of file SIISelLowering.cpp.

References llvm::ISD::AND, llvm::ISD::BF16_TO_FP, llvm::ISD::BITCAST, llvm::ISD::BUILD_VECTOR, llvm::AMDGPUISD::CLAMP, llvm::AMDGPUISD::COS_HW, llvm::AMDGPUISD::CVT_F32_UBYTE0, llvm::AMDGPUISD::CVT_F32_UBYTE1, llvm::AMDGPUISD::CVT_F32_UBYTE2, llvm::AMDGPUISD::CVT_F32_UBYTE3, llvm::AMDGPUISD::CVT_PKRTZ_F16_F32, denormalsEnabledForType(), llvm::AMDGPUISD::DIV_FIXUP, llvm::AMDGPUISD::DIV_FMAS, llvm::AMDGPUISD::DIV_SCALE, llvm::AMDGPUISD::EXP, llvm::ISD::EXTRACT_SUBVECTOR, llvm::ISD::EXTRACT_VECTOR_ELT, F, llvm::ISD::FABS, llvm::ISD::FADD, llvm::ISD::FCANONICALIZE, llvm::ISD::FCEIL, llvm::ISD::FCOPYSIGN, llvm::ISD::FCOS, llvm::ISD::FDIV, llvm::ISD::FFLOOR, llvm::ISD::FLDEXP, llvm::ISD::FMA, llvm::ISD::FMAD, llvm::AMDGPUISD::FMAD_FTZ, llvm::AMDGPUISD::FMAX3, llvm::ISD::FMAXIMUM, llvm::AMDGPUISD::FMAXIMUM3, llvm::ISD::FMAXNUM, llvm::ISD::FMAXNUM_IEEE, llvm::AMDGPUISD::FMED3, llvm::AMDGPUISD::FMIN3, llvm::ISD::FMINIMUM, llvm::AMDGPUISD::FMINIMUM3, llvm::ISD::FMINNUM, llvm::ISD::FMINNUM_IEEE, llvm::ISD::FMUL, llvm::AMDGPUISD::FMUL_LEGACY, llvm::ISD::FNEG, llvm::ISD::FP16_TO_FP, llvm::ISD::FP_EXTEND, llvm::ISD::FP_ROUND, llvm::ISD::FP_TO_BF16, llvm::ISD::FP_TO_FP16, llvm::AMDGPUISD::FP_TO_FP16, llvm::AMDGPUISD::FRACT, llvm::ISD::FREM, llvm::ISD::FSIN, llvm::ISD::FSINCOS, llvm::ISD::FSQRT, llvm::ISD::FSUB, llvm::MachineFunction::getDenormalMode(), llvm::DenormalMode::getIEEE(), llvm::SelectionDAG::getMachineFunction(), llvm::DWARFExpression::Operation::getNumOperands(), llvm::SDValue::getOpcode(), llvm::SDValue::getOperand(), llvm::SDValue::getValueType(), I, llvm::ISD::INSERT_VECTOR_ELT, llvm::ISD::INTRINSIC_WO_CHAIN, isCanonicalized(), llvm::SelectionDAG::isKnownNeverSNaN(), llvm::AMDGPUISD::LOG, MaxDepth, llvm::AMDGPUISD::RCP, llvm::AMDGPUISD::RCP_IFLAG, llvm::AMDGPUISD::RCP_LEGACY, RHS, llvm::AMDGPUISD::RSQ, llvm::AMDGPUISD::RSQ_CLAMP, llvm::ISD::SELECT, llvm::AMDGPUISD::SIN_HW, llvm::GCNSubtarget::supportsMinMaxDenormModes(), llvm::ISD::TRUNCATE, and llvm::ISD::UNDEF.

Referenced by isCanonicalized().

◆ isEligibleForTailCallOptimization()

bool SITargetLowering::isEligibleForTailCallOptimization ( SDValue  Callee,
CallingConv::ID  CalleeCC,
bool  isVarArg,
const SmallVectorImpl< ISD::OutputArg > &  Outs,
const SmallVectorImpl< SDValue > &  OutVals,
const SmallVectorImpl< ISD::InputArg > &  Ins,
SelectionDAG &  DAG 
) const

◆ isExtractSubvectorCheap()

bool SITargetLowering::isExtractSubvectorCheap ( EVT  ResVT,
EVT  SrcVT,
unsigned  Index 
) const
override virtual

Return true if EXTRACT_SUBVECTOR is cheap for extracting this result type from this source type with this index.

This is needed because EXTRACT_SUBVECTOR usually has custom lowering that depends on the index of the first element, and only the target knows which lowering is cheap.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1942 of file SIISelLowering.cpp.

References llvm::ISD::EXTRACT_SUBVECTOR, and llvm::TargetLoweringBase::isOperationLegalOrCustom().

◆ isFMADLegal() [1/2]

bool SITargetLowering::isFMADLegal ( const MachineInstr &  MI,
const LLT  Ty 
) const
override virtual

Returns true if MI can be combined with another instruction to form TargetOpcode::G_FMAD.

N may be a TargetOpcode::G_FADD, TargetOpcode::G_FSUB, or a TargetOpcode::G_FMUL which will be distributed into an fadd/fsub.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 5726 of file SIISelLowering.cpp.

References denormalModeIsFlushAllF32(), denormalModeIsFlushAllF64F16(), llvm::LLT::getScalarSizeInBits(), llvm::GCNSubtarget::hasMadF16(), llvm::AMDGPUSubtarget::hasMadMacF32Insts(), llvm::LLT::isScalar(), and MI.

◆ isFMADLegal() [2/2]

bool SITargetLowering::isFMADLegal ( const SelectionDAG &  DAG,
const SDNode *  N 
) const
override virtual

Returns true if N can be combined with another node to form an ISD::FMAD.

N may be an ISD::FADD, ISD::FSUB, or an ISD::FMUL which will be distributed into an fadd/fsub.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 5739 of file SIISelLowering.cpp.

References denormalModeIsFlushAllF32(), denormalModeIsFlushAllF64F16(), llvm::SelectionDAG::getMachineFunction(), llvm::GCNSubtarget::hasMadF16(), llvm::AMDGPUSubtarget::hasMadMacF32Insts(), and N.

◆ isFMAFasterThanFMulAndFAdd() [1/2]

bool SITargetLowering::isFMAFasterThanFMulAndFAdd ( const MachineFunction &  MF,
const LLT   
) const
override virtual

Return true if an FMA operation is faster than a pair of fmul and fadd instructions.

fmuladd intrinsics will be expanded to FMAs when this method returns true, otherwise fmuladd is expanded to fmul + fadd.

NOTE: This may be called before legalization on types for which FMAs are not legal, but should return true if those types will eventually legalize to types that support FMAs. After legalization, it will only be called on types that support FMAs (via Legal or Custom actions)

Reimplemented from llvm::TargetLoweringBase.

Definition at line 5710 of file SIISelLowering.cpp.

References llvm::LLT::getScalarSizeInBits(), and isFMAFasterThanFMulAndFAdd().

◆ isFMAFasterThanFMulAndFAdd() [2/2]

bool SITargetLowering::isFMAFasterThanFMulAndFAdd ( const MachineFunction &  MF,
EVT   
) const
override virtual

Return true if an FMA operation is faster than a pair of fmul and fadd instructions.

fmuladd intrinsics will be expanded to FMAs when this method returns true, otherwise fmuladd is expanded to fmul + fadd.

NOTE: This may be called before legalization on types for which FMAs are not legal, but should return true if those types will eventually legalize to types that support FMAs. After legalization, it will only be called on types that support FMAs (via Legal or Custom actions)

Targets that care about soft float support should return false when soft float code is being generated (i.e. use-soft-float).

Reimplemented from llvm::TargetLoweringBase.

Definition at line 5680 of file SIISelLowering.cpp.

References denormalModeIsFlushAllF32(), denormalModeIsFlushAllF64F16(), llvm::EVT::getScalarType(), llvm::EVT::getSimpleVT(), llvm::AMDGPUSubtarget::has16BitInsts(), llvm::GCNSubtarget::hasDLInsts(), llvm::AMDGPUSubtarget::hasFastFMAF32(), llvm::AMDGPUSubtarget::hasMadMacF32Insts(), and llvm::MVT::SimpleTy.

Referenced by isFMAFasterThanFMulAndFAdd().
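
The practical effect is most visible in how fmuladd is expanded. A simplified sketch of the consuming logic (not the verbatim SelectionDAG code):

    // Sketch: fuse to FMA only when the target says it is faster.
    SDValue expandFMulAdd(SelectionDAG &DAG, const TargetLowering &TLI,
                          const SDLoc &DL, EVT VT,
                          SDValue A, SDValue B, SDValue C) {
      if (TLI.isFMAFasterThanFMulAndFAdd(DAG.getMachineFunction(), VT))
        return DAG.getNode(ISD::FMA, DL, VT, A, B, C); // single fused op
      SDValue Mul = DAG.getNode(ISD::FMUL, DL, VT, A, B);
      return DAG.getNode(ISD::FADD, DL, VT, Mul, C);   // unfused pair
    }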

◆ isFPExtFoldable() [1/2]

bool SITargetLowering::isFPExtFoldable ( const MachineInstr &  MI,
unsigned  Opcode,
LLT  DestTy,
LLT  SrcTy 
) const
override virtual

Return true if an fpext operation input to an Opcode operation is free (for instance, because half-precision floating-point numbers are implicitly extended to float-precision) for an FMA instruction.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1013 of file SIISelLowering.cpp.

References denormalModeIsFlushAllF32(), llvm::LLT::getScalarSizeInBits(), llvm::GCNSubtarget::hasFmaMixInsts(), llvm::AMDGPUSubtarget::hasMadMixInsts(), and MI.

◆ isFPExtFoldable() [2/2]

bool SITargetLowering::isFPExtFoldable ( const SelectionDAG &  DAG,
unsigned  Opcode,
EVT  DestVT,
EVT  SrcVT 
) const
override virtual

Return true if an fpext operation input to an Opcode operation is free (for instance, because half-precision floating-point numbers are implicitly extended to float-precision) for an FMA instruction.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1003 of file SIISelLowering.cpp.

References denormalModeIsFlushAllF32(), llvm::ISD::FMA, llvm::ISD::FMAD, llvm::SelectionDAG::getMachineFunction(), llvm::EVT::getScalarType(), llvm::GCNSubtarget::hasFmaMixInsts(), and llvm::AMDGPUSubtarget::hasMadMixInsts().

◆ isFreeAddrSpaceCast()

bool SITargetLowering::isFreeAddrSpaceCast ( unsigned  SrcAS,
unsigned  DestAS 
) const
override virtual

Returns true if a cast from SrcAS to DestAS is "cheap", such that e.g.

we are happy to sink it into basic blocks. A cast may be free, but not necessarily a no-op. e.g. a free truncate from a 64-bit to 32-bit pointer.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1916 of file SIISelLowering.cpp.

References llvm::AMDGPUAS::FLAT_ADDRESS, and llvm::TargetLoweringBase::getTargetMachine().

◆ isKnownNeverNaNForTargetNode()

bool SITargetLowering::isKnownNeverNaNForTargetNode ( SDValue  Op,
const SelectionDAG &  DAG,
bool  SNaN = false,
unsigned  Depth = 0 
) const
override virtual

If SNaN is false, returns true if Op is known to never be any NaN. If SNaN is true, returns true if Op is known to never be a signaling NaN.

Reimplemented from llvm::AMDGPUTargetLowering.

Definition at line 16369 of file SIISelLowering.cpp.

References llvm::AMDGPUISD::CLAMP, llvm::Depth, llvm::MachineFunction::getInfo(), llvm::SelectionDAG::getMachineFunction(), Info, llvm::SelectionDAG::isKnownNeverNaN(), and llvm::AMDGPUTargetLowering::isKnownNeverNaNForTargetNode().

◆ isLegalAddressingMode()

bool SITargetLowering::isLegalAddressingMode ( const DataLayout &  DL,
const AddrMode &  AM,
Type *  Ty,
unsigned  AddrSpace,
Instruction *  I = nullptr 
) const
override virtual

Return true if the addressing mode represented by AM is legal for this target, for a load/store of the specified type.

The type may be VoidTy, in which case only return true if the addressing mode is legal for a load/store of any legal type. TODO: Handle pre/postinc as well.

If the address space cannot be determined, it will be -1.

TODO: Remove default argument

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1597 of file SIISelLowering.cpp.

References llvm::TargetLoweringBase::AddrMode::BaseGV, llvm::TargetLoweringBase::AddrMode::BaseOffs, llvm::AMDGPUAS::BUFFER_FAT_POINTER, llvm::AMDGPUAS::BUFFER_RESOURCE, llvm::AMDGPUAS::BUFFER_STRIDED_POINTER, llvm::AMDGPUAS::CONSTANT_ADDRESS, llvm::AMDGPUAS::CONSTANT_ADDRESS_32BIT, DL, llvm::GCNSubtarget::enableFlatScratch(), llvm::AMDGPUAS::FLAT_ADDRESS, llvm::GCNSubtarget::getGeneration(), llvm::AMDGPUSubtarget::GFX12, llvm::AMDGPUSubtarget::GFX9, llvm::AMDGPUAS::GLOBAL_ADDRESS, llvm::TargetLoweringBase::AddrMode::HasBaseReg, llvm::GCNSubtarget::hasGDS(), llvm::GCNSubtarget::hasScalarSubwordLoads(), isLegalFlatAddressingMode(), isLegalGlobalAddressingMode(), llvm::Type::isSized(), llvm::AMDGPUAS::LOCAL_ADDRESS, llvm::AMDGPUAS::PRIVATE_ADDRESS, llvm::AMDGPUAS::REGION_ADDRESS, llvm::TargetLoweringBase::AddrMode::Scale, llvm::AMDGPUSubtarget::SEA_ISLANDS, llvm::AMDGPUSubtarget::SOUTHERN_ISLANDS, and llvm::AMDGPUAS::UNKNOWN_ADDRESS_SPACE.
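
For reference, the queried AddrMode encodes BaseGV + BaseOffs + BaseReg + Scale*IndexReg. A sketch of a query asking whether "register + 16" is legal for a given type and address space (the constants are illustrative):

    // Sketch: test legality of a base-register-plus-offset mode.
    bool isRegPlus16Legal(const TargetLowering &TLI, const DataLayout &DL,
                          Type *AccessTy, unsigned AddrSpace) {
      TargetLoweringBase::AddrMode AM;
      AM.BaseGV = nullptr;   // no global base
      AM.BaseOffs = 16;      // constant displacement
      AM.HasBaseReg = true;  // one base register
      AM.Scale = 0;          // no scaled index register
      return TLI.isLegalAddressingMode(DL, AM, AccessTy, AddrSpace);
    }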

◆ isLegalFlatAddressingMode()

bool SITargetLowering::isLegalFlatAddressingMode ( const AddrMode &  AM,
unsigned  AddrSpace 
) const

◆ isLegalGlobalAddressingMode()

bool SITargetLowering::isLegalGlobalAddressingMode ( const AddrMode &  AM) const

◆ isMemOpHasNoClobberedMemOperand()

bool SITargetLowering::isMemOpHasNoClobberedMemOperand ( const SDNode *  N) const

◆ isNonGlobalAddrSpace()

bool SITargetLowering::isNonGlobalAddrSpace ( unsigned  AS)
static

◆ isOffsetFoldingLegal()

bool SITargetLowering::isOffsetFoldingLegal ( const GlobalAddressSDNode *  GA) const
override virtual

Return true if folding a constant offset with the given GlobalAddress is legal.

It is frequently not legal in PIC relocation models.

Reimplemented from llvm::TargetLowering.

Definition at line 7653 of file SIISelLowering.cpp.

References llvm::AMDGPUAS::CONSTANT_ADDRESS, llvm::AMDGPUAS::CONSTANT_ADDRESS_32BIT, llvm::GlobalAddressSDNode::getAddressSpace(), llvm::GlobalAddressSDNode::getGlobal(), llvm::AMDGPUAS::GLOBAL_ADDRESS, llvm::AMDGPUSubtarget::isAmdHsaOS(), and shouldEmitGOTReloc().

◆ isReassocProfitable() [1/2]

bool SITargetLowering::isReassocProfitable ( MachineRegisterInfo &  MRI,
Register  N0,
Register  N1 
) const
override virtual

Reimplemented from llvm::AMDGPUTargetLowering.

Definition at line 16899 of file SIISelLowering.cpp.

References MRI.

◆ isReassocProfitable() [2/2]

bool SITargetLowering::isReassocProfitable ( SelectionDAG &  DAG,
SDValue  N0,
SDValue  N1 
) const
override virtual

◆ isSDNodeSourceOfDivergence()

bool SITargetLowering::isSDNodeSourceOfDivergence ( const SDNode *  N,
FunctionLoweringInfo *  FLI,
UniformityInfo *  UA 
) const
override virtual

◆ isShuffleMaskLegal()

bool SITargetLowering::isShuffleMaskLegal ( ArrayRef< int >  ,
EVT   
) const
override virtual

Targets can use this to indicate that they only support some VECTOR_SHUFFLE operations, those with specific masks.

By default, if a target supports the VECTOR_SHUFFLE node, all mask values are assumed to be legal.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1023 of file SIISelLowering.cpp.

◆ isTypeDesirableForOp()

bool SITargetLowering::isTypeDesirableForOp ( unsigned  ,
EVT  VT 
) const
override virtual

Return true if the target has native support for the specified value type and it is 'desirable' to use the type for the given node type.

e.g. On x86 i16 is legal, but undesirable since i16 instruction encodings are longer and some i16 instructions are slow.

Reimplemented from llvm::TargetLowering.

Definition at line 1951 of file SIISelLowering.cpp.

References llvm::AMDGPUSubtarget::has16BitInsts(), llvm::TargetLowering::isTypeDesirableForOp(), llvm::ISD::LOAD, llvm::ISD::SETCC, and llvm::ISD::STORE.

◆ legalizeTargetIndependentNode()

SDNode * SITargetLowering::legalizeTargetIndependentNode ( SDNode *  Node,
SelectionDAG &  DAG 
) const

◆ LowerAsmOperandForConstraint()

void SITargetLowering::LowerAsmOperandForConstraint ( SDValue  Op,
StringRef  Constraint,
std::vector< SDValue > &  Ops,
SelectionDAG &  DAG 
) const
override virtual

Lower the specified operand into the Ops vector.

If it is invalid, don't add anything to Ops.

Reimplemented from llvm::TargetLowering.

Definition at line 15830 of file SIISelLowering.cpp.

References checkAsmConstraintVal(), clearUnusedBits(), getAsmOperandConstVal(), llvm::SelectionDAG::getTargetConstant(), isImmConstraint(), and llvm::TargetLowering::LowerAsmOperandForConstraint().

◆ LowerCall()

SDValue SITargetLowering::LowerCall ( CallLoweringInfo & ,
SmallVectorImpl< SDValue > &   
) const
override virtual

This hook must be implemented to lower calls into the specified DAG.

The outgoing arguments to the call are described by the Outs array, and the values to be returned by the call are described by the Ins array. The implementation should fill in the InVals array with legal-type return values from the call, and return the resulting token chain value.

Reimplemented from llvm::AMDGPUTargetLowering.

Definition at line 3634 of file SIISelLowering.cpp.

References llvm::ISD::ADD, llvm::AMDGPUTargetLowering::addTokenForArgument(), llvm::CCValAssign::AExt, llvm::CallingConv::AMDGPU_CS_Chain, llvm::CallingConv::AMDGPU_CS_ChainPreserve, llvm::CallingConv::AMDGPU_Gfx, llvm::CCState::AnalyzeCallOperands(), llvm::ISD::ANY_EXTEND, llvm::TargetLowering::CallLoweringInfo::Args, assert(), llvm::CCValAssign::BCvt, llvm::ISD::BITCAST, llvm::AMDGPUISD::CALL, llvm::TargetLowering::CallLoweringInfo::CallConv, llvm::TargetLowering::CallLoweringInfo::Callee, llvm::TargetLowering::CallLoweringInfo::CB, llvm::AMDGPUTargetLowering::CCAssignFnForCall(), llvm::TargetLowering::CallLoweringInfo::Chain, llvm::commonAlignment(), llvm::TargetLowering::CallLoweringInfo::ConvergenceControlToken, llvm::ISD::CONVERGENCECTRL_GLUE, llvm::MachineFrameInfo::CreateFixedObject(), llvm::TargetLowering::CallLoweringInfo::DAG, llvm::TargetLowering::CallLoweringInfo::DL, DL, llvm::SmallVectorBase< Size_T >::empty(), llvm::GCNSubtarget::enableFlatScratch(), llvm::ISD::FP_EXTEND, llvm::CCValAssign::FPExt, llvm::CCValAssign::Full, llvm::SelectionDAG::getCALLSEQ_END(), llvm::SelectionDAG::getCALLSEQ_START(), llvm::SelectionDAG::getConstant(), llvm::SelectionDAG::getContext(), llvm::SelectionDAG::getCopyFromReg(), llvm::SelectionDAG::getCopyToReg(), llvm::MachinePointerInfo::getFixedStack(), llvm::SelectionDAG::getFrameIndex(), llvm::MachineFunction::getFrameInfo(), llvm::MachineFunction::getInfo(), llvm::CCValAssign::getLocInfo(), llvm::CCValAssign::getLocMemOffset(), llvm::CCValAssign::getLocReg(), llvm::CCValAssign::getLocVT(), llvm::SelectionDAG::getMachineFunction(), llvm::SelectionDAG::getMachineNode(), llvm::SelectionDAG::getMemcpy(), llvm::SelectionDAG::getNode(), llvm::SelectionDAG::getRegister(), llvm::GCNSubtarget::getRegisterInfo(), llvm::SelectionDAG::getRegisterMask(), llvm::MachinePointerInfo::getStack(), llvm::GCNSubtarget::getStackAlignment(), llvm::CCState::getStackSize(), llvm::SelectionDAG::getStore(), llvm::MVT::getStoreSize(), llvm::MachineFunction::getTarget(), llvm::SelectionDAG::getTargetConstant(), llvm::SelectionDAG::getTargetGlobalAddress(), llvm::SelectionDAG::getTokenFactor(), llvm::SelectionDAG::getUNDEF(), llvm::SDValue::getValue(), llvm::SDValue::getValueType(), llvm::CCValAssign::getValVT(), llvm::AMDGPUSubtarget::getWavefrontSize(), llvm::TargetOptions::GuaranteedTailCallOpt, Info, llvm::TargetLowering::CallLoweringInfo::Ins, llvm::ISD::INTRINSIC_WO_CHAIN, llvm::AMDGPU::isChainCC(), llvm::SDNode::isDivergent(), isEligibleForTailCallOptimization(), llvm::Type::isIntegerTy(), llvm::CCValAssign::isMemLoc(), llvm::CallBase::isMustTailCall(), llvm::isNullConstant(), llvm::CCValAssign::isRegLoc(), llvm::TargetLowering::CallLoweringInfo::IsTailCall, llvm::TargetLowering::CallLoweringInfo::IsVarArg, llvm_unreachable, LowerCallResult(), llvm::AMDGPUTargetLowering::lowerUnhandledCall(), llvm::TargetLoweringBase::ArgListEntry::Node, llvm::Offset, llvm::TargetMachine::Options, llvm::TargetLowering::CallLoweringInfo::Outs, llvm::TargetLowering::CallLoweringInfo::OutVals, passSpecialInputs(), llvm::AMDGPUAS::PRIVATE_ADDRESS, llvm::SmallVectorTemplateBase< T, bool >::push_back(), llvm::report_fatal_error(), llvm::MachineFrameInfo::setHasTailCall(), llvm::CCValAssign::SExt, llvm::ISD::SIGN_EXTEND, llvm::SmallVectorBase< Size_T >::size(), llvm::AMDGPUISD::TC_RETURN, llvm::AMDGPUISD::TC_RETURN_CHAIN, llvm::AMDGPUISD::TC_RETURN_GFX, llvm::ISD::TokenFactor, TRI, llvm::TargetLoweringBase::ArgListEntry::Ty, llvm::ISD::InputArg::VT, llvm::ISD::ZERO_EXTEND, and llvm::CCValAssign::ZExt.

◆ LowerCallResult()

SDValue SITargetLowering::LowerCallResult ( SDValue  Chain,
SDValue  InGlue,
CallingConv::ID  CallConv,
bool  isVarArg,
const SmallVectorImpl< ISD::InputArg > &  Ins,
const SDLoc &  DL,
SelectionDAG &  DAG,
SmallVectorImpl< SDValue > &  InVals,
bool  isThisReturn,
SDValue  ThisVal 
) const

◆ LowerDYNAMIC_STACKALLOC()

SDValue SITargetLowering::LowerDYNAMIC_STACKALLOC ( SDValue  Op,
SelectionDAG &  DAG 
) const

◆ lowerDYNAMIC_STACKALLOCImpl()

SDValue SITargetLowering::lowerDYNAMIC_STACKALLOCImpl ( SDValue  Op,
SelectionDAG &  DAG 
) const

◆ LowerFormalArguments()

SDValue SITargetLowering::LowerFormalArguments ( SDValue  ,
CallingConv::ID  ,
bool  ,
const SmallVectorImpl< ISD::InputArg > &  ,
const SDLoc & ,
SelectionDAG & ,
SmallVectorImpl< SDValue > &   
) const
override virtual

This hook must be implemented to lower the incoming (formal) arguments, described by the Ins array, into the specified DAG.

The implementation should fill in the InVals array with legal-type argument values, and return the resulting token chain value.

Reimplemented from llvm::TargetLowering.

Definition at line 2796 of file SIISelLowering.cpp.

References llvm::MachineFunction::addLiveIn(), llvm::CCValAssign::AExt, llvm::alignDown(), allocateHSAUserSGPRs(), allocateLDSKernelId(), allocatePreloadKernArgSGPRs(), llvm::CCState::AllocateReg(), allocateSpecialEntryInputVGPRs(), allocateSpecialInputSGPRs(), allocateSpecialInputVGPRsFixed(), allocateSystemSGPRs(), llvm::CallingConv::AMDGPU_CS, llvm::CallingConv::AMDGPU_Gfx, llvm::CallingConv::AMDGPU_PS, llvm::CCState::AnalyzeFormalArguments(), llvm::AMDGPUTargetLowering::analyzeFormalArgumentsCompute(), llvm::SmallVectorImpl< T >::append(), assert(), llvm::ISD::AssertSext, llvm::ISD::AssertZext, llvm::CCValAssign::BCvt, llvm::ISD::BITCAST, llvm::EVT::bitsLT(), llvm::AMDGPUTargetLowering::CCAssignFnForCall(), llvm::EVT::changeTypeToInteger(), llvm::commonAlignment(), llvm::AMDGPUAS::CONSTANT_ADDRESS, contains(), llvm::countr_zero(), llvm::LLVMContext::diagnose(), DL, llvm::SmallVectorBase< Size_T >::empty(), llvm::GCNSubtarget::enableFlatScratch(), llvm::ISD::InputArg::Flags, llvm::CCValAssign::Full, llvm::SelectionDAG::getAddrSpaceCast(), llvm::Pass::getAnalysis(), llvm::Function::getArg(), llvm::SelectionDAG::getBitcast(), llvm::SelectionDAG::getBuildVector(), llvm::SelectionDAG::getConstant(), llvm::SelectionDAG::getContext(), llvm::SelectionDAG::getCopyFromReg(), llvm::SelectionDAG::getEntryNode(), llvm::MachineFunction::getFunction(), llvm::Function::getFunctionType(), llvm::GCNSubtarget::getGeneration(), llvm::MachineFunction::getInfo(), llvm::EVT::getIntegerVT(), llvm::GCNSubtarget::getKnownHighZeroBitsForFrameIndex(), llvm::CCValAssign::getLocInfo(), llvm::CCValAssign::getLocMemOffset(), llvm::CCValAssign::getLocReg(), llvm::CCValAssign::getLocVT(), llvm::SelectionDAG::getMachineFunction(), llvm::SelectionDAG::getMergeValues(), llvm::SelectionDAG::getNode(), llvm::ISD::InputArg::getOrigArgIndex(), llvm::FunctionType::getParamType(), llvm::Argument::getParent(), llvm::SelectionDAG::getPass(), llvm::ISD::ArgFlagsTy::getPointerAddrSpace(), llvm::MachineFunction::getRegInfo(), llvm::GCNSubtarget::getRegisterInfo(), llvm::SDValue::getSimpleValueType(), llvm::EVT::getSizeInBits(), llvm::CCState::getStackSize(), llvm::EVT::getStoreSize(), getSubtarget(), llvm::TargetLoweringBase::getTargetMachine(), llvm::SelectionDAG::getUNDEF(), llvm::SDValue::getValue(), llvm::SDValue::getValueType(), llvm::SelectionDAG::getValueType(), llvm::CCValAssign::getValVT(), llvm::EVT::getVectorVT(), llvm::GCNSubtarget::hasArchitectedSGPRs(), llvm::Argument::hasAttribute(), llvm::GCNUserSGPRUsageInfo::hasDispatchPtr(), llvm::GCNUserSGPRUsageInfo::hasFlatScratchInit(), llvm::GCNSubtarget::hasKernargPreload(), llvm::GCNUserSGPRUsageInfo::hasKernargSegmentPtr(), Info, llvm::AMDGPUSubtarget::isAmdHsaOS(), llvm::AMDGPUSubtarget::isAmdPalOS(), llvm::ISD::ArgFlagsTy::isByRef(), llvm::ISD::ArgFlagsTy::isByVal(), llvm::AMDGPU::isEntryFunctionCC(), llvm::AMDGPU::isGraphics(), llvm::AMDGPU::isKernel(), llvm::CCValAssign::isMemLoc(), llvm::ISD::InputArg::isOrigArg(), llvm::CCValAssign::isRegLoc(), llvm::ISD::ArgFlagsTy::isSRet(), llvm_unreachable, llvm::AMDGPUAS::LOCAL_ADDRESS, MRI, llvm::Offset, processPSInputArgs(), Ptr, llvm::SmallVectorTemplateBase< T, bool >::push_back(), llvm::AMDGPUAS::REGION_ADDRESS, llvm::AMDGPUArgumentUsageInfo::setFuncArgInfo(), llvm::CCValAssign::SExt, llvm::SmallVectorBase< Size_T >::size(), llvm::AMDGPUSubtarget::SOUTHERN_ISLANDS, llvm::ISD::SRL, llvm::ISD::TokenFactor, TRI, llvm::ISD::TRUNCATE, llvm::ISD::InputArg::VT, and llvm::CCValAssign::ZExt.

◆ lowerFP_EXTEND()

SDValue SITargetLowering::lowerFP_EXTEND ( SDValue  Op,
SelectionDAG &  DAG 
) const

◆ lowerGET_FPENV()

SDValue SITargetLowering::lowerGET_FPENV ( SDValue  Op,
SelectionDAG &  DAG 
) const

◆ lowerGET_ROUNDING()

SDValue SITargetLowering::lowerGET_ROUNDING ( SDValue  Op,
SelectionDAG &  DAG 
) const

◆ lowerIdempotentRMWIntoFencedLoad()

LoadInst * SITargetLowering::lowerIdempotentRMWIntoFencedLoad ( AtomicRMWInst *  RMWI) const
override virtual

On some platforms, an AtomicRMW that never actually modifies the value (such as fetch_add of 0) can be turned into a fence followed by an atomic load.

This may sound useless, but it makes it possible for the processor to keep the cacheline shared, dramatically improving performance. And such idempotent RMWs are useful for implementing some kinds of locks, see for example (justification + benchmarks): http://www.hpl.hp.com/techreports/2012/HPL-2012-68.pdf This method tries doing that transformation, returning the atomic load if it succeeds, and nullptr otherwise. If shouldExpandAtomicLoadInIR returns true on that load, it will undergo another round of expansion.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 17160 of file SIISelLowering.cpp.

References llvm::Instruction::copyMetadata(), llvm::IRBuilderBase::CreateAlignedLoad(), llvm::Instruction::eraseFromParent(), llvm::AtomicRMWInst::getAlign(), llvm::AtomicRMWInst::getOrdering(), llvm::AtomicRMWInst::getPointerOperand(), llvm::AtomicRMWInst::getSyncScopeID(), llvm::Value::getType(), llvm::isReleaseOrStronger(), llvm::Value::replaceAllUsesWith(), llvm::LoadInst::setAtomic(), and llvm::Value::takeName().
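
A condensed sketch of the rewrite, reconstructed from the references above (the ordering choice is an assumption; the real definition also copies metadata and consults isReleaseOrStronger):

    // Sketch: replace an idempotent atomicrmw (e.g. "add 0") with an
    // atomic load of the same location.
    LoadInst *rewriteIdempotentRMW(AtomicRMWInst *AI) {
      IRBuilder<> Builder(AI);
      LoadInst *LI = Builder.CreateAlignedLoad(
          AI->getType(), AI->getPointerOperand(), AI->getAlign());
      // Assumption: reuse the RMW's ordering and scope on the load;
      // release-or-stronger orderings need separate handling.
      LI->setAtomic(AI->getOrdering(), AI->getSyncScopeID());
      LI->takeName(AI);
      AI->replaceAllUsesWith(LI);
      AI->eraseFromParent();
      return LI;
    }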

◆ LowerOperation()

SDValue SITargetLowering::LowerOperation ( SDValue  Op,
SelectionDAG &  DAG 
) const
override virtual

This callback is invoked for operations that are unsupported by the target, which are registered to use 'custom' lowering, and whose defined values are all legal.

If the target has no operations that require custom lowering, it need not implement this. The default implementation of this aborts.

Reimplemented from llvm::AMDGPUTargetLowering.

Definition at line 5833 of file SIISelLowering.cpp.

References llvm::ISD::ABS, llvm::ISD::ADD, llvm::ISD::ADDRSPACECAST, assert(), llvm::ISD::ATOMIC_CMP_SWAP, llvm::ISD::BRCOND, llvm::ISD::BSWAP, llvm::ISD::BUILD_VECTOR, llvm::ISD::DEBUGTRAP, llvm::ISD::DYNAMIC_STACKALLOC, llvm::ISD::EXTRACT_VECTOR_ELT, llvm::ISD::FABS, llvm::ISD::FADD, llvm::ISD::FCANONICALIZE, llvm::ISD::FCOS, llvm::ISD::FDIV, llvm::ISD::FFREXP, llvm::ISD::FLDEXP, llvm::ISD::FMA, llvm::ISD::FMAXIMUM, llvm::ISD::FMAXIMUMNUM, llvm::ISD::FMAXNUM, llvm::ISD::FMAXNUM_IEEE, llvm::ISD::FMINIMUM, llvm::ISD::FMINIMUMNUM, llvm::ISD::FMINNUM, llvm::ISD::FMINNUM_IEEE, llvm::ISD::FMUL, llvm::ISD::FNEG, llvm::ISD::FP_EXTEND, llvm::ISD::FP_ROUND, llvm::ISD::FP_TO_SINT, llvm::ISD::FP_TO_UINT, llvm::ISD::FSIN, llvm::ISD::FSQRT, llvm::ISD::GET_FPENV, llvm::ISD::GET_ROUNDING, llvm::MachineFunction::getInfo(), llvm::SelectionDAG::getMachineFunction(), llvm::ISD::GlobalAddress, llvm::ISD::INSERT_SUBVECTOR, llvm::ISD::INSERT_VECTOR_ELT, llvm::ISD::INTRINSIC_VOID, llvm::ISD::INTRINSIC_W_CHAIN, llvm::ISD::INTRINSIC_WO_CHAIN, llvm::ISD::LOAD, LowerDYNAMIC_STACKALLOC(), lowerFP_EXTEND(), llvm::AMDGPUTargetLowering::LowerFP_TO_INT(), lowerGET_FPENV(), lowerGET_ROUNDING(), llvm::AMDGPUTargetLowering::LowerOperation(), lowerPREFETCH(), lowerSET_FPENV(), lowerSET_ROUNDING(), LowerSTACKSAVE(), llvm::ISD::MUL, llvm::ISD::PREFETCH, llvm::ISD::RETURNADDR, llvm::ISD::SADDSAT, llvm::ISD::SCALAR_TO_VECTOR, llvm::ISD::SELECT, llvm::ISD::SET_FPENV, llvm::ISD::SET_ROUNDING, llvm::ISD::SHL, llvm::ISD::SMAX, llvm::ISD::SMIN, llvm::ISD::SMUL_LOHI, llvm::ISD::SMULO, splitBinaryVectorOp(), splitTernaryVectorOp(), splitUnaryVectorOp(), llvm::ISD::SRA, llvm::ISD::SRL, llvm::ISD::SSUBSAT, llvm::ISD::STACKSAVE, llvm::ISD::STORE, llvm::ISD::STRICT_FLDEXP, llvm::ISD::STRICT_FP_EXTEND, llvm::ISD::STRICT_FP_ROUND, llvm::ISD::SUB, llvm::ISD::TRAP, llvm::ISD::UADDSAT, llvm::ISD::UMAX, llvm::ISD::UMIN, llvm::ISD::UMUL_LOHI, llvm::ISD::UMULO, llvm::ISD::USUBSAT, and llvm::ISD::VECTOR_SHUFFLE.
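
The canonical shape of such an override, sketched for a single opcode (MyTargetLowering and lowerDynAlloca are placeholders):

    // Sketch: dispatch custom lowering on the opcode.
    SDValue MyTargetLowering::LowerOperation(SDValue Op,
                                             SelectionDAG &DAG) const {
      switch (Op.getOpcode()) {
      case ISD::DYNAMIC_STACKALLOC:
        return lowerDynAlloca(Op, DAG); // hypothetical helper
      default:
        llvm_unreachable("custom lowering registered but not handled");
      }
    }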

◆ lowerPREFETCH()

SDValue SITargetLowering::lowerPREFETCH ( SDValue  Op,
SelectionDAG &  DAG 
) const

◆ LowerReturn()

SDValue SITargetLowering::LowerReturn ( SDValue  ,
CallingConv::ID  ,
bool  ,
const SmallVectorImpl< ISD::OutputArg > &  ,
const SmallVectorImpl< SDValue > &  ,
const SDLoc & ,
SelectionDAG &   
) const
override virtual

◆ lowerSET_FPENV()

SDValue SITargetLowering::lowerSET_FPENV ( SDValue  Op,
SelectionDAG &  DAG 
) const

◆ lowerSET_ROUNDING()

SDValue SITargetLowering::lowerSET_ROUNDING ( SDValue  Op,
SelectionDAG &  DAG 
) const

◆ LowerSTACKSAVE()

SDValue SITargetLowering::LowerSTACKSAVE ( SDValue  Op,
SelectionDAG &  DAG 
) const

◆ mayBeEmittedAsTailCall()

bool SITargetLowering::mayBeEmittedAsTailCall ( const CallInst * ) const
override virtual

Return true if the target may be able to emit the call instruction as a tail call.

This is used by optimization passes to determine if it's profitable to duplicate return instructions to enable tailcall optimization.

Reimplemented from llvm::TargetLowering.

Definition at line 3623 of file SIISelLowering.cpp.

References llvm::Function::getCallingConv(), llvm::ilist_detail::node_parent_access< NodeTy, ParentTy >::getParent(), llvm::AMDGPU::isEntryFunctionCC(), and llvm::CallInst::isTailCall().

◆ passSpecialInputs()

void SITargetLowering::passSpecialInputs ( CallLoweringInfo &  CLI,
CCState &  CCInfo,
const SIMachineFunctionInfo &  Info,
SmallVectorImpl< std::pair< unsigned, SDValue > > &  RegsToPass,
SmallVectorImpl< SDValue > &  MemOpChains,
SDValue  Chain 
) const

Definition at line 3324 of file SIISelLowering.cpp.

References llvm::CCState::AllocateReg(), llvm::CCState::AllocateStack(), assert(), llvm::TargetLowering::CallLoweringInfo::CB, llvm::ArgDescriptor::createArg(), llvm::TargetLowering::CallLoweringInfo::DAG, llvm::AMDGPUFunctionArgInfo::DISPATCH_ID, llvm::AMDGPUFunctionArgInfo::DISPATCH_PTR, llvm::TargetLowering::CallLoweringInfo::DL, DL, F, llvm::AMDGPUArgumentUsageInfo::FixedABIFunctionInfo, llvm::Pass::getAnalysis(), llvm::CallBase::getCalledFunction(), llvm::SelectionDAG::getConstant(), llvm::MachineFunction::getFunction(), llvm::AMDGPUMachineFunction::getLDSKernelIdMetadata(), llvm::SelectionDAG::getMachineFunction(), llvm::AMDGPUSubtarget::getMaxWorkitemID(), llvm::SDValue::getNode(), llvm::SelectionDAG::getNode(), llvm::SelectionDAG::getPass(), llvm::AMDGPUFunctionArgInfo::getPreloadedValue(), llvm::GCNSubtarget::getRegisterInfo(), llvm::SelectionDAG::getShiftAmountConstant(), llvm::EVT::getStoreSize(), llvm::SelectionDAG::getUNDEF(), llvm::CallBase::hasFnAttr(), llvm::AMDGPUFunctionArgInfo::IMPLICIT_ARG_PTR, ImplicitAttrs, Info, llvm::ArgDescriptor::isMasked(), llvm::AMDGPUFunctionArgInfo::LDS_KERNEL_ID, llvm::AMDGPUTargetLowering::loadInputValue(), llvm::ISD::OR, llvm::SmallVectorTemplateBase< T, bool >::push_back(), llvm::AMDGPUFunctionArgInfo::QUEUE_PTR, llvm::report_fatal_error(), llvm::ISD::SHL, llvm::AMDGPUTargetLowering::storeStackInputValue(), TRI, llvm::AMDGPUFunctionArgInfo::WORKGROUP_ID_X, llvm::AMDGPUFunctionArgInfo::WORKGROUP_ID_Y, llvm::AMDGPUFunctionArgInfo::WORKGROUP_ID_Z, llvm::AMDGPUFunctionArgInfo::WORKITEM_ID_X, llvm::AMDGPUFunctionArgInfo::WORKITEM_ID_Y, llvm::AMDGPUFunctionArgInfo::WORKITEM_ID_Z, llvm::AMDGPUFunctionArgInfo::WorkItemIDX, llvm::AMDGPUFunctionArgInfo::WorkItemIDY, llvm::AMDGPUFunctionArgInfo::WorkItemIDZ, and Y.

Referenced by LowerCall().

◆ PerformDAGCombine()

SDValue SITargetLowering::PerformDAGCombine ( SDNode *  N,
DAGCombinerInfo &  DCI 
) const
override virtual

This method will be invoked for all target nodes and for any target-independent nodes that the target has registered to be invoked for.

The semantics are as follows:
Return Value:
SDValue.Val == 0 - No change was made.
SDValue.Val == N - N was replaced, is dead, and is already handled.
otherwise - N should be replaced by the returned operand.

In addition, methods provided by DAGCombinerInfo may be used to perform more complex transformations.

Reimplemented from llvm::AMDGPUTargetLowering.

Definition at line 14932 of file SIISelLowering.cpp.

References llvm::ISD::ADD, llvm::ISD::AND, llvm::ISD::ANY_EXTEND, llvm::ISD::BITCAST, llvm::AMDGPUISD::CLAMP, llvm::AMDGPUISD::CVT_F32_UBYTE0, llvm::AMDGPUISD::CVT_F32_UBYTE1, llvm::AMDGPUISD::CVT_F32_UBYTE2, llvm::AMDGPUISD::CVT_F32_UBYTE3, llvm::AMDGPUISD::CVT_PKRTZ_F16_F32, llvm::TargetLowering::DAGCombinerInfo::DAG, llvm::ISD::EXTRACT_VECTOR_ELT, llvm::ISD::FADD, llvm::ISD::FCANONICALIZE, llvm::ISD::FCOPYSIGN, llvm::ISD::FDIV, llvm::ISD::FLDEXP, llvm::ISD::FMA, llvm::AMDGPUISD::FMAX_LEGACY, llvm::ISD::FMAXIMUM, llvm::ISD::FMAXNUM, llvm::ISD::FMAXNUM_IEEE, llvm::AMDGPUISD::FMED3, llvm::AMDGPUISD::FMIN_LEGACY, llvm::ISD::FMINIMUM, llvm::ISD::FMINNUM, llvm::ISD::FMINNUM_IEEE, llvm::ISD::FMUL, llvm::AMDGPUISD::FP_CLASS, llvm::ISD::FP_ROUND, llvm::AMDGPUISD::FRACT, llvm::ISD::FSHR, llvm::ISD::FSUB, llvm::GCNSubtarget::getInstrInfo(), llvm::SelectionDAG::getNode(), getSubtarget(), llvm::TargetLoweringBase::getTargetMachine(), llvm::ISD::INSERT_VECTOR_ELT, llvm::TargetLowering::DAGCombinerInfo::isBeforeLegalize(), llvm::ISD::LOAD, matchPERM(), llvm::ISD::MUL, N, llvm::None, llvm::ISD::OR, llvm::AMDGPUTargetLowering::PerformDAGCombine(), llvm::AMDGPUISD::RCP, llvm::AMDGPUISD::RCP_IFLAG, llvm::AMDGPUISD::RCP_LEGACY, llvm::AMDGPUISD::RSQ, llvm::AMDGPUISD::RSQ_CLAMP, llvm::ISD::SCALAR_TO_VECTOR, llvm::ISD::SELECT, llvm::ISD::SETCC, llvm::ISD::SHL, llvm::ISD::SIGN_EXTEND_INREG, llvm::ISD::SINT_TO_FP, llvm::ISD::SMAX, llvm::ISD::SMIN, llvm::ISD::SRA, llvm::ISD::SRL, llvm::ISD::SUB, TII, llvm::ISD::UADDO_CARRY, llvm::ISD::UINT_TO_FP, llvm::ISD::UMAX, llvm::ISD::UMIN, llvm::ISD::USUBO_CARRY, llvm::ISD::XOR, and llvm::ISD::ZERO_EXTEND.
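
A sketch of the return-value contract on a deliberately tiny combine (MyTargetLowering is a placeholder; this fold is illustrative, not part of the SI combine set):

    // Sketch: fold (fadd X, -0.0) --> X, honoring the contract above.
    SDValue MyTargetLowering::PerformDAGCombine(SDNode *N,
                                                DAGCombinerInfo &DCI) const {
      if (N->getOpcode() == ISD::FADD)
        if (auto *C = dyn_cast<ConstantFPSDNode>(N->getOperand(1)))
          if (C->getValueAPF().isNegZero())
            return N->getOperand(0); // N should be replaced by this value
      return SDValue();              // no change was made
    }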

◆ PostISelFolding()

SDNode * SITargetLowering::PostISelFolding ( MachineSDNode *  Node,
SelectionDAG &  DAG 
) const
override virtual

◆ ReplaceNodeResults()

void SITargetLowering::ReplaceNodeResults ( SDNode * ,
SmallVectorImpl< SDValue > &  ,
SelectionDAG &   
) const
override virtual

This callback is invoked when a node result type is illegal for the target, and the operation was registered to use 'custom' lowering for that result type.

The target places new result values for the node in Results (their number and types must exactly match those of the original return values of the node), or leaves Results empty, which indicates that the node is not to be custom lowered after all.

If the target has no operations that require custom lowering, it need not implement this. The default implementation aborts.

Reimplemented from llvm::AMDGPUTargetLowering.

Definition at line 6384 of file SIISelLowering.cpp.

References llvm::ISD::AND, llvm::ISD::ANY_EXTEND, assert(), llvm::ISD::BITCAST, llvm::EVT::bitsLT(), llvm::AMDGPUISD::CVT_PK_I16_I32, llvm::AMDGPUISD::CVT_PK_U16_U32, llvm::AMDGPUISD::CVT_PKNORM_I16_F32, llvm::AMDGPUISD::CVT_PKNORM_U16_F32, llvm::AMDGPUISD::CVT_PKRTZ_F16_F32, DL, llvm::ISD::EXTRACT_VECTOR_ELT, llvm::ISD::FABS, llvm::ISD::FNEG, llvm::ISD::FSQRT, llvm::DataLayout::getABITypeAlign(), llvm::SelectionDAG::getConstant(), llvm::SelectionDAG::getContext(), llvm::SelectionDAG::getDataLayout(), llvm::SelectionDAG::getEntryNode(), llvm::AMDGPUTargetLowering::getEquivalentMemType(), llvm::SelectionDAG::getMachineFunction(), llvm::MachineFunction::getMachineMemOperand(), llvm::SelectionDAG::getMemIntrinsicNode(), llvm::SelectionDAG::getNode(), llvm::EVT::getStoreSize(), llvm::SelectionDAG::getTargetConstant(), llvm::EVT::getTypeForEVT(), llvm::SelectionDAG::getVTList(), llvm::GCNSubtarget::hasScalarSubwordLoads(), I, llvm::ISD::INSERT_VECTOR_ELT, llvm::ISD::INTRINSIC_W_CHAIN, llvm::ISD::INTRINSIC_WO_CHAIN, llvm::TargetLoweringBase::isTypeLegal(), LHS, llvm::ISD::MERGE_VALUES, llvm::MachineMemOperand::MODereferenceable, llvm::MachineMemOperand::MOInvariant, llvm::MachineMemOperand::MOLoad, N, llvm::Offset, llvm::AMDGPUTargetLowering::ReplaceNodeResults(), Results, RHS, llvm::AMDGPUISD::SBUFFER_LOAD_UBYTE, llvm::ISD::SELECT, llvm::ISD::TRUNCATE, and llvm::ISD::XOR.
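
A sketch of the contract: push exactly one replacement per original result, or leave Results empty to decline (MyTargetLowering and widenToLegalType are placeholders):

    // Sketch: custom-legalize an illegal result type by computing in a
    // wider legal type and truncating back.
    void MyTargetLowering::ReplaceNodeResults(SDNode *N,
                                              SmallVectorImpl<SDValue> &Results,
                                              SelectionDAG &DAG) const {
      SDLoc DL(N);
      SDValue Wide = widenToLegalType(N, DAG); // hypothetical helper
      Results.push_back(
          DAG.getNode(ISD::TRUNCATE, DL, N->getValueType(0), Wide));
    }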

◆ requiresUniformRegister()

bool SITargetLowering::requiresUniformRegister ( MachineFunction &  MF,
const Value *   
) const
override virtual

Allows target to decide about the register class of the specific value that is live outside the defining block.

Returns true if the value needs uniform register class.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 16847 of file SIISelLowering.cpp.

References llvm::TargetLowering::ComputeConstraintToUse(), llvm::MachineFunction::getDataLayout(), getRegForInlineAsmConstraint(), llvm::GCNSubtarget::getRegisterInfo(), llvm::AMDGPUSubtarget::getWavefrontSize(), hasCFUser(), llvm::InlineAsm::isOutput, llvm::SIRegisterInfo::isSGPRClass(), and llvm::TargetLowering::ParseConstraints().

◆ shouldConvertConstantLoadToIntImm()

bool SITargetLowering::shouldConvertConstantLoadToIntImm ( const APInt &  Imm,
Type *  Ty 
) const
override virtual

Return true if it is beneficial to convert a load of a constant to just the constant itself.

On some targets it might be more efficient to use a combination of arithmetic instructions to materialize the constant instead of loading it from a constant pool.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 1936 of file SIISelLowering.cpp.

◆ shouldEmitFixup()

bool SITargetLowering::shouldEmitFixup ( const GlobalValue *  GV) const

◆ shouldEmitGOTReloc()

bool SITargetLowering::shouldEmitGOTReloc ( const GlobalValue *  GV) const

◆ shouldEmitPCReloc()

bool SITargetLowering::shouldEmitPCReloc ( const GlobalValue *  GV) const
Returns
True if PC-relative relocation needs to be emitted for given global value GV, false otherwise.

Definition at line 6618 of file SIISelLowering.cpp.

References shouldEmitFixup(), and shouldEmitGOTReloc().

Referenced by llvm::AMDGPULegalizerInfo::legalizeGlobalValue().

◆ shouldExpandAtomicCmpXchgInIR()

TargetLowering::AtomicExpansionKind SITargetLowering::shouldExpandAtomicCmpXchgInIR ( AtomicCmpXchgInst *  AI) const
override virtual

◆ shouldExpandAtomicLoadInIR()

TargetLowering::AtomicExpansionKind SITargetLowering::shouldExpandAtomicLoadInIR ( LoadInst *  LI) const
override virtual

Returns how the given (atomic) load should be expanded by the IR-level AtomicExpand pass.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 16746 of file SIISelLowering.cpp.

References llvm::LoadInst::getPointerAddressSpace(), llvm::TargetLoweringBase::None, llvm::TargetLoweringBase::NotAtomic, and llvm::AMDGPUAS::PRIVATE_ADDRESS.

◆ shouldExpandAtomicRMWInIR()

TargetLowering::AtomicExpansionKind SITargetLowering::shouldExpandAtomicRMWInIR ( AtomicRMWInst *  RMW) const
override virtual

Returns how the IR-level AtomicExpand pass should expand the given AtomicRMW, if at all.

Default is to never expand.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 16518 of file SIISelLowering.cpp.

References llvm::AtomicRMWInst::Add, llvm::AtomicRMWInst::And, atomicIgnoresDenormalModeOrFPModeIsFTZ(), atomicSupportedIfLegalIntType(), llvm::AMDGPUAS::BUFFER_FAT_POINTER, llvm::TargetLoweringBase::CmpXChg, DL, llvm::OptimizationRemarkEmitter::emit(), emitAtomicRMWLegalRemark(), llvm::TargetLoweringBase::Expand, llvm::AtomicRMWInst::FAdd, llvm::AMDGPUAS::FLAT_ADDRESS, flatInstrMayAccessPrivate(), llvm::AtomicRMWInst::FMax, llvm::AtomicRMWInst::FMin, llvm::AtomicRMWInst::FSub, llvm::Value::getContext(), llvm::Function::getDataLayout(), llvm::Instruction::getFunction(), llvm::AtomicRMWInst::getOperation(), llvm::LLVMContext::getOrInsertSyncScopeID(), llvm::AtomicRMWInst::getPointerAddressSpace(), llvm::AtomicRMWInst::getSyncScopeID(), llvm::Value::getType(), llvm::AtomicRMWInst::getValOperand(), globalMemoryFPAtomicIsLegal(), llvm::GCNSubtarget::hasAtomicBufferGlobalPkAddF16Insts(), llvm::GCNSubtarget::hasAtomicBufferGlobalPkAddF16NoRtnInsts(), llvm::GCNSubtarget::hasAtomicBufferPkAddBF16Inst(), llvm::GCNSubtarget::hasAtomicDsPkAdd16Insts(), llvm::GCNSubtarget::hasAtomicFaddNoRtnInsts(), llvm::GCNSubtarget::hasAtomicFaddRtnInsts(), llvm::GCNSubtarget::hasAtomicFlatPkAdd16Insts(), llvm::GCNSubtarget::hasAtomicFMinFMaxF32FlatInsts(), llvm::GCNSubtarget::hasAtomicFMinFMaxF32GlobalInsts(), llvm::GCNSubtarget::hasAtomicFMinFMaxF64FlatInsts(), llvm::GCNSubtarget::hasAtomicFMinFMaxF64GlobalInsts(), llvm::GCNSubtarget::hasAtomicGlobalPkAddBF16Inst(), llvm::GCNSubtarget::hasFlatAtomicFaddF32Inst(), llvm::GCNSubtarget::hasFlatBufferGlobalAtomicFaddF64Inst(), llvm::GCNSubtarget::hasLDSFPAtomicAddF32(), llvm::GCNSubtarget::hasLDSFPAtomicAddF64(), llvm::GCNSubtarget::hasMemoryAtomicFaddF32DenormalSupport(), isAtomicRMWLegalXChgTy(), llvm::Type::isDoubleTy(), llvm::AMDGPU::isExtendedGlobalAddrSpace(), llvm::AMDGPU::isFlatGlobalAddrSpace(), llvm::Type::isFloatTy(), llvm::Constant::isNullValue(), isV2BF16(), isV2F16(), isV2F16OrV2BF16(), llvm_unreachable, llvm::AMDGPUAS::LOCAL_ADDRESS, llvm::AtomicRMWInst::Max, llvm::AtomicRMWInst::Min, llvm::AtomicRMWInst::Nand, llvm::TargetLoweringBase::None, llvm::TargetLoweringBase::NotAtomic, llvm::AtomicRMWInst::Or, llvm::AMDGPUAS::PRIVATE_ADDRESS, llvm::AtomicRMWInst::Sub, llvm::SyncScope::System, llvm::AtomicRMWInst::UDecWrap, llvm::AtomicRMWInst::UIncWrap, llvm::AtomicRMWInst::UMax, llvm::AtomicRMWInst::UMin, llvm::Value::use_empty(), llvm::AtomicRMWInst::Xchg, and llvm::AtomicRMWInst::Xor.
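
For orientation, a sketch of the decision shape only (deliberately far simpler than the SI logic referenced above; MyTargetLowering is a placeholder):

    // Sketch: expand FP atomics via a compare-exchange loop, keep the rest.
    TargetLowering::AtomicExpansionKind
    MyTargetLowering::shouldExpandAtomicRMWInIR(AtomicRMWInst *RMW) const {
      switch (RMW->getOperation()) {
      case AtomicRMWInst::FAdd:
      case AtomicRMWInst::FSub:
        return AtomicExpansionKind::CmpXChg; // lower to cmpxchg loop
      default:
        return AtomicExpansionKind::None;    // natively supported
      }
    }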

◆ shouldExpandAtomicStoreInIR()

TargetLowering::AtomicExpansionKind SITargetLowering::shouldExpandAtomicStoreInIR ( StoreInst *  SI) const
override virtual

Returns how the given (atomic) store should be expanded by the IR-level AtomicExpand pass.

For instance AtomicExpansionKind::Expand will try to use an atomicrmw xchg.

Reimplemented from llvm::TargetLoweringBase.

Definition at line 16753 of file SIISelLowering.cpp.

References llvm::TargetLoweringBase::None, llvm::TargetLoweringBase::NotAtomic, and llvm::AMDGPUAS::PRIVATE_ADDRESS.

◆ shouldExpandVectorDynExt() [1/2]

bool SITargetLowering::shouldExpandVectorDynExt ( SDNode *  N) const

◆ shouldExpandVectorDynExt() [2/2]

bool SITargetLowering::shouldExpandVectorDynExt ( unsigned  EltSize,
unsigned  NumElem,
bool  IsDivergentIdx,
const GCNSubtarget &  Subtarget 
)
static

Check if EXTRACT_VECTOR_ELT/INSERT_VECTOR_ELT (<n x e>, var-idx) should be expanded into a set of cmp/select instructions.

Definition at line 13519 of file SIISelLowering.cpp.

References llvm::GCNSubtarget::hasMovrel(), UseDivergentRegisterIndexing, and llvm::GCNSubtarget::useVGPRIndexMode().

Referenced by shouldExpandVectorDynExt().
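
When this returns true, a variable-index extract is rewritten as a compare/select chain instead of indexed register addressing. A scalar model of the expanded form for a 4-element vector (illustrative only):

    // Scalar model of the cmp/select expansion of extractelement.
    float extractDynamic(const float V[4], unsigned Idx) {
      float R = V[0];
      R = (Idx == 1) ? V[1] : R;
      R = (Idx == 2) ? V[2] : R;
      R = (Idx == 3) ? V[3] : R;
      return R;
    }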

◆ shouldUseLDSConstAddress()

bool SITargetLowering::shouldUseLDSConstAddress ( const GlobalValue *  GV) const
Returns
true if this should use a literal constant for an LDS address, and not emit a relocation for an LDS global.

Definition at line 6622 of file SIISelLowering.cpp.

References llvm::Triple::AMDHSA, llvm::Triple::AMDPAL, llvm::Triple::getOS(), llvm::TargetLoweringBase::getTargetMachine(), llvm::TargetMachine::getTargetTriple(), llvm::GlobalValue::hasExternalLinkage(), and OS.

Referenced by llvm::AMDGPULegalizerInfo::legalizeGlobalValue().

◆ splitBinaryVectorOp()

SDValue SITargetLowering::splitBinaryVectorOp ( SDValue  Op,
SelectionDAG &  DAG 
) const

◆ splitKillBlock()

MachineBasicBlock * SITargetLowering::splitKillBlock ( MachineInstr &  MI,
MachineBasicBlock *  BB 
) const

◆ splitTernaryVectorOp()

SDValue SITargetLowering::splitTernaryVectorOp ( SDValue  Op,
SelectionDAG &  DAG 
) const

◆ splitUnaryVectorOp()

SDValue SITargetLowering::splitUnaryVectorOp ( SDValue  Op,
SelectionDAG &  DAG 
) const

◆ supportSplitCSR()

bool SITargetLowering::supportSplitCSR ( MachineFunction *  MF) const
override virtual

Return true if the target supports that a subset of CSRs for the given machine function is handled explicitly via copies.

Reimplemented from llvm::TargetLowering.

Definition at line 2754 of file SIISelLowering.cpp.

References llvm::MachineFunction::getInfo(), and Info.

◆ wrapAddr64Rsrc()

MachineSDNode * SITargetLowering::wrapAddr64Rsrc ( SelectionDAG &  DAG,
const SDLoc &  DL,
SDValue  Ptr 
) const

The documentation for this class was generated from the following files: