std::optional<RegOrConstant>
AArch64GISelUtils::getAArch64VectorSplat(const MachineInstr &MI,
                                         const MachineRegisterInfo &MRI) {
  // ...
  if (MI.getOpcode() != AArch64::G_DUP)
    return std::nullopt;
  // ...
}

std::optional<int64_t>
AArch64GISelUtils::getAArch64VectorSplatScalar(const MachineInstr &MI,
                                               const MachineRegisterInfo &MRI) {
  auto Splat = getAArch64VectorSplat(MI, MRI);
  if (!Splat || Splat->isReg())
    return std::nullopt;
  return Splat->getCst();
}

bool AArch64GISelUtils::isCMN(const MachineInstr *MaybeSub,
                              const CmpInst::Predicate &Pred,
                              const MachineRegisterInfo &MRI) {
  if (!MaybeSub || MaybeSub->getOpcode() != TargetOpcode::G_SUB ||
      !CmpInst::isEquality(Pred))
    return false;
  auto MaybeZero =
      getIConstantVRegValWithLookThrough(MaybeSub->getOperand(1).getReg(), MRI);
  return MaybeZero && MaybeZero->Value.getZExtValue() == 0;
}

bool AArch64GISelUtils::tryEmitBZero(MachineInstr &MI,
                                     MachineIRBuilder &MIRBuilder,
                                     bool MinSize) {
  assert(MI.getOpcode() == TargetOpcode::G_MEMSET);
  auto &MRI = *MIRBuilder.getMRI();
  auto &TLI = *MIRBuilder.getMF().getSubtarget().getTargetLowering();
  if (!TLI.getLibcallName(RTLIB::BZERO))
    return false;
  auto Zero =
      getIConstantVRegValWithLookThrough(MI.getOperand(1).getReg(), MRI);
  if (!Zero || Zero->Value.getSExtValue() != 0)
    return false;
  // bzero is not faster than memset for sizes <= 256, so if the size is
  // known and small, keep the memset unless optimizing for size.
  if (!MinSize) {
    if (auto Size = getIConstantVRegValWithLookThrough(
            MI.getOperand(2).getReg(), MRI)) {
      if (Size->Value.getSExtValue() <= 256)
        return false;
    }
  }
  MIRBuilder.setInstrAndDebugLoc(MI);
  MIRBuilder
      .buildInstr(TargetOpcode::G_BZERO, {},
                  {MI.getOperand(0), MI.getOperand(2)})
      .addImm(MI.getOperand(3).getImm())
      .addMemOperand(*MI.memoperands_begin());
  MI.eraseFromParent();
  return true;
}
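The decision logic in tryEmitBZero can be sketched as a standalone predicate (the function and parameter names below are hypothetical, not LLVM API): emit G_BZERO only when the stored value is zero, the target provides a bzero libcall, and the memset is not provably small unless we are optimizing for size.

```cpp
#include <cstdint>
#include <optional>

// Sketch of the tryEmitBZero decision above (illustrative, not LLVM code).
bool shouldUseBZero(bool targetHasBZero, int64_t storedValue,
                    std::optional<int64_t> knownSize, bool minSize) {
  if (!targetHasBZero)
    return false;                       // no bzero libcall on this target
  if (storedValue != 0)
    return false;                       // bzero can only store zeros
  if (!minSize && knownSize && *knownSize <= 256)
    return false;                       // small memsets are not faster as bzero
  return true;                          // unknown or large size: prefer bzero
}
```

When the size operand is not a known constant, the sketch (like the real combine) optimistically assumes bzero is the better choice.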
@ FCMP_ULE
1 1 0 1 True if unordered, less than, or equal
void changeVectorFCMPPredToAArch64CC(const CmpInst::Predicate P, AArch64CC::CondCode &CondCode, AArch64CC::CondCode &CondCode2, bool &Invert)
Find the AArch64 condition codes necessary to represent P for a vector floating point comparison.
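One reason a single vector predicate may need two condition codes, or one code plus the Invert flag, is IEEE unordered semantics: an unordered predicate is exactly the negation of its ordered complement. The sketch below (illustrative C++, not LLVM code) shows this for ULE versus OGT; because NaN comparisons are false in C++, `!(x > y)` gives precisely the "unordered, less than, or equal" behavior.

```cpp
#include <cmath>

// IEEE semantics sketch: FCMP_OGT is false when either input is NaN, so its
// negation is FCMP_ULE ("true if unordered, less than, or equal"). This is
// why an unordered predicate can be lowered as one ordered compare + Invert.
bool fcmpOGT(double x, double y) { return x > y; }     // false if unordered
bool fcmpULE(double x, double y) { return !(x > y); }  // true if unordered, <, or ==
```

For a NaN input, fcmpOGT is false and fcmpULE is true, so `fcmpULE == !fcmpOGT` holds for every input pair.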
This is an optimization pass for GlobalISel generic memory operations.
Predicate
This enumeration lists the possible predicates for CmpInst subclasses.
MachineRegisterInfo - Keep track of information for virtual and physical registers,...
MachineRegisterInfo * getMRI()
Getter for MRI.
std::optional< ValueAndVReg > getIConstantVRegValWithLookThrough(Register VReg, const MachineRegisterInfo &MRI, bool LookThroughInstrs=true)
If VReg is defined by a statically evaluable chain of instructions rooted on a G_CONSTANT returns its...
@ FCMP_ONE
0 1 1 0 True if ordered and operands are unequal
void changeFCMPPredToAArch64CC(const CmpInst::Predicate P, AArch64CC::CondCode &CondCode, AArch64CC::CondCode &CondCode2)
Find the AArch64 condition codes necessary to represent P for a scalar floating point comparison.
Predicate getInversePredicate() const
For example, EQ -> NE, UGT -> ULE, SLT -> SGE, OEQ -> UNE, UGT -> OLE, OLT -> UGE,...
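The floating-point predicate values are a four-bit encoding (the U L G E columns listed on this page), so inverting an fcmp predicate amounts to complementing those four bits, e.g. OEQ (0001) -> UNE (1110). The sketch below uses a local enum with LLVM's CmpInst encoding values rather than the LLVM headers; LLVM itself implements getInversePredicate differently, but the bit identity holds for the FP predicates.

```cpp
// Four-bit U L G E encoding of the fcmp predicates shown on this page.
enum FCmpPred : unsigned {
  FCMP_OEQ = 0b0001, FCMP_OGT = 0b0010, FCMP_OLE = 0b0101,
  FCMP_UGT = 0b1010, FCMP_ULE = 0b1101, FCMP_UNE = 0b1110,
};

// Inverting an FP predicate complements all four condition bits.
unsigned inverseFCmpPred(unsigned p) { return p ^ 0b1111; }
```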
bool isCMN(const MachineInstr *MaybeSub, const CmpInst::Predicate &Pred, const MachineRegisterInfo &MRI)
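The identity behind the CMN fold that isCMN guards can be shown in plain two's-complement arithmetic: for equality comparisons, `x == -y` exactly when `x + y` wraps to zero, so a compare-against-negation can become a compare-with-add (CMN). Ordered comparisons are excluded because overflow changes the condition flags. A minimal sketch, with hypothetical helper names:

```cpp
#include <cstdint>

// cmp(x, 0 - y) as an equality test...
bool eqViaSub(uint32_t x, uint32_t y) { return x == (0u - y); }
// ...is equivalent to cmn(x, y), i.e. testing whether x + y wraps to 0.
bool eqViaCmn(uint32_t x, uint32_t y) { return (x + y) == 0u; }
```

The two functions agree on every input pair, which is why the fold is safe for the equality predicates that isCMN requires.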
Represents a value which can be a Register or a constant.
@ FCMP_OGT
0 0 1 0 True if ordered and greater than
std::optional< RegOrConstant > getVectorSplat(const MachineInstr &MI, const MachineRegisterInfo &MRI)
@ FCMP_ULT
1 1 0 0 True if unordered or less than
const MachineOperand & getOperand(unsigned i) const
@ FCMP_UGE
1 0 1 1 True if unordered, greater than, or equal
@ FCMP_UNO
1 0 0 0 True if unordered: isnan(X) | isnan(Y)
MachineFunction & getMF()
Getter for the function we currently build.
@ FCMP_OEQ
0 0 0 1 True if ordered and equal
@ FCMP_OLT
0 1 0 0 True if ordered and less than
std::optional< RegOrConstant > getAArch64VectorSplat(const MachineInstr &MI, const MachineRegisterInfo &MRI)
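A RegOrConstant ("a value which can be a Register or a constant", above) can be modeled with std::variant; getAArch64VectorSplatScalar then simply filters out the register case. The types below are a hypothetical model, not LLVM's:

```cpp
#include <cstdint>
#include <optional>
#include <variant>

// Hypothetical stand-in for RegOrConstant: a vreg number or an immediate.
struct Reg { unsigned id; };
using RegOrConst = std::variant<Reg, int64_t>;

// Mirror of the getAArch64VectorSplatScalar filtering: keep only splats
// whose value is a known constant.
std::optional<int64_t> splatScalar(std::optional<RegOrConst> splat) {
  if (!splat || std::holds_alternative<Reg>(*splat))
    return std::nullopt;              // no splat, or splat of an unknown register
  return std::get<int64_t>(*splat);   // splat of a known constant
}
```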
const TargetSubtargetInfo & getSubtarget() const
getSubtarget - Return the subtarget for which this machine code is being compiled.
bool tryEmitBZero(MachineInstr &MI, MachineIRBuilder &MIRBuilder, bool MinSize)
Replace a G_MEMSET with a value of 0 with a G_BZERO instruction if it is supported and beneficial to ...
Helper class to build MachineInstr.
Representation of each machine instruction.
void setInstrAndDebugLoc(MachineInstr &MI)
Set the insertion point to before MI, and set the debug loc to MI's loc.
@ FCMP_OGE
0 0 1 1 True if ordered and greater than or equal
const MachineInstrBuilder & addMemOperand(MachineMemOperand *MMO) const
CondCode
ISD::CondCode enum - These are ordered carefully to make the bitfields below work out,...
Register getReg() const
getReg - Returns the register number.
MachineInstrBuilder buildInstr(unsigned Opcode)
Build and insert <empty> = Opcode <empty>.
unsigned getOpcode() const
Returns the opcode of this MachineInstr.
#define llvm_unreachable(msg)
Marks that the current location is not supposed to be reachable.
Wrapper class representing virtual and physical registers.
@ FCMP_UGT
1 0 1 0 True if unordered or greater than
std::optional< int64_t > getAArch64VectorSplatScalar(const MachineInstr &MI, const MachineRegisterInfo &MRI)
bool isEquality() const
Determine if this is an equals/not equals predicate.
virtual const TargetLowering * getTargetLowering() const
@ FCMP_UNE
1 1 1 0 True if unordered or not equal
@ FCMP_OLE
0 1 0 1 True if ordered and less than or equal
std::optional< ValueAndVReg > getAnyConstantVRegValWithLookThrough(Register VReg, const MachineRegisterInfo &MRI, bool LookThroughInstrs=true, bool LookThroughAnyExt=false)
If VReg is defined by a statically evaluable chain of instructions rooted on a G_CONSTANT or G_FCONST...
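The "look through" in these helpers means walking a vreg's def chain (e.g. through copy-like instructions) until a constant definition is reached. A toy version over a map of definitions (illustrative only; the real helpers walk MachineRegisterInfo):

```cpp
#include <cstdint>
#include <map>
#include <optional>

// Toy def record: a vreg is defined either by a constant or by a copy of
// another vreg (hypothetical model, not LLVM's MachineRegisterInfo).
struct Def { bool isConst; int64_t value; unsigned copyOf; };

std::optional<int64_t>
lookThroughConstant(unsigned vreg, const std::map<unsigned, Def> &defs) {
  auto it = defs.find(vreg);
  while (it != defs.end() && !it->second.isConst)
    it = defs.find(it->second.copyOf);  // step through COPY-like defs
  if (it == defs.end())
    return std::nullopt;                // chain did not end in a constant
  return it->second.value;
}
```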
@ FCMP_ORD
0 1 1 1 True if ordered (no nans)
@ FCMP_UEQ
1 0 0 1 True if unordered or equal