//===-- SystemZISelLowering.h - SystemZ DAG lowering interface --*- C++ -*-===//
//
// The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
//
// This file defines the interfaces that SystemZ uses to lower LLVM code into a
// selection DAG.
//
//===----------------------------------------------------------------------===//

#ifndef LLVM_LIB_TARGET_SYSTEMZ_SYSTEMZISELLOWERING_H
#define LLVM_LIB_TARGET_SYSTEMZ_SYSTEMZISELLOWERING_H

#include "SystemZ.h"
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/SelectionDAG.h"
#include "llvm/CodeGen/TargetLowering.h"

namespace llvm {
namespace SystemZISD {
enum NodeType : unsigned {
  FIRST_NUMBER = ISD::BUILTIN_OP_END,

  // Return with a flag operand. Operand 0 is the chain operand.
  RET_FLAG,

  // Calls a function. Operand 0 is the chain operand and operand 1
  // is the target address. The arguments start at operand 2.
  // There is an optional glue operand at the end.
  CALL,
  SIBCALL,

  // TLS calls. Like regular calls, except operand 1 is the TLS symbol.
  // (The call target is implicitly __tls_get_offset.)
  TLS_GDCALL,
  TLS_LDCALL,

  // Wraps a TargetGlobalAddress that should be loaded using PC-relative
  // accesses (LARL). Operand 0 is the address.
  PCREL_WRAPPER,

  // Used in cases where an offset is applied to a TargetGlobalAddress.
  // Operand 0 is the full TargetGlobalAddress and operand 1 is a
  // PCREL_WRAPPER for an anchor point. This is used so that we can
  // cheaply refer to either the full address or the anchor point
  // as a register base.
  PCREL_OFFSET,

  // Integer absolute.
  IABS,

  // Integer comparisons. There are three operands: the two values
  // to compare, and an integer of type SystemZICMP.
  ICMP,

  // Floating-point comparisons. The two operands are the values to compare.
  FCMP,

  // Test under mask. The first operand is ANDed with the second operand
  // and the condition codes are set on the result. The third operand is
  // a boolean that is true if the condition codes need to distinguish
  // between CCMASK_TM_MIXED_MSB_0 and CCMASK_TM_MIXED_MSB_1 (which the
  // register forms do but the memory forms don't).
  TM,

  // Branches if a condition is true. Operand 0 is the chain operand;
  // operand 1 is the 4-bit condition-code mask, with bit N in
  // big-endian order meaning "branch if CC=N"; operand 2 is the
  // target block and operand 3 is the flag operand.
  BR_CCMASK,

  // Selects between operand 0 and operand 1. Operand 2 is the
  // mask of condition-code values for which operand 0 should be
  // chosen over operand 1; it has the same form as BR_CCMASK.
  // Operand 3 is the flag operand.
  SELECT_CCMASK,

  // Evaluates to the gap between the stack pointer and the
  // base of the dynamically-allocatable area.
  ADJDYNALLOC,

  // Count number of bits set in operand 0 per byte.
  POPCNT,

  // Wrappers around the ISD opcodes of the same name. The output is GR128.
  // Input operands may be GR64 or GR32, depending on the instruction.
  SMUL_LOHI,
  UMUL_LOHI,
  SDIVREM,
  UDIVREM,

  // Add/subtract with overflow/carry. These have the same operands as
  // the corresponding standard operations, except with the carry flag
  // replaced by a condition code value.
  SADDO, SSUBO, UADDO, USUBO, ADDCARRY, SUBCARRY,

  // Set the condition code from a boolean value in operand 0.
  // Operand 1 is a mask of all condition-code values that may result from
  // this operation, and operand 2 is a mask of condition-code values that may
  // result if the boolean is true.
  // Note that this operation is always optimized away; we will never
  // generate any code for it.
  GET_CCMASK,

  // Use a series of MVCs to copy bytes from one memory location to another.
  // The operands are:
  // - the target address
  // - the source address
  // - the constant length
  //
  // This isn't a memory opcode because we'd need to attach two
  // MachineMemOperands rather than one.
  MVC,

  // Like MVC, but implemented as a loop that handles X*256 bytes
  // followed by straight-line code to handle the rest (if any).
  // The value of X is passed as an additional operand.
  MVC_LOOP,

  // Similar to MVC and MVC_LOOP, but for logic operations (AND, OR, XOR).
  NC,
  NC_LOOP,
  OC,
  OC_LOOP,
  XC,
  XC_LOOP,

  // Use CLC to compare two blocks of memory, with the same comments
  // as for MVC and MVC_LOOP.
  CLC,
  CLC_LOOP,

  // Use an MVST-based sequence to implement stpcpy().
  STPCPY,

  // Use a CLST-based sequence to implement strcmp(). The two input operands
  // are the addresses of the strings to compare.
  STRCMP,

  // Use an SRST-based sequence to search a block of memory. The first
  // operand is the end address, the second is the start, and the third
  // is the character to search for. CC is set to 1 on success and 2
  // on failure.
  SEARCH_STRING,

  // Store the CC value in bits 29 and 28 of an integer.
  IPM,

  // Compiler barrier only; generate a no-op.
  MEMBARRIER,

  // Transaction begin. The first operand is the chain, the second
  // the TDB pointer, and the third the immediate control field.
  // Returns CC value and chain.
  TBEGIN,
  TBEGIN_NOFLOAT,

  // Transaction end. Just the chain operand. Returns CC value and chain.
  TEND,

  // Create a vector constant by filling byte N of the result with bit
  // 15-N of the single operand.
  BYTE_MASK,

  // Create a vector constant by replicating an element-sized RISBG-style mask.
  // The first operand specifies the starting set bit and the second operand
  // specifies the ending set bit. Both operands count from the MSB of the
  // element.
  ROTATE_MASK,

  // Replicate a GPR scalar value into all elements of a vector.
  REPLICATE,

  // Create a vector from two i64 GPRs.
  JOIN_DWORDS,

  // Replicate one element of a vector into all elements. The first operand
  // is the vector and the second is the index of the element to replicate.
  SPLAT,

  // Interleave elements from the high half of operand 0 and the high half
  // of operand 1.
  MERGE_HIGH,

  // Likewise for the low halves.
  MERGE_LOW,

  // Concatenate the vectors in the first two operands, shift them left
  // by the third operand, and take the first half of the result.
  SHL_DOUBLE,

  // Take one element of the first v2i64 operand and one element of the
  // second v2i64 operand and concatenate them to form a v2i64 result.
  // The third operand is a 4-bit value of the form 0A0B, where A and B
  // are the element selectors for the first and second operands
  // respectively.
  PERMUTE_DWORDS,

  // Perform a general vector permute on vector operands 0 and 1.
  // Each byte of operand 2 controls the corresponding byte of the result,
  // in the same way as a byte-level VECTOR_SHUFFLE mask.
  PERMUTE,

  // Pack vector operands 0 and 1 into a single vector with half-sized elements.
  PACK,

  // Likewise, but saturate the result and set CC. PACKS_CC does signed
  // saturation and PACKLS_CC does unsigned saturation.
  PACKS_CC,
  PACKLS_CC,

  // Unpack the first half of vector operand 0 into double-sized elements.
  // UNPACK_HIGH sign-extends and UNPACKL_HIGH zero-extends.
  UNPACK_HIGH,
  UNPACKL_HIGH,

  // Likewise for the second half.
  UNPACK_LOW,
  UNPACKL_LOW,

  // Shift each element of vector operand 0 by the number of bits specified
  // by scalar operand 1.
  VSHL_BY_SCALAR,
  VSRL_BY_SCALAR,
  VSRA_BY_SCALAR,

  // For each element of the output type, sum across all sub-elements of
  // operand 0 belonging to the corresponding element, and add in the
  // rightmost sub-element of the corresponding element of operand 1.
  VSUM,

  // Compare integer vector operands 0 and 1 to produce the usual 0/-1
  // vector result. VICMPE is for equality, VICMPH for "signed greater than"
  // and VICMPHL for "unsigned greater than".
  VICMPE,
  VICMPH,
  VICMPHL,

  // Likewise, but also set the condition codes on the result.
  VICMPES,
  VICMPHS,
  VICMPHLS,

  // Compare floating-point vector operands 0 and 1 to produce the usual 0/-1
  // vector result. VFCMPE is for "ordered and equal", VFCMPH for "ordered and
  // greater than" and VFCMPHE for "ordered and greater than or equal to".
  VFCMPE,
  VFCMPH,
  VFCMPHE,

  // Likewise, but also set the condition codes on the result.
  VFCMPES,
  VFCMPHS,
  VFCMPHES,

  // Test floating-point data class for vectors.
  VFTCI,

  // Extend the even f32 elements of vector operand 0 to produce a vector
  // of f64 elements.
  VEXTEND,

  // Round the f64 elements of vector operand 0 to f32s and store them in the
  // even elements of the result.
  VROUND,

  // AND the two vector operands together and set CC based on the result.
  VTM,

  // String operations that set CC as a side-effect.
  VFAE_CC,
  VFAEZ_CC,
  VFEE_CC,
  VFEEZ_CC,
  VFENE_CC,
  VFENEZ_CC,
  VISTR_CC,
  VSTRC_CC,
  VSTRCZ_CC,

  // Test Data Class.
  //
  // Operand 0: the value to test
  // Operand 1: the bit mask
  TDC,

  // Wrappers around the inner loop of an 8- or 16-bit ATOMIC_SWAP or
  // ATOMIC_LOAD_<op>.
  //
  // Operand 0: the address of the containing 32-bit-aligned field
  // Operand 1: the second operand of <op>, in the high bits of an i32
  //            for everything except ATOMIC_SWAPW
  // Operand 2: how many bits to rotate the i32 left to bring the first
  //            operand into the high bits
  // Operand 3: the negative of operand 2, for rotating the other way
  // Operand 4: the width of the field in bits (8 or 16)
  ATOMIC_SWAPW = ISD::FIRST_TARGET_MEMORY_OPCODE,
  ATOMIC_LOADW_ADD,
  ATOMIC_LOADW_SUB,
  ATOMIC_LOADW_AND,
  ATOMIC_LOADW_OR,
  ATOMIC_LOADW_XOR,
  ATOMIC_LOADW_NAND,
  ATOMIC_LOADW_MIN,
  ATOMIC_LOADW_MAX,
  ATOMIC_LOADW_UMIN,
  ATOMIC_LOADW_UMAX,

  // A wrapper around the inner loop of an ATOMIC_CMP_SWAP.
  //
  // Operand 0: the address of the containing 32-bit-aligned field
  // Operand 1: the compare value, in the low bits of an i32
  // Operand 2: the swap value, in the low bits of an i32
  // Operand 3: how many bits to rotate the i32 left to bring the first
  //            operand into the high bits
  // Operand 4: the negative of operand 3, for rotating the other way
  // Operand 5: the width of the field in bits (8 or 16)
  ATOMIC_CMP_SWAPW,

  // Atomic compare-and-swap returning CC value.
  // Val, CC, OUTCHAIN = ATOMIC_CMP_SWAP(INCHAIN, ptr, cmp, swap)
  ATOMIC_CMP_SWAP,

  // 128-bit atomic load.
  // Val, OUTCHAIN = ATOMIC_LOAD_128(INCHAIN, ptr)
  ATOMIC_LOAD_128,

  // 128-bit atomic store.
  // OUTCHAIN = ATOMIC_STORE_128(INCHAIN, val, ptr)
  ATOMIC_STORE_128,

  // 128-bit atomic compare-and-swap.
  // Val, CC, OUTCHAIN = ATOMIC_CMP_SWAP(INCHAIN, ptr, cmp, swap)
  ATOMIC_CMP_SWAP_128,

  // Byte swapping load.
  //
  // Operand 0: the address to load from
  // Operand 1: the type of load (i16, i32, i64)
  LRV,

  // Byte swapping store.
  //
  // Operand 0: the value to store
  // Operand 1: the address to store to
  // Operand 2: the type of store (i16, i32, i64)
  STRV,

  // Prefetch from the second operand using the 4-bit control code in
  // the first operand. The code is 1 for a load prefetch and 2 for
  // a store prefetch.
  PREFETCH
};

// Return true if OPCODE is some kind of PC-relative address.
inline bool isPCREL(unsigned Opcode) {
  return Opcode == PCREL_WRAPPER || Opcode == PCREL_OFFSET;
}
} // end namespace SystemZISD

namespace SystemZICMP {
// Describes whether an integer comparison needs to be signed or unsigned,
// or whether either type is OK.
enum {
  Any,
  UnsignedOnly,
  SignedOnly
};
} // end namespace SystemZICMP

class SystemZSubtarget;

class SystemZTargetLowering : public TargetLowering {
public:
  explicit SystemZTargetLowering(const TargetMachine &TM,
                                 const SystemZSubtarget &STI);

  // Override TargetLowering.
  MVT getScalarShiftAmountTy(const DataLayout &, EVT) const override {
    return MVT::i32;
  }
  MVT getVectorIdxTy(const DataLayout &DL) const override {
    // Only the lower 12 bits of an element index are used, so we don't
    // want to clobber the upper 32 bits of a GPR unnecessarily.
    return MVT::i32;
  }
  TargetLoweringBase::LegalizeTypeAction getPreferredVectorAction(EVT VT)
      const override {
    // Widen subvectors to the full width rather than promoting integer
    // elements. This is better because:
    //
    // (a) it means that we can handle the ABI for passing and returning
    //     sub-128 vectors without having to handle them as legal types.
    //
    // (b) we don't have instructions to extend on load and truncate on store,
    //     so promoting the integers is less efficient.
    //
    // (c) there are no multiplication instructions for the widest integer
    //     type (v2i64).
    if (VT.getScalarSizeInBits() % 8 == 0)
      return TypeWidenVector;
    return TargetLoweringBase::getPreferredVectorAction(VT);
  }
  EVT getSetCCResultType(const DataLayout &DL, LLVMContext &,
                         EVT) const override;
  bool isFMAFasterThanFMulAndFAdd(EVT VT) const override;
  bool isFPImmLegal(const APFloat &Imm, EVT VT) const override;
  bool isLegalICmpImmediate(int64_t Imm) const override;
  bool isLegalAddImmediate(int64_t Imm) const override;
  bool isLegalAddressingMode(const DataLayout &DL, const AddrMode &AM, Type *Ty,
                             unsigned AS,
                             Instruction *I = nullptr) const override;
  bool allowsMisalignedMemoryAccesses(EVT VT, unsigned AS,
                                      unsigned Align,
                                      bool *Fast) const override;
  bool isTruncateFree(Type *, Type *) const override;
  bool isTruncateFree(EVT, EVT) const override;
  const char *getTargetNodeName(unsigned Opcode) const override;
  std::pair<unsigned, const TargetRegisterClass *>
  getRegForInlineAsmConstraint(const TargetRegisterInfo *TRI,
                               StringRef Constraint, MVT VT) const override;
  TargetLowering::ConstraintType
  getConstraintType(StringRef Constraint) const override;
  TargetLowering::ConstraintWeight
  getSingleConstraintMatchWeight(AsmOperandInfo &info,
                                 const char *constraint) const override;
  void LowerAsmOperandForConstraint(SDValue Op,
                                    std::string &Constraint,
                                    std::vector<SDValue> &Ops,
                                    SelectionDAG &DAG) const override;

  unsigned getInlineAsmMemConstraint(StringRef ConstraintCode) const override {
    if (ConstraintCode.size() == 1) {
      switch (ConstraintCode[0]) {
      default:
        break;
      case 'o':
        return InlineAsm::Constraint_o;
      case 'Q':
        return InlineAsm::Constraint_Q;
      case 'R':
        return InlineAsm::Constraint_R;
      case 'S':
        return InlineAsm::Constraint_S;
      case 'T':
        return InlineAsm::Constraint_T;
      }
    }
    return TargetLowering::getInlineAsmMemConstraint(ConstraintCode);
  }

  /// If a physical register, this returns the register that receives the
  /// exception address on entry to an EH pad.
  unsigned
  getExceptionPointerRegister(const Constant *PersonalityFn) const override {
    return SystemZ::R6D;
  }

  /// If a physical register, this returns the register that receives the
  /// exception typeid on entry to a landing pad.
  unsigned
  getExceptionSelectorRegister(const Constant *PersonalityFn) const override {
    return SystemZ::R7D;
  }

  /// Override to support customized stack guard loading.
  bool useLoadStackGuardNode() const override {
    return true;
  }
  void insertSSPDeclarations(Module &M) const override {
  }

  MachineBasicBlock *
  EmitInstrWithCustomInserter(MachineInstr &MI,
                              MachineBasicBlock *BB) const override;
  SDValue LowerOperation(SDValue Op, SelectionDAG &DAG) const override;
  void LowerOperationWrapper(SDNode *N, SmallVectorImpl<SDValue> &Results,
                             SelectionDAG &DAG) const override;
  void ReplaceNodeResults(SDNode *N, SmallVectorImpl<SDValue> &Results,
                          SelectionDAG &DAG) const override;
  const MCPhysReg *getScratchRegisters(CallingConv::ID CC) const override;
  bool allowTruncateForTailCall(Type *, Type *) const override;
  bool mayBeEmittedAsTailCall(const CallInst *CI) const override;
  SDValue LowerFormalArguments(SDValue Chain, CallingConv::ID CallConv,
                               bool isVarArg,
                               const SmallVectorImpl<ISD::InputArg> &Ins,
                               const SDLoc &DL, SelectionDAG &DAG,
                               SmallVectorImpl<SDValue> &InVals) const override;
  SDValue LowerCall(CallLoweringInfo &CLI,
                    SmallVectorImpl<SDValue> &InVals) const override;

  bool CanLowerReturn(CallingConv::ID CallConv, MachineFunction &MF,
                      bool isVarArg,
                      const SmallVectorImpl<ISD::OutputArg> &Outs,
                      LLVMContext &Context) const override;
  SDValue LowerReturn(SDValue Chain, CallingConv::ID CallConv, bool IsVarArg,
                      const SmallVectorImpl<ISD::OutputArg> &Outs,
                      const SmallVectorImpl<SDValue> &OutVals, const SDLoc &DL,
                      SelectionDAG &DAG) const override;
  SDValue PerformDAGCombine(SDNode *N, DAGCombinerInfo &DCI) const override;

  /// Determine which of the bits specified in Mask are known to be either
  /// zero or one and return them in the KnownZero/KnownOne bitsets.
  void computeKnownBitsForTargetNode(const SDValue Op,
                                     KnownBits &Known,
                                     const APInt &DemandedElts,
                                     const SelectionDAG &DAG,
                                     unsigned Depth = 0) const override;

  /// Determine the number of bits in the operation that are sign bits.
  unsigned ComputeNumSignBitsForTargetNode(SDValue Op,
                                           const APInt &DemandedElts,
                                           const SelectionDAG &DAG,
                                           unsigned Depth) const override;

  ISD::NodeType getExtendForAtomicOps() const override {
    return ISD::ANY_EXTEND;
  }

  bool supportSwiftError() const override {
    return true;
  }

private:
  const SystemZSubtarget &Subtarget;

  // Implement LowerOperation for individual opcodes.
  SDValue getVectorCmp(SelectionDAG &DAG, unsigned Opcode,
                       const SDLoc &DL, EVT VT,
                       SDValue CmpOp0, SDValue CmpOp1) const;
  SDValue lowerVectorSETCC(SelectionDAG &DAG, const SDLoc &DL,
                           EVT VT, ISD::CondCode CC,
                           SDValue CmpOp0, SDValue CmpOp1) const;
  SDValue lowerSETCC(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerBR_CC(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSELECT_CC(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerGlobalAddress(GlobalAddressSDNode *Node,
                             SelectionDAG &DAG) const;
  SDValue lowerTLSGetOffset(GlobalAddressSDNode *Node,
                            SelectionDAG &DAG, unsigned Opcode,
                            SDValue GOTOffset) const;
  SDValue lowerThreadPointer(const SDLoc &DL, SelectionDAG &DAG) const;
  SDValue lowerGlobalTLSAddress(GlobalAddressSDNode *Node,
                                SelectionDAG &DAG) const;
  SDValue lowerBlockAddress(BlockAddressSDNode *Node,
                            SelectionDAG &DAG) const;
  SDValue lowerJumpTable(JumpTableSDNode *JT, SelectionDAG &DAG) const;
  SDValue lowerConstantPool(ConstantPoolSDNode *CP, SelectionDAG &DAG) const;
  SDValue lowerFRAMEADDR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerRETURNADDR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerVASTART(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerVACOPY(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerDYNAMIC_STACKALLOC(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerGET_DYNAMIC_AREA_OFFSET(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSMUL_LOHI(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerUMUL_LOHI(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSDIVREM(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerUDIVREM(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerXALUO(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerADDSUBCARRY(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerBITCAST(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerOR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerCTPOP(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_FENCE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_LOAD(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_STORE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_LOAD_OP(SDValue Op, SelectionDAG &DAG,
                              unsigned Opcode) const;
  SDValue lowerATOMIC_LOAD_SUB(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_CMP_SWAP(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSTACKSAVE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSTACKRESTORE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerPREFETCH(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerINTRINSIC_W_CHAIN(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerBUILD_VECTOR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSCALAR_TO_VECTOR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerINSERT_VECTOR_ELT(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerEXTRACT_VECTOR_ELT(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerExtendVectorInreg(SDValue Op, SelectionDAG &DAG,
                                 unsigned UnpackHigh) const;
  SDValue lowerShift(SDValue Op, SelectionDAG &DAG, unsigned ByScalar) const;

  bool canTreatAsByteVector(EVT VT) const;
  SDValue combineExtract(const SDLoc &DL, EVT ElemVT, EVT VecVT, SDValue OrigOp,
                         unsigned Index, DAGCombinerInfo &DCI,
                         bool Force) const;
  SDValue combineTruncateExtract(const SDLoc &DL, EVT TruncVT, SDValue Op,
                                 DAGCombinerInfo &DCI) const;
  SDValue combineZERO_EXTEND(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSIGN_EXTEND(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSIGN_EXTEND_INREG(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineMERGE(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSTORE(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineEXTRACT_VECTOR_ELT(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineJOIN_DWORDS(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineFP_ROUND(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineBSWAP(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSHIFTROT(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineBR_CCMASK(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSELECT_CCMASK(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineGET_CCMASK(SDNode *N, DAGCombinerInfo &DCI) const;

  // If the last instruction before MBBI in MBB was some form of COMPARE,
  // try to replace it with a COMPARE AND BRANCH just before MBBI.
  // CCMask and Target are the BRC-like operands for the branch.
  // Return true if the change was made.
  bool convertPrevCompareToBranch(MachineBasicBlock *MBB,
                                  MachineBasicBlock::iterator MBBI,
                                  unsigned CCMask,
                                  MachineBasicBlock *Target) const;

  // Implement EmitInstrWithCustomInserter for individual operation types.
  MachineBasicBlock *emitSelect(MachineInstr &MI, MachineBasicBlock *BB) const;
  MachineBasicBlock *emitCondStore(MachineInstr &MI, MachineBasicBlock *BB,
                                   unsigned StoreOpcode, unsigned STOCOpcode,
                                   bool Invert) const;
  MachineBasicBlock *emitPair128(MachineInstr &MI,
                                 MachineBasicBlock *MBB) const;
  MachineBasicBlock *emitExt128(MachineInstr &MI, MachineBasicBlock *MBB,
                                bool ClearEven) const;
  MachineBasicBlock *emitAtomicLoadBinary(MachineInstr &MI,
                                          MachineBasicBlock *BB,
                                          unsigned BinOpcode, unsigned BitSize,
                                          bool Invert = false) const;
  MachineBasicBlock *emitAtomicLoadMinMax(MachineInstr &MI,
                                          MachineBasicBlock *MBB,
                                          unsigned CompareOpcode,
                                          unsigned KeepOldMask,
                                          unsigned BitSize) const;
  MachineBasicBlock *emitAtomicCmpSwapW(MachineInstr &MI,
                                        MachineBasicBlock *BB) const;
  MachineBasicBlock *emitMemMemWrapper(MachineInstr &MI, MachineBasicBlock *BB,
                                       unsigned Opcode) const;
  MachineBasicBlock *emitStringWrapper(MachineInstr &MI, MachineBasicBlock *BB,
                                       unsigned Opcode) const;
  MachineBasicBlock *emitTransactionBegin(MachineInstr &MI,
                                          MachineBasicBlock *MBB,
                                          unsigned Opcode, bool NoFloat) const;
  MachineBasicBlock *emitLoadAndTestCmp0(MachineInstr &MI,
                                         MachineBasicBlock *MBB,
                                         unsigned Opcode) const;

  const TargetRegisterClass *getRepRegClassFor(MVT VT) const override;
};
} // end namespace llvm

#endif