LLVM  9.0.0svn
SystemZISelLowering.h
//===-- SystemZISelLowering.h - SystemZ DAG lowering interface --*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file defines the interfaces that SystemZ uses to lower LLVM code into a
// selection DAG.
//
//===----------------------------------------------------------------------===//

#ifndef LLVM_LIB_TARGET_SYSTEMZ_SYSTEMZISELLOWERING_H
#define LLVM_LIB_TARGET_SYSTEMZ_SYSTEMZISELLOWERING_H

#include "SystemZ.h"
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/SelectionDAG.h"
#include "llvm/CodeGen/TargetLowering.h"

namespace llvm {
namespace SystemZISD {
enum NodeType : unsigned {
  FIRST_NUMBER = ISD::BUILTIN_OP_END,

  // Return with a flag operand. Operand 0 is the chain operand.
  RET_FLAG,

  // Calls a function. Operand 0 is the chain operand and operand 1
  // is the target address. The arguments start at operand 2.
  // There is an optional glue operand at the end.
  CALL,
  SIBCALL,

  // TLS calls. Like regular calls, except operand 1 is the TLS symbol.
  // (The call target is implicitly __tls_get_offset.)
  TLS_GDCALL,
  TLS_LDCALL,

  // Wraps a TargetGlobalAddress that should be loaded using PC-relative
  // accesses (LARL). Operand 0 is the address.
  PCREL_WRAPPER,

  // Used in cases where an offset is applied to a TargetGlobalAddress.
  // Operand 0 is the full TargetGlobalAddress and operand 1 is a
  // PCREL_WRAPPER for an anchor point. This is used so that we can
  // cheaply refer to either the full address or the anchor point
  // as a register base.
  PCREL_OFFSET,

  // Integer absolute.
  IABS,

  // Integer comparisons. There are three operands: the two values
  // to compare, and an integer of type SystemZICMP.
  ICMP,

  // Floating-point comparisons. The two operands are the values to compare.
  FCMP,

  // Test under mask. The first operand is ANDed with the second operand
  // and the condition codes are set on the result. The third operand is
  // a boolean that is true if the condition codes need to distinguish
  // between CCMASK_TM_MIXED_MSB_0 and CCMASK_TM_MIXED_MSB_1 (which the
  // register forms do but the memory forms don't).
  TM,

  // Branches if a condition is true. Operand 0 is the chain operand;
  // operand 1 is the 4-bit condition-code mask, with bit N in
  // big-endian order meaning "branch if CC=N"; operand 2 is the
  // target block and operand 3 is the flag operand.
  BR_CCMASK,

  // Selects between operand 0 and operand 1. Operand 2 is the
  // mask of condition-code values for which operand 0 should be
  // chosen over operand 1; it has the same form as BR_CCMASK.
  // Operand 3 is the flag operand.
  SELECT_CCMASK,

  // Evaluates to the gap between the stack pointer and the
  // base of the dynamically-allocatable area.
  ADJDYNALLOC,

  // Count number of bits set in operand 0 per byte.
  POPCNT,

  // Wrappers around the ISD opcodes of the same name. The output is GR128.
  // Input operands may be GR64 or GR32, depending on the instruction.
  SMUL_LOHI,
  UMUL_LOHI,
  SDIVREM,
  UDIVREM,

  // Add/subtract with overflow/carry. These have the same operands as
  // the corresponding standard operations, except with the carry flag
  // replaced by a condition code value.
  SADDO, SSUBO, UADDO, USUBO, ADDCARRY, SUBCARRY,

  // Set the condition code from a boolean value in operand 0.
  // Operand 1 is a mask of all condition-code values that may result from
  // this operation; operand 2 is a mask of condition-code values that may
  // result if the boolean is true.
  // Note that this operation is always optimized away; we will never
  // generate any code for it.
  GET_CCMASK,

  // Use a series of MVCs to copy bytes from one memory location to another.
  // The operands are:
  // - the target address
  // - the source address
  // - the constant length
  //
  // This isn't a memory opcode because we'd need to attach two
  // MachineMemOperands rather than one.
  MVC,

  // Like MVC, but implemented as a loop that handles X*256 bytes
  // followed by straight-line code to handle the rest (if any).
  // The value of X is passed as an additional operand.
  MVC_LOOP,

  // Similar to MVC and MVC_LOOP, but for logic operations (AND, OR, XOR).
  NC,
  NC_LOOP,
  OC,
  OC_LOOP,
  XC,
  XC_LOOP,

  // Use CLC to compare two blocks of memory, with the same comments
  // as for MVC and MVC_LOOP.
  CLC,
  CLC_LOOP,

  // Use an MVST-based sequence to implement stpcpy().
  STPCPY,

  // Use a CLST-based sequence to implement strcmp(). The two input operands
  // are the addresses of the strings to compare.
  STRCMP,

  // Use an SRST-based sequence to search a block of memory. The first
  // operand is the end address, the second is the start, and the third
  // is the character to search for. CC is set to 1 on success and 2
  // on failure.
  SEARCH_STRING,

  // Store the CC value in bits 29 and 28 of an integer.
  IPM,

  // Compiler barrier only; generate a no-op.
  MEMBARRIER,

  // Transaction begin. The first operand is the chain, the second
  // the TDB pointer, and the third the immediate control field.
  // Returns CC value and chain.
  TBEGIN,
  TBEGIN_NOFLOAT,

  // Transaction end. Just the chain operand. Returns CC value and chain.
  TEND,

  // Create a vector constant by filling byte N of the result with bit
  // 15-N of the single operand.
  BYTE_MASK,

  // Create a vector constant by replicating an element-sized RISBG-style mask.
  // The first operand specifies the starting set bit and the second operand
  // specifies the ending set bit. Both operands count from the MSB of the
  // element.
  ROTATE_MASK,

  // Replicate a GPR scalar value into all elements of a vector.
  REPLICATE,

  // Create a vector from two i64 GPRs.
  JOIN_DWORDS,

  // Replicate one element of a vector into all elements. The first operand
  // is the vector and the second is the index of the element to replicate.
  SPLAT,

  // Interleave elements from the high half of operand 0 and the high half
  // of operand 1.
  MERGE_HIGH,

  // Likewise for the low halves.
  MERGE_LOW,

  // Concatenate the vectors in the first two operands, shift them left
  // by the third operand, and take the first half of the result.
  SHL_DOUBLE,

  // Take one element of the first v2i64 operand and one element of
  // the second v2i64 operand and concatenate them to form a v2i64 result.
  // The third operand is a 4-bit value of the form 0A0B, where A and B
  // are the element selectors for the first and second operands
  // respectively.
  PERMUTE_DWORDS,

  // Perform a general vector permute on vector operands 0 and 1.
  // Each byte of operand 2 controls the corresponding byte of the result,
  // in the same way as a byte-level VECTOR_SHUFFLE mask.
  PERMUTE,

  // Pack vector operands 0 and 1 into a single vector with half-sized elements.
  PACK,

  // Likewise, but saturate the result and set CC. PACKS_CC does signed
  // saturation and PACKLS_CC does unsigned saturation.
  PACKS_CC,
  PACKLS_CC,

  // Unpack the first half of vector operand 0 into double-sized elements.
  // UNPACK_HIGH sign-extends and UNPACKL_HIGH zero-extends.
  UNPACK_HIGH,
  UNPACKL_HIGH,

  // Likewise for the second half.
  UNPACK_LOW,
  UNPACKL_LOW,

  // Shift each element of vector operand 0 by the number of bits specified
  // by scalar operand 1.
  VSHL_BY_SCALAR,
  VSRL_BY_SCALAR,
  VSRA_BY_SCALAR,

  // For each element of the output type, sum across all sub-elements of
  // operand 0 belonging to the corresponding element, and add in the
  // rightmost sub-element of the corresponding element of operand 1.
  VSUM,

  // Compare integer vector operands 0 and 1 to produce the usual 0/-1
  // vector result. VICMPE is for equality, VICMPH for "signed greater than"
  // and VICMPHL for "unsigned greater than".
  VICMPE,
  VICMPH,
  VICMPHL,

  // Likewise, but also set the condition codes on the result.
  VICMPES,
  VICMPHS,
  VICMPHLS,

  // Compare floating-point vector operands 0 and 1 to produce the usual 0/-1
  // vector result. VFCMPE is for "ordered and equal", VFCMPH for "ordered and
  // greater than" and VFCMPHE for "ordered and greater than or equal to".
  VFCMPE,
  VFCMPH,
  VFCMPHE,

  // Likewise, but also set the condition codes on the result.
  VFCMPES,
  VFCMPHS,
  VFCMPHES,

  // Test floating-point data class for vectors.
  VFTCI,

  // Extend the even f32 elements of vector operand 0 to produce a vector
  // of f64 elements.
  VEXTEND,

  // Round the f64 elements of vector operand 0 to f32s and store them in the
  // even elements of the result.
  VROUND,

  // AND the two vector operands together and set CC based on the result.
  VTM,

  // String operations that set CC as a side-effect.
  VFAE_CC,
  VFAEZ_CC,
  VFEE_CC,
  VFEEZ_CC,
  VFENE_CC,
  VFENEZ_CC,
  VISTR_CC,
  VSTRC_CC,
  VSTRCZ_CC,

  // Test Data Class.
  //
  // Operand 0: the value to test
  // Operand 1: the bit mask
  TDC,

  // Wrappers around the inner loop of an 8- or 16-bit ATOMIC_SWAP or
  // ATOMIC_LOAD_<op>.
  //
  // Operand 0: the address of the containing 32-bit-aligned field
  // Operand 1: the second operand of <op>, in the high bits of an i32
  //            for everything except ATOMIC_SWAPW
  // Operand 2: how many bits to rotate the i32 left to bring the first
  //            operand into the high bits
  // Operand 3: the negative of operand 2, for rotating the other way
  // Operand 4: the width of the field in bits (8 or 16)
  ATOMIC_SWAPW = ISD::FIRST_TARGET_MEMORY_OPCODE,
  ATOMIC_LOADW_ADD,
  ATOMIC_LOADW_SUB,
  ATOMIC_LOADW_AND,
  ATOMIC_LOADW_OR,
  ATOMIC_LOADW_XOR,
  ATOMIC_LOADW_NAND,
  ATOMIC_LOADW_MIN,
  ATOMIC_LOADW_MAX,
  ATOMIC_LOADW_UMIN,
  ATOMIC_LOADW_UMAX,

  // A wrapper around the inner loop of an ATOMIC_CMP_SWAP.
  //
  // Operand 0: the address of the containing 32-bit-aligned field
  // Operand 1: the compare value, in the low bits of an i32
  // Operand 2: the swap value, in the low bits of an i32
  // Operand 3: how many bits to rotate the i32 left to bring the first
  //            operand into the high bits
  // Operand 4: the negative of operand 3, for rotating the other way
  // Operand 5: the width of the field in bits (8 or 16)
  ATOMIC_CMP_SWAPW,

  // Atomic compare-and-swap returning CC value.
  // Val, CC, OUTCHAIN = ATOMIC_CMP_SWAP(INCHAIN, ptr, cmp, swap)
  ATOMIC_CMP_SWAP,

  // 128-bit atomic load.
  // Val, OUTCHAIN = ATOMIC_LOAD_128(INCHAIN, ptr)
  ATOMIC_LOAD_128,

  // 128-bit atomic store.
  // OUTCHAIN = ATOMIC_STORE_128(INCHAIN, val, ptr)
  ATOMIC_STORE_128,

  // 128-bit atomic compare-and-swap.
  // Val, CC, OUTCHAIN = ATOMIC_CMP_SWAP(INCHAIN, ptr, cmp, swap)
  ATOMIC_CMP_SWAP_128,

  // Byte swapping load/store. Same operands as regular load/store.
  LRV, STRV,

  // Prefetch from the second operand using the 4-bit control code in
  // the first operand. The code is 1 for a load prefetch and 2 for
  // a store prefetch.
  PREFETCH
};

// Return true if OPCODE is some kind of PC-relative address.
inline bool isPCREL(unsigned Opcode) {
  return Opcode == PCREL_WRAPPER || Opcode == PCREL_OFFSET;
}
} // end namespace SystemZISD

namespace SystemZICMP {
// Describes whether an integer comparison needs to be signed or unsigned,
// or whether either type is OK.
enum {
  Any,
  UnsignedOnly,
  SignedOnly
};
} // end namespace SystemZICMP

class SystemZSubtarget;
class SystemZTargetMachine;

class SystemZTargetLowering : public TargetLowering {
public:
  explicit SystemZTargetLowering(const TargetMachine &TM,
                                 const SystemZSubtarget &STI);

  // Override TargetLowering.
  MVT getScalarShiftAmountTy(const DataLayout &, EVT) const override {
    return MVT::i32;
  }
  MVT getVectorIdxTy(const DataLayout &DL) const override {
    // Only the lower 12 bits of an element index are used, so we don't
    // want to clobber the upper 32 bits of a GPR unnecessarily.
    return MVT::i32;
  }
  TargetLoweringBase::LegalizeTypeAction getPreferredVectorAction(MVT VT)
      const override {
    // Widen subvectors to the full width rather than promoting integer
    // elements. This is better because:
    //
    // (a) it means that we can handle the ABI for passing and returning
    //     sub-128 vectors without having to handle them as legal types.
    //
    // (b) we don't have instructions to extend on load and truncate on store,
    //     so promoting the integers is less efficient.
    //
    // (c) there are no multiplication instructions for the widest integer
    //     type (v2i64).
    if (VT.getScalarSizeInBits() % 8 == 0)
      return TypeWidenVector;
    return TargetLoweringBase::getPreferredVectorAction(VT);
  }
  EVT getSetCCResultType(const DataLayout &DL, LLVMContext &,
                         EVT) const override;
  bool isFMAFasterThanFMulAndFAdd(EVT VT) const override;
  bool isFPImmLegal(const APFloat &Imm, EVT VT) const override;
  bool isLegalICmpImmediate(int64_t Imm) const override;
  bool isLegalAddImmediate(int64_t Imm) const override;
  bool isLegalAddressingMode(const DataLayout &DL, const AddrMode &AM, Type *Ty,
                             unsigned AS,
                             Instruction *I = nullptr) const override;
  bool allowsMisalignedMemoryAccesses(EVT VT, unsigned AS,
                                      unsigned Align,
                                      bool *Fast) const override;
  bool isTruncateFree(Type *, Type *) const override;
  bool isTruncateFree(EVT, EVT) const override;
  const char *getTargetNodeName(unsigned Opcode) const override;
  std::pair<unsigned, const TargetRegisterClass *>
  getRegForInlineAsmConstraint(const TargetRegisterInfo *TRI,
                               StringRef Constraint, MVT VT) const override;
  TargetLowering::ConstraintType
  getConstraintType(StringRef Constraint) const override;
  TargetLowering::ConstraintWeight
  getSingleConstraintMatchWeight(AsmOperandInfo &info,
                                 const char *constraint) const override;
  void LowerAsmOperandForConstraint(SDValue Op,
                                    std::string &Constraint,
                                    std::vector<SDValue> &Ops,
                                    SelectionDAG &DAG) const override;

  unsigned getInlineAsmMemConstraint(StringRef ConstraintCode) const override {
    if (ConstraintCode.size() == 1) {
      switch(ConstraintCode[0]) {
      default:
        break;
      case 'o':
        return InlineAsm::Constraint_o;
      case 'Q':
        return InlineAsm::Constraint_Q;
      case 'R':
        return InlineAsm::Constraint_R;
      case 'S':
        return InlineAsm::Constraint_S;
      case 'T':
        return InlineAsm::Constraint_T;
      }
    }
    return TargetLowering::getInlineAsmMemConstraint(ConstraintCode);
  }

  /// If a physical register, this returns the register that receives the
  /// exception address on entry to an EH pad.
  unsigned
  getExceptionPointerRegister(const Constant *PersonalityFn) const override {
    return SystemZ::R6D;
  }

  /// If a physical register, this returns the register that receives the
  /// exception typeid on entry to a landing pad.
  unsigned
  getExceptionSelectorRegister(const Constant *PersonalityFn) const override {
    return SystemZ::R7D;
  }

  /// Override to support customized stack guard loading.
  bool useLoadStackGuardNode() const override {
    return true;
  }
  void insertSSPDeclarations(Module &M) const override {
  }

  MachineBasicBlock *
  EmitInstrWithCustomInserter(MachineInstr &MI,
                              MachineBasicBlock *BB) const override;
  SDValue LowerOperation(SDValue Op, SelectionDAG &DAG) const override;
  void LowerOperationWrapper(SDNode *N, SmallVectorImpl<SDValue> &Results,
                             SelectionDAG &DAG) const override;
  void ReplaceNodeResults(SDNode *N, SmallVectorImpl<SDValue> &Results,
                          SelectionDAG &DAG) const override;
  const MCPhysReg *getScratchRegisters(CallingConv::ID CC) const override;
  bool allowTruncateForTailCall(Type *, Type *) const override;
  bool mayBeEmittedAsTailCall(const CallInst *CI) const override;
  SDValue LowerFormalArguments(SDValue Chain, CallingConv::ID CallConv,
                               bool isVarArg,
                               const SmallVectorImpl<ISD::InputArg> &Ins,
                               const SDLoc &DL, SelectionDAG &DAG,
                               SmallVectorImpl<SDValue> &InVals) const override;
  SDValue LowerCall(CallLoweringInfo &CLI,
                    SmallVectorImpl<SDValue> &InVals) const override;

  bool CanLowerReturn(CallingConv::ID CallConv, MachineFunction &MF,
                      bool isVarArg,
                      const SmallVectorImpl<ISD::OutputArg> &Outs,
                      LLVMContext &Context) const override;
  SDValue LowerReturn(SDValue Chain, CallingConv::ID CallConv, bool IsVarArg,
                      const SmallVectorImpl<ISD::OutputArg> &Outs,
                      const SmallVectorImpl<SDValue> &OutVals, const SDLoc &DL,
                      SelectionDAG &DAG) const override;
  SDValue PerformDAGCombine(SDNode *N, DAGCombinerInfo &DCI) const override;

  /// Determine which of the bits specified in Mask are known to be either
  /// zero or one and return them in the KnownZero/KnownOne bitsets.
  void computeKnownBitsForTargetNode(const SDValue Op,
                                     KnownBits &Known,
                                     const APInt &DemandedElts,
                                     const SelectionDAG &DAG,
                                     unsigned Depth = 0) const override;

  /// Determine the number of bits in the operation that are sign bits.
  unsigned ComputeNumSignBitsForTargetNode(SDValue Op,
                                           const APInt &DemandedElts,
                                           const SelectionDAG &DAG,
                                           unsigned Depth) const override;

  ISD::NodeType getExtendForAtomicOps() const override {
    return ISD::ANY_EXTEND;
  }

  bool supportSwiftError() const override {
    return true;
  }

private:
  const SystemZSubtarget &Subtarget;

  // Implement LowerOperation for individual opcodes.
  SDValue getVectorCmp(SelectionDAG &DAG, unsigned Opcode,
                       const SDLoc &DL, EVT VT,
                       SDValue CmpOp0, SDValue CmpOp1) const;
  SDValue lowerVectorSETCC(SelectionDAG &DAG, const SDLoc &DL,
                           EVT VT, ISD::CondCode CC,
                           SDValue CmpOp0, SDValue CmpOp1) const;
  SDValue lowerSETCC(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerBR_CC(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSELECT_CC(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerGlobalAddress(GlobalAddressSDNode *Node,
                             SelectionDAG &DAG) const;
  SDValue lowerTLSGetOffset(GlobalAddressSDNode *Node,
                            SelectionDAG &DAG, unsigned Opcode,
                            SDValue GOTOffset) const;
  SDValue lowerThreadPointer(const SDLoc &DL, SelectionDAG &DAG) const;
  SDValue lowerGlobalTLSAddress(GlobalAddressSDNode *Node,
                                SelectionDAG &DAG) const;
  SDValue lowerBlockAddress(BlockAddressSDNode *Node,
                            SelectionDAG &DAG) const;
  SDValue lowerJumpTable(JumpTableSDNode *JT, SelectionDAG &DAG) const;
  SDValue lowerConstantPool(ConstantPoolSDNode *CP, SelectionDAG &DAG) const;
  SDValue lowerFRAMEADDR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerRETURNADDR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerVASTART(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerVACOPY(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerDYNAMIC_STACKALLOC(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerGET_DYNAMIC_AREA_OFFSET(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSMUL_LOHI(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerUMUL_LOHI(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSDIVREM(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerUDIVREM(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerXALUO(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerADDSUBCARRY(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerBITCAST(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerOR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerCTPOP(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_FENCE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_LOAD(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_STORE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_LOAD_OP(SDValue Op, SelectionDAG &DAG,
                              unsigned Opcode) const;
  SDValue lowerATOMIC_LOAD_SUB(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerATOMIC_CMP_SWAP(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSTACKSAVE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSTACKRESTORE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerPREFETCH(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerINTRINSIC_W_CHAIN(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerBUILD_VECTOR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerSCALAR_TO_VECTOR(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerINSERT_VECTOR_ELT(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerEXTRACT_VECTOR_ELT(SDValue Op, SelectionDAG &DAG) const;
  SDValue lowerExtendVectorInreg(SDValue Op, SelectionDAG &DAG,
                                 unsigned UnpackHigh) const;
  SDValue lowerShift(SDValue Op, SelectionDAG &DAG, unsigned ByScalar) const;

  bool canTreatAsByteVector(EVT VT) const;
  SDValue combineExtract(const SDLoc &DL, EVT ElemVT, EVT VecVT, SDValue OrigOp,
                         unsigned Index, DAGCombinerInfo &DCI,
                         bool Force) const;
  SDValue combineTruncateExtract(const SDLoc &DL, EVT TruncVT, SDValue Op,
                                 DAGCombinerInfo &DCI) const;
  SDValue combineZERO_EXTEND(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSIGN_EXTEND(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSIGN_EXTEND_INREG(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineMERGE(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineLOAD(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSTORE(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineEXTRACT_VECTOR_ELT(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineJOIN_DWORDS(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineFP_ROUND(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineFP_EXTEND(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineBSWAP(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineBR_CCMASK(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineSELECT_CCMASK(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineGET_CCMASK(SDNode *N, DAGCombinerInfo &DCI) const;
  SDValue combineIntDIVREM(SDNode *N, DAGCombinerInfo &DCI) const;

  // If the last instruction before MBBI in MBB was some form of COMPARE,
  // try to replace it with a COMPARE AND BRANCH just before MBBI.
  // CCMask and Target are the BRC-like operands for the branch.
  // Return true if the change was made.
  bool convertPrevCompareToBranch(MachineBasicBlock *MBB,
                                  MachineBasicBlock::iterator MBBI,
                                  unsigned CCMask,
                                  MachineBasicBlock *Target) const;

  // Implement EmitInstrWithCustomInserter for individual operation types.
  MachineBasicBlock *emitSelect(MachineInstr &MI, MachineBasicBlock *BB) const;
  MachineBasicBlock *emitCondStore(MachineInstr &MI, MachineBasicBlock *BB,
                                   unsigned StoreOpcode, unsigned STOCOpcode,
                                   bool Invert) const;
  MachineBasicBlock *emitPair128(MachineInstr &MI,
                                 MachineBasicBlock *MBB) const;
  MachineBasicBlock *emitExt128(MachineInstr &MI, MachineBasicBlock *MBB,
                                bool ClearEven) const;
  MachineBasicBlock *emitAtomicLoadBinary(MachineInstr &MI,
                                          MachineBasicBlock *BB,
                                          unsigned BinOpcode, unsigned BitSize,
                                          bool Invert = false) const;
  MachineBasicBlock *emitAtomicLoadMinMax(MachineInstr &MI,
                                          MachineBasicBlock *MBB,
                                          unsigned CompareOpcode,
                                          unsigned KeepOldMask,
                                          unsigned BitSize) const;
  MachineBasicBlock *emitAtomicCmpSwapW(MachineInstr &MI,
                                        MachineBasicBlock *BB) const;
  MachineBasicBlock *emitMemMemWrapper(MachineInstr &MI, MachineBasicBlock *BB,
                                       unsigned Opcode) const;
  MachineBasicBlock *emitStringWrapper(MachineInstr &MI, MachineBasicBlock *BB,
                                       unsigned Opcode) const;
  MachineBasicBlock *emitTransactionBegin(MachineInstr &MI,
                                          MachineBasicBlock *MBB,
                                          unsigned Opcode, bool NoFloat) const;
  MachineBasicBlock *emitLoadAndTestCmp0(MachineInstr &MI,
                                         MachineBasicBlock *MBB,
                                         unsigned Opcode) const;

  const TargetRegisterClass *getRepRegClassFor(MVT VT) const override;
};
} // end namespace llvm

#endif