Excerpts from llvm/ADT/CachedHashString.h:
#ifndef LLVM_ADT_CACHEDHASHSTRING_H
#define LLVM_ADT_CACHEDHASHSTRING_H

// ... class CachedHashStringRef (a StringRef plus a precomputed hash) ...
  const char *data() const { return P; }

// ... DenseMapInfo<CachedHashStringRef> ...
  static unsigned getHashValue(const CachedHashStringRef &S) {
    assert(!isEqual(S, getEmptyKey()) && "Cannot hash the empty key!");
    assert(!isEqual(S, getTombstoneKey()) && "Cannot hash the tombstone key!");
    return S.hash();
  }

// ... class CachedHashString (an owned string plus a precomputed hash), private helpers ...
  static char *getTombstoneKeyPtr() {
    return DenseMapInfo<char *>::getTombstoneKey();
  }

  bool isEmptyOrTombstone() const {
    return P == getEmptyKeyPtr() || P == getTombstoneKeyPtr();
  }

  struct ConstructEmptyOrTombstoneTy {};

  CachedHashString(ConstructEmptyOrTombstoneTy, char *EmptyOrTombstonePtr)
      : P(EmptyOrTombstonePtr), Size(0), Hash(0) {
    assert(isEmptyOrTombstone());
  }

  // Copy construction: sentinel (empty/tombstone) pointers are shared,
  // real strings get their own buffer.
  CachedHashString(const CachedHashString &Other)
      : Size(Other.Size), Hash(Other.Hash) {
    if (Other.isEmptyOrTombstone()) {
      P = Other.P;
    } else {
      // ... allocate a fresh buffer and copy Other's characters ...
    }
  }

  // Move construction: the moved-from object is left holding the empty key.
  CachedHashString(CachedHashString &&Other) noexcept
      : P(Other.P), Size(Other.Size), Hash(Other.Hash) {
    Other.P = getEmptyKeyPtr();
  }

  ~CachedHashString() {
    if (!isEmptyOrTombstone())
      delete[] P;
  }

  friend void swap(CachedHashString &LHS, CachedHashString &RHS) {
    // ...
    swap(LHS.Size, RHS.Size);
    swap(LHS.Hash, RHS.Hash);
  }

// ... DenseMapInfo<CachedHashString> ...
  static CachedHashString getEmptyKey() {
    return CachedHashString(CachedHashString::ConstructEmptyOrTombstoneTy(),
                            CachedHashString::getEmptyKeyPtr());
  }
  static CachedHashString getTombstoneKey() {
    return CachedHashString(CachedHashString::ConstructEmptyOrTombstoneTy(),
                            CachedHashString::getTombstoneKeyPtr());
  }
  static unsigned getHashValue(const CachedHashString &S) {
    assert(!isEqual(S, getEmptyKey()) && "Cannot hash the empty key!");
    assert(!isEqual(S, getTombstoneKey()) && "Cannot hash the tombstone key!");
    return S.hash();
  }
  static bool isEqual(const CachedHashString &LHS,
                      const CachedHashString &RHS) {
    // ...
    if (LHS.P == CachedHashString::getEmptyKeyPtr())
      return RHS.P == CachedHashString::getEmptyKeyPtr();
    if (LHS.P == CachedHashString::getTombstoneKeyPtr())
      return RHS.P == CachedHashString::getTombstoneKeyPtr();
    return LHS.val() == RHS.val();
  }
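Both classes carry their hash with them so the string is hashed once, at construction; the DenseMapInfo specializations above simply return the cached value. Below is a minimal usage sketch assuming only the interfaces shown on this page; the function internExample and the sample strings are illustrative, not part of the header.

#include "llvm/ADT/CachedHashString.h"
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/StringRef.h"

using namespace llvm;

// Illustrative helper: interns a few names, hashing each string exactly once.
void internExample() {
  DenseSet<CachedHashStringRef> Seen;
  StringRef Names[] = {"main", "printf", "main"};
  for (StringRef N : Names)
    // The hash is computed in the explicit CachedHashStringRef constructor
    // and cached; DenseMapInfo<CachedHashStringRef>::getHashValue returns it.
    Seen.insert(CachedHashStringRef(N));

  bool HasMain = Seen.count(CachedHashStringRef("main")) != 0;
  (void)HasMain; // Seen holds two entries here: the duplicate "main" was rejected.
}

The entries below are the member declarations and related types this page cross-references: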
CachedHashString(CachedHashString &&Other) noexcept
static unsigned getHashValue(const CachedHashStringRef &S)
static CachedHashString getTombstoneKey()
CachedHashString(StringRef S)
CachedHashString & operator=(CachedHashString Other)
static bool isEqual(const CachedHashString &LHS, const CachedHashString &RHS)
CachedHashString(StringRef S, uint32_t Hash)
CachedHashString(const char *S)
A container which contains a StringRef plus a precomputed hash.
A container which contains a string, which it owns, plus a precomputed hash.
CachedHashStringRef(StringRef S, uint32_t Hash)
static CachedHashString getEmptyKey()
StringRef - Represent a constant reference to a string, i.e. a character array and a length, which need not be null terminated.
static CachedHashStringRef getTombstoneKey()
static unsigned getHashValue(const CachedHashString &S)
CachedHashStringRef(StringRef S)
static CachedHashStringRef getEmptyKey()
friend void swap(CachedHashString &LHS, CachedHashString &RHS)
static bool isEqual(const CachedHashStringRef &LHS, const CachedHashStringRef &RHS)
const char * data() const
CachedHashString(const CachedHashString &Other)
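CachedHashString differs from CachedHashStringRef in that it owns its character buffer (which is not null-terminated). The following sketch of the resulting copy/move/swap behavior is based on the constructor, destructor, and swap excerpts above; ownershipSketch is an illustrative name, not an LLVM API.

#include "llvm/ADT/CachedHashString.h"
#include "llvm/ADT/StringRef.h"
#include <utility>

using namespace llvm;

void ownershipSketch() {
  CachedHashString A("hello");        // allocates an owned copy of "hello" and hashes it once
  CachedHashString B = A;             // copy ctor: allocates a second buffer with the same bytes
  CachedHashString C = std::move(A);  // move ctor: takes over the buffer; A now holds the empty key
  swap(B, C);                         // friend swap exchanges the pointer, size, and cached hash
  StringRef V = C.val();              // non-owning view of the characters (no null terminator)
  (void)V;
}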