void StringTableBuilder::initSize() {
  // Seed Size with the leading bytes the table kind reserves
  // (e.g. an ELF strtab starts with a NUL byte). ...
}

StringTableBuilder::StringTableBuilder(Kind K, Align Alignment)
    : K(K), Alignment(Alignment) {
  initSize();
}

using StringPair = std::pair<CachedHashStringRef, size_t>;
// Returns the character at Pos from the end of P's string, or -1 once the
// string is exhausted.
static int charTailAt(StringPair *P, size_t Pos) {
  StringRef S = P->first.val();
  if (Pos >= S.size())
    return -1;
  return (unsigned char)S[S.size() - Pos - 1];
}
// In multikeySort(), partition items so that [0, I) compare greater than
// the pivot tail character, [I, J) equal to it, and [J, Vec.size()) less.
int Pivot = charTailAt(Vec[0], Pos);
size_t I = 0;
size_t J = Vec.size();
for (size_t K = 1; K < J;) {
  int C = charTailAt(Vec[K], Pos);
  if (C > Pivot)
    std::swap(Vec[I++], Vec[K++]);
  else if (C < Pivot)
    std::swap(Vec[--J], Vec[K]);
  else
    K++;
}
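Sorting by characters from the end means each string lands immediately after any longer string that it is a suffix of, which is the property the tail-merging loop in finalizeStringTable() relies on. A minimal standalone sketch of that ordering, substituting std::sort with a tail-wise comparator for the radix sort (tailChar and tailSort are illustrative names, not LLVM API):

  #include <algorithm>
  #include <string>
  #include <vector>

  // Character at Pos counting from the end, or -1 once the string runs out.
  static int tailChar(const std::string &S, size_t Pos) {
    return Pos < S.size() ? (unsigned char)S[S.size() - Pos - 1] : -1;
  }

  static void tailSort(std::vector<std::string> &V) {
    std::sort(V.begin(), V.end(),
              [](const std::string &A, const std::string &B) {
      for (size_t Pos = 0;; ++Pos) {
        int CA = tailChar(A, Pos), CB = tailChar(B, Pos);
        if (CA != CB)
          return CA > CB; // descending, so -1 (shorter string) sorts last
        if (CA == -1)
          return false; // the strings are equal
      }
    });
  }

  // tailSort on {"bar", "baz", "foobar"} yields {"baz", "foobar", "bar"}:
  // "bar" lands directly after "foobar", whose suffix it is.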
void StringTableBuilder::finalize() {
  assert(K != DWARF);
  finalizeStringTable(/*Optimize=*/true);
}

void StringTableBuilder::finalizeInOrder() {
  finalizeStringTable(/*Optimize=*/false);
}
void StringTableBuilder::finalizeStringTable(bool Optimize) {
  Finalized = true;
  if (Optimize) {
    std::vector<StringPair *> Strings;
    Strings.reserve(StringIndexMap.size());
    for (StringPair &P : StringIndexMap)
      Strings.push_back(&P);
    multikeySort(Strings, 0); // suffixes now follow their superstrings
    initSize();
    StringRef Previous;
    for (StringPair *P : Strings) {
      StringRef S = P->first.val();
      if (Previous.endswith(S)) {
        // Suffix of the previous string: reuse its tail, store nothing.
        size_t Pos = Size - S.size() - (K != RAW);
        if (isAligned(Alignment, Pos)) {
          P->second = Pos;
          continue;
        }
      }
      Size = alignTo(Size, Alignment);
      P->second = Size;
      Size += S.size() + (K != RAW);
      Previous = S;
    }
  }
  // ...
}
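A concrete example of the merge, assuming an ELF-kind table (it starts with a NUL byte and every entry is NUL-terminated): after adding "foobar" and "bar" and calling finalize(), "bar" needs no storage of its own.

  Offset: 0   1   2   3   4   5   6   7
  Byte:   \0  f   o   o   b   a   r   \0
                          ^ getOffset("bar") == 4, pointing into "foobar"

Here Size is 8 when "bar" is laid out, so Pos = 8 - 3 - 1 = 4, and "bar" shares "foobar"'s NUL terminator.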
void StringTableBuilder::clear() {
  Finalized = false;
  StringIndexMap.clear();
}
size_t StringTableBuilder::getOffset(CachedHashStringRef S) const {
  assert(isFinalized());
  auto I = StringIndexMap.find(S);
  assert(I != StringIndexMap.end() && "String is not in table!");
  return I->second;
}
size_t StringTableBuilder::add(CachedHashStringRef S) {
  assert(!isFinalized());
  auto P = StringIndexMap.insert(std::make_pair(S, 0));
  if (P.second) {
    // New entry: next aligned offset, plus a NUL byte unless RAW.
    size_t Start = alignTo(Size, Alignment);
    P.first->second = Start;
    Size = Start + S.size() + (K != RAW);
  }
  return P.first->second;
}
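Putting the pieces together, a minimal usage sketch (emitStrtab is an illustrative name; StringTableBuilder, CachedHashStringRef, and write() are the APIs documented on this page):

  #include "llvm/ADT/CachedHashString.h"
  #include "llvm/MC/StringTableBuilder.h"
  #include "llvm/Support/raw_ostream.h"
  using namespace llvm;

  void emitStrtab(raw_ostream &OS) {
    StringTableBuilder Builder(StringTableBuilder::ELF);
    Builder.add(CachedHashStringRef("foobar"));
    Builder.add(CachedHashStringRef("bar"));

    // finalize() may reorder and tail-merge, so query offsets only after it.
    Builder.finalize();
    size_t FooOff = Builder.getOffset(CachedHashStringRef("foobar")); // 1
    size_t BarOff = Builder.getOffset(CachedHashStringRef("bar"));    // 4
    (void)FooOff; (void)BarOff;

    Builder.write(OS); // emit the raw table bytes
  }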
uint64_t alignTo(uint64_t Size, Align A)
Returns a multiple of A needed to store Size bytes.
bool isAligned(Align Lhs, uint64_t SizeInBytes)
Checks that SizeInBytes is a multiple of the alignment.
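For instance, a quick sanity check of the two helpers above (assuming they come from llvm/Support/Alignment.h):

  alignTo(10, Align(8));   // == 16: next multiple of 8 at or above 10
  alignTo(16, Align(8));   // == 16: already a multiple, unchanged
  isAligned(Align(8), 16); // true
  isAligned(Align(8), 10); // false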
StringTableBuilder(Kind K, Align Alignment=Align(1))
void write32be(void *P, uint32_t V)
void write32le(void *P, uint32_t V)
void finalizeInOrder()
Finalize the string table without reordering it.
static int charTailAt(StringPair *P, size_t Pos)
void finalize()
Analyze the strings and build the final table.
size_t getOffset(CachedHashStringRef S) const
Get the offset of a string in the string table.
MutableArrayRef - Represent a mutable reference to an array (0 or more elements consecutively in memory), i.e. a start pointer and a length.
This class implements an extremely fast bulk output stream that can only output to a stream.
This struct is a compact representation of a valid (non-zero power of two) alignment.
A container which contains a StringRef plus a precomputed hash.
MutableArrayRef< T > slice(size_t N, size_t M) const
slice(n, m) - Chop off the first N elements of the array, and keep M elements in the array.
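This is how multikeySort() recurses into its partitions, e.g. Vec.slice(0, I). A small sketch (Data is an illustrative name):

  #include "llvm/ADT/ArrayRef.h"
  using namespace llvm;

  int Data[] = {0, 1, 2, 3, 4};
  MutableArrayRef<int> Whole(Data);
  MutableArrayRef<int> Mid = Whole.slice(1, 3); // drops Data[0]; views {1, 2, 3}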
std::pair< CachedHashStringRef, size_t > StringPair
void swap(llvm::BitVector &LHS, llvm::BitVector &RHS)
Implement std::swap in terms of BitVector swap.
void write(raw_ostream &OS) const
StringRef - Represent a constant reference to a string, i.e. a character array and a length, which need not be null terminated.
static void multikeySort(MutableArrayRef< StringPair * > Vec, int Pos)
bool endswith(StringRef Suffix) const
size_t add(CachedHashStringRef S)
Add a string to the builder.
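Because add() goes through StringIndexMap.insert, adding the same string twice returns the same offset rather than growing the table. A small sketch, assuming a RAW-kind table with the default Align(1) (no leading bytes, no NUL terminators) and the same headers as the sketch above; offsets returned by add() stay valid only because finalizeInOrder() is used:

  StringTableBuilder B(StringTableBuilder::RAW);
  size_t A1 = B.add(CachedHashStringRef("abc")); // 0: new entry
  size_t A2 = B.add(CachedHashStringRef("abc")); // 0: existing entry reused
  size_t D  = B.add(CachedHashStringRef("de"));  // 3: appended after "abc"
  B.finalizeInOrder(); // no reordering, so the offsets above remain valid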