LLVM 15.0.0git
Notes from the SystemZ backend README.txt

The initial backend is deliberately restricted to z10. We should add support
for later architectures at some point.

--

If an inline asm ties an i32 "r" result to an i64 input, the input will be
treated as an i32, leaving the upper bits uninitialised. For example, a
function that stores such a result (store i32 %val, i32 *%dst; ret void), as
in the CodeGen/SystemZ asm tests, will use LHI rather than LGHI to load the
tied constant (see the sketch below). This seems to be a general
target-independent problem.
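A minimal sketch of the IR involved, reconstructed from the fragments above;
the asm string, function name and tied constant did not survive, so the ones
here are placeholders:

    define void @f(i32 *%dst) {
      ; The "0" constraint ties the i32 result to the i64 operand, so the
      ; constant should be loaded with LGHI; LHI leaves bits 32-63 undefined.
      %val = call i32 asm "blah $0", "=r,0" (i64 1)
      store i32 %val, i32 *%dst
      ret void
    }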
--

The tuning of the choice between LOAD ADDRESS (LA) and addition in
SystemZISelDAGToDAG.cpp is suspect. It should be tweaked based on
performance measurements.

--

There is no scheduling support.

--

We don't use the BRANCH ON INDEX instructions.

--

We only use MVC, XC and CLC for constant-length block operations.
We could extend them to variable-length operations too.
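To illustrate the constant-length case: a memcpy whose length is known at
compile time and is at most 256 bytes can be selected as a single MVC. The
function name below is made up; the intrinsic and its signature are standard:

    declare void @llvm.memcpy.p0i8.p0i8.i64(i8 *, i8 *, i64, i1)

    define void @copy100(i8 *%dst, i8 *%src) {
      ; Constant length <= 256 bytes: one MVC.
      call void @llvm.memcpy.p0i8.p0i8.i64(i8 *%dst, i8 *%src, i64 100,
                                           i1 false)
      ret void
    }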
--

Truncations of extended loads are not yet folded. Functions that zero-extend
a loaded halfword to i64 and then use only 32 bits of it therefore end up as:

        llgh    %r0, 0(%r3)
        lr      %r2, %r0
        br      %r14

but truncating the load would give:

        lh      %r2, 0(%r3)
        br      %r14
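The source function behind this example did not survive extraction. A
hypothetical reduction with the same shape (the unused i64 argument keeps the
pointer in %r3, matching the code above):

    define i32 @f(i64 %x, i16 *%y) {
      %half = load i16, i16 *%y
      %ext = zext i16 %half to i64   ; selected as LLGH
      %res = trunc i64 %ext to i32   ; folding this into the load gives LH
      ret i32 %res
    }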
--

Functions that AND an i64 argument with a constant and return the result
(ret i64 %and) ought to be implemented as:

        ngr     %r2, %r0
        br      %r14

(with the constant already in %r0), but two-address optimizations reverse the
order of the AND and force:

        ngr     %r0, %r2
        lgr     %r2, %r0
        br      %r14

The CodeGen/SystemZ 'and' tests have several examples of this.
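A hypothetical function of this kind; only "ret i64 %and" survives in the
fragments, so the mask constant is a guess:

    define i64 @f1(i64 %a) {
      %and = and i64 %a, 1
      ret i64 %and
    }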
--

Out-of-range displacements are usually handled by loading the full address
into a register. In many cases it would be better to create an anchor point
instead, e.g. for a function that addresses several out-of-range offsets from
an i64 %base argument (see the sketch below).
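A hypothetical example, assuming the usual signed 20-bit displacement range
of -524288..524287: both offsets are just out of range, so each address is
currently materialised in full, whereas a single anchor at %base+524288 would
put both stores in range at displacements 0 and 8:

    define void @f(i64 %base) {
      %addr1 = add i64 %base, 524288
      %ptr1 = inttoptr i64 %addr1 to i64 *
      store volatile i64 0, i64 *%ptr1
      %addr2 = add i64 %base, 524296
      %ptr2 = inttoptr i64 %addr2 to i64 *
      store volatile i64 0, i64 *%ptr2
      ret void
    }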