LLVM 18.0.0git
llvm::ThreadPool Class Reference

A ThreadPool for asynchronous parallel execution on a defined number of threads. More...

#include "llvm/Support/ThreadPool.h"

Public Member Functions

 ThreadPool (ThreadPoolStrategy S=hardware_concurrency())
 Construct a pool using the hardware strategy S for mapping hardware execution resources (threads, cores, CPUs). Defaults to using the maximum execution resources in the system, but accounting for the affinity mask.
 
 ~ThreadPool ()
 Blocking destructor: the pool will wait for all the threads to complete.
 
template<typename Function , typename... Args>
auto async (Function &&F, Args &&...ArgList)
 Asynchronous submission of a task to the pool.
 
template<typename Function , typename... Args>
auto async (ThreadPoolTaskGroup &Group, Function &&F, Args &&...ArgList)
 Overload; the task will be added to the given task group.
 
template<typename Func >
auto async (Func &&F) -> std::shared_future< decltype(F())>
 Asynchronous submission of a task to the pool.
 
template<typename Func >
auto async (ThreadPoolTaskGroup &Group, Func &&F) -> std::shared_future< decltype(F())>
 
void wait ()
 Blocking wait for all queued tasks to complete and the queue to be empty.
 
void wait (ThreadPoolTaskGroup &Group)
 Blocking wait for all the tasks in the given group to complete.
 
unsigned getThreadCount () const
 
bool isWorkerThread () const
 Returns true if the current thread is a worker thread of this thread pool.
 

Detailed Description

A ThreadPool for asynchronous parallel execution on a defined number of threads.

The pool keeps a vector of threads alive, waiting on a condition variable for some work to become available.

It is possible to reuse one thread pool for different groups of tasks by grouping tasks using ThreadPoolTaskGroup. All tasks are processed using the same queue, but it is possible to wait only for a specific group of tasks to finish.

It is also possible for worker threads to submit new tasks and wait for them. Note that this may result in a deadlock in cases such as when a task (directly or indirectly) tries to wait for its own completion, or when all available threads are used up by tasks waiting for a task that has no thread left to run on (this includes waiting on the returned future). It should be generally safe to wait() for a group as long as groups do not form a cycle.

Definition at line 52 of file ThreadPool.h.
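
A minimal usage sketch (not part of the generated documentation; the function name and task body are illustrative), assuming an LLVM build environment:

  #include "llvm/Support/ThreadPool.h"
  #include <atomic>

  void runParallelWork() {
    // Pool sized by the default hardware_concurrency() strategy.
    llvm::ThreadPool Pool;

    std::atomic<unsigned> Counter{0};
    for (unsigned I = 0; I < 16; ++I)
      Pool.async([&Counter] { ++Counter; });

    // Block until the queue is empty and every submitted task has run.
    Pool.wait();
    // Counter is now 16.
  }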

Constructor & Destructor Documentation

◆ ThreadPool()

ThreadPool::ThreadPool(ThreadPoolStrategy S = hardware_concurrency())

Construct a pool using the hardware strategy S for mapping hardware execution resources (threads, cores, CPUs). Defaults to using the maximum execution resources in the system, but accounting for the affinity mask.

Definition at line 194 of file ThreadPool.cpp.

References llvm::ThreadPoolStrategy::compute_thread_count(), and llvm::errs().
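
As an illustration (a sketch, not taken from the generated docs; the function name is invented), the strategy can be left at its default or supplied explicitly with the helpers from llvm/Support/Threading.h:

  #include "llvm/Support/ThreadPool.h"
  #include "llvm/Support/Threading.h"

  void constructPools() {
    // Default strategy: all hardware threads, honoring the affinity mask.
    llvm::ThreadPool DefaultPool;

    // Explicit cap: at most four worker threads.
    llvm::ThreadPool SmallPool(llvm::hardware_concurrency(4));

    // One worker per physical core rather than per hardware thread.
    llvm::ThreadPool PerCorePool(llvm::heavyweight_hardware_concurrency());
  }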

◆ ~ThreadPool()

ThreadPool::~ThreadPool()

Blocking destructor: the pool will wait for all the threads to complete.

Definition at line 221 of file ThreadPool.cpp.

References wait().

Member Function Documentation

◆ async() [1/4]

template<typename Func >
auto llvm::ThreadPool::async(Func &&F) -> std::shared_future<decltype(F())>  [inline]

Asynchronous submission of a task to the pool.

The returned future can be used to wait for the task to finish and is non-blocking on destruction.

Definition at line 83 of file ThreadPool.h.

References F.
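
A short sketch of this overload (the callable and function name are illustrative): the nullary callable's return type determines the value type of the returned future.

  #include "llvm/Support/ThreadPool.h"
  #include <future>

  int computeAnswer() {
    llvm::ThreadPool Pool;

    // The result type is deduced from decltype(F()), here int.
    std::shared_future<int> Answer = Pool.async([] { return 6 * 7; });

    // get() blocks until the task has executed on a worker thread.
    return Answer.get(); // 42
  }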

◆ async() [2/4]

template<typename Function , typename... Args>
auto llvm::ThreadPool::async(Function &&F, Args &&...ArgList)  [inline]

Asynchronous submission of a task to the pool.

The returned future can be used to wait for the task to finish and is non-blocking on destruction.

Definition at line 66 of file ThreadPool.h.

References async(), and F.

Referenced by async(), llvm::ThreadPoolTaskGroup::async(), llvm::gsym::DwarfTransformer::convert(), llvm::DWARFLinker::link(), llvm::dwarflinker_parallel::DWARFLinkerImpl::link(), llvm::ThinLTOCodeGenerator::run(), splitCodeGen(), and llvm::splitCodeGen().
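
An illustrative sketch of this overload (scale and its arguments are invented for the example): the extra arguments are bound into the task and forwarded to the callable when it runs on a worker thread.

  #include "llvm/Support/ThreadPool.h"

  static int scale(int X, int Factor) { return X * Factor; }

  int scaleAsync() {
    llvm::ThreadPool Pool;

    // The function pointer and its arguments form one queued task.
    auto Future = Pool.async(scale, 21, 2);
    return Future.get(); // 42
  }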

◆ async() [3/4]

template<typename Func >
auto llvm::ThreadPool::async(ThreadPoolTaskGroup &Group, Func &&F) -> std::shared_future<decltype(F())>  [inline]

Definition at line 89 of file ThreadPool.h.

References F.

◆ async() [4/4]

template<typename Function , typename... Args>
auto llvm::ThreadPool::async(ThreadPoolTaskGroup &Group, Function &&F, Args &&...ArgList)  [inline]

Overload; the task will be added to the given task group.

Definition at line 74 of file ThreadPool.h.

References async(), and F.
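
A sketch of grouped submission (the group names are illustrative): tasks in different groups share one queue and one set of worker threads, but each group can be waited on separately.

  #include "llvm/Support/ThreadPool.h"

  void runGroupedTasks() {
    llvm::ThreadPool Pool;
    llvm::ThreadPoolTaskGroup ParseGroup(Pool);
    llvm::ThreadPoolTaskGroup EmitGroup(Pool);

    Pool.async(ParseGroup, [] { /* parse one input */ });
    Pool.async(EmitGroup, [] { /* emit one output */ });

    // Waits only for the tasks submitted to ParseGroup.
    Pool.wait(ParseGroup);

    // Any remaining EmitGroup tasks are drained when the group and pool
    // destructors run at the end of the scope.
  }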

◆ getThreadCount()

unsigned llvm::ThreadPool::getThreadCount() const  [inline]

Definition at line 110 of file ThreadPool.h.

◆ isWorkerThread()

bool ThreadPool::isWorkerThread() const

Returns true if the current thread is a worker thread of this thread pool.

Definition at line 217 of file ThreadPool.cpp.

References llvm::report_fatal_error().

◆ wait() [1/2]

void ThreadPool::wait()

Blocking wait for all queued tasks to complete and the queue to be empty.

It is an error to try to add new tasks while blocking on this call. Calling wait() from a task would deadlock waiting for itself.

Definition at line 202 of file ThreadPool.cpp.

Referenced by llvm::gsym::DwarfTransformer::convert(), llvm::DWARFLinker::link(), llvm::dwarflinker_parallel::DWARFLinkerImpl::link(), splitCodeGen(), llvm::ThreadPoolTaskGroup::wait(), wait(), and ~ThreadPool().
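
A sketch of the intended calling pattern (the loop body is a placeholder): submit from a controlling thread, then drain with wait() from that same thread.

  #include "llvm/Support/ThreadPool.h"
  #include <future>
  #include <vector>

  void submitAndDrain() {
    llvm::ThreadPool Pool;
    std::vector<std::shared_future<void>> Futures;

    for (unsigned I = 0, E = Pool.getThreadCount(); I != E; ++I)
      Futures.push_back(Pool.async([I] { (void)I; /* work on chunk I */ }));

    // Drain from the submitting thread; calling Pool.wait() inside one of
    // the tasks above would deadlock on that task's own completion.
    Pool.wait();
  }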

◆ wait() [2/2]

void ThreadPool::wait(ThreadPoolTaskGroup &Group)

Blocking wait for all the tasks in the given group to complete.

It is possible to wait even inside a task, but waiting (directly or indirectly) on itself will deadlock. If called from a task running on a worker thread, the call may process pending tasks while waiting in order not to waste the thread.

Definition at line 211 of file ThreadPool.cpp.

References wait().
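
A sketch of waiting on a group from inside a task (the group names are illustrative); this is safe because the two groups do not wait on each other in a cycle:

  #include "llvm/Support/ThreadPool.h"

  void waitAcrossGroups() {
    llvm::ThreadPool Pool;
    llvm::ThreadPoolTaskGroup Producers(Pool);
    llvm::ThreadPoolTaskGroup Consumers(Pool);

    Pool.async(Producers, [] { /* produce data */ });

    Pool.async(Consumers, [&Pool, &Producers] {
      // A worker task may block on another group; while waiting it may
      // pick up pending tasks instead of leaving its thread idle.
      Pool.wait(Producers);
      /* consume the produced data */
    });

    Pool.wait(); // drain both groups before the pool is destroyed
  }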


The documentation for this class was generated from the following files:
ThreadPool.h
ThreadPool.cpp