LLVM 19.0.0git
llvm::ThreadPoolInterface Class Reference [abstract]

This defines the abstract base interface for a ThreadPool allowing asynchronous parallel execution on a defined number of threads.

#include "llvm/Support/ThreadPool.h"

Inheritance diagram for llvm::ThreadPoolInterface: implemented by llvm::SingleThreadExecutor (see the member documentation below).

Public Member Functions

virtual ~ThreadPoolInterface ()
 Destroying the pool will drain the pending tasks and wait.
 
virtual void wait ()=0
 Blocking wait for all the threads to complete and the queue to be empty.
 
virtual void wait (ThreadPoolTaskGroup &Group)=0
 Blocking wait for all the tasks in the given group to complete.
 
virtual unsigned getMaxConcurrency () const =0
 Returns the maximum number of workers this pool can eventually grow to.
 
template<typename Function , typename... Args>
auto async (Function &&F, Args &&...ArgList)
 Asynchronous submission of a task to the pool.
 
template<typename Function , typename... Args>
auto async (ThreadPoolTaskGroup &Group, Function &&F, Args &&...ArgList)
 Overload; the task will be added to the given task group.
 
template<typename Func >
auto async (Func &&F) -> std::shared_future< decltype(F())>
 Asynchronous submission of a task to the pool.
 
template<typename Func >
auto async (ThreadPoolTaskGroup &Group, Func &&F) -> std::shared_future< decltype(F())>
 

Detailed Description

This defines the abstract base interface for a ThreadPool allowing asynchronous parallel execution on a defined number of threads.

It is possible to reuse one thread pool for different groups of tasks by grouping tasks using ThreadPoolTaskGroup. All tasks are processed using the same queue, but it is possible to wait only for a specific group of tasks to finish.

It is also possible for worker threads to submit new tasks and wait for them. Note that this may result in a deadlock in cases such as when a task (directly or indirectly) tries to wait for its own completion, or when all available threads are used up by tasks waiting for a task that has no thread left to run on (this includes waiting on the returned future). It should be generally safe to wait() for a group as long as groups do not form a cycle.
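A minimal sketch of the grouping pattern described above, assuming llvm::DefaultThreadPool (declared in llvm/Support/ThreadPool.h) as the concrete pool; any ThreadPoolInterface implementation behaves the same way:

    #include "llvm/Support/ThreadPool.h"
    #include <atomic>

    void runGrouped() {
      llvm::DefaultThreadPool Pool;            // concrete pool; sized by hardware_concurrency()
      llvm::ThreadPoolTaskGroup GroupA(Pool);  // tasks tagged with this group
      llvm::ThreadPoolTaskGroup GroupB(Pool);
      std::atomic<int> Sum{0};

      for (int I = 0; I < 8; ++I)
        Pool.async(GroupA, [I, &Sum] { Sum += I; });   // group A work items
      Pool.async(GroupB, [&Sum] { Sum += 100; });      // unrelated background task

      Pool.wait(GroupA); // returns once GroupA's tasks are done; GroupB may still be running
      Pool.wait();       // drains the whole queue before the pool goes out of scope
    }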

Definition at line 49 of file ThreadPool.h.

Constructor & Destructor Documentation

◆ ~ThreadPoolInterface()

ThreadPoolInterface::~ThreadPoolInterface()
virtual, default

Destroying the pool will drain the pending tasks and wait.

The current thread may participate in the execution of the pending tasks.

Member Function Documentation

◆ async() [1/4]

template<typename Func >
auto llvm::ThreadPoolInterface::async(Func &&F) -> std::shared_future<decltype(F())>
inline

Asynchronous submission of a task to the pool.

The returned future can be used to wait for the task to finish and is non-blocking on destruction.

Definition at line 95 of file ThreadPool.h.

References F.
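A minimal usage sketch for this overload; the pool type llvm::DefaultThreadPool is an assumption, while the async()/shared_future behavior matches the documentation above:

    #include "llvm/Support/ThreadPool.h"

    int runOne() {
      llvm::DefaultThreadPool Pool;
      // Submit a no-argument callable; the result type is deduced from the call.
      std::shared_future<int> Answer = Pool.async([] { return 6 * 7; });
      // get() blocks until the task has run; discarding the future instead
      // would not block, per the note above.
      return Answer.get(); // 42
    }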

◆ async() [2/4]

template<typename Function , typename... Args>
auto llvm::ThreadPoolInterface::async(Function &&F, Args &&...ArgList)
inline

Asynchronous submission of a task to the pool.

The returned future can be used to wait for the task to finish and is non-blocking on destruction.

Definition at line 78 of file ThreadPool.h.

References async(), and F.

Referenced by async(), llvm::ThreadPoolTaskGroup::async(), llvm::gsym::DwarfTransformer::convert(), llvm::dwarf_linker::classic::DWARFLinker::link(), llvm::dwarf_linker::parallel::DWARFLinkerImpl::link(), llvm::ThinLTOCodeGenerator::run(), splitCodeGen(), and llvm::splitCodeGen().
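A sketch of this overload with forwarded arguments; scaleBy and its parameters are illustrative names only, and llvm::DefaultThreadPool is assumed as the concrete pool:

    #include "llvm/Support/ThreadPool.h"

    static int scaleBy(int X, int Factor) { return X * Factor; }

    int submitWithArgs() {
      llvm::DefaultThreadPool Pool;
      // Trailing arguments are forwarded to the callable when the task runs.
      auto Future = Pool.async(scaleBy, /*X=*/10, /*Factor=*/3);
      return Future.get(); // 30
    }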

◆ async() [3/4]

template<typename Func >
auto llvm::ThreadPoolInterface::async(ThreadPoolTaskGroup &Group, Func &&F) -> std::shared_future<decltype(F())>
inline

Definition at line 101 of file ThreadPool.h.

References F.

◆ async() [4/4]

template<typename Function , typename... Args>
auto llvm::ThreadPoolInterface::async(ThreadPoolTaskGroup &Group, Function &&F, Args &&...ArgList)
inline

Overload; the task will be added to the given task group.

Definition at line 86 of file ThreadPool.h.

References async(), and F.
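A sketch of the group-plus-arguments form; processRange and the ranges are placeholders, and llvm::DefaultThreadPool is assumed:

    #include "llvm/Support/ThreadPool.h"

    static void processRange(unsigned Begin, unsigned End) {
      // ... per-range work would go here ...
      (void)Begin;
      (void)End;
    }

    void submitChunks() {
      llvm::DefaultThreadPool Pool;
      llvm::ThreadPoolTaskGroup Chunks(Pool);
      Pool.async(Chunks, processRange, 0u, 1024u);    // lands in the Chunks group
      Pool.async(Chunks, processRange, 1024u, 2048u);
      Pool.wait(Chunks); // waits only for the two range tasks
    }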

◆ getMaxConcurrency()

virtual unsigned llvm::ThreadPoolInterface::getMaxConcurrency() const
pure virtual

Returns the maximum number of workers this pool can eventually grow to.

Implemented in llvm::SingleThreadExecutor.
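A sketch of using the returned value to size a work split into one shard per potential worker; the pool type llvm::DefaultThreadPool and the element count are assumptions:

    #include "llvm/Support/ThreadPool.h"

    void shardedWork() {
      llvm::DefaultThreadPool Pool;
      const unsigned NumWorkers = Pool.getMaxConcurrency();
      // One shard per potential worker rather than one task per element.
      for (unsigned Shard = 0; Shard < NumWorkers; ++Shard)
        Pool.async([Shard, NumWorkers] {
          for (unsigned I = Shard; I < 1024; I += NumWorkers) {
            // ... process element I ...
          }
        });
      Pool.wait();
    }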

◆ wait() [1/2]

virtual void llvm::ThreadPoolInterface::wait()
pure virtual

Blocking wait for all the threads to complete and the queue to be empty.

It is an error to try to add new tasks while blocking on this call. Calling wait() from a task would deadlock waiting for itself.

Implemented in llvm::SingleThreadExecutor.

Referenced by llvm::ThreadPoolTaskGroup::wait().

◆ wait() [2/2]

virtual void llvm::ThreadPoolInterface::wait(ThreadPoolTaskGroup &Group)
pure virtual

Blocking wait for all the tasks in the given group to complete.

It is possible to wait even inside a task, but waiting (directly or indirectly) on itself will deadlock. If called from a task running on a worker thread, the call may process pending tasks while waiting in order not to waste the thread.

Implemented in llvm::SingleThreadExecutor.
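A sketch of waiting on a group from inside a task, per the note above; the function name and the pool type (llvm::DefaultThreadPool) are assumptions:

    #include "llvm/Support/ThreadPool.h"

    void nestedGroups() {
      llvm::DefaultThreadPool Pool;
      llvm::ThreadPoolTaskGroup Outer(Pool);
      Pool.async(Outer, [&Pool] {
        llvm::ThreadPoolTaskGroup Inner(Pool); // subgroup local to this task
        for (int I = 0; I < 4; ++I)
          Pool.async(Inner, [I] { (void)I; /* subtask work */ });
        // Safe: Inner never waits on Outer, so the groups do not form a cycle.
        // While blocked here, the worker may pick up other pending tasks.
        Pool.wait(Inner);
      });
      Pool.wait(Outer);
    }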


The documentation for this class was generated from the following file:
llvm/Support/ThreadPool.h