Reference documentation for deal.II version 8.1.0
Classes

  struct MinMaxAvg
  class  MPI_InitFinalize
  class  Partitioner

Functions

  unsigned int n_mpi_processes (const MPI_Comm &mpi_communicator)
  unsigned int this_mpi_process (const MPI_Comm &mpi_communicator)
  std::vector<unsigned int> compute_point_to_point_communication_pattern (const MPI_Comm &mpi_comm, const std::vector<unsigned int> &destinations)
  MPI_Comm duplicate_communicator (const MPI_Comm &mpi_communicator)
  template <typename T> T sum (const T &t, const MPI_Comm &mpi_communicator)
  template <typename T, unsigned int N> void sum (const T (&values)[N], const MPI_Comm &mpi_communicator, T (&sums)[N])
  template <typename T> void sum (const std::vector<T> &values, const MPI_Comm &mpi_communicator, std::vector<T> &sums)
  template <typename T> T max (const T &t, const MPI_Comm &mpi_communicator)
  template <typename T, unsigned int N> void max (const T (&values)[N], const MPI_Comm &mpi_communicator, T (&maxima)[N])
  template <typename T> void max (const std::vector<T> &values, const MPI_Comm &mpi_communicator, std::vector<T> &maxima)
  MinMaxAvg min_max_avg (const double my_value, const MPI_Comm &mpi_communicator)
A namespace for utility functions that abstract certain operations using the Message Passing Interface (MPI) or provide fallback operations in case deal.II is configured not to use MPI at all.
unsigned int Utilities::MPI::n_mpi_processes (const MPI_Comm &mpi_communicator)

Return the number of MPI processes that exist in the given communicator object. If this is a sequential job, it returns 1.
unsigned int Utilities::MPI::this_mpi_process (const MPI_Comm &mpi_communicator)

Return the number of the present MPI process within the space of processes described by the given communicator. This is a unique value for each process, between zero and (strictly less than) the number of all processes (given by n_mpi_processes()).
std::vector<unsigned int> Utilities::MPI::compute_point_to_point_communication_pattern (const MPI_Comm &mpi_comm, const std::vector<unsigned int> &destinations)

Consider an unstructured communication pattern where every process in an MPI universe wants to send some data to a subset of the other processors. To do that, the other processors need to know who to expect messages from. This function computes this information.

Parameters:
  mpi_comm      A communicator that describes the processors that are going to communicate with each other.
  destinations  The list of processors the current process wants to send information to. This list need not be sorted in any way. If it contains duplicate entries, that means that multiple messages are intended for a given destination.
MPI_Comm Utilities::MPI::duplicate_communicator (const MPI_Comm &mpi_communicator)

Given a communicator, generate a new communicator that contains the same set of processors but that has a different, unique identifier.

This functionality can be used to ensure that different objects, such as distributed matrices, each have unique communicators over which they can interact without interfering with each other.

When no longer needed, the communicator created here needs to be destroyed using MPI_Comm_free.
template <typename T> T Utilities::MPI::sum (const T &t, const MPI_Comm &mpi_communicator)   [inline]
Return the sum over all processors of the value t. This function is collective over all processors given in the communicator. If deal.II is not configured for use of MPI, this function simply returns the value of t. This function corresponds to the MPI_Allreduce function, i.e. all processors receive the result of this operation.

Note: Sometimes not all processors need the result, in which case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere or whether you could get away with calling the current function and getting the result everywhere.

Note: This function is only implemented for certain template arguments T, namely float, double, int, unsigned int.
template <typename T, unsigned int N> void Utilities::MPI::sum (const T (&values)[N], const MPI_Comm &mpi_communicator, T (&sums)[N])   [inline]

template <typename T> void Utilities::MPI::sum (const std::vector<T> &values, const MPI_Comm &mpi_communicator, std::vector<T> &sums)   [inline]
template <typename T> T Utilities::MPI::max (const T &t, const MPI_Comm &mpi_communicator)   [inline]
Return the maximum over all processors of the value t. This function is collective over all processors given in the communicator. If deal.II is not configured for use of MPI, this function simply returns the value of t. This function corresponds to the MPI_Allreduce function, i.e. all processors receive the result of this operation.

Note: Sometimes not all processors need the result, in which case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere or whether you could get away with calling the current function and getting the result everywhere.

Note: This function is only implemented for certain template arguments T, namely float, double, int, unsigned int.
template <typename T, unsigned int N> void Utilities::MPI::max (const T (&values)[N], const MPI_Comm &mpi_communicator, T (&maxima)[N])   [inline]

template <typename T> void Utilities::MPI::max (const std::vector<T> &values, const MPI_Comm &mpi_communicator, std::vector<T> &maxima)   [inline]
MinMaxAvg Utilities::MPI::min_max_avg (const double my_value, const MPI_Comm &mpi_communicator)   [inline]
Return the sum, average, minimum, maximum, and the processor ids of the minimum and maximum as a collective operation on the given MPI communicator mpi_communicator. Each processor's value is given in my_value and the result will be returned. The result is available on all machines.

Note: Sometimes not all processors need the result, in which case one would call the MPI_Reduce function instead of the MPI_Allreduce function. The latter is at most twice as expensive, so if you are concerned about performance, it may be worthwhile investigating whether your algorithm indeed needs the result everywhere or whether you could get away with calling the current function and getting the result everywhere.