MPI collectives Flashcards
what kinds of collective operations are there?
synchronization
communication
reduction
properties of MPI collectives
must be called by all processes of the communicator
in the same sequence on every process
barrier
int MPI_Barrier(MPI_Comm communicator);
synchronization operation
broadcast
int MPI_Bcast(void* buffer,
int count,
MPI_Datatype datatype,
int emitter_rank,
MPI_Comm communicator);
MPI_Bcast(&buffer, 1, MPI_INT, broadcast_root, MPI_COMM_WORLD);
not a synchronization point: returning from the call does not imply the other ranks have reached it
gather
int MPI_Gather(const void* buffer_send,
int count_send,
MPI_Datatype datatype_send,
void* buffer_recv,
int count_recv,
MPI_Datatype datatype_recv,
int root,
MPI_Comm communicator);
MPI_Gather(&my_value, 1, MPI_INT, buffer, 1, MPI_INT, root_rank, MPI_COMM_WORLD);
the root stores the received data ordered by rank
scatter
int MPI_Scatter(const void* buffer_send,
int count_send,
MPI_Datatype datatype_send,
void* buffer_recv,
int count_recv,
MPI_Datatype datatype_recv,
int root,
MPI_Comm communicator);
MPI_Scatter(buffer, 1, MPI_INT, &my_value, 1, MPI_INT, root_rank, MPI_COMM_WORLD);
gatherv
like gather, but each process may send a different number of elements
int MPI_Gatherv(const void* buffer_send,
int count_send,
MPI_Datatype datatype_send,
void* buffer_recv,
const int* counts_recv,
const int* displacements,
MPI_Datatype datatype_recv,
int root,
MPI_Comm communicator);
MPI_Gatherv(&my_value, 1, MPI_INT, buffer, counts, displacements, MPI_INT, root_rank, MPI_COMM_WORLD);
alltoall
int MPI_Alltoall(const void* buffer_send,
int count_send,
MPI_Datatype datatype_send,
void* buffer_recv,
int count_recv,
MPI_Datatype datatype_recv,
MPI_Comm communicator);
MPI_Alltoall(&my_values, 1, MPI_INT, buffer_recv, 1, MPI_INT, MPI_COMM_WORLD);
sends data from every process to every process
expensive; a scaling bottleneck on large communicators
other gather/scatter/alltoall variants
allgather
allgatherv
scatterv
alltoallv
alltoallw: additionally allows different datatypes per process: buffer + counts[] + displacements[] + datatypes[]
int MPI_Alltoallw(const void* buffer_send,
const int counts_send[],
const int displacements_send[],
const MPI_Datatype datatypes_send[],
void* buffer_recv,
const int counts_recv[],
const int displacements_recv[],
const MPI_Datatype datatypes_recv[],
MPI_Comm communicator);
here the displacements are given in bytes, not in numbers of elements
reduction
int MPI_Reduce(const void* send_buffer,
void* receive_buffer,
int count,
MPI_Datatype datatype,
MPI_Op operation,
int root,
MPI_Comm communicator);
MPI_Reduce(&my_rank, &reduction_result, 1, MPI_INT, MPI_SUM, root_rank, MPI_COMM_WORLD);
example operations
MPI_MAX, MPI_PROD, MPI_SUM
other reduce variants
allreduce: reduce + broadcast (every rank gets the result)
reduce_scatter_block: reduce + scatter into equal chunks
reduce_scatter: reduce + scatter into (possibly) unequal chunks
MPI_Scan: inclusive prefix reduction (rank r receives the reduction over ranks 0..r)
communicator creation
either duplicate
int MPI_Comm_dup(MPI_Comm old_comm,
MPI_Comm* new_comm);
int MPI_Comm_free(MPI_Comm* comm);
or split
int MPI_Comm_split(MPI_Comm old_communicator,
int colour,
int key,
MPI_Comm* new_communicator);