MPI collectives Flashcards

1
Q

what kinds of collective operations are there?

A

synchronization
communication (data movement)
reduction

2
Q

properties of MPI collectives

A

must be called by all processes in the communicator
and issued in the same order on every process

3
Q

barrier

A

int MPI_Barrier(MPI_Comm communicator);

synchronization operation: no process returns until all processes in the communicator have called it
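
A minimal usage sketch (the printed messages are just illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d reached the barrier\n", rank);
    // no process continues past this point until all have arrived
    MPI_Barrier(MPI_COMM_WORLD);
    printf("rank %d passed the barrier\n", rank);
    MPI_Finalize();
    return 0;
}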

4
Q

broadcast

A

int MPI_Bcast(void* buffer,
int count,
MPI_Datatype datatype,
int emitter_rank,
MPI_Comm communicator);

MPI_Bcast(&buffer, 1, MPI_INT, broadcast_root, MPI_COMM_WORLD);

no implicit synchronization: returning from MPI_Bcast does not guarantee that the other ranks have already received the data

5
Q

gather

A

int MPI_Gather(const void* buffer_send,
int count_send,
MPI_Datatype datatype_send,
void* buffer_recv,
int count_recv,
MPI_Datatype datatype_recv,
int root,
MPI_Comm communicator);

MPI_Gather(&my_value, 1, MPI_INT, buffer, 1, MPI_INT, root_rank, MPI_COMM_WORLD);

the root stores the received elements in the receive buffer ordered by sender rank

6
Q

scatter

A

int MPI_Scatter(const void* buffer_send,
int count_send,
MPI_Datatype datatype_send,
void* buffer_recv,
int count_recv,
MPI_Datatype datatype_recv,
int root,
MPI_Comm communicator);

MPI_Scatter(buffer, 1, MPI_INT, &my_value, 1, MPI_INT, root_rank, MPI_COMM_WORLD);

7
Q

gatherv

A

like gather, but each process may send a different number of elements; the root supplies per-rank receive counts and displacements

int MPI_Gatherv(const void* buffer_send,
int count_send,
MPI_Datatype datatype_send,
void* buffer_recv,
const int* counts_recv,
const int* displacements,
MPI_Datatype datatype_recv,
int root,
MPI_Comm communicator);

MPI_Gatherv(&my_value, 1, MPI_INT, buffer, counts, displacements, MPI_INT, root_rank, MPI_COMM_WORLD);
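
A sketch of how the root might build the counts and displacements arrays (assumed layout: rank i sends i + 1 elements; names are illustrative):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // rank i contributes i + 1 copies of its rank
    int count_send = rank + 1;
    int* my_values = malloc(count_send * sizeof(int));
    for (int i = 0; i < count_send; i++)
        my_values[i] = rank;

    // the root builds per-rank receive counts and element displacements
    int root_rank = 0;
    int *counts = NULL, *displacements = NULL, *buffer = NULL;
    if (rank == root_rank) {
        counts = malloc(size * sizeof(int));
        displacements = malloc(size * sizeof(int));
        int total = 0;
        for (int i = 0; i < size; i++) {
            counts[i] = i + 1;        // must match each rank's count_send
            displacements[i] = total; // offset in elements, not bytes
            total += counts[i];
        }
        buffer = malloc(total * sizeof(int));
    }

    MPI_Gatherv(my_values, count_send, MPI_INT,
                buffer, counts, displacements, MPI_INT,
                root_rank, MPI_COMM_WORLD);

    if (rank == root_rank) {
        for (int i = 0; i < size * (size + 1) / 2; i++)
            printf("%d ", buffer[i]);
        printf("\n");
        free(counts); free(displacements); free(buffer);
    }
    free(my_values);
    MPI_Finalize();
    return 0;
}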

8
Q

alltoall

A

int MPI_Alltoall(const void* buffer_send,
int count_send,
MPI_Datatype datatype_send,
void* buffer_recv,
int count_recv,
MPI_Datatype datatype_recv,
MPI_Comm communicator);

MPI_Alltoall(my_values, 1, MPI_INT, buffer_recv, 1, MPI_INT, MPI_COMM_WORLD);

every process sends a distinct block of data to every other process (and receives one from each)

expensive: traffic grows quadratically with the number of processes, so it is often a scaling bottleneck

9
Q

other gather/scatter/alltoall variants

A

allgather: gather + broadcast of the result

allgatherv: allgather with per-rank counts and displacements

scatterv: scatter with per-rank counts and displacements

alltoallv: alltoall with per-rank counts and displacements

alltoallw: additionally allows a different datatype per rank: buffer + counts[] + displacements[] + datatypes[]

int MPI_Alltoallw(const void* buffer_send,
const int counts_send[],
const int displacements_send[],
const MPI_Datatype datatypes_send[],
void* buffer_recv,
const int counts_recv[],
const int displacements_recv[],
const MPI_Datatype datatypes_recv[],
MPI_Comm communicator);

for MPI_Alltoallw, displacements are given in bytes rather than in numbers of elements
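
The variants above carry no signatures on this card; as a representative example, a minimal MPI_Allgather sketch (variable names illustrative):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // every rank contributes one value and receives the full array,
    // ordered by rank, without needing a designated root
    int my_value = rank * 10;
    int* buffer_recv = malloc(size * sizeof(int));
    MPI_Allgather(&my_value, 1, MPI_INT, buffer_recv, 1, MPI_INT, MPI_COMM_WORLD);

    printf("rank %d received:", rank);
    for (int i = 0; i < size; i++)
        printf(" %d", buffer_recv[i]);
    printf("\n");

    free(buffer_recv);
    MPI_Finalize();
    return 0;
}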

10
Q

reduction

A

int MPI_Reduce(const void* send_buffer,
void* receive_buffer,
int count,
MPI_Datatype datatype,
MPI_Op operation,
int root,
MPI_Comm communicator);

MPI_Reduce(&my_rank, &reduction_result, 1, MPI_INT, MPI_SUM, root_rank, MPI_COMM_WORLD);

example operations: MPI_MAX, MPI_PROD, MPI_SUM

11
Q

other reduce variants

A

allreduce: reduce + broadcast of the result

reduce_scatter_block: reduce + scatter in equal-sized chunks

reduce_scatter: reduce + scatter with per-rank chunk sizes

MPI_Scan: inclusive prefix reduction; rank i receives the reduction of the values from ranks 0 through i
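
A minimal sketch contrasting MPI_Allreduce with MPI_Scan (summing ranks is just an illustrative choice):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int total, prefix;
    // every rank receives the sum over all ranks
    MPI_Allreduce(&rank, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    // rank i receives the sum over ranks 0..i (inclusive prefix)
    MPI_Scan(&rank, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: total = %d, prefix = %d\n", rank, total, prefix);
    MPI_Finalize();
    return 0;
}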

12
Q

communicator creation

A

either duplicate:
int MPI_Comm_dup(MPI_Comm old_comm,
MPI_Comm* new_comm);

release when no longer needed:
int MPI_Comm_free(MPI_Comm* comm);

or split:
int MPI_Comm_split(MPI_Comm old_communicator,
int colour,
int key,
MPI_Comm* new_communicator);
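
A minimal sketch splitting MPI_COMM_WORLD into even and odd ranks (the colour/key choice is illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    // processes with the same colour land in the same new communicator;
    // key orders the ranks within it (here: keep the original order)
    int colour = world_rank % 2;
    MPI_Comm new_comm;
    MPI_Comm_split(MPI_COMM_WORLD, colour, world_rank, &new_comm);

    int new_rank;
    MPI_Comm_rank(new_comm, &new_rank);
    printf("world rank %d -> colour %d, new rank %d\n",
           world_rank, colour, new_rank);

    MPI_Comm_free(&new_comm);
    MPI_Finalize();
    return 0;
}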
