Message Passing Interface Flashcards
Purpose of MPI_Init
Initializes MPI
Purpose of MPI_Finalize
Terminates MPI
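A minimal sketch of the structure these two cards describe: every MPI program brackets its MPI calls between MPI_Init and MPI_Finalize.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);   /* initialize the MPI environment */
    printf("Hello from an MPI process\n");
    MPI_Finalize();           /* release MPI resources; no MPI calls after this */
    return 0;
}
```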
What do MPI functions return?
An error code; MPI_SUCCESS if the call completed successfully.
What is a communicator?
Abstract description of a group of processes.
What is MPI_COMM_WORLD?
Describes all the processes involved in your parallel run.
What is a processor rank?
A unique identifier assigned to each process within a communicator.
How does one find the rank of a process within a communicator?
MPI_Comm_rank()
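A short sketch of querying rank and communicator size (the printed message is illustrative):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank: 0..size-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("I am rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```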
How are ranks numbered within a communicator?
From zero to the number of processes minus 1
What does MPI_Comm_split do?
Divides a communicator into disjoint subcommunicators.
What role does the color argument to MPI_Comm_split play?
Controls subset assignment: processes that pass the same color are placed in the same subcommunicator.
What role does the key argument to MPI_Comm_split play?
Controls the rank ordering of processes within each new subcommunicator.
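A sketch of splitting MPI_COMM_WORLD by parity; the even/odd grouping is just an illustrative choice of color, and passing the world rank as key preserves the original relative ordering.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int world_rank, local_rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int color = world_rank % 2;            /* even ranks -> one group, odd -> another */
    MPI_Comm newcomm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &newcomm);
    MPI_Comm_rank(newcomm, &local_rank);   /* local numbering within the subcommunicator */
    printf("global rank %d -> local rank %d in group %d\n",
           world_rank, local_rank, color);
    MPI_Comm_free(&newcomm);
    MPI_Finalize();
    return 0;
}
```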
What is global numbering?
Numbering the processes across the entire parallel run, i.e., their ranks in MPI_COMM_WORLD.
What is local numbering?
Numbering the processes within a particular communicator
What does it mean when an MPI program is loosely synchronous?
Tasks synchronize to perform interactions; otherwise, they run asynchronously.
What 3 pieces of information describe the data being sent to a collective communication operation?
Send Buffer
Count
Data Type
What function does MPI_Barrier perform?
Blocks each process until all processes in the communicator have reached the barrier, synchronizing them.
What function does MPI_Bcast perform?
Communicates data from one (root) process to all other processes in the communicator.
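A broadcast sketch; the value 42 and root rank 0 are illustrative. Note the buffer/count/datatype triple the earlier card mentions.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int value = 0, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) value = 42;   /* only the root has the data initially */
    /* buffer, count, datatype, root, communicator */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d now has value %d\n", rank, value);
    MPI_Finalize();
    return 0;
}
```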
What function does MPI_Reduce perform?
Combines data from all processes into a single result using a reduction operation (e.g., sum, max).
What is the difference between MPI_Reduce and MPI_Allreduce?
MPI_Reduce delivers the result only to the root process; MPI_Allreduce returns the same result to all processes.
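A sketch contrasting the two calls, summing the ranks (MPI_SUM over each process's rank is just an illustrative reduction):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, sum_at_root = 0, sum_everywhere = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* MPI_Reduce: only the root (rank 0) receives the combined sum */
    MPI_Reduce(&rank, &sum_at_root, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    /* MPI_Allreduce: every process receives the same combined sum */
    MPI_Allreduce(&rank, &sum_everywhere, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    printf("rank %d: allreduce sum = %d\n", rank, sum_everywhere);
    MPI_Finalize();
    return 0;
}
```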
What does MPI_Scatter do?
Distributes data from a root process so that each process receives a different subset of it.
What does MPI_Gather do?
Collect data from all processes to a single root process.
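A combined sketch of the two cards above: the root scatters one element to each process, each process works on its piece, and the root gathers the results back (the squaring and increment are illustrative work):

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *all = NULL;
    if (rank == 0) {                       /* only the root owns the full array */
        all = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) all[i] = i * i;
    }
    int mine;
    /* each process receives one element of the root's array */
    MPI_Scatter(all, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
    mine += 1;                             /* work on the local piece */
    /* the root collects the modified pieces back, in rank order */
    MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) free(all);
    MPI_Finalize();
    return 0;
}
```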
What does MPI_Alltoall do?
Every process sends a distinct piece of data to every other process; conceptually, a collection of simultaneous scatters and gathers.
What does MPI_Scan do?
Performs a running reduction, but it keeps the partial results.
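A prefix-sum sketch: rank i receives the sum of the contributions from ranks 0 through i inclusive (contributing rank+1 is an illustrative choice).

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, prefix_sum;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int contribution = rank + 1;
    /* rank i receives contribution(0) + ... + contribution(i) */
    MPI_Scan(&contribution, &prefix_sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    printf("rank %d: prefix sum = %d\n", rank, prefix_sum);  /* 1, 3, 6, ... */
    MPI_Finalize();
    return 0;
}
```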
What does it mean if message-passing operations are buffered?
A send operation can complete regardless of whether the matching receive has been posted; the message is held in a buffer until the receive occurs.
What does it mean if message-passing operations are blocking?
The sending process is blocked until the receiving process has received the message.
How many calls does it take to receive a message using a non-blocking protocol?
2 Calls:
The first call initiates the receive operation and specifies the buffer size where the message will be stored.
The second call checks whether the receive operation has been completed.
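A two-process sketch of the two-call protocol: MPI_Irecv initiates the receive, and MPI_Wait (or MPI_Test, to poll without blocking) completes it. The payload value is illustrative.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, data = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        data = 99;
        MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Request req;
        /* call 1: initiate the receive and name the buffer */
        MPI_Irecv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        /* ... overlap other useful work here ... */
        /* call 2: block until the message has actually arrived */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("received %d\n", data);
    }
    MPI_Finalize();
    return 0;
}
```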
What impact does buffering have on program portability?
Buffer sizes differ between systems, so a program that relies on buffering may run correctly on one machine but deadlock on another; such programs are not portable.
What techniques can be used to avoid deadlock in MPI communication?
Use nonblocking communication
Use tie-breaking to coordinate communication
Use the MPI_Sendrecv call to combine a send and a receive and break circular dependencies between send and recv calls.
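A sketch of the MPI_Sendrecv technique from the last card, as a ring exchange: each rank sends to its right neighbor and receives from its left in a single call, so no cyclic send/receive dependency can deadlock even without buffering.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, recvval;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int right = (rank + 1) % size;
    int left  = (rank + size - 1) % size;
    /* send to the right and receive from the left in one combined call */
    MPI_Sendrecv(&rank, 1, MPI_INT, right, 0,
                 &recvval, 1, MPI_INT, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("rank %d received %d from rank %d\n", rank, recvval, left);
    MPI_Finalize();
    return 0;
}
```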