MPI Flashcards
MPI_Init(NULL, NULL)
- Decides which processes get which rank
- Allocates storage for message buffers
- Defines a communicator that consists of all the processes started by the user at program start-up. This communicator is called MPI_COMM_WORLD.
MPI_Finalize()
- Indicates that any resources allocated for MPI can be freed
MPI_Comm_size(MPI_COMM_WORLD, &comm_sz)
- Sets comm_sz to the number of processes in the communicator
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank)
- Sets my_rank to the rank of the calling process
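A minimal sketch tying these four calls together (the variable names are illustrative, not prescribed by MPI):

    #include <stdio.h>
    #include <mpi.h>

    int main(void) {
        int comm_sz, my_rank;
        MPI_Init(NULL, NULL);                       /* start up MPI */
        MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);    /* number of processes */
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);    /* this process's rank */
        printf("Process %d of %d\n", my_rank, comm_sz);
        MPI_Finalize();                             /* free MPI's resources */
        return 0;
    }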
MPI_Send(snd_buf, snd_sz, snd_type, dest, tag, comm)
- First argument is a pointer to the block of memory containing the contents of the message (in the greetings example, a string)
- Second argument is the number of items in the buffer
- Third argument: the type of the items in the buffer (in this case char)
- 4th argument: the rank of the destination process
- 5th argument: the tag, a nonnegative integer used to distinguish two messages
- 6th argument: the communicator (in most cases MPI_COMM_WORLD)
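A hedged fragment showing these arguments in order (assumes my_rank is set as above and <string.h> is included; the buffer size and message text are illustrative):

    char greeting[100];
    sprintf(greeting, "Greetings from process %d!", my_rank);
    /* buffer, count (+1 for the '\0'), type, dest rank 0, tag 0, communicator */
    MPI_Send(greeting, strlen(greeting) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);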
MPI_Recv(rcv_buf, rcv_sz, rcv_type, src, tag, comm, &status)
- First argument is a pointer to the block of memory that will hold the contents of the received message (in the greetings example, a string)
- Second argument is the number of items the buffer can hold
- Third argument: the type of the items in the buffer (in this case char)
- 4th argument: the rank of the source process from which the message is sent
- 5th argument: the tag, a nonnegative integer used to distinguish two messages (must match the tag of the send operation)
- 6th argument: the communicator (in most cases MPI_COMM_WORLD)
- 7th argument: either a variable of type MPI_Status or the special constant MPI_STATUS_IGNORE
MPI_Status* status
- The status variable has three fields that can be examined after a receive:
status.MPI_SOURCE
status.MPI_TAG
status.MPI_ERROR
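A sketch of a receive that inspects these fields; MPI_ANY_SOURCE and MPI_ANY_TAG are the standard wildcard constants (the 100-char buffer matches the send fragment above and is an assumption):

    char greeting[100];
    MPI_Status status;
    MPI_Recv(greeting, 100, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    /* who actually sent the message, and with which tag? */
    printf("Received \"%s\" from rank %d, tag %d\n",
           greeting, status.MPI_SOURCE, status.MPI_TAG);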
MPI communication modes
Synchronous
* Only completes when the receive has completed
Buffered
* Always completes, irrespective of whether the receive has completed
Standard
* Either synchronous or buffered; the runtime system decides
Ready Send
* Always completes, irrespective of whether the receive has completed
Receive
* Completes when a message has arrived
Blocking forms
Standard Send
* MPI_Send
Synchronous Send
* MPI_Ssend
Buffered Send
* MPI_Bsend
Ready Send
* MPI_Rsend
Receive
* MPI_Recv
Non-blocking forms
Standard Send
* MPI_Isend
Synchronous Send
* MPI_Issend
Buffered Send
* MPI_Ibsend
Ready Send
* MPI_Irsend
Receive
* MPI_Irecv
Ready Send (MPI_Rsend)
- Completes immediately
- Guaranteed to succeed normally only if a matching receive has already been posted
- If the receiver is not ready, the message may be dropped and an error may occur
- A non-blocking ready send has no advantage over a blocking ready send
Synchronous Send (MPI_Ssend)
MPI_Ssend is guaranteed to block until the matching
receive starts.
MPI_Sendrecv
- An alternative to scheduling the communications ourselves.
- Carries out a blocking send and a receive in a single call.
- The dest and the source can be the same or different.
- Especially useful because MPI schedules the communications so that the program won’t hang or crash.
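A sketch of a ring shift with MPI_Sendrecv (assumes my_rank and comm_sz are set as above):

    int send_val = my_rank, recv_val;
    int right = (my_rank + 1) % comm_sz;            /* destination */
    int left  = (my_rank - 1 + comm_sz) % comm_sz;  /* source */
    /* send to the right neighbor while receiving from the left one */
    MPI_Sendrecv(&send_val, 1, MPI_INT, right, 0,
                 &recv_val, 1, MPI_INT, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);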
Testing for completion
- To test whether a non-blocking operation has completed:
MPI_Test, MPI_Wait, MPI_Testany, MPI_Waitany, MPI_Testsome
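A sketch of the typical pattern with a non-blocking send (dest is an assumed variable holding a valid rank):

    MPI_Request req;
    int flag = 0;
    double val = 3.14;
    MPI_Isend(&val, 1, MPI_DOUBLE, dest, 0, MPI_COMM_WORLD, &req);
    /* ... do useful computation while the message is in flight ... */
    /* val must not be modified until the operation completes */
    MPI_Test(&req, &flag, MPI_STATUS_IGNORE);   /* returns immediately */
    if (!flag)
        MPI_Wait(&req, MPI_STATUS_IGNORE);      /* blocks until complete */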
MPI_Reduce
We can use MPI_Reduce to collect data from all of the processes and combine them with a reduction operation (see the sketch after this list).
Operations available:
* MPI_MAX
* MPI_MIN
* MPI_SUM
* MPI_PROD
* MPI_LAND
* MPI_BAND
* MPI_LOR
* MPI_BOR
* MPI_LXOR
* MPI_BXOR
* MPI_MAXLOC
* MPI_MINLOC
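A sketch of a global sum with MPI_SUM; rank 0 is the root here, so only it receives the result (local_val is an illustrative per-process value):

    double local_val = my_rank + 1.0;
    double total = 0.0;
    MPI_Reduce(&local_val, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);
    if (my_rank == 0)
        printf("Global sum: %f\n", total);   /* valid only on the root */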
Collective vs Point-to-Point communication
Collective:
* All processes in communicator must call the same collective function
* For example, a program that attempts to match a call to MPI_Reduce on one process with a call to MPI_Recv on another process is erroneous, and, in all likelihood, the program will hang or crash.
* Arguments passed by each process must be “compatible.”
* For example, if one process passes in 0 as the dest_process and another passes in 1, then the outcome of a call to MPI_Reduce is erroneous, and, once again, the program is likely to hang or crash.
* All processes need to pass an argument to all parameters even if not used
* E.g., output parameter only used on proc 0
* Matched on basis of communicator and order called
* While point-to-point was matched on basis of tags and communicator
* Collective communications don't use tags, so they are matched only on the basis of the communicator and the order in which they are called.
MPI_Allreduce
This stores the reduction result at all of the processes. It can be implemented as a reduce operation followed by a broadcast operation.
Useful in a situation in which all of the processes
need the result of a global sum in order to complete
some larger computation.
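The same global sum with MPI_Allreduce; note there is no root argument, since every rank gets the result:

    double local_val = my_rank + 1.0, total;
    MPI_Allreduce(&local_val, &total, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);
    /* total now holds the global sum on every process */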
MPI_Bcast
Data belonging to a single process is sent to all of the processes in the communicator.
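A sketch, assuming process 0 obtains a problem size n that the others need (the value 100 is illustrative):

    int n = 0;
    if (my_rank == 0)
        n = 100;   /* e.g., read from input on the root */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
    /* n is now 100 on every process */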
MPI_Scatter
MPI_Scatter can be used in a function that reads in an entire vector on process 0 but sends only the needed components to each of the other processes.
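A sketch under the assumptions that n is already known on every rank (e.g., via the MPI_Bcast above), that comm_sz evenly divides n, and that <stdlib.h> is included:

    int local_n = n / comm_sz;
    double* vec = NULL;
    double* local_vec = malloc(local_n * sizeof(double));
    if (my_rank == 0) {
        vec = malloc(n * sizeof(double));
        /* ... rank 0 reads the entire vector into vec ... */
    }
    /* each process receives its own local_n-element slice */
    MPI_Scatter(vec, local_n, MPI_DOUBLE,
                local_vec, local_n, MPI_DOUBLE, 0, MPI_COMM_WORLD);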
MPI_Gather
Collect all of the components of the vector onto process 0, and then process 0 can process all of the
components.
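The inverse of the scatter sketch above, continuing the same variables: each rank's local_vec slice is collected into vec on process 0:

    MPI_Gather(local_vec, local_n, MPI_DOUBLE,
               vec, local_n, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    /* vec is only significant on the root, process 0 */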
MPI_Allgather
Concatenates the contents of each process's send_buf_p and stores the result in each process's recv_buf_p.
As usual, recv_count is the amount of data being received from each process.
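A sketch continuing the same variables: each rank contributes local_n doubles and every rank receives the full n-element vector:

    double* whole_vec = malloc(n * sizeof(double));
    MPI_Allgather(local_vec, local_n, MPI_DOUBLE,
                  whole_vec, local_n, MPI_DOUBLE, MPI_COMM_WORLD);
    /* whole_vec now holds all processes' pieces, in rank order */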