MPI Flashcards
MPI_Init(NULL, NULL)
- Decides which processes get which rank
- Allocates storage for message buffers
- Defines a communicator that consists of all the processes started by the user at program start-up. This communicator is called MPI_COMM_WORLD.
MPI_Finalize()
- Indicates that any resources allocated for MPI can be freed; no MPI functions may be called after it
MPI_Comm_size(MPI_COMM_WORLD, &comm_sz)
- Sets comm_sz to the number of processes in the communicator
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank)
- Sets my_rank to the rank (process number) of the calling process
MPI_Send(snd_buf, snd_sz, snd_type, dest, tag, comm)
- First argument (greeting in the example) is a pointer to the block of memory containing the contents of the message (a string in this case)
- Second argument is the number of items in the buffer
- Third argument: type of the items in the buffer (in this case char)
- 4th argument is the rank of the destination process
- 5th argument is the tag: a nonnegative integer used to distinguish two messages
- 6th argument: communicator (in most cases MPI_COMM_WORLD)
MPI_Recv(rcv_buf, rcv_sz, rcv_type, src, tag, comm, &status)
- First argument (greeting in the example) is a pointer to the block of memory that will hold the contents of the received message (a string in this case)
- Second argument is the number of items in the buffer
- Third argument: type of the items in the buffer (in this case char)
- 4th argument is the rank of the source process from which the message is sent
- 5th argument is the tag: a nonnegative integer used to distinguish two messages (must match the tag of the send operation)
- 6th argument: communicator (in most cases MPI_COMM_WORLD)
- 7th argument: either a variable of type MPI_Status or the special constant MPI_STATUS_IGNORE
MPI_Status status (passed to MPI_Recv as &status)
- The status variable has three fields that can be examined after a receive:
status.MPI_SOURCE
status.MPI_TAG
status.MPI_ERROR
MPI communication modes
Synchronous
* Only completes when the receive has completed
Buffered
* Always completes, irrespective of whether the receive has completed
Standard
* Either synchronous or buffered; the runtime system decides
Ready Send
* Always completes, irrespective of whether the receive has completed (may only be started if the matching receive is already posted)
Receive
* Completes when a message has arrived
Blocking forms
Standard Send
* MPI_Send
Synchronous Send
* MPI_Ssend
Buffered Send
* MPI_Bsend
Ready Send
* MPI_Rsend
Receive
* MPI_Recv
Non-blocking forms
Standard Send
* MPI_Isend
Synchronous Send
* MPI_Issend
Buffered Send
* MPI_Ibsend
Ready Send
* MPI_Irsend
Receive
* MPI_Irecv
Ready Send (MPI_Rsend)
- Completes immediately
- Guaranteed to succeed normally only if the matching receive has already been posted
- If the receiver is not ready, the message may be dropped and an error may occur
- A non-blocking ready send has no advantage over a blocking ready send
Synchronous Send (MPI_Ssend)
MPI_Ssend is guaranteed to block until the matching receive starts.
MPI_Sendrecv
- An alternative to scheduling the communications ourselves.
- Carries out a blocking send and a receive in a single call.
- The dest and the source can be the same or different.
- Especially useful because MPI schedules the communications so that the program won’t hang or crash.
Testing for completion
- To check whether a non-blocking operation has completed: MPI_Wait blocks until the operation completes; MPI_Test returns immediately with a completion flag
~~~
MPI_Test
MPI_Wait
MPI_Testany
MPI_Waitany
MPI_Testsome
MPI_Waitsome
~~~
MPI_Reduce
We can use MPI_Reduce to collect data from all the processes and combine it with an operation; the result is left on a single root process.
Operations available:
* MPI_MAX
* MPI_MIN
* MPI_SUM
* MPI_PROD
* MPI_LAND
* MPI_BAND
* MPI_LOR
* MPI_BOR
* MPI_LXOR
* MPI_BXOR
* MPI_MAXLOC
* MPI_MINLOC