MPI Flashcards

1
Q

MPI_Init(NULL, NULL)

A
  • Decides which process gets which rank
  • Allocates storage for message buffers
  • Defines a communicator that consists of all the processes started by the user at program start-up. This communicator is called MPI_COMM_WORLD.
2
Q

MPI_Finalize()

A
  • Tells MPI that any resources allocated for it can be freed
3
Q

MPI_Comm_size(MPI_COMM_WORLD, &comm_sz)

A
  • Sets comm_sz to the number of processes in the communicator
4
Q

MPI_Comm_rank(MPI_COMM_WORLD, &my_rank)

A
  • Sets my_rank to the rank (process number) of the calling process in the communicator
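
The four calls on cards 1-4 fit together as the standard MPI skeleton; a minimal sketch:

~~~
/* Minimal MPI skeleton: compile with mpicc, run with mpiexec/mpirun. */
#include <stdio.h>
#include <mpi.h>

int main(void) {
    int comm_sz, my_rank;

    MPI_Init(NULL, NULL);                      /* set up MPI and MPI_COMM_WORLD */
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);   /* number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);   /* this process's rank */

    printf("Process %d of %d\n", my_rank, comm_sz);

    MPI_Finalize();                            /* free MPI's resources */
    return 0;
}
~~~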
5
Q

MPI_Send(snd_buf, snd_sz, snd_type, dest, tag, comm)

A
  • 1st argument: pointer to the block of memory containing the contents of the message (a string in the greetings example)
  • 2nd argument: the number of items in the buffer
  • 3rd argument: the type of the items in the buffer (in this case MPI_CHAR)
  • 4th argument: the rank of the destination process
  • 5th argument: the tag, a nonnegative integer used to distinguish two messages
  • 6th argument: the communicator (in most cases MPI_COMM_WORLD)

(A combined send/receive sketch follows the MPI_Recv card below.)
6
Q

MPI_Recv(rcv_buf, rcv_sz, rcv_type, src, tag, comm, &status)

A
  • 1st argument: pointer to the block of memory that will hold the contents of the received message (a string in the greetings example)
  • 2nd argument: the maximum number of items the buffer can hold
  • 3rd argument: the type of the items in the buffer (in this case MPI_CHAR)
  • 4th argument: the rank of the source process from which the message is sent
  • 5th argument: the tag, a nonnegative integer used to distinguish two messages (must match the tag of the send operation)
  • 6th argument: the communicator (in most cases MPI_COMM_WORLD)
  • 7th argument: either a variable of type MPI_Status or the special constant MPI_STATUS_IGNORE
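
A minimal sketch of the greetings program these two cards describe; the 100-byte buffer and the message text are illustrative:

~~~
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(void) {
    char greeting[100];
    int comm_sz, my_rank;

    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank != 0) {
        sprintf(greeting, "Greetings from process %d of %d!", my_rank, comm_sz);
        /* buffer, count, type, dest, tag, communicator */
        MPI_Send(greeting, strlen(greeting) + 1, MPI_CHAR,
                 0, 0, MPI_COMM_WORLD);
    } else {
        for (int q = 1; q < comm_sz; q++) {
            /* buffer, max count, type, source, tag, communicator, status */
            MPI_Recv(greeting, 100, MPI_CHAR,
                     q, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("%s\n", greeting);
        }
    }

    MPI_Finalize();
    return 0;
}
~~~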
7
Q

MPI_Status* status

A
  • The status object has three fields that can be examined after a receive:
  • status.MPI_SOURCE
  • status.MPI_TAG
  • status.MPI_ERROR
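
A minimal sketch of examining those fields (a fragment, assuming MPI is already initialized; the wildcards MPI_ANY_SOURCE and MPI_ANY_TAG are what make the fields useful, and MPI_Get_count is standard MPI shown for illustration):

~~~
MPI_Status status;
char buf[100];
int count;

/* with wildcards we don't know the sender or tag until after the receive */
MPI_Recv(buf, 100, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);

printf("source = %d, tag = %d\n", status.MPI_SOURCE, status.MPI_TAG);
MPI_Get_count(&status, MPI_CHAR, &count);  /* items actually received */
~~~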
8
Q

MPI communication modes

A

Synchronous
* Only completes when receive completed

Buffered
* Always completes, irrespective of whether the receive has completed

Standard
* Either synchronous or buffered, runtime system decides

Ready Send
* Always completes, irrespective of whether the receive has completed (correct only if the matching receive has already been posted; see the MPI_Rsend card)

Receive
* Completes when a message has arrived

9
Q

Blocking forms

A

Standard Send
* MPI_Send

Synchronous Send
* MPI_Ssend

Buffered Send
* MPI_Bsend

Ready Send
* MPI_Rsend

Receive
* MPI_Recv

10
Q

Non-blocking forms

A

Standard Send
* MPI_Isend

Synchronous Send
* MPI_Issend

Buffered Send
* MPI_Ibsend

Ready Send
* MPI_Irsend

Receive
* MPI_Irecv
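
A minimal sketch of the non-blocking pattern, assuming my_rank from the earlier cards: start the operation, overlap useful work, then wait for completion.

~~~
MPI_Request req;
int x = 42;

if (my_rank == 0) {
    MPI_Isend(&x, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
    /* ... do computation that doesn't modify x ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* blocks until the send completes */
} else if (my_rank == 1) {
    MPI_Irecv(&x, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
    /* ... do computation that doesn't read x ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* blocks until the message arrives */
}
~~~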

11
Q

Ready Send (MPI_Rsend)

A
  • Completes immediately
  • Guaranteed to succeed normally
    • if matching receive has already been posted
  • If the receiver is not ready
    • the message may be dropped and an error may occur
  • Non-blocking ready send
    • has no advantage over blocking ready send
12
Q

Synchronous Send (MPI_Ssend)

A

MPI_Ssend is guaranteed to block until the matching
receive starts.

13
Q

MPI_Sendrecv

A
  • An alternative to scheduling the communications ourselves.
  • Carries out a blocking send and a receive in a single call.
  • The dest and the source can be the same or different.
  • Especially useful because MPI schedules the communications so that the program won’t hang or crash.
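
A minimal sketch, assuming my_rank and comm_sz from the earlier cards: a ring shift in which every process sends to its right neighbour and receives from its left neighbour in a single call.

~~~
int send_val = my_rank, recv_val;
int right = (my_rank + 1) % comm_sz;
int left  = (my_rank + comm_sz - 1) % comm_sz;

/* send arguments first, then receive arguments, then communicator/status */
MPI_Sendrecv(&send_val, 1, MPI_INT, right, 0,
             &recv_val, 1, MPI_INT, left,  0,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
~~~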
14
Q

Testing for completion

A
  • Functions for testing whether a non-blocking operation has completed:
    ~~~
    MPI_Test      /* returns a flag immediately; does not block */
    MPI_Wait      /* blocks until the operation completes */
    MPI_Testany   /* tests whether any one of a set of requests has completed */
    MPI_Waitany   /* blocks until any one of a set of requests completes */
    MPI_Testsome  /* returns which of a set of requests have completed */
    ~~~
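
A minimal sketch of polling with MPI_Test on a request req started by an earlier MPI_Isend or MPI_Irecv:

~~~
int done = 0;

MPI_Test(&req, &done, MPI_STATUS_IGNORE);  /* returns immediately */
while (!done) {
    /* ... do other useful work ... */
    MPI_Test(&req, &done, MPI_STATUS_IGNORE);
}
~~~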
15
Q

MPI_Reduce

A

We can use MPI_Reduce to combine input data from all of the processes in a communicator with one of the operations below, storing the result on a designated root process.
Operations available:
* MPI_MAX
* MPI_MIN
* MPI_SUM
* MPI_PROD
* MPI_LAND
* MPI_BAND
* MPI_LOR
* MPI_BOR
* MPI_LXOR
* MPI_BXOR
* MPI_MAXLOC
* MPI_MINLOC
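
A minimal sketch, assuming my_rank from the earlier cards: a global sum whose result lands only on the root, process 0.

~~~
double local_x = 1.0;  /* this process's contribution (illustrative) */
double total   = 0.0;

/* send buf, recv buf, count, type, operation, root rank, communicator */
MPI_Reduce(&local_x, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

if (my_rank == 0)
    printf("global sum = %f\n", total);  /* only the root has the result */
~~~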

16
Q

Collective vs Point-to-Point communication

A

Collective:
* All processes in the communicator must call the same collective function.
  * For example, a program that attempts to match a call to MPI_Reduce on one process with a call to MPI_Recv on another process is erroneous, and, in all likelihood, the program will hang or crash.
* The arguments passed by each process must be “compatible.”
  * For example, if one process passes in 0 as the dest_process and another passes in 1, then the outcome of a call to MPI_Reduce is erroneous, and, once again, the program is likely to hang or crash.
* All processes need to pass an argument for every parameter, even if it is not used on that process.
  * E.g., an output parameter that is only used on process 0.
* Collective calls are matched on the basis of the communicator and the order in which they are called.
  * Point-to-point communication is matched on the basis of tags and the communicator.
  * Collective communications don't use tags, so they are matched only on the basis of the communicator and the order in which they are called.

17
Q

MPI_Allreduce

A

Stores the reduction result at all of the processes. It can be
implemented as a reduce operation followed by a broadcast operation.
Useful in a situation in which all of the processes need the result
of a global sum in order to complete some larger computation.
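
A minimal sketch: the same global sum as on the MPI_Reduce card, but every process receives the result.

~~~
double local_x = 1.0, total;

/* like MPI_Reduce but with no root argument: all processes get the result */
MPI_Allreduce(&local_x, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
~~~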

18
Q

MPI_Bcast

A

Data belonging to a single process is sent to all of the processes in the communicator.
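
A minimal sketch, assuming my_rank from the earlier cards:

~~~
int n;
if (my_rank == 0)
    n = 100;  /* only the root has the value initially */

/* buffer, count, type, root rank, communicator */
MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
/* every process in the communicator now has n == 100 */
~~~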

19
Q

MPI_Scatter

A

MPI_Scatter can be used in a function that reads in an entire vector on process 0 but sends only the
needed components to each of the other processes.
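
A minimal sketch (a fragment assuming <stdlib.h> plus n, comm_sz, and my_rank from the earlier cards, with comm_sz evenly dividing n):

~~~
int local_n = n / comm_sz;
double *a = NULL;
double *local_a = malloc(local_n * sizeof(double));

if (my_rank == 0) {
    a = malloc(n * sizeof(double));
    /* ... read the entire vector into a on process 0 ... */
}

/* send buf (significant only at root), per-process send count, type,
   recv buf, recv count, type, root rank, communicator */
MPI_Scatter(a, local_n, MPI_DOUBLE,
            local_a, local_n, MPI_DOUBLE, 0, MPI_COMM_WORLD);
~~~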

20
Q

MPI_Gather

A

Collects all of the components of the vector onto process 0, so that process 0 can process all of the
components.
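
A minimal sketch, reusing a, local_a, and local_n from the MPI_Scatter sketch:

~~~
/* the inverse of MPI_Scatter: blocks arrive on the root in rank order */
MPI_Gather(local_a, local_n, MPI_DOUBLE,
           a, local_n, MPI_DOUBLE, 0, MPI_COMM_WORLD);
/* on process 0, a[0..n-1] now holds every process's local_a block */
~~~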

21
Q

MPI_Allgather

A

Concatenates the contents of each process’ send_buf_p and stores this in each process’ recv_buf_p.
As usual, recv_count is the amount of data being received from each process.
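
A minimal sketch, reusing local_a, local_n, and n from the MPI_Scatter sketch; unlike MPI_Gather there is no root, so every process must supply a full-size receive buffer:

~~~
double *all = malloc(n * sizeof(double));  /* allocated on every process */

MPI_Allgather(local_a, local_n, MPI_DOUBLE,
              all, local_n, MPI_DOUBLE, MPI_COMM_WORLD);
/* every process's all[0..n-1] now holds the local_a blocks in rank order */
~~~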