MPI Flashcards

1
Q

What is the function used to initialize the MPI execution environment?

A

MPI_Init

int MPI_Init(int *argc, char ***argv)

2
Q

What is the function used to finalise the MPI execution environment?

A

MPI_Finalize

int MPI_Finalize(void)

3
Q

What does MPI_Comm_size do?

A

Reports the number of MPI processes in the specified communicator

int MPI_Comm_size(MPI_Comm comm, int *size)

4
Q

What does MPI_Comm_rank do?

A

Reports the rank of the calling process in the specified communicator

int MPI_Comm_rank(MPI_Comm comm, int *rank)

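For illustration (not part of the original cards), a minimal complete program using these four calls; any standard MPI implementation should accept it:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);               /* start the MPI environment */

    int size, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank: 0..size-1 */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down MPI before exiting */
    return 0;
}
```
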
5
Q

What is the range of ranks for MPI processes?

A

From 0 to size-1

6
Q

What is the predefined communicator that refers to all concurrent processes in an MPI program?

A

MPI_COMM_WORLD

7
Q

What are the two basic functions for point-to-point communication in MPI?

A

MPI_Send and MPI_Recv

8
Q

What is the purpose of the MPI_Send function?

A

Send a message to another process

int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

9
Q

What is the purpose of the MPI_Recv function?

A

Receive a message from another process

int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

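A minimal sketch of a matched send/receive pair (assumes the program runs with at least two ranks and MPI is initialized as on the earlier cards):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value;
    if (rank == 0) {
        value = 42;
        /* send one int to rank 1 with tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* receive one int from rank 0; the status is not needed here */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```
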
10
Q

List some MPI C data types.

A
  • MPI_CHAR
  • MPI_INT
  • MPI_FLOAT
  • MPI_DOUBLE
  • etc
11
Q

What is the purpose of MPI_Bcast?

A

It is the function used to broadcast data from one process to all others in a communicator

int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)

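A short sketch, assuming `MPI_Init` has already been called and `rank` holds the result of `MPI_Comm_rank`:

```c
int params[3];
if (rank == 0) {
    /* root fills the buffer, e.g. from an input file */
    params[0] = 100; params[1] = 200; params[2] = 300;
}
/* every rank makes the same call; afterwards all ranks hold root's data */
MPI_Bcast(params, 3, MPI_INT, 0, MPI_COMM_WORLD);
```
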
12
Q

What is the purpose of MPI_Scatter?

A

It is the function used to scatter data from one process to all others in a communicator.

int MPI_Scatter(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

13
Q

What is the purpose of MPI_Gather?

A

It is the function used to gather data from all processes in a communicator to one process.

int MPI_Gather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

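A sketch combining MPI_Scatter and MPI_Gather (assumes MPI is initialized, `rank`/`size` are set, `size` evenly divides N, and `<stdlib.h>` is included for malloc):

```c
enum { N = 16 };
int data[N];                       /* significant only on the root */
int chunk = N / size;
int *part = malloc(chunk * sizeof(int));

if (rank == 0)
    for (int i = 0; i < N; i++) data[i] = i;

/* each rank receives `chunk` consecutive elements of root's array */
MPI_Scatter(data, chunk, MPI_INT, part, chunk, MPI_INT, 0, MPI_COMM_WORLD);

for (int i = 0; i < chunk; i++) part[i] *= part[i];   /* local work */

/* root collects the processed pieces back in rank order */
MPI_Gather(part, chunk, MPI_INT, data, chunk, MPI_INT, 0, MPI_COMM_WORLD);
free(part);
```
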
14
Q

What is the purpose of MPI_Allgather?

A

It is the function used to gather data from all processes in a communicator to all processes.

int MPI_Allgather(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)

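A sketch assuming MPI is initialized, `rank`/`size` are set, and `<stdlib.h>` is included for malloc:

```c
int mine = rank * rank;                 /* one value contributed per rank */
int *all = malloc(size * sizeof(int));

/* like MPI_Gather followed by a broadcast: every rank gets all values */
MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);
free(all);
```
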
15
Q

What are the two types of reduction functions in MPI?

A
  • MPI_Reduce
  • MPI_Allreduce
16
Q

What does MPI_Reduce do?

A

Carries out reduction and returns result to the specified process

int MPI_Reduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype type, MPI_Op op, int root, MPI_Comm comm)

17
Q

What does MPI_Allreduce do?

A

Carries out reduction and returns result to all processes

int MPI_Allreduce(const void *sendbuf, void *recvbuf, int count, MPI_Datatype type, MPI_Op op, MPI_Comm comm)
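
A sketch contrasting the two calls (assumes MPI is initialized and `rank` is set):

```c
int local = rank + 1;   /* each rank contributes one value */
int total;

/* sum over all ranks; only the root (rank 0) receives the result */
MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

/* sum over all ranks; every rank receives the result */
MPI_Allreduce(&local, &total, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
```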

18
Q

True or False: MPI_COMM_SELF refers to all processes in an MPI program.

A

False. MPI_COMM_SELF contains only the calling process; MPI_COMM_WORLD refers to all processes.

19
Q

What function is called to report the rank of this process?

A

MPI_Comm_rank

20
Q

What command is used to compile an MPI program in C?

A

mpicc (the MPI compiler wrapper), e.g. `mpicc -o hello hello.c`

21
Q

What command is used to run an MPI program?

A

mpirun (or mpiexec), e.g. `mpirun -np 4 ./hello`

22
Q

What are reduction variables in MPI?

A

Variables used to combine the results of a reduction operation across multiple processes

Note: communication overhead is greater if the result is returned to all processes (MPI_Allreduce) rather than to a single root (MPI_Reduce)

23
Q

What parameters are required for MPI_Reduce?

A
  • sendbuf
  • recvbuf
  • count
  • type
  • op
  • root
  • comm
24
Q

What parameters are required for MPI_Allreduce?

A
  • sendbuf
  • recvbuf
  • count
  • type
  • op
  • comm

25
Q

What is blocking communication in MPI?

A

Communication where operations block until the message is received or it is safe to change the buffers

Examples include `MPI_Send` and `MPI_Recv`

26
Q

What is the behavior of `MPI_Send`?

A

Blocks until the message is received or it is safe to change the send buffer

27
Q

What is the behavior of `MPI_Recv`?

A

Blocks until the message is received

28
Q

How do non-blocking communications differ from blocking communications?

A

They allow computation and communication to overlap, and they relax the strict ordering otherwise needed between matching sends and receives

Examples include `MPI_Isend` and `MPI_Irecv`

29
Q

What is the purpose of `MPI_Isend`?

A

To initiate a non-blocking send

int MPI_Isend(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, MPI_Request *request)

30
Q

What is the purpose of `MPI_Irecv`?

A

To initiate a non-blocking receive

int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Request *request)

31
Q

What is the purpose of `MPI_Wait`?

A

To wait for a non-blocking request to complete

int MPI_Wait(MPI_Request *request, MPI_Status *status)

32
Q

What is the purpose of `MPI_Waitall`?

A

To wait for all non-blocking requests in a set to complete

int MPI_Waitall(int count, MPI_Request array_of_requests[], MPI_Status array_of_statuses[])
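
A sketch of the non-blocking pattern on two ranks (assumes MPI is initialized and `rank` is set):

```c
int sendval = rank, recvval;
int other = 1 - rank;               /* partner rank: 0 <-> 1 */
MPI_Request reqs[2];

/* post both operations; neither call blocks */
MPI_Irecv(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
MPI_Isend(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);

/* ...useful computation can overlap the communication here... */

/* block until both requests complete; buffers are then safe to reuse */
MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
```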

33
Q

What are collectives in MPI?

A

Operations that must be reached by every process in a communicator; if any process skips the call, the others can deadlock

34
Q

What is Domain Decomposition in MPI?

A

A method to distribute work by dividing the computational domain among different MPI processes

35
Q

How is a PDE solver typically set up using MPI collectives?

A

1. Rank zero reads input parameters and broadcasts them
2. Rank zero reads initial conditions and scatters them
3. Rank zero gathers results and writes output

36
Q

What is the significance of using the same timestep in MPI PDE solvers?

A

All processes must use the same timestep for solution updates, which is determined by the minimum value across the computational domain

37
Q

What is the role of `MPI_Allreduce` with `MPI_MIN` in timestep calculation?

A

To determine the minimum timestep from the entire computational domain
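
A sketch of that pattern; `compute_local_dt` is a hypothetical helper returning the locally stable timestep:

```c
double dt_local = compute_local_dt();   /* hypothetical: local stability limit */
double dt;

/* every rank receives the global minimum and advances with the same dt */
MPI_Allreduce(&dt_local, &dt, 1, MPI_DOUBLE, MPI_MIN, MPI_COMM_WORLD);
```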

38
Q

What are halos in the context of finite difference stencils?

A

Local copies of rows/columns owned by a neighbouring MPI process, kept so that stencils can be applied at domain boundaries

39
Q

How often do halos need to be updated?

A

Once per timestep

40
Q

What is a potential issue when using `MPI_Send` and `MPI_Recv` for halo communication?

A

The blocking calls can serialize communication and incur significant overhead, and careless ordering of sends and receives can deadlock

41
Q

What is a non-blocking communication pattern for updating halos?

A
  • `MPI_Isend` for sending halos
  • `MPI_Irecv` for receiving halos
  • `MPI_Waitall` to ensure completion
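
A sketch of a 1-D halo exchange in this pattern (assumes `rank`/`size` are set and a local array `u[0..n+1]` whose end cells `u[0]` and `u[n+1]` are the halos):

```c
int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
MPI_Request reqs[4];

/* receive halos from both neighbours, send edge cells to both neighbours;
   communication with MPI_PROC_NULL completes immediately at the boundaries */
MPI_Irecv(&u[0],     1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
MPI_Irecv(&u[n + 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
MPI_Isend(&u[1],     1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[2]);
MPI_Isend(&u[n],     1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[3]);

/* interior points u[2..n-1] could be updated here while messages are in flight */

MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
```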