Lecture 3 Flashcards

1
Q

SISD?

A

Single Instruction Single Data.
Classic Von Neumann.
A single processor executes a single instruction stream on a single data stream; each instruction completes before the next one begins (sequential execution).

2
Q

SIMD?

A

Single Instruction Multiple Data.
Enables parallel processing by executing the same instruction on multiple data elements at the same time.

Data-parallel: the same operation is applied across many data elements simultaneously.

Requires that the computations can be parallelised.
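
An illustrative sketch (not from the lecture): a C loop where the same operation is applied to every element, which a vectorising compiler can map onto SIMD instructions.

  /* Each iteration performs the same add on different data, so the
     compiler can process several elements per SIMD instruction. */
  void add_arrays(const float *a, const float *b, float *c, int n) {
      for (int i = 0; i < n; i++)
          c[i] = a[i] + b[i];
  }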

3
Q

MISD?

A

Multiple Instruction Single Data.
Multiple processors execute different instructions on the same data item concurrently.

It requires multiple independent instructions to be executed simultaneously on the same data item, which is rare in practice.

Not covered.

4
Q

MIMD?

A

Multiple Instruction Multiple Data.
Allows multiple processors to execute different instructions on different data simultaneously.
Task parallelism.

Shared memory: multiple processors or cores access a common, shared memory address space.
Distributed memory: each processor has its own private memory space; data is exchanged by passing messages (e.g. with MPI).

5
Q

MPI_Init()?

A

Typically the first function called in an MPI program.
Initialises the MPI execution environment and sets up the necessary resources for communication among MPI processes.

  1. Establishes the communication channels between processes.
  2. Determines the number of processes; each process is assigned a unique ID (rank).
  3. Sets up the execution environment.
  4. Synchronises processes: ensures all processes reach a synchronised state before proceeding further.
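
A minimal sketch of the usual skeleton; MPI_Finalize() is the matching clean-up call at the end of the program.

  #include <mpi.h>

  int main(int argc, char *argv[]) {
      MPI_Init(&argc, &argv);    /* initialise the MPI environment    */
      /* ... parallel work using MPI calls goes here ... */
      MPI_Finalize();            /* release MPI resources before exit */
      return 0;
  }
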
6
Q

MPI_Comm_size()?

A

Used to retrieve the total number of processes in a specific communicator.
Provides the number of MPI processes that are involved in a particular communication context.

int MPI_Comm_size(MPI_Comm comm, int *size);

A communicator is a group of MPI processes that can communicate with each other. The most common communicator is MPI_COMM_WORLD, which represents all the processes created when the program starts.

The total number of processes is stored in the variable pointed to by size.
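
A short usage sketch, assumed to run between MPI_Init() and MPI_Finalize():

  int size;
  MPI_Comm_size(MPI_COMM_WORLD, &size);   /* size = total process count */
  printf("Running with %d processes\n", size);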

7
Q

MPI_Comm_rank()?

A

Used to retrieve the rank (unique ID) of the calling process within a communicator.
Rank values are integers ranging from 0 to size-1.
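
Its signature mirrors MPI_Comm_size(): int MPI_Comm_rank(MPI_Comm comm, int *rank);. A short sketch, assumed to run between MPI_Init() and MPI_Finalize(); ranks are commonly used to assign roles:

  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0)
      printf("I am the root process\n");   /* rank 0 often acts as root */
  else
      printf("I am worker %d\n", rank);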

8
Q

MPI_Send() and MPI_Recv()?

A

Point-to-point communication.
They allow processes in a parallel program to send and receive messages between each other.

MPI_Send() is a blocking call: it does not return until the message buffer is safe to reuse.
MPI_Recv() blocks until a matching message is received. A message matches on source (rank of the sending process), tag (an ID for the type of message being sent/received) and communicator; source and tag may take wildcard values (MPI_ANY_SOURCE, MPI_ANY_TAG). The received message can be inspected using the MPI_Status object.

Non-blocking variants exist (MPI_Irecv() and MPI_Isend()).
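
A minimal blocking ping from rank 0 to rank 1 (a sketch; run with at least two processes):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[]) {
      int rank, value;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      if (rank == 0) {
          value = 42;
          MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* dest 1, tag 0 */
      } else if (rank == 1) {
          MPI_Status status;
          MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
          printf("Rank 1 got %d from rank %d\n", value, status.MPI_SOURCE);
      }
      MPI_Finalize();
      return 0;
  }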

9
Q

What is point-to-point communication?

A

Refers to the exchange of data between two specific processes in a parallel/distributed computing (PDC) system. Requires exactly one sender and one receiver.

10
Q

MPI blocking vs non-blocking communication?

A

MPI communication is blocking by default: a blocking call does not return until the operation is complete.
A send is complete when the message buffer has been fully handed over to the MPI system, i.e. when it is safe for the program to modify or reuse the buffer.
A receive is complete when the message data has arrived at the destination and is available for use.

MPI also offers non-blocking communication, which returns immediately regardless of completion status. This can help overlap communication with computation and avoid deadlock. Sending/receiving processes can use polling (periodic checking with MPI_Test()) to check the status of non-blocking operations.
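
A polling sketch, assumed to run between MPI_Init() and MPI_Finalize(). A non-blocking receive is posted, useful work is overlapped, and MPI_Test() is polled until the message arrives (MPI_Wait() would instead block until completion):

  int value, flag = 0;
  MPI_Request request;
  MPI_Irecv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
            MPI_COMM_WORLD, &request);              /* returns immediately */
  while (!flag) {
      /* ... useful work overlapped with communication ... */
      MPI_Test(&request, &flag, MPI_STATUS_IGNORE); /* poll for completion */
  }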

11
Q

What is barrier synchronisation?

A

Synchronisation technique used to ensure that all processes reach a specific point in their execution before proceeding further. Establishes a point where processes wait until all processes in a group have reached that point before any of them can proceed.

MPI_Barrier()

Helps prevent race conditions and other data inconsistencies.
Can introduce performance bottlenecks, since fast processes sit idle waiting for slow ones.
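
A common use is timing a parallel phase. A sketch (do_work() is a hypothetical per-process task; rank is assumed to come from MPI_Comm_rank()):

  MPI_Barrier(MPI_COMM_WORLD);      /* everyone starts the clock together */
  double start = MPI_Wtime();
  do_work();                        /* hypothetical per-process work */
  MPI_Barrier(MPI_COMM_WORLD);      /* wait until every process is done */
  if (rank == 0)
      printf("Elapsed: %f s\n", MPI_Wtime() - start);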

12
Q

What is reduction?

A

An MPI operation that combines values from multiple processes into a single value. Typically applies an associative and commutative function to the values.
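
A sketch using MPI_Reduce() to sum every process's rank onto root 0, assumed to run between MPI_Init() and MPI_Finalize():

  int rank, sum;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  /* combine each process's rank with MPI_SUM; the result lands on root 0 */
  MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
  if (rank == 0)
      printf("Sum of all ranks: %d\n", sum);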

13
Q

MPI_Bcast()?

A

Broadcast is an operation used to distribute a data value from the root process to all other processes in a communicator, ensuring that all processes end up with the same information.

One-to-all communication.
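
A sketch (rank assumed to come from MPI_Comm_rank()): only the root initialises the value; after the call every process holds it.

  int n;
  if (rank == 0)
      n = 100;                     /* only the root has the value initially */
  MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);   /* root = 0 */
  /* from here on, n == 100 on every process */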

14
Q

What’s Flynn’s Taxonomy?

A

A classification of computer architectures by the number of concurrent instruction streams and data streams: SISD, SIMD, MISD and MIMD.
