Module 2: Network and Parallelism With Data-Structures Flashcards

1
Q

What does MPI mean?

A

Message Passing Interface

2
Q

A communicator defines a group of processes that can communicate with one another.

A

Message Passing Interface (MPI)

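To make the two cards above concrete, here is a minimal sketch of an MPI program in C: every process starts with the default communicator MPI_COMM_WORLD and asks it for its own rank and the size of the group. Compile with a wrapper such as mpicc and launch with mpirun; the printed message is illustrative only.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);                /* set up the MPI environment   */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* processes in the group       */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                        /* tear down the MPI environment */
    return 0;
}
```
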
3
Q

It is a basic approach for Inter-Process Communication

A

Message Passing

4
Q

The ________ acts as a service provider; the client issues the request and waits for the response from the ____. (Same word)

A

Server

5
Q

Until the client makes a call, the ____ does not communicate.

A

Server

6
Q

Direct communication between processes.

A

Peer to peer

7
Q

Here there is no client or server; anyone can make a request to others and get a response.

A

Peer to peer

8
Q

Acts as an intermediary among independent processes.

A

Message Systems

9
Q

It also acts as a switch through which processes exchange messages asynchronously in a decoupled manner.

A

Message systems

10
Q

Distributing data from one process to all processes in a group

A

Broadcast

11
Q

Takes an array of elements and distributes the elements in the order of process rank

A

Scatter

12
Q

Takes elements from many processes and gathers them into one single process.

A

Gather

13
Q

The inverse of MPI_Scatter

A

Gather

14
Q

Takes an array of elements on each process and returns an array of output elements to the root process.

A

Reduce

15
Q

Returns the maximum element

A

MPI_MAX

16
Q

Returns the minimum element

A

MPI_MIN

17
Q

Sums the elements

A

MPI_SUM

18
Q

Multiplies the elements

A

MPI_PROD

19
Q

Performs a logical and across the elements

A

MPI_LAND

20
Q

Performs a logical or across the elements.

A

MPI_LOR

21
Q

Performs a bitwise and across the bits of the elements

A

MPI_BAND

22
Q

Performs a bitwise or across the bits of the elements

A

MPI_BOR

23
Q

Returns the maximum value and the rank of the process that owns it

A

MPI_MAXLOC

24
Q

Returns the minimum value and the rank of the process that owns it

A

MPI_MINLOC
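A sketch of the MAXLOC idea: the reduction runs over (value, rank) pairs, so the root learns both the winning value and which process owns it. The pair type MPI_DOUBLE_INT matches a struct of a double followed by an int; the local values are illustrative.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    struct { double value; int rank; } local, global;
    local.value = (double)((rank * 37) % 11);  /* stand-in for a local result */
    local.rank  = rank;

    /* MPI_MAXLOC reduces (value, rank) pairs: the maximum value wins,
       and the rank of the owning process travels with it */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE_INT, MPI_MAXLOC, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("max = %f on rank %d\n", global.value, global.rank);

    MPI_Finalize();
    return 0;
}
```
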

25
Are essential in collective communication to distinguish processes from one another and enable efficient data routing and synchronization.
Unique Identifiers
26
Process Rank
**Process Rank** * **Simple Integer**: Each process is assigned a unique integer value. * **Advanges**: Easy to implement and manage. * **Disadvantages**: Limited flexibility, especially in dynamic systems where processes may join or leave.
27
Process ID
**Process ID** * **System-assigned identifier**: The operating system assigns each process a unique identifier. * **Advantages**: Provides a globally unique identifier. * **Disadvantages**: Can be more complex to manage and may require additional system resources.
28
Logical Topology
**Logical Topology** * **Hierarchical or grid-based structure:** Processes are organized into a logical topology, such as a tree or grid. * **Advantages:** Can provide information about the relationships between processes and facilitate efficient data routing. * **Disadvantages:** Implementing and managing it may be more complex, especially in dynamic systems.
29
Custom Identifiers
**Custom Identifiers** * **User-defined Identifiers:** Processes can be assigned unique identifiers based on various criteria such as location, function, or workload. * **Advantages:** Can provide flexibility and adaptability to specific application requirements. * **Disadvantages:** May require additional management and can be more complex to implement.
30
Q

A numerical identifier for a group of processes that work together as a single unit

A

Process Group ID (PGID)

31
Q

Associates a group of processes with a particular user session.

A

Session ID (SID)

32
Q

Why do we need unique identifiers? (4 answers)

A

* Process Differentiation
* Data Routing
* Synchronization
* Fault Tolerance

33
Q

Is a collection of tightly or loosely connected computers that work together so that they act as a single entity

A

Cluster Computing

34
Q

Is a fundamental paradigm in cluster computing where processes communicate by exchanging messages.

A

Message Passing

35
Q

This approach is particularly well-suited for clusters due to their distributed nature and the ability to scale horizontally.

A

Message Passing

36
Q

The server acts as a service provider; the client issues the request and waits for the response from the server.

A

Client Server

37
Q

The sender and the receiver have to "meet" at their respective send/receive operations so that data can be transferred.

A

Synchronous message passing

38
Q

Other terms for "Synchronous message passing"

A

Rendezvous or Handshaking

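As a sketch of the rendezvous idea, MPI's synchronous-mode send MPI_Ssend does not complete until the receiver has started its matching receive. Assumes at least two processes; the tag 0 and payload 42 are illustrative.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int msg = 42;
    if (rank == 0) {
        /* MPI_Ssend completes only after the receiver has started its
           matching receive: the "rendezvous" of sender and receiver */
        MPI_Ssend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```
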
39
Q

The sender does not wait for the receiver to reach its receive operation; rather, it hands off the prepared data and continues its execution.

A

Asynchronous message passing

40
Q

This form of transfer does not force the sender to wait, but it creates another problem: there may be messages that have already been sent but not yet received, and they have to be stored somewhere.

A

Asynchronous message passing

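A corresponding asynchronous sketch: MPI_Isend returns immediately, and the in-flight message is held by the system until the matching receive is posted. Again assumes at least two processes; the payload is illustrative.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int msg = 7;
    if (rank == 0) {
        MPI_Request req;
        /* MPI_Isend returns immediately; the message is buffered in
           flight until the receiver posts its matching receive */
        MPI_Isend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... the sender is free to do other work here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* msg may be reused after this */
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```
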
41
Q

Where data is aggregated or disseminated from/to multiple processes

A

Collective communication

42
Q

Blocks until all processes have reached a synchronization point

A

Barrier Synchronization

43
Q

Broadcast, Scatter, Gather, and All-to-All transmission of data across the communicator.

A

Data Movement (or Global Communication)

44
Q

One process collects data from each process in the communicator and operates on that data to compute a result.

A

Collective Operations (or Global Reduction)

45
Q

Blocks all processes that have reached this routine

A

MPI_Barrier

46
Q

Broadcasts a message from the process with rank "root" to all other processes of the group.

A

MPI_Bcast

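A small sketch combining the two routines above: MPI_Bcast pushes a value from the root to every process, and MPI_Barrier holds all processes at a common synchronization point. The config value 99 is illustrative.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int config = 0;
    if (rank == 0) config = 99;   /* only the root knows the value initially */

    /* after the broadcast, every process holds the root's value */
    MPI_Bcast(&config, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* no process continues past this point until all have reached it */
    MPI_Barrier(MPI_COMM_WORLD);

    printf("rank %d has config = %d\n", rank, config);

    MPI_Finalize();
    return 0;
}
```
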
47
Q

Sends data from one task to all other tasks in a group

A

MPI_Scatter

48
Q

Gathers together values from a group of processes.

A

MPI_Gather

49
Q

Gathers data from all tasks and distributes it to all.

A

MPI_Allgather

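A sketch of MPI_Allgather: like a gather followed by a broadcast, every process ends up with the full array of contributions. The fixed 64-slot receive buffer is an illustrative simplification and assumes at most 64 processes.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int mine = rank * rank;   /* one local value per process */
    int all[64];              /* assumes at most 64 processes */

    /* every process receives the full array of everyone's values */
    MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size; i++) printf("all[%d] = %d\n", i, all[i]);

    MPI_Finalize();
    return 0;
}
```
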
50
Q

Reduces values on all processes to a single value.

A

MPI_Reduce