Module 2: Network and Parallelism with Data Structures Flashcards

1
Q

What does MPI mean?

A

Message Passing Interface
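
For orientation, here is a minimal MPI program in C, a sketch assuming a standard MPI installation (compiled with mpicc, launched with mpirun): each process starts the runtime, asks for its rank and the communicator size, and prints a line.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);               /* start the MPI runtime */

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's unique rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut down the MPI runtime */
        return 0;
    }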

2
Q

A communicator defines a group of processes that can communicate with one another.

A

Message Passing Interface (MPI)

3
Q

It is a basic approach to inter-process communication (IPC)

A

Message Passing

4
Q

The ________ acts as a service provider; the client issues a request and waits for the response from the ________. (Same word)

A

Server
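
A sketch of the client-server pattern using basic MPI point-to-point calls; the tag names and the doubling "service" are illustrative, not from the source. Rank 0 plays the server and stays passive until a client sends a request.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int TAG_REQ = 0, TAG_REP = 1;    /* illustrative tag values */
        if (rank == 0) {
            /* Server: does not communicate until a client calls. */
            for (int i = 1; i < size; i++) {
                int request;
                MPI_Status st;
                MPI_Recv(&request, 1, MPI_INT, MPI_ANY_SOURCE, TAG_REQ,
                         MPI_COMM_WORLD, &st);
                int reply = request * 2;       /* the "service": double it */
                MPI_Send(&reply, 1, MPI_INT, st.MPI_SOURCE, TAG_REP,
                         MPI_COMM_WORLD);
            }
        } else {
            /* Client: issues the request, then waits for the response. */
            int request = rank, reply;
            MPI_Send(&request, 1, MPI_INT, 0, TAG_REQ, MPI_COMM_WORLD);
            MPI_Recv(&reply, 1, MPI_INT, 0, TAG_REP, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("client %d got reply %d\n", rank, reply);
        }
        MPI_Finalize();
        return 0;
    }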

5
Q

Until the client makes a call, the ____ does not communicate.

A

Server

6
Q

Direct communication between processes.

A

Peer to peer

7
Q

Here there is no client or server; any process can make a request to others and get a response.

A

Peer to peer
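
A sketch of peer-to-peer exchange as an MPI ring (the ring layout is an illustrative choice): every process both sends and receives, and no rank is privileged. MPI_Sendrecv pairs the two operations so neighbors cannot deadlock.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;          /* peer we send to */
        int left  = (rank - 1 + size) % size;   /* peer we receive from */
        int sent = rank, received;

        /* Every peer sends and receives symmetrically; no server involved. */
        MPI_Sendrecv(&sent, 1, MPI_INT, right, 0,
                     &received, 1, MPI_INT, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("peer %d received %d from peer %d\n", rank, received, left);
        MPI_Finalize();
        return 0;
    }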

8
Q

Acts as an intermediary among independent processes.

A

Message Systems

9
Q

It also acts as a switch through which processes exchange messages asynchronously in a decoupled manner.

A

Message systems

10
Q

Distributing data from one process to all processes in a group

A

Broadcast

11
Q

Takes an array of elements and distributes the elements in the order of process rank

A

Scatter

12
Q

Takes elements from many processes and gathers them into a single process.

A

Gather

13
Q

The inverse of MPI_Scatter

A

Gather

14
Q

Takes an array of elements on each process and returns an array of output elements to the root process.

A

Reduce

15
Q

Returns the maximum element

A

MPI_MAX

16
Q

Returns the minimum element

A

MPI_MIN

17
Q

Sums the elements

A

MPI_SUM

18
Q

Multiplies the elements

A

MPI_PROD

19
Q

Performs a logical AND across the elements

A

MPI_LAND

20
Q

Performs a logical OR across the elements.

A

MPI_LOR

21
Q

Performs a bitwise AND across the bits of the elements

A

MPI_BAND

22
Q

Performs a bitwise OR across the bits of the elements

A

MPI_BOR

23
Q

Returns the maximum value and the rank of the process that owns it

A

MPI_MAXLOC

24
Q

Returns the minimum value and the rank of the process that owns it

A

MPI_MINLOC
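
A sketch of MPI_MAXLOC in action, with illustrative buffer names and values: each process pairs a value with its own rank in an MPI_DOUBLE_INT element, and the reduction delivers both the maximum and the rank that owns it to the root.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* MPI_MAXLOC operates on (value, index) pairs; here the "index"
           is the owning process's rank. The struct layout matches the
           MPI_DOUBLE_INT pair type. */
        struct { double value; int rank; } local, global;
        local.value = (double)(rank * rank);   /* some per-process value */
        local.rank  = rank;

        MPI_Reduce(&local, &global, 1, MPI_DOUBLE_INT, MPI_MAXLOC,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("max value %.1f owned by rank %d\n",
                   global.value, global.rank);
        MPI_Finalize();
        return 0;
    }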

25
Q

Are essential in collective communication to distinguish processes from one another and enable efficient data routing and synchronization.

A

Unique Identifiers

26
Q

Process Rank

A

Process Rank

  • Simple Integer: Each process is assigned a unique integer value.
  • Advantages: Easy to implement and manage.
  • Disadvantages: Limited flexibility, especially in dynamic systems where processes may join or leave.
27
Q

Process ID

A

Process ID

  • System-assigned identifier: The operating system assigns each process a unique identifier.
  • Advantages: Provides a globally unique identifier.
  • Disadvantages: Can be more complex to manage and may require additional system resources.
28
Q

Logical Topology

A

Logical Topology

  • Hierarchical or grid-based structure: Processes are organized into a logical topology, such as a tree or grid.
  • Advantages: Can provide information about the relationships between processes and facilitate efficient data routing.
  • Disadvantages: Implementing and managing it may be more complex, especially in dynamic systems.
29
Q

Custom Identifiers

A

Custom Identifiers

  • User-defined Identifiers: Processes can be assigned unique identifiers based on various criteria such as location, function, or workload.
  • Advantages: Can provide flexibility and adaptability to specific application requirements.
  • Disadvantages: May require additional management and can be more complex to implement.
30
Q

A numerical identifier for a group of processes that are managed together as a single unit

A

Process Group ID (PGID)

31
Q

Associates a group of processes with a particular user session.

A

Session ID (SID)
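
A POSIX sketch (plain C, no MPI, assuming a Unix-like system) showing a process reading its own PID, PGID, and SID:

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* An argument of 0 means "the calling process". */
        printf("PID:  %ld\n", (long)getpid());
        printf("PGID: %ld\n", (long)getpgid(0));
        printf("SID:  %ld\n", (long)getsid(0));
        return 0;
    }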

32
Q

Why do we need unique identifiers? (4 answers)

A
  • Process Differentiation
  • Data Routing
  • Synchronization
  • Fault Tolerance
33
Q

Is a collection of tightly or loosely connected computers that work together so that they act as a single entity

A

Cluster Computing

34
Q

Is a fundamental paradigm in cluster computing where processes communicate by exchanging messages.

A

Message passing

35
Q

This approach is particularly well-suited for clusters due to their distributed nature and the ability to scale horizontally.

A

Message passing

36
Q

The server acts as a service provider, the client issues the request and waits for the response from the server.

A

Client Server

37
Q

The sender and the receiver have to “meet” at their respective send/receive operations so that data can be transferred.

A

Synchronous message passing

38
Q

Other terms for “Synchronous message passing”

A

Rendezvous or Handshaking

39
Q

The sender does not wait for the receiver to reach its receive operation; rather, it hands off the prepared data and continues its execution.

A

Asynchronous message passing

40
Q

This form of transfer does not force the sender to wait, but it creates another problem: there may be messages that have already been sent but not yet received, and they have to be stored somewhere.

A

Asynchronous message passing
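
A sketch contrasting the two modes in MPI; the values and tags are illustrative, and it should be run with at least two processes. MPI_Ssend completes only after the receiver reaches its matching receive (rendezvous), while the nonblocking MPI_Isend returns immediately, leaving the in-flight message to be held somewhere until delivery.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int data = 42;
        if (rank == 0) {
            /* Synchronous: blocks until rank 1 has posted its receive. */
            MPI_Ssend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

            /* Asynchronous (nonblocking): returns at once; the runtime
               must keep the in-flight message until it is received. */
            MPI_Request req;
            MPI_Isend(&data, 1, MPI_INT, 1, 1, MPI_COMM_WORLD, &req);
            /* ...the sender continues executing here... */
            MPI_Wait(&req, MPI_STATUS_IGNORE); /* buffer reusable after this */
        } else if (rank == 1) {
            int a, b;
            MPI_Recv(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Recv(&b, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("received %d and %d\n", a, b);
        }
        MPI_Finalize();
        return 0;
    }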

41
Q

Where data is aggregated or disseminated from/to multiple processes

A

Collective communication

42
Q

Blocks until all processes have reached a synchronization point

A

Barrier Synchronization

43
Q

Broadcast, Scatter, Gather, and All-to-All transmission of data across the communicator.

A

Data Movement (or Global Communication)

44
Q

One process of the communicator collects data from each process and operates on that data to compute a result.

A

Collective Operations (or Global Reduction)

45
Q

Blocks each calling process until all processes in the communicator have reached this routine

A

MPI_Barrier
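
A minimal barrier sketch: modulo stdout buffering, no process reports "after" until every process has reached the barrier.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        printf("rank %d: before the barrier\n", rank);
        MPI_Barrier(MPI_COMM_WORLD);   /* everyone waits here */
        printf("rank %d: after the barrier\n", rank);

        MPI_Finalize();
        return 0;
    }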

46
Q

Broadcasts a message from the process with rank “root” to all other processes of the group.

A

MPI_Bcast
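
A minimal broadcast sketch: only the root (rank 0) holds the value beforehand; afterwards every rank does.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int value = 0;
        if (rank == 0) value = 123;    /* only the root has the data */

        /* After the call, every rank's `value` holds the root's 123. */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        printf("rank %d sees value %d\n", rank, value);
        MPI_Finalize();
        return 0;
    }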

47
Q

Distributes distinct portions of an array from one task to all tasks in a group

A

MPI_Scatter
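
A minimal scatter sketch with one element per rank (buffer names illustrative): element i of the root's array ends up on the rank-i process.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *sendbuf = NULL;
        if (rank == 0) {               /* root prepares one element per rank */
            sendbuf = malloc(size * sizeof(int));
            for (int i = 0; i < size; i++) sendbuf[i] = 10 * i;
        }

        int mine;
        /* Element i of the root's array goes to the rank-i process. */
        MPI_Scatter(sendbuf, 1, MPI_INT, &mine, 1, MPI_INT,
                    0, MPI_COMM_WORLD);

        printf("rank %d received %d\n", rank, mine);
        if (rank == 0) free(sendbuf);
        MPI_Finalize();
        return 0;
    }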

48
Q

Gathers together values from a group of processes.

A

MPI_Gather
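
A minimal gather sketch, the mirror image of the scatter above: rank i's contribution lands in slot i of the root's array.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int mine = rank * rank;        /* each rank contributes one value */
        int *recvbuf = NULL;
        if (rank == 0) recvbuf = malloc(size * sizeof(int));

        /* Rank i's value lands in slot i of the root's array. */
        MPI_Gather(&mine, 1, MPI_INT, recvbuf, 1, MPI_INT,
                   0, MPI_COMM_WORLD);

        if (rank == 0) {
            for (int i = 0; i < size; i++)
                printf("slot %d = %d\n", i, recvbuf[i]);
            free(recvbuf);
        }
        MPI_Finalize();
        return 0;
    }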

49
Q

Gathers data from all tasks and distributes it to all tasks.

A

MPI_Allgather
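
A minimal allgather sketch: like the gather above, except every rank, not just the root, receives the full array.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int mine = rank + 1;
        int *all = malloc(size * sizeof(int));

        /* Like MPI_Gather, but every rank receives all contributions,
           so no root argument is needed. */
        MPI_Allgather(&mine, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);

        printf("rank %d sees first=%d last=%d\n",
               rank, all[0], all[size - 1]);
        free(all);
        MPI_Finalize();
        return 0;
    }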

50
Q

Reduces values on all processes to a single value.

A

MPI_Reduce
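
A minimal reduction sketch using MPI_SUM: each rank contributes one integer, and only the root receives the total.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int mine = rank + 1;   /* each rank contributes one value */
        int total = 0;

        /* Combines every rank's `mine` with MPI_SUM; only the root
           (rank 0) receives the result. */
        MPI_Reduce(&mine, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("sum = %d\n", total);
        MPI_Finalize();
        return 0;
    }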
