Module 2 Flashcards
Networks and Parallelism with Data Structures
_ is a collection of tightly or loosely connected computers that work together so that they act as a single entity.
Cluster computing
The first concept in _ is the notion of a communicator. A communicator defines a group of processes that can communicate with one another.
Message Passing Interface (MPI)
In _, an application passes messages among processes to perform a task.
Message Passing Interface (MPI)
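A minimal sketch of these two ideas in C (compile with mpicc, launch with mpirun; the printed text is an arbitrary choice): every process queries the default communicator MPI_COMM_WORLD for its rank and the group size.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    /* MPI_COMM_WORLD is the communicator containing all launched processes. */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```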
A basic approach to inter-process communication: data is exchanged between a sender and a receiver. A process sends a message representing a request; the receiver receives and processes it, then sends back a reply.
Message Passing Paradigm
Operations: send, receive
Connections: connect, disconnect
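A minimal request/reply sketch in C with MPI, run with at least two processes (the tag 0 and the payload values are arbitrary choices): rank 0 sends a request, rank 1 processes it and replies.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                       /* sender: issue the request */
        int request = 42, reply;
        MPI_Send(&request, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&reply, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 got reply %d\n", reply);
    } else if (rank == 1) {                /* receiver: process, then reply */
        int request, reply;
        MPI_Recv(&request, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        reply = request + 1;               /* "process" the request */
        MPI_Send(&reply, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```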
The server acts as a service provider; the client issues a request and waits for the server's response. The server is passive: it does not communicate until a client makes a call. Many Internet services are client-server applications.
Client Server Paradigm
Server Process: listen, accept
Client Process: issue the request, accept the response
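A minimal server-side sketch of this pattern in C with POSIX TCP sockets (port 5000 is an arbitrary choice; error handling omitted for brevity): the server passively listens and accepts, serving one request. A client would mirror it with socket, connect, write (the request), and read (the response).

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);          /* arbitrary port for this sketch */
    bind(fd, (struct sockaddr *)&addr, sizeof addr);
    listen(fd, 8);                        /* passive: wait for clients */

    int client = accept(fd, NULL, NULL);  /* blocks until a client calls */
    char buf[128] = {0};
    read(client, buf, sizeof buf - 1);    /* the client's request */
    printf("request: %s\n", buf);
    write(client, "ok\n", 3);             /* the server's response */
    close(client);
    close(fd);
    return 0;
}
```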
Direct communication between processes. There is no fixed client or server; any process can make a request to another and get a response.
Peer to Peer Paradigm
Operations: request, response
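A small sketch of symmetric peers in C with MPI, run with two processes (values and tag are arbitrary): neither rank is a dedicated server; each both issues a request and answers its peer's.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank < 2) {                        /* only two peers in this sketch */
        int peer = 1 - rank;
        int request = rank, reply;
        if (rank == 0) {                   /* peer 0 requests first... */
            MPI_Send(&request, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(&reply, 1, MPI_INT, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {                           /* ...then serves peer 1's request */
            MPI_Recv(&reply, 1, MPI_INT, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&request, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
        }
        printf("rank %d heard from its peer: %d\n", rank, reply);
    }

    MPI_Finalize();
    return 0;
}
```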
_ act as intermediaries among independent processes. They also act as a switch through which processes exchange messages asynchronously, in a decoupled manner.
Message systems
The sender sends a message, which is first dropped into the message system and then forwarded to the message queue associated with the receiver.
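One concrete realization is a POSIX message queue in C (the queue name "/demo" and the sizes are arbitrary choices; link with -lrt on Linux): the sender drops a message into the queue and a decoupled receiver picks it up later.

```c
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/types.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
    /* "/demo" is an arbitrary queue name for this sketch. */
    mqd_t q = mq_open("/demo", O_CREAT | O_RDWR, 0600, &attr);

    /* Sender drops the message into the queue and could continue at once. */
    mq_send(q, "hello", 5, 0);

    /* Receiver (here the same process, but typically another one)
       picks the message up from the queue whenever it is ready. */
    char buf[128];
    ssize_t n = mq_receive(q, buf, sizeof buf, NULL);
    printf("got %.*s\n", (int)n, buf);

    mq_close(q);
    mq_unlink("/demo");
    return 0;
}
```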
_ the sender and the receiver have to “meet” at their respective send/receive operations so that data can be transferred.
Synchronous message passing
This is also called ‘rendezvous’ or ‘handshaking’.
This form of transfer is simple but can be inefficient: the sender may have to wait even after it has already prepared the data to be sent.
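In MPI this rendezvous behavior can be requested explicitly with MPI_Ssend; a minimal sketch (values and tag arbitrary): the send does not complete until the receiver has started its matching receive.

```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, data = 7;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Synchronous send: completes only once rank 1 has begun
           its matching receive -- the "handshake"/rendezvous. */
        MPI_Ssend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```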
The sender does not wait for the receiver to reach its receive operation; instead, it hands off the prepared data and continues its execution.
This form of transfer does not force the sender to wait, but creates another problem: there may be messages that have already been sent but not yet received, and they have to be stored somewhere.
Asynchronous message passing
They are the buffers for in-transit messages.
Message queues
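A sketch of this in C with MPI's non-blocking calls (values and tag arbitrary): MPI_Isend returns immediately, the in-transit message is buffered by the system, and MPI_Wait later confirms completion.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, data;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Request req;

    if (rank == 0) {
        data = 99;
        /* Non-blocking send: returns immediately; the message may sit
           in a system buffer until rank 1 receives it. */
        MPI_Isend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... the sender can do other work here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);   /* safe to reuse 'data' now */
    } else if (rank == 1) {
        MPI_Irecv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", data);
    }

    MPI_Finalize();
    return 0;
}
```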
Data is aggregated or disseminated from/to multiple processes.
COLLECTIVE COMMUNICATION
Blocks until all processes have reached a synchronization point.
Barrier Synchronization
Broadcast, Scatter, Gather, and All-to-All transmission of data across the communicator.
Data Movement (or Global Communication)
One process collects data from every process in the communicator and operates on that data to compute a result.
Collective Operations (or Global Reduction)
_ blocks until all processes in the communicator have reached this routine
MPI_Barrier
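A minimal barrier sketch in C (the printed text is an arbitrary choice): no process prints the second line until every process has printed the first.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("rank %d: before the barrier\n", rank);
    MPI_Barrier(MPI_COMM_WORLD);   /* no process passes until all arrive */
    printf("rank %d: after the barrier\n", rank);

    MPI_Finalize();
    return 0;
}
```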
Distributing data from one process to all processes in the group
Broadcast
MPI_Bcast broadcasts a message from the process with rank “root” to all other processes of the group.
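A minimal broadcast sketch in C (root 0 and the value 123 are arbitrary choices): only the root holds the value at first; after MPI_Bcast every process has a copy.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, value = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) value = 123;    /* only the root has the data at first */
    /* Every process calls MPI_Bcast; afterwards all hold root's value. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d now has %d\n", rank, value);

    MPI_Finalize();
    return 0;
}
```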
Takes an array of elements and distributes the elements in the order of process rank
Scatter
MPI_Scatter sends data from one task to all other tasks in a group
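A scatter sketch in C (the contents of the root's array are arbitrary): element i of the root's array is delivered to the process with rank i.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *all = NULL;
    if (rank == 0) {               /* root prepares one element per rank */
        all = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) all[i] = i * 10;
    }

    int mine;
    /* Element i of root's array goes to the process with rank i. */
    MPI_Scatter(all, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d received %d\n", rank, mine);

    free(all);
    MPI_Finalize();
    return 0;
}
```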
Takes elements from many processes and gathers them into one single process.
Gather
Inverse of MPI_Scatter
MPI_Gather gathers together values from a group of processes
MPI_Allgather gathers data from all tasks and distributes the combined result to all tasks.
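A gather sketch in C (the contributed values are arbitrary): rank i's value lands in slot i of the root's array, mirroring the scatter example above. With MPI_Allgather the root argument is dropped and every process receives the full array.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int mine = rank * rank;        /* each process contributes one value */
    int *all = NULL;
    if (rank == 0) all = malloc(size * sizeof(int));

    /* Rank i's value lands in slot i of the root's array. */
    MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size; i++) printf("from rank %d: %d\n", i, all[i]);

    free(all);
    MPI_Finalize();
    return 0;
}
```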
Takes an array of elements on each process and returns an array of output elements to the root process.
Reduce
MPI_Reduce reduces values on all processes to a single value.
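A reduce sketch in C using MPI_SUM as the operation (the contributed values are arbitrary): every rank contributes one value and only the root receives the combined result.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int mine = rank + 1, total = 0;
    /* Combine every rank's value with MPI_SUM; only root gets the result. */
    MPI_Reduce(&mine, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum = %d\n", total);

    MPI_Finalize();
    return 0;
}
```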
MPI REDUCTION OPERATIONS
Returns the maximum element.
MPI_MAX
Meaning: Maximum; C Data Types: integer, float
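The op argument selects the reduction; a sketch of the MPI_MAX row above with a float datatype (the contributed values are arbitrary):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float mine = (float)(rank * 2), biggest;
    /* MPI_MAX with MPI_FLOAT: root gets the largest contributed value. */
    MPI_Reduce(&mine, &biggest, 1, MPI_FLOAT, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("max = %f\n", biggest);

    MPI_Finalize();
    return 0;
}
```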