Parallel and Distributed Computing Flashcards
Distributed Computing System example
the CS lab, supercomputers, email, file servers, printer access
Parallel Computing system example
Stampede here on campus
Distributed System
A set of physically separate processors connected by one or more communication links
Parallel Computing
- Tightly-coupled systems
- Processors share a clock and memory, and run one OS
- Frequent communication
- Processors connected via a network (typically a bus)
- The network is connected to a single shared memory
Distributed Computing
- Loosely-coupled systems
- Each processor has its own memory
- Each processor runs an independent OS
- Communication is very expensive
- A collection of computers (nodes) connected by a network
Parallel computing communicates through:
shared memory (typically)
- Read and write accesses to shared memory locations
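A minimal sketch of this idea in C with POSIX threads (not from the course notes; the names worker and shared_result are illustrative): one thread stores a value into an ordinary shared variable, and after pthread_join the main thread simply loads it.

#include <pthread.h>
#include <stdio.h>

/* Shared memory location: both threads can load/store it directly. */
static int shared_result = 0;

static void *worker(void *arg) {
    (void)arg;
    shared_result = 42;   /* "communication" is an ordinary store */
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, NULL);   /* join also makes the store visible here */
    printf("worker wrote %d\n", shared_result);
    return 0;
}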
Two forms of parallel computing architecture:
- SMP (Symmetric Multiprocessor)
- Multicore
- Can also combine the two (e.g., Stampede)
Distributed computing communicates through:
message passing
The architecture of distributed computing
- Nodes are connected by a network
- Massively Parallel Machines
SMP (Symmetric Multiprocessor)
- Multiprocessor: two or more processors share a common RAM
- "Symmetric" refers to the OS (one OS for all processors, and any processor can run it)
Multicore:
multiple processors (cores) on the same chip
Massively Parallel Machines
- Nodes are greatly simplified and include only memory, processor(s), and a network card
- Augmented with a fast network interface
Clusters
- Networked workstations with a fast network
- Built of commodity off-the-shelf (COTS) parts
- Less specialized
Very Distributed Computing
- Grid computing
- Cloud computing
Parallel programming involves:
- Decomposing an algorithm into parts
- Distributing the parts as tasks which are worked on by multiple processors simultaneously
- Coordinating the work and communication of those processors (synchronization)
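A small illustration of those three steps, assuming C with POSIX threads (the sizes and the names sum_chunk and partial are invented for the example): an array sum is decomposed into chunks, each chunk is distributed to a task, and pthread_join coordinates combining the partial results.

#include <pthread.h>
#include <stdio.h>

#define N 8            /* array size (illustrative)      */
#define NTHREADS 2     /* number of worker tasks         */

static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
static long partial[NTHREADS];   /* one result slot per task */

/* Each task sums its own chunk of the array (the decomposed part). */
static void *sum_chunk(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS), hi = lo + (N / NTHREADS);
    long s = 0;
    for (long i = lo; i < hi; i++) s += data[i];
    partial[id] = s;
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    long total = 0;

    /* Distribute the parts as tasks. */
    for (long id = 0; id < NTHREADS; id++)
        pthread_create(&tid[id], NULL, sum_chunk, (void *)id);

    /* Coordinate: wait for all tasks, then combine their results. */
    for (long id = 0; id < NTHREADS; id++) {
        pthread_join(tid[id], NULL);
        total += partial[id];
    }
    printf("total = %ld\n", total);
    return 0;
}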
Parallel programming considerations:
- The type of parallel architecture being used
- The type of processor communication used
Shared memory Programming Model
- Interprocess communication is implicit; synchronization is explicit
- Assumes processes/threads can read and write a set of shared memory locations
- Difficult to provide across machine boundaries
- Programs/threads communicate and cooperate via loads/stores to memory locations they share
- Communication is therefore at memory-access speed (very fast) and is implicit
- Cooperating pieces must all execute on the same system (computer)
- OS services and/or libraries are used for creating tasks (processes/threads) and for coordination (semaphores/barriers/locks)
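A hedged sketch of "implicit communication, explicit synchronization" using a POSIX mutex (names are illustrative): the communication itself is a plain store to counter, but the lock/unlock calls are coordination the programmer must write explicitly.

#include <pthread.h>
#include <stdio.h>

/* Shared data plus an explicit lock protecting it. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* explicit synchronization      */
        counter++;                    /* implicit communication: store */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter);   /* 200000 */
    return 0;
}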
message passing Programming Model
- Interprocess communication is explicit; synchronization is implicit
- Extensible to communication in distributed systems
- "Shared" data is communicated using send/receive services (across an external network)
- Shared data must be formatted into message chunks for distribution
- Coordination is also via sending/receiving messages
- Program components can run on the same or different systems, so they can use many processes
- Standard libraries exist to encapsulate messages
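MPI is one example of the standard libraries mentioned above (using it here is an assumption, not something the card specifies). A minimal send/receive sketch between two processes:

#include <mpi.h>
#include <stdio.h>

/* Compile with mpicc and run with mpirun -np 2 ./a.out */
int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* "Shared" data is packed into a message and sent explicitly. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}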
when do send/receive operations terminate?
- Blocking (synchronous): the sender waits until its message is received; the receiver waits if no message is available
- Non-blocking (asynchronous): the send operation returns "immediately"; the receive operation returns whether or not a message is available (polling)
- Partially blocking/non-blocking: send/receive with a timeout
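Sticking with the MPI assumption from the previous sketch, blocking versus non-blocking receives can be contrasted with MPI_Recv versus MPI_Irecv plus MPI_Test polling:

#include <mpi.h>
#include <stdio.h>

/* Run with mpirun -np 2; rank 0 sends, rank 1 receives two ways. */
int main(int argc, char **argv) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 7;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Send(&value, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocking: does not return until a matching message arrives. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Non-blocking: returns immediately; poll for completion. */
        MPI_Request req;
        int done = 0;
        MPI_Irecv(&value, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &req);
        while (!done) {
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);  /* polling */
            /* could do useful work here while waiting */
        }
        printf("rank 1 got %d\n", value);
    }

    MPI_Finalize();
    return 0;
}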
Limitations of message passing
- Easy for the OS, hard for the programmer
- The programmer must code the communication
- The programmer may have to code format conversions, flow control, and error control
- No dynamic resource discovery
Event Ordering
- Coordination of requests (especially in a fair way) requires events (requests) to be ordered
- Stand-alone systems: shared clock/memory
- Distributed systems: no global clock; each clock runs at a different speed (clock drift)
Event ordering for distributed systems
- Through message passing
- A message must be sent before it can be received
- Send/receive operations can thus "synchronize" the clocks
Happened-Before Relation
1. If A and B are events in the same process, and A executed before B, then A->B
2. If A is the sending of a message and B is the receipt of that message, then A->B
3. If A->B and B->C, then A->C (transitivity)
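These three rules are what Lamport's logical clocks enforce; a minimal sketch in C (the names lclock_t, local_event, and recv_event are invented for illustration, not from the course):

#include <stdio.h>

/* One logical clock per process; messages carry the sender's timestamp. */
typedef struct { long time; } lclock_t;

static long max_long(long a, long b) { return a > b ? a : b; }

/* A local event (including a send) advances the clock. */
static long local_event(lclock_t *c) { return ++c->time; }

/* On receive, jump past the sender's timestamp, then tick. */
static long recv_event(lclock_t *c, long ts) {
    c->time = max_long(c->time, ts) + 1;
    return c->time;
}

int main(void) {
    lclock_t p = {0}, q = {0};

    long a = local_event(&p);          /* event A in process P                 */
    long send_ts = local_event(&p);    /* P sends a message, timestamp attached */
    long b = recv_event(&q, send_ts);  /* Q receives it: send -> receive        */

    printf("A=%ld, send=%ld, receive=%ld\n", a, send_ts, b);  /* 1, 2, 3 */
    return 0;
}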
Atomicity
Either something happens completely or it doesn't; we don't want partial effects to occur.
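One way to picture this, as a rough sketch in C with a mutex (the transfer scenario is an illustration, not from the card): other threads can never observe a state where only one of the two updates has happened.

#include <pthread.h>
#include <stdio.h>

/* Two balances that must change together: all-or-nothing as seen by others. */
static long from_balance = 100, to_balance = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void transfer(long amount) {
    pthread_mutex_lock(&lock);
    from_balance -= amount;   /* no other thread can observe the state   */
    to_balance   += amount;   /* between these two stores                */
    pthread_mutex_unlock(&lock);
}

int main(void) {
    transfer(25);
    printf("from=%ld to=%ld (sum stays 100)\n", from_balance, to_balance);
    return 0;
}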