Intro to Parallel Computing Flashcards
Explain sequential computing
Traditionally, software has been written for serial computation:
1. A problem is broken into a discrete series of instructions.
2. Instructions are executed one after another.
3. Only one instruction may execute at any moment in time.
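The three points above can be sketched in code. This is an illustrative Python snippet (not from the source): a single function whose instructions run strictly one after another.

```python
# Serial computation: the problem (summing squares) is broken into a
# discrete series of instructions that execute one after another, with
# only one instruction running at any moment.
def sum_of_squares_serial(numbers):
    total = 0
    for n in numbers:      # instructions run strictly in sequence
        total += n * n     # one instruction at a time
    return total

print(sum_of_squares_serial(range(10)))  # prints 285
```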
Parallel computing
The simultaneous use of multiple compute resources to solve a computational problem:
1. Run using multiple CPUs.
2. The problem is broken into discrete parts that can be solved concurrently.
3. Each part is further broken down into a series of instructions.
4. Instructions from each part execute simultaneously on different CPUs.
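The four steps above can be sketched with Python's standard library. This is a structural illustration only: worker threads stand in for the multiple CPUs (CPython's GIL means this shows the decomposition pattern, not a real speedup).

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    # step 3: each part is itself a series of instructions
    return sum(n * n for n in chunk)

def sum_of_squares_parallel(numbers, parts=4):
    data = list(numbers)
    size = max(1, (len(data) + parts - 1) // parts)
    # step 2: break the problem into discrete parts
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=parts) as pool:  # step 1: workers
        partials = pool.map(partial_sum_of_squares, chunks)  # step 4
    return sum(partials)

print(sum_of_squares_parallel(range(10)))  # prints 285, same as serial
```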
What are the different parallel computing memory architectures?
- Shared memory
- Distributed memory
- Hybrid
What are the parallel programming languages/APIs?
* Shared memory API: OpenMP (Open Multi-Processing)
– C, C++, Fortran
* Distributed memory API: MPI (Message Passing Interface)
– C, C++, Fortran, Java, Python
* Cilk – a customized C language
* CUDA (Compute Unified Device Architecture) – for Nvidia GPUs
* Pthreads (POSIX threads)
Explain SISD
- Single instruction: only one instruction stream is being acted on by the CPU during any one clock cycle.
- Single data: only one data stream is being used as input during any one clock cycle.
- Deterministic execution.
Explain SIMD
- A type of parallel computer.
- Best suited to specialized problems characterized by a high degree of regularity, such as image processing.
- Two varieties: Processor Arrays and Vector Pipelines.
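The SIMD idea can be illustrated with a regular image-processing-style operation (my own example, not from the source): one "instruction" applied across many data elements. Real SIMD hardware applies it to a whole vector register in lock step; here a list comprehension stands in for the processing elements.

```python
# Single instruction ("add 10"), multiple data (every pixel value):
# the same operation is applied uniformly across the data set, the kind
# of regularity (e.g. image brightening) that SIMD machines exploit.
pixels = [10, 20, 30, 40]
brightened = [p + 10 for p in pixels]  # one instruction, many elements
print(brightened)  # prints [20, 30, 40, 50]
```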
Array processor
1. A single computer with multiple parallel processors.
2. Processing units are designed to work together under the supervision of a single control unit.
3. Results in a single instruction stream and multiple data streams.
Explain MIMD
- The most common type of parallel computer.
- Execution can be synchronous or asynchronous, deterministic or non-deterministic.
- Representatives: most current supercomputers, networked parallel computer “grids”, and multi-processor SMP computers, including some types of PCs.
Explain MISD
- ̥Few actual examples of this class of parallel computer have ever existed.
- A single data stream is fed into multiple processing units.
- Representatives: Systolic Arrays
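A minimal sketch of the MISD idea, assuming a toy pipeline: one data stream flows through multiple processing units, each applying a different instruction, loosely in the style of a systolic array. The three stage functions are illustrative inventions, not from any real design.

```python
# MISD sketch: a SINGLE data stream passes through MULTIPLE processing
# units, each applying a different instruction to the same item.
stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]  # toy units
stream = [1, 2, 3]
outputs = []
for x in stream:              # one data stream
    for stage in stages:      # several instruction units act on each item
        x = stage(x)
    outputs.append(x)
print(outputs)  # prints [1, 3, 5]
```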
What are the different parallel computing models?
- Shared Memory (without threads)
- Threads Model
- Distributed Memory / Message Passing Model
- Data Parallel Model
- Hybrid Model
- Single Program Multiple Data (SPMD)
- Multiple Program Multiple Data (MPMD)
Explain the Distributed Memory / Message Passing Model
- Distributed Memory / Message Passing Model
1. A set of tasks that use their own local memory during computation.
2. Multiple tasks can reside on the same physical machine and/or across an arbitrary number of machines.
3. Tasks exchange data through communications, by sending and receiving messages.
4. Data transfer usually requires cooperative operations to be performed by each process.
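The send/receive pattern can be sketched with Python's standard library, as a stand-in for real message passing (in MPI this would be `MPI_Send`/`MPI_Recv`). Two threads play the roles of the tasks, and a `queue.Queue` stands in for the interconnect; the matched put/get pair is the cooperative operation from step 4.

```python
import threading
import queue

# Each task keeps its own local variables and shares data ONLY by
# sending and receiving explicit messages over a channel.
channel = queue.Queue()
received = {}

def sender():
    local_data = [1, 2, 3]             # task-local memory (step 1)
    channel.put(sum(local_data))       # explicit send

def receiver():
    received["value"] = channel.get()  # matching receive (blocks)

tasks = [threading.Thread(target=sender), threading.Thread(target=receiver)]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
print(received["value"])  # prints 6
```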
Explain Data Parallel Model
- Data Parallel Model
1. Also known as the Partitioned Global Address Space (PGAS) model.
2. The address space is treated globally.
3. Most of the parallel work focuses on performing operations on a data set, typically organized into a common structure such as an array or cube.
4. A set of tasks works collectively on the same data structure; however, each task works on a different partition of that structure.
5. Tasks perform the same operation on their partition of the work.
6. On shared memory architectures, all tasks may have access to the data structure through global memory.
7. On distributed memory architectures the data structure is split up and resides as “chunks” in the local memory of each task.
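The points above can be sketched in Python (my own illustration): every task runs the same operation on a different partition of one common array. Because the workers are threads sharing one list, this mimics the shared-memory case of point 6, where all tasks access the data structure through global memory.

```python
from concurrent.futures import ThreadPoolExecutor

# Data-parallel sketch: the SAME operation (doubling an element) is run
# by each task on a DIFFERENT partition of one common data structure.
data = list(range(8))
partitions = [range(0, 4), range(4, 8)]  # partitions of the global array

def double_partition(indices):
    for i in indices:        # each task touches only its own chunk
        data[i] *= 2

with ThreadPoolExecutor(max_workers=2) as pool:
    list(pool.map(double_partition, partitions))
print(data)  # prints [0, 2, 4, 6, 8, 10, 12, 14]
```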
Explain the hybrid model
- A common example of a hybrid model combines the message passing model (MPI) with the threads model (OpenMP): communications between processes on different nodes occur over the network (MPI), while computationally intensive kernels are performed by threads using local, on-node data (OpenMP).
- Another example is using MPI with CPU-GPU (Graphics Processing Unit) programming.
–MPI tasks run on CPUs using local memory and communicating with each other over a network.
–Computationally intensive kernels are off-loaded to GPUs on-node.
–Data exchange between node-local memory and GPUs uses CUDA (or something equivalent).
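A structural sketch of the MPI + OpenMP hybrid, assuming a toy two-node setup: the outer level plays the MPI role (one chunk per "node"), and inside each node a thread pool plays the OpenMP role on local data. Plain function calls stand in for the network messages a real MPI run would use.

```python
from concurrent.futures import ThreadPoolExecutor

def node_compute(chunk):
    # "OpenMP" level: on-node threads run the intensive kernel on
    # local, on-node data.
    with ThreadPoolExecutor(max_workers=2) as pool:
        return sum(pool.map(lambda n: n * n, chunk))

# "MPI" level: work is distributed across nodes; partial results are
# then exchanged (here, simply returned) and combined.
node_chunks = [range(0, 5), range(5, 10)]      # one chunk per node
partials = [node_compute(c) for c in node_chunks]
print(sum(partials))  # prints 285
```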
Explain SPMD
1. Single program: all tasks **execute their copy of the same program simultaneously**. This program can use threads, message passing, data parallel, or hybrid techniques.
2. Multiple data: all tasks may use different data.
3. SPMD programs usually have the necessary logic programmed into them to allow different tasks to branch or conditionally execute only those parts of the program they are designed to execute (i.e., only a portion of it).
4. Using message passing or hybrid programming, SPMD is the most commonly used parallel programming model for multi-node clusters.
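The branching logic of point 3 can be sketched as follows (threads stand in for the tasks, and the rank variable mirrors an MPI rank):

```python
import threading

# SPMD sketch: every task runs the SAME function (single program) but
# branches on its rank, so each executes only the portion meant for it.
results = {}

def program(rank):
    if rank == 0:
        results[rank] = "coordinator"   # rank 0 takes the control branch
    else:
        results[rank] = rank * rank     # other ranks do the computation

tasks = [threading.Thread(target=program, args=(r,)) for r in range(4)]
for t in tasks:
    t.start()
for t in tasks:
    t.join()
print(results)
```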
Explain MPMD
1. Multiple program: tasks may execute different programs simultaneously. The programs can use threads, message passing, data parallel, or hybrid techniques.
2. Multiple data: all tasks may use different data.
Design issues of parallel computing
1. Partitioning: splitting the problem into smaller subproblems.
2. Mapping: distributing the subproblems to multiple processors.
3. Communication: if required (depends on the topology).
4. Consolidating: combining partial results into the final result.
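The four design steps can be seen in one small example (my own illustration, with threads standing in for processors):

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(100))
# 1. Partitioning: split the problem into smaller subproblems.
chunks = [data[i:i + 25] for i in range(0, 100, 25)]

# 2. Mapping: distribute the subproblems to multiple workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    # 3. Communication: each worker returns its partial result.
    partials = list(pool.map(sum, chunks))

# 4. Consolidating: combine the partial results into the final answer.
total = sum(partials)
print(total)  # prints 4950
```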