Finals Flashcards
the process contains more than
one thread, allowing it to accomplish a number of
things at the same time
multithreaded processes
is the unit of execution within a process.
Thread
each ____ has a separate memory address space, which means that a process runs independently and is isolated from other processes.
Process
There can be multiple instances of a single program, and each instance of
that running program is a _______.
Process
Speedup Formula
Speedup = 1 / [(1 - P) + P/N]
P (parallel fraction) = Parallel Time / Total Sequential Time
Total Sequential Time = Seq (before) + Par + Seq (after)
N = number of processors
It might seem that we could increase the number of processors indefinitely and make the system run as fast as we like; this law shows that the serial portion of a program limits the achievable speedup.
This algorithm might look like:
1. Divide the pile into stacks and hand out one stack to each person (Serial)
2. Everyone looks for the “concurrency” card (Parallel)
3. Add the found card to a separate pile (Serial)
Amdahl’s law
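The formula above is easy to check numerically. This sketch (the 90% parallel fraction is an illustrative value) shows how speedup plateaus at 1/(1 - P) no matter how many processors are added:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
    parallel fraction of the work and n is the number of processors."""
    return 1.0 / ((1.0 - p) + p / n)

# With 90% of the work parallelizable, speedup can never exceed
# 1 / (1 - 0.9) = 10, regardless of processor count.
for n in (1, 2, 8, 1000):
    print(f"N={n}: speedup={amdahl_speedup(0.9, n):.2f}")
```

Even with N = 1000, the speedup is only about 9.91, which is the point of this card: the serial steps dominate once the parallel part is spread thin.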
What are the Challenges in Concurrent/Parallel Computing?
-Complexity of designing parallel algorithms:
* Removing task dependencies
* Can add large overheads
-Limited by memory access speeds
-Execution speed is sensitive to data
-Real-world problems are often most naturally described with mathematical recurrences, which are inherently sequential
What are the Pros of Parallel Computing?
Improved Performance
Better Resource Utilization
Handles Large-Scale Problems
Scalability
Energy Efficiency
Fault Tolerance
is when tasks actually run in parallel on multiple CPUs
Parallelism
is when multiple tasks can run in overlapping
time periods, not necessarily simultaneously.
Concurrency
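A minimal Python sketch of the distinction (task count and workload are illustrative): the threads below run concurrently because their lifetimes overlap, but in CPython they do not execute Python bytecode in parallel, due to the global interpreter lock.

```python
import threading

results = []
lock = threading.Lock()

def work(task_id):
    # Each thread makes progress during overlapping time periods
    # (concurrency); true simultaneous execution on multiple CPUs
    # (parallelism) would require e.g. the multiprocessing module.
    total = sum(range(1000))
    with lock:
        results.append((task_id, total))

threads = [threading.Thread(target=work, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(task_id for task_id, _ in results))  # [0, 1, 2, 3]
```

Swapping `threading.Thread` for processes (or running the same pattern in a language without a GIL) turns this concurrency into parallelism on a multicore machine.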
Cons of Sequential Computing
Limited Performance
Poor Scalability
Inefficient for Complex Problems
Resource Underutilization
Longer Execution Time
In sequential computing, no communication or synchronization is required between different steps of the program execution, but there is an indirect ________ from the underutilization of available processing resources
Overhead
A system is considered scalable if its performance improves after adding more processing
resources. In the case of sequential computing, the only way to scale the system is to increase the performance of the system resources used – CPU, memory, etc.
Scalability
It is a straightforward approach, with a clear set of step-by-step instructions about what to do and when to do it.
Simplicity
The serial execution of tasks is a sort of chain, where the first task is followed by the second one, the second is followed by the third, and so on. The important point here is that tasks are physically executed without overlapping time periods
Sequential Computing
In these machines (loosely coupled multiprocessor
systems), each PE has its own local memory.
Distributed memory MIMD
In these machines (tightly coupled multiprocessor
systems), all the PEs are connected to a single global memory and they all
have access to it.
Shared memory MIMD
Multicomputers
Multiprocessors
MIMD
Vector processors
Fine-grained data parallel
SIMD
May be pipelined computers
MISD
Traditional von Neumann
single-CPU computers
SISD
is a multiprocessor machine which is
capable of executing multiple instructions on multiple data sets.
MIMD
computing system is a multiprocessor machine capable of executing different instructions on different PEs but all of them operating on the same dataset.
MISD
is a multiprocessor machine capable of
executing the same instruction on all the CPUs but operating on different data streams.
SIMD
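Hardware SIMD lives in vector units and GPUs; as a loose software analogy (pure Python, for illustration only), the same operation is applied uniformly to every element of a data stream:

```python
data = [1.0, 2.0, 3.0, 4.0]

# One "instruction" (scale by 2) applied across the whole data stream --
# the SIMD pattern. A real vector unit would do this in a single
# instruction rather than a Python loop.
scaled = [x * 2.0 for x in data]
print(scaled)  # [2.0, 4.0, 6.0, 8.0]
```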