Parallel Program Development Flashcards

1
Q

Ian Foster Design Methodology Steps

A
  1. Finding Concurrency
  2. Algorithm and Supporting Structures
  3. Implementation Mechanisms
2
Q

Finding Concurrency

A
  • Decomposition
  • Dependency Analysis - Design evaluation
3
Q

Algorithm & Supporting Structures

A
  • SPMD
  • Fork-Join Parallelism
  • Master/Worker and Server/Client
  • Task Pool
  • Producer/Consumer
  • Pipeline processing
4
Q

Implementation Mechanism

A
  • Management of the processes/threads (creation, scheduling, and destruction)
  • Sharing of information via shared data structures
  • Synchronization of the accesses to ensure correctness
5
Q

Decomposition

A

Data decomposition
  • The same activity is performed by many threads on different data

Task decomposition
  • Different activities are assigned to different threads
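The two decomposition styles above can be contrasted in a minimal Java sketch. The class and method names (`Decomposition`, `doubleInParallel`) are illustrative; the data-decomposition part gives each thread the same activity (doubling) on a disjoint, cyclically distributed slice of one array.

```java
import java.util.Arrays;

public class Decomposition {
    // Data decomposition: every thread performs the same activity
    // (doubling) on its own cyclic slice of the array.
    static int[] doubleInParallel(int[] data, int nThreads) throws InterruptedException {
        int[] out = Arrays.copyOf(data, data.length);
        Thread[] workers = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                // Thread id handles indices id, id + nThreads, id + 2*nThreads, ...
                for (int i = id; i < out.length; i += nThreads) out[i] *= 2;
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(Arrays.toString(doubleInParallel(new int[]{1, 2, 3, 4}, 2)));
        // prints [2, 4, 6, 8]
    }
}
```

Task decomposition would instead give each thread a *different* activity on the same data, e.g. one thread summing the array while another searches for its maximum.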

6
Q

Dependencies

A

Loop parallelism: types
  • Forall
    one or several assignments to array elements
    each assignment is treated as a separate array assignment
  • Dopar
    instructions of each iteration are executed sequentially in order
    variable updates in one iteration are not visible to other iterations
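Forall and dopar are language constructs, but their "updates are not visible to other iterations" semantics can be simulated in plain Java by snapshotting the old array values. This sketch (names `ForallSim`, `prefixStepForall` are illustrative) runs the statement `a[i] = a[i-1] + a[i]` once with forall-style semantics and once as an ordinary sequential loop, to show that the results differ:

```java
import java.util.Arrays;

public class ForallSim {
    // forall semantics: every right-hand side reads the OLD array values,
    // then all writes happen together, like one array assignment.
    static int[] prefixStepForall(int[] a) {
        int[] old = Arrays.copyOf(a, a.length);   // snapshot of old values
        int[] out = Arrays.copyOf(a, a.length);
        for (int i = 1; i < a.length; i++) out[i] = old[i - 1] + old[i];
        return out;
    }

    // Ordinary sequential loop: later iterations see earlier updates.
    static int[] prefixStepSequential(int[] a) {
        int[] out = Arrays.copyOf(a, a.length);
        for (int i = 1; i < out.length; i++) out[i] = out[i - 1] + out[i];
        return out;
    }

    public static void main(String[] args) {
        int[] a = {1, 1, 1, 1};
        System.out.println(Arrays.toString(prefixStepForall(a)));     // [1, 2, 2, 2]
        System.out.println(Arrays.toString(prefixStepSequential(a))); // [1, 2, 3, 4]
    }
}
```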
7
Q

SPMD

A
  • Single Program, Multiple Data
  • Used by OpenMP and MPI
  • All processes/threads execute the same program in parallel
  • Each has its own data
  • Different threads can follow different execution paths based on their id
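A minimal SPMD sketch in plain Java threads (the class name `SpmdDemo` and the coordination/computation split are illustrative): every thread runs the same code, but branches on its own id, as an OpenMP or MPI rank would.

```java
public class SpmdDemo {
    // The "program" every thread runs: same code for all,
    // but the path taken depends on the thread's id.
    static String work(int id) {
        if (id == 0) return "thread 0: coordination";
        return "thread " + id + ": computation";
    }

    public static void main(String[] args) throws InterruptedException {
        int nThreads = 4;
        Thread[] team = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int id = t;                      // private per-thread id
            team[t] = new Thread(() -> System.out.println(work(id)));
            team[t].start();
        }
        for (Thread w : team) w.join();            // output order is nondeterministic
    }
}
```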
8
Q

Fork-Join Parallelism

A
  • In the fork-join structure, a process or thread forks off a number of other processes or threads that then continue in parallel.
  • In some cases the parent waits until its child processes terminate and join, but in other cases, as in OpenMP, the master thread also continues in parallel with the other threads.
  • If we have a serial program whose runtime is dominated by a set of compute-intensive loops, we can use OpenMP's fork-join parallelism to parallelize those loops.
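The fork-join structure above maps directly onto Java's `ForkJoinPool`. This is a sketch, not the card's OpenMP example: a recursive array sum where each task forks one half, computes the other itself, and joins (the class name `SumTask` and the threshold are illustrative choices).

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private final long[] a;
    private final int lo, hi;
    private static final int THRESHOLD = 1_000;

    SumTask(long[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }

    @Override protected Long compute() {
        if (hi - lo <= THRESHOLD) {                // small enough: sum serially
            long s = 0;
            for (int i = lo; i < hi; i++) s += a[i];
            return s;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(a, lo, mid);
        left.fork();                               // fork: run left half in parallel
        long right = new SumTask(a, mid, hi).compute();
        return right + left.join();                // join: wait for the forked half
    }

    public static long parallelSum(long[] a) {
        return new ForkJoinPool().invoke(new SumTask(a, 0, a.length));
    }

    public static void main(String[] args) {
        long[] a = new long[10_000];
        for (int i = 0; i < a.length; i++) a[i] = i;
        System.out.println(parallelSum(a));        // 49995000
    }
}
```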
9
Q

Master/Worker and Server/Client

A
  • In the master/worker structure there is one master which controls the execution of the program.
  • The master often executes the main function of a parallel program and creates worker threads when needed to perform the actual computations. The assignment of work to the worker threads is usually done by the master thread.
  • Worker threads can also generate new work for computation; in that case the master thread is responsible for coordination.
  • Heterogeneous systems and cloud computing systems often use a client/server structure, in which multiple clients send requests to a server.
  • The server may have one or several threads to serve client requests.
  • When a server thread receives a request from a client, it processes it and delivers the result back to the client, which may then perform further computations on the result.
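A minimal master/worker sketch in Java (the names `MasterWorker`, `masterSum` are illustrative): the master splits the input into chunks, assigns one chunk per worker thread, waits for all workers via a `CountDownLatch`, and combines the partial results.

```java
import java.util.concurrent.CountDownLatch;

public class MasterWorker {
    // Master: split input into chunks, assign work, collect partial results.
    static long masterSum(int[] data, int nWorkers) throws InterruptedException {
        long[] partial = new long[nWorkers];
        CountDownLatch done = new CountDownLatch(nWorkers);
        int chunk = (data.length + nWorkers - 1) / nWorkers;
        for (int w = 0; w < nWorkers; w++) {
            final int id = w, lo = w * chunk, hi = Math.min(data.length, lo + chunk);
            new Thread(() -> {                     // worker: the actual computation
                long s = 0;
                for (int i = lo; i < hi; i++) s += data[i];
                partial[id] = s;
                done.countDown();
            }).start();
        }
        done.await();                              // master waits for all workers
        long total = 0;
        for (long p : partial) total += p;
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[100];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(masterSum(data, 4));    // 5050
    }
}
```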
10
Q

Task Pool

A
  • A task pool is a common data structure in which tasks to be performed are stored and from which they can be retrieved for execution.
  • A task comprises computations to be executed and a specification of the data to which the computations should be applied.
  • While processing a task, a thread often generates new tasks and inserts them into the pool as well.
  • Shared task pools are excellent for dynamic load balancing, because any thread that is idle can retrieve a task from the pool. This convenience comes at a price: access to the task pool must be synchronized to avoid race conditions.
  • If the tasks are too fine-grained there will be contention for the shared pool, but if the tasks are big enough the benefit of the shared task pool will outweigh the synchronization overhead. An example is Java's Executor interface.
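The Java Executor example the card mentions looks roughly like this sketch (the class name `TaskPoolDemo`, the pool size, and the squaring tasks are illustrative): a fixed pool of worker threads shares one internal task queue, and any idle worker pulls the next task.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TaskPoolDemo {
    // Submit one squaring task per input value to a shared pool of 4 threads.
    // The pool's internal queue acts as the task pool: idle workers pull
    // the next task, giving dynamic load balancing.
    static int sumOfSquares(int n) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            final int v = i;
            results.add(pool.submit(() -> v * v));  // each task: square its input
        }
        int sum = 0;
        for (Future<Integer> f : results) sum += f.get();
        pool.shutdown();
        return sum;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOfSquares(10));       // 385
    }
}
```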
11
Q

Producer/Consumer

A
  • This model distinguishes between producer threads and consumer threads.
  • Producer threads produce data which is used as input by the consumer threads.
  • A common data structure is used to transfer data from producer threads to consumer threads.
  • This is typically a data buffer of fixed length that can be accessed by both types of threads.
  • A producer can only store data elements into the buffer when it is not full, and a consumer thread can only retrieve data elements from the buffer when it is not empty.
  • Synchronization is also needed here to ensure correct coordination between producer and consumer threads, because the data buffer is shared.
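In Java the fixed-length buffer with the blocking behavior described above is exactly an `ArrayBlockingQueue`: `put()` blocks while the buffer is full, `take()` blocks while it is empty. A minimal sketch (names `ProducerConsumer`, `produceAndConsume`, and the `-1` sentinel are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    static long produceAndConsume(int nItems) throws InterruptedException {
        // Fixed-length buffer: put() blocks when full, take() blocks when
        // empty, so the queue itself provides the required synchronization.
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(4);
        long[] consumed = new long[1];

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= nItems; i++) buffer.put(i);
                buffer.put(-1);                    // sentinel: no more data
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int v = buffer.take(); v != -1; v = buffer.take()) consumed[0] += v;
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start(); consumer.start();
        producer.join(); consumer.join();
        return consumed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(produceAndConsume(100)); // 5050
    }
}
```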