MODULE 3 Flashcards

1
Q

Task dependency graph:

A

DAG with nodes being tasks and edges being dependencies
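The DAG idea can be sketched with Python's standard-library `graphlib`; the task names here are made up for illustration, and a topological order gives one valid execution order:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each task maps to the tasks it depends on (edges of the DAG).
deps = {
    "load": [],
    "clean": ["load"],
    "stats": ["clean"],
    "plot": ["clean", "stats"],
}

# static_order() yields every task after all of its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

`TopologicalSorter` also raises `CycleError` if the graph is not actually acyclic, which is a cheap sanity check on a hand-built task graph.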

2
Q

Steps in parallelization:

A
  1. Decomposition
  2. Assignment
  3. Orchestration
  4. Mapping
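A hedged sketch of the four steps on a parallel array sum (chunk size and worker count are arbitrary illustrative choices):

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000))

# 1. Decomposition: break the work into tasks (here, chunks of the array).
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# 2. Assignment: hand tasks to workers (the pool distributes the chunks).
# 3. Orchestration: the executor handles synchronization and result collection.
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))

# 4. Mapping: the OS maps the pool's threads onto physical cores.
total = sum(partial_sums)
print(total)  # 499500
```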
3
Q

Granularity of decomposition:

A
  • Fine grain
  • Coarse grain
4
Q

Fine grain:

A

Each task computes a single element; there are many tasks

5
Q

Possible decompositions into tasks:

A
  • Independent or dependent
  • Same code or different
  • Same time or different
6
Q

Coarse grain:

A

Each task computes multiple elements; there are few tasks
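The fine/coarse contrast can be sketched as two decompositions of the same loop (the data and block sizes are illustrative):

```python
data = list(range(12))

# Fine grain: each task computes a single element -> many small tasks.
fine_tasks = [[x] for x in data]                                # 12 tasks

# Coarse grain: each task computes a block of elements -> few large tasks.
coarse_tasks = [data[i:i + 4] for i in range(0, len(data), 4)]  # 3 tasks

print(len(fine_tasks), len(coarse_tasks))  # 12 3
```

Both decompositions cover the same work; they differ only in how much each task does.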

7
Q

Decomposition techniques:

A
  • Exploratory
  • Speculative
8
Q

Speculative decomposition approaches:

A
  • Conservative
  • Optimistic
9
Q

Speculative decomposition optimistic approach:

A

Scheduling tasks even if they might be dependent, and rolling back if a dependency violation turns up
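A minimal single-threaded sketch of the optimistic approach, with made-up names (`speculate`, `counter`): execute first, validate afterwards, discard the result on conflict:

```python
def speculate(task, snapshot, current):
    """Run `task` on a snapshot; commit only if the value is still unchanged."""
    result = task(snapshot)       # execute before knowing it is safe
    if snapshot != current():     # dependency check after the fact
        return None               # conflict detected: roll back (discard)
    return result                 # no conflict: commit the result

counter = {"v": 10}
snap = counter["v"]
committed = speculate(lambda x: x * 2, snap, lambda: counter["v"])
print(committed)  # 20: nothing changed, the speculation commits

counter["v"] = 11  # a concurrent update invalidates the old snapshot
rolled_back = speculate(lambda x: x * 2, snap, lambda: counter["v"])
print(rolled_back)  # None: the speculative result is thrown away
```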

10
Q

Speculative decomposition conservative approach:

A

Identifying independent tasks only when they're guaranteed not to have dependencies

11
Q

Characteristics of tasks:

A
  • Task generation
  • Task size
  • Data size
12
Q

Task sizes:

A
  • Uniform
  • Non-uniform
13
Q

Task size:

A

Amount of work and time a task takes

14
Q

Data size examples comparing input, output, and computation sizes:

A
  • Puzzle: Input < Comp
  • Min: Output < Input = Comp
  • Sort: Input = Output < Comp
15
Q

Characteristics of task interactions:

A
  • Data and size
  • Timing
  • Pattern
  • Known or unknown details
  • Involvement
16
Q

Orthogonal classification of tasks:

A
  • Static vs dynamic
  • Regular vs irregular
  • Read-only vs read-write
  • One-sided vs two-sided
17
Q

Goal of mapping:

A
  • Minimize overheads (cost of parallelization)
  • Balance interactions and idling (serialization)
18
Q

Factors that determine the mapping technique:

A
  • Data size
  • Interactions
  • Programming models
19
Q

Execution:

A

Alternating computation and interaction

20
Q

Schemes for static mapping:

A
  • Based on decomposition
  • Based on partitioning
  • Hybrid
21
Q

Hypercube:

A

N-dimensional analogue of the square and the cube, where the labels of adjacent nodes differ in exactly 1 bit
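The one-bit-difference property makes neighbor lists trivial to compute; `hypercube_neighbors` is a made-up helper name:

```python
def hypercube_neighbors(node, dim):
    """In a dim-dimensional hypercube, neighbors differ in exactly one bit."""
    return [node ^ (1 << i) for i in range(dim)]

# 3-D hypercube (an ordinary cube): node 0b000 connects to 001, 010, 100.
print(hypercube_neighbors(0b000, 3))  # [1, 2, 4]
print(hypercube_neighbors(0b101, 3))  # [4, 7, 1]
```

Every node therefore has exactly `dim` neighbors, and the network has `2**dim` nodes.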

22
Q

Hierarchical mapping:

A

Task-graph mapping at the top level and data partitioning within each task at the lower levels

23
Q

Schemes for dynamic mapping:

A
  • Centralized
  • Distributed
24
Q

Chunk scheduling:

A

A process picks up multiple tasks at once
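A minimal sketch of chunk scheduling against a centralized task queue (`grab_chunk` and the chunk size are illustrative); pulling several tasks per visit cuts contention on the shared queue:

```python
from queue import Queue

CHUNK = 3
tasks = Queue()
for t in range(10):
    tasks.put(t)

def grab_chunk(q, chunk):
    """Take up to `chunk` tasks in a single visit to the central queue."""
    taken = []
    while len(taken) < chunk and not q.empty():
        taken.append(q.get())
    return taken

first = grab_chunk(tasks, CHUNK)
second = grab_chunk(tasks, CHUNK)
print(first, second)  # [0, 1, 2] [3, 4, 5]
```

The trade-off: larger chunks mean fewer queue visits but a higher risk of load imbalance near the end of the computation.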

25
Q

Distributed dynamic mapping design questions:

A
  • How are processes paired?
  • Who initiates transfers?
  • How much is transferred?
  • When are transfers triggered?
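One common set of answers to these questions is receiver-initiated work stealing: an idle process initiates, picks a victim with work, and takes half of its tasks. A hedged single-threaded sketch with made-up names:

```python
import random

random.seed(0)  # deterministic victim choice for the example

# Each process owns a local task list; an empty one steals from a victim.
queues = {0: [1, 2, 3, 4], 1: []}

def steal(idle, queues):
    """Receiver-initiated transfer: the idle process picks a victim and
    takes half of its tasks (one common answer to 'how much?')."""
    victims = [p for p, q in queues.items() if p != idle and q]
    if not victims:
        return                      # nobody has spare work
    victim = random.choice(victims) # pairing: random victim selection
    half = len(queues[victim]) // 2
    queues[idle].extend(queues[victim][:half])  # triggered when idle
    del queues[victim][:half]

steal(1, queues)
print(queues)  # {0: [3, 4], 1: [1, 2]}
```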