Essay Flashcards
Difference between parallel and distributed computing:
Parallel
- Number of computers: 1
- Memory type: shared or distributed
- Communication: bus
- Goals: improve performance
Distributed
- Number of computers: multiple
- Memory type: distributed
- Communication: message passing
- Goals: improve scalability, fault tolerance, and resource sharing
Difference between shared-memory and distributed-memory parallel computing:
Shared
- Programming: through threading
- Communication: Through shared pool of memory
- Pros: Easier to program
- Cons:
* Performance may suffer when memory is physically far from the accessing processor (NUMA effects)
* Limited scalability
Distributed
- Programming: through processes
- Communication: message passing
- Pros: Tight control over communication
- Cons: Harder to program
Distributed transparency types and their descriptions:
Access: hides difference in data representation
Location: hides the location of an object
Relocation: hides the fact an object may move while in use
Migration: hides the fact an object may move
Replication: hides the copying of an object
Concurrency: hides that the object can be shared by independent users
Failure: hides the failure and recovery of an object
Dependability requirements and their descriptions:
Availability: Readiness for usage
Reliability: Continuity of use
Safety: Low probability of catastrophes
Maintainability: Easy to repair
Difference between failure, error, and fault + examples:
Failure: A component not living up to its specifications. Crash
Error: Part of a component's state that may lead to a failure. Bug
Fault: Cause of an error. Sloppy programmer
Difference between fault prevention, tolerance, removal, and forecasting + examples:
Prevention: Avoid occurrence of fault. Not hiring sloppy programmers
Tolerance: Mask occurrence of fault. Make two programmers work on the same component
Removal: Reduce number of faults. Fire sloppy programmers
Forecasting: Estimating future faults. Estimate a recruiter’s chance to hire a sloppy programmer
Grid computing layers and what they do:
Fabric: Provides interface to local resources
Connectivity: Has communication protocols
Resource: Manages a single resource
Collective: Handles access to multiple resources
Application: Contains the actual grid applications, operating within a virtual organization
Exploratory decomposition vs Speculative decomposition
Exploratory decomposition:
* Output from a branch is unknown ahead of time
* Parallel program may do more, less, or the same amount of work as the serial program
Speculative decomposition:
* Input at a branch is unknown ahead of time
* Parallel program may do more or the same amount of work as the serial program
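Speculative decomposition can be sketched in a few lines of Python: both branches are started before the branch condition is known, and the result of the untaken branch is discarded as wasted work (a toy sketch; `slow_condition`, `branch_a`, and `branch_b` are illustrative stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_condition():
    time.sleep(0.1)          # stand-in for an expensive branch test
    return True

def branch_a():
    return "took branch A"

def branch_b():
    return "took branch B"

# Speculative decomposition: start BOTH branches before the branch
# input is known, then keep only the result we actually need.
with ThreadPoolExecutor() as pool:
    fa = pool.submit(branch_a)   # speculative work
    fb = pool.submit(branch_b)   # speculative work
    cond = slow_condition()      # evaluated concurrently with the branches
    result = fa.result() if cond else fb.result()  # other result is wasted

print(result)  # took branch A
```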
Static task generation strategy vs Dynamic task generation strategy
Static task generation strategy: Concurrent tasks and task graph known a priori
Dynamic task generation strategy: Concurrent tasks and task graph are computed on the fly
Static interaction vs Dynamic interaction
Static interaction:
* Timing and interacting tasks known a priori
* Easier to code
Dynamic interaction:
* Timing and interacting tasks not known a priori
* Harder to code
Static mapping vs Dynamic mapping
Static mapping:
* Finding an optimal mapping is an NP-complete problem
* Tasks mapped to processes a priori
* Requires knowledge of task sizes
Dynamic mapping:
* Tasks generated at runtime
* Tasks mapped to processes at runtime
* Task sizes unknown
Centralized dynamic mapping vs Distributed dynamic mapping
Centralized dynamic mapping:
* Processors designated as masters or slaves
* Slaves request tasks from master
* Higher chance of bottleneck
Distributed dynamic mapping:
* Processors made equal
* Processors send and receive work from others
* Lower chance of bottleneck
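The centralized scheme can be sketched with a shared task queue that plays the role of the master's task pool: slave threads repeatedly pull the next task until a sentinel tells them to stop (a minimal sketch; the task of squaring numbers is illustrative):

```python
import queue
import threading

# Centralized dynamic mapping: the master holds one task pool; slave
# threads repeatedly request the next task. The single queue is the
# potential bottleneck.
tasks = queue.Queue()
results = queue.Queue()
NUM_SLAVES = 3
SENTINEL = None

def slave():
    while True:
        task = tasks.get()        # "request work from the master"
        if task is SENTINEL:
            break
        results.put(task * task)  # do the work and report back

# Master: generate the tasks, then one stop sentinel per slave.
for n in range(10):
    tasks.put(n)
for _ in range(NUM_SLAVES):
    tasks.put(SENTINEL)

workers = [threading.Thread(target=slave) for _ in range(NUM_SLAVES)]
for w in workers: w.start()
for w in workers: w.join()

total = sum(results.get() for _ in range(results.qsize()))
print(total)  # 285, the sum of squares 0..9
```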
Type of naming systems and their descriptions
Flat naming: Unstructured string with no location information (e.g., a random bit string)
Structured naming: Human-readable names built from hierarchical components (e.g., file paths, DNS names)
Attribute-based naming: (attribute, value) pairs describing an entity
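Attribute-based lookup can be illustrated with a tiny in-memory directory searched by partial description, in the spirit of LDAP-style directory services (a toy sketch; the `directory` entries and `search` helper are illustrative):

```python
# Attribute-based naming: entities are described by (attribute, value)
# pairs and looked up by matching a partial description.
directory = [
    {"type": "printer", "building": "A", "floor": 2},
    {"type": "printer", "building": "B", "floor": 1},
    {"type": "scanner", "building": "A", "floor": 2},
]

def search(**attrs):
    """Return every entity whose attributes match all given pairs."""
    return [e for e in directory
            if all(e.get(k) == v for k, v in attrs.items())]

matches = search(type="printer", building="A")
print(matches)  # [{'type': 'printer', 'building': 'A', 'floor': 2}]
```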
Name linking types and their descriptions
Hard link:
Name is resolved by following a path in a naming graph
Soft link:
A node contains the name of another node, so we have to:
1. Resolve the first node’s name
2. Get the name of the other node
3. Continue resolving with the other node’s name
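The distinction mirrors hard and symbolic links in a POSIX file system: a hard link is just another path to the same node, while a symbolic link stores another name that must itself be resolved. A minimal Python sketch (file names are illustrative):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "data.txt")
    with open(target, "w") as f:
        f.write("hello")

    # Hard link: another path in the naming graph to the SAME node;
    # resolution just follows the path.
    hard = os.path.join(d, "hard.txt")
    os.link(target, hard)

    # Soft link: a node whose content is the NAME of another node.
    soft = os.path.join(d, "soft.txt")
    os.symlink(target, soft)

    # Resolving the soft link is the three-step lookup above:
    # resolve "soft.txt", read the stored name, continue with it.
    stored_name = os.readlink(soft)
    with open(stored_name) as f:
        content = f.read()
    print(content)  # hello
```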
Difference between name-space levels
Global
- Directory node level: High
- Directory node administration: Jointly managed by different administrations
Administration
- Directory node level: Mid
- Directory node administration: Grouped so that each group can be assigned to a separate administration
Managerial
- Directory node level: Low
- Directory node administration: Within a single administration