Essay Flashcards

1
Q

Difference between parallel and distributed computing:

A

Parallel
- Number of computers: 1
- Memory type: shared or distributed
- Communication: bus
- Goals: improve performance

Distributed
- Number of computers: multiple
- Memory type: distributed
- Communication: message passing
- Goals: improve scalability, fault tolerance, and resource sharing

2
Q

Difference between shared-memory and distributed-memory parallel computing:

A

Shared
- Programming: through threading
- Communication: Through shared pool of memory
- Pros: Easier to program
- Cons:
* Performance may suffer when memory is physically distant from the processor accessing it (NUMA effects)
* Limited scalability

Distributed
- Programming: through processes
- Communication: message passing
- Pros: Tighter control over message passing
- Cons: Harder to program
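
A minimal Python sketch of the two models, assuming only the standard threading and multiprocessing modules (the worker functions and counts are made up for illustration):

import threading
import multiprocessing

counter = 0
lock = threading.Lock()

def shared_memory_worker(n):
    # Shared memory: threads read and write one pool of memory;
    # a lock guards the shared counter against races.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def message_passing_worker(n, queue):
    # Distributed memory: processes have separate address spaces
    # and communicate only by message passing (here, a queue).
    queue.put(sum(range(n)))

if __name__ == "__main__":
    threads = [threading.Thread(target=shared_memory_worker, args=(100_000,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("shared counter:", counter)        # 200000

    q = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=message_passing_worker, args=(100_000, q))
             for _ in range(2)]
    for p in procs:
        p.start()
    results = [q.get() for _ in procs]       # receive the two messages
    for p in procs:
        p.join()
    print("message results:", results)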

3
Q

Distributed transparency types and their descriptions:

A

Access: hides differences in data representation and in how an object is accessed
Location: hides where an object is located
Relocation: hides that an object may be moved to another location while in use
Migration: hides that an object may move to another location
Replication: hides that an object is replicated
Concurrency: hides that an object may be shared by several independent users
Failure: hides the failure and recovery of an object

4
Q

Dependability requirements and their descriptions:

A

Availability: Readiness for usage
Reliability: Continuity of use
Safety: Low probability of catastrophes
Maintainability: Easy to repair

5
Q

Difference between failure, error, and fault + examples:

A

Failure: A component fails to live up to its specifications. Example: a crash
Error: The part of a system's state that may lead to a failure. Example: a bug
Fault: The cause of an error. Example: a sloppy programmer

6
Q

Difference between fault prevention, tolerance, removal, and forecasting + examples:

A

Prevention: Avoid the occurrence of faults. Example: don't hire sloppy programmers
Tolerance: Mask the occurrence of faults. Example: have two programmers work on the same component
Removal: Reduce the number of faults. Example: fire sloppy programmers
Forecasting: Estimate the present and future occurrence of faults. Example: estimate a recruiter's chance of hiring a sloppy programmer

7
Q

Grid computing layers and what they do:

A

Fabric: Provides interface to local resources
Connectivity: Has communication protocols
Resource: Manages a single resource
Collective: Handles access to multiple resources
Application: Contains the actual grid applications, operating within a virtual organization

8
Q

Exploratory decomposition vs Speculative decomposition

A

Exploratory decomposition:
* The output from a branch is unknown
* The parallel program may do more, less, or the same amount of work as the serial program

Speculative decomposition:
* The input at a branch is unknown
* The parallel program may do more or the same amount of work as the serial program
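
A minimal Python sketch of speculative decomposition, assuming the standard concurrent.futures module (expensive_condition, branch_a, and branch_b are made-up placeholders): both branches start before their input is known, which is exactly why the parallel version may do wasted work.

from concurrent.futures import ThreadPoolExecutor

def expensive_condition():
    # Stand-in for the branch input that is slow to compute.
    return sum(i * i for i in range(10**6)) % 2 == 0

def branch_a():
    return "A"

def branch_b():
    return "B"

with ThreadPoolExecutor(max_workers=2) as pool:
    fa = pool.submit(branch_a)    # start both branches speculatively...
    fb = pool.submit(branch_b)
    cond = expensive_condition()  # ...while the branch input is still being computed
    result = fa.result() if cond else fb.result()  # keep one; the other was extra work
print("took branch:", result)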

9
Q

Static task generation strategy vs Dynamic task generation strategy

A

Static task generation strategy: Concurrent tasks and the task graph are known a priori
Dynamic task generation strategy: Concurrent tasks and the task graph are computed on the fly

10
Q

Static interaction vs Dynamic interaction

A

Static interaction:
* Timing and interacting tasks are known a priori
* Easier to code

Dynamic interaction:
* Timing and interacting tasks are not known a priori
* Harder to code

11
Q

Static mapping vs Dynamic mapping

A

Static mapping:
* Finding an optimal mapping is NP-complete
* Tasks mapped to processes a priori
* Requires knowledge of task sizes

Dynamic mapping:
* Tasks generated at runtime
* Tasks mapped to processes at runtime
* Task sizes may be unknown beforehand

12
Q

Centralized dynamic mapping vs Distributed dynamic mapping

A

Centralized dynamic mapping:
* Processors designated as masters or slaves
* Slaves request tasks from master
* Higher chance of bottleneck

Distributed dynamic mapping:
* Processors made equal
* Processors send and receive work from others
* Lower chance of bottleneck
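
A minimal Python sketch of the centralized (master/slave) variant, assuming the standard multiprocessing module; the task itself (squaring an integer) is a made-up stand-in for real work:

import multiprocessing

def slave(tasks, results):
    # Each slave repeatedly requests work from the master's queue.
    while True:
        task = tasks.get()
        if task is None:                   # sentinel: no work left
            break
        results.put((task, task * task))   # stand-in for real computation

if __name__ == "__main__":
    tasks, results = multiprocessing.Queue(), multiprocessing.Queue()
    slaves = [multiprocessing.Process(target=slave, args=(tasks, results))
              for _ in range(4)]
    for s in slaves:
        s.start()
    n_tasks = 20
    for t in range(n_tasks):               # the master hands out tasks on demand
        tasks.put(t)
    for _ in slaves:                       # one sentinel per slave
        tasks.put(None)
    answers = [results.get() for _ in range(n_tasks)]
    for s in slaves:
        s.join()
    print(sorted(answers))

The single master queue is the potential bottleneck the card mentions: every slave contends for it.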

13
Q

Type of naming systems and their descriptions

A

Flat naming: Random string
Structured naming: Human-readable
Attribute-based naming: (attribute, value) pairs
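
A minimal Python sketch of the three styles (the concrete names are invented):

import uuid

flat_name = str(uuid.uuid4())                  # flat: an unstructured identifier
structured_name = "/nl/vu/cs/fileserver"       # structured: a human-readable path
attribute_name = {"country": "NL",             # attribute-based: (attribute, value) pairs
                  "organization": "VU",
                  "common-name": "fileserver"}
print(flat_name, structured_name, attribute_name, sep="\n")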

14
Q

Name linking types and their descriptions

A

Hard link:
Name is resolved by following a path in a naming graph

Soft link:
A node stores the name of another node, so we have to:
1. Resolve the first node’s name
2. Get the name of the other node
3. Continue resolving with the other node’s name
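
The same distinction appears in POSIX filesystems, where the naming graph is the directory tree. A minimal Python sketch, assuming a POSIX system where os.link and os.symlink are permitted:

import os
import tempfile

base = tempfile.mkdtemp()
target = os.path.join(base, "data.txt")
with open(target, "w") as f:
    f.write("hello")

hard = os.path.join(base, "hard.txt")
os.link(target, hard)       # hard link: a second path resolving to the same node
soft = os.path.join(base, "soft.txt")
os.symlink(target, soft)    # soft link: a node that merely stores another name

print(os.readlink(soft))                # step 2: the stored name of the other node
print(os.path.realpath(soft))           # steps 1-3: full resolution to the target
print(os.path.samefile(hard, target))   # True: both names reach the same node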

15
Q

Difference between name-space levels

A

Global
- Directory node level: High
- Directory node administration: Jointly managed by different administrations

Administration
- Directory node level: Mid
- Directory node administration: Grouped so that groups of nodes can be assigned to separate administrations

Managerial
- Directory node level: Low
- Directory node administration: Within a single administration

16
Q

LDAP essence:

A

Directory information base (DIB): Collection of all directory entries
Relative distinguished name (RDN): Each record is uniquely named by a sequence of naming attributes
Directory information tree (DIT): The naming graph of the directory service

17
Q

Application layers and what they contain

A

Interface layer: Units for interfacing with users or external applications
Processing layer: The functions of the application
Data layer: The data being manipulated through the components

18
Q

RESTful operations and their descriptions

A

PUT: Create resource
GET: Retrieve state of resource
DELETE: Delete resource
POST: Modify resource by transferring new state
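
A minimal sketch of the four operations, assuming the third-party requests library and a purely hypothetical resource collection at https://api.example.com/notes:

import requests

BASE = "https://api.example.com/notes"   # hypothetical endpoint, for illustration only

requests.put(f"{BASE}/1", json={"text": "draft"})    # PUT: create the resource
state = requests.get(f"{BASE}/1").json()             # GET: retrieve its state
requests.post(f"{BASE}/1", json={"text": "final"})   # POST: transfer a new state
requests.delete(f"{BASE}/1")                         # DELETE: remove the resource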

19
Q

Coordination between temporally and referentially coupled

A

Temporally and Referentially coupled: Direct
Temporally coupled and Referentially decoupled: Event-based
Temporally decoupled and Referentially coupled: Mailbox
Temporally and referentially decoupled: Shared data space

20
Q

Multi-tiered centralized system architecture and their descriptions

A

Single-tiered: Dumb terminal/mainframe configuration
Two-tiered: Client/single server configuration
Three-tiered: Each layer on a separate machine

21
Q

Structured P2P vs Unstructured P2P + example:

A

Structured P2P
* Data represented as (key, value) pairs
* Keys used as indexes in a deterministic overlay topology
* Example: Chord (or a hypercube-based overlay)

Unstructured P2P
* Each node stores a list of neighbors, forming what looks like a random graph
* Searching is done by flooding or random walks
* Example: Gnutella
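
A minimal Python sketch of the structured idea behind Chord: node and key identifiers share one hash space, and a key is stored at its successor, the first node identifier at or after the key identifier on the ring. Real Chord adds finger tables for O(log n) lookups; this sketch only shows the key-to-node mapping.

import hashlib
from bisect import bisect_left

M = 2 ** 16  # size of the identifier space

def h(name: str) -> int:
    # Hash names (of nodes and keys alike) into the identifier space.
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

nodes = sorted(h(f"node-{i}") for i in range(8))  # overlay node IDs on the ring

def successor(key: str) -> int:
    # First node clockwise from the key's position, wrapping around.
    i = bisect_left(nodes, h(key))
    return nodes[i % len(nodes)]

print(successor("some-file.txt"))  # the node responsible for this key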

22
Q

Cloud computing layers and their jobs

A

Hardware: Processors, routers, power supply, and cooling systems
Infrastructure: Deploys virtualization techniques
Platform: Provides higher-level abstractions, e.g. for storage
Application: Actual applications, such as office suites

23
Q

Consensus approaches for appending a block to a blockchain and their descriptions

A

Centralized solution: An entity decides which validator can append a block
Distributed solution: A group of servers decide which validator can append a block
Decentralized solution: Participants elect a leader to append a block

24
Q

Sources of overhead and their descriptions

A

Interprocess interactions: Processors working on a non-trivial parallel problem need to communicate with each other
Idling: Processors sit idle, e.g. due to serial components or load imbalance
Excess computation: Computation performed by the parallel version but not by the serial version

25
Q

Execution time metrics and their descriptions

A

Serial runtime (T_s): Time elapsed between the beginning and end of execution on a sequential computer
Parallel runtime (T_p): Time from the moment the first processor starts to the moment the last processor finishes
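
These two runtimes combine into the standard derived metrics (speedup S, total overhead T_o, and efficiency E are not on this card, but follow directly; p is the number of processors):

S = T_s / T_p
T_o = p * T_p - T_s
E = S / p = T_s / (p * T_p)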