Concurrent and distributed systems Flashcards

1
Q

local vs. global knowledge

A

a process has only local knowledge (e.g. its identity, its state, its neighbors); global knowledge is not directly available and must be acquired by running distributed algorithms

2
Q

synchronous

A

upper bound on propagation delay of messages

3
Q

scalability

A
  • Performance should not be impaired by the size of the system
  • good scalability: space and time required are below O(log n), where n = number of processes
  • bad scalability: more than O(n)
4
Q

sharing information between processes

A
  • shared memory
  • message passing

5
Q

axiom

A

An axiom is a statement that is taken to be true, to serve as a premise or starting point for further reasoning and arguments.

6
Q

state reading model

A

(shared memory model)

each process can read the states of its neighbors

7
Q

link register model

A

(shared memory model)

each process can read from and write to adjacent registers. The entire local state is not shared

8
Q

weak vs strong model

A

a stronger model makes more assumptions than a weaker one; e.g. a FIFO channel is a stronger model than a non-FIFO channel

9
Q

algorithm complexity

A
  • space
    • in terms of n, the size of the network
  • time
    • the max. time (number of steps) needed to complete the execution of the algorithm
  • message
    • how many messages are exchanged to complete the algorithm
    • problem: messages may be of arbitrary size
  • bit
    • how many bits are transmitted when the algorithm runs
10
Q

a distributed system

A

a network of processes (abstract view)

11
Q

Criteria for a system to be ‘distributed’

A

– multiple processes, each with an independent thread of control
– interprocess communication
– disjoint address spaces
– common goal, interaction to reach that goal

12
Q

GOAL OF A DISTRIBUTED SYSTEM

A

the computers coordinate their activities and share hardware, software, and data, so that users perceive the system as a single, integrated computing service with a well-defined goal

13
Q

inter-process communication

A
  • involves various layers of networking
14
Q

motivation for distributed systems

A
  • geographically distributed environment
  • speedup
  • resource sharing
  • fault tolerance
15
Q

knowledge of a process

A
  • has only local knowledge!
  • e.g. its identity, its state, its neighbors
  • algorithms needed to bring “global” knowledge
16
Q

typical issues in distributed systems

A
  • knowledge of a process
  • network topology
  • degree of synchronization (local clocks drift)
  • failure (crash, omission, byzantine)
  • scalability
    • good: space & time below O(log n), n=#processes
    • bad: more than O(n)
17
Q

synchronous means

A
  • upper bound on propagation delay of messages exists
  • FIFO channels

18
Q

asynchronous means

A
  • no bound on propagation delay of messages exists

=> disregards time, no ordering of events

19
Q

common subproblems in distributed systems

A
  1. Leader election
  2. Mutual exclusion
  3. Time synchronization
    • local clocks, consistent time across system
  4. Global state
    • non-trivial: getting all local states at a given time
  5. Multicasting - broadcasting
    • sending a message to several processes
    • with efficiency, scalability, reliability
  6. Replica management
    • fault tolerance + system availability: replace a crashed server with an exact replica
20
Q

two models for communication

A
  • message passing
  • shared memory

21
Q

shared-memory model

A
  • address spaces of processes overlap
  • 2 variations:
    • state reading model = each process can read the state of its neighbors
    • link register model = each process can read from and write to adjacent registers. The entire local state is not shared
22
Q

Knowledge-based communication

A
  • relies on making deductions from the absence of signals or actions
23
Q

parallel vs distributed

A

In parallel systems, the primary issues are:
- speed-up and increased data-handling capability

In distributed systems, the primary issues are:
- fault tolerance, synchronization, scalability, etc.

24
Q

guarded action

A

if G then A
G is called a guard
A is a guarded action that occurs only when condition G holds
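The `if G then A` scheme can be sketched as a loop that repeatedly fires an enabled action until no guard holds (a minimal serial sketch; `run_guarded_actions`, the `state` dict, and the countdown example are illustrative assumptions, not from the course):

```python
def run_guarded_actions(state, guarded_actions):
    """Execute guarded actions until no guard is enabled.

    guarded_actions: list of (guard, action) pairs, where guard(state)
    returns a bool and action(state) mutates the state dict.
    """
    while True:
        enabled = [(g, a) for g, a in guarded_actions if g(state)]
        if not enabled:
            return state
        _, action = enabled[0]   # pick the first enabled action
        action(state)

# Example: count x down to 0 under the guard "x > 0".
state = {"x": 3, "steps": 0}
rules = [(lambda s: s["x"] > 0,
          lambda s: (s.__setitem__("x", s["x"] - 1),
                     s.__setitem__("steps", s["steps"] + 1)))]
run_guarded_actions(state, rules)
```

Picking `enabled[0]` makes this a deterministic scheduler; a real distributed execution would pick nondeterministically, which is where the fairness notions on the following cards come in.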

25
atomicity
atomic = all or nothing | atomic action = indivisible action
26
fairness
fairness defines the choices or restrictions on the scheduling of actions
27
unconditional fairness
a scheduler is unconditionally fair when it eventually gives every statement a chance to execute, without checking guard eligibility
28
weak fairness
A scheduler is weakly fair when it eventually executes every guarded action whose guard becomes true and remains true thereafter
29
strong fairness
A scheduler is strongly fair, when it eventually executes every guarded action whose guard is true infinitely often.
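One simple way to obtain weak fairness is round-robin selection: a guard that becomes true and stays true is eventually reached in the rotation. This is an illustrative sketch under that assumption (names are made up):

```python
from itertools import cycle

def weakly_fair_run(state, guarded_actions, max_steps=100):
    """Round-robin over the actions: every guard that stays true is
    eventually served, so this schedule is weakly fair."""
    order = cycle(range(len(guarded_actions)))
    for _ in range(max_steps):
        if not any(g(state) for g, _ in guarded_actions):
            break                       # no action enabled: terminate
        i = next(order)
        g, a = guarded_actions[i]
        if g(state):
            a(state)
    return state

# Two "processes" each want to take one step; round-robin serves both.
state = {"p": 0, "q": 0}
rules = [
    (lambda s: s["p"] < 1, lambda s: s.__setitem__("p", s["p"] + 1)),
    (lambda s: s["q"] < 1, lambda s: s.__setitem__("q", s["q"] + 1)),
]
weakly_fair_run(state, rules)
```

Note that round-robin is not strongly fair: a guard that is true only intermittently can always happen to be false when its turn comes around.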
30
unfair scheduler
see sl. 2-21
31
unconditionally fair scheduler
``` will eventually give every statement a chance to execute without checking their eligibility (Example: process scheduler in a multi programmed OS.) ```
32
time and clock in distributed systems
– Synchronization of physical clocks | – Sequentiality / causality of events (logical time)
33
concurrency
absence of causal order
34
distributed mutual exclusion
algorithms based either on message passing or on the shared-memory model
35
Requirements for an algorithm ensuring distributed mutual exclusion
- ME1: mutual exclusion -> at most 1 process in the CS - ME2: freedom from deadlock -> at least 1 process is eligible to enter the CS - ME3: progress -> every requesting process eventually enters the CS
36
distributed mutex algorithms
(- client-server algorithm) - Lamport's algorithm - Ricart & Agrawala's algorithm - Maekawa's algorithm
37
token-passing algorithms
- token ring (messages proportional to n) - Raymond's algorithm (messages proportional to log n, using a spanning tree)
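The O(n) cost of the token ring can be seen by counting token hops until the requester is reached (a toy sketch; `token_ring_round` is an illustrative name):

```python
def token_ring_round(n, requester, holder=0):
    """Forward the token around a ring of n processes until the
    requesting process receives it; returns the number of
    token-passing messages, which is at most n - 1, i.e. O(n)."""
    messages = 0
    while holder != requester:
        holder = (holder + 1) % n   # hand the token to the next process
        messages += 1
    return messages
```

Worst case: the requester sits just behind the current holder, so the token travels n - 1 hops.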
38
requirements for a distributed mutual exclusion algorithm
- M1: mutual exclusion - M2: freedom from deadlock - M3: progress
39
logical clock vs. vector clock?
logical clocks do not detect causal ordering. Vector clocks do. What is causal ordering?
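A minimal vector-clock sketch showing how causality is detected by component-wise comparison (all function names are illustrative assumptions):

```python
def vc_send(clock, i):
    """Before sending, process i increments its own entry."""
    clock = clock.copy()
    clock[i] += 1
    return clock

def vc_receive(local, received, i):
    """On receive: component-wise max, then increment own entry."""
    merged = [max(a, b) for a, b in zip(local, received)]
    merged[i] += 1
    return merged

def happened_before(u, v):
    """u -> v  iff  u <= v component-wise and u != v."""
    return all(a <= b for a, b in zip(u, v)) and u != v

def concurrent(u, v):
    """Neither clock dominates the other: the events are concurrent."""
    return not happened_before(u, v) and not happened_before(v, u)

# P0 sends to P1; the send causally precedes the receive.
send = vc_send([0, 0], 0)           # P0's clock becomes [1, 0]
recv = vc_receive([0, 0], send, 1)  # P1's clock becomes [1, 1]
```

A plain Lamport clock would also give the send a smaller timestamp than the receive, but it cannot distinguish that case from two causally unrelated events.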
40
causal ordering
event a causally precedes event b (a -> b) under Lamport's happened-before relation: a and b occur in the same process with a first, or a is the send and b the receive of the same message, or the relation follows by transitivity
41
Scalability
An implementation of a distributed system is considered scalable when its performance is not impaired regardless of the final scale of the system. Scalability suffers when the resource requirement grows alarmingly with the scale of the system.
42
NP-hard
Some problems L are so hard that although condition (2) (every language in NP reduces to L in polynomial time) is proven, condition (1) (that L is in NP) is not. If so, we call L NP-hard. L is very likely to require exponential time or worse. NP-hard =~ intractable
43
Byzantine fault
Any fault presenting different symptoms to different observers
44
associative memory
Memory that is addressed by content rather than by address; content addressable is often used synonymously. An Associative Memory permits its users to specify part of a pattern or key and retrieve the values associated with that pattern.
45
Linda
a coordination language, embedded as a dialect into a host language, that supports parallel programming (e.g. producer-consumer, master-worker paradigms)
46
Linda programming model
a memory model based on a virtual, associative, logically-shared memory called tuple space
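A Linda-style tuple space can be sketched with its classic `out`/`in`/`rd` operations (a toy, thread-safe Python sketch; `eval` is omitted, and `None` stands in for a formal/wildcard field in a template):

```python
import threading

class TupleSpace:
    """Minimal sketch of a Linda tuple space: out() deposits a tuple,
    in_() removes a matching tuple (blocking until one exists),
    rd() reads a matching tuple without removing it."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()     # wake any blocked in_/rd callers

    def _match(self, template):
        for t in self._tuples:
            if len(t) == len(template) and all(
                    p is None or p == v for p, v in zip(template, t)):
                return t
        return None

    def in_(self, template):
        with self._cond:
            while (t := self._match(template)) is None:
                self._cond.wait()
            self._tuples.remove(t)
            return t

    def rd(self, template):
        with self._cond:
            while (t := self._match(template)) is None:
                self._cond.wait()
            return t

ts = TupleSpace()
ts.out(("task", 1))
ts.out(("task", 2))
first = ts.in_(("task", None))   # removes the first matching tuple
```

Unlike a message, a tuple deposited with `out` persists in the space until some process withdraws it, which is exactly the contrast drawn on the next card.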
47
(Linda vs) message passing model
in the latter, data/messages have no long-term existence (unlike tuples in a tuple space)!
48
Linda: static vs dynamic load balancing
cf
49
result parallelism
In sum, each worker is assigned to produce one piece of the result, and they all work in parallel up to the natural restrictions imposed by the problem. This is the result-parallel approach. Result parallelism focuses on the shape of the finished product (the program's result) -> parallel computation of all elements in a data structure
50
specialist parallelism
In sum, each worker is assigned to perform one specified kind of work, and they all work in parallel up to the natural restrictions imposed by the problem. This is the specialist-parallel approach. Specialist parallelism focuses on the makeup of the work crew -> specialist agents solve problems cooperatively
51
agenda parallelism
In sum, each worker is assigned to help out with the current item on the agenda, and they all work in parallel up to the natural restrictions imposed by the problem. This is the agenda-parallel approach. Agenda parallelism focuses on the list of tasks to be performed -> specifies an agenda of tasks for parallel execution
52
why DS?
cf
53
issues in DS?
cf
54
what is a DS?
cf
55
goal of a DS?
cf
56
common subproblems
cf
57
two communication models
cf
58
properties of message passing model
cf
59
3 axiom spec for a class of reliable channels
1. Every message sent by a sender is received by the receiver, and every message received by a receiver is sent by some sender in the system. 2. Each message has an arbitrary but finite, nonzero propagation delay. 3. Each channel is a FIFO channel. Thus, if x and y are two messages sent by one process P to another process Q and x is sent before y, then x is also received by Q before y.
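The three axioms can be mirrored in a toy channel model (an illustrative sketch: a queue gives loss-free, order-preserving delivery; the finite nonzero delay of axiom 2 is left abstract):

```python
from collections import deque

class FIFOChannel:
    """Sketch of a reliable FIFO channel between one sender and one
    receiver: every message sent is eventually delivered (axiom 1),
    in sending order (axiom 3)."""
    def __init__(self):
        self._in_transit = deque()

    def send(self, msg):
        self._in_transit.append(msg)    # never lost, order preserved

    def deliver(self):
        """Return the oldest undelivered message, or None if empty."""
        return self._in_transit.popleft() if self._in_transit else None

ch = FIFOChannel()
ch.send("x")
ch.send("y")
```

Since "x" was sent before "y", the FIFO axiom forces "x" to be delivered first.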
60
some behaviors of synchronous systems
cf
61
lock-step synchrony
A system of synchronous processes takes actions in lockstep synchrony, that is, in each step, all processes execute an eligible action.
62
two types of shared memory model
cf
63
implement shared-memory using message passing
cf? | 3.5.4
64
total order multicast
cf
65
multicast vs broadcast?
bc: to all processes; mc: to a group of processes
66
important aspects of distributed programming
- non-determinism - atomicity - scheduling / fairness
67
fairness types of scheduler
- unconditionally fair scheduler - weakly fair scheduler - strongly fair scheduler - unfair scheduler: allows all possible schedules
68
representing distributed algorithms?
cf
69
two main time related issues in DS
cf
70
how are physical clocks synchronized?
- Berkeley algorithm (internal) - Lamport and Melliar-Smith's algorithm (internal) - Cristian's method (external) - NTP (external)
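Cristian's method (external synchronization) estimates the server's current time from one round trip, assuming roughly symmetric network delays. A sketch with made-up numbers:

```python
def cristian_estimate(t0, t1, server_time):
    """Cristian's method: the client records send time t0 and receive
    time t1 of a request to the time server. Assuming the reply took
    about half the round trip to arrive, the server's reported time is
    (t1 - t0) / 2 old when it reaches the client."""
    round_trip = t1 - t0
    return server_time + round_trip / 2

# Client asks at t0 = 100 ms, the answer arrives at t1 = 140 ms,
# and the server reported 570 ms: estimate = 570 + 40/2 = 590 ms.
estimate = cristian_estimate(100, 140, 570)
```

The accuracy bound is half the round trip minus the minimum one-way delay, which is why asymmetric delays (one of the clock-reading error sources on the next card) hurt this method.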
71
sequential and concurrent events
cf
72
concurrency?
absence of causal order
73
logical clock vs vector clock?
logical clock: no causality detection; vector clock: supports causality detection, but with poor scalability
74
Lamport timestamps
cf
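A Lamport-clock sketch (illustrative class and method names) showing the two rules — increment on each local/send event, and max-plus-one on receive:

```python
class LamportClock:
    """Sketch of a Lamport logical clock. Guarantees that
    a -> b implies C(a) < C(b), but not the converse."""
    def __init__(self):
        self.time = 0

    def tick(self):
        """Local event or send: advance the clock by one."""
        self.time += 1
        return self.time

    def receive(self, msg_time):
        """Receive a message stamped with the sender's clock."""
        self.time = max(self.time, msg_time) + 1
        return self.time

p, q = LamportClock(), LamportClock()
t_send = p.tick()           # P: send event, clock becomes 1
t_recv = q.receive(t_send)  # Q: receive, clock = max(0, 1) + 1 = 2
```

The send gets a smaller timestamp than its receive, consistent with causal order; comparing timestamps of unrelated events tells you nothing, which is the gap vector clocks close.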
75
sources of clock reading errors
- propagation delay | - processing overhead
76
NTP: 3 mechanisms
- multicasting - procedure call - P2P communication
77
Berkeley algorithm
cf
78
Lamport and Melliar-Smiths algorithm
cf
79
Cristian's method
cf
80
NTP
cf
81
Bakery algorithm
cf
82
Token ring mutex
cf
83
Raymond's algorithm
cf
84
Lamport's algorithm
cf
85
Ricart & Agrawala's algorithm
cf
86
distributed mutex: (message-passing model:) 1. client-server algorithm 2. decentralized algorithm 3. token-passing algorithm; 4. algorithms based on shared-memory model
1. ??? 2. Lamport's algorithm 2. Ricart & Agrawala's algorithm 2. Maekawa's algorithm 3. Raymond's algorithm 4. Peterson's algorithm 4. Bakery algorithm