Lecture 8a Flashcards

1
Q

Symmetric Multiprocessor Issues

A

Tradeoff between load balancing and processor affinity

2
Q

Processor Affinity

A
  • When a process runs on a processor, some of its data is brought into that processor’s cache
  • If the process migrates to another processor, the cache of the new processor has to be repopulated and the cache of the old processor has to be invalidated
  • Affinity: try to keep a process on the processor it last ran on, to avoid these costs (see the sketch below)
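
A minimal C sketch of hard affinity on Linux (the CPU number is arbitrary): sched_setaffinity pins the calling process to CPU 0 so its cached data stays warm.

    /* Pin the calling process to CPU 0 (Linux-specific; illustrative). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);                  /* start with an empty CPU mask */
        CPU_SET(0, &set);                /* allow only CPU 0 */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {   /* 0 = this process */
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPU 0\n");
        return 0;
    }
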
3
Q

Load Balancing

A
  • Ensure load is evenly distributed among processors
  • Push migration: a dedicated task periodically checks the load on each processor and evenly redistributes it by moving tasks
  • Pull migration: an idle processor pulls a waiting task from a busy processor (see the toy sketch below)
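
A self-contained toy sketch of pull migration in C (not the lecture’s code; each CPU’s load is reduced to a task count): an idle CPU finds the busiest CPU and steals one waiting task from it.

    #include <stdio.h>

    #define NCPU 4

    /* Idle CPU `idle` pulls one waiting task from the most loaded CPU. */
    static void pull_migrate(int load[NCPU], int idle) {
        int busiest = -1;
        for (int i = 0; i < NCPU; i++)
            if (i != idle && (busiest < 0 || load[i] > load[busiest]))
                busiest = i;
        if (busiest >= 0 && load[busiest] > 1) {   /* leave the running task behind */
            load[busiest]--;
            load[idle]++;
        }
    }

    int main(void) {
        int load[NCPU] = {5, 0, 2, 3};             /* CPU 1 is idle */
        pull_migrate(load, 1);                     /* CPU 1 steals a task from CPU 0 */
        for (int i = 0; i < NCPU; i++)
            printf("cpu%d: %d tasks\n", i, load[i]);
        return 0;
    }
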
4
Q

Hard Real-time Scheduling

A
  • Each task must be finished within its deadline
  • Needs schedulers designed to guarantee that deadlines are met, e.g. earliest deadline first (EDF; see the sketch below)
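
A minimal C sketch of the EDF selection rule (the task set and deadlines are made up): among the ready tasks, run the one whose deadline is soonest.

    #include <stdio.h>

    struct task { const char *name; int deadline_ms; };   /* deadline relative to now */

    /* Return the ready task with the earliest deadline. */
    static const struct task *edf_pick(const struct task *t, int n) {
        const struct task *best = &t[0];
        for (int i = 1; i < n; i++)
            if (t[i].deadline_ms < best->deadline_ms)
                best = &t[i];
        return best;
    }

    int main(void) {
        struct task ready[] = { {"A", 50}, {"B", 20}, {"C", 80} };
        printf("run %s first\n", edf_pick(ready, 3)->name);   /* B: deadline 20 ms */
        return 0;
    }
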

5
Q

Soft Real-time Systems

A
  • No strict deadlines, but tasks should be executed “quickly”
  • Handled with priority-based scheduling with preemption (see the sketch below)
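
One way a process asks for this on Linux (a sketch, not from the lecture; the priority value 50 is arbitrary and the call usually requires root): sched_setscheduler puts the caller under the preemptive fixed-priority SCHED_FIFO policy, ahead of normal time-shared tasks.

    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        struct sched_param p;
        p.sched_priority = 50;                             /* 1..99 for SCHED_FIFO */
        if (sched_setscheduler(0, SCHED_FIFO, &p) != 0) {  /* 0 = this process */
            perror("sched_setscheduler");
            return 1;
        }
        printf("running with real-time priority 50\n");
        return 0;
    }
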

6
Q

Linux: Completely Fair Scheduler (CFS)

A
  • Used for scheduling lower-priority (non-real-time) processes
  • Always selects the task with the lowest virtual run time (see the sketch below)
  • O(log N) with a red-black tree, O(1) when the leftmost node is cached
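
A toy C sketch of the selection rule (an array scan stands in for taking the leftmost node of the red-black tree): the task with the smallest virtual run time runs next.

    #include <stdio.h>

    struct task { const char *name; double vruntime; };

    /* Stand-in for picking the leftmost (minimum-vruntime) node of the tree. */
    static const struct task *pick_next(const struct task *rq, int n) {
        const struct task *min = &rq[0];
        for (int i = 1; i < n; i++)
            if (rq[i].vruntime < min->vruntime)
                min = &rq[i];
        return min;
    }

    int main(void) {
        struct task rq[] = { {"A", 120.0}, {"B", 95.5}, {"C", 130.2} };
        printf("next task: %s\n", pick_next(rq, 3)->name);   /* B: lowest vruntime */
        return 0;
    }
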
7
Q

CFS Target Latency

A
  • Time during which every runnable task should run at least once
  • Each process gets a time portion based on its nice value (from -20 to 19, lower nice = higher priority)
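
A rough C sketch of how the target latency gets divided (the constants are simplified, not the kernel’s exact weight table): each runnable task’s slice is proportional to its nice-derived weight, so a lower nice value means a larger slice.

    #include <stdio.h>

    /* Roughly: nice 0 maps to weight ~1024, and each nice step scales it by ~1.25x. */
    static double nice_to_weight(int nice) {
        double w = 1024.0;
        int steps = nice > 0 ? nice : -nice;
        for (int i = 0; i < steps; i++)
            w = (nice > 0) ? w / 1.25 : w * 1.25;
        return w;
    }

    int main(void) {
        const double target_latency_ms = 20.0;     /* every task runs within this window */
        int nice[] = {0, 5, -5};                   /* three runnable tasks */
        double w[3], total = 0.0;
        for (int i = 0; i < 3; i++) { w[i] = nice_to_weight(nice[i]); total += w[i]; }
        for (int i = 0; i < 3; i++)
            printf("nice %3d -> slice %4.1f ms\n", nice[i],
                   target_latency_ms * w[i] / total);
        return 0;
    }
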
8
Q

CFS Virtual Run Time

A
  • Used to automatically determine priority
  • Virtual run time = physical run time plus a decay (lower-priority processes accrue virtual run time faster, i.e. have a higher decay)
  • Priority is inversely proportional to virtual run time (see the sketch below)
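
A simplified C sketch of the weighting idea (not the kernel’s fixed-point arithmetic; the weights are illustrative): virtual run time advances faster than wall-clock time for low-weight (low-priority) tasks, so they look “older” to the scheduler and get picked less often.

    #include <stdio.h>

    /* Virtual time charged for `ran_ms` of physical CPU time at a given weight. */
    static double vruntime_delta(double ran_ms, double weight) {
        const double NICE0_WEIGHT = 1024.0;        /* reference weight at nice 0 */
        return ran_ms * NICE0_WEIGHT / weight;     /* lower weight => bigger charge */
    }

    int main(void) {
        /* Both tasks ran 10 ms of physical time. */
        printf("high-priority task (weight 1024): +%.1f ms vruntime\n",
               vruntime_delta(10.0, 1024.0));
        printf("low-priority task  (weight  110): +%.1f ms vruntime\n",
               vruntime_delta(10.0, 110.0));
        return 0;
    }
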
9
Q

Deterministic Modeling

A
  • Run all candidate algorithms on one predetermined workload and compare the resulting performance metrics
  • Not general: the results hold only for that exact workload (see the sketch below)
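
Deterministic modeling in miniature (the burst times are just an example workload): compute the average waiting time of FCFS versus shortest-job-first on one fixed set of CPU bursts. The numbers say nothing about other workloads.

    #include <stdio.h>

    /* Average waiting time when jobs run to completion in the given order. */
    static double avg_wait(const int burst[], int n) {
        int wait = 0, sum = 0;
        for (int i = 0; i < n; i++) { sum += wait; wait += burst[i]; }
        return (double)sum / n;
    }

    int main(void) {
        int fcfs[] = {10, 29, 3, 7, 12};           /* arrival order */
        int sjf[]  = {3, 7, 10, 12, 29};           /* same jobs, shortest first */
        printf("FCFS avg waiting time: %.1f\n", avg_wait(fcfs, 5));
        printf("SJF  avg waiting time: %.1f\n", avg_wait(sjf, 5));
        return 0;
    }
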

10
Q

Queuing Models

A
  • Use queuing theory to analyze algorithms
  • Many (unrealistic) assumptions to facilitate analysis
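
One relation this kind of analysis typically rests on is Little’s formula, n = lambda × W (average queue length = arrival rate × average waiting time); it is not stated on this card, and the numbers below are made up.

    #include <stdio.h>

    int main(void) {
        double lambda = 7.0;   /* processes arriving per second */
        double W = 2.0;        /* average time a process waits, in seconds */
        printf("average queue length: %.1f processes\n", lambda * W);
        return 0;
    }
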

11
Q

Simulation

A
  • Build a simulator that models systems
  • Use a synthetic workload or traces from real systems
  • Expensive (can take hours/days)
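
A minimal trace-driven simulation in C (the trace is synthetic and tiny): replay (arrival, burst) events through a single FCFS CPU and report the average waiting time. Real simulators model far more of the system, which is why they can run for hours or days.

    #include <stdio.h>

    struct event { int arrival, burst; };

    int main(void) {
        struct event trace[] = { {0, 8}, {1, 4}, {2, 9}, {3, 5} };   /* synthetic trace */
        int n = 4, clock = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            if (clock < trace[i].arrival)
                clock = trace[i].arrival;               /* CPU sits idle until arrival */
            total_wait += clock - trace[i].arrival;     /* time the job spent waiting */
            clock += trace[i].burst;                    /* run the job to completion */
        }
        printf("average waiting time: %.2f\n", (double)total_wait / n);
        return 0;
    }
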
12
Q

Implementation Evaluation

A
  • Code an algorithm and test it
13
Q

Why do processes cooperate?

A
  • Information Sharing
  • Computation Speed-up
  • Modularity, convenience
14
Q

Interprocess Communication (IPC) Methods

A
  • Shared Memory
  • Message Passing

15
Q

Shared Memory

A
  • One process creates shared memory
  • Other processes attach shared memory to their own address space
  • Shared memory is treated as regular memory
  • Synchronization is needed to prevent conflicts
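
A sketch of the producer side with POSIX shared memory (the name "/demo_shm" and the message are arbitrary; link with -lrt on older Linux systems): create the segment, attach it to the address space with mmap, and then use it like regular memory. A reader would shm_open and mmap the same name; the synchronization the card mentions is omitted here.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const size_t SIZE = 4096;
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);   /* create/open segment */
        if (fd < 0 || ftruncate(fd, SIZE) < 0) { perror("shm"); return 1; }

        char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "hello from the producer");    /* treated as regular memory */

        munmap(p, SIZE);
        close(fd);
        /* shm_unlink("/demo_shm") removes the segment once it is no longer needed. */
        return 0;
    }
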
16
Q

Shared Memory Pro/Con

A
  • Pro: Fast (memory speed) and convenient for programmers (it is just regular memory)
  • Con: Conflicts have to be managed with synchronization (and this is tricky to extend to distributed systems)
17
Q

Message Passing

A
  • Process A sends a message to process B via the kernel
  • Primitives: send(msg), receive(msg)
  • Direct (processes name each other) vs indirect (via ports/mailboxes)
  • Blocking vs non-blocking send/receive
  • Buffering (see the sketch below)
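
A sketch of indirect, buffered message passing with a POSIX message queue (the name "/demo_mq" is arbitrary; link with -lrt on Linux): the kernel holds messages in a named mailbox, and mq_send / mq_receive block by default when the queue is full or empty. A second process would open the same name to communicate.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };
        mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        const char *msg = "hello via the kernel";
        mq_send(mq, msg, strlen(msg) + 1, 0);          /* enqueue (blocks if the queue is full) */

        char buf[128];                                 /* must hold mq_msgsize bytes */
        mq_receive(mq, buf, sizeof(buf), NULL);        /* dequeue (blocks if the queue is empty) */
        printf("received: %s\n", buf);

        mq_close(mq);
        mq_unlink("/demo_mq");                         /* remove the mailbox */
        return 0;
    }
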
18
Q

Message Passing Pro/Con

A
  • Pro: No conflicts to manage (easy to exchange messages, especially in distributed systems)
  • Con: High overhead and slow (every message goes through the kernel)