Final Lecture Review Flashcards
How to calculate total time and avg time to complete orders for boss-worker:
ex)
6 threads (1 boss + 5 workers), 11 toy orders
120ms per toy order
total time: first 5 orders complete at 120ms -> next 5 complete at 240ms -> last (11th) order completes at 360ms, so total = 360ms
avg time = (5(120) + 5(240) + 360) / 11 = 2160 / 11 ≈ 196ms
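The arithmetic above can be sketched in Python (the function name `boss_worker_times` is mine, not from the lecture):

```python
def boss_worker_times(workers, orders, ms_per_order):
    # Orders are processed in batches of `workers`; each batch finishes
    # ms_per_order after the previous one. Order i (0-indexed) completes
    # at (batch number + 1) * ms_per_order.
    return [((i // workers) + 1) * ms_per_order for i in range(orders)]

times = boss_worker_times(workers=5, orders=11, ms_per_order=120)
total = max(times)             # 360 ms
avg = sum(times) / len(times)  # 2160 / 11 ≈ 196.4 ms
```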
How to calculate total time and avg time to complete orders for pipeline:
ex)
6 workers (one per pipeline stage), 11 toy orders
20ms per pipeline stage
total time for the first order -> 6 stages × 20ms = 120ms
each later order finishes 20ms after the previous one (offset = 20ms)
10 remaining orders after the first
total time = 120 + 20(10) = 320ms
avg time = (120 + 140 + 160 + … + 320) / 11 = 2420 / 11 = 220ms
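The same pipeline arithmetic as a Python sketch (the function name `pipeline_times` is mine, not from the lecture):

```python
def pipeline_times(stages, orders, ms_per_stage):
    # First order takes the full pipeline: stages * ms_per_stage.
    # Each subsequent order drains one stage (ms_per_stage) later.
    first = stages * ms_per_stage
    return [first + i * ms_per_stage for i in range(orders)]

times = pipeline_times(stages=6, orders=11, ms_per_stage=20)
total = times[-1]              # 320 ms
avg = sum(times) / len(times)  # 2420 / 11 = 220 ms
```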
Why are threads useful?
Parallelization -> speedup
Specialization -> leads to hot cache
Lower memory requirement than processes and cheaper synchronization
Usefulness depends on:
- the situation and the metrics we use
- the workload
Define once again the tradeoffs between MP and MT
MP benefits:
- easier to develop
- relies on the OS for isolation between requests
MP costs:
- larger memory footprint; context switching and IPC are expensive
- tricky port setup
- costly to maintain shared state
MT benefits:
- shared address space -> smaller memory footprint
- cheap context switching
MT costs:
- harder to implement
- synchronization is difficult to handle
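A tiny sketch of the key difference the card hinges on: threads share the parent's address space, while a forked process only gets a copy, so shared state in MP must go through IPC. POSIX-only (uses `os.fork`); the `counter`/`bump` names are illustrative.

```python
import os
import threading

counter = {"value": 0}

def bump():
    counter["value"] += 1

# Thread: mutates the same dict the parent sees (shared address space).
t = threading.Thread(target=bump)
t.start()
t.join()
after_thread = counter["value"]   # 1: the change is visible to the parent

# Forked child: mutates its own copy; the parent's dict is unchanged.
pid = os.fork()
if pid == 0:
    bump()                        # only affects the child's copy
    os._exit(0)
os.waitpid(pid, 0)
after_fork = counter["value"]     # still 1 in the parent
```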
How does the event-driven model work?
Event driven model:
- Process request until wait is necessary
- Then switch to another request when waiting on a request
- This continues until all requests are served
- Single address space and single flow of control
- Single process and single thread (helper threads/processes added as needed for blocking I/O)
- Smaller memory requirement and no context switching
- No synchronization needed
Compared to MT: context switching between threads wastes cycles that could have been spent processing other requests
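The flow above can be sketched as a single-threaded dispatch loop using Python's `selectors` module; one flow of control switches between requests whenever one would block, instead of spawning a thread per request. The echo handler and the `serve_once` helper are illustrative, not from the lecture.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    # A client connected: register it so the loop hears from it later.
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(1024)          # socket is ready, so this won't block
    if data:
        conn.sendall(data.upper())  # reply, then go wait on other requests
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))       # demo address; OS picks a free port
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

def serve_once(timeout=1.0):
    # One dispatch step: process whichever request is ready right now.
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)       # call the registered handler
```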
Of the three models, which of the following requires the least amount of memory?
Boss-worker model
Pipeline model
Event-driven model
Event-driven model
Extra memory is needed only for the helper threads that handle blocking I/O, not for every concurrent request
Experiments and the winners from the FLASH paper
Single-file test (synthetic) -> SPED won (everything stayed in cache, and SPED skipped the check for whether the file was already in memory)
Owlnet trace -> AMPED barely beat out SPED thanks to its blocking-I/O advantage (helpers); SPED occasionally blocked
CS trace (larger trace that requires disk I/O and does not fit in cache) -> AMPED won due to its smaller memory footprint, which left more memory for caching; SPED performed worst because it lacks async I/O
Impact of optimizations on connection rate -> the fully optimized server performed best and the unoptimized one worst (optimizations matter a lot)
Performance under WAN -> AMPED sustained its performance as more clients were added