Multi-Processing Flashcards
What is Flynn’s taxonomy of parallel machines?
Machines are classified by the number of instruction streams and the number of data streams:
- SISD: single instruction, single data (uniprocessor)
- SIMD: single instruction, multiple data (vector processor)
- MISD: multiple instruction, single data (rare in practice; sometimes described as a stream processor)
- MIMD: multiple instruction, multiple data (multiprocessor)
What are the limitations of uniprocessors?
- Diminishing returns from increasing issue width.
- Increasing frequency increases power roughly cubically (dynamic power scales with voltage squared times frequency, and voltage must scale up with frequency).
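The cubic relationship can be sketched with the standard dynamic-power formula P ≈ C·V²·f; the values below are arbitrary illustrative numbers, not real chip parameters:

```python
# Dynamic power P ~ C * V^2 * f. Voltage must scale roughly in
# proportion to frequency, so power grows ~cubically in frequency.
def dynamic_power(c, v, f):
    return c * v**2 * f

base = dynamic_power(1.0, 1.0, 1.0)
# Doubling frequency requires ~doubling voltage as well:
doubled = dynamic_power(1.0, 2.0, 2.0)
print(doubled / base)  # -> 8.0, i.e. 2^3: cubic growth
```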
What are the types of multi-processors?
- Centralized Shared Memory
- Distributed Memory
What is centralized shared memory good for and why?
A single shared memory is slow and can only serve a limited number of accesses per second, so it becomes a bottleneck => good only for small machines (roughly fewer than 16 cores).
What is distributed memory?
Each core has its own local memory and must communicate with other cores over a network (via a NIC), so programs must rely on message passing.
Pros/cons of message passing vs shared memory?
- Message passing is harder to program correctly, but once the program is correct, it is usually also performant.
- Shared memory is easier to program correctly, but much harder to make performant.
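A minimal sketch of the two models using Python's `multiprocessing` (a toy example; the worker names and the 42 payload are arbitrary, and real distributed-memory code would use something like MPI):

```python
from multiprocessing import Process, Pipe, Value

def mp_worker(conn):
    # Message passing: data is explicitly sent over a channel.
    conn.send(42)
    conn.close()

def sm_worker(shared):
    # Shared memory: both processes read/write the same location,
    # so access must be synchronized (here, with a lock).
    with shared.get_lock():
        shared.value += 42

def demo():
    parent, child = Pipe()
    p = Process(target=mp_worker, args=(child,))
    p.start()
    msg = parent.recv()   # explicit receive completes the handoff
    p.join()

    shared = Value("i", 0)
    q = Process(target=sm_worker, args=(shared,))
    q.start()
    q.join()
    return msg, shared.value

if __name__ == "__main__":
    print(demo())  # -> (42, 42)
```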
What is SMP?
Symmetric Multi-Processing: multiple identical cores sharing one unified address space.
What is hyperthreading?
AKA simultaneous multithreading (SMT): a core can execute instructions from multiple threads in the same cycle (requires the most hardware support of the multithreading approaches).
What are the benefits of multithreading in cores?
- Avoids the overhead of software thread scheduling (context switches).
- Can take advantage of under-utilized hardware / idle time that exists when running just one thread.
- More efficient / lower cost than having two full cores.
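The "use idle time" benefit can be illustrated in software (a toy analogy, since SMT interleaving happens in hardware, not via OS threads): when one thread is stalled, another thread can run in the otherwise-idle time.

```python
import threading
import time

def stalled_work():
    # sleep() stands in for a long stall (e.g. a cache miss or I/O wait)
    time.sleep(0.2)

start = time.perf_counter()
threads = [threading.Thread(target=stalled_work) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The two 0.2 s stalls overlap, so total time is ~0.2 s, not 0.4 s.
print(f"{elapsed:.2f}s")
```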
What hardware changes are required for SMT?
Need a PC per thread and extra logic before the RSes (per-thread RFs and RATs). The RSes and ROB stay the same; the ROB just holds interleaved entries from both threads.
Why is VIVT caching bad for SMT?
The two threads can use the same virtual addresses for different data (they have separate address spaces), so a virtually-indexed, virtually-tagged cache cannot tell their lines apart and may return the wrong thread’s data.
How do we do caching for SMT?
- Use PIPT or VIPT caches (physical tags disambiguate the threads’ lines)
- Make the TLB thread-aware
What are the pros/cons for SMT w.r.t. caching?
(+) good data sharing between threads (a thread gets a cache hit on data the other thread has already brought in)
(-) cache capacity and associativity is shared
What is cache thrashing?
When the combined working sets of the two threads exceed the cache size, the threads keep evicting each other’s lines => lots of cache misses.
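A back-of-envelope check (the 32 KiB cache and 24 KiB working sets are hypothetical numbers, just to show the condition):

```python
# Two SMT threads share one L1 cache; thrashing occurs when their
# combined working sets exceed the cache capacity.
cache_kib = 32
working_set_kib = 24        # per-thread working set

combined = 2 * working_set_kib   # 48 KiB demanded
thrashing = combined > cache_kib
print(combined, thrashing)  # -> 48 True: each thread evicts the other's lines
```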