Multicore Processors Flashcards

(53 cards)

1
Q

What is a multicore processor?

A
  • Multiple cores on same die
  • Each core capable of running a stream of instructions
  • Cores are independent but may work together
  • MIMD parallelism
2
Q

What are the two ways in which we can enable communication between cores?

A
  1. Shared memory
  2. Message passing
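The two communication styles above can be sketched with Python threads. This is an illustrative sketch only, not from the cards; the names `shared` and `mailbox` are made up.

```python
import threading
import queue

# 1. Shared memory: one thread writes a value that the other reads
#    from the same address space.
shared = {"x": 0}
done = threading.Event()

def writer():
    shared["x"] = 42      # write into the shared address space
    done.set()            # signal that the write has happened

t = threading.Thread(target=writer)
t.start()
done.wait()               # synchronise before reading
assert shared["x"] == 42  # the reader sees the writer's value
t.join()

# 2. Message passing: the value is sent explicitly; nothing is shared.
mailbox = queue.Queue()

def sender():
    mailbox.put(42)       # explicit send

t = threading.Thread(target=sender)
t.start()
value = mailbox.get()     # explicit receive
t.join()
assert value == 42
```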
3
Q

Draw a diagram to show 4 cores attached to a shared memory

A

.

4
Q

With shared memory: no matter how cores are organised, they all see ? ? values in memory

A

the same

5
Q

All cores see shared memory as a ???

A

single address space

6
Q

Writes to shared memory by one core are ? seen by others

A

eventually
Doesn’t have to happen instantly, but reads must eventually return the most recently written value

7
Q

Give 2 advantages of shared memory

A

+ Different cores can use memory to communicate - one core writes a value that another reads
+ Shared memory is easy to program because it matches the view programmers already have of memory

8
Q

When might one core write a value to memory that another core then reads?

A

If a program that was running on the first core is rescheduled to run on the second

9
Q

What are the advantages and disadvantages of using shared caches?

A

+ Good use of space
+ Data shared between cores is accessed faster than via memory, ie. good for applications that share data
- If a process has a large working set it can starve other processes of cache resources

10
Q

What are the advantages and disadvantages of using private caches?

A

+ Provide guaranteed space for each core
+ May be faster to access
- Different processes might have different caching requirements

11
Q

Draw a diagram of 4 cores with a shared memory and a single level of shared cache

A

.

12
Q

In this diagram, what is L1 cache really split into?

A

Data cache (read and write)
Instruction cache (read only)

13
Q

Draw a diagram of 4 cores with a shared memory and multiple shared caches

A

.

14
Q

Draw a diagram of 4 cores with a shared memory, private L1 and L2 cache, and shared L3 cache

A

.

15
Q

List the 3 cache inclusion schemes

A
  1. Inclusive
  2. Exclusive
  3. Non-inclusive non-exclusive (NINE)
16
Q

Draw a diagram to show inclusive cache inclusion

A

[diagram: every block in L1 is also in L2, and every block in L2 is also in L3]

17
Q

Draw a diagram to show exclusive cache inclusion

A

[diagram: each block is held in at most one cache level, with no duplicates]

18
Q

Draw a diagram to show NINE cache inclusion

A

[diagram: blocks in L1 may also be in L2, but neither inclusion nor exclusion is enforced]

19
Q

What are the advantages and disadvantages of inclusive cache inclusion?

A

+ Miss may not have to go all the way to memory
- Wasteful, since data is duplicated at different cache levels (L1 data is also in L2 and L3)

20
Q

What are the advantages and disadvantages of exclusive cache inclusion?

A

+ More space in cache because no duplicate data
- More complicated: when we evict data from L1 it must be written to L2, which might kick data out of L2, and so on; the effects propagate through the cache hierarchy

21
Q

What is a write through cache?

A

Each write is propagated through the caches to memory
Creates a lot of traffic, and we still need to propagate values to other cores

22
Q

What is a write back cache?

A

Writes update the cache; changes are written back to memory later, after a batch of writes
Ideally only propagate values when necessary (ie. on reads by another core)
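A toy model (my own sketch, not from the cards) contrasting the two write policies by counting how many writes reach memory. The cache is reduced to a single line for simplicity.

```python
class WriteThroughCache:
    def __init__(self):
        self.value = None
        self.mem_writes = 0   # traffic to memory

    def write(self, v):
        self.value = v
        self.mem_writes += 1  # every write is propagated to memory


class WriteBackCache:
    def __init__(self):
        self.value = None
        self.dirty = False
        self.mem_writes = 0

    def write(self, v):
        self.value = v
        self.dirty = True     # memory is updated later, not now

    def evict(self):
        if self.dirty:        # only the final value is written back
            self.mem_writes += 1
            self.dirty = False


wt, wb = WriteThroughCache(), WriteBackCache()
for v in range(10):           # a batch of 10 writes to the same line
    wt.write(v)
    wb.write(v)
wb.evict()

assert wt.mem_writes == 10    # write-through: a lot of traffic
assert wb.mem_writes == 1     # write-back: one write on eviction
```

This is why the deck prefers write-back caches (card 24): the same batch of writes produces a fraction of the memory traffic.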

23
Q

What kind of data do caches allow faster access to?

A

Data with temporal or spatial locality

24
Q

Which type of cache do we prefer to use?

A

Write-back caches

25
Q

What is cache coherence?

A

Ensures all cores maintain a consistent view of shared data, preventing inconsistencies that can arise when multiple caches hold copies of the same data

26
Q

How can we maintain cache coherence?

A

Implement hardware to:
  1. Propagate values to keep caches and memory coherent
  2. Run a cache coherence protocol to ensure all cores read the most up-to-date value for each memory address

27
Q

What is the hardware we implement for cache coherence?

A

Snoopy bus

28
Q

Describe a snoopy bus

A

Allows all caches to see what the others do. Using this information, caches make decisions about actions they might need to take

29
Q

Draw a diagram of 3 cores with private L1 cache and shared memory. The most up-to-date value of x is in core 1's cache, with an older copy in memory, and core 2 wants to read x. Show the snoopy bus

A

.

30
Q

How do we know the most up-to-date value?

A

Associate status bits with each cache line; on access, the status tells us what actions to take

31
Q

The coherence protocol is run on ? ? ?

A

each cache line

32
Q

Draw a diagram showing the need for coherence

A

.

33
Q

What is the cache coherence protocol we will use?

A

MSI Protocol

34
Q

What 3 states could each cache line be in (ie. the 3 possible status values)?

A
  1. Modified
  2. Shared
  3. Invalid

35
Q

Describe the modified state

A

Single up-to-date copy in this cache; the value in main memory is stale

36
Q

Describe the shared state

A

Copy in this cache matches memory; may be present in other caches

37
Q

Describe the invalid state

A

No copy in this cache, ie. a cache miss

38
Q

What happens when there is a local read and the cache line status is M? Draw the state transition

A

Read hit: we can simply return the data
M -> M

39
Q

What happens when there is a local read and the cache line status is S? Draw the state transition

A

Read hit: we can simply return the data
S -> S

40
Q

What happens when there is a local read and: the cache line status is I and the data is in another cache in M state? Draw the state transition

A

Read miss: initiate a bus read to bring the data into the cache
That cache snoops the bus and flushes the data back to memory; it transitions to S state
We get the data from memory and keep a copy in S state
I -> S

41
Q

What happens when there is a local read and: the cache line status is I and the data is in other caches in S state? Draw the state transition

A

Read miss: initiate a bus read to bring the data into the cache
Those caches snoop the bus but do nothing
We get the data from memory and keep a copy in S state
I -> S

42
Q

What happens when there is a local read and: the cache line status is I and the data is not in another cache? Draw the state transition

A

Read miss: initiate a bus read to bring the data into the cache
We get the data from memory and keep a copy in S state
I -> S

43
Q

What happens when there is a local write and the cache line status is M? Draw the state transition

A

Write hit: we can simply write the data
M -> M

44
Q

What happens when there is a local write and the cache line status is S? Draw the state transition

A

Write hit: send an upgrade request to the other caches
Any caches that hold a copy (in S state) invalidate it
We transition to M
S -> M

45
Q

What happens when there is a local write and: the cache line status is I and the data is in another cache in M state? Draw the state transition

A

Write miss: initiate a bus read to bring the data into the cache
That cache snoops the bus, flushes the data back to memory and invalidates its own copy
We get the data from memory and keep a copy in M state
I -> M

46
Q

What happens when there is a local write and: the cache line status is I and the data is in other caches in S state? Draw the state transition

A

Write miss: initiate a bus read to bring the data into the cache
Those caches snoop the bus and invalidate their copies
We get the data from memory and keep a copy in M state
I -> M

47
Q

What happens when there is a local write and: the cache line status is I and the data is not in another cache? Draw the state transition

A

Write miss: initiate a bus read to bring the data into the cache
We get the data from memory and keep a copy in M state
I -> M
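The transitions in cards 38-47 can be sketched as a toy MSI simulator: two caches on a snoopy bus, one cache line, states "M", "S", "I". This is my own illustrative sketch (class and method names are made up), with data values and memory flushes modelled implicitly.

```python
class Cache:
    def __init__(self, bus):
        self.state = "I"
        self.bus = bus
        bus.caches.append(self)

    def read(self):
        if self.state == "I":                 # read miss
            self.bus.bus_read(requester=self)
            self.state = "S"                  # I -> S
        # states M and S: read hit, no transition

    def write(self):
        if self.state == "S":                 # write hit, line shared
            self.bus.upgrade(requester=self)  # others invalidate
            self.state = "M"                  # S -> M
        elif self.state == "I":               # write miss
            self.bus.bus_read(requester=self, exclusive=True)
            self.state = "M"                  # I -> M
        # state M: write hit, no transition

    def snoop(self, exclusive):
        if self.state == "M":
            # flush the dirty data back to memory (modelled implicitly)
            self.state = "I" if exclusive else "S"
        elif self.state == "S" and exclusive:
            self.state = "I"


class Bus:
    def __init__(self):
        self.caches = []

    def bus_read(self, requester, exclusive=False):
        for c in self.caches:                 # all caches snoop the bus
            if c is not requester:
                c.snoop(exclusive)

    def upgrade(self, requester):
        for c in self.caches:                 # invalidate shared copies
            if c is not requester and c.state == "S":
                c.state = "I"


bus = Bus()
c1, c2 = Cache(bus), Cache(bus)

c1.read()                                     # card 42: I -> S
assert (c1.state, c2.state) == ("S", "I")
c2.read()                                     # card 41: I -> S, c1 stays S
assert (c1.state, c2.state) == ("S", "S")
c1.write()                                    # card 44: S -> M, c2 invalidated
assert (c1.state, c2.state) == ("M", "I")
c2.read()                                     # card 40: c1 flushes, M -> S
assert (c1.state, c2.state) == ("S", "S")
```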
48
Q

Draw the full state transition diagram for the MSI protocol

A

.

49
Q

What is an exclusive bus read?

A

A bus read for data with the intent to modify it

50
Q

Draw a table to show how the state of a line in one cache limits the states the same line in another cache can have

A

If a line is in M in one cache, all other caches must have it in I
If it is in S, other caches can have it in S or I
If it is in I, other caches can have it in any state

51
Q

Look over coherency example

A

.

52
Q

What would be faster than going to memory to get data as the MSI protocol requires?

A

Cache-to-cache sharing

53
Q

MSI sends a lot of invalidates for thread-private data. How can we solve this?

A

Add an extra Exclusive state: this is the only cache with a copy, and the value in memory is up-to-date
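A minimal sketch of why the extra Exclusive state helps (my own illustrative model, following the MESI idea in the card above): a line loaded when no other cache has a copy enters E, so a later write can upgrade silently, with no bus invalidate.

```python
class MesiLine:
    def __init__(self):
        self.state = "I"
        self.invalidates_sent = 0   # bus invalidate messages issued

    def read(self, others_have_copy):
        if self.state == "I":
            # a miss when no other cache holds the line loads it in E
            self.state = "S" if others_have_copy else "E"

    def write(self):
        if self.state == "E":
            self.state = "M"            # silent upgrade: no bus traffic
        elif self.state in ("S", "I"):
            self.invalidates_sent += 1  # must invalidate other copies
            self.state = "M"
        # state M: write hit, no transition


private = MesiLine()
private.read(others_have_copy=False)  # thread-private data: loads in E
private.write()                       # E -> M, silently
assert private.state == "M"
assert private.invalidates_sent == 0  # no invalidate was needed
```

Under plain MSI the same line would have been loaded in S and the write would have broadcast an invalidate, even though no other cache held a copy.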