Caches Flashcards

1
Q

What’s the point of the cache?

A

To make accessing words faster and to avoid having to request data from main memory on every access

2
Q

What are the four algorithms for clearing a cache line?

A

First in, first out (FIFO)
Least recently used (LRU)
Least frequently used (LFU)
Random replacement
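The first two replacement algorithms above can be sketched with a toy cache. This is a minimal illustration, not a real controller: the class name, `policy` flag, and use of an ordered dictionary are all assumptions made for the example.

```python
from collections import OrderedDict

class SmallCache:
    """Toy fixed-size cache illustrating FIFO vs. LRU eviction.

    policy='fifo' evicts the oldest insertion; policy='lru' evicts
    the least recently accessed line. (Illustrative sketch only.)
    """

    def __init__(self, capacity, policy="lru"):
        self.capacity = capacity
        self.policy = policy
        self.lines = OrderedDict()  # key -> value, kept in eviction order

    def access(self, key, value=None):
        if key in self.lines:
            if self.policy == "lru":
                self.lines.move_to_end(key)  # refresh: now most recently used
            return self.lines[key]
        # Miss: evict the front entry if full, then insert the new line.
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)
        self.lines[key] = value
        return value
```

Under LRU a hit refreshes the line's position, so a recently touched line survives eviction; under FIFO a hit changes nothing, so the oldest insertion is always evicted first.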

3
Q

What are some of the key things to think about with cache design?

A

Size of cache
How you map to the cache
How you clear the cache lines

4
Q

What’s the relationship (in the way data is transferred) between main memory, the cache and the CPU?

A

Blocks go between the main memory and cache.

Words go between the cache and CPU.
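The block/word split can be pictured by decomposing a byte address. This is a hedged sketch: the 16-byte block size and the function name are assumptions for illustration only.

```python
BLOCK_SIZE = 16  # bytes per block (assumed size, for illustration)

def split_address(addr):
    """Split a byte address into (block number, offset within block).

    Main memory <-> cache traffic moves whole blocks (identified by
    the block number); cache <-> CPU traffic moves the single word
    at the offset within that block.
    """
    block = addr // BLOCK_SIZE
    offset = addr % BLOCK_SIZE
    return block, offset
```

For example, with 16-byte blocks, address `0x1234` falls in block `0x123` at offset `4`.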

5
Q

What is the process if a word is not found in the cache?

A

The CPU must go to main memory.
The block is delivered from main memory into the cache.
The word is then delivered from the cache into the CPU.
This incurs a significant time penalty.
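The miss sequence above can be sketched as a read routine: check the cache, fetch the whole block from memory on a miss, then deliver the word. Everything here (the 4-word block size, the dict-as-cache, the function name) is an assumption for illustration.

```python
BLOCK_SIZE = 4  # words per block (assumed, for illustration)

def read_word(addr, cache, memory):
    """Return the word at addr, filling the cache on a miss.

    cache:  dict mapping block number -> list of words (a cache line)
    memory: flat list of words standing in for main memory
    """
    block, offset = divmod(addr, BLOCK_SIZE)
    if block not in cache:                # miss: the slow path to memory
        start = block * BLOCK_SIZE
        cache[block] = memory[start:start + BLOCK_SIZE]  # whole block in
    return cache[block][offset]           # word delivered cache -> CPU
```

Note that the miss brings in the entire block, not just the requested word, so the next lines of the answer (block to cache, then word to CPU) both appear in the code.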

6
Q

What do you have to consider when an item in the cache has been modified?

A

You need to mark that the item has been modified in the cache, using a ‘dirty bit’.
That item will have to be written back to main memory.

7
Q

What are the different write policies associated with caches?

A

Write through

Write back

8
Q

What is write through?

A

All write operations to the cache are also made to main memory

9
Q

What is a potential problem with write through?

A

It generates substantial memory traffic and may cause a bottleneck.

10
Q

What is write back?

A

When an update occurs, a ‘dirty’ (update) bit is set on the cache line; the data is written back to main memory only when the line is replaced.
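The two write policies can be contrasted in a short sketch. The function names, the dict/list stand-ins for cache and memory, and the `dirty` set are all hypothetical, chosen only to make the timing difference visible.

```python
def write_through(addr, value, cache, memory):
    """Write policy 1: every write updates the cache line AND main memory."""
    cache[addr] = value
    memory[addr] = value

def write_back(addr, value, cache, memory, dirty):
    """Write policy 2: update only the cache and set the dirty bit;
    main memory stays stale until the line is evicted."""
    cache[addr] = value
    dirty.add(addr)

def evict(addr, cache, memory, dirty):
    """On replacement, a dirty line must be written back first."""
    if addr in dirty:
        memory[addr] = cache[addr]
        dirty.discard(addr)
    del cache[addr]
```

The trade-off from the earlier cards falls out directly: write-through keeps memory current at the cost of traffic on every write, while write-back batches the traffic but leaves a window where memory is out of date.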

11
Q

Name two situations where you need to have a write policy.

A

First: an I/O device could have read/write access to main memory. It could update main memory while the cache is not updated. Or the reverse: the cache could be updated while the data the I/O device is handling is not adjusted accordingly until too late.

Second: multiple CPUs, each with their own cache. If one cache is updated and the others aren’t, it could invalidate the data.

12
Q

What is a possible problem when considering writing back to main memory?

A

If more than one device has access to main memory and something is updated in the cache, that means portions of main memory are invalid. Or if an I/O device has updated the word in main memory, then the cache is out of date.

13
Q

What are three approaches to ensuring cache coherency?

A

1) Bus watching with write through
2) Hardware transparency
3) Non-cacheable memory

14
Q

Explain bus watching with write through policy

A

Cache controller watches address lines
It detects write operations to main memory
If another device writes to a shared-memory location that is also held in the cache, that cache line is invalidated
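The snooping behaviour above can be sketched as follows. This is a simplified model, not a real bus protocol: the class and function names are assumptions, and write-through is assumed so that main memory is always the up-to-date copy.

```python
class SnoopingCache:
    """A cache whose controller watches the shared bus: when it sees
    another device write an address it holds, it invalidates its copy."""

    def __init__(self):
        self.lines = {}  # addr -> cached value

    def snoop(self, addr):
        # Another device wrote addr to main memory: our copy is stale.
        self.lines.pop(addr, None)

def bus_write(addr, value, memory, caches, writer=None):
    """Write-through to memory; every *other* cache snoops the bus
    and invalidates its own copy of the address."""
    memory[addr] = value
    if writer is not None:
        writer.lines[addr] = value  # the writer's own cache stays valid
    for c in caches:
        if c is not writer:
            c.snoop(addr)
```

After a write, only the writing cache and main memory hold the new value; every other cache has dropped the line and will miss (and refetch) on its next access.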

15
Q

What does bus watching with write through depend on?

A

That all cache controllers use write through

16
Q

Explain hardware transparency

A

Additional hardware is used to make sure any updates to main memory via one cache are propagated to all caches

17
Q

Explain non-cacheable memory

A

A portion of main memory is shared by more than one CPU

Any accesses to this part of memory are designated as cache misses.

18
Q

Why could a large block size be good?

A

Able to exploit locality — spatial and temporal.

The chance of a hit on subsequent accesses is greater if large blocks are brought into the cache, because the next word needed is likely to be nearby.
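The effect of block size on hit rate can be demonstrated with a small counting sketch. The function name and the unbounded-cache assumption (no evictions, so only cold misses count) are simplifications made for this example.

```python
def hit_rate(addresses, block_size):
    """Fraction of accesses that hit, assuming every miss brings the
    whole surrounding block into an unbounded cache (cold misses only)."""
    cached_blocks = set()
    hits = 0
    for addr in addresses:
        block = addr // block_size
        if block in cached_blocks:
            hits += 1
        else:
            cached_blocks.add(block)  # miss: fetch the whole block
    return hits / len(addresses)
```

For a sequential scan of 64 words, a block size of 1 hits on nothing, while a block size of 16 misses only 4 times (once per block), giving a 60/64 hit rate — exactly the spatial-locality benefit the card describes. Real caches are finite, so overly large blocks also evict more data per miss; this sketch ignores that cost.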