Caches and Caching Flashcards

1
Q

What are the two concepts associated with caching?

A

Caching and Memory Cache

2
Q

What is caching?

A

It is an optimization technique
that speeds up access to slow storage by
storing frequently accessed items in fast
storage.

3
Q

What is memory cache?

A

It is a type of fast memory
storage between the CPU and RAM that
stores frequently accessed memory items and speeds up memory access.

4
Q

What are some instances where caching is used as an optimization technique?

A

Physical memory cache, web cache, disk buffers, virtual memory (VM), and the translation lookaside buffer (TLB).

5
Q

Explain web cache

A

Speeds up access to web pages by storing copies of frequently accessed pages locally on the computer.

6
Q

Explain disk buffers in the context of cache.

A

Frequently accessed disk blocks are kept in RAM so that repeated reads and writes avoid going to the slower disk.

7
Q

Explain Virtual Memory in the context of cache

A

RAM acts as a cache for the disk: the frequently accessed pages of a large virtual address space are kept in physical memory, while the rest stay on disk.

8
Q

Explain Translation Lookaside Buffer in the context of caching.

A

A small, fast cache inside the CPU/MMU that stores recently used virtual-to-physical address translations (page table entries), so most translations avoid walking the page table in RAM.

9
Q

What is a Cache Hit?

A

● The request is satisfied from the cache.
● There is no need to access the large (slow) data storage.

10
Q

What is a Cache miss?

A

● The request cannot be satisfied from the cache.
● The item is retrieved from the large (slow) data storage.
● A copy is placed in the cache.
● If the cache is full, an item not used recently is removed from the cache to make room for the new item (LRU, Least Recently Used).

11
Q

What happens when the cache is full?

A

An item that has not been used recently is removed from the cache to make room for the new item (LRU, Least Recently Used replacement); a sketch follows below.
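
A minimal sketch of this replacement policy, assuming a tiny fully associative cache whose slots carry a last-used timestamp; the sizes, names, and integer "items" are illustrative, not from the cards (a real hardware cache is organized differently):

#include <stdio.h>

#define CACHE_SLOTS 4

typedef struct {
    int key;        /* which item this slot holds            */
    int valid;      /* 1 if the slot currently holds an item */
    long last_used; /* logical time of the most recent access */
} Slot;

static Slot cache[CACHE_SLOTS];
static long now = 0;

/* Look up an item; on a miss, evict the least recently used slot. */
int cache_access(int key)
{
    int lru = 0;

    now++;
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].valid && cache[i].key == key) {
            cache[i].last_used = now;          /* cache hit */
            return 1;
        }
        /* remember an empty slot, or the least recently used one */
        if (!cache[i].valid ||
            cache[i].last_used < cache[lru].last_used)
            lru = i;
    }

    /* cache miss: the item would be fetched from the slow storage,
       then a copy is placed in the LRU slot */
    cache[lru].key = key;
    cache[lru].valid = 1;
    cache[lru].last_used = now;
    return 0;
}

int main(void)
{
    int trace[] = {1, 2, 3, 1, 4, 5, 1};   /* item 1 repeats: locality */
    for (int i = 0; i < 7; i++)
        printf("access %d: %s\n", trace[i],
               cache_access(trace[i]) ? "hit" : "miss");
    return 0;
}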

12
Q

What is the hit ratio in context of cache?

A

● The fraction of requests satisfied from the cache: a value r between 0 and 1.

13
Q

What is the miss ratio in the context of cache?

A

● The fraction of requests not satisfied from the cache: 1 - r, where r is the hit ratio.
● Average access time: Time = r * T_hit + (1 - r) * T_miss, with T_hit < T_miss (worked example below).
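
As an illustration with made-up numbers (not from the cards): if r = 0.9, T_hit = 10 ns and T_miss = 100 ns, then Time = 0.9 * 10 + 0.1 * 100 = 19 ns, roughly five times better than the 100 ns cost of always going to the slow storage.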

14
Q

What are the worst and best cases of cache in terms of hit ratio?

A

● In the worst case (r = 0), no request hits and the cost is the same as not having a cache at all.
● In the best case (r = 1), every access is satisfied from the cache and costs only T_hit.

15
Q

What is Locality of Reference?

A

Refers to repetitions of the same request.

16
Q

What does it mean to have a high locality of reference?

A

Many repetitions of the same request.

17
Q

What does it mean to have a low locality of reference?

A

Few repetitions of the same request

18
Q

What is better for cache, a high or low locality of reference?

A

High; more repetitions mean the cache will be more effective.

19
Q

What is preloading (prefetch) in caches?

A

It is an optimization technique where items are stored in the cache before the request arrives.

20
Q

When does preloading work?

A

When related items are grouped together, for example:
● When loading a web page, also load the images referenced in the page.
● When accessing one item in a struct, also fetch the other items in the struct.
● When reading a program instruction, also read the instructions that follow.
A small prefetch-hint sketch follows below.
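
Hardware and the OS normally do this read-ahead transparently; purely as an illustration, GCC and Clang also expose an explicit hint, __builtin_prefetch. A minimal sketch, where the function name and the look-ahead distance of 16 elements are illustrative assumptions:

#include <stddef.h>

/* Sum an array while asking the cache to start loading data
   a few iterations ahead of where it is needed. */
long sum_with_prefetch(const long *a, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16]);  /* hint: this data will be needed soon */
        sum += a[i];
    }
    return sum;
}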

21
Q

Where is memory cache located?

A

Between the CPU and RAM.

22
Q

What are two different types of memory cache?

A

Write-Through, and Write-Back

23
Q

What is Write-Through cache?

A

● Place a copy of the item in the cache.
● Immediately write a copy to physical memory (RAM) as well, so RAM always stays up to date.

24
Q

What is Write-Back cache?

A

● Place a copy of the item in the cache.
● Write the copy to RAM only when necessary (for example, when the line is evicted).
● Since multiple processors may have their own caches, a Cache Coherence mechanism is needed to make sure all copies stay consistent.
A sketch contrasting the two write policies follows below.
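
A rough sketch contrasting the two policies on a single cache line with a dirty bit; the types, names, and the simulated RAM array are illustrative assumptions, not part of the cards:

#include <stdbool.h>

typedef struct {
    int  tag;    /* which memory block the line holds          */
    int  data;
    bool valid;
    bool dirty;  /* write-back only: cache is newer than RAM   */
} CacheLine;

/* Write-through: every store updates the cache AND physical memory. */
void write_through(CacheLine *line, int block, int value, int ram[])
{
    line->tag = block;
    line->data = value;
    line->valid = true;
    ram[block] = value;               /* RAM is always up to date */
}

/* Write-back: the store updates only the cache; RAM is written later,
   when the dirty line has to be evicted to make room for another block. */
void write_back(CacheLine *line, int block, int value, int ram[])
{
    if (line->valid && line->dirty && line->tag != block)
        ram[line->tag] = line->data;  /* flush the old dirty line first */
    line->tag = block;
    line->data = value;
    line->valid = true;
    line->dirty = true;               /* RAM is stale until eviction */
}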

25
Q

What are the specifics of multi level cache?

A

● L1 Cache: built inside the CPU
● L2 Cache: external to the CPU
● L3 Cache: built into RAM

26
Q

Where are instructions and data stored?

A

In RAM

27
Q

Do modern computers separate instructions and data in cache?

A

Yes, at the first level: most modern CPUs use a separate L1 instruction cache and L1 data cache, while L2 and L3 are usually unified.

28
Q

Do instructions have a high or low locality of reference?

A

High; instructions are usually accessed sequentially, and some functions are executed more often than others.

29
Q

Does data have a high or low locality of reference?

A

Medium; there is some locality of reference, for example some parts of the stack are accessed more often than the heap.

30
Q

What cache can store virtual memory?

A

L1

31
Q

What are cache lines?

A

The fixed-size blocks into which RAM is divided for caching; data is transferred between RAM and the cache one whole line at a time.

32
Q

What is the typical size of a cache line/block?

A

64 bytes

33
Q

How much memory is fetched during a cache miss?

A

1 line, 64 bytes

34
Q

What are the types of cache technologies?

A

● Direct Memory Cache
● Associative Memory Cache

35
Q

What is Direct Memory Cache?

A

● Memory is divided into blocks (cache lines), and each block has a number.
● Blocks are grouped into sets of N, where N is the number of cache slots; each group is identified by a tag.

36
Q

What is Associative Memory Cache?

A

● A generalization of Direct Memory Cache.
● Uses parallel hardware.
● Maintains several independent caches that are searched in parallel, so an item with a given block number can be stored in more than one place.

37
Q

What are the direct cache steps?

A

Input: a memory address
Output: the data at that address
Algorithm:
● Split the address into tag t, block number b, and offset.
● Examine the tag stored in slot b of the cache. If it matches t, extract the value at the offset from slot b.
● If the tag does not match, read the block from memory, copy its data into slot b, and replace the stored tag with t.
● Problem: two items with the same block number cannot be in the cache at the same time.
A sketch of this lookup follows below.
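
A minimal sketch of that lookup for a direct-mapped cache, assuming illustrative sizes (16 slots, 64-byte lines) and a small simulated RAM array; the names and constants are not from the cards:

#include <stdint.h>
#include <string.h>

#define LINE_SIZE   64    /* bytes per cache line (block)  */
#define NUM_SLOTS   16    /* slots in the cache            */
#define NUM_BLOCKS  128   /* blocks in the simulated RAM   */

typedef struct {
    uint32_t tag;               /* which group of blocks is cached here */
    int      valid;
    uint8_t  data[LINE_SIZE];
} Slot;

static uint8_t ram[NUM_BLOCKS * LINE_SIZE];   /* simulated physical memory */
static Slot    cache[NUM_SLOTS];

uint8_t read_byte(uint32_t addr)
{
    uint32_t offset = addr % LINE_SIZE;    /* byte position within the line      */
    uint32_t block  = addr / LINE_SIZE;    /* absolute block number in memory    */
    uint32_t slot   = block % NUM_SLOTS;   /* block number b within its group    */
    uint32_t tag    = block / NUM_SLOTS;   /* tag t: which group of blocks       */

    /* Hit: the tag stored in slot b matches tag t. */
    if (cache[slot].valid && cache[slot].tag == tag)
        return cache[slot].data[offset];

    /* Miss: copy the whole line from memory into slot b and replace the tag. */
    memcpy(cache[slot].data, &ram[block * LINE_SIZE], LINE_SIZE);
    cache[slot].tag   = tag;
    cache[slot].valid = 1;
    return cache[slot].data[offset];
}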

38
Q

Does it matter what order you access elements in an array for speed?

A

Yes. Accessing the elements sequentially, in the order they are laid out in memory, is faster: each cache miss loads a whole line (typically 64 bytes), and sequential access reuses that line before it is evicted. A sketch follows below.
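
A small illustration, assuming a C-style 2-D array stored in row-major order; the array size is arbitrary:

#include <stdio.h>

#define N 1024

static int a[N][N];

int main(void)
{
    long sum = 0;

    /* Cache-friendly: row-major traversal touches consecutive bytes,
       so each 64-byte line fetched on a miss is fully reused. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];

    /* Cache-unfriendly: column-major traversal jumps N*sizeof(int)
       bytes between accesses, so almost every access misses. */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];

    printf("%ld\n", sum);
    return 0;
}

Timing the two loops (for example with clock()) typically shows the first one running several times faster on ordinary hardware.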