Architectures and Operating Systems Flashcards

1
Q

List memory technology characteristics:

A

Static RAM (SRAM):

Access Time: 0.5ns – 2.5ns
Cost: $500 – $1000 per GB

Dynamic RAM (DRAM):

Access Time: 50ns – 70ns
Cost: $10 – $20 per GB

Flash Semiconductor Memory:

Access Time: 5,000ns – 50,000ns
Cost: $0.75 – $1 per GB

Magnetic Disk:

Access Time: 5,000,000ns – 20,000,000ns
Cost: $0.05 – $0.1 per GB

Ideal Memory:

Access Time: Same as SRAM
Capacity and Cost: Same as disk

Moving down the hierarchy, cost per GB decreases; moving up, access speed increases.

2
Q

What is a cache?

A

A cache is a small, fast memory (typically SRAM) placed between the CPU and main memory. It holds copies of recently or frequently used data so the CPU can access it more quickly than fetching from main memory.
3
Q

What is a cache hit?

A

A cache hit occurs when the data requested by the CPU is found in the cache, resulting in faster data access.

4
Q

What is a cache miss?

A

A cache miss occurs when the requested data is not found in the cache, requiring the system to retrieve it from a slower memory level.

5
Q

Formula for hit rate?

A

Hit rate = (number of cache hits / total number of memory accesses) * 100
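As a quick illustration (the function name is my own), the formula translates directly to code:

```python
def hit_rate_percent(cache_hits, total_accesses):
    """Hit rate as a percentage of all memory accesses."""
    return cache_hits / total_accesses * 100

# e.g. 980 hits out of 1000 accesses
print(hit_rate_percent(980, 1000))  # 98.0
```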

6
Q

Formula for miss rate?

A

Miss rate = 1 - hit rate (with both expressed as fractions)

or

Miss rate = (number of cache misses / total number of memory accesses) * 100
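A minimal sketch (names are my own) showing that the two forms agree:

```python
def miss_rate_percent(cache_misses, total_accesses):
    """Miss rate as a percentage of all memory accesses."""
    return cache_misses / total_accesses * 100

# Since misses = total - hits, miss rate = 100% - hit rate.
hits, total = 980, 1000
assert miss_rate_percent(total - hits, total) == 100 - (hits / total * 100)
print(miss_rate_percent(total - hits, total))  # 2.0
```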

7
Q

How can you measure cache performance?

A

AMAT (Average memory access time)

8
Q

Formula for AMAT?

A

AMAT = hit time + (miss rate * miss penalty)

A lower AMAT indicates better cache performance.

9
Q

What is miss penalty?

A

Miss penalty is the extra time required to fetch data from a lower memory level (like main memory) after a cache miss, including data retrieval and cache update time. It is typically tens of clock cycles.

10
Q

Solve this problem:

Say hit rate is 98%, hit time is one clock cycle and the miss penalty is
50 clock cycles. What is AMAT?

A

AMAT = hit time + (miss rate * miss penalty)
Miss rate = 1 - 0.98 = 0.02
AMAT = 1 + (0.02 * 50) = 2 clock cycles
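The worked example can be checked with a small sketch (the `amat` helper is my own naming):

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average Memory Access Time, in clock cycles."""
    return hit_time + miss_rate * miss_penalty

# 98% hit rate -> 2% miss rate; hit time 1 cycle; miss penalty 50 cycles
print(amat(hit_time=1, miss_rate=0.02, miss_penalty=50))  # 2.0
```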

11
Q

What are the two types of locality that improve cache performance?

A

Temporal Locality: Recently accessed items are likely to be accessed again soon. Examples: instructions within a loop or frequently accessed variables.

Spatial Locality: Items near recently accessed data are likely to be accessed soon. Examples: sequential instructions or array elements accessed in order.

12
Q

What is direct mapping?

A

Direct mapping assigns each memory location to exactly one cache block based on the index calculated using modulo division:

Cache Index = Memory Address mod 2^k

where k is the number of index bits.
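A minimal sketch of the index calculation (helper name is my own); it also shows how distinct addresses can conflict:

```python
def direct_mapped_index(address, k):
    """Cache index for a direct-mapped cache with 2**k blocks."""
    return address % (2 ** k)

# With 32 blocks (k = 5), addresses 3, 35, and 67 all map to index 3,
# so they would evict each other in a direct-mapped cache.
print(direct_mapped_index(3, 5),
      direct_mapped_index(35, 5),
      direct_mapped_index(67, 5))  # 3 3 3
```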

13
Q

What is hit time?

A

Hit time is the time required to access data from the cache, including the time to check the cache tags and retrieve the data. Usually 1-3 clock cycles.

14
Q

What is a fully associative cache?

A

A fully associative cache allows any memory address to be placed in any cache block. It requires checking all cache tags for a match, which increases complexity and cost.

15
Q

What is a set associative cache?

A

A set associative cache divides the cache into multiple sets, each containing a fixed number of blocks. A memory address maps to a specific set, and data can be placed in any block within that set.

16
Q

How does set associativity balance between direct and fully associative caches?

A

Set associativity offers a compromise by limiting the number of possible cache blocks a memory address can map to, reducing complexity compared to fully associative caches while increasing flexibility over direct-mapped caches.

17
Q

What are the components of a memory address in a set associative cache?

A

Tag: Identifies the specific memory block.
Set Index: Determines which set the data belongs to.
Block Offset: Specifies the exact byte within the cache block.
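A sketch of how the three fields could be extracted with bit operations; the field widths used in the example (2 set-index bits, 2 block-offset bits) are illustrative assumptions, not values from the card:

```python
def split_address(address, set_index_bits, block_offset_bits):
    """Split a memory address into (tag, set index, block offset)."""
    offset = address & ((1 << block_offset_bits) - 1)
    set_index = (address >> block_offset_bits) & ((1 << set_index_bits) - 1)
    tag = address >> (block_offset_bits + set_index_bits)
    return tag, set_index, offset

# Address 0b1101_1110 -> tag 0b1101, set index 0b11, offset 0b10
print(split_address(0b11011110, 2, 2))  # (13, 3, 2)
```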

18
Q

What is the valid bit in a cache block?

A

The valid bit indicates whether the data in a cache block is valid (1) or invalid (0). Initially, all valid bits are set to 0.

19
Q

What happens during a cache hit?

A

The CPU sends the address to the cache controller.
The cache controller uses the index to locate the set.
It checks the valid bit and compares the tag.
If it matches, the data is sent to the CPU.

20
Q

What steps occur during a cache miss?

A

The CPU requests data from the cache.
The cache controller detects a miss.
Data is fetched from the main memory.
The data is copied into the appropriate cache block.
The valid bit is set to 1.
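The hit and miss sequences from the two cards above can be sketched as a toy direct-mapped cache model (1-byte blocks and the class design are simplifying assumptions of mine):

```python
class DirectMappedCache:
    """Minimal direct-mapped cache model: valid bit + tag + data per block."""
    def __init__(self, num_blocks, memory):
        self.num_blocks = num_blocks
        self.memory = memory                  # backing "main memory"
        self.valid = [False] * num_blocks     # initially all blocks invalid
        self.tags = [0] * num_blocks
        self.data = [0] * num_blocks

    def read(self, address):
        index = address % self.num_blocks
        tag = address // self.num_blocks
        if self.valid[index] and self.tags[index] == tag:
            return self.data[index], "hit"    # valid bit set and tag matches
        # Miss: fetch from memory, fill the block, set the valid bit
        self.valid[index] = True
        self.tags[index] = tag
        self.data[index] = self.memory[address]
        return self.data[index], "miss"

cache = DirectMappedCache(8, list(range(100)))
print(cache.read(5))   # (5, 'miss') - first access, block invalid
print(cache.read(5))   # (5, 'hit')  - same address, temporal locality
```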

21
Q

How is the cache size calculated for a direct-mapped cache?

A

For example:
Number of Blocks: 32
Block Size: 1 byte
Address Size: 16 bits

Each block requires:

Tag: 11 bits (16-bit address - 5 index bits; 32 blocks need 5 index bits, and 1-byte blocks leave no block offset bits)
Valid Bit: 1 bit
Data: 8 bits

Total Cache Size:
32 × (11 + 1 + 8) = 32 × 20 = 640 bits
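The size calculation can be sketched in code (the helper name and the use of `bit_length` to compute log2 are my own choices):

```python
def direct_mapped_cache_bits(num_blocks, block_bytes, address_bits):
    """Total storage: blocks * (tag + valid + data) bits per block."""
    index_bits = (num_blocks - 1).bit_length()    # log2(num_blocks)
    offset_bits = (block_bytes - 1).bit_length()  # log2(block_bytes)
    tag_bits = address_bits - index_bits - offset_bits
    data_bits = block_bytes * 8
    return num_blocks * (tag_bits + 1 + data_bits)

# Card 21's example: 32 one-byte blocks, 16-bit addresses
print(direct_mapped_cache_bits(32, 1, 16))  # 640
```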

22
Q

What is a replacement strategy in associative caches?

A

When all blocks in a set are valid, the least recently used (LRU) block is replaced to make space for new data.

23
Q

What is a common issue with direct mapping?

A

If multiple frequently accessed memory addresses map to the same cache block, it can cause excessive cache misses, reducing cache efficiency.

24
Q

How does a fully associative cache address the issue of direct mapping?

A

By allowing any memory address to be placed in any cache block, fully associative caches can better utilise cache space and reduce conflicts, leading to fewer cache misses.

25
Q

How does set associativity optimise cache performance?

A

Set associativity reduces the complexity of fully associative caches by dividing the cache into sets and allowing multiple blocks per set, thus balancing speed and flexibility.

26
Q

What is the valid bit used for in cache blocks?

A

The valid bit indicates whether the data in a cache block is valid (1) or invalid (0). It helps determine if a cache block contains meaningful data.

27
Q
A