Architectures and Operating Systems Flashcards
List memory technology characteristics:
Static RAM (SRAM):
Access Time: 0.5ns – 2.5ns Cost: $500 – $1000 per GB
Dynamic RAM (DRAM):
Access Time: 50ns – 70ns Cost: $10 – $20 per GB
Flash Semiconductor Memory:
Access Time: 5,000ns – 50,000ns Cost: $0.75 – $1 per GB
Magnetic Disk:
Access Time: 5,000,000ns – 20,000,000ns Cost: $0.05 – $0.1 per GB
Ideal Memory:
Access Time: Same as SRAM Capacity and Cost: Same as disk
Moving down the hierarchy, cost per GB decreases; moving up, access speed increases.
What is Cache?
Cache is a small, fast memory (typically SRAM) placed between the CPU and main memory. It stores copies of frequently used data so the CPU can avoid slower main-memory accesses.
What is cache hit?
A cache hit occurs when the data requested by the CPU is found in the cache, resulting in faster data access.
What is cache miss?
A cache miss occurs when the requested data is not found in the cache, requiring the system to retrieve it from a slower memory level.
Formula for hit rate?
Hit rate = (number of cache hits / total number of memory accesses) * 100
Formula for miss rate?
Miss rate = 1 - hit rate (with both expressed as fractions; 100% - hit rate if using percentages)
or
Miss rate = (number of cache misses / total number of memory accesses) * 100
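The two formulas above can be sketched as a pair of helpers (illustrative code, not part of the flashcards; integer counts are multiplied before dividing to keep the percentages exact):

```python
# Sketch: hit and miss rates as percentages, from raw access counts.

def hit_rate(hits: int, total_accesses: int) -> float:
    """Hit rate = (hits / total accesses) * 100."""
    return hits * 100 / total_accesses

def miss_rate(hits: int, total_accesses: int) -> float:
    """Miss rate = 100% - hit rate."""
    return 100 - hit_rate(hits, total_accesses)

print(hit_rate(98, 100))   # 98.0
print(miss_rate(98, 100))  # 2.0
```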
How can you measure cache performance?
AMAT (Average memory access time)
Formula for AMAT?
AMAT = hit time + (miss rate * miss penalty)
A lower AMAT indicates better cache performance.
What is miss penalty?
Miss penalty is the extra time required to fetch data from a lower memory level (like main memory) after a cache miss, including data retrieval and cache update time. Typically tens of clock cycles or more.
Solve this problem:
Say hit rate is 98%, hit time is one clock cycle and the miss penalty is
50 clock cycles. What is AMAT?
AMAT = hit time + (miss rate * miss penalty)
Miss rate = (100 - 98) / 100 = 0.02
AMAT = 1 + (0.02 * 50) = 1 + 1 = 2 clock cycles
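The worked example can be checked with a one-line helper (an illustrative sketch; the function name is my own):

```python
# Sketch: AMAT = hit time + (miss rate * miss penalty), in clock cycles.

def amat(hit_time: float, miss_rate: float, miss_penalty: float) -> float:
    """Average Memory Access Time in clock cycles."""
    return hit_time + miss_rate * miss_penalty

# 98% hit rate -> 0.02 miss rate; 1-cycle hit time, 50-cycle penalty.
print(amat(hit_time=1, miss_rate=0.02, miss_penalty=50))  # 2.0
```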
What are the two types of locality that improve cache performance?
Temporal Locality: Recently accessed items are likely to be accessed again soon. Instructions within a loop or frequently accessed variables.
Spatial Locality: Items near recently accessed data are likely to be accessed soon. Sequential instructions or array elements accessed in order.
What is direct mapping?
Direct mapping assigns each memory location to exactly one cache block based on the index calculated using modulo division:
Cache Index = Memory Address mod 2^k
where k is the number of index bits.
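The index calculation can be sketched directly (assuming block-aligned addresses for simplicity; a real cache first strips the block offset bits):

```python
# Sketch: direct-mapped index = address mod 2^k, k = number of index bits.

def cache_index(address: int, index_bits: int) -> int:
    """Return the cache block index for a memory address."""
    return address % (2 ** index_bits)

# With 5 index bits (32 blocks), addresses 3 and 35 map to the same block:
print(cache_index(3, 5))   # 3
print(cache_index(35, 5))  # 3
```

The collision shown in the usage is exactly the conflict-miss problem of direct mapping discussed later in these cards.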
What is hit time?
Hit time is the time required to access data from the cache, including the time to check the cache tags and retrieve the data. Usually 1-3 clock cycles.
What is a fully associative cache?
A fully associative cache allows any memory address to be placed in any cache block. It requires checking all cache tags for a match, which increases complexity and cost.
What is a set associative cache?
A set associative cache divides the cache into multiple sets, each containing a fixed number of blocks. A memory address maps to a specific set, and data can be placed in any block within that set.
How does set associativity balance between direct and fully associative caches?
Set associativity offers a compromise by limiting the number of possible cache blocks a memory address can map to, reducing complexity compared to fully associative caches while increasing flexibility over direct-mapped caches.
What are the components of a memory address in a set associative cache?
Tag: Identifies the specific memory block.
Set Index: Determines which set the data belongs to.
Block Offset: Specifies the exact byte within the cache block.
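Splitting an address into these three fields is simple bit manipulation; the sketch below uses made-up bit widths (5 index bits, 4 offset bits) for illustration:

```python
# Sketch: decompose an address into (tag, set index, block offset).

def split_address(address: int, index_bits: int, offset_bits: int):
    """Return (tag, set_index, block_offset) for a given address."""
    block_offset = address & ((1 << offset_bits) - 1)          # lowest bits
    set_index = (address >> offset_bits) & ((1 << index_bits) - 1)
    tag = address >> (offset_bits + index_bits)                # remaining bits
    return tag, set_index, block_offset

# 16-bit address 0xABCD with 5 index bits and 4 offset bits:
print(split_address(0xABCD, index_bits=5, offset_bits=4))  # (85, 28, 13)
```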
What is the valid bit in a cache block?
The valid bit indicates whether the data in a cache block is valid (1) or invalid (0). Initially, all valid bits are set to 0.
What happens during a cache hit?
The CPU sends the address to the cache controller.
The cache controller uses the index to locate the set.
It checks the valid bit and compares the tag.
If it matches, the data is sent to the CPU.
What steps occur during a cache miss?
The CPU requests data from the cache.
The cache controller detects a miss.
Data is fetched from the main memory.
The data is copied into the appropriate cache block.
The valid bit is set to 1.
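The hit and miss flows above can be modelled with a tiny direct-mapped cache (an illustrative sketch with one-byte blocks; sizes and names are my own assumptions):

```python
# Sketch: hit/miss flow for a direct-mapped cache with 8 one-byte blocks.

NUM_BLOCKS = 8
valid = [False] * NUM_BLOCKS   # all valid bits start at 0
tags = [0] * NUM_BLOCKS

def access(address: int) -> str:
    index = address % NUM_BLOCKS          # locate the block
    tag = address // NUM_BLOCKS           # identify the memory block
    if valid[index] and tags[index] == tag:
        return "hit"                      # valid bit set and tag matches
    tags[index] = tag                     # miss: fetch from main memory,
    valid[index] = True                   # install the block, set valid bit
    return "miss"

print(access(5))   # miss (cold cache)
print(access(5))   # hit (same address again)
print(access(13))  # miss (13 mod 8 == 5: conflict evicts address 5)
```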
How is the cache size calculated for a direct-mapped cache?
For example:
Number of Blocks: 32
Block Size: 1 byte
Address Size: 16 bits
Each block requires:
Tag: 11 bits (16 address bits - 5 index bits; 32 blocks = 2^5, so 5 index bits, and 1-byte blocks need no offset bits)
Valid Bit: 1 bit
Data: 8 bits
Total Cache Size:
32×(11+1+8)=32×20=640 bits
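The calculation generalises to any power-of-two geometry; a sketch (the helper name is my own) that reproduces the 640-bit answer:

```python
# Sketch: total storage for a direct-mapped cache.
# Total bits = blocks * (tag bits + valid bit + data bits).

def cache_bits(num_blocks: int, block_bytes: int, address_bits: int) -> int:
    index_bits = num_blocks.bit_length() - 1    # log2 of block count
    offset_bits = block_bytes.bit_length() - 1  # log2 of block size
    tag_bits = address_bits - index_bits - offset_bits
    return num_blocks * (tag_bits + 1 + block_bytes * 8)

# 32 blocks, 1-byte blocks, 16-bit addresses:
print(cache_bits(32, 1, 16))  # 640
```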
What is a replacement strategy in associative caches?
When all blocks in a set are valid, the least recently used (LRU) block is replaced to make space for new data.
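LRU replacement within one set can be modelled with an ordered mapping that tracks recency (an illustrative software model, not how hardware implements LRU):

```python
# Sketch: LRU replacement for one set of an associative cache.
from collections import OrderedDict

class LRUSet:
    def __init__(self, ways: int):
        self.ways = ways
        self.blocks = OrderedDict()  # tag -> data; least recently used first

    def access(self, tag: int) -> str:
        if tag in self.blocks:
            self.blocks.move_to_end(tag)     # mark as most recently used
            return "hit"
        if len(self.blocks) >= self.ways:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[tag] = None              # install the new block
        return "miss"

s = LRUSet(ways=2)
print(s.access(1))  # miss
print(s.access(2))  # miss
print(s.access(1))  # hit (tag 1 becomes most recent)
print(s.access(3))  # miss: evicts tag 2, the least recently used
print(s.access(2))  # miss: tag 2 was evicted
```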
What is a common issue with direct mapping?
If multiple frequently accessed memory addresses map to the same cache block, it can cause excessive cache misses, reducing cache efficiency.
How does a fully associative cache address the issue of direct mapping?
By allowing any memory address to be placed in any cache block, fully associative caches can better utilise cache space and reduce conflicts, leading to fewer cache misses.
How does set associativity optimise cache performance?
Set associativity reduces the complexity of fully associative caches by dividing the cache into sets and allowing multiple blocks per set, thus balancing speed and flexibility.
What is the valid bit used for in cache blocks?
The valid bit indicates whether the data in a cache block is valid (1) or invalid (0). It helps determine if a cache block contains meaningful data.