Memory Flashcards
What are the three levels in the memory hierarchy?
Cache, main memory, virtual memory
Temporal locality
- locality in time
- keeps recently accessed data in higher levels of memory hierarchy
Spatial Locality
- locality in space
- when data is accessed, nearby data brought into higher levels of memory hierarchy too
Hit:
data found in that level of memory hierarchy
Miss:
data not found in that level of memory hierarchy (must go to next level)
Hit rate:
# hits / # memory accesses (= 1 - miss rate)
Miss rate:
# misses / # memory accesses (= 1 - hit rate)
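For example (numbers chosen purely for illustration): if 4 out of 5 memory accesses hit, hit rate = 4/5 = 0.8 and miss rate = 1 - 0.8 = 0.2.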
Average Memory Access Time (AMAT):
average time for processor to access data. Given by:
AMAT = t_cache + MR_cache × (t_MM + MR_MM × t_VM)
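Worked example (values assumed for illustration only): with t_cache = 1 cycle, MR_cache = 0.1, t_MM = 100 cycles, MR_MM = 0.01, and t_VM = 1,000,000 cycles, AMAT = 1 + 0.1 × (100 + 0.01 × 1,000,000) = 1 + 0.1 × 10,100 = 1,011 cycles.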
Cache:
- highest level of the memory hierarchy
- fast (about 1 cycle access time)
What is Capacity (C):
number of data bytes in cache
What is block size (b):
bytes of data brought into cache at once
What is number of Blocks (B):
number of blocks in cache (B = C/b)
What is Degree of associativity (N)?
number of blocks in a set
How do you calculate number of sets in cache?
S = B/N
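Worked example (sizes assumed for illustration): with C = 32 KB and b = 32 bytes, B = C/b = 1024 blocks; if the cache is 4-way set associative (N = 4), then S = B/N = 256 sets.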
Three types of cache organisation:
- Direct mapped (1 block per set)
- N-way set associative (N blocks per set)
- Fully associative (all cache blocks in one set)
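As a rough sketch (not from these notes, and ignoring tags and valid bits), the set a byte address maps to can be computed from the block size and number of sets; in a direct mapped cache N = 1, so S = B and each block has exactly one possible location:

```c
#include <stdint.h>

/* Illustrative only: which set a byte address maps to,
   assuming b (block size in bytes) and S (number of sets) are powers of two. */
uint32_t set_index(uint32_t addr, uint32_t b, uint32_t S)
{
    uint32_t block_number = addr / b;   /* drop the byte-offset-within-block bits */
    return block_number % S;            /* direct mapped: S = B; N-way: S = B/N */
}
```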
How do larger blocks in direct mapped cache improve performance?
reduce compulsory misses through spatial locality
How does associativity improve cache performance?
reduces conflict misses
What are the different types of misses?
- compulsory: first time data accessed
- capacity: cache too small to hold all data of interest
- conflict: multiple blocks of data of interest map to the same location (set) in the cache
Miss penalty:
time it takes to retrieve a block from lower level of hierarchy
How are capacity misses reduced?
LRU replacement: least recently used block in a set is evicted
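A minimal sketch of LRU within one set (illustrative only, using a per-block age counter; the Block layout and access_set function are assumptions, not the course's implementation):

```c
#include <stdint.h>

#define N 4   /* degree of associativity, assumed for illustration */

typedef struct {
    int      valid;
    uint32_t tag;
    uint32_t age;   /* larger = less recently used */
} Block;

/* Look up a tag in one set; returns 1 on hit, 0 on miss.
   On a miss, an invalid block is filled if available,
   otherwise the least recently used block is evicted. */
int access_set(Block set[N], uint32_t tag)
{
    for (int i = 0; i < N; i++) {
        if (set[i].valid && set[i].tag == tag) {
            set[i].age = 0;                      /* mark most recently used */
            for (int j = 0; j < N; j++)
                if (j != i) set[j].age++;
            return 1;                            /* hit */
        }
    }
    int victim = 0;                              /* miss: choose block to replace */
    for (int i = 0; i < N; i++) {
        if (!set[i].valid) { victim = i; break; }
        if (set[i].age > set[victim].age) victim = i;
    }
    set[victim].valid = 1;
    set[victim].tag   = tag;
    set[victim].age   = 0;
    for (int j = 0; j < N; j++)
        if (j != victim) set[j].age++;
    return 0;                                    /* miss */
}
```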
What is page size? (VM)
amount of memory transferred from hard disk to DRAM at once
What is address translation? (VM)
determining physical address from virtual address
What is page table? (VM)
lookup table used to translate virtual addresses to physical addresses
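A minimal sketch of the translation itself (illustrative: 4 KB pages are assumed, so the low 12 bits are the page offset and the upper bits are the virtual page number):

```c
#include <stdint.h>

#define PAGE_OFFSET_BITS 12   /* assumed 4 KB pages */

/* page_table[VPN] holds the physical page number (PPN); a real entry
   would also include a valid bit and protection bits. */
uint32_t translate(uint32_t vaddr, const uint32_t page_table[])
{
    uint32_t vpn    = vaddr >> PAGE_OFFSET_BITS;              /* virtual page number */
    uint32_t offset = vaddr & ((1u << PAGE_OFFSET_BITS) - 1);
    uint32_t ppn    = page_table[vpn];                         /* one main memory access */
    return (ppn << PAGE_OFFSET_BITS) | offset;                 /* physical address */
}
```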
What are page table challenges? (VM)
- page table is large
- each load/store requires 2 main memory accesses (one for the translation, one for the data)
- this cuts performance in half
What is a TLB and how does it work?
- Translation Lookaside Buffer is a small cache of the most recent translations
- speeds up address translation by reducing # of memory accesses for most loads/stores from 2 to 1
- small: accessed in <1 cycle
- fully associative
- > 99% hit rate
- however it can’t store many entries (16 - 512)
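A rough sketch of a fully associative TLB lookup (illustrative only; the entry count, struct layout, and tlb_lookup function are assumptions): every entry is compared against the virtual page number, and only on a miss does the page table in main memory have to be read:

```c
#include <stdint.h>

#define TLB_ENTRIES 16   /* small and fully associative, size assumed */

typedef struct {
    int      valid;
    uint32_t vpn;   /* virtual page number */
    uint32_t ppn;   /* physical page number */
} TlbEntry;

/* Returns 1 on a TLB hit (translation found without a main memory access),
   0 on a miss (caller must walk the page table and refill the TLB). */
int tlb_lookup(const TlbEntry tlb[TLB_ENTRIES], uint32_t vpn, uint32_t *ppn)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {   /* fully associative: check every entry */
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *ppn = tlb[i].ppn;
            return 1;
        }
    }
    return 0;
}
```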
How is memory protected in virtual memory?
Each process/program has its own page table with a unique virtual to physical page mapping. Each process can use the entire virtual address space.