CMPSC 311 Test 2 Caching Flashcards
cache
—- a smaller, faster storage device that acts as a staging area for a subset of the data in a larger, slower device.
faster, smaller, larger, slower
fundamental idea of a memory hierarchy: For each k, the —–, —– device at level k serves as a cache for the —–, —– device at level k + 1
locality, k, k + 1, slower, bit
Why do memory hierarchies work? Because of ——, programs tend to access the data at level — more often than they access the data at level ——. Thus storage at level k + 1 can be —-, and thus larger and cheaper per —–
storage, bottom, data, top
big idea of caches: The memory hierarchy creates a large pool of —- that costs as much as the cheap storage near the —-, but that serves —- to programs at the rate of the fast storage near the ——
layers, processors
Most modern computers have multiple —— of caches to manage data passing into and out of the ——-
L1
cache levels:
—–: very fast and small, processor adjacent
L2
cache levels:
—: a bit slower but often much larger
L3
cache levels:
—-: larger still, maybe off-chip. May be shared among processors in a multi-core system.
Memory
cache levels:
—–: slowest, least expensive
Instruction, data
—- caches are different from —– caches
registers, L1, L2, main memory, local secondary storage, remote secondary storage
memory hierarchy, in top-to-bottom order
locality
caches exploit —– to improve performance, of which there are two types.
spatial locality
—— —-: data that is accessed tends to be close to data you have already accessed
temporal (time) locality
—– —- —: data that is accessed is likely to be accessed again soon
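Both kinds of locality show up in even a simple loop. A minimal C sketch (the array a and accumulator sum are illustrative names, not from the cards): stepping through a[i] in order exhibits spatial locality, while reusing sum on every iteration exhibits temporal locality.

#include <stdio.h>

int main(void) {
    int a[1024];
    for (int i = 0; i < 1024; i++)
        a[i] = i;                /* fill the array */

    long sum = 0;
    for (int i = 0; i < 1024; i++)
        sum += a[i];             /* a[i]: neighbors accessed next -> spatial locality  */
                                 /* sum:  reused every iteration  -> temporal locality */

    printf("sum = %ld\n", sum);
    return 0;
}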
spatial
two cache design strategies:
—–: cache items in blocks larger than what was accessed
temporal
two cache design strategies:
—-: keep recently used data around longer
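A minimal sketch of both strategies, assuming a hypothetical 64-byte block and a one-block cache (read_byte, BLOCK_SIZE, and the sizes are illustrative, not from the cards): on a miss the whole block is copied in, not just the requested byte (the spatial strategy), and that block is kept until a different block displaces it (the temporal strategy).

#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 64

static unsigned char memory[4096];              /* larger, slower storage         */
static unsigned char cache_block[BLOCK_SIZE];   /* smaller, faster storage        */
static long cached_block_num = -1;              /* -1 means nothing is cached yet */

static unsigned char read_byte(unsigned long addr) {
    long block_num = (long)(addr / BLOCK_SIZE);
    if (block_num != cached_block_num) {         /* miss: fetch the whole block */
        memcpy(cache_block, &memory[block_num * BLOCK_SIZE], BLOCK_SIZE);
        cached_block_num = block_num;            /* keep it around for reuse    */
    }
    return cache_block[addr % BLOCK_SIZE];       /* hit: serve from the cache   */
}

int main(void) {
    memory[130] = 42;
    printf("%d %d\n", read_byte(130), read_byte(131));  /* second read is a hit */
    return 0;
}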
subset, data, blocks
general cache concepts:
Smaller, faster, more expensive memory caches a —— of the blocks.
—– is copied in block-sized transfer units
Larger, slower, cheaper memory viewed as partitioned into —–
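Partitioning memory into blocks is just address arithmetic. A small sketch (the 64-byte block size is an assumed example): an address splits into a block number, which identifies the block-sized transfer unit, and an offset, which locates the byte inside that block.

#include <stdio.h>

#define BLOCK_SIZE 64   /* assumed block size for illustration */

int main(void) {
    unsigned long addr   = 0x12345;
    unsigned long block  = addr / BLOCK_SIZE;   /* which block holds this byte          */
    unsigned long offset = addr % BLOCK_SIZE;   /* where the byte sits within the block */
    printf("addr 0x%lx -> block %lu, offset %lu\n", addr, block, offset);
    return 0;
}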