Paging Flashcards
Paging
Von Neumann model: execution is sequential in nature and depends on locality of reference
CPU doesn't need the whole process in memory at once; it only needs the pages currently in use
Logical address space of a process can be noncontiguous
Physical memory is allocated to the process as physical memory becomes available
Physical memory - frames (fixed-size blocks, size a power of 2, 512 bytes to 8,192 bytes)
Logical memory - pages, same size as frames; in general there are fewer frames than pages
Paging mechanics
- Divide physical memory into fixed-sized blocks called frames (size is power of 2, between 512 bytes and 8,192 bytes)
- Divide logical memory into blocks of same size called pages
- Keep track of all free frames
- To run a program of size n pages, need to find n free frames and load program
- Set up a page table to translate logical to physical addresses
- Internal fragmentation may occur
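A minimal sketch of these mechanics (a free-frame list plus a per-process page table); the names free_frames, load_program, etc. are illustrative, not from any real kernel:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define NUM_FRAMES 16                    /* illustrative: a tiny physical memory */

    static bool frame_free[NUM_FRAMES];      /* free-frame list (true = available)   */

    /* To run a program of n pages: find n free frames and record the page->frame
       mapping in the page table. The frames need not be contiguous. */
    static bool load_program(size_t n_pages, size_t page_table[]) {
        size_t found = 0;
        for (size_t f = 0; f < NUM_FRAMES && found < n_pages; f++) {
            if (frame_free[f]) {
                frame_free[f] = false;       /* claim the frame              */
                page_table[found++] = f;     /* page 'found' maps to frame f */
            }
        }
        return found == n_pages;             /* fail if not enough free frames */
    }

    int main(void) {
        for (size_t f = 0; f < NUM_FRAMES; f++) frame_free[f] = true;
        size_t pt[4];                        /* page table for a 4-page program */
        printf("loaded: %s\n", load_program(4, pt) ? "yes" : "no");
        return 0;
    }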
Paging Address Translation Scheme
For a given logical address space of size 2^m and page size 2^n, a logical address is divided into:
page number p | page offset d
where p is the high-order m - n bits and d is the low-order n bits
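A worked split as a sketch, assuming m = 32 and n = 12 (4 KB pages); the constants and the example address are made up:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        const unsigned n = 12;                    /* page size 2^12 = 4 KB, m = 32 */
        uint32_t logical = 0x00ABCDEFu;
        uint32_t p = logical >> n;                /* page number: high m - n bits  */
        uint32_t d = logical & ((1u << n) - 1);   /* page offset: low n bits       */
        printf("p = 0x%X, d = 0x%X\n", p, d);     /* prints p = 0xABC, d = 0xDEF   */
        return 0;
    }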
How many page tables does a process have
One
Page conversion logical to physical
CPU generates a logical address (page number + offset)
The physical address refers to a frame (frame number + offset)
One page table per process
In general there are fewer frames than pages
The offset is the distance from the start of the page to the instruction/data being referenced
Offset = displacement
Logical address = (page, offset); physical address = (frame, displacement)
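A sketch of the conversion itself (page_table and PAGE_BITS are assumed names; the actual mapping would come from the OS):

    #include <stdint.h>

    #define PAGE_BITS 12                          /* 4 KB pages                     */

    /* (page, offset) -> (frame, displacement): the offset is copied unchanged;
       only the page number is replaced by the frame number from the page table. */
    uint32_t translate(const uint32_t page_table[], uint32_t logical) {
        uint32_t p = logical >> PAGE_BITS;
        uint32_t d = logical & ((1u << PAGE_BITS) - 1);
        uint32_t frame = page_table[p];           /* one table lookup per reference */
        return (frame << PAGE_BITS) | d;
    }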
Calculate how much memory needs to be allocated for page tables for 100 processes
4 KB pages, 32-bit machine
number of pages per process = 2^32 / 2^12 (32-bit addresses give a 2^32-byte logical address space)
= 2^20 = 1M pages (4 KB = 2^12 bytes)
1M entries * 8 bytes per entry * 100 processes = 800 MB
Page tables can't fit in the CPU, so they must be kept in main memory
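The same arithmetic as a quick sketch (the 8-byte entry size is the assumption used above):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t logical_space = 1ull << 32;                 /* 32-bit addresses */
        uint64_t page_size     = 1ull << 12;                 /* 4 KB pages       */
        uint64_t entries       = logical_space / page_size;  /* 2^20 = 1M pages  */
        uint64_t entry_bytes   = 8;                          /* assumed PTE size */
        uint64_t processes     = 100;
        uint64_t total = entries * entry_bytes * processes;
        printf("%llu MB of page tables\n", (unsigned long long)(total >> 20));  /* 800 */
        return 0;
    }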
2 MA problem
2 memory accesses: must go to memory twice for each memory reference (with contiguous allocation, only once)
- access the page table in memory to get the frame #
- memory access delay to go to the frame and get the instruction
so there's a page-table access delay to get the frame # and a memory access delay to get the data/instruction
2 MA versus CPU speed - why do we need a cache
MA = 100 ns, CPU cycle = 10 ns
2 memory accesses: 100 ns * 2 = 200 ns
CPU does 10 ns of work, then waits ~190 ns on memory; need a cache
Cache/registers run at the same speed as the CPU (registers and CPU share the same clock); see diagram
TLB
TLB = translation look-aside buffer
Special fast hardware cache
Also called associative memory (AM)
TLB is on the CPU chip; made of registers, so it is as fast as the CPU
EAT with 80% TLB hit rate (TLB lookup time assumed negligible)
0.80 * 100 ns + 0.20 * 200 ns = 120 ns
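A sketch of the EAT formula as code; the 100 ns memory access time and hit ratios come from the numbers above, and TLB lookup time is taken as negligible:

    #include <stdio.h>

    /* A TLB hit costs one memory access, a miss costs two (page table + data). */
    static double eat_ns(double hit_ratio, double mem_ns) {
        return hit_ratio * mem_ns + (1.0 - hit_ratio) * 2.0 * mem_ns;
    }

    int main(void) {
        printf("no TLB : %.0f ns\n", eat_ns(0.0, 100.0));    /* 200 ns */
        printf("80%% hit: %.0f ns\n", eat_ns(0.8, 100.0));   /* 120 ns */
        return 0;
    }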
Virtual memory definition, why needed
Separation of user logical memory from physical memory
Needs:
- physical memory is limited
- Multiprocessing - every program requires an address space
- program size may not fit in physical memory; each process has its own page table
Demand paging
pages are brought into memory only as required (if a program has 100 pages of 4 KB each, loading it fully needs 4 KB * 100 = 400 KB, but all 100 pages are not actually needed at once)
a partially loaded program can still execute, so it is no longer constrained by the limits of physical memory
VM (paging) allows address spaces to be shared by several processes
Less I/O needed
Less memory needed
Faster response
More users
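A minimal sketch of the demand-paging check, assuming a hypothetical page-table entry with a present bit; the page-fault path is only stubbed out:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical page-table entry: frame number plus a present bit. */
    struct pte {
        uint32_t frame;
        bool     present;    /* true only while the page is resident in a frame */
    };

    /* A reference to a non-resident page raises a page fault: the OS would find
       a free frame, read the page in from the backing store, and retry. */
    uint32_t demand_lookup(struct pte table[], uint32_t page,
                           void (*page_fault)(uint32_t)) {
        if (!table[page].present) {
            page_fault(page);          /* bring the page in only when needed */
        }
        return table[page].frame;
    }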
PTBR and PTLR
Page table base register (PTBR) - points to the page table, which is held in main memory
Page table length register (PTLR) - indicates the size of the page table
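A sketch of how the two registers are used on a lookup; ptbr and ptlr below are ordinary variables standing in for the hardware registers:

    #include <stdint.h>
    #include <stdlib.h>

    static uint32_t *ptbr;    /* page table base register: table lives in main memory */
    static uint32_t  ptlr;    /* page table length register: number of valid entries  */

    uint32_t frame_for(uint32_t page) {
        if (page >= ptlr) {   /* reference beyond the end of the page table */
            abort();          /* stands in for a trap to the OS             */
        }
        return ptbr[page];    /* this read is the extra memory access       */
    }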
How does TLB (AM) work
AM performs a parallel search
Address translation of (p, d):
TLB stores both the page # and the frame # (a page table entry stores only the frame #)
If p is in an associative register, get the frame # from it
Otherwise go to the page table in main memory and get the frame #
CPU goes to the TLB and memory concurrently
TLB hit - only 1 memory access is needed (the page-table memory request is cancelled); this solves the 2 memory access problem
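A sketch of the lookup path; the linear scan below stands in for the parallel hardware search, and TLB_SIZE, tlb, and page_table are made-up names:

    #include <stdbool.h>
    #include <stdint.h>

    #define TLB_SIZE 16

    struct tlb_entry { uint32_t page, frame; bool valid; };

    static struct tlb_entry tlb[TLB_SIZE];    /* AM: holds (page #, frame #) pairs    */
    static uint32_t page_table[1u << 20];     /* in main memory: frame # per page     */

    uint32_t lookup_frame(uint32_t p) {
        /* Hardware checks all TLB entries at once; a loop models that here. */
        for (int i = 0; i < TLB_SIZE; i++) {
            if (tlb[i].valid && tlb[i].page == p) {
                return tlb[i].frame;          /* hit: no extra memory access needed   */
            }
        }
        return page_table[p];                 /* miss: extra access to the page table */
    }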
Memory protection
Associate a protection (valid/invalid) bit with each frame (i.e., with each entry in the page table)
Valid - the associated page is in the process's logical address space, and thus is a legal page
Invalid - the page is not in the process's logical address space; access is illegal (trap to the OS)
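A sketch of the check, with a hypothetical entry carrying the valid bit:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct entry { uint32_t frame; bool valid; };

    /* Every reference is checked against the valid/invalid bit first. */
    uint32_t protected_lookup(struct entry table[], uint32_t page) {
        if (!table[page].valid) {
            abort();          /* illegal page: stands in for a trap to the OS */
        }
        return table[page].frame;
    }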