Main Memory (9) Flashcards

1
Q

Compiling Programs

A
2
Q

Memory Protection

A
  • Required to ensure correct operation
  • Ensures a process can access only memory locations in its own address space
    • base and limit registers come in handy here (sketch below)
      • used to give each process a separate address space
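A minimal sketch of the base/limit check, with hypothetical register names; on real hardware this comparison is done by the CPU for every memory reference, not in software:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical registers, loaded by the OS when it dispatches the process. */
static uint32_t base_register;   /* first physical address the process may use */
static uint32_t limit_register;  /* size of the process's address range        */

/* Hardware performs the equivalent of this on every access;
 * an out-of-range address traps to the OS.                   */
bool access_is_legal(uint32_t address) {
    return address >= base_register &&
           address <  base_register + limit_register;
}
```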
3
Q

Address Binding

A
  • It would be inconvenient if the first user process always had to load at physical address 0000
    • Addresses are represented differently at different stages
      • source code: symbolic addresses
      • compiled code: addresses bound to relocatable addresses
        • example: “14 bytes from the beginning of the program code”
      • linker or loader binds relocatable addresses to absolute addresses
  • Where binding to real addresses can happen:
    • compile time: if the memory location is known a priori
    • load time: must generate relocatable code if the memory location is not known at compile time (toy example below)
    • execution time: binding delayed until run time if the process can be moved around during execution
      • needs hardware support for address maps
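A toy illustration of load-time binding, assuming a hypothetical load base chosen by the loader:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t relocatable = 14;      /* "14 bytes from the beginning of the code" */
    uint32_t load_base   = 0x4000;  /* hypothetical address picked by the loader */

    /* At load time, relocatable addresses are rewritten to absolute ones. */
    uint32_t absolute = load_base + relocatable;
    printf("absolute address = 0x%x\n", absolute);   /* prints 0x400e */
    return 0;
}
```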
4
Q

Logical vs. Physical address space

A
  • Logical address: generated by the CPU; also referred to as a virtual address
  • Physical address: the address seen by the memory unit
  • Address binding schemes where Logical === Physical:
    • compile-time
    • load-time
  • Address binding schemes where Logical !== Physical:
    • execution-time
5
Q

MMU

A
  • Memory Management Unit
  • Hardware device that maps virtual addresses to physical addresses
  • Simple scheme (sketch below):
    • relocation register := base register
    • the value of the relocation register is added to every address generated by the process before it is sent to memory
  • The user program deals only with virtual addresses
    • the mapping is done at execution time, when the reference is made
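A minimal sketch of the relocation-register scheme with hypothetical values; on real hardware the addition happens inside the MMU on every reference:

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical relocation (base) register, set by the OS at dispatch time. */
static uint32_t relocation_register = 14000;

/* Every virtual address the process generates is translated like this. */
uint32_t mmu_translate(uint32_t virtual_addr) {
    return relocation_register + virtual_addr;
}

int main(void) {
    /* Virtual address 346 maps to physical address 14346. */
    printf("physical = %u\n", mmu_translate(346));
    return 0;
}
```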
6
Q

Dynamic Loading

A
  • The whole program doesn’t need to be in memory to execute; a routine can be loaded when it is first called
  • To achieve this:
    • routines are kept on disk in a relocatable load format
  • Better utilization of main memory
  • Useful when large chunks of code are needed only to handle infrequent cases
  • The OS doesn’t need to support it; it can be implemented in the program design (see the dlopen sketch below)
    • the OS can help, though, by providing libraries for dynamic loading
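On POSIX systems, dlopen/dlsym is one concrete way a program can load a routine only when it is actually needed (a sketch; the library name and symbol are hypothetical):

```c
#include <stdio.h>
#include <dlfcn.h>   /* POSIX dynamic loading; link with -ldl on Linux */

int main(void) {
    /* Load the (hypothetical) library only when the infrequent case is hit. */
    void *handle = dlopen("librarecase.so", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Look up the (hypothetical) routine and call it through a pointer. */
    void (*handle_rare_case)(void) =
        (void (*)(void))dlsym(handle, "handle_rare_case");
    if (handle_rare_case) handle_rare_case();

    dlclose(handle);
    return 0;
}
```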
7
Q

Dynamic Linking

A
  • Static linking: system libraries and program code are combined into the binary image by the loader
  • Dynamic linking (aka shared libraries): linking is postponed until execution time
    • The code contains a stub: a small piece of code that locates the memory-resident library routine and replaces itself with the address of that routine
    • The OS checks whether the routine is already in the process’s address space
      • if not, it adds it to the address space of the process
    • Dynamic linking is particularly useful for system libraries
    • Versioning may be needed!
8
Q

Contiguous allocation

A

Main memory is split into two partitions:

  • resident OS
    • usually in low memory, together with the interrupt vector table (IVT)
  • user processes in high memory
  • Each process gets a single contiguous section of memory
  • limit and base (relocation) registers are used to confine each user process
  • The MMU maps logical addresses dynamically
  • Can allow for things like kernel code being transient and changing size
9
Q

Variable Partition and the Dynamic Storage-Allocation Problem

A
  • The number of processes in main memory at any time is limited
  • Variable partition sizes are necessary for efficiency (sized to each process’s needs)
  • Hole: block of unused memory
  • The OS tracks free and occupied partitions
  • When a new process arrives: allocate memory from a hole large enough to hold it
    • Strategies (first-fit sketched below):
      • First-fit: the first hole that fits
      • Best-fit: the smallest hole that fits
      • Worst-fit: the largest hole
      • First-fit and best-fit are better than worst-fit in speed and storage utilization
  • When a process exits, it frees its partition, which becomes a hole again
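A minimal first-fit sketch over a hypothetical array of holes; real allocators typically keep a linked free list and merge adjacent holes:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical hole descriptor: `size` free bytes starting at `start`. */
typedef struct { uint32_t start; uint32_t size; } hole_t;

/* First-fit: scan the holes in order and carve the request out of the
 * first one that is large enough. Returns the allocated start address,
 * or UINT32_MAX if no hole fits.                                       */
uint32_t first_fit(hole_t *holes, size_t n_holes, uint32_t request) {
    for (size_t i = 0; i < n_holes; i++) {
        if (holes[i].size >= request) {
            uint32_t addr = holes[i].start;
            holes[i].start += request;   /* shrink the hole */
            holes[i].size  -= request;
            return addr;
        }
    }
    return UINT32_MAX;                   /* nothing fits: external fragmentation */
}
```

Best-fit would instead scan all holes and pick the smallest one that fits; worst-fit would pick the largest.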
10
Q

Fragmentation

A
  • External: the sum of the free space is enough to satisfy a request, but it’s not contiguous
    • Can be reduced with compaction
      • possible only if relocation is dynamic and done at execution time
      • I/O is a problem
        • latch the job in memory while it is doing I/O, or
        • do I/O only through OS buffers
  • Internal: allocated memory > requested memory -> the difference is wasted space
  • First-fit analysis (50-percent rule): for N allocated blocks, another N/2 are lost to fragmentation, so about one third of memory may be unusable
  • The backing store has the same fragmentation problems
11
Q

Paging

A
  • A process can be allocated physical memory non-contiguously
    • Avoids external fragmentation
    • Avoids the problem of variable-sized memory chunks
    • Internal fragmentation remains
  • Basically:
    • Divide physical memory into fixed-sized blocks called frames (a power of 2, between 512 B and 16 MB)
    • Divide logical memory into blocks of the same size called pages
    • Keep track of free frames
    • We need N free frames to run a program that needs N pages
    • A page table translates logical addresses to physical addresses
12
Q

Paging Address Translation

A
  • CPU-generated (logical) addresses are divided into:
    • Page number (p): used as an index into the page table to get the base physical address of the frame
    • Page offset (d): combined with that base physical address, it gives the physical address we want

With page size 2^n and a logical address space of 2^m addresses:

  • p is the high-order m - n bits of the logical address
  • d is the low-order n bits (bit-level sketch below)
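A minimal bit-level sketch of the split, assuming a hypothetical 32-bit logical address and 4 KB pages (n = 12, m = 32):

```c
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS 12u                      /* n: 4 KB pages    */
#define PAGE_SIZE   (1u << OFFSET_BITS)      /* 2^n = 4096 bytes */

int main(void) {
    uint32_t logical = 0x00ABC123;           /* example logical address    */

    uint32_t p = logical >> OFFSET_BITS;     /* page number: high m-n bits */
    uint32_t d = logical & (PAGE_SIZE - 1);  /* page offset: low n bits    */

    /* The frame number would come from page_table[p] (hypothetical). */
    uint32_t frame    = 0x42;
    uint32_t physical = (frame << OFFSET_BITS) | d;

    printf("p=0x%x d=0x%x physical=0x%x\n", p, d, physical);
    return 0;
}
```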

13
Q

Paging: calculating internal frag.

A
  • Page size = 2048 B
  • Process size = 72766 B
    • Required memory = 35 full pages + 1086 B, i.e. 36 frames
  • Internal fragmentation = page size - 1086 B = 962 B (worked out in the snippet below)
  • Worst case internal fragmentation = page size - 1 B
  • Would smaller pages be better?
    • Hmm, consider that every page needs an entry in the page table
    • more pages -> more memory spent on page tables
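The same arithmetic spelled out, using the card’s numbers:

```c
#include <stdio.h>

int main(void) {
    unsigned page_size    = 2048;    /* bytes */
    unsigned process_size = 72766;   /* bytes */

    unsigned full_pages = process_size / page_size;              /* 35     */
    unsigned remainder  = process_size % page_size;              /* 1086 B */
    unsigned frames     = full_pages + (remainder ? 1 : 0);      /* 36     */
    unsigned internal   = remainder ? page_size - remainder : 0; /* 962 B  */

    printf("frames=%u, internal fragmentation=%u bytes\n", frames, internal);
    return 0;
}
```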
14
Q

Page table implementation

A
  • The page table is stored in main memory
    • Page table base register (PTBR): points to the page table
    • Page table length register (PTLR): indicates the size of the table
  • Paging requires two memory accesses for every data/instruction access
    • one for the page table entry + one for the data/instruction itself
    • mitigated by translation look-aside buffers (TLBs), a.k.a. associative memory
15
Q

TLB

A
  • Translation Look-Aside Buffer
  • Some TLBs store an address-space identifier (ASID) in each entry: it uniquely identifies the process, providing address-space protection
    • others must be flushed at every context switch
  • Usually small (64 to 1024 entries)
  • On a TLB miss, the translation is loaded into the TLB for faster access next time
    • Various replacement policies
    • Some entries can be wired down, i.e. made permanent

Effective Access Time (EAT) = HitRatio * MemAccessTime + (1 - HitRatio) * 2 * MemAccessTime (worked example below)
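A worked example of the EAT formula, with a hypothetical 100 ns memory access time and the TLB lookup time assumed negligible (as the formula above does):

```c
#include <stdio.h>

/* EAT per the card's formula: a hit costs one memory access,
 * a miss costs two (page table + data/instruction).          */
double eat(double hit_ratio, double mem_access_ns) {
    return hit_ratio * mem_access_ns
         + (1.0 - hit_ratio) * 2.0 * mem_access_ns;
}

int main(void) {
    printf("hit ratio 0.80 -> EAT = %.0f ns\n", eat(0.80, 100.0)); /* 120 ns */
    printf("hit ratio 0.99 -> EAT = %.0f ns\n", eat(0.99, 100.0)); /* 101 ns */
    return 0;
}
```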

16
Q

Paging Memory Protection

A
  • Protection bits in each page-table entry indicate what access the frame allows: read/write/execute (entry layout sketched below)
  • Valid-invalid bit: says whether it is legal for the process to access the page
    • valid means the page is in the process’s logical address space
    • alternatively, the page-table length register (PTLR) can be used
  • Violations result in a trap to the kernel
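A hypothetical page-table entry layout illustrating these bits; real formats are architecture-specific:

```c
#include <stdint.h>

/* One possible 32-bit page-table entry carrying the bits the card
 * mentions (hypothetical layout, for illustration only).          */
typedef struct {
    uint32_t frame_number : 20;  /* physical frame the page maps to      */
    uint32_t valid        : 1;   /* page is in the logical address space */
    uint32_t readable     : 1;
    uint32_t writable     : 1;
    uint32_t executable   : 1;
    uint32_t reserved     : 8;
} pte_t;
```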
17
Q

Shared pages

A
  • Shared code
    • One copy of read-only (reentrant) code shared among many processes
      • e.g. the dynamic linking / shared libraries case
    • Similar to many threads sharing the same process code
    • If sharing of read-write pages is allowed, it can be used for interprocess communication
  • Private code and data
    • Each process keeps its own isolated copy of code and data
18
Q

Page Table Structure Types

A
  • We divide the page table because it may be big, and we may not find enough contiguous space for it
  • Types:
    • Hierarchical: page the page table
      • Can be 2/3/4 levels
      • Example 2-level split (32-bit logical address, 1 KB pages; see the sketch after this list):
        • page number: 22 bits
        • page offset: 10 bits
        • p1 (index into the outer page table): 12 bits
        • p2 (index into the inner page table): 10 bits
        • d (page offset): 10 bits
    • Hashed
      • The virtual page number is hashed and used as an index into the page table
      • Each slot holds a linked list of entries that hash to the same index; the matching entry gives the frame’s base physical address
      • Common for address spaces > 32 bits
    • Clustered page tables
      • similar to hashed, but each entry refers to several pages
        • useful for sparse address spaces (scattered memory references)
      • used for 64-bit address spaces
    • Inverted: track all physical frames instead of logical pages
      • One table shared between all processes
        • takes less space
      • The table is indexed by physical frame number
      • To translate, search for the entry with the given logical page number and pid
        • the index of the matching entry gives the frame’s base address
      • We could use a hash table for better performance
      • Implementing shared memory? Requires mapping a virtual address of each sharing process to the one shared physical address
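A minimal sketch of the two-level split from the example above (32-bit logical address, 1 KB pages; the translation step is only outlined in the comment):

```c
#include <stdio.h>
#include <stdint.h>

#define P2_BITS 10u   /* index into the inner page table */
#define D_BITS  10u   /* page offset                     */

int main(void) {
    uint32_t logical = 0xCAFEBABE;   /* example logical address */

    uint32_t d  = logical & ((1u << D_BITS) - 1);              /* low 10 bits  */
    uint32_t p2 = (logical >> D_BITS) & ((1u << P2_BITS) - 1); /* next 10 bits */
    uint32_t p1 = logical >> (D_BITS + P2_BITS);               /* top 12 bits  */

    /* Translation: outer_table[p1] -> inner table,
     * inner_table[p2] -> frame, then frame:d -> physical address. */
    printf("p1=0x%x p2=0x%x d=0x%x\n", p1, p2, d);
    return 0;
}
```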
19
Q

Swapping

A
  • A process can be swapped temporarily out of memory to secondary storage and then brought back for continued execution
    • The total memory space of all processes can exceed physical memory
  • Roll out, roll in: variant used with priority-based scheduling
    • lower-priority processes get swapped out to let higher-priority ones in
  • The system maintains a ready queue of processes that are ready to run and have their images on disk
  • Most of the swap time is transfer time
  • Does a swapped-out process return to the same physical addresses?
    • Depends on the address-binding method and on pending I/O
  • Pending I/O problem:
    • can’t swap out a process with pending I/O that targets its user memory
    • solution: double buffering
      • always transfer to kernel-space buffers first, then to the I/O device
      • adds overhead
  • Some OSes use modified versions that enable swapping only when necessary
20
Q

Context Switch with Swapping

A
  • If swapping is needed, a context switch can cost a lot of time
  • Example: swapping a 100 MB process using an HDD with a transfer rate of 50 MB/s
    • Swap-out time: 100 MB / 50 MB/s = 2000 ms
    • Swap-in time for a process of similar size: 2000 ms
    • Total = 4 s!
  • To reduce this time, reduce the amount of memory actually swapped:
    • system calls let processes inform the OS of their memory use: request_memory(), release_memory()
21
Q

Swapping on Mobile Systems

A
  • Typically not supported
  • Flash memory is small and supports a limited number of write cycles
  • Other methods of freeing memory are used instead:
    • iOS asks apps to voluntarily relinquish allocated memory:
      • read-only data can be discarded and reloaded from flash when needed
      • failure to comply can result in termination
    • Android terminates apps, but first saves their state to flash for a fast restart
    • Both still support paging