Exam 2 Flashcards
What is Memory Management?
It maps each process's virtual address space onto physical memory, with support from the OS (and hardware).
Typically, the virtual address space is too big to fit entirely in physical memory, so only a portion can be resident at a time.
Memory management keeps the virtual addresses that cannot fit in physical memory on disk and brings them in when needed.
What are Page Tables?
A data structure used by a virtual memory system in the OS to map between virtual addresses and physical addresses.
Computer memory is NOT simplistic where all programs fit into memory and each program occupies a continuous part of memory. Why?
- This would make memory allocation very difficult.
- It would be hard to make sure apps didn’t write over/run into each other.
- It would be hard to efficiently and consistently find the free space you need for an app.
What is the Fixed Partition Scheme for memory management and why doesn’t it work?
IDEA: memory is loaded into partitions such that one process is loaded into each partition. The processes are assigned a partition at load time and the loader rewrites the memory addresses. New processes wait in a queue until a sufficiently large partition becomes available. When you compile a program, all you need to do is load it into memory, and change all virtual addresses to physical addresses by adding a fixed offset.
ADVANTAGES:
- Simple
- Relatively little hardware support needed
DISADVANTAGES:
- Have to decide partition boundaries in advance.
- Inflexible.
- Inefficient - maybe the program doesn’t need all its content at the start but it’s still loaded in and taking space.
- You need to know memory requirements in advance, but this is often hard to know. You can’t use dynamic memory allocation if you do this.
What is the Base & Limit Registers solution for memory management, and why doesn’t it work?
Basic idea: hardware support - two special registers (for user-mode processes)
- A base register, automatically added to each address as it is used. The hardware adds the base register to each virtual address to form the actual physical address.
- A limit register, against which all accesses are tested. This is the upper bound address that a program cannot go beyond. This is a way to prevent programs from running over each other.
DISADVANTAGES:
- Requires hardware support to do an “add” and “compare” on every access.
- It leads to increased overhead.
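The add-and-compare done on every access can be sketched in a few lines. This is a minimal illustration, not a real MMU; the register values are made up for the example.

```python
# Hypothetical register values for one user-mode process.
BASE = 0x4000   # physical address where this process's partition starts
LIMIT = 0x1000  # size of the partition; virtual addresses must stay below this

def translate(virtual_addr):
    """The hardware's per-access work: one compare, then one add."""
    if virtual_addr >= LIMIT:      # the "compare" against the limit register
        raise MemoryError("access beyond limit register")
    return BASE + virtual_addr     # the "add" of the base register

print(hex(translate(0x10)))   # -> 0x4010
```

Doing this compare and add on literally every memory access is the overhead the disadvantages above refer to.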
What is the biggest challenge that Dynamic Memory Allocation presents?
It can create holes in physical memory. If one process ties up a block of memory and then leaves, another process may be able to take that block if it’s the same size or smaller, but if it’s bigger it can’t use any of that space.
To track the holes, we use Bitmaps and Linked Lists.
What are Bitmaps and how do they work?
Suppose memory is divided into chunks of 10kb. We maintain a vector of 0's and 1's that specifies availability. The i'th bit tells us whether the i'th chunk is in use (1) or free (0).
For example, look at 20 bits (written with chunk 0 as the rightmost bit):
00000011 00111110 0011
The OS occupies the first 20kb (chunks 0 and 1), so the two rightmost bits are 1 and 1.
The next 30kb are free: the three 0's immediately to the left of the OS's 11.
If a new app comes in and requests 20kb of memory, we just scan the bitmap for 2 consecutive 0's and put the app there.
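The scan for consecutive free chunks can be sketched directly. The bitmap below is illustrative (written here left-to-right, index 0 first), not the exact one from the example above.

```python
# Bitmap allocator sketch: 10 KB chunks, bit i = 1 means chunk i is in use.
CHUNK_KB = 10
bitmap = [1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0]

def find_free(bitmap, k):
    """Linear scan for k consecutive 0 bits; return the starting chunk index,
    or None if no run is long enough. This scan is exactly the slow search
    that makes bitmaps unattractive for large memories."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == k:
            return i - k + 1
    return None

start = find_free(bitmap, 2)   # a 20 KB request needs 2 consecutive free chunks
print(start)                   # -> 2 (chunks 2 and 3 are free)
```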
What are the advantages and disadvantages of using Bitmaps?
ADVANTAGE: simple to implement.
DISADVANTAGE: slow to search for free bits. An allocation of k units requires scanning for k consecutive 0's, which may mean walking most of the bitmap.
How are Linked Lists used to manage memory?
Every record in the Linked List has a tag marking it as a hole or a process (H, or the process ID), a starting location, a size, and a pointer to the next record. Say the current state is:
(H,2,3), (B,5,5), (H,10,2), (D,12,2), (H,14,6)
In the first record above, H means it is a HOLE. The 2 is the starting location: an offset of 2 chunks = 2*10kb = 20kb. The 3 is the size in chunks, meaning this hole is 30kb wide.
When you have an app that needs to be loaded into physical memory, we just look for a hole that is large enough to take the new entry.
The PCB of a process points to a record storing its location in memory. Example: process B has a pointer to B,5,5.
You also need to use doubly linked lists because when a process terminates, neighboring blocks need to be examined. You may have to merge multiple adjacent blocks into 1 contiguous large hole.
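The coalescing step can be sketched as follows. This is a simplified illustration: a plain Python list of (tag, start, size) records stands in for the doubly linked list, using the example state from above.

```python
# Segment list from the example: H = hole, letters = process IDs.
segments = [("H", 2, 3), ("B", 5, 5), ("H", 10, 2), ("D", 12, 2), ("H", 14, 6)]

def free(segments, pid):
    """Mark pid's block as a hole, then merge it with any adjacent holes."""
    out = [("H", s, z) if tag == pid else (tag, s, z) for tag, s, z in segments]
    merged = [out[0]]
    for tag, s, z in out[1:]:
        ptag, ps, pz = merged[-1]
        if tag == "H" and ptag == "H":        # neighbor is also a hole: coalesce
            merged[-1] = ("H", ps, pz + z)
        else:
            merged.append((tag, s, z))
    return merged

print(free(segments, "D"))
# -> [('H', 2, 3), ('B', 5, 5), ('H', 10, 10)]
```

Freeing D merges three adjacent records (2 + 2 + 6 chunks) into one contiguous 100kb hole, which is why the real structure needs to examine both neighbors.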
What are the three strategies for seeing which block of open memory is the most appropriate fit for a new process?
Consider the scenario where you have a 15kb program to store and you have holes of sizes:
- 30kb
- 20kb
- 60kb
- 30kb: First-Fit
- 20kb: Best-Fit
- 60kb: Worst-Fit
First, Best, and Worst are the three kinds of fits.
Let {H_i | i = 1,…,n} be the unused blocks and k the size of the requested block.
First-Fit: select the first H_i such that size(H_i) >= k. Good because we usually don't need to traverse the whole list to find a hole.
Best-Fit: select H_j such that size(H_j) >= k and size(H_j) <= size(H_i) for every i with size(H_i) >= k. In other words, select the smallest block that is big enough. This gets the tightest fit but requires scanning the whole list, so it is slower.
Worst-Fit: Select the largest block (reduce size of biggest hole).
Your choice of strategy affects how memory fragments over time. There is no consensus on which is best.
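The three strategies applied to the 30/20/60/30 kb scenario above can be written in one line each:

```python
# Holes in list order (sizes in KB) and a 15 KB request.
holes = [30, 20, 60, 30]
k = 15

first_fit = next(h for h in holes if h >= k)   # first hole big enough
best_fit  = min(h for h in holes if h >= k)    # tightest hole big enough
worst_fit = max(holes)                         # biggest hole available

print(first_fit, best_fit, worst_fit)   # -> 30 20 60
```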
What is the External Memory Fragmentation Problem?
This is free space the OS is aware of but that is too small to give to new programs.
Small holes are left behind in memory as processes are added and removed. The holes are too small for storing a contiguous program.
Example: We use best-fit to put the 15kb program in the 20kb spot. This leaves 5kb open for another process to take, but no other process can use it because it’s too small a slot.
What is Internal Memory Fragmentation?
This is when we allocate more kb for a program than it ends up using, thus wasting the space it doesn’t use.
Example: We thought process B needs 50kb of memory. When it runs, it ends up only needing 40kb. 10kb is unused and the OS cannot reallocate it to another process.
What is the Buddy Algorithm (brief overview)?
Used by Linux to allocate memory.
In the Buddy Algorithm, Linux maintains free memory in blocks whose sizes (in pages) are powers of 2. Doing this makes it easy to search for a block of the size you want. Think of a page as a discrete fixed-size unit of memory that can be allocated to a process. Pages are typically 4kb (this is tunable).
Describe the Buddy Algorithm in depth.
When a request for memory comes in, it is rounded up to the next power of 2 (e.g. 2, 4, 8, etc.).
For a request of 8 pages, we repeatedly halve a block of memory until we reach the size we need. Say we started with a 64-page block. We cut it into 32 & 32, then cut one of the 32's into 16 & 16, and then cut one of the 16's into 8 & 8. If we then get another request for 8 pages, it can take the other 8 we got from breaking up that 16.
If we then get a request for a size of 4 pages, we take the unused 16, and break it into 2 8’s and then one of the 8’s into 2 4’s, bringing us to a page of size 4.
The reverse also happens. Say we release an 8-page block and later its buddy (the other 8-page block it was split from) is released too. Then both are merged back into a single block of size 16.
Because the buddy algorithm allocates in powers of 2, the OS can maintain a fixed-size array of free lists, one entry per block size, ordered by size for searching. For example, if you are interested in blocks of 8 pages, go to the entry for size 2^3; each record on that list stores the block's beginning offset, measured in pages from the start of memory (24 in the lecture's example). Likewise, if the user asks for a chunk of size 32, go to the array entry for size 2^5.
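The rounding and splitting steps above can be sketched as follows. This is a teaching sketch of the splitting logic only, not the kernel's implementation (it doesn't track the free lists).

```python
def round_up_pow2(n):
    """Round a request up to the next power of 2, as the buddy algorithm does."""
    p = 1
    while p < n:
        p *= 2
    return p

def split_for(request_pages, free_block_pages):
    """Return the block sizes produced while carving a (rounded-up) request out
    of one larger free block; the last size is what the requester receives."""
    need = round_up_pow2(request_pages)
    block = free_block_pages
    sizes = []
    while block > need:
        block //= 2      # split into two buddies; keep one, leave the other free
        sizes.append(block)
    return sizes

print(split_for(8, 64))   # -> [32, 16, 8]: 64 -> 32+32, a 32 -> 16+16, a 16 -> 8+8
```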
*See lecture slides for visuals - that’s much clearer.
What is one of the biggest downsides of the Buddy Algorithm?
It can cause internal fragmentation.
Example: If an app asks for a 65-page chunk, we have to round up to the next power of two above 64, which is 128. That means 63 pages are essentially wasted, since the OS allocates the entire 128-page chunk even though it knows only 65 will be used.
What is Slab Allocation?
This is a Linux approach to address the internal fragmentation problem of the Buddy Algorithm.
This is where we take chunks allocated by the buddy algorithm, carve out slabs (smaller units) from them, and manage those smaller units separately.
Linux also mitigates issues with internal fragmentation by using object caches.
What is an Object Cache?
Object Caches: deal with frequent creation/destruction of objects of certain types. An object cache is essentially a pointer to one or more slabs that store objects of the same type. If a process allocates and deallocates objects of some type frequently, then when an object is freed, rather than returning the memory for general allocation, the OS may cache it. If the same type of object is allocated again later, the OS can hand it out from the cache much more easily (and faster).