All Lectures Flashcards

1
Q

What are the goals of an OS?

A
  • Allow users to use applications
  • Allow applications to use computer resources
  • Allow users and applications to interact
2
Q

What are the three key abstractions of an OS?

A

Process
-Instance of a program

Memory (Address) Space
-The range of memory locations that a process can access

File
-A stream of data

3
Q

How does each layer in the system implement its set of services?

A

It implements its services by using the layer below it and providing its services to the layer above it.

4
Q

Identify and describe the two modes of the CPU

A

Supervisor (Kernel) mode:

  • Access to all hardware, I/O and memory
  • Can execute ALL CPU instructions

User mode:

  • The program running on the CPU is restricted from accessing all devices or memory
  • Can only execute a restricted set of CPU instructions
5
Q

What is the kernel?

A

The part of the O/S that is loaded into memory when the computer starts up.
Resides in memory at all times.
Runs in supervisor mode.

6
Q

What are the roles that the kernel plays?

A
  • protects the hardware and processes
  • manages hardware and resources
  • provides an execution environment for the processes
  • creates and runs processes
  • services process requests
7
Q

Define process

A

A process is a running instance of a program in memory.

8
Q

How does the kernel get control back from a process?

A

Through system calls and interrupts

9
Q

What happens when there is an interrupt?

A

The CPU switches to kernel mode and uses the interrupt ID to index into the interrupt table, invoking the interrupt handler. When the handler finishes, the CPU switches back to user mode and returns to where the process was before the interrupt.

10
Q

What is the purpose of interrupts?

A
  • Help the kernel gain control of the CPU
  • Notify the kernel of I/O
  • Help processes make requests to the kernel in a controlled way
  • Notify the kernel of errors
11
Q

How does the kernel get control of the CPU if no interrupts occur?

A

The timer.

12
Q

What is the importance of the timer?

A

The timer allows the kernel to gain control of the CPU on a regular basis. No process can monopolize the CPU because the kernel can switch to another process.

13
Q

How do processes make requests to the kernel?

A

Using a trap (software interrupt).

14
Q

What is context switching?

A

When an interrupt handler runs, the current process has to be context switched out and the kernel context has to be context switched in.

15
Q

How does context switching work?

A

The kernel copies all registers from the CPU to memory, initializes the registers to the values the kernel expects, switches to the kernel stack, and runs the handler.

To switch back:
Switch to the process's stack, copy the registers from memory back to the CPU, and return to where the process left off.

16
Q

What is a process composed of?

A

  • CPU context
  • Memory space
  • Thread of execution

17
Q

Describe the CPU context of a process

A

General registers
Program counter
Special purpose registers

18
Q

Describe the memory space of a process

A

Stack: local variables and return addresses
Heap: dynamically allocated memory
Data: global and static variables
Code: program code

19
Q

Describe the 5 process states.

A
New: process created
Ready: ready to run
Running: running on the CPU
Blocked: waiting for a service
Terminated: killed and needs to be cleaned up
20
Q

Describe the various process transitions

A

New->ready: admitted to the scheduler
Ready->running: dispatched by the scheduler
Running->ready: interrupt due to timer or I/O
Running->blocked: service request (system call)
Blocked->ready: service request completed
Running->terminated: exit or kill

21
Q

How is a new process created?

A

Process creation is initiated by a process and performed by the kernel.
Parent process makes a fork() call, and then the kernel:
-makes a copy of the parent
-allocates a new PCB and PID
-copies parent’s PCB to new PCB
-sets new PID
-Marks new process as ready
-Sets return value of parent process to child PID
-Sets return value of child process to 0
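For illustration, a minimal C sketch of how the two return values are seen after fork(); the printed messages are made up, not from the lecture:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                  /* kernel copies the parent */
        if (pid < 0) {
            perror("fork");                  /* creation failed */
            return 1;
        } else if (pid == 0) {
            printf("child: my PID is %d\n", getpid());   /* child sees return value 0 */
        } else {
            printf("parent: child PID is %d\n", pid);    /* parent sees the child's PID */
            wait(NULL);                      /* reap the child when it exits */
        }
        return 0;
    }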

22
Q

What is the ready queue?

A

An abstract queue which stores pointers to PCBs in the order that the processes are scheduled to run

23
Q

What is a scheduler?

A

The scheduler is responsible for deciding which process to run next. It is called by the OS whenever a process-related event occurs.

24
Q

What is the goal of the scheduler?

A

Maximize CPU utilization and throughput. Minimize turnaround time, waiting time, and response time.

25
Q

What is non-preemptive scheduling vs preemptive scheduling?

A

Non-preemptive scheduling does not use a timer: a process runs until it blocks or exits. Preemptive scheduling uses a timer so the kernel can take the CPU away from a running process.

26
Q

What is meant by CPU Bound vs I/O Bound

A

A process is CPU bound if it predominantly does computation and little I/O: little blocking and lots of running.

A process is I/O bound if it predominantly does I/O and little computation: lots of blocking and little running.

27
Q

Describe a FIFO scheduler and its characteristics

A

First come, first served

  • simple implementation
  • non preemptive
  • indiscriminate between CPU and I/O bound processes
  • susceptible to the convoy effect
28
Q

Describe a SJF scheduler and its characteristics

A

Shortest job first: priority of job is based on its CPU burst length

  • Provably optimal (minimizes average waiting time)
  • nonpreemptive
  • hard to implement
29
Q

What is an exponential moving average and why would we need to know that?

A

An exponential moving average is a weighted average so that the most recent term has more weight than all previous terms:

prediction for the next burst: τ(n+1) = α·t(n) + (1 − α)·τ(n), which expands to the weighted sum Σ α(1 − α)^i · t(n−i)
t(n) is the nth (most recent) burst and α is typically 1/2

We can use an exponential moving average to predict the CPU burst time.
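A small C sketch of the recurrence with α = 1/2; the initial guess and burst lengths below are made up for illustration:

    #include <stdio.h>

    /* prediction(n+1) = a * burst(n) + (1 - a) * prediction(n) */
    static double predict(double prediction, double burst, double a) {
        return a * burst + (1.0 - a) * prediction;
    }

    int main(void) {
        double tau = 10.0;                   /* arbitrary initial guess */
        double bursts[] = {6.0, 4.0, 6.0};   /* made-up measured bursts */
        for (int i = 0; i < 3; i++) {
            tau = predict(tau, bursts[i], 0.5);
            printf("prediction after burst %d: %.2f\n", i + 1, tau);
        }
        return 0;                            /* prints 8.00, 6.00, 6.00 */
    }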

30
Q

What is priority based scheduling?

A

Single priority ready queue of all processes. Priority of job assigned by user. Process gets to run until it requests a kernel service.

  • nonpreemptive
  • predictable
31
Q

What are the problems with non-preemptive schedulers?

A

Starvation: low priority or long processes may never get to run

32
Q

What is RR scheduling?

A

Round Robin scheduling: a FIFO queue where each process runs until there is an interrupt, it ends, or it uses up its quantum of time (the timer goes off).

  • simple implementation
  • indiscriminate between CPU and I/O bound processes
  • high waiting time
  • convoy effect
33
Q

Describe MLFB scheduler and its characteristics

A

System adjusts priority based on CPU burst. All processes start out in the highest priority queue, if they use up their quantum they get kicked to the next queue down. The lowest queue is scheduled using RR.

  • all processes get to run
  • I/O bound processes get priority
  • used in many systems today
34
Q

What is parallel execution vs concurrent execution?

A

Parallel execution is when 2 or more operations are performed simultaneously.

Concurrent execution is when 2 or more operations appear to be performed simultaneously.

35
Q

What is the definition and benefits of cooperating processes?

A

Processes are cooperating if they can affect or be affected by other processes.

Benefits include:

  • information sharing
  • modularity
  • computation
  • security
36
Q

What are the requirements for process cooperation?

A

IPC (inter-process communication)

Process synchronization

37
Q

Describe the composition of a thread

A

A thread comprises

  • a single thread of execution
  • a CPU context
  • a stack
  • shared access to the remaining memory of the process (with all other threads)
38
Q

Why use threads?

A

Scalability: easier to use multiprocessor systems
Sharing: easier to share resources and data in a concurrent setting
Economy: easier and cheaper to create a thread
Responsiveness: easier to create more responsive applications

39
Q

How do threads access resources?

A

Resources are allocated to the process.

All threads belonging to the same process can access the resource.

40
Q

How is a thread created?

A

A stack is allocated from the heap of an existing process (cheap). A TCB is initialized.
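A minimal sketch of thread creation with POSIX threads; the worker function and its argument are placeholders:

    #include <pthread.h>
    #include <stdio.h>

    /* The new thread gets its own stack and CPU context but shares the
     * process's globals and heap. */
    static void *worker(void *arg) {
        printf("thread %ld running\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        if (pthread_create(&tid, NULL, worker, (void *)1L) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
        pthread_join(tid, NULL);   /* wait for the thread to finish */
        return 0;
    }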

41
Q

What is a critical section?

A

The piece of code in which race conditions can occur.

42
Q

What is destructive interference?

A

Concurrent operations resulting in incorrect program behaviour

43
Q

What is a race condition?

A

Situations that can result in destructive interference.

44
Q

What are our solution goals to the critical section problem (4 goals)

A

Mutual exclusion
Scheduler independent
Allows progress
Starvation free

45
Q

What is mutual exclusion?

A

No two threads/processes may be simultaneously in a critical section

46
Q

What is scheduler independent?

A

No scheduling assumptions may be made about threads/processes

47
Q

What is meant by "allows progress"?

A

No thread/process outside a critical section may block other processes

48
Q

What is meant by "starvation free"?

A

No process should have to wait forever

49
Q

What is a lock?

A

A lock is a mutual exclusion mechanism that protects a critical section by preventing other threads from entering while one thread is inside. It allows waiting threads to eventually enter.

50
Q

What is a spin lock?

A

A spin lock is a lock mechanism that requires the thread to spin in a loop testing a condition of some sort.
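A sketch of a spin lock built on C11 atomics (an assumption about the toolchain; real code would typically use pthread_spin_lock or a mutex):

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    static void spin_lock(void) {
        /* Busy-wait: keep atomically testing-and-setting the flag
         * until the holder clears it. */
        while (atomic_flag_test_and_set(&lock))
            ;
    }

    static void spin_unlock(void) {
        atomic_flag_clear(&lock);
    }

Usage is simply spin_lock(); /* critical section */ spin_unlock(); — the waiting thread burns CPU cycles instead of sleeping.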

51
Q

What are the advantages and disadvantages of letting the OS/System be responsible for entering/leaving critical sections?

A

Advantages:

  • no race conditions in critical sections
  • programmers don’t have to debug critical sections
  • code becomes simpler

Disadvantages:

  • code becomes system dependent
  • routines are more expensive because of more system calls
52
Q

What’s wrong with locks?

A

Many implementations do not allow a thread to acquire a lock it already holds. Many implementations require that the thread that owns the lock be the one to release it.

53
Q

What is a semaphore?

A

An integer variable instead of a boolean lock variable.
Represents number of pending wakeups and sleeping processes.
Two atomic operations: acquire, release.
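A sketch with POSIX semaphores, where sem_wait/sem_post play the acquire/release roles; the initial value of 1 is an assumption that makes it behave like a lock:

    #include <semaphore.h>

    sem_t sem;

    void critical(void) {
        sem_wait(&sem);   /* acquire: decrement, block (sleep) if the count is 0 */
        /* ... critical section ... */
        sem_post(&sem);   /* release: increment, waking one sleeping waiter */
    }

    int main(void) {
        sem_init(&sem, 0, 1);   /* value 1 -> binary semaphore (lock-like) */
        /* ... create threads that call critical() ... */
        sem_destroy(&sem);
        return 0;
    }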

54
Q

What makes monitors unique from other abstractions?

A

Other abstractions such as semaphores or locks are language independent: all you need to do to acquire/release a lock or semaphore is call a function.
Monitors are a language-level construct that hides the locks and semaphores from the user.

55
Q

What is a condition variable?

A

A condition variable is declared inside a monitor and represents queues of processes/threads waiting for a specific condition to occur.

Supports two operations: wait and signal

56
Q

How does a condition variable work?

A

When a process needs to wait for something to occur it performs a wait on the condition variable and gets added to the condition variable’s queue. Then it leaves the monitor and is put to sleep.

Once the condition is met the process meeting the condition performs a signal on the variable which wakes the process at the head of the queue. Then the current process leaves the monitor allowing the next woken process to enter.
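A sketch of the wait/signal pattern using POSIX condition variables; the `ready` flag is an assumed condition, and the mutex stands in for the monitor's implicit lock:

    #include <pthread.h>

    pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    int ready = 0;                        /* the condition being waited for */

    void *waiter(void *arg) {
        pthread_mutex_lock(&m);           /* enter the "monitor" */
        while (!ready)
            pthread_cond_wait(&cv, &m);   /* releases m, sleeps, reacquires m on wakeup */
        /* ... the condition now holds ... */
        pthread_mutex_unlock(&m);
        return NULL;
    }

    void *signaller(void *arg) {
        pthread_mutex_lock(&m);
        ready = 1;                        /* make the condition true */
        pthread_cond_signal(&cv);         /* wake the waiter at the head of the queue */
        pthread_mutex_unlock(&m);         /* leave, letting the woken thread proceed */
        return NULL;
    }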

57
Q

What are the four conditions for deadlock?

A

Mutual exclusion:
A resource can be held by only one process at a time

Hold and Wait:
A process holds the resources it has already acquired while waiting for the remaining resources to become available

No Preemption:
Processes cannot be forced to release a resource

Circular Wait:
There is a set of processes and resources such that there is a cycle in the resource allocation graph that cannot be broken.

58
Q

What are the four ways to handle deadlock?

A

Do nothing.
Deadlock prevention.
Deadlock Avoidance.
Deadlock Detection and Recovery.

59
Q

What are some options we have for preventing hold and wait?

A
  • Allow each process to hold only one resource at a time. Requires a lot of excess copying.
  • Processes must be allocated all their resources at the start of execution. Requires programmers to know up front all resources that may be required
  • Processes may not own any resources when requesting resources. Processes may starve due to the combination of resources not being made available
60
Q

How can we prevent the no-preemption condition?

A

If a process is holding resources and requests another resource that is not available, then all of its resources are released and added to the request list. Some resources cannot be preempted. Starvation could occur.

Preempt resources only if they are requested by another process… but then run into the same problems as in option 1

61
Q

How can we prevent circular wait?

A

Order resources and require that they be acquired in ascending order. Easy to implement but requires that developers write their code to follow the order.
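A sketch of the ordering rule with two pthread mutexes (the names are illustrative): because every thread acquires A before B, no cycle can form in the wait-for graph:

    #include <pthread.h>

    pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;   /* ordered first  */
    pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;   /* ordered second */

    void use_both(void) {
        /* Every thread acquires in ascending order: A, then B. */
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);
        /* ... use both resources ... */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
    }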

62
Q

What is a safe state?

A

A system is in a safe state if the sequence of resource requests by executing processes will not lead to deadlock (no cycles in the resource graph).

63
Q

What is an unsafe state?

A

A system is in an unsafe state if the sequence of resource requests by executing processes may lead to deadlock.

64
Q

What is a deadlock state?

A

A system is in a deadlocked state if a cycle has formed in the resource allocation graph that cannot be broken.

65
Q

List the four types of IPC

A

Shared Memory
Message Passing
Signals
Pipes

66
Q

What is shared memory (IPC)?

A

Memory shared between processes.
Processes can communicate by accessing shared memory.

Problems:
-Can’t be used across multiple machines
-Hard to debug
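A sketch using a POSIX named segment (shm_open + mmap); the segment name and size are made up:

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        /* Create (or open) a named segment other processes can also open. */
        int fd = shm_open("/demo_segment", O_CREAT | O_RDWR, 0600);
        if (fd < 0) return 1;
        ftruncate(fd, 4096);                  /* size the segment to one page */

        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) return 1;

        strcpy(p, "hello from process A");    /* visible to any process that maps
                                                 the same name */
        munmap(p, 4096);
        close(fd);
        return 0;
    }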

67
Q

What is message passing (IPC)?

A

To communicate processes ask the system to send a message on their behalf (ex. the Internet).

A message queue stores messages for the processes. To send a message processes must know the ID or name of the process they are sending it to. To receive a message it is not always necessary for the process to explicitly specify the receiver.

68
Q

What are Synchronous vs Asynchronous sends?

A

In a synchronous send the sender waits to ensure the receiver has gotten the message before becoming unblocked. In an asynchronous send, the sender does not wait to see if the receiver has gotten the message.

69
Q

What are pipes?

A

Pipes are a special form of message passing where two processes establish a direct (typically unidirectional) connection. They can pass messages or data streams to each other. Pipes can be either local or interhost.
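A sketch of a local, unidirectional pipe between a parent and its child, using pipe() and fork():

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        pipe(fds);                 /* fds[0] = read end, fds[1] = write end */

        if (fork() == 0) {         /* child: reads from the pipe */
            char buf[32];
            close(fds[1]);
            ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child got: %s\n", buf);
            }
        } else {                   /* parent: writes into the pipe */
            close(fds[0]);
            write(fds[1], "hello", 5);
            close(fds[1]);
        }
        return 0;
    }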

70
Q

What are signals (IPC)?

A

Processes can asynchronously notify other processes, all processes must be running on the same machine and controlled by the same OS. One process asks the OS to notify another process on its behalf.
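A sketch of asynchronous notification with a POSIX signal: one process installs a handler for SIGUSR1 and another asks the kernel to deliver it with kill(); the messages are illustrative:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void on_usr1(int sig) {
        (void)sig;
        /* only async-signal-safe calls here; write() is safe, printf is not */
        write(1, "got SIGUSR1\n", 12);
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_usr1;
        sigaction(SIGUSR1, &sa, NULL);   /* ask the kernel to run on_usr1 on SIGUSR1 */

        printf("send me a signal: kill -USR1 %d\n", getpid());
        pause();                         /* sleep until a signal arrives */
        return 0;
    }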

71
Q

What is the definition of a file?

A

A stream or sequence of bytes

72
Q

What are the types of files?

A

Regular, directories, hardware devices, virtual devices, pipes and sockets

73
Q

Describe a regular file.

A

A sequence of bytes stored on secondary storage. Readable, writable, and seekable. Can be accessed sequentially or randomly.

74
Q

Describe a directory?

A

Special files containing lists of other files and directories, also stored in secondary storage. Only accessible via special system calls. Map file names to file-system identifiers. Typically induce a tree-like structure.

75
Q

What is hierarchical file organization?

A

Organizing files in a tree-like hierarchy.
Allow multiple directories where each directory may contain files and directories. The top directory is called the root. The location of a file in the directory tree is specified by the file's path.

76
Q

What is a hardlink vs a softlink

A

A hard link always links to a valid file.

A soft link is a link to another path which may or may not be valid.

77
Q

Briefly describe the anatomy of a disk.

A

A hard disk consists of one or more platters; each platter is divided into tracks, which are divided into sectors. Each sector contains the same number of bytes. Hard disks have two operations: read and write.

78
Q

What three data structures are used to support the file API?

A

File Control Blocks, System Wide File Table, Per-process file table

79
Q

Describe the File Control Block

A

Each FCB stores various metadata about the file including but not limited to:
-Identifier, owner, size, location of the data

80
Q

What are the basic operations of the file API?

A

open, close, read, write, lseek
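A sketch of the five calls on a hypothetical file name; the integer returned by open is the file descriptor:

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        char buf[16];
        int fd = open("notes.txt", O_RDWR | O_CREAT, 0644);  /* returns a file descriptor */
        if (fd < 0) return 1;

        write(fd, "hello world", 11);   /* write advances the current offset */
        lseek(fd, 0, SEEK_SET);         /* move the offset back to the start */
        read(fd, buf, sizeof(buf));     /* read from the new offset */
        close(fd);                      /* release the FD table entry */
        return 0;
    }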

81
Q

Describe the system wide open file table

A

Each opened file is represented by an entry in a system wide open file table. Each entry contains a file identifier, reference count, pointer to the FCB, and file cache information

82
Q

What is a file descriptor table?

A

The kernel allocates a file descriptor table for each process. Each entry contains an index into the SWOFT and a current offset pointer.

83
Q

What is a file descriptor?

A

The index returned by the open operation is the file descriptor. To perform an operation on the file, the FD must be passed as one of the arguments to the system call. The FD is used to index into the FD table.

84
Q

What are the three standard files?

A

stdin: standard input
stdout: standard output
stderr: standard error

85
Q

What is a device block?

A

A device block contains references to device specific operations. The OS uses device blocks to locate corresponding operations when performing an operation on a device. (Multiple classes implementing the same interface)

86
Q

What are the four allocation strategies?

A

Contiguous allocation, linked allocation, linked extents allocation, indexed allocation.

87
Q

Describe contiguous allocation of files and list its pros and cons.

A

The FCB for the file stores the start block B of the file and the length of the file. The file is stored in contiguous blocks starting at B.

Advantages:

  • simple to implement
  • excellent read/write performance

Disadvantages:

  • External fragmentation
  • may need to move files as they grow
  • may not be able to add files even when space exists
88
Q

Describe linked allocation of files and list its pros and cons

A

The FCB stores start block B and end block C.
Each block stores a link to the next block in the file.

Advantages:

  • avoids external fragmentation
  • simple to implement.

Disadvantages:

  • No contiguity
  • horrible performance
  • wastes space in each block for a link to the next.
89
Q

Describe linked extents allocation of files and list its pros and cons

A

The FCB for the file stores the start block B and the length L of the file.
Blocks are grouped together in one or more extents. Each extent starts with a header:
-N number of blocks in extent
-E: link to next extent

Advantages:
-Good read/write performance

Disadvantages:
-In the worst case it reduces to linked allocation

90
Q

Describe indexed allocation of files and list its pros and cons

A

The FCBs are clustered together on the disk. Each FCB stores the indices of all blocks storing the data, and indices to blocks storing indices.

Advantages:

  • decouples metadata from data
  • provides flexibility for contiguous block allocation

Disadvantages:

  • Uses space to store the metadata
  • Significant overhead for small files
  • Allocate many indices for large files
91
Q

Describe the Unix file system

A
Indexed allocation based file system.
Comprises three main components:
-Inodes
-Indirect blocks
-Data blocks
92
Q

Describe the anatomy of an I-node

A

Typically 128 or 256 bytes.
Contains metadata about the file
Indices to the first 12 blocks of the file
Indices for an indirect block, a double indirect block, and a triple indirect block

93
Q

Describe how small files are stored in the unix file system

A

For files smaller than 48KB only the first 12 direct block indices are used. Each index refers to a 4KB data block that stores the file data. Only as many blocks as needed are allocated.

94
Q

Describe how medium sized files are stored in the unix file system

A

For files between 48KB and 4MB+48KB in size, all the direct blocks will be used.
The indirect block is also used; it refers to a 4KB block containing 1024 direct block indices. Only as many blocks as needed are used.

95
Q

Describe how to store large sized files in the unix file system

A

For files larger than 4MB+48KB the double indirect and triple indirect blocks can be used if needed.
The double indirect block refers to a 4KB block containing 1024 Indirect block indices each referring to 1024 data blocks.
The triple indirect block supports terabyte sized files.
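Assuming 4KB blocks and 4-byte block indices (so 1024 indices per block), the size ranges quoted in these cards work out as:

    direct:          12 × 4KB      = 48KB
    single indirect: 1024 × 4KB    = 4MB
    double indirect: 1024² × 4KB   = 4GB
    triple indirect: 1024³ × 4KB   = 4TB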

96
Q

How are I-node blocks stored?

A

I-node blocks are located in fixed locations spread out on the disk. Each block contains 16 or 32 i-nodes. The file system stores a map of all i-node block locations and uses it to locate an i-node given its i-node number.

97
Q

What is an I-node free list?

A

A list of available inodes

98
Q

What is a free block list?

A

a list of available regular blocks

99
Q

What is the superblock?

A

the superblock stores all the data structures for managing the inodes and blocks.
(multiple copies of the superblock are stored on the disk as backups)

100
Q

What is the clean flag?

A

When the file system starts up the clean flag (in the superblock) is set to false (if it was true). If the system crashes, the flag indicates that the file system should be checked. On a proper shutdown the clean flag is set to true and the superblock is written to the disk.

101
Q

How can we manage free lists?

A
  • Bit vectors
  • Linked lists
  • Index based (tree like) structure
102
Q

Briefly describe an abstract directory structure

A

A directory is a file comprising a sequence of directory entries. Each directory entry stores one mapping from file name to file ID. The sequence is typically unordered.

103
Q

What are the two most common options for implementing a directory

A

Linear list: easy to implement but find() is slow

Hash Table: greatly speeds up find() but may have to deal with collisions or overloaded file tables

104
Q

Describe caching in a nutshell

A

A cache works by storing a copy of the file data in memory and accessing the in-memory copy rather than the one in secondary storage (100000x faster to access).
If the file is in the cache, use the data from there. Otherwise first load the file into the cache.
When a file is closed the contents of the cached file may need to be written back to secondary storage.

105
Q

Describe the unified buffer cache

A

The unified buffer cache caches files that are accessed either via memory mapping or the file API. The cache is limited to a fixed amount of memory. When a new file is read its blocks are loaded into the buffer. Implements LRU

106
Q

Briefly describe the LRU algorithm

A

Store accessed items in a list.
When a file is accessed remove it from the list and add it to the end of the list, in this way the LRU item will always be at the front of the list.
When there is no more space in the list evict the file at the front of the list and add the new file to the end of the list.

107
Q

Describe the backup process

A

Backups are typically done either as a full backup, which makes a copy of all files, or as incremental backups, which copy only the files that have changed since the last backup.

108
Q

What is the archive bit?

A

The archive bit indicates whether a file has changed since the last backup. During a backup all archive bits are cleared; when a file is modified, its archive bit is set.

109
Q

Describe check pointing

A

When a backup starts, a checkpoint is created. Files altered during the backup are modified as copies, and the copies do not replace the immutable (checkpointed) version of a file until the backup is complete.

110
Q

What order of operations helps with system consistency?

A
Data blocks
Indirect blocks
Double Indirect blocks
Triple indirect blocks
I-Node blocks
111
Q

What is journaling and how does it help with consistency?

A

The idea is to log all operations to a journal file and mark operations completed as committed in the log. If a crash does occur before operations are committed a scan of the journal is all that is required to fix any inconsistencies

112
Q

What is a log-structured file system?

A

When performing updates, never update an existing block; instead allocate a new block and append it to the log. In this way data blocks are never updated; only inodes are updated as new blocks are appended to the log. The problem is that the supply of blocks eventually runs out, and even if space from old blocks is freed up, it will be fragmented.

113
Q

What does CIA stand for? Describe each item.

A

CIA stands for confidentiality, integrity and availability.

Confidentiality: computer assets are only accessible to authorized users.

Integrity: computer assets can only be changed by authorized users.

Availability: computer assets are available to authorized users.

114
Q

What are 4 possible threats to security?

A

Interruption, Modification, Interception, Fabrication

115
Q

What are the goals of security?

A

To protect against accidental and deliberate transgressions of users and processes.

116
Q

What is a policy and what is a mechanism?

A

A policy is the way we want the system to behave.

A mechanism is the techniques we can apply to enforce a policy.

117
Q

What mechanisms are used to get authentication?

A

Something you know (e.g., a PIN) and something you have (e.g., a bank card)

118
Q

What is the difference between authentication and authorization?

A

Authentication is who the user is, and authorization is what the user can do.

119
Q

What is a protection domain

A

The set of files that a user has access to is a protection domain.

120
Q

What is an access control matrix?

A

A structure used to store the mapping between objects, users, and access rights.

121
Q

Describe an access control list

A

The list of all protection domains and their access rights for an object. The ACL is stored with the object being protected. Hard to track down all the permissions for one protection domain.

122
Q

Describe a capability list

A

The list of all objects and access rights that belong to a domain of protection. Stored with the user. Hard to determine who has access to an object.

123
Q

What are some access right issues?

A

Revocation, limited propagation, confinement

124
Q

What is an MMU?

A

The memory management unit defines the logical address space of each process/OS. It translates logical addresses to physical addresses and generates an interrupt if an invalid logical address is issued. All memory access by the CPU goes through the MMU.

125
Q

What is the physical address space vs the logical address space?

A

The physical address space is the address range of the physical memory. A logical address space is the address space visible to a process.

126
Q

Why use logical address spaces at all?

A

Having separate logical spaces provides each process with an independent view of memory and thus makes programming easier (pointers and references are the same). We can allocate memory to multiple processes.

127
Q

What is the contiguous allocation of processes?

A

Memory is divided into fixed size partitions and each process resides in its own partition. Each partition has a base (start of the partition) and a limit (end of the partition). Logical addresses range from 0 to limit-1.

128
Q

What are the advantages/disadvantages of contiguous allocation for processes?

A

Pros:

  • simple implementation
  • fast translation
  • fast process switching

Cons:

  • external fragmentation
  • process address space can not be increased
129
Q

What are some of the strategies for memory allocation in a contiguous memory scheme? (pros and cons)

A

First fit:

  • Fastest possible decision
  • Not necessarily the best decision

Best fit:

  • Reduces memory waste in the short term
  • More likely to create small holes of wasted memory

Worst fit:

  • Less likely to create small holes
  • reduces ability to satisfy large memory requests
130
Q

What is external vs internal fragmentation?

A

external fragmentation is small holes of memory in the free pool.

internal fragmentation is unused memory inside allocated chunks.

131
Q

What is the fixed block sizes solution to memory allocation?

A

Fix allowable requests to a number of different sizes and use a free list for blocks of each size.
Allocate the smallest allowable block for each request. If the list is empty, grab a block from the next list up and subdivide it. This leaves no external fragmentation, but internal fragmentation is still an issue.

132
Q

Describe the buddy system.

A

The buddy system allocates memory in power-of-two sized chunks.

When freeing memory, if the buddy of the block being freed is also free (i.e., the adjacent block it was split from), merge the two blocks. Keep repeating until no more merging is possible.
No external fragmentation, but internal fragmentation is still an issue.

133
Q

What are some data structures used for managing memory?

A

Linked lists (common and simple)
Binary trees
List of lists

134
Q

What is swapping?

A

One process at a time is in memory (logical == physical). Entire process memory is saved to disk when it stops or blocks or is preempted. Next process in ready queue is loaded from disk into memory and runs.

Pros:
-Simple, no MMU required
Cons:
-Expensive, slow

135
Q

What is swapping WITH contiguous allocation?

A

Requires an MMU to perform address translation to allow multiple processes to be in memory at the same time. When memory is needed for another process one or more processes are swapped back to the disk to free up the memory. (similar to how a cache uses LRU)

Pros:

  • Simple
  • MMU can be simple
  • Low overhead

Cons:

  • Expensive
  • Slow
  • External fragmentation
136
Q

What is segmentation?

A

Allow a process to comprise multiple segments and processes can use special segment selectors as part of their pointer. Processes may run with some or all segments in memory. There are 4 segment selectors (code, data and heap, stack, and extra)

Pros:
-Allocation flexibility
-Low overhead
-Cheaper to swap a segment than a process
Cons:
-Processes are no longer contiguous
-MMU is complex
-Large segments are still expensive
-External fragmentation
137
Q

Describe paging

A

Divide process memory into small equal sized “pages”. Pages are numbered 0-N. Divide physical memory into equally sized frames.
A page can be stored in any frame, and the MMU maps pages to frames. The MMU takes the page number from the logical address of the page, and determines the frame number where the page data resides.

138
Q

What is a page table and a multi-level page table?

A

Each process has a page table that contains an entry for each page in the memory space of the process. Each entry contains a frame number, a valid bit, access bits and access level. The MMU uses the page table to translate addresses.

A multilevel page table divides the page number into two parts, where the first level table entry points to the second level table entry which contains page descriptors. The second level table entry only exists if it is valid. Uses significantly less memory to create this table than a single level page table.

139
Q

How does translation with page tables work?

A

The MMU uses the page number to index into the page table. Then, if the entry is valid, it uses the 20-bit frame number to access the memory.

Note that memory can only be accessed if the page is valid, so a process doesn't need all of its memory to be allocated at creation. If the OS needs to extend the heap or the stack, it can simply allocate more frames and update the page table.
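A worked example, assuming a 32-bit logical address and 4KB pages (a 12-bit offset, leaving a 20-bit page number):

    logical address   = 0x00403A10
    page number       = 0x00403A10 >> 12   = 0x00403
    offset            = 0x00403A10 & 0xFFF = 0xA10
    if page_table[0x00403] is valid and maps to frame 0x12345:
    physical address  = (0x12345 << 12) | 0xA10 = 0x12345A10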

140
Q

What are some of the problems with page tables?

A
  • Slower address translation
  • Every memory operation requires an extra read which is expensive
  • A page table is HUGE, so each process must be allocated 4MB of memory PLUS the memory of the actual process.
141
Q

What is the translation look aside buffer?

A

A cache for address translations using the page table to save additional reads.

142
Q

Why is the TLB a problem when context switching, and how can we solve the problem?

A

If new processes use entries in the TLB they will access the old process's memory.

We can flush the TLB on context switch but it will be expensive to reload.

Or, we can associate address space IDs or process IDs with TLB entries so we only evict the TLB entries that we need to lose.

143
Q

What is the concept of virtual memory?

A

The address space of a process is limited by the architecture of the MMU, not the physical memory.
The use of paging provides the process with a logical view of a large contiguous memory space which decouples the view from the physical memory space

144
Q

What is demand paging (as opposed to paging)?

A

In demand paging, a process must allocate a page before using it. A process requests pages from the OS which allocates a page and updates the page table entries.

The CPU sends the virtual address to the MMU, which extracts the page number and loads the page table entry. If the page is valid, perform the translation. Else if it is unallocated, terminate the program; else if it is nonresident, locate a frame, update the page table entry, return to the process, and try again.

145
Q

What are the three states of a page?

A

Unallocated (no data associated with the page)
Resident (page contains data and is assigned to a frame)
Nonresident (page might contain data but is not assigned to a frame)

146
Q

What is a valid bit?

A

A valid bit is set to 1 or 0 and lets the system know the state of that page.

147
Q

How does demand paging affect performance?

A

processes get the memory they need rather than what they want.

148
Q

What is Copy-on-Write?

A

Demand paging allows us to perform copy-on-write.
When a new process is created, don’t allocate any new memory. Duplicate the page table and mark all pages as read only. When either copy of the process attempts to write to a read-only page, an interrupt will occur and a new frame will be allocated. Both pages will be marked as read/write and the process will be able to make its change to the copy.

In this way, only pages that are modified need to be copied.

149
Q

What is a guard page?

A

A guard page is a way of notifying the system when the stack needs to grow. When the guard page is written to, the OS allocates that page to the stack and sets up a new guard page.

150
Q

What is a library?

A

Libraries are bundles of compiled code that provide functionality to applications. The code is linked in as the last step of compilation.

Each process is linked with a small stub of code that asks the OS to load shared libraries. This code is run when the process starts and the libraries are loaded and mapped to the process space.

151
Q

What is interprocess shared memory?

A

Unnamed memory segments can be allocated to memory so that when a process forks its children will have access to that segment.

Named memory segments can be allocated to shared memory and assigned a name in the file system so that other processes can use it.

152
Q

How do we know whether to write a victim page to the disk before evicting it?

A

Keep track of whether the page has been modified using a dirty bit that is cleared when the page is loaded into memory and is set when a write is performed to the page. The OS can check if the bit is set and write back to the disk when the page is evicted.

153
Q

How do we decide which page should be the next victim?

A

FIFO:

  • Simple
  • No correlation between whether a page has been used and its eviction

Optimal page replacement (replace the page that would not be used for the longest period of time)
-impossible to implement

LRU:

  • Good performance, implementable
  • requires significant hardware support (data structures, counters)
154
Q

How can we approximate LRU to make page victim selection more optimal?

A

Use a reference bit:

  • The bit is set when a page is accessed and cleared by a timer at regular intervals
  • Simple to implement and doesn’t require much hardware
  • crude approximation of LRU, some pages are less likely to be evicted than others

Use multiple reference bits:

  • the timer shifts each page's reference bits to the right, inserting the current reference bit at the top, and then clears the reference bit
  • The OS evicts the page whose reference bits have the lowest value

Second chance algorithm:

  • Each page has a referenced bit which is set when the page is accessed. All pages are kept in a circular queue
  • The OS moves around the circle to find a victim: if a page's reference bit is set, it clears the bit and moves the pointer to the next page table entry; if the bit is already clear, that page is evicted.
  • Simple to implement and cost is relatively low.
155
Q

What is a memory mapped file?

A

Instead of a process allocating a buffer and reading the file into it, map the file directly into memory (using the mmap() function) and access it as if it were loaded into a buffer.
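A sketch of mapping an existing file read-only with mmap(); the file name is made up, and fstat() supplies the length:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("notes.txt", O_RDONLY);
        if (fd < 0) return 1;

        struct stat st;
        fstat(fd, &st);                     /* file size tells us how much to map */

        /* Pages of the file are faulted in on demand as they are touched. */
        char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) return 1;

        fwrite(data, 1, st.st_size, stdout);   /* use the file like ordinary memory */

        munmap(data, st.st_size);
        close(fd);
        return 0;
    }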