Operating Systems Flashcards
Give OS operations for Process Management, Memory Management, and I/O Management
Process Management: 1) Create/delete processes 2) Suspend/resume processes 3) Inter-process communication (IPC)
Memory Management: 1) Keep track of which parts of memory are being used and by whom 2) Decide which processes to load into memory when space becomes available 3) Allocate/deallocate memory space
I/O Management: 1) Buffering/caching 2) Device-driver interfaces
What are System Calls used for? Give some examples of them.
They’re used by user-level programs to request privileged services from the kernel. Examples: fork, exec, read, write, create, etc.
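A minimal sketch in POSIX C (assuming a Unix-like system) exercising several of these calls: fork a child, exec a program in it, write from the parent, then wait for the child.

```c
/* Minimal POSIX example: fork a child, exec a program in it,
 * and write a message from the parent. Error handling is abbreviated. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* create a new process (system call) */
    if (pid == 0) {
        /* child: replace its image with /bin/ls (exec family) */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");             /* only reached if exec fails */
        exit(1);
    }
    /* parent: write directly to stdout (fd 1) and wait for the child */
    write(1, "parent waiting for child\n", 25);
    waitpid(pid, NULL, 0);           /* reap the child (system call) */
    return 0;
}
```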
What is the difference between a monolithic OS and one that uses a microkernel?
Monolithic OS - All services co-exist in the kernel with no clear separation of functionality. Microkernel - Moves as much as possible out of the kernel into user space; communication between user-level modules takes place via message passing. Microkernels are more reliable and secure than monolithic OSes because much less code runs in kernel mode, but user-space-to-kernel-space communication adds some performance overhead.
Describe the life cycle of a process.
(1) Process A is created, entering the “new” state. (2) A is admitted and enters the “ready” state. (3) A is selected to run by the scheduler, entering the “running” state. (4) One of three things then occurs: (a) An interrupt occurs, putting it back in the “ready” state. (b) It waits for I/O or an event, entering the “waiting” state. (c) It finishes executing, entering the “terminated” state.
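The transitions above can be read as a small state machine; the sketch below (names are illustrative, not from any real kernel) encodes the five states and the legal transitions.

```c
/* Hypothetical sketch of the five process states and the transitions above. */
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

typedef enum { ADMIT, DISPATCH, INTERRUPT, IO_WAIT, IO_DONE, EXIT } event;

/* Return the next state, or the current state if the event is not legal here. */
static proc_state next_state(proc_state s, event e) {
    switch (s) {
    case NEW:     return e == ADMIT     ? READY      : s;
    case READY:   return e == DISPATCH  ? RUNNING    : s;
    case RUNNING: return e == INTERRUPT ? READY
                       : e == IO_WAIT   ? WAITING
                       : e == EXIT      ? TERMINATED : s;
    case WAITING: return e == IO_DONE   ? READY      : s;
    default:      return s;                      /* TERMINATED is absorbing */
    }
}

int main(void) {
    proc_state s = NEW;
    event trace[] = { ADMIT, DISPATCH, IO_WAIT, IO_DONE, DISPATCH, EXIT };
    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        s = next_state(s, trace[i]);
    printf("final state: %d\n", s);   /* prints 4 (TERMINATED) */
    return 0;
}
```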
What is a Process Control Block (PCB)?
Information kept by the OS on a process, including its (run) state, ID, program counter, registers, scheduling info, etc.
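A rough sketch of what a PCB might contain; the field names are illustrative, not taken from any particular kernel.

```c
/* Illustrative PCB layout; real kernels (e.g. Linux's task_struct) hold far more. */
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

struct pcb {
    int         pid;             /* process ID */
    proc_state  state;           /* current run state */
    uint64_t    program_counter; /* where to resume execution */
    uint64_t    registers[16];   /* saved general-purpose registers */
    int         priority;        /* scheduling information */
    void       *page_table;      /* memory-management information */
    int         open_files[16];  /* I/O / accounting information */
    struct pcb *next;            /* link for ready/wait queues */
};
```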
What is a Context Switch?
When the CPU switches to another process, saving the state of the old one and loading the saved state of the new one.
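A purely illustrative sketch of what a context switch does in terms of a saved CPU context; a real context switch is performed in architecture-specific assembly inside the kernel.

```c
/* Illustrative only: real context switches save/restore registers in assembly. */
#include <string.h>
#include <stdint.h>

struct context { uint64_t pc; uint64_t regs[16]; };
struct pcb     { int pid; struct context ctx; };

/* Save the running process's CPU context into its PCB,
 * then load the next process's saved context. */
void context_switch(struct pcb *old, struct pcb *new_, struct context *cpu) {
    memcpy(&old->ctx, cpu, sizeof *cpu);   /* save state of the old process   */
    memcpy(cpu, &new_->ctx, sizeof *cpu);  /* load saved state of the new one */
}
```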
When does preemptive scheduling apply, and why is it used?
It applies when: (1) a process switches from the running state to the ready state, and (2) a process switches from the waiting state to the ready state. It’s used because in non-preemptive scheduling, once the CPU is allocated to a process, that process keeps the CPU until it explicitly relinquishes it, so it can potentially hog the CPU.
What are some issues that may arise in multi-processor scheduling?
1) Cache affinity 2) Co-runner selection 3) Homogeneous vs heterogeneous processors 4) Load sharing
What is priority inversion, and how might it be averted?
Priority inversion is when a high-priority process is blocked while a medium-priority process runs, because the resource it needs is held by an even lower-priority process (which the medium-priority process keeps preempting). It can be avoided using priority inheritance, i.e. the process holding the resource temporarily inherits the high-priority process’s priority. This has its own problems, though, such as “chained blocking” of a high-priority process by multiple lower-priority processes.
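A hedged sketch of priority inheritance around a lock; the task/lock types and the elided blocking logic are hypothetical placeholders, not a real API.

```c
/* Hypothetical sketch of priority inheritance: when a high-priority task blocks
 * on a lock, the lock holder temporarily runs at the blocked task's priority. */
#include <stddef.h>

typedef struct task { int id; int base_prio; int eff_prio; } task;
typedef struct lock { task *holder; } lock;

static int max(int a, int b) { return a > b ? a : b; }

void lock_acquire(lock *l, task *t) {
    while (l->holder != NULL) {
        /* boost the holder so a medium-priority task cannot preempt it */
        l->holder->eff_prio = max(l->holder->eff_prio, t->eff_prio);
        /* ... block here until the lock is released (elided in this sketch) ... */
    }
    l->holder = t;
}

void lock_release(lock *l, task *t) {
    t->eff_prio = t->base_prio;   /* drop back to the original priority */
    l->holder = NULL;
    /* ... wake the highest-priority waiter (elided in this sketch) ... */
}
```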
What is Rate Monotonic Scheduling (RMS)?
-A form of static-priority preemptive scheduling: the shorter a task’s period, the higher its priority. -Each task is initiated at fixed intervals (its period) and must complete within a certain time. -Each task must finish before the start of its next cycle.
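One useful consequence: a set of n periodic tasks is guaranteed schedulable under RMS if total CPU utilization is at most the Liu & Layland bound n(2^(1/n) − 1). A small sketch of that sufficient (not necessary) check:

```c
/* Sufficient (not necessary) RMS schedulability test: U <= n * (2^(1/n) - 1). */
#include <math.h>
#include <stdio.h>

int rms_schedulable(const double comp[], const double period[], int n) {
    double util = 0.0;
    for (int i = 0; i < n; i++)
        util += comp[i] / period[i];          /* utilization of each task */
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, bound = %.3f\n", util, bound);
    return util <= bound;
}

int main(void) {
    /* three tasks: computation times and periods (shorter period => higher priority) */
    double c[] = {1.0, 2.0, 3.0}, p[] = {4.0, 8.0, 12.0};
    printf("%s\n", rms_schedulable(c, p, 3) ? "schedulable" : "bound exceeded");
    return 0;
}
```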
In Proportional Sharing, what is the fractional share ( fi(t) ) of process pi at time t?
f_i(t) = w_i / Σ_j w_j, i.e. process i’s weight divided by the sum of the weights of all active processes at time t.
What is the ideal service time (Si) using a resource in the interval t0 to t1 in proportional sharing? What is the lag?
S_i(t0, t1) = ∫ from t0 to t1 of f_i(t) dt
lag_i(t1) = S_i(t0, t1) − (the actual service time process i received in [t0, t1])
What is the virtual time of a task in proportional sharing?
V(t) = ∫ from 0 to t of ( 1 / Σ_j w_j ) dτ, where Σ_j w_j is the sum of the weights of all active processes.
What is the Virtual Eligible Time in Proportional Sharing? What about the Virtual Deadline?
V(e) = the virtual time of process i + ( the actual service time process i has received / its weight )
V(d) = V(e) + r / the weight
NOTE: r = Si(e,d), representing the service time a new request should receive in the interval [e,d], i.e. the service time of a request (e.g. a quantum).
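Putting the proportional-sharing formulas together, here is a small numeric sketch; the weights, service amounts, and the assumption that process 1’s virtual start time is 0 are all made up for illustration.

```c
/* Numeric sketch of the proportional-sharing formulas above (weights assumed fixed). */
#include <stdio.h>

int main(void) {
    double w[] = {1.0, 2.0, 3.0};          /* process weights */
    double total_w = w[0] + w[1] + w[2];

    /* fractional share of process 1: f_1(t) = w_1 / sum of weights */
    double f1 = w[1] / total_w;            /* = 2/6 ~ 0.333 */

    /* ideal service in [t0, t1] with constant shares: S_1 = f_1 * (t1 - t0) */
    double t0 = 0.0, t1 = 6.0;
    double S1 = f1 * (t1 - t0);            /* = 2.0 quanta of ideal service */

    /* virtual time after t units with constant total weight: V(t) = t / total_w */
    double V = t1 / total_w;               /* = 1.0 */

    /* virtual eligible time and virtual deadline for a request of size r */
    double r = 1.0;                        /* one quantum of service */
    double served1 = 1.5;                  /* actual service received so far */
    double ve = 0.0 + served1 / w[1];      /* virtual start + served/weight = 0.75 */
    double vd = ve + r / w[1];             /* = 1.25 */

    double lag1 = S1 - served1;            /* lag = ideal - actual = 0.5 */

    printf("f1=%.3f S1=%.2f V=%.2f ve=%.2f vd=%.2f lag=%.2f\n",
           f1, S1, V, ve, vd, lag1);
    return 0;
}
```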
What is a monitor (synchronization)?
A high-level abstraction that provides a convenient and effective mechanism for process synchronization: it bundles shared data with the procedures that operate on it and guarantees that at most one process is active inside the monitor at a time (condition variables are used to wait and signal within it).
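Some languages build monitors in (e.g. Java’s synchronized methods); in C a monitor can be approximated with a mutex plus condition variables. A minimal sketch of a one-slot buffer in that style:

```c
/* Monitor-style one-slot buffer: the mutex ensures at most one thread is "inside",
 * and the condition variables implement wait/signal on monitor conditions. */
#include <pthread.h>

static pthread_mutex_t m         = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static int slot, full = 0;

void put(int x) {
    pthread_mutex_lock(&m);                 /* enter the monitor */
    while (full)
        pthread_cond_wait(&not_full, &m);   /* wait: releases the mutex while blocked */
    slot = x;
    full = 1;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&m);               /* leave the monitor */
}

int get(void) {
    pthread_mutex_lock(&m);
    while (!full)
        pthread_cond_wait(&not_empty, &m);
    full = 0;
    pthread_cond_signal(&not_full);
    int x = slot;
    pthread_mutex_unlock(&m);
    return x;
}
```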
What is Priority Ceiling Protocol?
A means of avoiding chained blocking (a result of priority inheritance).
Each semaphore has a fixed priority ceiling that = the highest priority among all the tasks that will require it.
How it works:
- Ti can access a semaphore S only if both of these conditions are met:
- S is not already allocated to any other task
- Priority of Ti is higher than the current system ceiling = max(priority ceilings of all the semaphores allocated to tasks other than Ti) (see the sketch below)
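A hedged sketch of that admission check; the types and the helper are hypothetical, not a real kernel API.

```c
/* Hypothetical Priority Ceiling Protocol check: task T may lock semaphore S only
 * if S is free and T's priority exceeds the ceilings of all semaphores currently
 * held by other tasks (the system ceiling). */
#include <stddef.h>

typedef struct task { int id; int priority; } task;
typedef struct sem  { int ceiling; task *holder; } sem;

int pcp_can_lock(task *t, sem *s, sem *all[], int n) {
    if (s->holder != NULL)
        return 0;                           /* S is already allocated */
    int system_ceiling = -1;
    for (int i = 0; i < n; i++)             /* max ceiling over semaphores held by others */
        if (all[i]->holder && all[i]->holder != t && all[i]->ceiling > system_ceiling)
            system_ceiling = all[i]->ceiling;
    return t->priority > system_ceiling;    /* strictly higher than the system ceiling */
}
```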
What is a Logical/Virtual Address (Same thing) vs. a Physical Address?
Logical addresses are generated by the CPU, whereas physical addresses are the addresses as seen by the memory unit. While logical and physical addresses are the same in compile-time and load-time address-binding schemes, logical/virtual and physical addresses differ in execution-time address-binding schemes.
What is the Memory-Management Unit (MMU)?
The hardware device that performs virtual-to-physical address translation. It may use paging, segmentation, or relocation-register techniques as part of address translation.
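A toy sketch of the paging case; the page size, table layout, and names are assumptions for illustration only.

```c
/* Toy single-level page-table translation: 4 KiB pages, 20-bit page numbers. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12                       /* 4096-byte pages */
#define NUM_PAGES (1u << 20)

static uint32_t page_table[NUM_PAGES];     /* page number -> frame number */

uint32_t translate(uint32_t vaddr) {
    uint32_t page   = vaddr >> PAGE_BITS;          /* logical page number */
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);
    uint32_t frame  = page_table[page];            /* would fault if the mapping were invalid */
    return (frame << PAGE_BITS) | offset;          /* physical address */
}

int main(void) {
    page_table[3] = 42;                                   /* map page 3 -> frame 42 */
    printf("0x%x -> 0x%x\n", 0x3ABC, translate(0x3ABC));  /* 0x3abc -> 0x2aabc */
    return 0;
}
```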
True or False: User programs never see physical addresses, only logical ones.
True.