DAT320 Flashcards
What is an operating system?
An operating system is the software layer that manages a computer’s resources for its users and their applications.
What happens when you run a program?
CPU:
- Fetches instructions stored in the executable file
- Loads data stored in the executable file
- Decodes and executes the instructions
- Stores results to memory
What are the jobs of an OS?
- Allocates memory
- Manages memory
- Manages CPU
- OS provides process abstraction
- Manages devices
- Basically manages hardware and provides process abstraction.
- OS also allows you to communicate with the computer without knowing how to speak the computer’s language.
What are the design goals of an operating system?
- Convenience
- Efficiency of usage of CPU, memory etc.
Explain short: What is the goal of process abstraction?
The goal of process abstraction is to run multiple processes concurrently by time-sharing the CPU. For this we need mechanisms (context switch) and a policy (scheduler). This creates the illusion that each program has its own isolated machine.
What is virtualization of the CPU?
- It involves a single CPU acting as if it were multiple separate CPUs
- Enables users to run many concurrent programs
What can a program read and update when it is running?
- Memory (Data that the running program reads/writes is in memory)
- Register (Because many instructions read/update the registers)
What is I/O?
I/O stands for input/output and refers to the devices connected to a computer, such as a mouse and keyboard.
Which states can a process be? Explain them
Give the possible transitions between the three states.
- Running: Process is running on a processor, executing instructions
- Ready: the process is ready to run, but the OS has chosen not to run it at this given moment
- Blocked: the process has performed some operation that makes it not ready to run until some other event takes place.
For example: when a process initiates an I/O request to disk or network, it becomes blocked and thus some other process can use the processor.
Running to Ready: OS preempts the running process.
Running to Blocked: the process makes a system call to initiate I/O.
Ready to Running: OS schedules the process to run next.
Blocked to Ready: the I/O requested by the process is done.
Two additional states: New (the process is being created) and Dead/Terminated (the process has exited).
In which data structure does the operating system keep the program counter, stack pointer, and other information about a process?
PCB (process control block)
What does a PCB (process control block) contain?
- Process identifier
- Process state
- Pointers to other related processes (parent)
- CPU context of the process (saved when the process is suspended)
- Pointers to memory location
- Pointers to open files
What does API stand for?
Application programming interface (API)
Why do we have an API in the OS?
- The API contains sets of system calls, including create, wait and destroy.
- System call is a function call into OS code that runs at a higher privilege level of the CPU
- There are usually interfaces to get some status information about a process as well, such as how long it has run for, or what state it is in.
- Makes it possible for applications to request services from the operating system.
What does fork() do?
- Creates a new child process
- All processes are created by forking from a parent
- The init process is ancestor of all processes
What does exec() do?
In computing, exec() is a functionality of an operating system that runs an executable file in the context of an already existing process, replacing the previous executable.
What does exit() do?
Exit() terminates a process
What does wait() do?
Wait() causes a parent to block until child terminates/exits or a signal is received.
What happens during fork()?
- A new process is created by making a copy of the parent's memory image
- The new process is added to the OS process list and scheduled
- Parent and child start execution just after fork (with different return values)
- Parent and child execute and modify the memory data independently
What happens during wait()?
- A terminated process exists as a zombie until it is reaped.
- The parent process is then supposed to execute the wait() system call to read the dead process’s exit status and other information.
- This allows the parent process to get information from the dead process. After wait() is called, the zombie process is completely removed from memory.
- When a parent calls wait(), zombie child is cleaned up or “reaped”
- wait() blocks in the parent until a child terminates (there are also non-blocking ways to invoke wait)
- What if the parent terminates before the child? The init process adopts orphans and reaps them
What happens during exec()?
- A process can run exec() to load another executable to its memory image
- So a child can run a different program from parent
- Variants of exec() to pass commandline arguments to new executable
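The fork/exec/wait cards above can be sketched in Python on a POSIX system (the os module wraps these system calls directly; the exit code 5 is just an illustrative value):

```python
import os
import sys

# Fork a child, replace its image with exec, and reap it with wait.
pid = os.fork()
if pid == 0:
    # Child: exec replaces this process image with a new program.
    # Here we exec the Python interpreter itself with a tiny script.
    os.execv(sys.executable, [sys.executable, "-c", "import sys; sys.exit(5)"])
else:
    # Parent: block until the child terminates, then read its exit status.
    _, status = os.waitpid(pid, 0)
    exit_code = os.waitstatus_to_exitcode(status)  # Python 3.9+
    print(exit_code)  # the child's exit status: 5
```

Note how the parent and child take different branches after fork based on the return value, exactly as the cards describe.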
Which two modes does the OS have?
User and kernel mode
What is kernel mode?
- It enables the OS to run system calls
- Kernel does not trust user stack
- Kernel does not trust user provided addresses
In Kernel mode, the executing code has complete and unrestricted access to the underlying hardware. It can execute any CPU instruction and reference any memory address. Kernel mode is generally reserved for the lowest-level, most trusted functions of the operating system. Crashes in kernel mode are catastrophic; they will halt the entire PC.
When is trap function called?
- System call (program needs OS service)
- Program fault (program does something illegal, e.g. accesses memory it doesn't have access to)
- Interrupt as external devices needs attention of OS (e.g. network packet has arrived on network card)
What is a trap function?
It is a software interrupt, generated by the user program or by an error, used when the operating system is needed to perform a system call or another privileged operation.
What are the two types of scheduler?
Preemptive and non preemptive
What are the difference between Preemptive and non-Preemptive schedulers?
Non-preemptive (cooperative) schedulers are polite:
- They switch only if the process blocks or terminates
- Once the CPU is allocated to a process, the process holds it until it completes its burst time or switches to the waiting state
Preemptive (non-cooperative) schedulers can switch even when process is ready to continue:
- The CPU generates a periodic timer interrupt, and resources are allocated to a process only for a limited time
- After servicing the interrupt, the OS checks whether the process has run for too long
What is a context switch?
A context switch is when the OS saves the context (PC, registers, kernel stack pointer) of the running process on its kernel stack and restores the context of the next process to run. It happens, for example, when the OS switches the CPU from one process to another.
What is SJF (shortest-job-first)?
It runs the shortest available job to completion (non-preemptive)
What is FIFO (first-in first-out)?
It runs processes in order of arrival: the process that arrived earliest runs first
What is RR (round-robin), and how does it work?
A scheduling algorithm.
It runs each process for a given time quantum, e.g. 2 ms, then moves on to the next process in the queue.
What is a CPU burst?
The CPU time used by a process in a continuous stretch
Why do we want to have a scheduler?
- Maximize utilization (fraction of time cpu is used)
- Minimize average turnaround time (time from process arrival to completion)
- Minimize average response time (time from process arrival to first scheduling)
- Fairness: all processes should be treated equally
- Minimize overhead: run each process long enough to amortize the cost of a context switch (~1 microsecond)
What is stride scheduling?
- Every process gets a ticket value
- Stride = a large number (e.g. 1000 or 10000) divided by the process's tickets
- On each time slice, the OS runs the process with the lowest pass value, then advances it: pass += stride
- If the pass values of multiple processes are equal, the one with the lowest stride value goes first
- If both the pass value AND the stride value are the same, the process that entered the scheduling queue first runs first
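The rules above can be turned into a small deterministic simulator (a sketch; the constant 10000 and the tie-break on arrival order follow the conventions described in these cards):

```python
# Stride scheduling sketch: stride = BIG / tickets; each slice runs the
# process with the lowest pass value, then advances its pass by its stride.
BIG = 10000

def stride_schedule(tickets, slices):
    stride = {p: BIG // t for p, t in tickets.items()}
    passes = {p: 0 for p in tickets}
    arrival = {p: i for i, p in enumerate(tickets)}  # queue-entry order
    counts = {p: 0 for p in tickets}
    for _ in range(slices):
        # lowest pass wins; ties broken by lowest stride, then arrival order
        p = min(passes, key=lambda q: (passes[q], stride[q], arrival[q]))
        counts[p] += 1
        passes[p] += stride[p]
    return counts

# B holds twice as many tickets as A, so it runs twice as often.
print(stride_schedule({"A": 100, "B": 200}, 30))  # {'A': 10, 'B': 20}
```

Unlike lottery scheduling, this produces the exact ticket proportions with no randomness.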
What is the formula on turnaround time?
Tturnaround = Tcompletion - Tarrival
What is turnaround time?
Turnaround time is the time from when a process arrives in the queue until it is done/finished.
It is a performance metric for scheduler policies; often, the better the metric, the less fair the scheduler is.
What is the formula on response time?
Tresponse = Tfirstrun - Tarrival
Positives and negatives of SJF
- Optimal for turnaround time
- Can get stuck behind a large job (convoy effect)
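The convoy effect can be made concrete with a quick calculation (the burst times 100, 10, 10 are illustrative, all jobs arriving at t=0):

```python
# Convoy effect: three jobs arrive at t=0 with run times 100, 10 and 10.
# FIFO makes the short jobs wait behind the long one; SJF runs the
# shortest jobs first and minimizes average turnaround time.
def avg_turnaround(order):
    t, total = 0, 0
    for burst in order:
        t += burst      # job completes at time t
        total += t      # arrival is 0, so turnaround = completion time
    return total / len(order)

fifo = avg_turnaround([100, 10, 10])  # completions 100, 110, 120
sjf = avg_turnaround([10, 10, 100])   # completions 10, 20, 120
print(fifo, sjf)  # 110.0 50.0
```

The long job finishes at the same time either way; only the short jobs' waiting changes, which is why SJF is optimal for turnaround.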
Positives and negatives of RR
- Good for response time and fairness
- Bad for turnaround time
- The time quantum must be much larger than the context-switch cost
Why is it difficult to create a good scheduler?
Turnaround time and response time:
Both are bad where the other is good
What is multi level feedback queue(MLFQ)?
It contains multiple priority levels; each process is assigned a level, and processes within a level are run using RR. A process first runs at the highest priority, and if it is not done when its time is used up, it moves down a level. This continues until the process is done.
If priority A > priority B, A runs.
If priority A = priority B, they run in round-robin.
What are some of the problems with multi level feedback queue(MLFQ)?
Starvation: if there are too many interactive jobs, they combine to consume all the CPU time, preventing long-running jobs from getting any CPU time, so they starve.
What can we done in MLFQ to prevent starvation?
Periodically boost the priority level of all jobs. For a given time slice we can move all jobs to the highest priority queue.
What is lottery scheduling?
Each process gets a number of tickets, and the OS draws a random ticket. The process holding that ticket runs for the next time slice.
How do we calculate fairness in a scheduler?
U = tCompletionA / tCompletionB (1 = perfect)
What is the difference between lottery scheduler and stride scheduler
Stride scheduler is deterministic
What are the advantages and disadvantages of using randomness in a scheduler?
Negative:
- May not deliver the exact desired proportions
- A major drawback of the lottery scheduling algorithm is that it is inherently unpredictable. Due to the random nature of the scheduler, there is no way for OS designers to know for sure what will happen in lottery scheduling, possibly producing undesirable results
Positive:
- Fairness
- Avoid corner cases
- Lightweight for the OS
Why do we virtualize memory?
Virtual memory serves two purposes:
- First, it allows us to extend the use of physical memory by using disk.
- Second, it allows us to have memory protection, because each virtual address is translated to a physical address.
What does virtual address space contain?
Contains program code, heap and stack
Why do we use virtual address space
Each process gets its own virtual address space; virtual addresses are translated to physical addresses, which lets the OS place data anywhere in physical memory while isolating processes from each other.
What component does the translation of virtual to physical address?
MMU (memory-management unit)
What is the goal of virtualization?
- Transparency: user programs should not be aware of the messy details
- Efficiency: minimize overhead and wastage in term of memory space and access time
- Isolation and protection: A user process should not be able to access anything outside its address space
Does the OS have its own address space? If not where is it?
- The OS is not a separate process with its own address space
- Instead, OS code is part of the address space of every process
- A process sees the OS as part of its code (like a library)
- Page tables map the OS addresses to OS code
What does the stack contain?
- Keep track of where the process is in the function call chain
- Allocate local variables of functions
- Pass parameters to functions
- The return value from functions
- Return address: where to continue execution after a function call
What does the heap contain?
- Dynamically allocated data (global variables live in the static data segment)
- Malloc() and free() in C
- new in Java/C++/Go
What does malloc() do?
malloc(x) allocates x bytes on the heap and returns a pointer to the allocated memory.
What does free() do?
free(x) where x is the pointer to the memory returned by malloc. It frees the allocated memory
What are the procedures during address translation?
- The OS sets the base (starting address) and bounds (size limit) values
- The MMU does the calculation
- If the address is out of bounds, a trap to the OS is raised
What is the role of OS in address translation?
- OS maintains free list of memory
- Allocates space to process during creation and cleans up when done
- Maintains information of where space is allocated to each process (in PCB)
- Sets address translation information in hardware
- Updates this information upon context switch
- Handles traps due to illegal memory access
What is the role of hardware in address translation?
The CPU provides a privileged mode of execution; the instruction set has privileged instructions to set the translation information (base and bounds).
Then the Hardware(MMU) uses this information to perform translation on every memory access (instruction fetch, load or store)
If the translation is illegal, the MMU generates traps to OS(eg, VA is out of bounds)
What is the formula for physical address
PA = base register + VA
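The formula plus the bounds check can be sketched as a tiny function (the base and bounds values are made up for the example):

```python
# Base-and-bounds translation: the MMU adds the base register to the
# virtual address after checking it against the bounds (address-space
# size). An out-of-bounds access raises a trap, modeled as an exception.
def translate(va, base, bounds):
    if va < 0 or va >= bounds:
        raise MemoryError(f"segmentation violation: VA {va} out of bounds")
    return base + va

print(translate(100, base=32768, bounds=16384))  # 32868
```

In hardware this addition and comparison happens on every single memory access, which is why it must be done by the MMU and not in software.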
What are the problems with base and bounds?
If the stack and heap are small, the space between them is wasted. This is called internal fragmentation.
It is caused by fixed-size slots (each process is given the same size address space).
- Also requires a big chunk of free space in the middle of physical memory
What differs segmentation from basic address space?
- It gives us flexibility with multiple base/bounds pairs
- The idea is one pair for each logical segment of the address space, which includes code, stack and heap
No internal fragmentation in segmentation.
How does the virtual address know which segment contains what (segmentation storing)?
Using the first two bits of the virtual address: "00" = Code, "01" = Heap, "10" = Stack, "11" = Unused
How can processes share memory (segmentation)
Protection:
- A few extra bits per segment
- Read, write, execute
- Each process still thinks that is accessing its own private memory
- While the OS is secretly sharing their memory
- HW must then check if a particular access is permissible
Why do we use segmentation?
- Unused space between stack and heap need not be allocated in physical memory
- By not allocating unused space, more address spaces fit in physical memory
What are the problems with free space managment within segmentation?
Physical memory becomes full of holes of free space (external fragmentation)
What is external fragmentation?
Physical memory becomes full of holes of free space (external fragmentation)
What is a solution to external fragmentation within segmentation?
One solution:
- Compacting physical memory by rearranging segments
- Stop processes one by one, move their segments, update the segment registers (to point to the new locations)
- Very memory intensive and costly in processor time
Is there a better approach to avoid external fragmentation with segmentation?
- Use a free list management algorithm
- Keep large extents of memory available for allocation
- Example: the best-fit algorithm keeps a list of free spaces and returns the chunk closest in size that satisfies the desired allocation
Best fit:
Search the free list for the memory chunk closest to the requested size.
What are the benefits of segmentation?
- Supports sparse address spaces (address spaces with a lot of empty space)
- Avoid wasting memory between logical segments of an address space
- Fast translation
- Code sharing
- No internal fragmentation
What are the problems with segmentation?
- Free memory gets left as odd sized pieces (external fragmentation) (does not happen with paging)
- Memory allocation difficult (many smart algorithms)
- Fundamentally hard to avoid external fragmentation
- Not flexible enough to support a fully generalized sparse address space
- If heap is sparsely used - must keep entire heap in memory
How to manage free space, when satisfying variable sized request?
- Splitting: split a free chunk to match the request size
- Coalescing: merge adjacent free chunks. If there are 3 free chunks of 10 bytes each, a request for 15 bytes will fail even though 30 bytes are free in total
- Tracking the size of allocated regions
- Embedding a free list
What is splitting?
Example: trying to allocate 1 byte when we have two 10-byte chunks available:
- Find a free chunk that satisfies the request and split it in two
- Return the first chunk to the caller (1 byte)
- The second chunk remains on the list (9 bytes)
What is coalescing?
It combines free segments that are next to each other in memory into one larger segment.
Why would we need coalescing?
It is needed when, for example, we have three adjacent free segments of size 5 each. Trying to allocate 7 bytes would fail, so we coalesce them to get 15 bytes available in one segment.
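Coalescing can be sketched over a free list of (start, size) pairs (a minimal model; real allocators embed this list in the free memory itself):

```python
# Merge adjacent free chunks, kept as (start, size) pairs, into maximal
# contiguous chunks by scanning them in address order.
def coalesce(free_list):
    merged = []
    for start, size in sorted(free_list):
        if merged and merged[-1][0] + merged[-1][1] == start:
            merged[-1][1] += size        # adjacent: extend previous chunk
        else:
            merged.append([start, size])
    return [tuple(c) for c in merged]

# Three adjacent 5-byte chunks become one 15-byte chunk, so a 7-byte
# request can now be satisfied.
print(coalesce([(0, 5), (5, 5), (10, 5)]))  # [(0, 15)]
```

Non-adjacent chunks are left alone, since merging them would claim memory that is still allocated in between.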
In free space managment how do we track the size of allocated regions?
We create a header field in each allocation with information about its size. It may also contain pointers to speed up freeing.
What are some of the strategies to manage free space? (Free space management)
- Best fit: search the free list for the chunk closest to the requested size; high search cost hurts performance
- Worst fit: the opposite of best fit; search for the biggest chunk, trying to leave big chunks free instead of lots of small chunks
- First fit: find the first chunk with enough space; has the advantage of speed, but may pollute the beginning of the free list with small objects; address-based ordering (keeping the list sorted by the address of the free space) makes coalescing easier
- Next fit: instead of starting at the head of the free list, keep an extra pointer to the last searched location and start there
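The first three policies differ only in which fitting chunk they pick, which a small helper can illustrate (a sketch over a list of chunk sizes; real allocators track addresses too):

```python
# Given a free list of chunk sizes, return the index of the chunk each
# policy would choose, or None if nothing fits.
def pick(free_sizes, request, policy):
    candidates = [(size, i) for i, size in enumerate(free_sizes) if size >= request]
    if not candidates:
        return None
    if policy == "first":
        return min(candidates, key=lambda c: c[1])[1]  # earliest in the list
    if policy == "best":
        return min(candidates)[1]                      # closest in size
    if policy == "worst":
        return max(candidates)[1]                      # biggest chunk
    raise ValueError(policy)

free = [10, 30, 20]
print(pick(free, 15, "first"))  # 1 (first chunk that fits)
print(pick(free, 15, "best"))   # 2 (20 is closest to 15)
print(pick(free, 15, "worst"))  # 1 (30 is the biggest)
```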
What is buddy allocation
Example: we want to allocate 7 KB and have 64 KB of free space. We repeatedly split the space in two until the next split would be too small for the request: 64 -> 32 -> 16 -> 8 (splitting further to 4 would not fit 7 KB). So the request is given an 8 KB block.
Basically: split memory into power-of-two chunks until the smallest chunk that still satisfies the request is found. When a block is freed, it is coalesced with its buddy if the buddy is also free.
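The size calculation and the splitting chain from the example can be sketched directly (unit-agnostic; the 64/8 values match the example above):

```python
# Buddy allocation rounds each request up to the next power of two, then
# splits blocks in half until a block of that size is reached.
def buddy_size(request):
    size = 1
    while size < request:
        size *= 2
    return size

def split_chain(total, target):
    chain, size = [total], total
    while size > target:
        size //= 2
        chain.append(size)
    return chain

print(buddy_size(7))       # 8: a request for 7 units gets an 8-unit block
print(split_chain(64, 8))  # [64, 32, 16, 8]
```

The power-of-two sizes are what make coalescing cheap: a block's buddy address can be computed with a single XOR of the block address and its size.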
How does paging differ from segmentation?
Instead of allocation of variable sized chunks we use fixed size chunks
What is a page table?
A page table is a per-process data structure that helps with address translation.
It keeps track of where each virtual page is in physical memory.
What does a page table store?
Stores address translations for each virtual page of the address space.
Basically mappings from virtual page number to physical frame number
What does a page table entry (PTE) contain?
Each PTE contains:
- PFN (physical frame number)
- Valid bit: is this page in use by the process?
- Protection bits: read/write permissions
- Present bit: is this page in memory?
- Dirty bit: has this page been modified?
- Accessed bit: has this page been recently accessed?
How does address translation happen in software(paging)?
- Most significant bits of VA give the VPN
- Page table maps VPN to PFN
- PA is obtained from PFN and offset within a page
- The MMU stores the (physical) address of the start of the page table, not all entries
- Walks the page table to get relevant PTE
Why is paging slow?
Each memory reference requires one extra memory access to read the page table entry.
How do we determine the size of VPN?
Example: a 64-byte address space needs 6-bit virtual addresses (2^6 = 64). If the page size is 16 bytes, we need to address 4 pages, so we need 2 bits for the VPN and the remaining 4 bits are the offset.
How do we calculate the VPN and offset for virtual address 21 (6-bit addresses, 16-byte pages)?
21 in binary: 010101
VPN : 01
Offset: 0101
Explain how a page table is used
The OS indexes the array by the virtual page number (VPN) and looks up the page table entry (PTE) at that index in order to find the desired physical frame number (PFN).
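The lookup can be sketched end to end for the 16-byte-page example used in these cards (the page-table contents are made up; a real MMU would trap on an invalid entry):

```python
# Linear page-table translation: split the VA into VPN and offset,
# index the page table by VPN, and combine the PFN with the offset.
PAGE_SIZE = 16      # 16-byte pages -> 4-bit offset
OFFSET_BITS = 4

def translate(va, page_table):
    vpn = va >> OFFSET_BITS
    offset = va & (PAGE_SIZE - 1)
    pfn = page_table[vpn]
    return (pfn << OFFSET_BITS) | offset

# VA 21 = 0b010101: VPN = 01 (1), offset = 0101 (5)
page_table = {0: 3, 1: 7}
print(translate(21, page_table))  # PFN 7, offset 5 -> 7*16 + 5 = 117
```

The offset passes through untranslated, which is why pages must be power-of-two sized.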
What does the Translation Lookaside Buffer (TLB) contain?
- A cache of recent VA-to-PA mappings
Why do we use TLB’s?
To speed up translation
How does the TLB work(Translation Lookaside Buffer)?
- The HW checks if the desired virtual-to-physical address mapping is in the TLB (cache)
- If so (TLB hit), use the TLB translation (without consulting the page table in memory)
- Else (TLB miss), consult the page table (which has all the translations) and update the TLB
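The hit/miss flow can be sketched with a small capacity-limited cache (a toy model; the random-eviction policy is one of the two policies these cards mention, and the reference string is invented):

```python
import random

# TLB sketch: a small cache of VPN -> PFN mappings. On a hit the page
# table is skipped; on a miss we walk the page table and fill the TLB,
# evicting a random entry when full.
def run_tlb(refs, page_table, capacity, seed=0):
    random.seed(seed)
    tlb, hits, misses = {}, 0, 0
    for vpn in refs:
        if vpn in tlb:
            hits += 1
        else:
            misses += 1
            if len(tlb) >= capacity:
                del tlb[random.choice(list(tlb))]  # random eviction
            tlb[vpn] = page_table[vpn]
    return hits, misses

page_table = {v: v + 100 for v in range(4)}
print(run_tlb([0, 1, 0, 1, 2, 0], page_table, capacity=4))  # (3, 3)
```

Repeated accesses to the same few pages (locality) are what make a tiny TLB effective.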
What are the problems with TLB’s?
- If two pages map to the same TLB entry, only one of them can be remembered at a time
- Only remembers a small number of entries
- Context switches
- The TLB contains translations only valid for the running process
- When switching between processes HW/OS must ensure that next proc does not use translation from previous process. HW cannot distinguish which entry is meant for which process
Which replacement policies can the TLB use when it is full?
Two approaches:
- Evict entries at random
- Evict the least recently used entry
Why can’t we just increase the page size to reduce the size of a page table?
- Leads to waste within each page (internal fragmentation)
- Therefore most systems use relatively small pages
- So our problem will not be solved that easily
How can we solve the problem of page tables being too big?
- Combine page table and segmentation
- Multi level page table
- Inverted page table (only one page table, with an entry per physical frame)
What are the problem of using segmentation and page table (hybrid)?
- External fragmentation
- Not flexible
- If we use large page table we might still get a lot of waste
What is multi level page table?
A multi-level page table consists of multiple levels of page tables mapped to each other: entries at one level point to page tables at the next level.
An example:
- An application accesses the pagetable
- The first 10 bits maps the address to the second pagetable
- The next 10 bits maps the address to the actual physical memory
- As always, the offset goes straight through
What are the advantages with multi level page table?
(a) The page table can be allocated in page-sized chunks, so the OS can use any free frame when it needs memory for it.
(b) The page table can consume much less space if there are large regions of unused memory.
(c) Each segment (code, data, stack) can be managed separately.
What are the disadvantages with multi level page table?
Extra memory references to access address translation tables can slow programs down by a factor of two or more.
What is an inverted page table (don’t say an inverted page table, i know you tried!)
- Instead of having many page tables (one per process of the system) we keep a single page table that has an entry for each physical page of the system
- The entry tells us which process is using this page and which virtual page of the process maps to this physical page
- Finding the correct entry is now a matter of searching through this data structure
Only one page table with all the physical addresses <3
What is swapping, within the page table section?
Usually a page table is big and will be stored on disk.
The goal of swapping is to have the illusion of large virtual memory for multiple concurrently running processes
Explain the mechanism of swapping within the page table section?
- When translating VA to PA, MMU reads present bit
- The present bit in the page table entry indicates whether a page of a process resides in memory or not
- If page present in memory, directly accesses
- If page not in memory, MMU raises a trap to the OS - page fault (trap)
How do we handle a page fault exception?
- Page fault traps OS and moves CPU to kernel mode
- The OS chooses a page to evict from RAM and write to DISK
- If the page is DIRTY, it needs to be written back to DISK first
- The OS reads the page from DISK and puts it in RAM
- The OS then changes the page table to map the new page
- The OS jumps back to the instruction that caused the page fault and re-executes it
What are some of the problems that can occur with swapping mechanism during a page fault?
- When servicing a page fault, what if the OS finds that there is no free frame to hold the faulting page?
- The OS must swap out an existing page and then swap in the faulting page: too much work to do on demand!
What page replacement policy do we have?
- Optimal: replace the page not needed for the longest time in the future (not practical, as we cannot predict the future)
- FIFO: replace page that was brought into memory earliest (may be a popular page)
- LRU/LFU: replace the page that was least recently (or frequently) used in the past
What is the LRU policy?
the page that is used least recently will be replaced
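LRU can be sketched with an ordered dict that tracks recency of use (the reference string and frame count are invented for the example):

```python
from collections import OrderedDict

# LRU page replacement: on a fault, evict the page whose last use is
# furthest in the past.
def lru_faults(refs, frames):
    mem = OrderedDict()
    faults = 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(mem) >= frames:
                mem.popitem(last=False)  # evict least recently used
            mem[page] = True
    return faults

# 1, 2, 3 miss; 1 hits; 4 misses and evicts 2 (least recently used).
print(lru_faults([1, 2, 3, 1, 4], frames=3))  # 4
```

Real hardware cannot afford to timestamp every access, so practical systems approximate LRU with the accessed bit from the PTE.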
What is LFU policy?
LFU (least frequently used) replaces the page that has been used least often in the past.
What roles does an operating system have?
- Referee: Operating systems manage resources shared between different applications running on the same physical machine. Examples: Memory protection, scheduler
- Illusionist: Operating systems provide an abstraction of physical hardware to simplify application design. Examples: Virtual memory, file system
- Glue: Operating systems provide a set of common services that facilitate sharing among applications. Examples: HAL, system calls, copy paste
What is the purpose of dual-mode operation on a CPU?
There are two modes of operation on a CPU: Kernel mode and user mode. The purpose of the two modes is to limit the privileges of a user program, so that it cannot cause harm to the machine, operating system, or other user processes running on the system.
What is a cache?
- Copy of result of computation that can be accessed quickly
- Partial copy of data that can be accessed quickly
Give two examples of virtualization?
- CPU
- Memory
Explain the software design principle of separating mechanism from policy
By separating mechanism and policy, we can provide a generic low-level mechanism (machinery) that can be reused across multiple high-level policy decisions. Such policy decisions can more easily be tuned and adapted to different environments and workloads, without changing the underlying mechanism.
- Policies are ways to choose which activities to perform. Mechanisms are the implementations that enforce policies.
Give examples on policies
- Scheduling
- Allocation of resources
- Page fault handling
- Segmentation handling
Give examples on mechanism?
- Context switching
- Swapping
- Authorization
Give the three main states that a process can take on
- Ready
- Running
- Blocked (waiting)
Explain why a process has state?
A process can be in one of several states at a time. This is a way for the OS to manage the processes and to facilitate efficient resource utilization. In particular, a Blocked process will not consume CPU resources while waiting for IO to finish, which would otherwise be extremely wasteful.
Explain what a process is, and how the operating system handles processes with respect to memory and execution
A process is an instance of a computer program that is being executed by one or many threads (with limited privileges).
A process is divided into two static components: code and data,
and two dynamic components: heap and stack.
There are two parts to a process:
- Thread
- Address space
What is a thread?
- A thread is a path of execution within a process. A process can contain multiple threads.
- a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler
Why do we want to use threads?
- Parallelism
- Concurrency
- Even if no parallelism, concurrency of threads ensures effective use of CPU when one of the threads blocks (for I/O for example)
- Better average response time
- Better performance
What is parallelism?
- Running multiple processes in parallel on multiple CPU’s
What is concurrency?
- Running multiple threads/process at the same time, even on a single CPU core, by interleaving their execution
What is the main difference between parallelism and concurrency?
Parallelism is about running multiple processes simultaneously on multiple CPUs, while concurrency is about interleaving multiple threads/processes on a single CPU so they make progress at the same time.
How does the OS schedule threads?
- OS schedules threads that are ready to run independently, much like processes
What is TCB?
Thread Control Block (TCB) is a data structure in the operating system kernel which contains thread-specific information needed to manage it.
What can happen with threads that share some data?
Race condition
What is a race condition?
- When multiple threads access the same data concurrently, the result depends on the timing of their execution
- Example: incrementing a counter actually takes 3 assembly instructions (load, add, store). If two threads both load the old value 30 before either stores, both compute 31, so the final value is 31 instead of 32; one update is lost.
What is a critical section?
- Portion of code that can lead to race condition
What is mutual exclusion?
- Only one thread should be executing critical section at any time
How can we avoid race condition?
Mutual exclusion
How can we get mutual exclusion?
Atomicity of the critical section
How is atomicity achieved?
- Locks
- Condition variables
- Channels
etc
What are some of the problems with atomicity/mutual exlusion?
Besides atomicity, threads often need ordering: one thread must wait for another to complete some action before it continues. This arises, for example, when a process performs a disk I/O and is put to sleep; when the I/O completes, the process needs to be roused from its slumber so it can continue.
How does the lock mechanism works?
- We can use a special lock variable to protect data
- All threads accessing a critical section share a lock
- One thread succeeds in locking and becomes the owner of the lock
- Other threads that try to lock cannot proceed further until lock is released by the owner
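The lock mechanism maps directly onto Python's threading.Lock (a sketch of the counter example; the iteration count 10000 is arbitrary):

```python
import threading

# Two threads increment a shared counter; the lock makes the
# read-modify-write in the critical section atomic with respect to the
# other thread, so no updates are lost.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:        # acquire; released automatically on block exit
            counter += 1  # critical section

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000: every increment survives
```

Without the lock, interleaved load/add/store sequences could lose updates, as described in the race-condition card.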
What are the goals of locks?
- Mutual exclusion
- Fairness: all threads should eventually get the lock and no thread should starve
- Low overhead: acquiring, releasing and waiting for lock should not consume too many resources
What are some of the different types of locks?
- Spin lock (the thread waits in a loop, spinning until it acquires the lock)
- Sleep lock (the thread sleeps until the lock is free; sleeping means the waiting thread is idle instead of burning CPU)
What is a spin lock?
- Spins until lock is acquired
- While loops
What is a sleep lock?
- Instead of spinning for a lock, a contending thread could simply give up the CPU and check back later
- Yield() moves thread from running to ready state
When should we use locks?
- A lock should be acquired before accessing any variable or data structure that is shared between multiple threads of a process
What is the difference between fine-grained and coarse-grained locks?
One big lock for all shared data (coarse-grained) vs. separate locks for different data (fine-grained)
What are the positives and negatives of fine-grained locks?
- Fine grained allows more parallelism
- Multiple fine-grained locks may be harder to manage
What are the advantages and disadvantages with coarse grained locks?
- Slower (less parallelism)
- Easier to manage
Why do we want to use sleeping locks?
- CPU wasted by spinning contending threads
- More so if a thread holds spinlock and blocks for long
What are the advantages and disadvantages with spin locks?
- Provides mutual exclusion
- spin locks don’t provide any fairness guarantees
- Spin locks can have painful performance overheads, especially in the single-CPU case
How does Compare-and-swap works for locking?
it simply checks if the flag is 0 and if so, atomically swaps in a 1 thus acquiring the lock
How does Load-Linked and Store-Conditional for locking works?
- The load-linked operates much like a typical load instruction, and simply fetches a value from memory and places it in a register
- The key difference comes with the store-conditional, which only succeeds (and updates the value stored at the address just load-linked from) if no intervening store to the address has taken place
How does Fetch-And-Add for locking works?
- Fetch-and-add instruction atomically increments a value while returning the old value at a particular address.
- Unlock is accomplished simply by incrementing the turn such that the next waiting thread (if there is one) can now enter the critical section
What is a two phase lock?
- A two-phase lock realizes that spinning can be useful, particularly if the lock is about to be released. So in the first phase, the lock spins for a while, hoping that it can acquire the lock. However, if the lock is not acquired during the first spin phase, a second phase is entered, where the caller is put to sleep, and only woken up when the lock becomes free later
- In other words, a hybrid between a spinlock and a sleeping lock
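A minimal sketch of the two phases using Python's threading.Lock; the spin_tries cutoff is an assumed tuning knob, not a standard value:

```python
import threading

def two_phase_acquire(lock, spin_tries=1000):
    """Phase 1: spin briefly, hoping the lock is about to be released.
    Phase 2: give up spinning and block (sleep) until it is free."""
    for _ in range(spin_tries):
        if lock.acquire(blocking=False):  # non-blocking try = spinning
            return
    lock.acquire()  # blocking acquire puts the thread to sleep
```

On an uncontended lock the first non-blocking try succeeds and no sleep ever happens, which is the case two-phase locks are optimizing for.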
Why are condition variables used?
To achieve synchronization between threads
How do condition variables work?
- A condition variable (CV) is a queue on which a thread can put itself to wait for some condition to become true
- Another thread that makes the condition true can signal the CV to wake up a waiting thread
- Signal wakes up one waiting thread; broadcast wakes up all waiting threads
How do we check a condition using condition variables?
In a while loop, not an if statement.
We do this to avoid corner cases where a thread is woken up even though the condition is not true (spurious wakeups).
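The wait-in-a-while-loop pattern looks like this with Python's threading.Condition; the waiter/signaler names and the shared flag are illustrative:

```python
import threading

ready = False
cv = threading.Condition()
log = []

def waiter():
    with cv:
        while not ready:  # while, not if: re-check the condition after waking
            cv.wait()     # atomically releases the lock and sleeps
        log.append("condition was true when we proceeded")

def signaler():
    global ready
    with cv:
        ready = True      # make the condition true first...
        cv.notify()       # ...then wake one waiting thread

t = threading.Thread(target=waiter)
t.start()
signaler()
t.join()
```

Even if the waiter is woken without the condition holding, the while loop sends it straight back to sleep, so the code after the loop only ever runs with the condition true.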
What is a semaphore?
- Semaphore is a variable with an underlying counter
- Synchronization primitive like condition variable
- Acts like a lock
- Shared between threads
How does a semaphore work?
If a semaphore is going to work like a mutex lock, it has an initial value of 1:
Thread 1: decrements the counter (acquiring the lock) and runs the code in the critical section. When finished, it calls post(), which increments the counter by 1 (the lock is released).
- Now thread 2 can run and etc.
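The mutex-style use of a semaphore (initial value 1) can be sketched with Python's threading.Semaphore; acquire/release play the roles of wait()/post():

```python
import threading

sem = threading.Semaphore(1)  # initial value 1 -> behaves like a mutex
counter = 0

def add_one():
    global counter
    sem.acquire()   # wait(): decrements; blocks if the value is 0
    counter += 1    # critical section, one thread at a time
    sem.release()   # post(): increments; wakes a waiting thread

threads = [threading.Thread(target=add_one) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the semaphore the read-increment-write on counter could interleave; with it, all four increments land.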
Why are semaphores special?
- Can be used to set order of execution between threads like Condition Variables
What are some of the concurrency bugs?
We have two:
- Deadlock bugs (threads competing for the same shared resources stop making progress)
- Non-deadlock bugs
What is a deadlock bug?
- Threads cannot execute any further; each waits for a resource another thread holds
What is a non-deadlock bug?
Threads run to completion, but produce an incorrect result
- For example, atomicity-violation bugs
How can we fix a non-deadlock bug?
Use locks:
- Acquire a lock around the problematic region
- Check that the lock actually covers the whole critical section
How can a deadlock occur?
- Mutual exclusion: a thread claims exclusive control of a resource
- Hold-and-wait: thread holds a resource and is waiting for another
- No preemption: thread cannot be made to give up its resource (cannot take back a lock)
- Circular wait: there exists a cycle in the resource dependency graph
- All four of the above conditions must hold for a deadlock to occur
How can we prevent circular wait (deadlock bug)?
- Acquire locks in a certain fixed order
- This total ordering must be followed by all threads
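A sketch of enforcing a fixed total order; here the order is (as an assumption of this sketch) the locks' id() values, which is arbitrary but consistent, so every thread takes the same locks in the same sequence and no cycle can form:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def acquire_in_order(*locks):
    """Always take locks in one fixed total order so that two threads
    can never hold each other's next lock (no circular wait)."""
    ordered = sorted(locks, key=id)
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(locks):
    for lock in locks:
        lock.release()
```

Even if one caller writes (lock_b, lock_a) and another (lock_a, lock_b), both acquire in the same order, which is the whole point.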
How can we prevent hold-and-wait (deadlock bug)?
- Require that all processes request all resources at one time.
- Require that processes holding resources must release them before requesting new resources, and then re-acquire the released resources along with the new ones in a single new request.
- But this method may reduce concurrent execution and performance gains
How can we generally avoid deadlock bugs, and what do we do if one occurs?
- Deadlock avoidance: if the OS knows which process needs which lock, it can schedule the processes so that deadlock does not occur.
- Detect and recover: reboot the system or kill the deadlocked processes.
How do we calculate average response time?
Steps:
- Calculate each process response time
- Add them all together and divide by the number of processes.
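The steps above can be sketched as follows, assuming the usual definition response time = time of first run minus arrival time; the three-process example values are made up (all arrive at t=0 and run FIFO, 10 time units each):

```python
def avg_response_time(processes):
    """processes: list of (arrival_time, first_run_time) pairs.
    Response time = time of first run - arrival time."""
    responses = [first_run - arrival for arrival, first_run in processes]
    return sum(responses) / len(responses)

# Assumed example: arrivals at t=0, FIFO, so first-run times are 0, 10, 20.
example = [(0, 0), (0, 10), (0, 20)]
```

For the example, the average is (0 + 10 + 20) / 3 = 10.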
Explain the term temporal locality.
Temporal locality: programs tend to reference the same instructions and data that they have recently accessed (time dimension).
For example, instructions inside a loop or a data structure that is repeatedly accessed.
By caching these memory values, we can improve performance.
Explain the term spatial locality.
Programs tend to reference data near other data that has been recently referenced (space dimension).
For example, the next instruction to execute is usually near the previous one, and different fields in the same data structure tend to be referenced at nearly the same time.
To exploit this, caches are often designed to load a block of data at a time, instead of a single location.
Belady’s anomaly occurs when?
The number of page faults increases despite adding more page frames.
Will ONLY happen with FIFO page scheduling algorithm.
Will NOT happen with LRU, LFU or Optimal page scheduling algorithms.
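A small FIFO simulation over the classic reference string makes the anomaly concrete: 3 frames give 9 faults, while 4 frames give 10. Only the function is a sketch; the reference string is the standard textbook one:

```python
from collections import deque

def fifo_page_faults(refs, num_frames):
    """Count page faults for a reference string under FIFO replacement."""
    frames = deque()
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()  # evict the page loaded longest ago
            frames.append(page)
    return faults

# Classic reference string exhibiting Belady's anomaly under FIFO.
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
```

More frames should never hurt, yet under FIFO they do here, which is exactly Belady's anomaly.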
What is the virtualization of memory represented by?
And what is the virtualization of the cpu represented by?
Memory: the address space.
CPU: the process.
What is virtualization?
Virtualization is the general technique for sharing a physical resource (processor, memory, disk, etc.) by transforming it into a more general and easy-to-use virtual form.
Which signal is used to synchronize the CPU with an I/O device when the device wishes to notify that it is ready?
Interrupts, because it is used to signal the CPU that some external event has occurred that may require attention.
As the CPU executes instructions, it checks whether an interrupt has occurred. If so, it completes the current instruction before handling the interrupt.
Consider an interrupt while a processor is executing an instruction. What happens?
The processor finishes the execution of the instruction before handling the interrupt.
Why is memory protection important? Name a simple technique we can use and how it works.
Because it restricts the user from accessing kernel memory or other processes’ memory.
Base and bounds technique
Base: Start of process memory
Bounds: Process length.
The registers mark a variable-sized chunk of memory and can only be changed in kernel mode.
When the CPU fetches an instruction or data, the PC or address reference is checked against the range (base, base+bounds). If it is in range, proceed; otherwise, raise an exception.
What is the base and bounds technique?
Virtual memory where access to computer memory is controlled by two CPU registers called the base and bounds registers.
It allows us to place address space anywhere we’d like in physical memory, and do so while ensuring that the process can only access its own address space.
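The hardware check can be sketched as follows; the register values in the usage example are made up, and the SegfaultError class stands in for the hardware trap to the OS:

```python
class SegfaultError(Exception):
    """Stands in for the hardware exception / trap to the OS."""

def translate(virtual_addr, base, bounds):
    """Base-and-bounds translation: bounds-check the virtual address,
    then relocate it by adding the base register."""
    if virtual_addr < 0 or virtual_addr >= bounds:
        raise SegfaultError("address outside the process's address space")
    return base + virtual_addr
```

With base = 16384 and bounds = 4096, virtual address 100 maps to physical address 16484, while virtual address 5000 traps.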
What translates from virtual address to physical address?
The memory management unit (MMU).
Why does increasing the RAM of a computer typically improve performance?
A page fault is the act of accessing a page that is not in physical memory. Adding more RAM means more pages fit in physical memory, and therefore fewer page faults.
Why does the translation look-aside buffer not necessarily need to be saved on a context switch between processes?
A Translation lookaside buffer (TLB) is a CPU cache that memory management hardware uses to improve virtual address translation speed. A TLB has a fixed number of slots that contain page table entries, which map virtual addresses to physical addresses. On a context switch, some TLB entries can become invalid, since the virtual-to-physical mapping is different. The simplest strategy to deal with this is to completely flush the TLB
Classify each of the following as mechanism/policy/neither
Scheduling, context switching, limited direct execution, page replacement, fragmentation, address translation, best/worst fit, locality.
- Scheduling - Policy
- Context switching - Mechanism
- Limited direct execution - Mechanism
- Page replacement - Policy
- Fragmentation - Neither
- Address translation - Mechanism
- Best/worst fit - Policy
- Locality - Neither
What is the purpose of the free list in paged memory?
Keeping track of unused physical page frames.
Explain the software design principle of separating mechanism from policy.
By separating them, we can provide machinery that can be reused across multiple high-level policy decisions. Such policy decisions can more easily be tuned and adapted to different environments and workloads, without changing the underlying mechanism.
Mechanism - What the computer does
Policy - How the mechanisms are used
Can easily change Policy without having to change mechanism.
What is the working set model attempting to model?
The working set model attempts to model the critical mass of a program's memory pages, such that most memory references will result in a cache hit, thus improving the program's performance.