W2 Flashcards
What is parallelism? When can it take place?
The simultaneous execution of multiple processes or threads
When there exist multiple processors or cores
What is concurrency? How does it differ from parallelism? Can concurrency take place on a single-core processor?
The ability of an operating system to manage multiple processes at the same time, allowing them to make progress independently
Processes are not executed simultaneously, but rather multiple tasks are dealt with at once
Yes, as only one process is actually being executed at any one time
What are three disadvantages of processes?
- Creation overhead, e.g., in terms of memory space
- Complex inter-process communication
- Process-switching overhead (mode + context switch, including save/restore contexts of execution)
What is the difference between processes and threads? Definition, Ownership, Address Space, Info
Process is an independent program in execution with its own memory space. Thread is the smallest unit of execution within a process.
Processes define ownership on resources, while threads may share access to the same variables, code, or files.
All threads operate in their process’ address space, while different processes have different address spaces.
Processes have their context of execution saved in a PCB, while threads have an associated execution state, which is the PCB extended with thread-specific info.
What additional info does a thread’s execution state contain in addition to the PCB?
Program counter, stack pointer, return addresses of function calls, values of processor registers
Why do threads operate faster than processes?
Thread creation and termination is much faster as no memory address space copy is needed
Context switching between threads of same process is much faster, as no need to switch whole address space, only swap CPU registers content
Communication between threads is faster and more directly under the programmer’s control than communication between processes
What are some reasons to introduce threads? What is their primary downside?
Reasons:
Increase concurrency level with better performance
Use natural concurrency within a program
Downside:
No protection against other threads in the same process (risk of faults, memory sharing)
What are user threads?
Threads managed by a user-level library without kernel intervention, with the OS unaware of these threads.
What are kernel threads?
Threads managed and scheduled directly by the operating system kernel
What are the four multithreading mapping models?
Many-to-one Model
One-to-one Model
Many-to-many Model
Two-level Model
What is the many-to-one multithreading mapping model?
multiple user threads are created and managed by a user-level thread library
all these threads are mapped to a single kernel thread
kernel treats the entire process as a single thread, regardless of the number of user threads
implemented entirely in user space
Advantages of many-to-one multithreading mapping model? Disadvantages?
Adv:
no need for kernel involvement, fast and easy to deploy
Disadv:
only one user thread accesses the kernel at a given time;
thus, multiple threads cannot run in parallel on multiple processors;
blocking system call from one user thread blocks all user threads of the process;
What is the one-to-one multithreading model?
each user-level thread maps to a kernel thread
kernel fully aware of all threads in process
threads managed and scheduled by OS
Advantages of one-to-one multithreading model? Disadvantages?
Adv: allows for concurrency between all threads
Disadv: all threads managed by kernel, with negative impacts on performance in case of many user threads
What is the many-to-many model?
limited number of kernel threads
multiple user threads mapped to a pool of kernel threads
user-level thread library schedules user threads onto available kernel threads, and kernel schedules kernel threads on the CPU
Advantages of many-to-many multithreading model? Disadvantages? What is the concurrency level limited by in this model?
Advantages: concurrency, bounded performance cost for kernel
Disadvantage: more complex to implement
Number of kernel threads
What is the two-level multithreading model?
maps multiple user threads to a smaller number of kernel threads (like in many-to-many model)
certain user threads can also be bound directly to kernel threads (like in one-to-one model)
How does thread switching work as a kernel activity? How is it handled by user-level thread libraries?
- Kernel maintains execution state of thread, works similar to process switching
- Library maintains execution state of threads, must obtain control in order to switch threads; responsibility of programmer to call library to yield execution
What are thread pools? What are their advantages?
A collection of pre-initialised threads that can be reused to execute tasks, avoiding the overhead of creating and destroying threads repeatedly
Adv:
- slightly faster to service a request with an existing thread than creating a new one
- allows the number of user threads in an application to be bounded by the size of the pool
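As an illustration, Python’s concurrent.futures.ThreadPoolExecutor implements exactly such a pool; the sketch below bounds eight tasks to three reusable worker threads (task contents are made up):

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def task(n):
    # Each task returns its result plus the name of the pooled worker
    # thread that ran it, so we can see threads being reused.
    return n * n, threading.current_thread().name

with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(task, range(8)))

squares = [r for r, _ in results]
workers = {name for _, name in results}
print(squares)            # [0, 1, 4, 9, 16, 25, 36, 49]
print(len(workers) <= 3)  # True: at most 3 distinct workers served 8 tasks
```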
What are some common processor scheduling algorithms?
First-Come First-Served (FCFS)
Shortest-Job-First (SJF)
Round Robin (RR)
Priority Scheduling - Rate Monotonic (RM), Deadline Monotonic (DM), Earliest Deadline First (EDF)
Multilevel Queue
Multilevel Feedback Queue
What does the decision mode define?
When scheduling decisions are taken
Preemptive vs Non-Preemptive Scheduling:
In preemptive scheduling, the OS can interrupt a running process to allocate CPU time to another process
In non-preemptive scheduling, once a process starts executing, it runs until it completes or voluntarily relinquishes control of the CPU.
Time-Based vs Event-Based Scheduling:
In time-based scheduling, scheduling decisions are made based on a regular time slice or a clock interrupt. Timers are used to determine the necessity of context switches.
In event-based scheduling, decisions are made in response to specific events rather than fixed time intervals, such as the arrival of new processes, completion of I/O operations, etc.
What is the priority function?
the function defining which ready tasks are chosen for execution next
What is the arbitration rule?
Breaks ties between tasks
How is the waiting time defined?
Total amount of time a process spends in the ready queue, waiting for CPU time to be allocated, excluding the time the process is actively executing.
How is the response time defined?
The amount of time that elapses between when a user initiates a request (or a process is submitted) and when the system first produces a response or starts delivering output, i.e., when the process starts being executed.
How is the turnaround time defined?
The total time taken to execute a particular process, from the time it is submitted to the time it completes execution, including both the time spent waiting in the ready queue and the execution time.
What is throughput?
The number of jobs or processes that are completed per unit of time
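The four metrics above can be computed for a concrete schedule; a minimal sketch, assuming a made-up FCFS workload of (arrival, burst) pairs:

```python
# Hypothetical FCFS workload: (arrival_time, burst_time) pairs,
# already sorted by arrival time. All numbers are made up.
jobs = [(0, 5), (1, 3), (2, 8)]

clock = 0
waiting, turnaround, response = [], [], []
for arrival, burst in jobs:
    start = max(clock, arrival)
    response.append(start - arrival)        # submission until first execution
    clock = start + burst
    turnaround.append(clock - arrival)      # submission until completion
    waiting.append(turnaround[-1] - burst)  # turnaround minus service time

throughput = len(jobs) / clock              # jobs completed per unit time
print(waiting)     # [0, 4, 6]
print(turnaround)  # [5, 7, 14]
print(response)    # [0, 4, 6] — equals waiting, since FCFS is non-preemptive
print(throughput)  # 0.1875
```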
What is the priority function of FCFS? Is it preemptive or non-preemptive? What is its arbitration rule?
Tasks are scheduled in order of their arrival
Non-preemptive
Random choice among processes that arrive at the same time
What is the priority function of SJF scheduling? What is its decision mode? What is its arbitration rule? What is it optimal with respect to? What is its preemptive version called?
Tasks are scheduled in order of shortest execution time
Non-preemptive
Chronological or random ordering
Average waiting time
Shortest-Remaining-Time-First
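The optimality of SJF with respect to average waiting time can be illustrated numerically; a Python sketch with made-up burst times, all jobs arriving at t=0:

```python
# All jobs arrive at t=0; compare average waiting time in arrival order
# (FCFS) vs shortest-burst-first order (SJF). Burst times are made up.
def avg_wait(bursts):
    total, clock = 0, 0
    for b in bursts:
        total += clock    # each job waits for all jobs scheduled before it
        clock += b
    return total / len(bursts)

bursts = [8, 1, 4, 2]
fcfs = avg_wait(bursts)         # 7.5
sjf = avg_wait(sorted(bursts))  # 2.75 — SJF minimises average waiting time
print(fcfs, sjf)
```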
How does Round Robin Scheduling work? Arbitration Rule?
Each process in the ready queue is allocated a fixed time slice, i.e., a “quantum”
If a process completes within its quantum, it releases the CPU voluntarily.
If it does not, it is preempted, and the next process in the queue is given a turn to execute
Random arbitration
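The quantum-based preemption above can be simulated; a minimal Python sketch, assuming all jobs arrive at t=0 and made-up burst times:

```python
from collections import deque

def round_robin(bursts, quantum):
    # Simulate RR over jobs that all arrive at t=0; returns completion times.
    queue = deque(enumerate(bursts))
    remaining = list(bursts)
    clock, finish = 0, [0] * len(bursts)
    while queue:
        i, _ = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append((i, remaining[i]))  # preempted: back of the queue
        else:
            finish[i] = clock                # completed within its quantum
    return finish

print(round_robin([5, 3, 1], quantum=2))  # [9, 8, 5]
```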
What is priority scheduling? What is an issue with it?
A priority number is associated with each task that is either assigned by the programmer or computed through a function that depends on task properties. CPU is allocated to task with highest priority.
starvation, i.e., low priority processes never executing
What is Rate Monotonic (RM) Priority Scheduling? Is it preemptive or non-preemptive? Arbitration function?
Processes are often periodic, i.e., activated every period T. RM prioritises the processes with the shortest period T.
Preemptive
Chronological/Random
What is Deadline Monotonic (DM) scheduling? Preemptive? Arbitration function?
Prioritises the processes with the shortest relative deadline D.
Preemptive
Chronological/Random ordering
What is Earliest Deadline First (EDF)? Decision mode? Arbitration rule?
Highest priority assigned to process with shortest remaining time until its deadline
Preemptive
Chronological/Random
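EDF’s priority function can be sketched with a min-heap keyed on absolute deadlines (the task names and deadlines below are made up):

```python
import heapq

# Ready tasks as (absolute_deadline, name) pairs; the task with the
# earliest deadline always has the highest priority.
ready = [(30, "B"), (10, "A"), (20, "C")]
heapq.heapify(ready)  # min-heap on deadline

order = [heapq.heappop(ready)[1] for _ in range(3)]
print(order)  # ['A', 'C', 'B']
```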
What is multilevel queue scheduling? How is scheduling between the queues done?
Partitions the ready queue into various sub-queues, such as:
- system processes, interactive processes, interactive editing processes, batch processes, student processes
Each queue has its own scheduling algorithm
Scheduling between queues can be done with:
1. fixed priority scheduling (where high priority queues are served first, and low priority queues last)
2. time slice, where each queue gets a certain amount of CPU time which it can schedule amongst its processes.
What is a multilevel feedback queue? What is a multilevel feedback queue scheduler defined in terms of?
Instead of processes being assigned to one, fixed sub-queue, tasks can be moved across sub-queues.
- Number of queues
- Scheduling algos for each queue
- Method used to determine when to upgrade/demote a task
- Method used to determine which queue a task will enter when that task needs service.
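These parameters can be illustrated with a minimal two-level feedback queue sketch in Python (quanta and burst times are made up; a task that exhausts its quantum is demoted to the lower queue):

```python
from collections import deque

def mlfq(bursts, quanta=(2, 8)):
    # Two queues: index 0 (high priority, short quantum) and
    # index 1 (low priority, long quantum). All jobs arrive at t=0.
    queues = [deque(enumerate(bursts)), deque()]
    remaining = list(bursts)
    clock, finish = 0, [0] * len(bursts)
    while any(queues):
        level = 0 if queues[0] else 1      # serve the higher queue first
        i, _ = queues[level].popleft()
        run = min(quanta[level], remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queues[1].append((i, remaining[i]))  # quantum used up: demote
        else:
            finish[i] = clock
    return finish

print(mlfq([1, 6, 3]))  # [1, 9, 10]
```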
What is a sequential task? What takes place between the states?
A discrete sequence of states, e.g., observable in program code
Indivisible, atomic steps/actions
What is the single reference rule?
A statement in a programming language may be regarded as atomic if at most one reference to a shared variable occurs
How can a non-atomic statement S be regarded as atomic?
By enclosing it in angle brackets as <S>, e.g., <x := x+1>
What is a trace? Does it maintain the internal execution of the individual tasks? What do the actions refer to?
A sequence of atomic actions, obtained by interleaving the execution of concurrent tasks
Yes
Assignments or tests
What is a “possible trace”?
A trace in which all tests yield true
What are shared variables?
Variables accessible to several tasks (processes/threads)
What is a private variable?
Variable accessible only to a single task
What is interference?
disturbing others’ assumptions about the state
What is meant by race condition?
A situation in which correctness depends on the execution order of concurrent activities
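A sketch in Python: four threads perform a read-modify-write on a shared counter; the lock removes the race and makes the result deterministic (without it, interleaved updates could be lost):

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:        # without this lock, the read-modify-write
            counter += 1  # below could interleave and lose updates

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 — deterministic only because of the lock
```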
What is synchronisation?
Ordering the execution, i.e., forbidding certain traces
What are the five requirements on synchronisation solutions? Explain each shortly
- Functional correctness - satisfying the given specification
- Minimal waiting - waiting only when correctness is in danger
- Absence of deadlocks - don’t manoeuvre the system into a state such that progress is no longer possible
- Absence of livelocks - a livelock occurs when two or more processes continuously change their state in response to each other without making any progress
- Fairness in competition:
- weakly, each contender should eventually be admitted to proceed
- strongly, a bound on the waiting time for each contender can be placed
How does Peterson’s Algorithm work? Write out the structure of the algorithm?
Uses a shared variable t to denote the turn, and two boolean flags bX and bY to signal each task’s intent to enter, ensuring minimal waiting.
void Px() {
  while (true) {
    bX = true;                        // announce intent to enter
    t = Y;                            // give the turn to the other task
    while (bY AND t != X) { skip; }   // wait while Y wants in and it is not X's turn
    x = x + 1;                        // critical section
    bX = false;                       // exit protocol
  }
}
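The structure above can be transcribed into Python as an illustration (this relies on CPython’s interpreter effectively providing sequentially consistent interleavings; on real hardware with weak memory models, Peterson’s algorithm would additionally need memory barriers):

```python
import threading

b = [False, False]  # intent flags bX, bY
t = 0               # whose turn it is to yield
x = 0               # shared variable guarded by the protocol

def task(me):
    global t, x
    other = 1 - me
    for _ in range(5_000):
        b[me] = True                    # announce intent to enter
        t = other                       # give the turn to the other task
        while b[other] and t == other:
            pass                        # busy-wait (spin)
        x += 1                          # critical section
        b[me] = False                   # exit protocol

threads = [threading.Thread(target=task, args=(i,)) for i in (0, 1)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(x)  # 10000 — no increments lost
```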
What are the limitations of Peterson’s Algorithm?
Can synchronise only two tasks, with extensions for more tasks existing but being more complex.
Busy-wait, which wastes CPU Cycles
Problematic on a single core with non-preemptive scheduling: the busy-waiting process never yields the CPU, so the other process, which must run in order to end the busy wait, never gets to execute
What is a mutex?
A synchronisation primitive used to ensure that only one thread or process can access a shared resource at a time
What is a mutex initialised to? What are its two operations?
1
lock(m): < await(m>0); m=m-1 > — waits until the mutex is available (m>0), then acquires it by decrementing m
unlock(m): m=m+1 — releases the mutex by incrementing m, signalling that the mutex is now available for other processes to acquire
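These two operations match a counting semaphore initialised to 1; a Python sketch using threading.Semaphore as the counter (the worker function and shared list are made up):

```python
import threading

m = threading.Semaphore(1)  # counter initialised to 1, as in the flashcard
shared = []

def worker(i):
    m.acquire()        # lock(m): wait for m > 0, then decrement
    shared.append(i)   # critical section: exclusive access to the list
    m.release()        # unlock(m): increment, letting another thread in

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2, 3, 4] — every worker got its turn
```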
What are some challenges with priority scheduling when protecting shared resource access?
- Blocking:
- a low-priority task obtains a resource, and a high-priority task then has to wait on it
- Priority Inversion:
- occurs when a lower-priority task holds a resource that is needed by a higher-priority task
- involves a medium-priority task, which preempts the lowest-priority task, which means that the highest-priority task has to wait for both to complete
What is a solution to priority inversion that prevents a middle-priority task to preempt the lowest priority task that has the resource? What is the name of this protocol?
Adjusting the priority of the task T holding the resource to the maximum of its own priority and the priority of any other task that is blocked on T’s allocated resources.
Priority Inheritance Protocol
Which of the following components of program state are shared across threads in a multithreaded process:
register values, heap memory, global variables, stack memory
Share heap memory and global variables
Do NOT share register values and stack memory
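A small Python demonstration: writes to a global are visible to all threads of the process, while a function’s local variables live on each thread’s private stack (the values are made up):

```python
import threading

g = []  # global: shared by all threads in the process

def worker(n):
    local = n * 2    # stack-local: private to this thread
    g.append(local)  # mutating the shared global is visible to all threads

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(g))  # [0, 2, 4] — all threads wrote into the same list
```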
Can a multithreaded solution using multiple user-level threads achieve better performance on a multiprocessor system than on a single processor system?
No, since the multithreaded system cannot make use of the different processors simultaneously, as the kernel sees only a single process.
Consider a multiprocessor system and a multithreaded program written using the many-to-many threading model. Let the number of user-level threads in the program be more than the number of processors in the system. Discuss the performance implications of the following scenarios:
a. The number of kernel threads allocated to the program is less than the number of processors.
b. The number of kernel threads allocated to the program is equal to the number of processors.
c. The number of kernel threads allocated to the program is greater than the number of processors but less than the number of user-level threads.
a. When the number of kernel threads is less than the number of processors, some of the processors remain idle, since the scheduler maps only kernel threads to processors, not user-level threads.
b. When the number of kernel threads is exactly equal to the number of processors, it is possible that all of the processors are utilized simultaneously. However, when a kernel thread blocks inside the kernel (due to a page fault or while invoking system calls), the corresponding processor remains idle.
c. When there are more kernel threads than processors, a blocked kernel thread can be swapped out in favor of another kernel thread that is ready to execute, thereby increasing the utilization of the multiprocessor system.
Do processes share global variables?
No, they do not.
Which is the most general scheduling algorithm?
Multilevel Feedback-Queue Algorithm