Part 2 Flashcards
What is parallelism? When can it take place?
The simultaneous execution of multiple processes or threads.
It can take place when multiple processors or cores are available.
What is concurrency? How does it differ from parallelism? Can concurrency take place on a single-core processor?
The ability of an operating system to manage multiple processes at the same time, allowing them to make progress independently.
Unlike parallelism, the processes are not necessarily executed simultaneously; their executions are interleaved so that multiple tasks are in progress at once. Yes: on a single core only one process actually executes at any instant, while the others wait their turn.
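A minimal sketch of concurrency without parallelism, assuming Linux and pthreads (the thread names are illustrative): the whole process is pinned to a single core, yet two threads still interleave and both make progress.

```c
/* Concurrency on one core: the process is restricted to core 0 with
 * sched_setaffinity, yet the OS interleaves both threads on that core. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *count(void *arg) {
    const char *name = arg;
    for (int i = 0; i < 5; i++)
        printf("%s: step %d\n", name, i);   /* output from both threads interleaves */
    return NULL;
}

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                        /* restrict the whole process to core 0 */
    sched_setaffinity(0, sizeof(set), &set);

    pthread_t a, b;
    pthread_create(&a, NULL, count, "thread A");
    pthread_create(&b, NULL, count, "thread B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;                                /* concurrency achieved without parallelism */
}
```

Compile with gcc -pthread; the interleaved output shows both threads progressing even though only one of them executes at any instant.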
What are three disadvantages of processes?
- Creation overhead, e.g., in terms of memory space.
- Complex inter-process communication.
- Process-switching overhead (mode + context switch, including save/restore contexts of execution).
What is the difference between processes and threads? Definition, Ownership, Address Space, Info.
Process is an independent program in execution with its own memory space. Thread is the smallest unit of execution within a process.
Processes define ownership of resources, while threads may share access to the same variables, code, or files. All threads operate in their process’s address space, while different processes have different address spaces. A process has its context of execution saved in a PCB, while a thread has an associated execution state, which is the PCB extended with thread-specific info.
What additional info does a thread’s execution state hold in addition to the PCB?
Program counter, stack pointer, return addresses of function calls, values of processor registers.
Why do threads operate faster than processes?
Thread creation and termination is much faster as no memory address space copy is needed.
Context switching between threads of the same process is much faster, as there is no need to switch the whole address space, only to swap the CPU registers’ content. Communication between threads is faster and more directly under the programmer’s control than communication between processes.
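A minimal sketch, assuming POSIX (the variable name is hypothetical), of why the shared address space matters: a thread’s write to a global is visible to its creator after the join, while a forked child only modifies its own copy.

```c
/* Threads share the address space; a forked process gets its own copy. */
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static int value = 0;

static void *set_from_thread(void *arg) {
    (void)arg;
    value = 1;                       /* writes directly into the shared address space */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, set_from_thread, NULL);
    pthread_join(t, NULL);
    printf("after thread: value = %d\n", value);   /* prints 1 */

    value = 0;
    pid_t pid = fork();
    if (pid == 0) {                  /* child process: separate (copy-on-write) address space */
        value = 1;
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("after fork:   value = %d\n", value);   /* still 0 in the parent */
    return 0;
}
```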
What are some reasons to introduce threads? What is their primary downside?
Reasons:
- Increase concurrency level with better performance.
- Use natural concurrency within a program.
Downside: No protection against other threads in the same process (risk of faults, since memory is shared), as the sketch below illustrates.
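A minimal sketch of this downside, assuming POSIX threads (the counter is hypothetical): two threads increment shared memory without synchronisation, so updates are lost and the final value usually falls short of the 2,000,000 one might expect.

```c
/* Data race: both threads update the same counter with no protection. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                    /* unsynchronised read-modify-write: a data race */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```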
What are user threads?
Threads managed by a user-level library without kernel intervention, with the OS unaware of these threads.
What are kernel threads?
Threads managed and scheduled directly by the operating system kernel.
What are the four multithreading mapping models?
- Many-to-one Model.
- One-to-one Model.
- Many-to-many Model.
- Two-level Model.
What is the many-to-one multithreading mapping model?
Multiple user threads are created and managed by a user-level thread library.
All these threads are mapped to a single kernel thread. The kernel treats the entire process as a single thread, regardless of the number of user threads. Implemented entirely in user space.
Advantages of many-to-one multithreading mapping model? Disadvantages?
Adv: No need for kernel involvement, fast and easy to deploy.
Disadv: Only one user thread can access the kernel at a given time; thus, multiple threads cannot run in parallel on multiple processors, and a blocking system call from one user thread blocks all user threads of the process.
What is the one-to-one multithreading model?
Each user-level thread maps to a kernel thread.
Kernel fully aware of all threads in process. Threads managed and scheduled by OS.
Advantages of one-to-one multithreading model? Disadvantages?
Adv: Allows for concurrency between all threads.
Disadv: All threads managed by kernel, with negative impacts on performance in case of many user threads.
What is the many-to-many model?
Limited number of kernel threads.
Multiple user threads mapped to a pool of kernel threads. User-level thread library schedules user threads onto available kernel threads, and kernel schedules kernel threads on the CPU.
Advantages of many-to-many multithreading model? Disadvantages? What is the concurrency level limited by in this model?
Advantages: Concurrency, bounded performance cost for kernel.
Disadvantage: More complex to implement. The concurrency level is limited by the number of kernel threads.
What is the two-level multithreading model?
Maps multiple user threads to a smaller number of kernel threads (like in many-to-many model).
Certain user threads can also be bound directly to kernel threads (like in one-to-one model).
How does thread switching work as a kernel activity? How is it handled by user-level thread libraries?
- Kernel maintains execution state of thread, works similar to process switching.
- Library maintains execution state of threads, must obtain control in order to switch threads; responsibility of programmer to call library to yield execution.
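A minimal sketch of a library-side switch, assuming the POSIX ucontext API on Linux (names are illustrative): the user-level code itself saves and restores execution state with getcontext/makecontext/swapcontext, and the kernel never sees a thread switch.

```c
/* User-level "thread" switch: execution state is saved into one ucontext_t
 * and restored from another, entirely in user space. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, worker_ctx;
static char worker_stack[64 * 1024];     /* the user thread's own stack */

static void worker(void) {
    printf("worker: running\n");
    swapcontext(&worker_ctx, &main_ctx); /* cooperative yield back to main */
    printf("worker: resumed\n");
}

int main(void) {
    getcontext(&worker_ctx);
    worker_ctx.uc_stack.ss_sp = worker_stack;
    worker_ctx.uc_stack.ss_size = sizeof(worker_stack);
    worker_ctx.uc_link = &main_ctx;      /* resume main when worker finishes */
    makecontext(&worker_ctx, worker, 0);

    printf("main: switching to worker\n");
    swapcontext(&main_ctx, &worker_ctx); /* save main's state, restore worker's */
    printf("main: back, switching again\n");
    swapcontext(&main_ctx, &worker_ctx); /* resume worker where it yielded */
    printf("main: done\n");
    return 0;
}
```

Note how the switch only happens when the running context explicitly calls swapcontext, mirroring the programmer’s responsibility to yield in a user-level library.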
What are thread pools? What are their advantages?
A collection of pre-initialised threads that can be reused to execute tasks, avoiding the overhead of creating and destroying threads repeatedly.
Adv: Slightly faster to service a request with an existing thread than creating a new one; allows number of user threads in an application to be bounded by the size of the pool.
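A minimal sketch of the thread-pool idea, assuming POSIX threads (all names hypothetical): a fixed set of workers is created once and reused for every task, so the number of threads stays bounded by the pool size and no per-task creation or destruction happens.

```c
/* Fixed pool of workers pulling tasks from a shared, mutex-protected counter. */
#include <pthread.h>
#include <stdio.h>

#define POOL_SIZE 4
#define NUM_TASKS 16

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_task = 0;                 /* index of the next task to hand out */

static void *worker(void *arg) {
    int me = *(int *)arg;                 /* this worker's number, for the printout */
    for (;;) {
        pthread_mutex_lock(&lock);
        int task = next_task < NUM_TASKS ? next_task++ : -1;
        pthread_mutex_unlock(&lock);
        if (task < 0)                     /* no tasks left: this worker exits */
            return NULL;
        printf("task %2d handled by worker %d\n", task, me);   /* the "work" itself */
    }
}

int main(void) {
    pthread_t pool[POOL_SIZE];
    int ids[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++) { /* workers are created once, up front */
        ids[i] = i;
        pthread_create(&pool[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);
    return 0;
}
```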
What are some common processor scheduling algorithms?
- First-Come First-Served (FCFS).
- Shortest-Job-First (SJF).
- Round Robin (RR).
- Priority Scheduling - Rate Monotonic (RM), Deadline Monotonic (DM), Earliest Deadline First (EDF).
- Multilevel Queue.
- Multilevel Feedback Queue.
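A minimal worked example for two of the algorithms above, assuming three jobs with hypothetical burst times 24, 3 and 3 all arriving at time 0: FCFS serves them in arrival order, SJF sorts by burst length first, which lowers the average waiting time from 17 to 3.

```c
/* FCFS vs SJF average waiting time for jobs that all arrive at time 0. */
#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

static double avg_wait(const int *burst, int n) {
    double total = 0.0;
    int elapsed = 0;
    for (int i = 0; i < n; i++) {     /* each job waits for all jobs scheduled before it */
        total += elapsed;
        elapsed += burst[i];
    }
    return total / n;
}

int main(void) {
    int fcfs[] = {24, 3, 3};          /* bursts in arrival order */
    int sjf[]  = {24, 3, 3};
    int n = 3;

    qsort(sjf, n, sizeof(int), cmp);  /* SJF: shortest burst first */

    printf("FCFS average wait: %.2f\n", avg_wait(fcfs, n));  /* (0+24+27)/3 = 17.00 */
    printf("SJF  average wait: %.2f\n", avg_wait(sjf, n));   /* (0+3+6)/3   =  3.00 */
    return 0;
}
```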
What does the decision mode define?
When scheduling decisions are taken.
Preemptive vs Non-Preemptive Scheduling:
In preemptive scheduling, the OS can interrupt a running process to allocate CPU time to another process.
In non-preemptive scheduling, once a process starts executing, it runs until it completes or voluntarily relinquishes control of the CPU.
Time-Based vs Event-Based Scheduling:
In time-based scheduling, scheduling decisions are made based on a regular time slice or a clock interrupt.
In event-based scheduling, decisions are made in response to specific events rather than fixed time intervals, such as the arrival of new processes, completion of I/O operations, etc.
What is the priority function?
The function defining which ready tasks are chosen for execution next.