3,4 Flashcards

1
Q

What is parallelism? When can it take place?

A

The simultaneous execution of multiple processes or threads

When there exist multiple processors or cores

2
Q

What is concurrency? How does it differ from parallelism? Can concurrency take place on a single-core processor?

A

The ability of an operating system to manage multiple processes at the same time, allowing them to make progress independently

Processes are not executed simultaneously, but rather multiple tasks are dealt with at once

Yes, as only one process is actually being executed at any one time

3
Q

What are three disadvantages of processes?

A
  1. Creation overhead, e.g., in terms of memory space
  2. Complex inter-process communication
  3. Process-switching overhead (mode + context switch, including save/restore contexts of execution)
4
Q

What is the difference between processes and threads? Definition, Ownership, Address Space, Info

A

Process is an independent program in execution with its own memory space. Thread is the smallest unit of execution within a process.

Processes define ownership on resources, while threads may share access to the same variables, code, or files.

All threads operate in their process’ address space, while different processes have different address spaces.

Processes have their context of execution saved in a PCB, while each thread has an associated execution state, which is the PCB extended with thread-specific info.

5
Q

What additional info does a thread's associated execution state contain in addition to the PCB?

A

Program counter, stack pointer, return addresses of function calls, values of processor registers

6
Q

Why do threads operate faster than processes?

A

Thread creation and termination are much faster, as no copy of the memory address space is needed

Context switching between threads of the same process is much faster, as there is no need to switch the whole address space; only the CPU register contents are swapped

Communication between threads is faster and more directly in the programmer's hands than communication between processes

7
Q

What are some reasons to introduce threads? What is their primary downside?

A

Reasons:

Increase concurrency level with better performance

Use natural concurrency within a program

Downside:

No protection against other threads in the same process (risk of faults, memory sharing)

8
Q

What are user threads?

A

Threads managed by a user-level library without kernel intervention, with the OS unaware of these threads.

9
Q

What are kernel threads?

A

Threads managed and scheduled directly by the operating system kernel

10
Q

What are the four multithreading mapping models?

A

Many-to-one Model

One-to-one Model

Many-to-many Model

Two-level Model

11
Q

What is the many-to-one multithreading mapping model?

A

multiple user threads are created and managed by a user-level thread library

all these threads are mapped to a single kernel thread

the kernel treats the entire process as a single thread, regardless of the number of user threads

implemented entirely in user space

12
Q

Advantages of the many-to-one multithreading mapping model? Disadvantages?

A

Adv:

no need for kernel involvement, fast and easy to deploy

Disadv:

only one user thread can access the kernel at a given time;

thus, multiple threads cannot run in parallel on multiple processors;

a blocking system call from one user thread blocks all user threads of the process

13
Q

What is the one-to-one multithreading model?

A

each user-level thread maps to a kernel thread

kernel fully aware of all threads in process

threads managed and scheduled by OS

14
Q

Advantages of one-to-one multithreading model? Disadvantages?

A

Adv: allows for concurrency between all threads

Disadv: all threads are managed by the kernel, which negatively impacts performance when there are many user threads

15
Q

What is the many-to-many model?

A

a limited number of kernel threads

multiple user threads mapped to a pool of kernel threads

user-level thread library schedules user threads onto available kernel threads, and kernel schedules kernel threads on the CPU

16
Q

Advantages of many-to-many multithreading model? Disadvantages? What is the concurrency level limited by in this model?

A

Advantages: concurrency, bounded performance cost for kernel

Disadvantage: more complex to implement

The number of kernel threads

17
Q

What is the two-level multithreading model?

A

maps multiple user threads to a smaller number of kernel threads (like in many-to-many model)

certain user threads can also be bound directly to kernel threads (like in one-to-one model)

18
Q

How does thread switching work as a kernel activity? How is it handled by user-level thread libraries?

A
  1. Kernel maintains the execution state of the threads; works similarly to process switching
  2. Library maintains the execution state of the threads and must obtain control in order to switch threads; it is the programmer's responsibility to call the library to yield execution
19
Q

What are thread pools? What are their advantages?

A

A collection of pre-initialised threads that can be reused to execute tasks, avoiding the overhead of creating and destroying threads repeatedly

Adv:
- slightly faster to service a request with an existing thread than to create a new one
- allows the number of user threads in an application to be bounded by the size of the pool
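
For illustration, a minimal thread-pool sketch in C with POSIX threads (an assumption, not part of the card material): a fixed set of worker threads is created once and reused for all tasks, so no per-task creation/termination overhead occurs. The task representation (an index handed out under a lock) is purely hypothetical.

#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4
#define NUM_TASKS   16

static int next_task = 0;                              /* index of the next task to hand out */
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        int id = (next_task < NUM_TASKS) ? next_task++ : -1;   /* take the next task, if any */
        pthread_mutex_unlock(&queue_lock);
        if (id < 0) break;                             /* no tasks left: worker finishes */
        printf("task %d served by an existing pool thread\n", id);
    }
    return NULL;
}

int main(void) {
    pthread_t pool[NUM_WORKERS];                       /* the pool size bounds the number of threads */
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&pool[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(pool[i], NULL);
    return 0;
}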

20
Q

What are some common processor scheduling algorithms?

A

First-Come First-Served (FCFS)

Shortest-Job-First (SJF)

Round Robin (RR)

Priority Scheduling - Rate Monotonic (RM), Deadline Monotonic (DM), Earliest Deadline First (EDF)

Multilevel Queue

Multilevel Feedback Queue

21
Q

What does the decision mode define?

A

When scheduling decisions are taken

22
Q

Preemptive vs Non-Preemptive Scheduling:

A

In preemptive scheduling, the OS can interrupt a running process to allocate CPU time to another process

In non-preemptive scheduling, once a process starts executing, it runs until it completes or voluntarily relinquishes control of the CPU.

23
Q

Time-Based vs Event-Based Scheduling:

A

In time-based scheduling, scheduling decisions are made based on a regular time slice or a clock interrupt. Timers are used to determine when a context switch is necessary.

In event-based scheduling, decisions are made in response to specific events rather than fixed time intervals, such as the arrival of new processes, completion of I/O operations, etc.

24
Q

What is the priority function?

A

the function defining which ready tasks are chosen for execution next

25
Q

What is the arbitration rule?

A

Breaks ties between tasks

26
Q

How is the waiting time defined?

A

Total amount of time a process spends in the ready queue, waiting for CPU time to be allocated, excluding the time the process is actively executing.

27
Q

How is the response time defined?

A

The amount of time that elapses between when a user initiates a request (or a process is submitted) and when the system first produces a response or starts delivering output, i.e., when the process starts being executed.

28
Q

How is the turnaround time defined?

A

The total time taken to execute a particular process, from the time it is submitted to the time it completes execution, including all waiting time, execution time, and time spent in the ready queue.

29
Q

What is throughput?

A

The number of jobs or processes that are completed per unit of time

30
Q

What is the priority function of FCFS? Is it preemptive or non-preemptive? What is its arbitration rule?

A

Tasks are scheduled in order of their arrival

Non-preemptive

Random choice among processes that arrive at the same time

31
Q

What is the priority function of SJF scheduling? What is its decision mode? What is its arbitration rule? What is it optimal with respect to? What is its preemptive version called?

A

Tasks are scheduled in order of shortest execution time

Non-preemptive

Chronological or random ordering

Average waiting time

Shortest-Remaining-Time-First
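
Worked example with hypothetical burst times (an illustration, not from the card material): three tasks P1, P2, P3 arrive at time 0 with execution times 24, 3 and 3. Under FCFS (order P1, P2, P3) the waiting times are 0, 24 and 27, an average of 17. Under SJF the order is P2, P3, P1, giving waiting times 0, 3 and 6, an average of 3, which illustrates why SJF is optimal with respect to average waiting time.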

32
Q

How does Round Robin scheduling work? Arbitration rule?

A

Each process in the ready queue is allocated a fixed time slice, i.e., a “quantum”

If a process completes within its quantum, it releases the CPU voluntarily.

If it does not, it is preempted, and the next process in the queue is given a turn to execute

Random arbitration
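
Worked example with hypothetical values (an illustration, not from the card material): the same tasks P1 = 24, P2 = 3, P3 = 3, all arriving at time 0, with a quantum of 4. The schedule is P1 (0-4), P2 (4-7), P3 (7-10), then P1 runs to completion at time 30. The waiting times are 6, 4 and 7, an average of about 5.7: worse than SJF, but in contrast to FCFS no task has to wait for the entire long burst of P1 before its first turn.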

33
Q

What is priority scheduling? What is an issue with it?

A

A priority number is associated with each task, either assigned by the programmer or computed through a function that depends on task properties. The CPU is allocated to the task with the highest priority.

starvation, i.e., low priority processes never executing

34
Q

What is Rate Monotonic (RM) priority scheduling? Is it preemptive or non-preemptive? Arbitration rule?

A

Processes are often periodic, i.e., activated every period T. RM prioritises the processes with the shortest period T.

Preemptive

Chronological/Random

35
Q

What is Deadline Monotonic (DM) scheduling? Preemptive? Arbitration rule?

A

Prioritises the processes with the shortest relative deadline D.

Preemptive

Chronological/Random ordering

36
Q

What is Earliest Deadline First (EDF)? Decision mode? Arbitration rule?

A

Highest priority assigned to process with shortest remaining time until its deadline

Preemptive

Chronological/Random
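
Worked comparison with hypothetical tasks (an illustration, not from the card material): task A has period and relative deadline 10 ms, task B has period and relative deadline 25 ms. Under RM and DM, A always has the higher, static priority because its period/deadline is shorter. Under EDF, priorities are dynamic: if at some instant A's next deadline lies at t = 20 ms while B's lies at t = 15 ms, then B is executed first.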

37
Q

What is multilevel queue scheduling? How is scheduling between the queues done?

A

Partitions the ready queue into various sub-queues, such as:
- system processes, interactive processes, interactive editing processes, batch processes, student processes
Each queue has its own scheduling algorithm

Scheduling between queues can be done with:
1. fixed priority scheduling (where high priority queues are served first, and low priority queues last)
2. time slice, where each queue gets a certain amount of CPU time which it can schedule amongst its processes.

38
Q

What is a multilevel feedback queue? What is a multilevel feedback queue scheduler defined in terms of?

A

Instead of processes being assigned to one fixed sub-queue, tasks can be moved across sub-queues.

  1. Number of queues
  2. Scheduling algos for each queue
  3. Method used to determine when to upgrade/demote a task
  4. Method used to determine which queue a task will enter when that task needs service.
39
Q

What is a sequential task? What takes place between the states?

A

A discrete sequence of states, observable, e.g., in program code

Indivisible, atomic steps/actions

40
Q

What is the single reference rule?

A

A statement in a programming language may be regarded as atomic if at most one reference to a shared variable occurs

41
Q

How can a non-atomic statement S be regarded as atomic?

A

You enclose it in angle brackets, writing it as <S>, e.g., <x := x+1>

42
Q

What is a trace? Does it maintain the internal execution order of the individual tasks? What do the actions refer to?

A

A sequence of atomic actions, obtained by interleaving the execution of concurrent tasks

Yes

Assignments or tests

43
Q

What is a “possible trace”?

A

A trace in which all tests yield true

44
Q

What are shared variables?

A

Variables accessible to several tasks (processes/threads)

45
Q

What is a private variable?

A

Variable accessible only to a single task

46
Q

What is interference?

A

Disturbing other tasks' assumptions about the state

47
Q

What is meant by race condition?

A

A situation in which correctness depends on the execution order of concurrent activities
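
A minimal C sketch of a race condition, assuming POSIX threads (not taken from the card material): two threads increment a shared counter without synchronisation. Because counter++ is a read-modify-write sequence and not atomic, the interleaving of the two threads determines the result, and updates are frequently lost.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared variable */

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* not atomic: load, add, store */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);   /* often less, due to lost updates */
    return 0;
}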

48
Q

What is synchronisation?

A

Ordering the execution, i.e., forbidding certain traces

49
Q

What are the five requirements on synchronisation solutions? Explain each shortly

A
  1. Functional correctness - satisfying the given specification
  2. Minimal waiting - waiting only when correctness is in danger
  3. Absence of deadlocks - don’t manoeuvre the system into a state such that progress is no longer possible
  4. Absence of livelocks - a livelock occurs when two or more processes continuously change their state in response to each other without making any progress
  5. Fairness in competition:
    - weakly, each contender should eventually be admitted to proceed
    - strongly, a bound on the waiting time for each contender can be placed
50
Q

How does Peterson's Algorithm work? Write out the structure of the algorithm.

A

Uses a shared variable t to denote the turn. Uses two booleans bY and bX to ensure minimal waiting.

// structure for process X; process Y is symmetric, with the roles of X and Y swapped
void Px() {
  while (true) {
    bX = true;                                 // announce intention to enter the critical section
    t = Y;                                     // give the turn to the other process
    while (bY == true AND t != X) { skip; }    // busy-wait while Y is interested and it is Y's turn
    x = x + 1;                                 // critical section
    bX = false;                                // exit protocol: X is no longer interested
  }
}

51
Q

What are the limitations of Peterson’s Algorithm?

A

Can synchronise only two tasks, with extensions for more tasks existing but being more complex.

Busy-waiting, which wastes CPU cycles

Does not work on a single core due to busy-waiting, as the busy-waiting process does not give up the CPU to the other process, which must run in order to end the busy wait

52
Q

What is a mutex?

A

A synchronisation primitive used to ensure that only one thread or process can access a shared resource at a time

53
Q

What is a mutex initialised to? What are its two operations?

A

1

lock(m): await (m > 0); m = m - 1. The process waits until the mutex is available, which is the case if m > 0, and then acquires the lock by decrementing m

unlock(m): m = m + 1. Releases the mutex by incrementing m, signalling that it is now available for other processes to acquire
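
As a sketch of the same idea with a concrete API (POSIX threads, an assumption rather than the abstract lock/unlock notation of the card), the earlier race-condition example can be fixed by protecting the shared counter with a mutex:

#include <pthread.h>

static long counter = 0;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;    /* created unlocked, i.e. the mutex is available */

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&m);      /* lock(m): wait until available, then acquire */
        counter++;                   /* critical section: at most one thread at a time */
        pthread_mutex_unlock(&m);    /* unlock(m): release, mutex available again */
    }
    return NULL;
}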

54
Q

What are some challenges with priority scheduling when protecting shared resource access?

A
  1. Blocking:
    - a low priority task obtaining a resource, and a high priority task then waiting on it
  2. Priority Inversion:
    - occurs when a lower-priority task holds a resource that is needed by a higher-priority task
    - involves a medium-priority task that preempts the lowest-priority task, so the highest-priority task has to wait for both to complete
55
Q

What is a solution to priority inversion that prevents a middle-priority task from preempting the lowest-priority task that holds the resource? What is the name of this protocol?

A

Adjusting the priority of the task T holding the resource to the maximum of its own priority and the priorities of any other tasks that are blocked on the resources allocated to T.

Priority Inheritance Protocol
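
A minimal sketch, assuming the POSIX real-time mutex API is available (this API is not mentioned on the card): a mutex can be configured to use the priority inheritance protocol, so that a low-priority task holding it is temporarily boosted to the priority of the highest-priority waiter.

#include <pthread.h>

static pthread_mutex_t resource_lock;

static void init_resource_lock(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);   /* enable priority inheritance */
    pthread_mutex_init(&resource_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}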

56
Q

Which of the following components of program state are shared across threads in a multithreaded process:

register values, heap memory, global variables, stack memory

A

Share heap memory and global variables

Do NOT share register values and stack memory
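
A small C sketch illustrating this (an assumed example, not from the exercise): the global variable and the heap allocation are visible to both main and the worker thread, while each thread's local (stack) variables and register contents are private to it.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static int global = 0;                    /* shared: global variable */

static void *worker(void *arg) {
    int *heap_val = arg;                  /* shared: heap memory passed from main */
    int local = 42;                       /* private: lives on this thread's stack */
    global += 1;
    *heap_val += 1;
    printf("worker: local = %d\n", local);
    return NULL;
}

int main(void) {
    int *heap_val = malloc(sizeof *heap_val);
    *heap_val = 0;
    pthread_t t;
    pthread_create(&t, NULL, worker, heap_val);
    pthread_join(t, NULL);
    printf("main sees global = %d and *heap_val = %d\n", global, *heap_val);
    free(heap_val);
    return 0;
}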

57
Q

Can a multithreaded solution using multiple user-level threads achieve better performance on a multiprocessor system than on a single processor system?

A

No, since the multithreaded system cannot make use of the different processors simultaneously, as the OS sees only a single process.

58
Q

Consider a multiprocessor system and a multithreaded program written using the many-to-many threading model. Let the number of user-level threads in the program be more than the number of processors in the system. Discuss the performance implications of the following scenarios.
a. The number of kernel threads allocated to the program is less than the number of processors.
b. The number of kernel threads allocated to the program is equal to the number of processors.
c. The number of kernel threads allocated to the program is greater than the number of processors but less than the number of user-level threads.

A

a. When the number of kernel threads is less than the number of processors, then some of the processors would remain idle, since the scheduler maps only kernel threads to processors and not user-level threads to processors.

b. When the number of kernel threads is exactly equal to the number of processors, then it is possible that all of the processors might be utilized simultaneously. However, when a kernel thread blocks inside the kernel (due to a page fault or while invoking system calls), the corresponding processor would remain idle.

c. When there are more kernel threads than processors, a blocked kernel thread could be swapped out in favor of another kernel thread that is ready to execute, thereby increasing the utilization of the multiprocessor system.

59
Q

Do processes share global variables?

A

No, they do not.

60
Q

Which is the most general scheduling algorithm?

A

Multilevel Feedback-Queue Algorithm.