Operating Systems: Process Synchronisation & Concurrency Flashcards

1
Q

What is the ability of multiple programs or tasks to appear to execute simultaneously called?

A

Concurrency.

2
Q

What does concurrency refer to in an OS context?

A

Any form of interaction among processes or threads.

3
Q

How fundamental is concurrency to OS design?

A

It is a fundamental part of OS design.

4
Q

List the key aspects that concurrency includes.

A
  1. Communication amongst processes/threads.
  2. Sharing of, and competition for, system resources.
  3. Cooperative processing of shared data.
  4. Synchronisation of process/thread activities.
  5. Solving deadlock and starvation problems.
5
Q

How does concurrency arise at different levels of execution streams?

A

It arises in a similar way at multiple levels:

  • Multi-programming.
  • Multi-threading.
  • Multi-processors.
  • Multi-computers.
6
Q

What is multi-programming in the context of concurrency?

A

The interaction between multiple processes running on one CPU, creating pseudo-parallelism (they appear to run simultaneously, but share one CPU).

7
Q

What is multi-threading in the context of concurrency?

A

Multi-threading is the interaction between multiple threads running within one process.

9
Q

What is meant by multi-processors regarding concurrency?

A

Multi-processors involve multiple CPUs running multiple processes/threads, providing real parallelism (true simultaneous execution).

10
Q

What is a multi-computer setup in concurrency terms?

A

A setup of multiple independent computers running distributed processes/threads that interact (e.g., over a network).

11
Q

Do concurrency principles differ greatly across multi-programming, multi-threading, multi-processors, and multi-computers?

A

No. The principles of concurrency are basically the same in all these categories.

12
Q

What are the three basic interaction types among processes or threads?

A
  1. Unaware of each other.
  2. Directly aware of each other.
  3. Indirectly aware of each other.
13
Q

In concurrency, what does it mean when processes or threads are “unaware of each other”?

A

They must use shared resources independently, without interfering, and leave them intact for others.

14
Q

In concurrency, what does it mean when processes or threads are “indirectly aware of each other”?

A

They work on common data and build some result together via that data (“stigmergy” in biology): they share data structures or files without communicating directly, yet still affect each other's work.

15
Q

In concurrency, what does it mean when processes or threads are “directly aware of each other”?

A

They cooperate by communicating, for example, by exchanging messages.

16
Q

Which two problems must concurrency mechanisms address regarding processes and resources?

A

Deadlock (where processes block each other indefinitely), and starvation (where a process never gets needed resources).

17
Q

Why is synchronisation important in concurrency?

A

Synchronisation ensures that processes/threads coordinate their actions properly (e.g., to avoid data corruption or inconsistent states).

18
Q

What is a primary requirement when multiple processes/threads need to exchange information?

A

They require a mechanism for communication (e.g., messages, shared memory, signals) to ensure data is passed safely and efficiently.

19
Q

How does concurrency affect resource management?

A

Processes/threads may share and compete for system resources (CPU, memory, I/O devices), requiring scheduling, locks, or other coordination tools.

20
Q

What does cooperative processing entail in concurrency?

A

It involves multiple processes/threads working together on the same data or tasks, often requiring synchronisation to maintain consistency.

21
Q

What is the classic synchronisation problem involving two processes known as “producer and consumer”?

A

It is a classic scenario where one process produces data and the other process consumes it, highlighting the need for coordination to avoid race conditions and data inconsistencies.

22
Q

What are the register-level steps for counter++ in the Producer?

A
  1. register1 = counter
  2. register1 = register1 + 1
  3. counter = register1
23
Q

What are the register-level steps for counter-- in the Consumer?

A
  1. register2 = counter
  2. register2 = register2 - 1
  3. counter = register2
24
Q

Show a possible interleaving of instructions for Producer and Consumer when counter = 5.

A
  1. Producer loads counter (5) into register1
  2. Producer increments register1 → (now 6)
  3. Consumer loads counter (still 5 at this instant) into register2
  4. Consumer decrements register2 → (now 4)
  5. Consumer writes back register2 (4) into counter
  6. Producer writes back register1 (6) into counter → final counter = 6

(One increment and one decrement should leave counter at 5; this sequence illustrates how the final counter becomes inconsistent with the expected result when the operations interleave.)

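The sequence above can be replayed deterministically by modelling the registers as local variables (a Python sketch; the names follow the cards):

```python
def interleave(counter=5):
    """Replay the problematic Producer/Consumer interleaving step by step."""
    register1 = counter          # 1. Producer loads counter (5)
    register1 = register1 + 1    # 2. Producer increments register1 (now 6)
    register2 = counter          # 3. Consumer loads counter (still 5)
    register2 = register2 - 1    # 4. Consumer decrements register2 (now 4)
    counter = register2          # 5. Consumer writes back 4
    counter = register1          # 6. Producer writes back 6, clobbering the 4
    return counter

print(interleave())  # 6, although one ++ and one -- should leave 5
```

Any reordering of steps 5 and 6 loses one of the two updates; only keeping steps 1-3 of each operation together (atomically) gives the expected 5.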
25
What is a race condition in the context of concurrency?
When multiple processes read and write shared data, and the final outcome depends on the specific order in which their instructions execute.
26
When do race conditions generally occur?
Whenever resources and/or data are shared by processes or threads, particularly if those processes are unaware or only indirectly aware of each other.
27
What is the main problem caused by race conditions?
Unpredictable or incorrect results, because the final state of the shared data depends on timing and the interleaving of operations.
28
Why do we synchronize processes or threads?
Synchronization aims to guard against race conditions by coordinating the order and timing of operations on shared data.
29
What does “inconsequential vs. consequential” refer to in race conditions?
Inconsequential: The final outcome may vary, but it does not cause significant issues. Consequential: The difference in outcome does matter, possibly leading to serious errors or inconsistent states.
30
What is the general strategy to avoid race conditions?
The strategy is to reduce interleaving by grouping critical instructions into indivisible (atomic) blocks, preventing other processes from interrupting partway through.
31
What is a critical section in concurrency?
A critical section (or critical region) is a portion of code that may only be executed by one process or thread at a time, thereby preventing race conditions on the shared data in that section.
32
What is the critical section problem in concurrent processing?
The challenge of creating a protocol that allows multiple processes (or threads) to cooperate while sharing resources, such that certain conditions (Mutual Exclusion, Progress, and Bounded Waiting) are always satisfied.
33
Which three conditions must a protocol satisfy to solve the Critical Section Problem?
1. Mutual exclusion: If one process is executing in its critical section, no other process can do so simultaneously. 2. Progress: If no process is in the critical section, a process that wishes to enter must not be prevented indefinitely. 3. Bounded waiting: Once a process has requested entry, there must be a bound on the number of times other processes may enter their critical sections before it does (no indefinite postponement).
34
What is the desired effect regarding mutual exclusion in a critical section scenario?
When one thread (e.g., Thread A) enters the critical section, it prevents the other thread (Thread B) from entering until Thread A exits, after which Thread B can safely enter.
35
Illustrate the sequence showing mutual exclusion with Thread A and Thread B.
1. Thread A arrives at the critical region gate first. 2. Thread A enters the critical section, blocking Thread B from entering. 3. Thread A exits the critical section; Thread B can then enter. 4. Thread B enters the critical section after A is done.
36
How can mutual exclusion be achieved in practice?
Through various synchronization solutions (e.g., disabling interrupts, lock variables, atomic operations, software protocols like Peterson’s algorithm) that enforce exclusive access to the critical section.
37
What are some common potential solutions to the Critical Section Problem?
1. Disabling hardware interrupts. 2. Simple (unprotected) lock variable. 3. Indivisible lock variable (TSL). 4. No-TSL toggle for two threads. 5. Peterson’s no-TSL, no alternation
38
What is the method of “disabling hardware interrupts” in concurrency?
A technique where a thread/process turns off the CPU’s interrupt mechanism upon entering the critical section.
39
What does the method of “disabling hardware interrupts” in concurrency ensure?
That no other process can be scheduled on that CPU, effectively granting mutual exclusion.
40
What are the drawbacks of the method of “disabling hardware interrupts” in concurrency?
A faulty or hung process could lock the entire system. It blocks all interrupts, so no context switches can occur, preventing unrelated tasks from running. It does not extend to multiprocessor environments (only disables interrupts on one CPU).
41
What is the “simple (unprotected) lock variable” approach to concurrency control?
It uses a shared variable (often set to 0 or 1) to indicate if the critical section is free or in use.
42
What does the “simple (unprotected) lock variable” approach to concurrency ensure?
That if the variable is 0, a thread sets it to 1 before entering its critical section, then sets it back to 0 upon exiting.
43
What are the drawbacks of the “simple (unprotected) lock variable” approach to concurrency?
There is a race condition between checking the lock and setting it, because these steps are not atomic. Multiple threads could see the lock as 0 before any one has set it to 1, causing both to enter the critical section simultaneously.
44
What is an “indivisible lock variable (TSL)” in concurrency?
TSL stands for “Test-and-Set Lock,” a special hardware instruction that performs the “check and set” operation atomically (in a single, uninterruptible step).
45
What is an example of the “indivisible lock variable (TSL)” in concurrency?
Example “key” analogy: a thread can “pick up” the lock if it’s free; if another thread tries at the same time, it cannot pick it up because the operation is indivisible.
46
What are the advantages of the “indivisible lock variable (TSL)” in concurrency?
Truly enforces mutual exclusion, since no interleaving can occur during the test-and-set. Works across multiple CPUs, provided each CPU supports the TSL instruction.
47
What are the drawbacks of the “indivisible lock variable (TSL)” in concurrency?
Requires special hardware support for the atomic instruction. Can still lead to busy waiting (spinlocks) if not paired with a blocking mechanism.
48
What is the “no-TSL toggle” approach for two threads in concurrency?
Uses a shared toggle (0/1) interpreted differently by each thread: if the toggle is set a certain way, it indicates which thread can enter the critical section.
49
What does the “no-TSL toggle” approach for two threads in concurrency ensure?
That each thread “sees” the toggle state differently, entering the critical section if the toggle is in its favour.
50
What are the drawbacks of the “no-TSL toggle” approach for two threads in concurrency?
Avoids race conditions inside the critical section, but can cause lockup outside the section (one thread might be forced to wait even if the other is not in the critical section). Violates progress or bounded waiting in certain scenarios, as one thread’s schedule could block the other unnecessarily.
51
What is “Peterson’s no-TSL, no alternation” algorithm for concurrency?
A purely software-based solution for two threads that uses: 1. Two lock flags (one per thread) to indicate intent to enter the critical section. 2. A shared mask (toggle) to decide which thread has priority if both try to enter simultaneously.
52
How does the “Peterson’s no-TSL, no alternation” algorithm for concurrency work?
1. Thread A sets its flag, then sets the mask to “favour B.” 2. Thread B sets its flag, then sets the mask to “favour A.” 3. Whichever thread last sets the mask effectively yields to the other, ensuring only one can proceed into the critical section.
53
What are the advantages of the “Peterson’s no-TSL, no alternation” algorithm for concurrency?
Guarantees mutual exclusion, progress, and bounded waiting in software alone.
54
What are the drawbacks of the “Peterson’s no-TSL, no alternation” algorithm for concurrency?
More overhead compared to an atomic hardware approach. Still limited to a two-thread case in its classic form.
55
How does disabling hardware interrupts achieve mutual exclusion?
1. When Thread A enters the critical region, it disables all interrupts so no other thread can be scheduled. 2. Thread A finishes its critical section, then re-enables interrupts. 3. Thread B (or any other thread) can then be scheduled to enter the critical section.
56
What are the drawbacks of disabling hardware interrupts as a mutual exclusion solution?
There is no guarantee the thread will ever exit its critical section (it could crash or loop forever). Meanwhile, no other tasks (even unrelated ones) can run because interrupts are off. This approach does not work on multiprocessor systems (only stops interrupts on one CPU). It becomes a physical indivisible lock rather than a logical one.
57
How does an unprotected simple lock variable work?
1. If the lock is 0, a thread sets it to 1 and enters the critical section. 2. When the thread finishes, it resets the lock to 0, allowing another thread to enter.
58
Why does a simple (unprotected) lock variable still suffer from race conditions?
There is a gap between checking the lock (testing if it is 0) and setting the lock to 1. Another thread could sneak in during that gap, see the lock as 0, and also set it to 1, resulting in both threads entering the critical section simultaneously.
59
In simple terms, what is the core problem with an unprotected simple lock?
Because testing the lock and setting it happen in two separate steps, a second thread can interleave those steps and break mutual exclusion.
60
How does an indivisible (atomic) lock variable, often using the TSL instruction, address the race condition problem?
It combines the test and set operations into one atomic hardware instruction (Test-and-Set Lock, or TSL), ensuring no other thread can interrupt between the test and the set.
61
Explain the “key” analogy for an indivisible lock variable.
A thread obtains a unique key in a single, indivisible action. If one thread holds the key, no other can acquire it at the same time. Upon leaving the critical section, the thread releases the key so another thread can take it.
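Python has no TSL instruction, but `threading.Lock.acquire(blocking=False)` behaves like one: it tests and sets in a single indivisible step. A spinlock sketch built on top of it (names and iteration counts are illustrative):

```python
import threading

_lock_word = threading.Lock()   # stands in for the hardware lock word

def tsl():
    """Atomic test-and-set: True means we picked up the 'key'."""
    return _lock_word.acquire(blocking=False)

def spin_acquire():
    while not tsl():            # busy-wait until the indivisible test-and-set succeeds
        pass

def spin_release():
    _lock_word.release()        # hand the key back

counter = 0

def worker(n):
    global counter
    for _ in range(n):
        spin_acquire()
        counter += 1            # critical section: read-modify-write is now safe
        spin_release()

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000: no updates lost
```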
62
How does the no-TSL toggle for two threads attempt to achieve mutual exclusion?
1. There is a shared toggle (0 or 1) that means “go” for one thread and “stop” for the other. 2. If the toggle is in your favour, you enter the critical section without changing it. 3. When you exit, you flip it, allowing the other thread to enter next.
63
What issue arises with the no-TSL toggle approach for two threads?
If one thread is still busy outside its critical region (doing other work or is interrupted), the toggle can block the wrong thread, preventing it from entering even though the other is not in the critical section. This violates the idea that threads should not be excluded outside the critical section and can lead to inefficiency or lockup.
64
What does Peterson’s “no-TSL, no alternation” algorithm involve?
It uses two lock flags (one for each thread) and an additional mask toggle. Each thread indicates its desire to enter the critical section by setting its flag. They use the mask to decide who gets priority if both want to enter simultaneously. Once a thread finishes, it resets its flag, allowing the other thread to proceed.
65
Show the typical sequence of entering and exiting the critical section in Peterson’s algorithm.
1. Thread A sets its lock flag, updates the mask to point to Thread B. 2. Thread B sets its lock flag, also updating the mask. 3. If they arrive at the same time, the last thread to push the mask effectively yields to the other. 4. Once a thread finishes, it resets its flag, and the other thread may enter.
66
How does Peterson’s algorithm handle the scenario where both threads are interrupted and then race to set their flags and push the mask?
Even if both set their lock flags and race to push the mask, the last push decides who waits, so mutual exclusion holds: only one thread can enter the critical section at a time.
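A Python sketch of Peterson's algorithm for two threads (iteration counts are arbitrary; CPython's interpreter lock makes the flag/mask updates appear sequentially consistent, which the algorithm relies on):

```python
import threading

flag = [False, False]   # flag[i]: thread i wants to enter
mask = [0]              # whichever thread pushed the mask last yields

def enter(i):
    other = 1 - i
    flag[i] = True
    mask[0] = other                 # push the mask: "favour the other"
    while flag[other] and mask[0] == other:
        pass                        # we pushed last, so we wait our turn

def leave(i):
    flag[i] = False                 # drop our claim; the other may proceed

counter = 0

def worker(i, n):
    global counter
    for _ in range(n):
        enter(i)
        counter += 1                # critical section
        leave(i)

ts = [threading.Thread(target=worker, args=(i, 200)) for i in (0, 1)]
for t in ts: t.start()
for t in ts: t.join()
print(counter)  # 400: every increment survived
```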
67
Summarize the effectiveness of each critical section solution.
1. Disabling hardware interrupts: Avoids the race inside the section but can freeze the system; not viable on multiprocessors. 2. Simple lock variable: Fails due to race conditions in the test/set gap. 3. Indivisible lock variable (TSL): Works but needs special hardware instructions. 4. No-TSL toggle: Avoids race inside the critical section but can cause lockup outside. 5. Peterson’s no-TSL, no alternation: Works purely in software, but has higher overhead than a hardware-based atomic lock.
68
What is “busy waiting” in the context of critical section solutions?
A situation where a process/thread continuously executes a loop while waiting for some condition to become true, rather than blocking and yielding the CPU.
69
List the main drawbacks of busy waiting.
It wastes CPU time, since the waiting thread stays Ready and actively loops. It can cause priority inversion: a higher-priority thread spinning on the lock can starve the lower-priority thread that holds it, so neither makes progress. It prevents the CPU from doing other productive work in the meantime.
70
What is a better alternative to busy waiting in critical section solutions?
Instead of “spin-waiting,” the waiting process should block or yield the CPU. This approach (e.g., using a mutex) lets the system schedule other threads, avoiding wasted CPU cycles.
71
How does a mutex improve upon busy-wait solutions?
A mutex uses atomic operations but, on contention, the thread blocks itself rather than spinning, thereby reducing CPU overhead and preventing starvation from continuous polling.
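In Python, `threading.Lock` is such a mutex: a contending thread is blocked and descheduled rather than spinning. A sketch (thread and iteration counts are arbitrary):

```python
import threading

lock = threading.Lock()   # mutex: losers of the race block, they do not spin
balance = 0

def deposit(times):
    global balance
    for _ in range(times):
        with lock:        # blocks until the holder releases; no busy-waiting
            balance += 1  # critical section

threads = [threading.Thread(target=deposit, args=(50_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)  # 200000
```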
72
Why introduce synchronization mechanisms to achieve mutual exclusion without busy waiting?
Busy waiting wastes CPU cycles because a waiting process continuously polls the condition in a loop. Synchronization allows processes/threads to block until a signal is received, avoiding excessive resource consumption. This can be done with simple signals or mutex locks, where a process is suspended until another process signals it.
73
Can processes cooperate directly without a traditional “critical section” structure?
Yes. Processes can send simple signals to each other (e.g., “I’m done” or “Go ahead”) so that one process waits until it receives a signal from the other. In some cases, however, a critical section (like a mutex) may still be indispensable.
74
What is a semaphore in concurrency control?
A low-level synchronization primitive used to coordinate access to shared resources. It’s essentially a special integer variable that is accessed only through specific atomic operations.
75
What are the three possible operations on a semaphore?
1. Initialization to a non-negative integer value. 2. Wait (often called P or down), which decrements the semaphore. If the value becomes negative, the calling process blocks. 3. Signal (often called V or up), which increments the semaphore. If the value before incrementing was negative (i.e., processes were blocked), one of them is unblocked.
76
What happens under the Wait operation on a semaphore?
The semaphore’s value is decremented by 1. If the new value is negative, the calling process/thread blocks (cannot proceed until another process issues a Signal).
77
What happens under the Signal operation on a semaphore?
The semaphore’s value is incremented by 1. If the value before incrementing was negative (i.e., at least one process/thread was blocked on this semaphore), one of the blocked processes/threads is unblocked (allowed to resume).
78
What is the difference between a counting semaphore and a binary semaphore?
A counting semaphore can hold a value >= 0 and can track multiple units of a resource. A binary semaphore is restricted to 0 or 1, effectively acting like a mutex lock for a single resource unit.
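Both kinds exist in Python's `threading` module; note that `threading.Semaphore` never goes negative (blocked waiters are queued instead), but Wait/Signal map directly onto `acquire`/`release`:

```python
import threading

# Binary semaphore (0/1): behaves like a mutex for one resource unit.
mutex = threading.Semaphore(1)
mutex.acquire()        # Wait / P / down: 1 -> 0
# ... critical section ...
mutex.release()        # Signal / V / up: 0 -> 1

# Counting semaphore: tracks, say, 3 identical resource units.
pool = threading.Semaphore(3)
for _ in range(3):
    pool.acquire()                        # hand out all three units: 3 -> 0
got_fourth = pool.acquire(blocking=False) # a fourth Wait would block
print(got_fourth)                         # False: no units left
pool.release()                            # return one unit: 0 -> 1
```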
79
List the three points of uncertainty when processes use semaphores.
1. There is no way to know in advance if the calling process will block upon Wait. 2. If a Signal operation unblocks another process, there is no way to know which blocked process is awakened. 3. When a process calls Signal, there is no way to know if this will unblock another process at all (if no process is waiting).
80
What is the main drawback of using semaphores extensively in large programs?
They can lead to spaghetti-like lock control, where many semaphores are scattered across the codebase, making it difficult to reason about correctness, prevent deadlocks, and maintain the code.
81
What is a monitor, and how does it differ from a low-level primitive like a semaphore?
A monitor is a higher-level synchronization construct that combines mutual exclusion with condition variables in a single abstraction. It encapsulates shared data and the procedures that operate on that data, ensuring only one thread can be active in the monitor at any time.
82
How do monitors encapsulate data structures for synchronization?
The data structure that requires protection is kept private within the monitor. Other threads cannot directly access it; they must call the monitor’s public methods, which enforce the necessary synchronization automatically.
83
Describe the basic workflow of threads entering and exiting a monitor.
1. A thread calls one of the monitor’s methods to enter. 2. If another thread is already executing within the monitor, the new caller is blocked and placed in a waiting queue. 3. When the current thread exits, one thread from the queue is unblocked and allowed to enter next.
84
Why are monitors considered a more elegant solution than semaphores for some applications?
Monitors automatically provide mutual exclusion within their methods, reducing the risk of scattering lock/unlock (signal/wait) pairs around the code. This can significantly reduce the complexity and risk of spaghetti lock control compared to using semaphores directly everywhere.
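A monitor can be sketched in Python as a class whose private data is only touched while holding one internal lock, with a condition variable for waiting (a hand-rolled illustration, not a built-in language construct; the class and method names are invented):

```python
import threading

class BoundedCounter:
    """Monitor sketch: _value is private, and every public method runs
    under _lock, so at most one thread is active inside the monitor."""

    def __init__(self, limit):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)  # condition variable
        self._value = 0
        self._limit = limit

    def increment(self):
        with self._lock:                   # enter the monitor
            while self._value >= self._limit:
                self._not_full.wait()      # sleep until another thread signals
            self._value += 1               # safe: we hold the monitor lock

    def decrement(self):
        with self._lock:
            if self._value > 0:
                self._value -= 1
                self._not_full.notify()    # wake one waiter, if any

    def value(self):
        with self._lock:
            return self._value
```

Callers never see `_value` directly; all the synchronisation lives inside the class, which is exactly the point of the construct.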
85
Why are “classic synchronization problems” important in concurrency?
They illustrate common pitfalls and highlight core concurrency issues such as mutual exclusion, deadlock, starvation, and resource sharing. Any new synchronization mechanism must be able to solve these classic problems.
86
Why is producer–consumer considered a “classic” synchronization problem?
It demonstrates how processes that produce data and processes that consume data can interleave in ways that cause inconsistencies, requiring mechanisms like semaphores or monitors to ensure proper bounded buffer handling and mutual exclusion.
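A bounded-buffer sketch using two counting semaphores plus a mutex (buffer capacity and item count are arbitrary):

```python
import threading
from collections import deque

CAPACITY = 4
buffer = deque()
empty = threading.Semaphore(CAPACITY)  # free slots remaining
full = threading.Semaphore(0)          # items available
mutex = threading.Lock()               # protects the buffer structure itself

consumed = []

def producer(n):
    for item in range(n):
        empty.acquire()            # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()             # signal: one more item available

def consumer(n):
    for _ in range(n):
        full.acquire()             # wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()            # signal: one more free slot

N = 1000
p = threading.Thread(target=producer, args=(N,))
c = threading.Thread(target=consumer, args=(N,))
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(N)))  # True: nothing lost, nothing duplicated
```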
87
What is the Readers–Writers Problem in concurrency?
1. Multiple readers can simultaneously read the shared data/file. 2. Only one writer can write at a time. 3. If any writer is writing, no reader can be reading. It focuses on coordinating reads and writes in such a way that readers do not block each other, but writers still get exclusive access when needed.
88
How does the Readers–Writers Problem differ from standard mutual exclusion or the producer–consumer scenario?
It emphasizes conditional concurrency: multiple threads can proceed in parallel if they are only reading. Typical mutual exclusion forces “one thread at a time,” but here we allow concurrency among readers, yet maintain exclusivity for writers.
89
What is the Dining Philosophers Problem?
Five philosophers sit around a table. Each philosopher has a plate and one fork. To eat, a philosopher needs two forks. The goal is to ensure all philosophers can share forks without causing deadlock, starvation, or livelock.
90
List some constraints or methods to prevent deadlock/starvation in the Dining Philosophers Problem.
Limit the number of philosophers eating simultaneously to four (i.e., at most N-1 can hold forks). A philosopher may pick up forks only if both are available (the action of picking must be in a critical section). Use an asymmetric solution: Odd-numbered philosopher picks up the left fork, then the right. Even-numbered philosopher picks up the right fork, then the left.
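The asymmetric solution can be sketched with one lock per fork (the fork numbering is a chosen convention: philosopher i's left fork is i, right fork is (i + 1) % N):

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]   # one lock per fork
meals = [0] * N

def philosopher(i, rounds):
    # Asymmetry breaks the circular wait behind deadlock:
    # odd philosophers grab the left fork first, even ones the right.
    left, right = i, (i + 1) % N
    first, second = (left, right) if i % 2 else (right, left)
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1                  # eating with both forks held

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # every philosopher ate 100 times; no deadlock
```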
91
What is the Single Lane Bridge Problem in concurrency terms?
There is a narrow bridge allowing traffic in only one direction at a time. Cars traveling in opposite directions cannot use the bridge simultaneously. A safety violation occurs if cars from both directions enter at once. It involves resource contention (the bridge) and managing efficiency.
92
What are the primary issues to address in the Single Lane Bridge Problem?
1. Deadlock: Cars from both directions might enter, blocking each other indefinitely. 2. Starvation: Continuously favouring one direction may starve the other. 3. Efficiency: Balancing minimal waiting with maximum bridge use.
93
Which concepts form the essential foundation for understanding concurrency?
1. Concurrency (multiple execution flows that can overlap in time). 2. Critical Section Problem (ensuring exclusive access to shared resources). 3. Mutual Exclusion (only one process can be active in a critical section). 4. Race Condition (outcome depends on the timing/interleaving of processes). 5. Hardware/Software Solutions (interrupt disabling, semaphores, monitors, etc.) and their trade-offs. 6. Classic Synchronization Problems (producer–consumer, readers–writers, dining philosophers, single-lane bridge).