FINALS Flashcards

1
Q

What are the two main parts of a process and what do they encapsulate?

A

Threads encapsulate concurrency (the active part), while address spaces encapsulate protection (the passive part).

2
Q

What is the difference between a heavyweight process and a lightweight process?

A

A heavyweight process is a traditional process with its own address space and resources; a lightweight process is a thread that shares the address space and resources of the process it belongs to.

3
Q

What is a context switch and how does it relate to process multiplexing?

A

A context switch involves saving the state of the currently running process and loading the state of a new process. This is essential for process multiplexing, which allows multiple processes to share CPU time.

4
Q

Explain the concept of an execution stack and its significance in thread execution.

A

The execution stack is a data structure that stores temporary results, parameters, and return addresses during function calls. It enables recursive execution and is crucial for modern programming languages.
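
A tiny runnable C illustration of the idea (factorial is just an example): each recursive call pushes a new stack frame holding its own parameter and return address, which is what lets the partially finished multiplications resume correctly.

#include <stdio.h>

/* Each call to fact() gets its own frame on the execution stack,
   holding its copy of n and the return address to resume at. */
static unsigned long fact(unsigned int n) {
    if (n <= 1)
        return 1;               /* base case: frames start unwinding   */
    return n * fact(n - 1);     /* recursive call pushes another frame */
}

int main(void) {
    printf("5! = %lu\n", fact(5));   /* prints 120 */
    return 0;
}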

5
Q

Describe the purpose and functionality of the ThreadFork() operation in multithreaded programming.

A

ThreadFork() creates a new thread to execute a specific function with given arguments, enabling concurrent task execution.

6
Q

How does the operating system use a timer interrupt for fair scheduling among threads?

A

The timer interrupt periodically forces a context switch, allowing the OS to schedule other threads and prevent monopolization of the CPU.

7
Q

What are the advantages and disadvantages of using threads compared to processes?

A

Threads are lightweight and require less overhead than processes. However, threads share the same address space, which can cause synchronization issues.

8
Q

Differentiate between internal and external events that can cause a thread to yield control.

A

Internal events occur within the thread, such as I/O blocking. External events, like timer interrupts, originate outside the thread.

9
Q

How does Simultaneous Multithreading (SMT)/Hyperthreading enhance CPU performance?

A

SMT/Hyperthreading creates virtual cores by duplicating the register state, allowing multiple threads to execute concurrently on one core, improving efficiency.

10
Q

What is a working set, and how does it impact context switching overhead?

A

The working set is the actively used memory of a process. Larger working sets increase context switching overhead as more data must be saved and restored.

11
Q

Describe the five states in the lifecycle of a process.

A

New: The process is being created.
Ready: Waiting to run.
Running: Executing instructions.
Waiting: Paused for an event.
Terminated: Finished execution.

12
Q

main() {
    ThreadFork(ComputePI, "pi.txt");
    ThreadFork(PrintClassList, "classlist.txt");
}
What is the purpose of the ThreadFork() function in the code snippet?

A

ThreadFork() starts a new thread to execute a specific function with the provided arguments. For example, ThreadFork(ComputePI, "pi.txt") creates a thread that runs ComputePI with the argument "pi.txt".
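
ThreadFork() is lecture-style pseudocode; a minimal runnable analogue using the POSIX pthread_create API (an assumed mapping, not the course's own implementation) might look like:

#include <pthread.h>
#include <stdio.h>

/* Stand-ins for the real routines; each receives its filename argument. */
static void *ComputePI(void *arg)      { printf("computing pi into %s\n", (char *)arg); return NULL; }
static void *PrintClassList(void *arg) { printf("printing class list %s\n", (char *)arg); return NULL; }

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, ComputePI, "pi.txt");             /* ~ ThreadFork(ComputePI, "pi.txt") */
    pthread_create(&t2, NULL, PrintClassList, "classlist.txt");
    pthread_join(t1, NULL);    /* wait so main() does not exit before the threads finish */
    pthread_join(t2, NULL);
    return 0;
}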

13
Q

Explain the concept of an execution stack and its importance in programming.

A

The execution stack stores temporary data like function parameters, local variables, and return addresses. It enables recursive calls and scoped variables, essential for structured programming.

14
Q

What are the steps involved in saving and restoring the CPU state during a context switch?

A
1. Save the current thread’s state (PC, registers, stack pointer) to its TCB.
2. Load the new thread’s state from its TCB into the CPU.
This ensures threads resume correctly from where they left off.
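
A runnable user-level analogue of this save/restore step, using the POSIX ucontext API (an illustration of the mechanism, not the kernel's actual TCB code): swapcontext() saves the caller's registers, PC, and stack pointer into one context and loads another's.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, worker_ctx;
static char worker_stack[64 * 1024];

static void worker(void) {
    printf("worker: running on its own stack\n");
    swapcontext(&worker_ctx, &main_ctx);           /* save worker's state, restore main's */
}

int main(void) {
    getcontext(&worker_ctx);                       /* initialize, then give it its own stack */
    worker_ctx.uc_stack.ss_sp   = worker_stack;
    worker_ctx.uc_stack.ss_size = sizeof worker_stack;
    worker_ctx.uc_link          = &main_ctx;
    makecontext(&worker_ctx, worker, 0);

    printf("main: switching to worker\n");
    swapcontext(&main_ctx, &worker_ctx);           /* save main's state, restore worker's */
    printf("main: resumed exactly where it left off\n");
    return 0;
}
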
15
Q

Why is saving and restoring the CPU state crucial during a context switch?

A

It maintains thread isolation and program correctness. Without it, data corruption and unpredictable behavior could occur.

16
Q

What is the concept of a yield() operation in thread execution?

A

Yield() allows a thread to voluntarily relinquish the CPU, giving other threads a chance to execute, promoting fairness and responsiveness.
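
POSIX exposes this as sched_yield(); a minimal sketch (Linux assumption, with a hypothetical do_small_unit_of_work() standing in for real work):

#include <sched.h>

void polite_loop(void) {
    for (;;) {
        /* do_small_unit_of_work();  -- hypothetical work step */
        sched_yield();   /* give up the CPU; the thread goes back on the ready queue */
    }
}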

17
Q

Glossary

Process

A

Independent execution unit with its own memory and resources.

18
Q

Glossary

Thread

A

Lightweight execution unit within a process sharing resources.

19
Q

Concurrency

A

Overlapping execution of multiple tasks, whether interleaved on one CPU or run in parallel, for efficiency.

20
Q

Thread Control Block (TCB)

A

Stores thread state and scheduling info.

21
Q

Context Switch

A

Saves/restores thread states for CPU sharing.

22
Q

Yield()

A

Voluntary relinquishment of CPU by a thread.

23
Q

Interrupt

A

Signal that suspends normal execution so a handler can service a specific event.

24
Q

Timer Interrupt

A

Periodic interrupt for OS scheduling.

25
Q

Multithreading

A

Using threads within a process for concurrency.

26
Q

Hyperthreading

A

Hardware creating virtual cores for concurrent thread execution.

27
Q

Synchronization

A

Mechanisms for safe shared resource access.

28
Q

Mutex

A

Lock allowing only one thread at a time to access a resource.

29
Q

Semaphore

A

Counter, modified only through atomic P()/V() operations, that controls access to limited resources.

30
Q

Race Condition

A

Bug where the outcome depends on the unpredictable timing or interleaving of threads accessing shared state.

31
Q

Deadlock

A

Threads blocked indefinitely, waiting for each other’s resources.

32
Q

Explain the concept of hyper-threading and its potential performance implications.

A

Hyper-threading allows a single physical core to execute multiple threads concurrently by duplicating certain CPU components. While it increases throughput, contention for shared resources like ALUs can limit performance gains.

33
Q

Differentiate between multiprocessing and multiprogramming.

A

Multiprocessing uses multiple CPUs for parallel task execution, while multiprogramming allows multiple jobs to share a single CPU through time-sharing.

34
Q

What are the characteristics of independent threads?

A

Independent threads have no shared state with others, making them deterministic and reproducible. Their execution order does not affect the results.

35
Q

Why might cooperating threads be preferred in certain situations?

A

Cooperating threads share resources and can achieve speedup through overlapping I/O and computation, e.g., multiple ATMs accessing a single bank account.

36
Q

How does a threaded web server handle multiple client requests?

A

A threaded web server creates a new thread for each incoming client request, allowing concurrent processing and better utilization of system resources.

37
Q

What is the purpose of a thread pool in a web server?

A

A thread pool manages a fixed number of worker threads, preventing resource exhaustion during traffic spikes. Incoming requests are queued and processed by available threads.

38
Q

What potential problems can arise from shared state among concurrent threads?

A

Shared state can lead to race conditions, inconsistencies, and unpredictable behavior due to the non-deterministic nature of thread interleaving.

39
Q

Why are atomic operations crucial in concurrent programming?

A

Atomic operations complete without interruption, ensuring data consistency and predictable behavior when multiple threads access shared resources.
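
A small runnable C11 example: with a plain counter++ the two threads could lose updates, but atomic_fetch_add is indivisible, so the final value is always 2 × N.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define N 1000000
static atomic_int counter = 0;

static void *inc(void *arg) {
    for (int i = 0; i < N; i++)
        atomic_fetch_add(&counter, 1);   /* atomic read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, inc, NULL);
    pthread_create(&b, NULL, inc, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d (always %d)\n", (int)atomic_load(&counter), 2 * N);
    return 0;
}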

40
Q

What challenges exist in debugging programs with cooperating threads?

A

Non-deterministic execution can lead to “Heisenbugs” that are difficult to reproduce. Extensive testing with different interleaving scenarios is required.

41
Q

Multiprocessing

A

Parallel task execution using multiple CPUs.

42
Q

Multiprogramming

A

Time-shared execution of multiple jobs on one CPU.

43
Q

Atomic Operation

A

An indivisible operation that either completes entirely or does not occur at all. It ensures data consistency in concurrent programming by preventing interruptions during execution.

44
Q

Thread Pool

A

Pre-created threads for handling incoming tasks efficiently.

45
Q

Denial of Service (DoS)

A

Overloading a server with requests to disrupt service.

46
Q

Interleaving

A

Non-deterministic thread instruction execution order.

47
Q

Heisenbug

A

Bug that alters/disappears during debugging attempts.

48
Q

Why is it challenging to achieve synchronization using only atomic load and store operations?

A

Atomic load and store operations guarantee atomicity only for individual memory accesses. However, synchronization often involves multi-step operations, like checking and updating a flag, which can be interrupted, leading to race conditions.
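
A deliberately broken sketch of that multi-step pattern (flag and broken_acquire are illustrative names, not a real lock): each individual access is atomic, yet two threads can both see the flag as free and both proceed.

#include <stdatomic.h>

static atomic_int flag = 0;                 /* 0 = free, 1 = busy */

void broken_acquire(void) {
    while (atomic_load(&flag) == 1)         /* the load is atomic on its own */
        ;                                   /* spin until the flag looks free */
    /* a context switch here lets another thread also observe flag == 0 ...  */
    atomic_store(&flag, 1);                 /* ...so both store 1 and both enter */
}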

49
Q

Describe the “Too Much Milk” problem and its significance in understanding synchronization issues.

A

The “Too Much Milk” problem illustrates the difficulties of coordination between threads accessing a shared resource. It emphasizes the need for synchronization mechanisms to prevent redundant actions while ensuring the task is performed when needed.

50
Q

Why is Solution #1 to the “Too Much Milk” problem, which involves leaving a note, insufficient?

A

Solution #1 fails because checking and acting are not atomic. A thread can be interrupted after checking for milk and for a note but before leaving its own note, so the other thread also sees no note and both end up buying milk.
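
In pseudocode-style C (the flags noMilk/noNote and the helpers leave_note(), buy_milk(), remove_note() are hypothetical names for the scenario), Solution #1 is roughly the following; the comment marks where preemption breaks it:

if (noMilk) {
    if (noNote) {
        /* <-- a context switch here lets the other roommate run the same checks */
        leave_note();
        buy_milk();
        remove_note();
    }
}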

51
Q

What are the key issues with using interrupt enable/disable as a locking mechanism?

A

Disabling interrupts can delay critical event handling and is unsuitable for multiprocessor systems, where globally disabling interrupts is complex and inefficient.

52
Q

Explain the purpose of a “guard” variable in the improved lock implementation using test&set.

A

A “guard” variable protects the lock’s internal state (its value and wait queue) so that only one thread manipulates that state at a time. Threads busy-wait only briefly on the guard rather than spinning on the lock itself, which keeps contention and wasted CPU time low.

53
Q

What are the advantages and disadvantages of using test&set for implementing locks?

A

Advantages: Works on multiprocessors, allows user-level locks, and avoids prolonged interrupt disabling.
Disadvantages: Leads to busy-waiting, wasting CPU cycles, and can cause priority inversion.
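
A minimal runnable spinlock built on an atomic test&set using C11 atomics; the spinning in acquire() is exactly the busy-waiting listed as a disadvantage.

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

void acquire(void) {
    while (atomic_flag_test_and_set(&lock_flag))  /* test&set: returns the old value */
        ;                                         /* busy-wait until it was clear    */
}

void release(void) {
    atomic_flag_clear(&lock_flag);                /* mark the lock free again */
}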

54
Q

Describe the concept of “busy-waiting” and why it is generally undesirable.

A

Busy-waiting occurs when a thread continuously checks the status of a lock without performing useful work. It wastes CPU resources and can block other threads from progressing.

55
Q

How does a semaphore differ from a simple integer variable in the context of concurrency?

A

Semaphores are accessed exclusively through atomic P() and V() operations, ensuring controlled modification and preventing race conditions. Simple integers lack this atomicity and control.
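
In POSIX terms, P() and V() correspond to sem_wait() and sem_post(); the count can only be changed through these atomic calls. A minimal sketch (Linux/POSIX assumption):

#include <semaphore.h>

static sem_t sem;

void init_sem(void) { sem_init(&sem, 0, 1); }  /* initial count 1 behaves like a mutex */

void P(void) { sem_wait(&sem); }   /* decrement; block if the count is 0 */
void V(void) { sem_post(&sem); }   /* increment; wake one waiting thread */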

56
Q

Provide an example of how semaphores can be used to enforce scheduling constraints between threads.

A

A semaphore initialized to 0 can be used for a “joiner” thread to wait using P() until a “worker” thread signals completion using V(). This ensures the joiner proceeds only after the worker finishes.
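
A runnable sketch of that joiner/worker pattern with POSIX threads and semaphores (environment assumption: Linux/pthreads):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t done;                      /* initialized to 0 below */

static void *worker(void *arg) {
    printf("worker: finished its task\n");
    sem_post(&done);                    /* V(): signal completion */
    return NULL;
}

int main(void) {
    pthread_t t;
    sem_init(&done, 0, 0);              /* count 0: the joiner must wait */
    pthread_create(&t, NULL, worker, NULL);
    sem_wait(&done);                    /* P(): block until the worker signals */
    printf("joiner: worker is done, proceeding\n");
    pthread_join(t, NULL);
    return 0;
}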

57
Q

Discuss the difficulties in designing and debugging concurrent programs, focusing on the challenges of non-determinism and race conditions.

A

Non-determinism in concurrent programs arises because thread execution order varies, leading to unpredictable outcomes. Race conditions occur when threads access shared resources without proper synchronization, causing data corruption. Debugging is difficult due to the intermittent and non-reproducible nature of these issues.

58
Q

Compare and contrast the use of disabling interrupts and using test&set for implementing locks. Highlight the trade-offs and suitability of each approach in different scenarios.

A

Disabling interrupts ensures mutual exclusion but is inefficient in multiprocessor systems and delays critical events. Test&set is more scalable, allowing user-level locks, but causes busy-waiting and potential priority inversion. Use disabling interrupts in simple, single-core systems and test&set for multiprocessor environments.

59
Q

Explain the concept of priority inversion. Describe how it can occur with locks and discuss potential solutions or mitigation strategies.

A

Priority inversion occurs when a high-priority thread is blocked waiting for a resource held by a lower-priority thread, while a medium-priority thread preempts the lower-priority thread. Solutions include priority inheritance, where the lower-priority thread temporarily inherits the higher priority to complete its task.

60
Q

Describe in detail how semaphores can be used to solve the “producer-consumer” problem, where one thread produces data and another thread consumes it. Explain how semaphores ensure both synchronization and buffer management.

A

Two counting semaphores are used: full (initialized to 0) counts items in the buffer, and empty (initialized to the buffer size) counts free slots. The producer waits (P) on empty before inserting and signals (V) full after producing; the consumer waits (P) on full before removing and signals (V) empty after consuming. This prevents overfilling the buffer and consuming from an empty one, and a mutex typically protects the buffer itself.
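
A compact runnable version of that scheme (BUF_SIZE, N_ITEMS, and the variable names are illustrative choices):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUF_SIZE 4
#define N_ITEMS  8

static int buf[BUF_SIZE];
static int in = 0, out = 0;
static sem_t empty_slots, full_slots;                       /* counting semaphores */
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;     /* protects buf/in/out */

static void *producer(void *arg) {
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&empty_slots);                 /* P(empty): wait for a free slot */
        pthread_mutex_lock(&mtx);
        buf[in] = i;
        in = (in + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mtx);
        sem_post(&full_slots);                  /* V(full): an item is available  */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&full_slots);                  /* P(full): wait for an item      */
        pthread_mutex_lock(&mtx);
        int item = buf[out];
        out = (out + 1) % BUF_SIZE;
        pthread_mutex_unlock(&mtx);
        sem_post(&empty_slots);                 /* V(empty): the slot is free     */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUF_SIZE);        /* all slots start empty */
    sem_init(&full_slots, 0, 0);                /* no items yet          */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}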

61
Q

Analyze the different correctness properties that must be considered when designing concurrent programs. Provide specific examples of these properties and discuss their implications for program behavior and reliability.

A

Correctness properties include safety (no two threads enter a critical section simultaneously), liveness (threads eventually proceed), and fairness (equal access to resources). For example, mutexes ensure safety, but improper implementation can cause deadlocks, impacting liveness. Fair scheduling avoids starvation, maintaining system reliability.