Quiz 3 Flashcards
What is a race condition?
An undesirable condition that occurs when a device or system attempts to perform two or more operations at the same time, but because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.
One example: two processes create child processes using the fork() system call. There is a race condition on the kernel variable next_available_pid, which represents the next available process identifier (pid). Unless there is mutual exclusion, the same pid could be assigned to two different processes.
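A minimal sketch (not from the card) of the same kind of race in user code: two threads increment a shared counter without mutual exclusion, so updates can be lost. The function and variable names are illustrative.

    #include <pthread.h>
    #include <stdio.h>

    int counter = 0;                  /* shared variable, no protection */

    void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++)
            counter++;                /* read-modify-write is not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);   /* often less than 2000000 */
        return 0;
    }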
What is the critical section problem?
Consider a system of n processes {P0, P1, …, Pn-1}
Each process has a critical section segment of code
-Process may be changing common variables, updating tables, writing to files, etc.
-When one process is in its critical section, no other process may be in its critical section
The critical-section problem is to design a protocol the processes can use to handle this.
Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then the remainder section, as sketched below.
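In outline, the structure of a participating process Pi looks like this (a generic sketch; the section names are the ones used in the card):

    while (true) {
        /* entry section: ask permission to enter */

            /* critical section: access shared data */

        /* exit section: announce that the process has left */

            /* remainder section: everything else */
    }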
What requirements must a solution to the critical-section problem satisfy?
- Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical section.
- Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
- Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
- Assume that each process executes at a nonzero speed
- No assumption may be made concerning the relative speed of the n processes. (A classic solution that satisfies all three requirements is sketched below.)
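As an example, Peterson's classic two-process software solution satisfies all three requirements. A sketch for process Pi, where j = 1 - i and flag/turn are shared between the two processes (this is the standard textbook-style sketch, not compilable as-is):

    int turn;                 /* whose turn it is to enter */
    int flag[2] = {0, 0};     /* flag[i] == 1 means Pi wants to enter */

    while (true) {
        flag[i] = 1;          /* I want to enter */
        turn = j;             /* but give the other process priority */
        while (flag[j] && turn == j)
            ;                 /* busy wait */

            /* critical section */

        flag[i] = 0;          /* exit section */

            /* remainder section */
    }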
What are preemptive and nonpreemptive kernels?
Preemptive means that the kernel allows a process to be preempted while it is running in kernel mode.
Non-preemptive means that a process runs until it exits kernel mode, blocks, or voluntarily yields the CPU.
-A non-preemptive kernel is essentially free of race conditions on kernel data structures.
What is a semaphore?
-Synchronization tool that provides more sophisticated ways for processes to synchronize their activities.
-Semaphore S - integer value
-Can only be accessed via two indivisible operations
- wait() and signal()
- wait(S) {
      while (S <= 0)
          ;  // busy wait
      S--;
  }
- signal(S) {
      S++;
  }
How can semaphores be used?
-Counting semaphore - Integer value can range over an unrestricted domain
-Binary semaphore - Integer value can range only between 0 and 1
-Can solve various synchronization problems.
-Consider processes P1 and P2 and a requirement that statement S1 happen before statement S2.
-Create a semaphore "synch" initialized to 0
P1:
    S1;
    signal(synch);
P2:
    wait(synch);
    S2;
P2 waits until it gets the signal from P1 that S2 can now be run.
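The same ordering, written as a runnable sketch with POSIX semaphores and two threads (the thread functions and printf calls are illustrative stand-ins for S1 and S2):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t synch;                      /* initialized to 0 */

    void *p1(void *arg) {
        printf("S1\n");               /* statement S1 */
        sem_post(&synch);             /* signal(synch) */
        return NULL;
    }

    void *p2(void *arg) {
        sem_wait(&synch);             /* wait(synch): blocks until P1 signals */
        printf("S2\n");               /* statement S2 runs only after S1 */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&synch, 0, 0);       /* thread-shared semaphore, value 0 */
        pthread_create(&t2, NULL, p2, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&synch);
        return 0;
    }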
How can semaphores be implemented?
- Must guarantee that no two processes can execute the wait() and signal() on the same semaphore at the same time
- Thus the implementation becomes the critical section problem where the wait and signal code are placed in the critical section
- Could now have busy waiting in critical section implementation
- But implementation code is short
- Little busy waiting if critical section rarely occupied
- Note that applications may spend lots of time in critical sections, so this is not a good solution; a sketch of this guarded, busy-waiting approach follows.
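One way to picture this (an assumed sketch, not the card's code) is to guard the semaphore's integer with a short test-and-set spinlock, so the bodies of wait() and signal() each run as a small critical section:

    #include <stdatomic.h>

    atomic_flag guard = ATOMIC_FLAG_INIT;   /* protects S */
    int S = 1;                              /* the semaphore value */

    void wait_sem(void) {
        for (;;) {
            while (atomic_flag_test_and_set(&guard))
                ;                           /* short busy wait on the guard */
            if (S > 0) {                    /* wait() body is the critical section */
                S--;
                atomic_flag_clear(&guard);
                return;
            }
            atomic_flag_clear(&guard);      /* release guard, then retry (busy waiting) */
        }
    }

    void signal_sem(void) {
        while (atomic_flag_test_and_set(&guard))
            ;                               /* short busy wait on the guard */
        S++;                                /* signal() body is the critical section */
        atomic_flag_clear(&guard);
    }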
How can semaphores be implemented without busy waiting?
- With each semaphore there is an associated waiting queue.
- Each entry in a waiting queue has a value (of type integer), and a pointer to the next record in the list.
- Two operations
- block - place the process invoking the operation on the appropriate waiting queue
- wakeup - remove one of the processes in the waiting queue and place it in the ready queue
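A runnable sketch of this idea using a mutex and condition variable, where the condition variable plays the role of the waiting queue and pthread_cond_wait/pthread_cond_signal stand in for block/wakeup (the csem_* names are illustrative):

    #include <pthread.h>

    typedef struct {
        int value;                    /* semaphore value */
        pthread_mutex_t lock;         /* protects value */
        pthread_cond_t  queue;        /* waiting queue of blocked processes */
    } csem_t;

    void csem_init(csem_t *s, int value) {
        s->value = value;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->queue, NULL);
    }

    void csem_wait(csem_t *s) {
        pthread_mutex_lock(&s->lock);
        while (s->value <= 0)
            pthread_cond_wait(&s->queue, &s->lock);  /* block(): sleep, no spinning */
        s->value--;
        pthread_mutex_unlock(&s->lock);
    }

    void csem_signal(csem_t *s) {
        pthread_mutex_lock(&s->lock);
        s->value++;
        pthread_cond_signal(&s->queue);              /* wakeup(): move one waiter to ready */
        pthread_mutex_unlock(&s->lock);
    }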
What is the bounded-buffer problem?
The problem is to make sure that the producer won’t try to add data into the buffer if it’s full and that the consumer won’t try to remove data from an empty buffer.
There are n buffers, each can hold one item. Semaphore mutex initialized to the value 1, semaphore full initialized to the value 0, semaphore empty initialized to the value n.
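A sketch of the classic solution using those three semaphores, with an integer circular buffer and one producer and one consumer thread (the buffer size, item values, and thread bodies are illustrative):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 8                        /* n buffers, each holds one item */

    int buffer[N];
    int in = 0, out = 0;

    sem_t empty;                       /* counts empty slots, initialized to n */
    sem_t full;                        /* counts full slots, initialized to 0  */
    sem_t mutex;                       /* protects buffer indices, initialized to 1 */

    void *producer(void *arg) {
        for (int item = 0; item < 100; item++) {
            sem_wait(&empty);          /* wait for a free slot */
            sem_wait(&mutex);
            buffer[in] = item;         /* add item to buffer */
            in = (in + 1) % N;
            sem_post(&mutex);
            sem_post(&full);           /* one more full slot */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < 100; i++) {
            sem_wait(&full);           /* wait for an available item */
            sem_wait(&mutex);
            int item = buffer[out];    /* remove item from buffer */
            out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);          /* one more empty slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty, 0, N);
        sem_init(&full, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }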
What is the readers-writers problem?
- A data set is shared among a number of concurrent processes
- Readers - only read the data set, do not perform updates
- Writers - can both read and write
- Problem - allow multiple readers to read at the same time
- Only one single writer can access the shared data at the same time
- Several variations of how readers and writers are considered - all involve some form of priorities
- Shared data
- Data set, semaphore rw_mutex initialized to 1, semaphore mutex initialized to 1, read_count initialized to 0.
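With that shared data, the first-variation solution can be sketched as follows; the reading and writing themselves are left as comments, and the semaphores are assumed to be initialized as listed above:

    #include <semaphore.h>

    sem_t rw_mutex;        /* initialized to 1: held by a writer, or by the group of readers */
    sem_t mutex;           /* initialized to 1: protects read_count */
    int read_count = 0;    /* number of readers currently reading */

    void writer(void) {
        sem_wait(&rw_mutex);
            /* ... writing is performed ... */
        sem_post(&rw_mutex);
    }

    void reader(void) {
        sem_wait(&mutex);
        read_count++;
        if (read_count == 1)      /* first reader locks out writers */
            sem_wait(&rw_mutex);
        sem_post(&mutex);
            /* ... reading is performed ... */
        sem_wait(&mutex);
        read_count--;
        if (read_count == 0)      /* last reader lets writers back in */
            sem_post(&rw_mutex);
        sem_post(&mutex);
    }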
What are the variations in the readers-writers problem?
- First variation - no reader is kept waiting unless a writer has already obtained permission to use the shared object.
- Second variation - once a writer is ready, it performs its write as soon as possible.
- Both may have starvation leading to more variations
- Problem is solved on some systems by kernel providing reader-writer locks.
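For instance, POSIX exposes such locks as pthread_rwlock_t; a minimal sketch (shared_data and the function names are illustrative):

    #include <pthread.h>

    pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
    int shared_data;                     /* illustrative shared data set */

    void read_data(void) {
        pthread_rwlock_rdlock(&rwlock);  /* many readers may hold this at once */
        int copy = shared_data;
        (void)copy;
        pthread_rwlock_unlock(&rwlock);
    }

    void write_data(int value) {
        pthread_rwlock_wrlock(&rwlock);  /* exclusive: only one writer */
        shared_data = value;
        pthread_rwlock_unlock(&rwlock);
    }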
What is the dining-philosophers problem?
- Philosophers spend their lives alternating thinking and eating.
- They don't interact with their neighbors, but occasionally try to pick up two chopsticks (one at a time) to eat from the bowl.
- A philosopher needs both chopsticks to eat, and releases both when done.
- In the case of 5 philosophers
- Shared data
- Bowl of rice (data set)
- Semaphore chopstick[5] initialized to 1
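With that shared data, the structure of philosopher i is usually sketched as below (a textbook-style sketch, not compilable as-is). Note that if every philosopher grabs the left chopstick first, this naive version can deadlock, which leads into the next card:

    sem_t chopstick[5];                       /* each initialized to 1 */

    /* structure of philosopher i */
    while (true) {
        /* think */
        sem_wait(&chopstick[i]);              /* pick up left chopstick */
        sem_wait(&chopstick[(i + 1) % 5]);    /* pick up right chopstick */
        /* eat */
        sem_post(&chopstick[i]);              /* put down left chopstick */
        sem_post(&chopstick[(i + 1) % 5]);    /* put down right chopstick */
    }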
What is deadlock?
A deadlock occurs when a process is waiting for a resource that will never become available. For example, if process 1 is using resource 1 and process 2 is using resource 2, but process 1 is waiting for resource 2 and process 2 is waiting for resource 1, the two form a loop: neither process will ever get what it wants, so neither will release the resource it holds, and both stay deadlocked. (A sketch of this scenario with two locks follows.)
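The same circular wait can be reproduced with two mutexes; a minimal sketch, assuming two threads that acquire the locks in opposite order (all names are illustrative):

    #include <pthread.h>

    pthread_mutex_t resource1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t resource2 = PTHREAD_MUTEX_INITIALIZER;

    void *process1(void *arg) {
        pthread_mutex_lock(&resource1);   /* holds resource 1 */
        pthread_mutex_lock(&resource2);   /* waits for resource 2 */
        pthread_mutex_unlock(&resource2);
        pthread_mutex_unlock(&resource1);
        return NULL;
    }

    void *process2(void *arg) {
        pthread_mutex_lock(&resource2);   /* holds resource 2 */
        pthread_mutex_lock(&resource1);   /* waits for resource 1: possible deadlock */
        pthread_mutex_unlock(&resource1);
        pthread_mutex_unlock(&resource2);
        return NULL;
    }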
When can deadlock arise?
Deadlock can arise when these four conditions hold simultaneously:
- Mutual Exclusion: Only one process at a time can use a resource
- Hold and wait: A process holding at least one resource is waiting to acquire additional resources held by other processes
- No preemption: A resource can be released only voluntarily by the process holding it, after that process has completed its task.
- Circular wait: There exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource held by P0.
What is a resource allocation graph?
- A set of vertices V and a set of edges E.
- V is partitioned into two types
- P = {P1, P2,…, Pn}, the set consisting of all the processes in the system.
- R = {R1, R2, …, Rm}, the set consisting of all the resource types in the system.
- Request edge - Directed edge Pi -> Rj (process Pi has requested an instance of resource Rj)
- Assignment edge - Directed edge Rj -> Pi (an instance of resource Rj has been allocated to process Pi)