C191-Terms-Chapter-5 Flashcards
Concurrency
the act of multiple processes (or threads) executing at the same time. When multiple physical CPUs are available, the processes may execute in parallel.
On a single CPU, concurrency may be achieved by time-sharing.
When concurrent processes access a common data area, the data must be protected from simultaneous change by two or more processes. Otherwise the updated area may be left in an inconsistent state.
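The short sketch below is an illustration (not part of the card set) of the inconsistency that can result: two threads increment a shared counter with no protection, and updates are lost because counter++ is not an atomic operation.

    /* Illustration only: two threads update a shared counter with no
     * synchronization, so increments can be lost (a race condition). */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;          /* the shared data area */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000, but the printed value is often smaller. */
        printf("counter = %ld\n", counter);
        return 0;
    }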
critical section
a segment of code that cannot be entered by a process while another process is executing a corresponding segment of the code.
A software solution to the CS problem.
Any solution to the critical section (CS) problem must satisfy the following requirements:
Guarantee mutual exclusion: Only one process may be executing within the CS.
Prevent lockout: A process not attempting to enter the CS must not prevent other processes from entering the CS.
Prevent starvation: A process (or a group of processes) must not be able to repeatedly enter the CS while other processes are waiting to enter.
Prevent deadlock: Multiple processes trying to enter the CS at the same time must not block each other indefinitely.
Process cooperation
many applications require the cooperation of processes to solve a common goal. A typical scenario is when one process produces data needed by another process.
The producer process must be able to inform the waiting consumer process whenever a new data item is available.
In turn, the producer must wait for an acknowledgement from the consumer, indicating that the consumer is ready to accept the next data item.
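One common way to code this signal-and-acknowledge handshake is with two semaphores (introduced later in this set). The sketch below is only an assumed illustration, not the course's reference solution; the names data_ready and ack are hypothetical.

    /* Sketch of the produce/consume handshake using two POSIX semaphores.
     * 'data_ready' signals that a new item is available; 'ack' signals
     * that the consumer is ready for the next item. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t data_ready;   /* producer -> consumer: "item available" */
    static sem_t ack;          /* consumer -> producer: "ready for next" */
    static int item;           /* the single shared data slot            */

    static void *producer(void *arg) {
        (void)arg;
        for (int i = 1; i <= 5; i++) {
            item = i;                  /* produce a data item             */
            sem_post(&data_ready);     /* inform the waiting consumer     */
            sem_wait(&ack);            /* wait for the acknowledgement    */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 1; i <= 5; i++) {
            sem_wait(&data_ready);     /* wait until an item is available */
            printf("consumed %d\n", item);
            sem_post(&ack);            /* acknowledge: ready for the next */
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&data_ready, 0, 0);
        sem_init(&ack, 0, 0);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }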
mutex lock
A mutual exclusion lock; the simplest software tool for assuring mutual exclusion.
We use the mutex lock to protect critical sections and thus prevent race conditions.
That is, a process must acquire the lock before entering a critical section; it releases the lock when it exits the critical section.
The acquire() function acquires the lock, and the release() function releases the lock.
A mutex lock has a boolean variable available whose value indicates if the lock is available or not. If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable.
A process that attempts to acquire an unavailable lock is blocked until the lock is released.
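The cards describe acquire() and release() generically. As one concrete (assumed) mapping, the sketch below uses the POSIX calls pthread_mutex_lock() and pthread_mutex_unlock() to protect the shared counter from the earlier race-condition illustration.

    /* Protecting a critical section with a POSIX mutex: lock before
     * entering, unlock when leaving (acquire()/release() in the cards). */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long counter = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);     /* acquire: blocks if unavailable */
            counter++;                     /* critical section               */
            pthread_mutex_unlock(&lock);   /* release: lock available again  */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 2000000 now */
        return 0;
    }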
contended
A term describing the condition of a lock when a thread blocks while trying to acquire it.
Locks are either contended or uncontended. A lock is considered contended if a thread blocks while trying to acquire it.
uncontended
A term describing a lock that is available when a thread attempts to acquire it.
If a lock is available when a thread attempts to acquire it, the lock is considered uncontended.
high contention
A contended lock experiences high contention when a relatively large number of threads attempt to acquire it.
low contention
A contended lock experiences low contention when a relatively small number of threads attempt to acquire it.
short duration
Given that waiting on a lock requires two context switches—a context switch to move the thread to the waiting state and a second context switch to restore the waiting thread once the lock becomes available—the general rule is to use a spinlock if the lock will be held for a duration of less than two context switches.
busy waiting
A practice that allows a thread or process to use CPU time continuously while waiting for something, such as an I/O loop in which an I/O thread continuously reads status information while waiting for I/O to complete.
While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire().
This continual looping is clearly a problem in a real multiprogramming system, where a single CPU core is shared among many processes.
Busy waiting also wastes CPU cycles that some other process might be able to use productively.
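The continual looping described above can be sketched with a busy-waiting acquire() built on a C11 atomic flag. This is an illustrative assumption about how such an acquire() might look, not the course's implementation.

    /* Busy-waiting acquire()/release(): the waiting thread loops,
     * consuming CPU cycles, until the lock becomes available. */
    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool available = true;   /* true = lock is free */

    void acquire(void) {
        bool expected = true;
        /* Spin: keep retrying until we atomically flip 'available'
         * from true (free) to false (held). */
        while (!atomic_compare_exchange_weak(&available, &expected, false))
            expected = true;               /* reset after a failed attempt */
    }

    void release(void) {
        atomic_store(&available, true);    /* lock is available again */
    }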
spinlock
A locking mechanism that continuously uses the CPU while waiting for access to the lock.
Spinlocks do have an advantage, however, in that no context switch is required when a process must wait on a lock, and a context switch may take considerable time.
In certain circumstances on multicore systems, spinlocks are in fact the preferable choice for locking.
If a lock is to be held for a short duration, one thread can “spin” on one processing core while another thread performs its critical section on another core.
On modern multicore computing systems, spinlocks are widely used in many operating systems.
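POSIX also exposes spinlocks directly. The short sketch below (assumed usage, not taken from the cards) shows pthread_spin_lock() guarding a very short critical section, the situation where spinning on another core can beat paying for two context switches.

    /* A POSIX spinlock guarding a very short critical section. */
    #include <pthread.h>

    static pthread_spinlock_t spin;
    static long hits = 0;

    void init_spin(void) {
        pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);
    }

    void count_hit(void) {
        pthread_spin_lock(&spin);      /* spin on this core if held elsewhere */
        hits++;                        /* short critical section              */
        pthread_spin_unlock(&spin);
    }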
The software solution to the CS problem has several major shortcomings:
The solution works only for two processes. When three or more processes need to share a CS, a different solution must be developed.
The solution is inefficient. While one process is in the CS, the other process must wait by repeatedly testing and setting the synchronization variables. The waiting process is only wasting CPU and memory resources without accomplishing any useful work.
The solution addresses only competition among processes. To address problems of process cooperation, entirely different solutions must be devised.
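The cards do not name the two-process software solution they refer to; Peterson's algorithm is a common instance, and the sketch below is offered only as an assumed example of such a solution, written with C11 atomics so the accesses are not reordered.

    /* Peterson's algorithm: a two-process software solution to the CS
     * problem (a typical example; the cards do not name a specific one). */
    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool flag[2];    /* flag[i]: process i wants to enter */
    static atomic_int  turn;       /* whose turn it is to defer         */

    void enter_cs(int i) {         /* i is 0 or 1; j is the other process */
        int j = 1 - i;
        atomic_store(&flag[i], true);
        atomic_store(&turn, j);
        /* Busy-wait while the other process wants in and it is its turn. */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                      /* spin */
    }

    void exit_cs(int i) {
        atomic_store(&flag[i], false);
    }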
Semaphores
general-purpose primitives that allow solving a variety of synchronization problems in a systematic manner.
A semaphore s is a non-negative integer variable that can be accessed using only two special operations, P and V.
V(s): increment s by 1
P(s): if s > 0, decrement s by 1; otherwise wait until s > 0 and then decrement
The implementation of P and V must guarantee that:
If several processes simultaneously invoke P(s) or V(s), the operations will occur sequentially in some arbitrary order.
If more than one process is waiting inside P(s) for s to become > 0, one of the waiting processes is selected to complete the P(s) operation.
The selection can be at random but a typical implementation maintains the blocked processes in a queue processed in FIFO order.
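A minimal sketch of such an implementation, assuming a pthread mutex and condition variable (waiter selection here is arbitrary, which the card permits; a strictly FIFO version would need an explicit queue):

    /* Sketch of a counting semaphore built from a mutex and a condition
     * variable. P() blocks while s == 0; V() increments s and wakes one
     * waiter. Waiter selection is arbitrary, not strictly FIFO. */
    #include <pthread.h>

    typedef struct {
        int             s;       /* the non-negative semaphore value */
        pthread_mutex_t m;       /* serializes access to s           */
        pthread_cond_t  cv;      /* where waiters in P() block       */
    } semaphore;

    void sem_init_n(semaphore *sem, int n) {
        sem->s = n;
        pthread_mutex_init(&sem->m, NULL);
        pthread_cond_init(&sem->cv, NULL);
    }

    void sem_P(semaphore *sem) {
        pthread_mutex_lock(&sem->m);
        while (sem->s == 0)                   /* wait until s > 0 */
            pthread_cond_wait(&sem->cv, &sem->m);
        sem->s--;                             /* then decrement   */
        pthread_mutex_unlock(&sem->m);
    }

    void sem_V(semaphore *sem) {
        pthread_mutex_lock(&sem->m);
        sem->s++;                             /* increment s by 1 */
        pthread_cond_signal(&sem->cv);        /* wake one waiter  */
        pthread_mutex_unlock(&sem->m);
    }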
The CS problem using semaphores
A single semaphore, initialized to 1, is sufficient to solve the problem for any number of processes.
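With POSIX semaphores (sem_wait corresponds to P and sem_post to V), a sketch of this scheme for several threads looks as follows; the names are assumed for illustration.

    /* Mutual exclusion with a single semaphore initialized to 1:
     * P (sem_wait) before the critical section, V (sem_post) after. */
    #include <pthread.h>
    #include <semaphore.h>

    static sem_t mutex;            /* binary semaphore, initial value 1 */
    static long shared = 0;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);      /* P(mutex): enter critical section  */
            shared++;              /* critical section                  */
            sem_post(&mutex);      /* V(mutex): leave critical section  */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        sem_init(&mutex, 0, 1);    /* works for any number of threads   */
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        return 0;
    }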