Locks Flashcards
Explain the term race condition
It is a situation where the correctness of several operations depends on the order (interleaving) in which the operations are executed — an order the program does not control.
Enumerate and explain the requirements for a valid synchronization solution
- Mutual exclusion: At most one thread can be in the CS at any time
- Progress: No thread running outside of the CS may block another thread from getting in
- Bounded Waiting: Once a thread starts trying to enter the CS, there is a bound on the number of times other threads may get in first
- Performance: The time overhead added by using the lock should be low in the cases of no, low, and high contention
Critical Section
CS is a code sequence that might result in a race condition when executed concurrently => protect CS from concurrent execution
Does disabling interrupts result in mutual exclusion? (advantages and disadvantages)
the kernel only switches threads on interrupts, so disabling them (e.g. via a per-thread do-not-disturb bit) prevents preemption inside the CS
+ easy and convenient in the kernel
- only works in single-core systems: disabling interrupts on one CPU doesn’t affect other CPUs
- only feasible in kernel: user code shouldn’t have the privilege to turn off interrupts (infinite loop, hogging the CPU)
Locking with TestAndSet
int TestAndSet(int *old_ptr, int new) {
    int old = *old_ptr;  // fetch the old value at old_ptr
    *old_ptr = new;      // store new into old_ptr
    return old;          // return the old value
}
// spin-wait while the lock is held
while (TestAndSet(&lock->flag, 1) == 1)
    ;
Locking with CompareAndSwap
int CompareAndSwap(int *addr, int expected, int new) {
    int actual = *addr;
    if (actual == expected)  // only update if the value is still the expected one
        *addr = new;
    return actual;
}
// spin-wait until the flag could be set from 0 to 1
while (CompareAndSwap(&lock->flag, 0, 1) == 1)
    ;
Idea: test whether the value at the address addr is equal to expected; if so, update the memory location pointed to by addr with the new value; in either case, return the actual (old) value
Lock with Load-linked/Store-conditional
(pseudocode — LL/SC are paired hardware instructions, e.g. on ARM and RISC-V)
int LoadLinked(int *ptr);                   // returns *ptr and starts monitoring the address
int StoreConditional(int *ptr, int value);  // stores value only if *ptr was not written since the LoadLinked; returns 1 on success, 0 on failure
void lock(lock_t *lock) {
    while (1) {
        while (LoadLinked(&lock->flag) == 1)
            ; // spin until the lock looks free
        if (StoreConditional(&lock->flag, 1) == 1)
            return; // setting the flag to 1 succeeded: lock acquired
    }
}
Which conditions of synchronization are met with atomic store operations
+ mutual exclusion: only one thread can enter CS
+ progress: only a thread inside the CS keeps others from getting in
- bounded waiting: no upper bound on how long a thread may spin while others repeatedly get in first
- performance: wasted time while spinning on the CPU
Spinlock limitations
- if the CS is large or many threads try to enter, spinlocks might not be a good choice as many threads actively wait spinning
- if threads on different cores use the lock, the spinning makes the cache line holding the lock bounce between cores
- can behave unexpectedly when processes are scheduled with static priorities
Disadvantages of yield() instead of spinning
- the cost of a context switch can be high
- bad performance with many threads: each waiter is scheduled just to yield again
Disadvantages of busy waiting
- wastes resources when threads wait for locks
- stresses the cache coherence protocol when used across cores
- can cause priority inversion problem
What is the idea behind using blocking locks
- threads should sleep on locks if they are occupied
- wake up threads one at a time when lock becomes free
Explain two-phase locks
- try to get into the CS with a userspace spinlock (phase 1)
- if the CS is busy, use a syscall to put the thread to sleep (phase 2)
- the thread is only woken up when the lock becomes free later
- on unlock: exit in userspace if no other thread is waiting, otherwise wake up a blocked thread
Explain critical section, entry section, exit section, and remainder section
CS: code region in which a thread accesses some common data such as a shared variable. As soon as one thread is executing inside a CS, no other thread is allowed to execute in the same CS
Entry section: a thread must request permission to enter the critical section
Exit section: signal completion of the CS to allow other threads to enter
All other code is called the remainder section
A CS is enclosed by an entry section and an exit section
How could a race condition be avoided?
They can be avoided by ensuring that "critical operations" are performed atomically. To ensure atomicity, a synchronization primitive such as a lock can be used: it guarantees that at most one thread is inside the critical section at any time.