Quiz 2 Flashcards
Choose whether spinlocks (implemented as described in class) are a good idea or a bad idea in the following situation, and explain why you think it’s a good or bad idea:
Situation:
Correct locking for a critical section on a single-processor system
Good or Bad Idea? Why?
Good
Yes, spinlocks provide mutual exclusion correctly.
Choose whether spinlocks (implemented as described in class) are a good idea or a bad idea in the following situation, and explain why you think it’s a good or bad idea:
Situation:
Correct locking for a critical section on a multiprocessor system
Good or Bad Idea? Why?
Good
Spinlocks also provide correct mutual exclusion on multiprocessor systems.
Choose whether spinlocks (implemented as described in class) are a good idea or a bad idea in the following situation, and explain why you think it’s a good or bad idea:
Situation:
Fairness when multiple threads are attempting to get the lock
Good or Bad Idea? Why?
Bad
Spinlocks are not fair. It is possible that threads could starve.
Choose whether spinlocks (implemented as described in class) are a good idea or a bad idea in the following situation, and explain why you think it’s a good or bad idea:
Situation:
- Low contention for the lock
- Locks are held for a short time
Good or Bad Idea? Why?
Good
This is the ideal situation for spinlocks. Spinlocks have no syscall or context-switch overhead, so if there is a good chance you won’t be spinning for very long, this will be faster than blocking.
Choose whether spinlocks (implemented as described in class) are a good idea or a bad idea in the following situation, and explain why you think it’s a good or bad idea:
Situation:
- High contention for the lock
- Locks are held for a long time
Good or Bad Idea? Why?
Bad
This is a bad situation for spinlocks: the CPU will spend most of its time spinning on the status of the lock instead of doing useful work.
Consider the following snapshot of a system:
Process | Allocation | Max | Available
| A B C | A B C | A B C
P1 | 2 3 0 | 2 9 1 |
P2 | 2 0 0 | 3 4 3 |
P3 | 0 0 1 | 3 7 5 |
P4 | 0 2 0 | 2 3 3 |
P5 | 1 2 1 | 2 2 2 |
Part A: Show that the system is in a safe state by demonstrating an order in which the processes may complete.
Here is one safe ordering: P5, P2, P4, P1, P3
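The answer above just names an order. Below is a minimal sketch (my own illustration, not part of the quiz) of the Banker's-algorithm safety test that justifies such an order. The Allocation and Max matrices come from the snapshot; the Available vector is whatever the Available column of the snapshot holds (it is not reproduced above), so it is passed in by the caller rather than hard-coded here.

#include <stdbool.h>

#define NPROC 5   /* P1..P5 */
#define NRES  3   /* A, B, C */

/* Banker's algorithm safety test: repeatedly find a process whose remaining
   need (Max - Allocation) fits in the currently free resources, let it run
   to completion, and reclaim its allocation.  Safe iff all processes finish. */
bool find_safe_order(int alloc[NPROC][NRES], int max[NPROC][NRES],
                     int avail[NRES], int order[NPROC]) {
    int work[NRES];
    bool finished[NPROC] = { false };
    for (int j = 0; j < NRES; j++)
        work[j] = avail[j];

    int done = 0;
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < NPROC; i++) {
            if (finished[i])
                continue;
            bool fits = true;
            for (int j = 0; j < NRES; j++)
                if (max[i][j] - alloc[i][j] > work[j])
                    fits = false;
            if (fits) {
                for (int j = 0; j < NRES; j++)
                    work[j] += alloc[i][j];   /* reclaim Pi's allocation */
                finished[i] = true;
                order[done++] = i + 1;        /* record Pi in the safe order */
                progress = true;
            }
        }
    }
    return done == NPROC;
}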
Consider the following snapshot of a system:
Process | Allocation | Max | Available
| A B C | A B C | A B C
P1 | 2 3 0 | 2 9 1 |
P2 | 2 0 0 | 3 4 3 |
P3 | 0 0 1 | 3 7 5 |
P4 | 0 2 0 | 2 3 3 |
P5 | 1 2 1 | 2 2 2 |
Part B: If a request from process P4 arrives for (0, 1, 3), can the request be granted immediately? If it can, show that the system is in a safe state by demonstrating an order in which the processes may complete.
This request should be delayed: the system does not currently have enough of resource C available to satisfy it.
Consider the following snapshot of a system:
Process | Allocation | Max | Available
| A B C | A B C | A B C
P1 | 2 3 0 | 2 9 1 |
P2 | 2 0 0 | 3 4 3 |
P3 | 0 0 1 | 3 7 5 |
P4 | 0 2 0 | 2 3 3 |
P5 | 1 2 1 | 2 2 2 |
Part C: If a request from process P3 arrives for (0, 0, 2), can the request be granted immediately? If it can, show that the system is in a safe state by demonstrating an order in which the processes may complete. For this problem assume the system is in the same state as it was before the request in Part B came in.
This request should also be delayed. If P3 got the request, that would leave the system with (2, 3, 0) available. No other process would then be able to finish, so we would have deadlock. Sad!
Thread 1: will try to grab locks 1 and 2 in arbitrary order.
Thread 2: will try to grab locks 1 and 2 in a fixed order, 1 then 2.
Could this deadlock?
Yes: T1 holds L2 and needs L1; T2 holds L1 and needs L2
Thread 1: will try to grab locks 1 and 2 in a fixed order, 1 then 2.
Thread 2: will try to grab locks 1 and 2 in a fixed order, 2 then 1.
Could this deadlock?
Yes: T1 holds L1 and needs L2; T2 holds L2 and needs L1
Thread 1: will try to grab locks 1 and 2 in a fixed order, 1 then 2.
Thread 2: will try to grab locks 1, 2, and 3 in a fixed order, 1, then 2, then 3.
Could this deadlock?
No: Resources are ordered and requested in order
Thread 1: will try to grab locks 1 and 2 in arbitrary order.
Thread 2: will try to grab locks 2 and 3 in arbitrary order.
Thread 3: will try to grab locks 1 and 3 in arbitrary order.
Could this deadlock?
Yes: T1 holds L1 and needs L2; T2 holds L2 and needs L3; T3 holds L3 and needs L1
Thread 1: will try to grab locks 1 and 2 in arbitrary order.
Thread 2: will try to grab locks 2 and 3 in arbitrary order.
Thread 3: will try to grab lock 1.
Could this deadlock?
No: T3 only needs L1, so it will eventually give it up; T2 will always be able to get L3, so it will eventually give up L2
Thread 1: will try to grab locks 1 and 2 in a fixed order, 1 then 2.
Thread 2: will try to grab locks 2 and 3 in a fixed order, 2 then 3.
Thread 3: will try to grab locks 1 and 3 in a fixed order, 1 then 3.
Could this deadlock?
No: Resources are ordered and requested in order
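For concreteness, here is a hedged pthreads sketch of the three threads in the card above acquiring their locks in one global order (lock 1 before 2 before 3). With this discipline no thread ever waits for a lower-numbered lock while holding a higher-numbered one, so a cycle in the wait-for graph (and hence deadlock) is impossible. The function names are illustrative.

#include <pthread.h>

pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock3 = PTHREAD_MUTEX_INITIALIZER;

void *thread1(void *arg) {            /* needs locks 1 and 2 */
    pthread_mutex_lock(&lock1);
    pthread_mutex_lock(&lock2);
    /* ... critical section ... */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

void *thread2(void *arg) {            /* needs locks 2 and 3 */
    pthread_mutex_lock(&lock2);
    pthread_mutex_lock(&lock3);
    /* ... critical section ... */
    pthread_mutex_unlock(&lock3);
    pthread_mutex_unlock(&lock2);
    return NULL;
}

void *thread3(void *arg) {            /* needs locks 1 and 3 */
    pthread_mutex_lock(&lock1);
    pthread_mutex_lock(&lock3);
    /* ... critical section ... */
    pthread_mutex_unlock(&lock3);
    pthread_mutex_unlock(&lock1);
    return NULL;
}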
Thread 1: will try to grab locks 1 and 2 in a fixed order, 1 then 2.
Thread 2: will try to grab locks 2 and 3 in a fixed order, 2 then 3.
Thread 3: will try to grab locks 1 and 2 in arbitrary order.
Could this deadlock?
Yes: T1 holds L1 and needs L2; T3 holds L2 and needs L1
Assume we have a new hardware instruction called the Load-and-Store-Zero, which does the following atomically (here is C pseudo-code):
int LoadAndStoreZero(int *addr) {
    int old = *addr;
    *addr = 0;
    return old;
}
Let’s use this instruction to build a spinlock. Complete the following three functions by filling in the blank lines. You can shorten any calls to the above instruction as LASZ().
void lock_init(int *lock) {
    // What do we initialize our lock to?
    *lock = _________;
}

void lock_acquire(int *lock) {
    while (____________);
}

void lock_release(int *lock) {
    *lock = __________;
}
void lock_init(int *lock) {
    // 1 means the lock is free, 0 means it is held
    *lock = 1;
}

void lock_acquire(int *lock) {
    // LASZ() returns the old value and leaves 0 (held) behind;
    // keep spinning until we read a 1 (the lock was free and is now ours)
    while (LASZ(lock) == 0);
}

void lock_release(int *lock) {
    *lock = 1;
}
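Real hardware does not provide a Load-and-Store-Zero instruction directly, but the same spinlock can be sketched in compilable C by standing in for LASZ with the GCC/Clang __atomic_exchange_n builtin. This emulation is my own illustration, not part of the quiz.

// Hedged sketch: emulate LASZ with an atomic exchange that writes 0.
static inline int LASZ(int *addr) {
    // Atomically: old = *addr; *addr = 0; return old;
    return __atomic_exchange_n(addr, 0, __ATOMIC_ACQUIRE);
}

void lock_init(int *lock)    { *lock = 1; }                 // 1 == free
void lock_acquire(int *lock) { while (LASZ(lock) == 0); }   // spin until we grab a 1
void lock_release(int *lock) { __atomic_store_n(lock, 1, __ATOMIC_RELEASE); }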
The code below provides a “solution” to the producer/consumer problem. This solution uses a single lock (m) and two condition variables (fill and empty).
Consumer:

while (1) {
    mutex_lock(&m);
    if (numfull == 0) {
        wait(&fill, &m);
    }
    tmp = get();
    signal(&empty);
    mutex_unlock(&m);
}
Producer:

for (i = 0; i < 10; i++) {
    mutex_lock(&m);
    if (numfull == MAX) {
        wait(&empty, &m);
    }
    put();
    signal(&fill);
    mutex_unlock(&m);
}
We’ve made some changes to the condition variable library:
- wait() is changed so that it never wakes up unless signaled (in other words, there are no spurious wakeups).
- signal() is changed so that it immediately transfers control to a waiting thread (if there is one), thus implicitly passing the lock to the waiting thread as well.

Does this change affect the correctness of the producer/consumer solution above?
Explain below why the above code is either correct or incorrect.
This code is correct.
This is an example of Hoare semantics. Since we guarantee that the signaled thread runs immediately (and holds the lock), there is no chance the condition it was signaled on has changed, so we do not need to re-check the condition; the if is sufficient. Because these semantics are harder to implement, virtually every real system uses Mesa semantics instead (no guarantee that the state is unchanged by the time the waiter runs, so the condition must be re-checked in a while loop).
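Under ordinary Mesa semantics (what pthreads actually gives you), the same structure needs a while instead of an if, because the condition may no longer hold by the time the woken thread reacquires the lock. A minimal sketch of the Mesa-safe consumer, using the same pseudo-API and variable names as the code above:

while (1) {
    mutex_lock(&m);
    while (numfull == 0) {     // re-check: the buffer may be empty again
        wait(&fill, &m);       // by the time we reacquire the lock
    }
    tmp = get();
    signal(&empty);
    mutex_unlock(&m);
}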
Assume we want to build a semaphore out of locks and condition variables. We have some basic code pieces lying around:
1. while (s <= 0)
2. while (s >= 0)
3. while (s > 0)
4. while (s < 0)
5. while (s == 0)
6. s--;
7. s++;
8. cond_wait(&cond, &lock);
9. cond_signal(&cond);
10. mutex_lock(&lock);
11. mutex_unlock(&lock);
Your job is to implement some different semaphore variants with these code chunks. For example, a rather useless code sequence would be “6, 7”, which would decrement s by 1 (“s--”) and then increment it by 1 (“s++”).
Assume you are implementing a binary semaphore. Thus, you have to implement two routines: sem_wait() and sem_post(). For the parts below, you can just list the code numbers; you don’t have to write out the actual code.
(a) What is the sequence of operations you need to implement sem_wait()?
A binary semaphore is just a lock, so operation number 10.
Assume we want to build a semaphore out of locks and condition variables. We have some basic code pieces lying around:
1. while (s <= 0)
2. while (s >= 0)
3. while (s > 0)
4. while (s < 0)
5. while (s == 0)
6. s--;
7. s++;
8. cond_wait(&cond, &lock);
9. cond_signal(&cond);
10. mutex_lock(&lock);
11. mutex_unlock(&lock);
Your job is to implement some different semaphore variants with these code chunks. For example, a rather useless code sequence would be “6, 7”, which would decrement s by 1 (“s--”) and then increment it by 1 (“s++”).
Assume you are implementing a binary semaphore. Thus, you have to implement two routines: sem_wait() and sem_post(). For the parts below, you can just list the code numbers; you don’t have to write out the actual code.
(b) What is the sequence of operations you need to implement sem_post()?
11
Assume we want to build a semaphore out of locks and condition variables. We have some basic code pieces lying around:
1. while (s <= 0)
2. while (s >= 0)
3. while (s > 0)
4. while (s < 0)
5. while (s == 0)
6. s--;
7. s++;
8. cond_wait(&cond, &lock);
9. cond_signal(&cond);
10. mutex_lock(&lock);
11. mutex_unlock(&lock);
Your job is to implement some different semaphore variants with these code chunks. For example, a rather useless code sequence would be “6, 7”, which would decrement s by 1 (“s--”) and then increment it by 1 (“s++”).
Assume you are implementing a binary semaphore. Thus, you have to implement two routines: sem_wait() and sem_post(). For the parts below, you can just list the code numbers; you don’t have to write out the actual code.
Now assume you need to build a generalized semaphore. The semaphore is initialized with the
following code:
sem_init(int value) {
s = value;
}
where “s” is the variable that holds the value of the semaphore.
(c) What are the operations you need to implement sem_wait()?
10, 6, 4, 8, 11 or 10, 1, 8, 6, 11
Assume we want to build a semaphore out of locks and condition variables. We have some basic code pieces lying around:
1. while (s <= 0)
2. while (s >= 0)
3. while (s > 0)
4. while (s < 0)
5. while (s == 0)
6. s--;
7. s++;
8. cond_wait(&cond, &lock);
9. cond_signal(&cond);
10. mutex_lock(&lock);
11. mutex_unlock(&lock);
Your job is to implement some different semaphore variants with these code chunks. For example, a rather useless code sequence would be “6, 7”, which would decrement s by 1 (“s--”) and then increment it by 1 (“s++”).
Assume you are implementing a binary semaphore. Thus, you have to implement two routines: sem_wait() and sem_post(). For the parts below, you can just list the code numbers; you don’t have to write out the actual code.
Now assume you need to build a generalized semaphore. The semaphore is initialized with the
following code:
sem_init(int value) {
s = value;
}
where “s” is the variable that holds the value of the semaphore.
(d) What are the operations you need to implement sem_post()?
10, 7, 9, 11
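Spelled out, the pieces from (c) (second variant: 10, 1, 8, 6, 11) and (d) assemble into the following sketch. This is my own rendering with pthreads; the routine names come from the question (in real code you would pick names that do not shadow the POSIX functions), and the numbers in the comments refer to the list above.

#include <pthread.h>

static int s;                                     /* semaphore value, set by sem_init() */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

void sem_init(int value) {
    s = value;
}

void sem_wait(void) {
    pthread_mutex_lock(&lock);              /* 10 */
    while (s <= 0)                          /* 1  */
        pthread_cond_wait(&cond, &lock);    /* 8  */
    s--;                                    /* 6  */
    pthread_mutex_unlock(&lock);            /* 11 */
}

void sem_post(void) {
    pthread_mutex_lock(&lock);              /* 10 */
    s++;                                    /* 7  */
    pthread_cond_signal(&cond);             /* 9  */
    pthread_mutex_unlock(&lock);            /* 11 */
}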
Given the following code:

int counter = 0;

void *mythread(void *arg) {
    int i;
    for (i = 0; i < 10000; i++) {
        counter = counter + 1;
    }
    return NULL;
}

Assume two threads A and B are running the above thread at the same time. Circle all possible values of counter below after both threads have completed.
A. 5000
B. 10000
C. 15000
D. 20000
E. 25000
B. 10000: a thread got clobbered on every single loop
C. 15000: some loops got clobbered
D. 20000: one thread ran to completion before the other one started
Given the following code:

int counter = 0;

void *mythread(void *arg) {
    int i;
    for (i = 0; i < 10000; i++) {
        counter = counter + 1;
    }
    return NULL;
}

Assume two threads A and B are running the above thread at the same time.
The assembly code to update the counter for both threads is shown below. counter is held in memory at address 0x8049a1c.
A1: mov 0x8049a1c, %eax
A2: add $0x1, %eax
A3: mov %eax, 0x8049a1c
B1: mov 0x8049a1c, %eax
B2: add $0x1, %eax
B3: mov %eax, 0x8049a1c
Show an interleaving of threads A and B that would give a correct update of the counter by both threads. List the steps by thread and instruction (e.g., A1, …).
A1, A2, A3, B1, B2, B3 or B1, B2, B3, A1, A2, A3
Given the following code:

int counter = 0;

void *mythread(void *arg) {
    int i;
    for (i = 0; i < 10000; i++) {
        counter = counter + 1;
    }
    return NULL;
}

Assume two threads A and B are running the above thread at the same time.
The assembly code to update the counter for both threads is shown below. counter is held in memory at address 0x8049a1c.
A1: mov 0x8049a1c, %eax
A2: add $0x1, %eax
A3: mov %eax, 0x8049a1c
B1: mov 0x8049a1c, %eax
B2: add $0x1, %eax
B3: mov %eax, 0x8049a1c
Show an interleaving of threads A and B that would give an incorrect update of the counter by both threads. List the steps by thread and instruction (e.g., A1, …).
Any interleaving where A1-A3 or B1-B3 is not done atomically. Here is one possible interleaving: A1, B1, A2, A3, B2, B3. Here B3 overwrites the value A stored at A3, so the counter ends up incremented by only 1 instead of 2.
Given the following code:

int counter = 0;

void *mythread(void *arg) {
    int i;
    for (i = 0; i < 10000; i++) {
        counter = counter + 1;
    }
    return NULL;
}

Assume two threads A and B are running the above thread at the same time.
The assembly code to update the counter for both threads is shown below. counter is held in memory at address 0x8049a1c.
A1: mov 0x8049a1c, %eax
A2: add $0x1, %eax
A3: mov %eax, 0x8049a1c
B1: mov 0x8049a1c, %eax
B2: add $0x1, %eax
B3: mov %eax, 0x8049a1c
We decide to protect the above code with a semaphore. What does the semaphore need to be initialized to?
We initialize it to 1 as we are using it as a lock.
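For concreteness, here is a minimal sketch (my own, using the POSIX semaphore API) of the counter code protected this way; the last argument to sem_init is the initial value, 1:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

int counter = 0;
sem_t s;                                /* used as a lock, so initialized to 1 */

void *mythread(void *arg) {
    for (int i = 0; i < 10000; i++) {
        sem_wait(&s);                   /* enter the critical section */
        counter = counter + 1;
        sem_post(&s);                   /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&s, 0, 1);                 /* value 1: at most one thread inside */
    pthread_create(&a, NULL, mythread, NULL);
    pthread_create(&b, NULL, mythread, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d\n", counter);  /* now always 20000 */
    sem_destroy(&s);
    return 0;
}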