Quiz 2 Flashcards

1
Q

Choose whether spinlocks (implemented as described in class) are a good idea or a bad idea in the following situation, and explain why:

Situation:
Correct locking for a critical section on a single processor system

Good or Bad Idea? Why?

A

Good

Yes, spinlocks provide mutual exclusion correctly.

2
Q

Choose whether spinlocks (implemented as described in class) are a good idea or a bad idea in the following situation, and explain why:

Situation:
Correct locking for a critical section on a multiprocessor system

Good or Bad Idea? Why?

A

Good

Yes, spinlocks also provide correct mutual exclusion on multiprocessor systems.

3
Q

Choose whether spinlocks (implemented as described in class) are a good idea or a bad idea in the following situation, and explain why:

Situation:
Fairness when multiple threads are attempting to get the lock

Good or Bad Idea? Why?

A

Bad

Spinlocks are not fair. It is possible that threads could starve.

4
Q

Choose whether spinlocks (implemented as described in class) are a good idea or a bad idea in the following situation, and explain why:

Situation:

  1. Low contention for the lock
  2. Locks are held for a short time

Good or Bad Idea? Why?

A

Good

This is an ideal situation for spinlocks. There is no syscall overhead for spinlocks, so if there is a good chance you won’t be spinning for very long, this will be faster.

5
Q

Choose whether spinlocks (implemented as described in class) are a good idea or a bad idea in the following situation, and explain why:

Situation:

  1. High contention for the lock
  2. Locks are held for a long time

Good or Bad Idea? Why?

A

Bad

This is a bad situation for spinlocks; the CPU will spend most of its time checking the status of the lock instead of doing useful work.

6
Q

Consider the following snapshot of a system:

Process | Allocation | Max | Available
| A B C | A B C | A B C
P1 | 2 3 0 | 2 9 1 |
P2 | 2 0 0 | 3 4 3 |
P3 | 0 0 1 | 3 7 5 |
P4 | 0 2 0 | 2 3 3 |
P5 | 1 2 1 | 2 2 2 |

Part A: Show that the system is in a safe state by demonstrating an order in which the processes may complete.

A

Here is one safe ordering: P5, P2, P4, P1, P3

7
Q

Consider the following snapshot of a system:

Process | Allocation | Max | Available
| A B C | A B C | A B C
P1 | 2 3 0 | 2 9 1 |
P2 | 2 0 0 | 3 4 3 |
P3 | 0 0 1 | 3 7 5 |
P4 | 0 2 0 | 2 3 3 |
P5 | 1 2 1 | 2 2 2 |

Part B: If a request from process P4 arrives for (0, 1, 3), can the request be granted immediately? If it can, show the system is in a safe state by demonstrating an order in which the processes may complete.

A

This request should be delayed as the system currently does not have enough of resource C.

8
Q

Consider the following snapshot of a system:

Process | Allocation | Max | Available
| A B C | A B C | A B C
P1 | 2 3 0 | 2 9 1 |
P2 | 2 0 0 | 3 4 3 |
P3 | 0 0 1 | 3 7 5 |
P4 | 0 2 0 | 2 3 3 |
P5 | 1 2 1 | 2 2 2 |

Part C: If a request from process P3 arrives for (0, 0, 2), can the request be granted immediately? If it can, show the system is in a safe state by demonstrating an order in which the processes may complete. For this problem, assume that the system is in the same state as it was before the request in Part B came in.

A

This request should also be delayed. If P3 gets the request, that would leave the system with (2, 3, 0) available. No other process would be able to finish, so we have deadlock. Sad!

9
Q

Thread 1: will try to grab locks 1 and 2 in arbitrary order.
Thread 2: will try to grab locks 1 and 2 in a fixed order, 1 then 2.

Could this deadlock?

A

Yes: T1 holds L2 and needs L1; T2 holds L1 and needs L2

10
Q

Thread 1: will try to grab locks 1 and 2 in a fixed order, 1 then 2.
Thread 2: will try to grab locks 1 and 2 in a fixed order, 2 then 1.

Could this deadlock?

A

Yes: T1 holds L1 and needs L2; T2 holds L2 and needs L1

11
Q

Thread 1: will try to grab locks 1 and 2 in a fixed order, 1 then 2.
Thread 2: will try to grab locks 1, 2, and 3 in a fixed order, 1, then 2, then 3.

Could this deadlock?

A

No: Resources are ordered and requested in order

12
Q

Thread 1: will try to grab locks 1 and 2 in arbitrary order.
Thread 2: will try to grab locks 2 and 3 in arbitrary order.
Thread 3: will try to grab locks 1 and 3 in arbitrary order.

Could this deadlock?

A

Yes: T1 holds L1 and needs L2; T2 holds L2 and needs L3; T3 holds L3 and needs L1

13
Q

Thread 1: will try to grab locks 1 and 2 in arbitrary order.
Thread 2: will try to grab locks 2 and 3 in arbitrary order.
Thread 3: will try to grab lock 1.

Could this deadlock?

A

No: T3 will eventually give up L1; T2 will always be able to get L3 so will eventually give up L2

14
Q

Thread 1: will try to grab locks 1 and 2 in a fixed order, 1 then 2.
Thread 2: will try to grab locks 2 and 3 in a fixed order, 2 then 3.
Thread 3: will try to grab locks 1 and 3 in a fixed order, 1 then 3.

Could this deadlock?

A

No: Resources are ordered and requested in order

15
Q

Thread 1: will try to grab locks 1 and 2 in a fixed order, 1 then 2.
Thread 2: will try to grab locks 2 and 3 in a fixed order, 2 then 3.
Thread 3: will try to grab locks 1 and 2 in arbitrary order.

Could this deadlock?

A

Yes: T1 holds L1 and needs L2; T3 holds L2 and needs L1

16
Q

Assume we have a new hardware instruction called the Load-and-Store-Zero, which does the following atomically (here is C pseudo-code):

int LoadAndStoreZero(int *addr) {
    int old = *addr;
    *addr = 0;
    return old;
}

Let’s use this instruction to build a spinlock. Complete the following three functions by filling in the blank lines. You can shorten any calls to the above instruction as LASZ().

void lock_init(int *lock) {
    // What do we initialize our lock to?
    *lock = _________;
}
void lock_acquire(int *lock) {
    while (____________);
}
void lock_release(int *lock) {
    *lock = __________;
}
A
int LoadAndStoreZero(int *addr) {
    int old = *addr;
    *addr = 0;
    return old;
}
---
void lock_init(int *lock) {
    // Initialize to 1: the lock starts out free
    *lock = 1;
}
void lock_acquire(int *lock) {
    while (LASZ(lock) == 0);  // old value 0 means the lock is held; keep spinning
}
void lock_release(int *lock) {
    *lock = 1;
}
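
As a sanity check, the completed lock can be exercised with C11 atomics standing in for the hypothetical instruction (`atomic_exchange` performs the load-then-store-zero pair indivisibly). This is a sketch of mine, not the class implementation:

```c
#include <stdatomic.h>

/* Software stand-in for the hypothetical Load-and-Store-Zero
   instruction: atomically fetch the old value and store 0. */
static int LASZ(atomic_int *addr) {
    return atomic_exchange(addr, 0);
}

void lock_init(atomic_int *lock)    { atomic_store(lock, 1); }  /* 1 = free */
void lock_acquire(atomic_int *lock) { while (LASZ(lock) == 0) ; /* spin */ }
void lock_release(atomic_int *lock) { atomic_store(lock, 1); }
```

Acquiring leaves the word at 0 (held) no matter who reads it next, which is exactly why the returned old value distinguishes the winner from the spinners.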
17
Q

The code below provides a “solution” to the producer/consumer problem. This solution uses a single lock (m) and two condition variables (fill and empty).

Consumer:

while (1) {
    mutex_lock(&m);
    if (numfull == 0) {
        wait(&fill, &m);
    }
    tmp = get();
    signal(&empty);
    mutex_unlock(&m);
}

Producer:

for (i = 0; i < 10; i++) {
    mutex_lock(&m);
    if (numfull == MAX) {
        wait(&empty, &m);
    }
    put();
    signal(&fill);
    mutex_unlock(&m);
}

We’ve made some changes to the condition variable library:

  1. wait() is changed so that it never wakes up unless signaled (in other words, there are no spurious wakeups).
  2. signal() is changed so that it immediately transfers control to a waiting thread (if there is one), thus implicitly passing the lock to the waiting thread as well.

Does this change affect the correctness of the producer/consumer solution above?

Explain below why the above code is either correct or incorrect.

A

This code is correct.

This is an example of Hoare semantics. Since we guarantee that the signaled thread runs immediately (and holds the lock), there is no chance the condition it was signaled on has changed, so we do not need to check the condition again. Because these semantics are harder to implement, virtually every system uses Mesa semantics (no guarantee that the state will be unchanged).

18
Q

Assume we want to build a semaphore out of locks and condition variables. We have some basic code pieces lying around:

  1. while (s <= 0)
  2. while (s >= 0)
  3. while (s > 0)
  4. while (s < 0)
  5. while (s == 0)
  6. s--;
  7. s++;
  8. cond_wait(&cond, &lock);
  9. cond_signal(&cond);
  10. mutex_lock(&lock);
  11. mutex_unlock(&lock);

Your job is to implement some different semaphore variants with these code chunks. For example, a rather useless code sequence would be “6, 7”, which would decrement s by 1 (“s--”) and then increment it by 1 (“s++”).

Assume you are implementing a binary semaphore. Thus, you have to implement two routines: sem_wait() and sem_post(). For the parts below, you can just list the code numbers, you don’t have to write out the actual code.

(a) What is the sequence of operations you need to implement sem_wait()?

A

A binary semaphore is just a lock, so operation number 10.

19
Q

Assume we want to build a semaphore out of locks and condition variables. We have some basic code pieces lying around:

  1. while (s <= 0)
  2. while (s >= 0)
  3. while (s > 0)
  4. while (s < 0)
  5. while (s == 0)
  6. s--;
  7. s++;
  8. cond_wait(&cond, &lock);
  9. cond_signal(&cond);
  10. mutex_lock(&lock);
  11. mutex_unlock(&lock);

Your job is to implement some different semaphore variants with these code chunks. For example, a rather useless code sequence would be “6, 7”, which would decrement s by 1 (“s--”) and then increment it by 1 (“s++”).

Assume you are implementing a binary semaphore. Thus, you have to implement two routines: sem_wait() and sem_post(). For the parts below, you can just list the code numbers, you don’t have to write out the actual code.

(b) What is the sequence of operations you need to implement sem_post()?

A

11

20
Q

Assume we want to build a semaphore out of locks and condition variables. We have some basic code pieces lying around:

  1. while (s <= 0)
  2. while (s >= 0)
  3. while (s > 0)
  4. while (s < 0)
  5. while (s == 0)
  6. s--;
  7. s++;
  8. cond_wait(&cond, &lock);
  9. cond_signal(&cond);
  10. mutex_lock(&lock);
  11. mutex_unlock(&lock);

Your job is to implement some different semaphore variants with these code chunks. For example, a rather useless code sequence would be “6, 7”, which would decrement s by 1 (“s--”) and then increment it by 1 (“s++”).

Assume you are implementing a binary semaphore. Thus, you have to implement two routines: sem_wait() and sem_post(). For the parts below, you can just list the code numbers, you don’t have to write out the actual code.

Now assume you need to build a generalized semaphore. The semaphore is initialized with the following code:

sem_init(int value) {
    s = value;
}

where “s” is the variable that holds the value of the semaphore.

(c) What are the operations you need to implement sem_wait()?

A

10, 6, 4, 8, 11 or 10, 1, 8, 6, 11

21
Q

Assume we want to build a semaphore out of locks and condition variables. We have some basic code pieces lying around:

  1. while (s <= 0)
  2. while (s >= 0)
  3. while (s > 0)
  4. while (s < 0)
  5. while (s == 0)
  6. s--;
  7. s++;
  8. cond_wait(&cond, &lock);
  9. cond_signal(&cond);
  10. mutex_lock(&lock);
  11. mutex_unlock(&lock);

Your job is to implement some different semaphore variants with these code chunks. For example, a rather useless code sequence would be “6, 7”, which would decrement s by 1 (“s--”) and then increment it by 1 (“s++”).

Assume you are implementing a binary semaphore. Thus, you have to implement two routines: sem_wait() and sem_post(). For the parts below, you can just list the code numbers, you don’t have to write out the actual code.

Now assume you need to build a generalized semaphore. The semaphore is initialized with the following code:

sem_init(int value) {
    s = value;
}

where “s” is the variable that holds the value of the semaphore.

(d) What are the operations you need to implement sem_post()?

A

10, 7, 9, 11

22
Q
Given the following code:

int counter = 0;
void *mythread(void *arg) {
    int i;
    for (i = 0; i < 10000; i++) {
        counter = counter + 1;
    }
    return NULL;
}

Assume two threads A and B are running the above code at the same time. Circle all possible values of counter below after both threads have completed.

A. 5000
B. 10000
C. 15000
D. 20000
E. 25000
A

B. 10000 A thread got clobbered on every single loop
C. 15000 Some loops got clobbered
D. 20000 One thread ran to completion before the other one started

23
Q
Given the following code:

int counter = 0;
void *mythread(void *arg) {
    int i;
    for (i = 0; i < 10000; i++) {
        counter = counter + 1;
    }
    return NULL;
}
Assume two threads A and B are running the above thread at the same time. 

The assembly code to update the counter for both threads is shown below. counter is held in memory at address 0x8049a1c.

A1: mov 0x8049a1c, %eax
A2: add $0x1, %eax
A3: mov %eax, 0x8049a1c

B1: mov 0x8049a1c, %eax
B2: add $0x1, %eax
B3: mov %eax, 0x8049a1c

Show an interleaving of threads A and B that would give a correct update of the counter by both threads. List the steps by thread and instruction (e.g., A1, …).

A

A1, A2, A3, B1, B2, B3 or B1, B2, B3, A1, A2, A3

24
Q
Given the following code:

int counter = 0;
void *mythread(void *arg) {
    int i;
    for (i = 0; i < 10000; i++) {
        counter = counter + 1;
    }
    return NULL;
}
Assume two threads A and B are running the above thread at the same time. 

The assembly code to update the counter for both threads is shown below. counter is held in memory at address 0x8049a1c.

A1: mov 0x8049a1c, %eax
A2: add $0x1, %eax
A3: mov %eax, 0x8049a1c

B1: mov 0x8049a1c, %eax
B2: add $0x1, %eax
B3: mov %eax, 0x8049a1c

Show an interleaving of threads A and B that would give an incorrect update of the counter by both threads. List the steps by thread and instruction (e.g., A1, …).

A

Any interleaving where A1-A3 or B1-B3 is not done atomically. Here is one possible interleaving: A1, B1, A2, A3, B2, B3

25
Q
Given the following code:

int counter = 0;
void *mythread(void *arg) {
    int i;
    for (i = 0; i < 10000; i++) {
        counter = counter + 1;
    }
    return NULL;
}
Assume two threads A and B are running the above thread at the same time. 

The assembly code to update the counter for both threads is shown below. counter is held in memory at address 0x8049a1c.

A1: mov 0x8049a1c, %eax
A2: add $0x1, %eax
A3: mov %eax, 0x8049a1c

B1: mov 0x8049a1c, %eax
B2: add $0x1, %eax
B3: mov %eax, 0x8049a1c

We decide to protect the above code with a semaphore. What does the semaphore need to be initialized to?

A

We initialize it to 1 as we are using it as a lock.

26
Q
Given the following code:

int counter = 0;
void *mythread(void *arg) {
    int i;
    for (i = 0; i < 10000; i++) {
        counter = counter + 1;
    }
    return NULL;
}
Assume two threads A and B are running the above thread at the same time. 

The assembly code to update the counter for both threads is shown below. counter is held in memory at address 0x8049a1c.

A1: mov 0x8049a1c, %eax
A2: add $0x1, %eax
A3: mov %eax, 0x8049a1c

B1: mov 0x8049a1c, %eax
B2: add $0x1, %eax
B3: mov %eax, 0x8049a1c

Do you need to protect the variable i with the semaphore? Why or why not?

A

We don’t need to protect i because it is a local variable allocated in each thread’s own private stack.

27
Q

Banker’s Algorithm:

Process | Allocation | Max | Need | Total Resources
| A B C | A B C | A B C | A B C
| 10 5 7
P0 | 0 1 0 | 7 5 7 |
P1 | 2 0 0 | 3 2 5 |
P2 | 3 0 2 | 9 0 4 |
P3 | 2 1 1 | 2 2 5 |

The above table shows four processes, P0 through P3. Each process needs a number of resources (of types A, B, and C) to complete. If a process obtains all the resources it needs, it will be able to finish and return its resources back to the system. The Allocation column shows how many of each resource is currently allocated for each process. The Max column shows the total number of each resource the process needs to be able to finish. The “Total Resources” box shows the total number of each resource the Operating System has. In other words, this is the number of resources that the OS had available before it allocated any of its resources to any process.

Part A: Using the definitions of the Banker’s Algorithm we went over in class, fill in the Need column in the above diagram.

A

Process | Allocation | Max | Need | Total Resources
| A B C | A B C | A B C | A B C
| 10 5 7
P0 | 0 1 0 | 7 5 7 | 7 4 7
P1 | 2 0 0 | 3 2 5 | 1 2 5
P2 | 3 0 2 | 9 0 4 | 6 0 2
P3 | 2 1 1 | 2 2 5 | 0 1 4

28
Q

Banker’s Algorithm:

Process | Allocation | Max | Need | Total Resources
| A B C | A B C | A B C | A B C
| 10 5 7
P0 | 0 1 0 | 7 5 7 | 7 4 7
P1 | 2 0 0 | 3 2 5 | 1 2 5
P2 | 3 0 2 | 9 0 4 | 6 0 2
P3 | 2 1 1 | 2 2 5 | 0 1 4

The above table shows four processes, P0 through P3. Each process needs a number of resources (of types A, B, and C) to complete. If a process obtains all the resources it needs, it will be able to finish and return its resources back to the system. The Allocation column shows how many of each resource is currently allocated for each process. The Max column shows the total number of each resource the process needs to be able to finish. The “Total Resources” box shows the total number of each resource the Operating System has. In other words, this is the number of resources that the OS had available before it allocated any of its resources to any process.

Part B: Show that the system is in a safe state by demonstrating an order in which the processes may complete.

A

First, write down what resources are currently available (not allocated to any process): (3, 3, 4). The only process that has the resources it needs to complete is P3. When P3 completes, Available is (5, 4, 5). Of the remaining processes, only P1 has the resources to run to completion; Available is now (7, 4, 5). Then P2 may complete, making Available (10, 4, 7), which is enough resources for P0 to complete. So the ordering is:

P3 → P1 → P2 → P0
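
The hand trace above can also be checked mechanically. Below is a sketch of the Banker's safety check in C, with the Allocation and Need values copied from the table and Available computed as Total minus the allocated sums, i.e. (3, 3, 4). The function name `safe_order` and the greedy scan are my choices for illustration, not necessarily the exact code from class:

```c
#include <stdbool.h>

#define NPROC 4
#define NRES  3

/* Snapshot from the table above (P0..P3). */
static const int need_tab[NPROC][NRES]  = {{7,4,7},{1,2,5},{6,0,2},{0,1,4}};
static const int alloc_tab[NPROC][NRES] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1}};

/* Banker's safety check: repeatedly pick any unfinished process whose
   Need fits in Work, let it finish, and reclaim its Allocation.
   Fills order[] with a safe completion sequence; returns true iff one exists. */
bool safe_order(const int avail[NRES], int order[NPROC]) {
    int work[NRES];
    bool done[NPROC] = { false };
    for (int r = 0; r < NRES; r++)
        work[r] = avail[r];
    for (int k = 0; k < NPROC; k++) {
        int pick = -1;
        for (int p = 0; p < NPROC && pick < 0; p++) {
            if (done[p])
                continue;
            bool fits = true;
            for (int r = 0; r < NRES; r++)
                if (need_tab[p][r] > work[r])
                    fits = false;
            if (fits)
                pick = p;
        }
        if (pick < 0)
            return false;                   /* nobody can finish: unsafe */
        for (int r = 0; r < NRES; r++)
            work[r] += alloc_tab[pick][r];  /* reclaim its resources */
        done[pick] = true;
        order[k] = pick;
    }
    return true;
}
```

Running it with Available = (3, 3, 4) reproduces the sequence P3 → P1 → P2 → P0.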

29
Q

Banker’s Algorithm:

Process | Allocation | Max | Need | Total Resources
| A B C | A B C | A B C | A B C
| 10 5 7
P0 | 0 1 0 | 7 5 7 | 7 4 7
P1 | 2 0 0 | 3 2 5 | 1 2 5
P2 | 3 0 2 | 9 0 4 | 6 0 2
P3 | 2 1 1 | 2 2 5 | 0 1 4

The above table shows four processes, P0 through P3. Each process needs a number of resources (of types A, B, and C) to complete. If a process obtains all the resources it needs, it will be able to finish and return its resources back to the system. The Allocation column shows how many of each resource is currently allocated for each process. The Max column shows the total number of each resource the process needs to be able to finish. The “Total Resources” box shows the total number of each resource the Operating System has. In other words, this is the number of resources that the OS had available before it allocated any of its resources to any process.

Part C: If a request from process P2 arrives for (1, 0, 1), can the request be granted immediately? If it can, show the system is in a safe state by demonstrating an order in which the processes may complete.

A

The request should not be granted. If it were granted, Available would drop to (2, 3, 3) and P2's Need would become (5, 0, 1). At that point no process can finish: P0 needs (7, 4, 7), P1 needs (1, 2, 5), P2 needs (5, 0, 1), and P3 needs (0, 1, 4), each of which exceeds Available in at least one resource. So if the request is granted, the system is unsafe and deadlock may occur.

30
Q

Banker’s Algorithm:

Process | Allocation | Max | Need | Total Resources
| A B C | A B C | A B C | A B C
| 10 5 7
P0 | 0 1 0 | 7 5 7 | 7 4 7
P1 | 2 0 0 | 3 2 5 | 1 2 5
P2 | 3 0 2 | 9 0 4 | 6 0 2
P3 | 2 1 1 | 2 2 5 | 0 1 4

The above table shows four processes, P0 through P3. Each process needs a number of resources (of types A, B, and C) to complete. If a process obtains all the resources it needs, it will be able to finish and return its resources back to the system. The Allocation column shows how many of each resource is currently allocated for each process. The Max column shows the total number of each resource the process needs to be able to finish. The “Total Resources” box shows the total number of each resource the Operating System has. In other words, this is the number of resources that the OS had available before it allocated any of its resources to any process.

Part D: If a request from process P0 arrives for (0, 2, 0), can the request be granted immediately? If it can, show the system is in a safe state by demonstrating an order in which the processes may complete. For this problem, assume that the system is in the same state as it was before the request in Part B came in (i.e., the state of the system is as shown in the diagram above).

A

The request may be granted. A safe sequence of processes is: P3 → P1 → P2 → P0.

31
Q

Here is an attempted solution for the producer in the producer/consumer problem. What is wrong with the following code? You can assume there are no syntax errors in the code (i.e., it compiles correctly), all function calls return without any errors, and everything has been initialized correctly.

cond_t empty, fill;
mutex_t mutex;

void *producer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        pthread_mutex_lock(&mutex);
        if (count == MAX)
            pthread_cond_wait(&empty, &mutex);
        put(i);
        pthread_cond_signal(&fill);
        pthread_mutex_unlock(&mutex);
    }
}

What is wrong with the above code, and how would you fix it?

A

The if statement should be a while loop. The correct code is shown below. When a consumer signals that a buffer slot is available, there is no guarantee that the signaled producer runs right away (Mesa semantics). It is possible for a newly arriving producer to run first and fill that available slot, leading to a possible buffer overrun if the signaled producer does not recheck the value of count.

void *producer(void *arg) {
    int i;
    for (i = 0; i < loops; i++) {
        pthread_mutex_lock(&mutex);
        while (count == MAX)
            pthread_cond_wait(&empty, &mutex);
        put(i);
        pthread_cond_signal(&fill);
        pthread_mutex_unlock(&mutex);
    }
}
32
Q

Below is most of the implementation of a semaphore. Please help me finish it. You do not need to modify the sem_t struct or the sem_init() function, you only need to finish the implementation of sem_wait() and sem_post(). You may use the cond_wait() and cond_signal() functions (be sure to use the proper arguments when calling these functions), and can assume all function calls return normally (without any errors). You do not have to maintain the invariant that the value of the semaphore, when negative, reflects the number of waiting threads.

// Don't need to modify
typedef struct __sem_t {
    int value;
    pthread_cond_t cond;
    pthread_mutex_t lock;
} sem_t;

// Only one thread can call this once before use
// Don't need to modify
void sem_init(sem_t *s, int value) {
    s->value = value;
    cond_init(&s->cond);
    mutex_init(&s->lock);
}

// Finish this implementation
void sem_wait(sem_t *s) {
    mutex_lock(&s->lock);

    mutex_unlock(&s->lock);
}

// Finish this implementation
void sem_post(sem_t *s) {
    mutex_lock(&s->lock);

    mutex_unlock(&s->lock);
}
A
// Don't need to modify
typedef struct __sem_t {
    int value;
    pthread_cond_t cond;
    pthread_mutex_t lock;
} sem_t;

// Only one thread can call this once before use
// Don't need to modify
void sem_init(sem_t *s, int value) {
    s->value = value;
    cond_init(&s->cond);
    mutex_init(&s->lock);
}

// Finish this implementation
void sem_wait(sem_t *s) {
    mutex_lock(&s->lock);
    while (s->value <= 0) {
        cond_wait(&s->cond, &s->lock);
    }
    s->value--;
    mutex_unlock(&s->lock);
}

// Finish this implementation
void sem_post(sem_t *s) {
    mutex_lock(&s->lock);
    s->value++;
    cond_signal(&s->cond);
    mutex_unlock(&s->lock);
}

Notes on the implementation: The semaphore should block if the value is zero or less; otherwise it should return right away. In sem_wait(), the decrement of value must come after the while loop. To see why, imagine the decrement came before the cond_wait(), the semaphore is initialized to 0, and two threads immediately call sem_wait(): value would be -2. When another thread calls sem_post(), value becomes -1, so a woken thread would still see a negative value, stay in its while loop, and call cond_wait() again.

Also note that cond_signal() only takes the condition variable as an argument. It does not take a lock.
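
For readers who want to compile the answer, here is the same semaphore written directly against the pthreads API (pthread_mutex_*/pthread_cond_* in place of the course's mutex_*/cond_* wrappers). The `mysem_` names are mine, chosen to avoid clashing with the real sem_t in <semaphore.h>:

```c
#include <pthread.h>

/* The finished semaphore from the answer, using raw pthreads calls. */
typedef struct {
    int value;
    pthread_cond_t  cond;
    pthread_mutex_t lock;
} mysem_t;

void mysem_init(mysem_t *s, int value) {
    s->value = value;
    pthread_cond_init(&s->cond, NULL);
    pthread_mutex_init(&s->lock, NULL);
}

void mysem_wait(mysem_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)                    /* recheck: Mesa semantics */
        pthread_cond_wait(&s->cond, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void mysem_post(mysem_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->cond);
    pthread_mutex_unlock(&s->lock);
}
```

With an initial value of 2, two waits succeed without blocking (value drops to 0) and a post brings it back to 1.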

33
Q

I was working on implementation of spin-locks, but I got interrupted by pre-registration advising and was unable to finish it. Please help me finish it by completing the missing line in lock() and the missing line in unlock().

// Assume this function executes atomically
int TestAndSet(int *old_ptr, int new) {
    int old = *old_ptr; // fetch old value at old_ptr
    *old_ptr = new;     // store 'new' into old_ptr
    return old;         // return the old value
}

typedef struct __lock_t {
    int flag;
} lock_t;

void init(lock_t *lock) {
    // 0: lock is available, 1: lock is held
    lock->flag = 0;
}

// Fill in the missing line below
void lock(lock_t *lock) {
    while (_______)
        ; // spin-wait (do nothing)
}

// Fill in the missing line below
void unlock(lock_t *lock) {
    ______;
}
A
// Assume this function executes atomically
int TestAndSet(int *old_ptr, int new) {
    int old = *old_ptr; // fetch old value at old_ptr
    *old_ptr = new;     // store 'new' into old_ptr
    return old;         // return the old value
}

typedef struct __lock_t {
    int flag;
} lock_t;

void init(lock_t *lock) {
    // 0: lock is available, 1: lock is held
    lock->flag = 0;
}

// Fill in the missing line below
void lock(lock_t *lock) {
    while (TestAndSet(&lock->flag, 1) == 1)
        ; // spin-wait (do nothing)
}

// Fill in the missing line below
void unlock(lock_t *lock) {
    lock->flag = 0;
}
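
A quick way to convince yourself the filled-in lock really provides mutual exclusion is to hammer a shared counter from two threads; with correct locking the final count is exact. This sketch emulates TestAndSet with C11 `atomic_exchange`, and the `run_demo` helper is mine, not part of the quiz:

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

/* Software stand-in for the atomic TestAndSet instruction. */
static int TestAndSet(atomic_int *old_ptr, int new_val) {
    return atomic_exchange(old_ptr, new_val);
}

typedef struct { atomic_int flag; } lock_t;

void init(lock_t *lock)   { atomic_store(&lock->flag, 0); }
void lock(lock_t *lock)   { while (TestAndSet(&lock->flag, 1) == 1) ; }
void unlock(lock_t *lock) { atomic_store(&lock->flag, 0); }

enum { LOOPS = 100000 };
static lock_t l;
static long counter = 0;

/* Each worker increments the shared counter LOOPS times under the lock. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < LOOPS; i++) {
        lock(&l);
        counter++;
        unlock(&l);
    }
    return NULL;
}

/* Runs two workers and returns the final count (2 * LOOPS if the lock works). */
long run_demo(void) {
    pthread_t a, b;
    init(&l);
    counter = 0;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Without the lock/unlock calls the same program would usually print a value below 200000, which is the race from the earlier counter cards.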
34
Q

In order for deadlock to occur, which of the following conditions need to hold? Circle all that apply.

A. Threads hold resources allocated to them while waiting for additional resources.
B. There is a circular wait for resources.
C. There is a total ordering on requesting resources.
D. Threads claim exclusive control of resources they require.
E. Resources cannot be forcibly removed from the threads that are holding them.

A

A. Threads hold resources allocated to them while waiting for additional resources.
B. There is a circular wait for resources.
D. Threads claim exclusive control of resources they require.
E. Resources cannot be forcibly removed from the threads that are holding them.

Note: putting a total ordering on resources, and having all threads attempt to acquire resources strictly in that order, will prevent deadlock from occurring.

35
Q

You are trying to detect deadlock in your system. You run your deadlock detector and you find a cycle in your resource allocation graph. Which of the following are true statements? Circle all that apply.

A. If there is only one of each resource type, deadlock has occurred.
B. If there is only one of each resource type, deadlock might occur (but you are not sure).
C. If there are multiple resources of each type, deadlock has occurred.
D. If there are multiple resources of each type, deadlock might occur (but you are not sure).
E. If there are multiple resources of each type, there is no deadlock, but starvation might occur.

A

A. If there is only one of each resource type, deadlock has occurred.
D. If there are multiple resources of each type, deadlock might occur (but you are not sure).

36
Q

Under what conditions is it a good idea to use spin locks? Circle all that apply.

A. On a single cpu system with a non-preemptive scheduler due to the simplicity of the spin lock.
B. When locks are held for a long period of time.
C. When locks are held for a short period of time.
D. On a system with really fast context switches.
E. To avoid priority inversion when you have different priority threads.

A

C. When locks are held for a short period of time.

Note: choice A would lead to deadlock because the scheduler would never interrupt the spinning.

37
Q

Here’s your page table question. At least it’s not a page table lookup! Which of the following are true statements about page tables? Circle all that apply.

A. Multi-level page table lookups are faster than a single-level page table lookup.
B. Multi-level page tables, in general, take up less space than a single-level page table.
C. In addition to the Page Frame Number, a valid bit is usually stored in the Page Table Entry.
D. TLB address space identifier (ASID) are used to avoid having to flush the TLB on a context switch.
E. A direct-mapped (E=1) TLB is preferred to a fully associative TLB.

A

B. Multi-level page tables, in general, take up less space than a single-level page table.
C. In addition to the Page Frame Number, a valid bit is usually stored in the Page Table Entry.
D. TLB address space identifier (ASID) are used to avoid having to flush the TLB on a context switch.

Note: A direct mapped TLB (one TLB entry per set) can lead to conflict misses. In a fully associative TLB, a TLB entry can go anywhere in the TLB.

38
Q

The solution to the Dining Philosophers problem avoids deadlock by eliminating which condition that is necessary for deadlock to hold?

A

Circular wait

39
Q

In the Concurrent Queues designed by Michael and Scott, what was the one “trick” they used to separate the head and tail operations (to maximize concurrency)?

A

They insert a dummy node to keep the head and tail operations separate.

40
Q

In the approximate counter discussed in class, discuss the tradeoffs as you increase the threshold (S) in terms of performance and accuracy relative to a precise counter.

A

As S increases, so does performance, but accuracy decreases.
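
A minimal sketch of the tradeoff, assuming the per-thread design discussed in class (the structure follows the common "sloppy counter" pattern; the names are mine): each thread counts locally and only takes the global lock once its local count reaches S, so a larger S means fewer global-lock acquisitions (better performance) but a global value that can lag the true count by up to (number of threads) × (S − 1) (worse accuracy).

```c
#include <pthread.h>

enum { NTHREADS = 4 };

typedef struct {
    long global;                      /* global count */
    pthread_mutex_t glock;            /* protects global */
    long local[NTHREADS];             /* per-thread counts */
    pthread_mutex_t llock[NTHREADS];  /* protects each local count */
    int threshold;                    /* S */
} counter_t;

void counter_init(counter_t *c, int threshold) {
    c->threshold = threshold;
    c->global = 0;
    pthread_mutex_init(&c->glock, NULL);
    for (int i = 0; i < NTHREADS; i++) {
        c->local[i] = 0;
        pthread_mutex_init(&c->llock[i], NULL);
    }
}

void counter_update(counter_t *c, int tid) {
    pthread_mutex_lock(&c->llock[tid]);
    c->local[tid]++;
    if (c->local[tid] >= c->threshold) {  /* transfer to global */
        pthread_mutex_lock(&c->glock);
        c->global += c->local[tid];
        pthread_mutex_unlock(&c->glock);
        c->local[tid] = 0;
    }
    pthread_mutex_unlock(&c->llock[tid]);
}

long counter_get(counter_t *c) {          /* approximate read */
    pthread_mutex_lock(&c->glock);
    long v = c->global;
    pthread_mutex_unlock(&c->glock);
    return v;
}
```

With S = 5, seven updates from one thread leave the global value at 5: the first five were transferred at the threshold, and two are still sitting in the local counter.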

41
Q

If multiple threads are able to enter a critical section at the same time, a race condition may occur. What is a race condition and why is it a problem?

A

A race condition occurs when the result depends on the timing of the code’s execution. If context switches occur at inopportune moments, we may get incorrect results. The output of the program is no longer deterministic.

42
Q

List one advantage and one disadvantage of using a thread over a process to perform some computation

A

Advantage of using a thread: It is easy for threads to share data (all threads share the same address space).
Disadvantage of using a thread: Modifications to shared data must be coordinated or a race condition may result.

43
Q

True or False?:

Semaphores can be used to provide mutual exclusion.

A

True

Binary semaphores initialized to 1 behave exactly as locks.

44
Q

True or False?:

Condition variables by themselves can be used to provide mutual exclusion.

A

False

You need to use them with a lock, which is passed in as the second argument to cond_wait().

45
Q

True or False?:

The function pthread_join(tid) waits for a thread to finish running.

A

True

46
Q

True or False?:
When updating a variable shared by multiple threads, the sequence of Read, Modify, Write, should happen atomically for the program to be correct.

A

True

47
Q

True or False?:

A two phase lock is an example of a hybrid approach.

A

True

The first phase is spinning for the lock for a short period. If it is not able to acquire the lock by spinning, it then goes to sleep and waits for the lock. It is a hybrid of spin-locks and pthread locks.
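One way to sketch the two phases with POSIX primitives (SPIN_TRIES is an assumed tuning knob, not a value from class): spin briefly with `pthread_mutex_trylock()`, which never sleeps, and only fall back to the blocking `pthread_mutex_lock()` if spinning fails.

```c
#include <pthread.h>

#define SPIN_TRIES 100  // assumed spin budget before sleeping

void two_phase_lock(pthread_mutex_t *m) {
    // Phase 1: spin, hoping the holder releases soon (no sleep on success).
    for (int i = 0; i < SPIN_TRIES; i++) {
        if (pthread_mutex_trylock(m) == 0)
            return;             // acquired while spinning
    }
    // Phase 2: give up spinning and sleep until the lock is free.
    pthread_mutex_lock(m);
}
```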

48
Q

For this problem we are going to solve a synchronization task in three different ways under three different constraints. We have three threads that we want to run in a particular order, (t1, followed by t2, followed by t3). Each thread calls a worker function (t1 calls t1_work(), t2 calls t2_work(), and t3 calls t3_work()). Each thread should not call its work function until all its predecessor threads have returned from their work functions. In other words, t2_work() should not be called until t1_work() has been called and completed, and t3_work() should not be called until t1_work() and t2_work() have been called and completed. Assume that only these three threads (t1, t2, and t3) are created and the threads can be scheduled to run in any possible order. Try to solve each problem as simply and efficiently as possible under the given constraints. You can assume all function calls return normally, without any errors.

Part A: Alone in the wilderness

For this part you are given a struct (shown below) with a single mutex and a single integer. Think of yourself as Bear Grylls being dumped in the wilderness with not a lot of resources. If Bear Grylls can start a fire with nothing more than a stick and his shoelaces, you can synchronize these threads with a mutex and an integer!

For this part you are allowed to call the following functions: mutex_unlock() and mutex_lock() (be sure to use the proper arguments when calling these functions).

// Do not modify
typedef struct __worker_t {
    pthread_mutex_t lock;
    int value;
} worker_t;

// Called only once before any threads
// Do not modify
void worker_init(worker_t *w) {
    w->value = 0;
    mutex_init(&w->lock);
}

// Thread 1 goes first
void t1(worker_t *w) {
    t1_work();
}

// Thread 2 goes second
void t2(worker_t *w) {
    t2_work();
}

// Thread 3 goes third
void t3(worker_t *w) {
    t3_work();
}
A
// Do not modify
typedef struct __worker_t {
    pthread_mutex_t lock;
    int value;
} worker_t;

// Called only once before any threads
// Do not modify
void worker_init(worker_t *w) {
    w->value = 0;
    mutex_init(&w->lock);
}

// Thread 1 goes first
void t1(worker_t *w) {
    mutex_lock(&w->lock);
    t1_work();
    w->value++;
    mutex_unlock(&w->lock);
}

// Thread 2 goes second
void t2(worker_t *w) {
    mutex_lock(&w->lock);
    while (w->value < 1) {
        // release the lock so t1 can run
        mutex_unlock(&w->lock);
        // get the lock back before testing
        mutex_lock(&w->lock);
    }
    t2_work();
    w->value++;
    mutex_unlock(&w->lock);
}

// Thread 3 goes third
void t3(worker_t *w) {
    mutex_lock(&w->lock);
    while (w->value < 2) {
        // release the lock so others can run
        mutex_unlock(&w->lock);
        // get the lock back before testing
        mutex_lock(&w->lock);
    }
    t3_work();
    mutex_unlock(&w->lock);
}

Notes: The value variable keeps track of which threads have completed running. Normally, you would not hold the lock when calling the work() functions. But in this problem, since there are only three threads that run only once (and in order), it is actually more efficient to hold onto the locks for the duration of the thread execution.

49
Q

For this problem we are going to solve a synchronization task in three different ways under three different constraints. We have three threads that we want to run in a particular order, (t1, followed by t2, followed by t3). Each thread calls a worker function (t1 calls t1_work(), t2 calls t2_work(), and t3 calls t3_work()). Each thread should not call its work function until all its predecessor threads have returned from their work functions. In other words, t2_work() should not be called until t1_work() has been called and completed, and t3_work() should not be called until t1_work() and t2_work() have been called and completed. Assume that only these three threads (t1, t2, and t3) are created and the threads can be scheduled to run in any possible order. Try to solve each problem as simply and efficiently as possible under the given constraints. You can assume all function calls return normally, without any errors.

Part B: Creature Comforts

For this part you are given a struct (shown below) with a single mutex, a single integer, and a single condition variable. How can we use this new condition variable to make the synchronization task more efficient?

For this part you are allowed to call the following functions: mutex_unlock(), mutex_lock(), cond_wait(), cond_signal(), and cond_broadcast() (be sure to use the proper arguments when calling these functions).

// Do not modify
typedef struct __worker_t {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    int value;
} worker_t;

// Called only once before any threads
// Do not modify
void worker_init(worker_t *w) {
    w->value = 0;
    mutex_init(&w->lock);
    cond_init(&w->cond);
}

// Thread 1 goes first
void t1(worker_t *w) {
    t1_work();
}

// Thread 2 goes second
void t2(worker_t *w) {
    t2_work();
}

// Thread 3 goes third
void t3(worker_t *w) {
    t3_work();
}
A
// Do not modify
typedef struct __worker_t {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    int value;
} worker_t;

// Called only once before any threads
// Do not modify
void worker_init(worker_t *w) {
    w->value = 0;
    mutex_init(&w->lock);
    cond_init(&w->cond);
}

// Thread 1 goes first
void t1(worker_t *w) {
    mutex_lock(&w->lock);
    t1_work();
    w->value++;
    cond_broadcast(&w->cond);
    mutex_unlock(&w->lock);
}

// Thread 2 goes second
void t2(worker_t *w) {
    mutex_lock(&w->lock);
    while (w->value < 1)
        cond_wait(&w->cond, &w->lock);
    t2_work();
    w->value++;
    cond_signal(&w->cond); // broadcast OK too
    mutex_unlock(&w->lock);
}

// Thread 3 goes third
void t3(worker_t *w) {
    mutex_lock(&w->lock);
    while (w->value < 2)
        cond_wait(&w->cond, &w->lock);
    t3_work();
    mutex_unlock(&w->lock);
}

Notes: Since we have only one condition variable that both t2 and t3 are waiting on, we must use cond_broadcast() in t1. This will wake up both t2 and t3 and give both of them a chance to run. If we use a cond_signal(), the signal might go to t3, and t2 would never wake up.

50
Q

For this problem we are going to solve a synchronization task in three different ways under three different constraints. We have three threads that we want to run in a particular order, (t1, followed by t2, followed by t3). Each thread calls a worker function (t1 calls t1_work(), t2 calls t2_work(), and t3 calls t3_work()). Each thread should not call its work function until all its predecessor threads have returned from their work functions. In other words, t2_work() should not be called until t1_work() has been called and completed, and t3_work() should not be called until t1_work() and t2_work() have been called and completed. Assume that only these three threads (t1, t2, and t3) are created and the threads can be scheduled to run in any possible order. Try to solve each problem as simply and efficiently as possible under the given constraints. You can assume all function calls return normally, without any errors.

Part C: If all you have is a hammer, everything looks like a nail.

For this part you are given a struct (shown below) with two semaphores, but we are taking away your integer! Can we solve this synchronization problem with just semaphores, the swiss army knife of concurrency tools? Of course we can!

For this part you are allowed to call the following functions: sem_wait() and sem_post() (be sure to use the proper arguments when calling these functions). Also, you need to modify the worker_init() function to tell me the initial value of your semaphores.

// Do not modify
typedef struct __worker_t {
    sem_t s1, s2;
} worker_t;

// Called only once before any threads
// Initialize your semaphores below
void worker_init(worker_t *w) {
    sem_init(&w->s1, 0, ____);
    sem_init(&w->s2, 0, ____);
}

// Thread 1 goes first
void t1(worker_t *w) {
    t1_work();
}

// Thread 2 goes second
void t2(worker_t *w) {
    t2_work();
}

// Thread 3 goes third
void t3(worker_t *w) {
    t3_work();
}
A
// Do not modify
typedef struct __worker_t {
    sem_t s1, s2;
} worker_t;

// Called only once before any threads
// Initialize your semaphores below
void worker_init(worker_t *w) {
    sem_init(&w->s1, 0, 0);
    sem_init(&w->s2, 0, 0);
}

// Thread 1 goes first
void t1(worker_t *w) {
    // This is the first thread to run so
    // we don't need to wait for anything
    t1_work();
    // Let t2 know it can run
    sem_post(&w->s1);
}

// Thread 2 goes second
void t2(worker_t *w) {
    // Wait for t1 to finish
    sem_wait(&w->s1);
    t2_work();
    // Let t3 know it can run
    sem_post(&w->s2);
}

// Thread 3 goes third
void t3(worker_t *w) {
    // Wait for t2 to finish
    sem_wait(&w->s2);
    t3_work();
}

Note: We lost our state variable value, but the semaphores have a value associated with them which can take the place of our state variable. You can see this is the simplest implementation of the three ways we have seen to synchronize these three threads. Semaphores are the right tool for this type of synchronization task.

51
Q

Banker’s Algorithm:

Process | Allocation | Max | Need | Total Resources
| A B C | A B C | A B C | A B C
| 6 6 8
P0 | 0 1 0 | 3 5 2 |
P1 | 2 1 2 | 5 2 4 |
P2 | 2 1 2 | 2 3 4 |
P3 | 1 2 2 | 1 3 2 |

The above table shows four processes, P0 through P3. Each process needs a number of resources (of types A, B, and C) to complete. If a process obtains all the resources it needs, it will be able to finish and return its resources back to the system. The Allocation column shows how many of each resource is currently allocated for each process. The Max column shows the total number of each resource the process needs to be able to finish. The “Total Resources” box shows the total number of each resource the Operating System has. In other words, this is the number of resources that the OS had available before it allocated any of its resources to any processes.

Part A: Using the definitions of the Banker’s Algorithm we went over in class, fill in the Need column in the above diagram.

A

Banker’s Algorithm:

Process | Allocation | Max | Need | Total Resources
| A B C | A B C | A B C | A B C
| 6 6 8
P0 | 0 1 0 | 3 5 2 | 3 4 2 | Avail: 1 1 2
P1 | 2 1 2 | 5 2 4 | 3 1 2
P2 | 2 1 2 | 2 3 4 | 0 2 2
P3 | 1 2 2 | 1 3 2 | 0 1 0

52
Q

Banker’s Algorithm:

Process | Allocation | Max | Need | Total Resources
| A B C | A B C | A B C | A B C
| 6 6 8
P0 | 0 1 0 | 3 5 2 | 3 4 2 | Avail: 1 1 2
P1 | 2 1 2 | 5 2 4 | 3 1 2
P2 | 2 1 2 | 2 3 4 | 0 2 2
P3 | 1 2 2 | 1 3 2 | 0 1 0

The above table shows four processes, P0 through P3. Each process needs a number of resources (of types A, B, and C) to complete. If a process obtains all the resources it needs, it will be able to finish and return its resources back to the system. The Allocation column shows how many of each resource is currently allocated for each process. The Max column shows the total number of each resource the process needs to be able to finish. The “Total Resources” box shows the total number of each resource the Operating System has. In other words, this is the number of resources that the OS had available before it allocated any of its resources to any processes.

Part B: Show that the system is in a safe state by demonstrating an order in which the processes may complete.

A

P3 → P2 → P0 → P1
or
P3 → P2 → P1 → P0
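The safe-state check can be sketched in C with the numbers from the table above (the function and array names are ours, not from class): repeatedly find a process whose Need fits in the current Work vector, let it finish, and reclaim its Allocation.

```c
#define NPROC 4
#define NRES  3

// Returns 1 and records a completion order in seq[] if the state is safe.
int is_safe(const int avail[NRES], const int need[NPROC][NRES],
            const int alloc[NPROC][NRES], int seq[NPROC]) {
    int work[NRES], done[NPROC] = {0}, count = 0;
    for (int r = 0; r < NRES; r++)
        work[r] = avail[r];
    while (count < NPROC) {
        int progressed = 0;
        for (int p = 0; p < NPROC; p++) {
            if (done[p]) continue;
            int fits = 1;
            for (int r = 0; r < NRES; r++)
                if (need[p][r] > work[r]) { fits = 0; break; }
            if (fits) {            // p can finish; reclaim its allocation
                for (int r = 0; r < NRES; r++)
                    work[r] += alloc[p][r];
                done[p] = 1;
                seq[count++] = p;
                progressed = 1;
            }
        }
        if (!progressed)
            return 0;              // no process can proceed: unsafe
    }
    return 1;
}
```

Run on the table's values (Avail = 1 1 2), the first process that fits is P3, then P2, then P0, then P1, reproducing the first safe sequence above.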

53
Q

Banker’s Algorithm:

Process | Allocation | Max | Need | Total Resources
| A B C | A B C | A B C | A B C
| 6 6 8
P0 | 0 1 0 | 3 5 2 | 3 4 2 | Avail: 1 1 2
P1 | 2 1 2 | 5 2 4 | 3 1 2
P2 | 2 1 2 | 2 3 4 | 0 2 2
P3 | 1 2 2 | 1 3 2 | 0 1 0

The above table shows four processes, P0 through P3. Each process needs a number of resources (of types A, B, and C) to complete. If a process obtains all the resources it needs, it will be able to finish and return its resources back to the system. The Allocation column shows how many of each resource is currently allocated for each process. The Max column shows the total number of each resource the process needs to be able to finish. The “Total Resources” box shows the total number of each resource the Operating System has. In other words, this is the number of resources that the OS had available before it allocated any of its resources to any processes.

Part C: If a request from process P2 arrives for (0, 1, 1), can the request be granted immediately? If it can, show the system is in a safe state by demonstrating an order in which the processes may complete.

A

If we grant the request to P2, we have the following:
Avail: 1 0 1
P2 Need: 0 1 1
The other processes have the same needs as shown above. No process can get the resources it needs to complete, so the system is not in a safe state, and the request should not be granted.

54
Q

Banker’s Algorithm:

Process | Allocation | Max | Need | Total Resources
| A B C | A B C | A B C | A B C
| 6 6 8
P0 | 0 1 0 | 3 5 2 | 3 4 2 | Avail: 1 1 2
P1 | 2 1 2 | 5 2 4 | 3 1 2
P2 | 2 1 2 | 2 3 4 | 0 2 2
P3 | 1 2 2 | 1 3 2 | 0 1 0

The above table shows four processes, P0 through P3. Each process needs a number of resources (of types A, B, and C) to complete. If a process obtains all the resources it needs, it will be able to finish and return its resources back to the system. The Allocation column shows how many of each resource is currently allocated for each process. The Max column shows the total number of each resource the process needs to be able to finish. The “Total Resources” box shows the total number of each resource the Operating System has. In other words, this is the number of resources that the OS had available before it allocated any of its resources to any processes.

Part D: If a request from process P1 arrives for (1, 0, 2) can the request be granted immediately? If it can, show the system is in a safe state by demonstrating an order in which the processes may complete. For this problem, assume that system is in the same state as it was before the request in Part B came in (i.e., the state of the system is as shown in the diagram above).

A

If we grant the request to P1, we have the following:
Avail: 0 1 0
P1 Need: 2 1 0
The other processes have the same needs as shown above.
P3 → P2 → P0 → P1 can complete in this order, so the system is in a safe state and the request can be granted.

55
Q

Check Quiz 3 question

A

2

56
Q

Wait for me! We have seen examples of using synchronization primitives to allow one thread to wait for another. For this problem, we will use a lock and a condition variable to implement two functions, wait() and work(). When called, the wait() function will not return until other threads have called work() at least three times. In other words, the wait() function should wait until the work() function has been called three or more times first. There is also a global variable, status, which you may also use if you find it helpful. You may assume that there is only one thread that calls wait(), and you do not care which threads call work(), only that it has been called at least 3 times before returning.

#include <stdio.h>
#include <pthread.h>

// Variables you can use in your implementation. Do not declare any other variables.
static volatile int status;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

// Lock and condition variable functions you may use
// int pthread_mutex_lock(pthread_mutex_t *mutex);
// int pthread_mutex_unlock(pthread_mutex_t *mutex);
// int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
// int pthread_cond_signal(pthread_cond_t *cond);

// Called once before wait() and work() are called
void init() {
    // Note, your lock and condition variable have already been initialized (see above).
    // The only thing you need to do here is to set the initial state of status.
    status = ____________________;
}

// Return only if work() has been called at least three times. Should sleep if not
// able to return (i.e., no busy waiting).
void wait() {
    _____________
}

// Called to signify that a thread has done work
void work() {
    _____________
}
A
#include <stdio.h>
#include <pthread.h>

// Variables you can use in your implementation. Do not declare any other variables.
static volatile int status;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

// Lock and condition variable functions you may use
// int pthread_mutex_lock(pthread_mutex_t *mutex);
// int pthread_mutex_unlock(pthread_mutex_t *mutex);
// int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
// int pthread_cond_signal(pthread_cond_t *cond);

// Called once before wait() and work() are called
void init() {
    // Note, your lock and condition variable have already been initialized (see above).
    // The only thing you need to do here is to set the initial state of status.
    status = 0;
}

// Return only if work() has been called at least three times. Should sleep if not
// able to return (i.e., no busy waiting).
void wait() {
    pthread_mutex_lock(&lock);    // critical section
    while (status < 3) {          // condition we should wait on
        pthread_cond_wait(&cond, &lock);
    }
    pthread_mutex_unlock(&lock);
}

// Called to signify that a thread has done work
void work() {
    pthread_mutex_lock(&lock);    // critical section
    status += 1;
    if (status == 3) {            // only signal when wait() should return
        pthread_cond_signal(&cond);
    }
    pthread_mutex_unlock(&lock);
}
57
Q

Below is a partial implementation of the producer consumer (bounded buffer) problem. Please help me finish it.

int buffer[MAX];
int fill_ptr = 0, use_ptr = 0, count = 0;
cond_t empty, fill;
mutex_t mutex;

void put(int value) {
    buffer[fill_ptr] = value;
    fill_ptr = _____________________________; // fill in the blank
    count++;
}

int get() {
    int tmp = buffer[use_ptr];
    use_ptr = ______________________________; // fill in the blank
    count--;
    return tmp;
}

void *producer(void *arg) {
    for (int i = 0; i < loops; i++) {
        pthread_mutex_lock(&mutex);
        while (count == MAX) {
            // add code here
        }
        put(i);
        // add code here
        pthread_mutex_unlock(&mutex);
    }
}

void *consumer(void *arg) {
    for (int i = 0; i < loops; i++) {
        pthread_mutex_lock(&mutex);
        while (count == 0) {
            // add code here
        }
        int tmp = get();
        // add code here
        pthread_mutex_unlock(&mutex);
        printf("%d\n", tmp);
    }
}
A
int buffer[MAX];
int fill_ptr = 0, use_ptr = 0, count = 0;
cond_t empty, fill;
mutex_t mutex;

void put(int value) {
    buffer[fill_ptr] = value;
    fill_ptr = (fill_ptr + 1) % MAX;
    count++;
}

int get() {
    int tmp = buffer[use_ptr];
    use_ptr = (use_ptr + 1) % MAX;
    count--;
    return tmp;
}

void *producer(void *arg) {
    for (int i = 0; i < loops; i++) {
        pthread_mutex_lock(&mutex);
        while (count == MAX) {
            pthread_cond_wait(&empty, &mutex);
        }
        put(i);
        pthread_cond_signal(&fill);
        pthread_mutex_unlock(&mutex);
    }
}

void *consumer(void *arg) {
    for (int i = 0; i < loops; i++) {
        pthread_mutex_lock(&mutex);
        while (count == 0) {
            pthread_cond_wait(&fill, &mutex);
        }
        int tmp = get();
        pthread_cond_signal(&empty);
        pthread_mutex_unlock(&mutex);
        printf("%d\n", tmp);
    }
}
58
Q

In class, we talked about several different hardware instructions used to build locks. For this question, we’ll discuss two instructions, the load-linked instruction and the store-conditional instruction, which are used together to implement locks in some hardware architectures, such as ARM. The load-linked instruction operates much like a typical load instruction and simply fetches a value from memory and places it in a register. The key difference comes with the store-conditional instruction, which only succeeds (and updates the value stored at the address just load-linked from) if no intervening store to the address has taken place. In the case of success, the store-conditional returns 1 and updates the value at ptr to value; if it fails, the value at ptr is not updated and a 0 is returned. The C pseudocode for these instructions is shown below.

int LoadLinked(int *ptr) {
    return *ptr;
}

int StoreConditional(int *ptr, int value) {
    if (no one has updated *ptr since the LoadLinked to this address) {
        *ptr = value;
        return 1; // success!
    } else {
        return 0; // failed to update
    }
}

Given the two functions above, where each function executes atomically with respect to itself, finish the implementation of the lock using these functions.

typedef struct __lock_t {
    int flag;
} lock_t;

void init(lock_t *lock) {
    lock->flag = 0; // 0 indicates that lock is available, 1 that it is held
}

void unlock(lock_t *lock) {
    lock->flag = 0;
}

// Finish the implementation of lock
void lock(lock_t *lock) {
}
A
typedef struct __lock_t {
    int flag;
} lock_t;

void init(lock_t *lock) {
    lock->flag = 0; // 0 indicates that lock is available, 1 that it is held
}

void unlock(lock_t *lock) {
    lock->flag = 0;
}

// Finish the implementation of lock
void lock(lock_t *lock) {
    while (1) {
        while (LoadLinked(&lock->flag) == 1)
            ; // spin until the lock is free
        // Lock is now free; try to acquire it by setting flag to 1 with StoreConditional
        if (StoreConditional(&lock->flag, 1) == 1)
            return; // success: no one else has grabbed the lock in between
        // else we failed to get the lock, try again
    }
}
59
Q

Approximate counters, a lock based concurrent data structure, trades off less _______ for better _______. (be as specific as you can)

A

counter accuracy;

execution time

60
Q

The following code compiles with no errors or warnings, but when you run the thread code below, the return value gives you an unexpected (i.e., wrong) answer. What is the problem with the following code and how would you fix it? You don’t have to code up your fix, just explain how you would fix it.

typedef struct __myarg_t {
    int a;
    int b;
} myarg_t;

typedef struct __myret_t {
    int x;
    int y;
} myret_t;

void *mythread(void *arg) {
    int val1, val2; // local vars allocated on the stack, local to each thread
    myret_t ret;    // mutual exclusion is not needed for these variables
    myarg_t *m = (myarg_t *) arg;
    // some code (not shown) to compute val1 and val2
    ret.x = val1;
    ret.y = val2;
    return (void *) &ret;
}
  1. What is wrong with the above code?
  2. How do we fix it?
  3. Give an example of where a spinlock lock would be preferable to a lock with sleep queues and why.
A
  1. The variable ret is a local variable allocated on the stack of the thread function. When mythread() returns, its stack frame is deallocated, so the returned pointer points to invalid memory.
  2. We can malloc space for the ret variable (and remember to free it when we are done), or (the horror!) use a global variable for the return structure.
  3. If we never hold on to the lock for very long, and there is not a lot of contention for the lock, a spinlock is often faster, as we don’t need a context switch, which can have considerable overhead.
61
Q
You have the following code:
// This function executes atomically
int FetchAndAdd(int *ptr) {
    int old = *ptr;
    *ptr = old + 1;
    return old;
}

typedef struct __lock_t {
    int ticket;
    int turn;
} lock_t;

void lock_init(lock_t *lock) {
    lock->ticket = 0;
    lock->turn = 0;
}

void lock(lock_t *lock) {
    int myturn = FetchAndAdd(&lock->ticket);
    while (lock->turn != myturn)
        ; // spin
}

void unlock(lock_t *lock) {
    lock->turn = lock->turn + 1; // Only the thread that has the lock calls this
                                 // so no mutual exclusion needed here.
}
  1. Does this code guarantee mutual exclusion?
  2. Does this code implement locking fairly?
  3. Does this code have good performance on a single processor system?
A
  1. Yes, as long as FetchAndAdd is atomic.
  2. Yes, each thread gets the lock in the order that it called lock().
  3. No, it’s still a spinlock.
62
Q

The solution to the Dining Philosophers problem avoids deadlock by eliminating which condition(s) that are necessary for deadlock to hold? (circle all that apply)

A. Mutual exclusion
B. Hold and wait
C. No preemption
D. Circular wait

A

D. Circular wait // we have one philosopher acquire the forks in a different order, breaking the cycle.

63
Q

Why is it a good idea to recheck the condition that caused a thread to call pthread_cond_wait() when that thread is woken up with a pthread_cond_signal()?

A

Most systems use Mesa semantics, which does not provide any guarantees that the state of the world hasn’t changed since signal() was called.

64
Q

True or False? + Explain:
For any problem you can solve with semaphores, you can also solve it using condition variables (with a corresponding lock for the condition variables).

A

True.

Explain why or why not: Have a look at the book’s implementation of a semaphore. It uses condition variables and locks.