Synchronization - Monitors Flashcards

1
Q

What is a condition variable?

A

A variable that supports two operations, wait and signal, which enable other types of synchronization.

2
Q

What is a Monitor?

A

A mutual-exclusion construct generated by the compiler.

Only one process is active in a monitor at any given time.

The monitor provides queuing for waiting procedures.

3
Q

Does signalling have an effect if there are no waiting threads?

A

No. Unlike an up() on a semaphore, the signal is not remembered; it is simply lost.

4
Q

If only one procedure can be active in a monitor at a time, then when a signal occurs, which procedure executes: the signalling one or the signalled one?

A

There are two ways to continue:

  1. The signalled procedure executes first. That is, in the signalling procedure the signal is immediately followed by block() or exit_monitor().
  2. The signalling procedure is allowed to proceed.
5
Q

Do you remember how Bounded Buffer Producer/Consumer is implemented with Monitors?

It is a good example of the monitor construct: it has condition variables (full, empty), shared integer variables and, of course, procedures (a sketch is given below).

A
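A minimal Java sketch of that structure (illustrative, not the lecture's exact code; here the two conditions fall onto Java's single implicit condition variable, and a capacity of 10 is assumed):

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Monitor-style bounded buffer: the object lock is the monitor,
    // wait/notifyAll play the role of the condition variables.
    class BoundedBuffer {
        private final Queue<Integer> buffer = new ArrayDeque<>();
        private final int capacity = 10;               // assumed capacity N

        synchronized void insert(int item) throws InterruptedException {
            while (buffer.size() == capacity)          // "wait on full"
                wait();
            buffer.add(item);
            notifyAll();                               // "signal empty"
        }

        synchronized int remove() throws InterruptedException {
            while (buffer.isEmpty())                   // "wait on empty"
                wait();
            int item = buffer.remove();
            notifyAll();                               // "signal full"
            return item;
        }
    }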
6
Q

What is the problem with the implementation of Producer/Consumer using monitors, and in which case does it work properly?

A
  • It works properly only if the signalled procedure is the next to enter (Hoare semantics). Otherwise:
    1. k producers are waiting. Meanwhile, k consumers enter one after the other, such that only one producer is signalled (instead of all of them being woken up, which is what happens if the signalled procedure is the first to execute).
    2. One producer, p1, is waiting. Meanwhile, a consumer enters and signals it. Another producer, p2, enters the monitor before p1 manages to increment the counter. Eventually, p1 reaches the CS even though the buffer is full. (It behaves properly if the signalled procedure is the first to execute.)
7
Q

How many condition variables does a monitor in Java have?

A

Only a single implicit one

8
Q

What are the three synchronization operations in Java?

A

wait()

notify()

notifyAll()

9
Q

How is a monitor used to maintain synchronization?

A

The monitor “compiler” automatically inserts entry and exit code into the compiled procedures (in Java, the keyword “synchronized” makes that happen).
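For example, a tiny illustrative class (not the lecture's code):

    // The synchronized keyword makes the compiler/JVM insert monitor-entry
    // code at the start of each method and monitor-exit code at every return,
    // so at most one thread is active inside a Counter object at any time.
    class Counter {
        private int value = 0;

        synchronized void increment() {
            value++;
        }

        synchronized int get() {
            return value;
        }
    }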

10
Q

Pay attention to a main problem with the implementation of monitors using semaphores: deadlock. It happens due to the lack of a count of the number of waiting procedures.

A
11
Q

How is the deadlock fixed in the implementation of monitors?

A

We simply add a count variable.

Before we wait on some condition, we increase its count, release the mutex and then call down(C). Whenever a procedure finishes, it checks whether count > 0: if not, the mutex is released; if so, it decreases the count and calls up(C), so that a waiting procedure enters the CS and releases the mutex later.

Note: the signalled procedure releases the mutex on behalf of the signalling procedure.
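A minimal sketch, in Java with semaphores, of a condition variable built this way (illustrative names, not the lecture's exact code):

    import java.util.concurrent.Semaphore;

    // A condition variable built from semaphores plus a waiter count,
    // following the scheme above. "mutex" is the monitor's entry semaphore.
    class MonitorCondition {
        private final Semaphore mutex;                    // monitor entry lock
        private final Semaphore cond = new Semaphore(0);  // down(C)/up(C)
        private int count = 0;                            // number of waiting procedures

        MonitorCondition(Semaphore monitorMutex) { this.mutex = monitorMutex; }

        // Called while holding mutex: register as a waiter, let others in, block.
        void await() throws InterruptedException {
            count++;
            mutex.release();        // release the monitor
            cond.acquire();         // down(C): wait for a signal
            // On waking we hold the monitor: the signaller did not release mutex.
        }

        // Called while holding mutex, when a procedure finishes (leaves the monitor).
        void signalAndExit() {
            if (count > 0) {
                count--;
                cond.release();     // up(C): hand the monitor to a waiter; that
                                    // signalled procedure releases mutex later,
                                    // on behalf of this (signalling) procedure
            } else {
                mutex.release();    // no one is waiting: just leave the monitor
            }
        }
    }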

12
Q

How many condition variables do we have in the implementation of monitors in Java?

A

No named condition variables, only a single implicit one

13
Q

What are the synchronization operations?

A

wait()

notify()

notifyAll()

14
Q

What does java.util.concurrent provide?

A

An explicit Lock interface

Multiple condition variables per lock

(the Condition interface, obtained with lock.newCondition())
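For example, a small sketch using java.util.concurrent.locks (class and field names are illustrative):

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    // An explicit lock with two named condition variables.
    class BoundedCounter {
        private final ReentrantLock lock = new ReentrantLock();
        private final Condition notFull  = lock.newCondition();
        private final Condition notEmpty = lock.newCondition();
        private final int max = 10;        // assumed bound
        private int count = 0;

        void increment() throws InterruptedException {
            lock.lock();
            try {
                while (count == max) notFull.await();   // wait on a named condition
                count++;
                notEmpty.signal();
            } finally {
                lock.unlock();
            }
        }

        void decrement() throws InterruptedException {
            lock.lock();
            try {
                while (count == 0) notEmpty.await();
                count--;
                notFull.signal();
            } finally {
                lock.unlock();
            }
        }
    }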

15
Q

What is a barrier?

A

A synchronization mechanism: every process/thread must stop at this point and cannot proceed until all the other processes/threads have reached the barrier.

16
Q

What is a barrier useful for?

A

It is useful for computations that proceed in phases. For instance, we have 5 threads performing 5 tasks, 1 task for each, and we must have all these tasks finished before we print the results.

17
Q

What is the fetch&increment instruction?

A

An atomic operation:

fetch&increment(w):
    prev = w
    w = w + 1
    return prev

18
Q

Why does this simple barrier work, while the barrier in the answer does not?

A

The barrier in the answer doesn’t work because of a simple case:

The two last threads execute fetch&increment(counter) before either of them reaches the if statement.

In this case, in the if statement, counter = n for both.

The simple barrier does work because we save the result of fetch&increment in a local variable, and the saved results are necessarily different because the instruction is atomic.
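A minimal sketch of the working barrier, under the assumption that it tests the value returned by fetch&increment rather than re-reading the shared counter (Java's AtomicInteger.getAndIncrement plays the role of fetch&increment):

    import java.util.concurrent.atomic.AtomicInteger;

    // Single-use barrier: each thread keeps the value returned by
    // getAndIncrement in a local variable, so exactly one thread
    // sees itself as the last arriver and opens the gate.
    class SimpleBarrier {
        private final int n;                               // number of threads
        private final AtomicInteger counter = new AtomicInteger(0);
        private volatile boolean open = false;             // the "gate"

        SimpleBarrier(int n) { this.n = n; }

        void await() {
            int prev = counter.getAndIncrement();          // unique per thread
            if (prev == n - 1) {
                open = true;                               // last thread opens the gate
            } else {
                while (!open) { /* spin until the gate opens */ }
            }
        }
    }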

19
Q

We want to maintain a synchronized database for readers and writers, such that multiple readers may access it simultaneously, but a writer needs exclusive access.

What is the problem with the following implementation?

A

When multiple readers try to read, a writer may suffer from starvation. Once a reader captures the database, it is released only when the last reader finishes reading. A simple improvement would be that no writer is kept waiting once it is “ready”.
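Since the implementation itself is not shown here, the following is a sketch of the standard reader-preference solution this card presumably refers to (names are illustrative):

    import java.util.concurrent.Semaphore;

    // Classic reader-preference readers/writers with semaphores.
    class ReaderPreferenceRW {
        private final Semaphore rdb = new Semaphore(1);    // database lock ("Rdb")
        private final Semaphore mutex = new Semaphore(1);  // protects readerCount
        private int readerCount = 0;

        void readerEnter() throws InterruptedException {
            mutex.acquire();
            if (++readerCount == 1) rdb.acquire();  // first reader locks out writers
            mutex.release();
        }

        void readerExit() throws InterruptedException {
            mutex.acquire();
            if (--readerCount == 0) rdb.release();  // last reader lets writers in
            mutex.release();
        }

        void writerEnter() throws InterruptedException { rdb.acquire(); }
        void writerExit()  { rdb.release(); }
        // While readers keep arriving, readerCount never drops to 0,
        // so a waiting writer can starve.
    }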

20
Q

The attached algorithm gives writers priority. A new writer decreases Rdb, making all arriving readers block.

Why does this algorithm give writers priority?

Is the starvation problem for writers solved now?

A
  1. Once Rdb is taken by a writer, all the readers are blocked, and only the last writer to leave Wdb releases Rdb. But if some reader takes Rdb, it will also release it, making it again a race between the readers and the writers. To put it simply: once a writer takes hold of Rdb, writers write one by one while readers wait (there is a race only among the writers); if a reader takes hold of Rdb, then once it has finished reading a new race begins (writer vs. readers). One standard formulation is sketched below.
  2. It does not solve the starvation problem, because there may be 1000 waiting readers vs. one writer, and if Rdb is not fair, the readers will constantly take control of it, leaving the writer to starve.
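A sketch of one standard writer-priority formulation matching the description above (assumed; it may differ in detail from the attached algorithm, and names are illustrative):

    import java.util.concurrent.Semaphore;

    // Writer-priority readers/writers: the first writer takes Rdb, blocking
    // new readers; the last writer to leave releases Rdb.
    class WriterPriorityRW {
        private final Semaphore rdb = new Semaphore(1);     // readers' entry ("Rdb")
        private final Semaphore wdb = new Semaphore(1);     // database lock ("Wdb")
        private final Semaphore rMutex = new Semaphore(1);  // protects readerCount
        private final Semaphore wMutex = new Semaphore(1);  // protects writerCount
        private int readerCount = 0, writerCount = 0;

        void readerEnter() throws InterruptedException {
            rdb.acquire();                          // blocked while writers hold Rdb
            rMutex.acquire();
            if (++readerCount == 1) wdb.acquire();  // first reader locks the database
            rMutex.release();
            rdb.release();                          // a reader releases Rdb right away
        }

        void readerExit() throws InterruptedException {
            rMutex.acquire();
            if (--readerCount == 0) wdb.release();  // last reader frees the database
            rMutex.release();
        }

        void writerEnter() throws InterruptedException {
            wMutex.acquire();
            if (++writerCount == 1) rdb.acquire();  // first writer blocks new readers
            wMutex.release();
            wdb.acquire();                          // writers write one at a time
        }

        void writerExit() throws InterruptedException {
            wdb.release();
            wMutex.acquire();
            if (--writerCount == 0) rdb.release();  // last writer re-admits readers
            wMutex.release();
        }
    }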
21
Q

How is starvation solved using this improved algorithm?

A

We use a simple and efficient tool: we wrap the Rdb mutex with mutex2. This way, if new readers and a writer arrive while some reader is within the gap between down(Rdb) and up(Rdb), all the new readers wait on mutex2 while the writer waits on Rdb. The writer is released first, and then one reader waits on Rdb while the rest of the readers are still waiting on mutex2. In total, a writer may have to wait for at most one reader to finish reading before it enters the CS.

22
Q

What is the idea behind the One-way tunnel?

What is the job of the wrapping mutex?

A

* It is a special case of readers/writers.

* We have two procedures which denote direction: in and out.

* The out procedure has priority over the in procedure. That is, if some threads are waiting on out, the in processes have to wait.

The wrapping mutex maintains the priority of the leaving processes over the arriving processes.

23
Q

What is the main problem with the Dining philosophers problem that might cause deadlock?

A

The need to lock two forks, one after the other. It might happen that each philosopher takes his left fork, resulting in each philosopher waiting for his right fork to be released.

24
Q

We saw it can be solved simply using a general mutex: when someone wants to eat, i.e. take his two forks, he must first own the mutex. When he wants to release the forks, he must again own the mutex, and then check whether his colleagues to his left and right are waiting for one of his forks; if so, he signals them.

Why is this solution not so good?

A

It is an inefficient solution:

We lost quite a lot of the concurrency we previously had.

We still might encounter starvation, since taking hold of the general mutex is not fair.

25
Q

Of course, the solution of the Dining Philosophers problem that uses a general mutex can also be implemented using a monitor. Just have a look at it (a sketch is given below); recall that it is still not efficient.

A
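A minimal monitor-style sketch in Java of the general-mutex idea (illustrative, not the attached code): the synchronized methods play the role of the general mutex, and a hungry philosopher waits until neither neighbour is eating.

    // Classic pickup/putdown monitor for n philosophers.
    class DiningMonitor {
        enum State { THINKING, HUNGRY, EATING }
        private final int n;
        private final State[] state;

        DiningMonitor(int n) {
            this.n = n;
            state = new State[n];
            java.util.Arrays.fill(state, State.THINKING);
        }

        synchronized void pickup(int i) throws InterruptedException {
            state[i] = State.HUNGRY;
            while (state[(i + n - 1) % n] == State.EATING
                || state[(i + 1) % n] == State.EATING) {
                wait();                      // single implicit condition variable
            }
            state[i] = State.EATING;
        }

        synchronized void putdown(int i) {
            state[i] = State.THINKING;
            notifyAll();                     // "alarm" the hungry neighbours
        }
    }

Note that notifyAll() on the single implicit condition variable wakes every hungry philosopher, which is part of why this version is not efficient.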
26
Q

What is the RL solution for the Dining Philosophers problem? Why is it starvation free?

A

Even i -> philosopher i is R type:

He tries to catch his right fork first, and does not release it until he manages to take hold of his left fork.

Odd i -> philosopher i is L type:

He tries to catch his left fork first, and does not release it until he manages to take hold of his right fork.

Is it starvation free?

It is starvation free because two adjacent philosophers might try to catch the same fork, but that fork is the first fork both of them try to catch. The first to catch it will surely take hold of his other fork as well (because if his other fork is taken, it means someone is eating and is about to release it).

Starvation could only happen if the same philosopher eats again and again without letting his neighbour eat. Given that the fork is “fair”, this does not happen.
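A minimal sketch of the R/L idea in Java (illustrative; the fork indexing and the use of fair ReentrantLocks as the "fair" forks are assumptions):

    import java.util.concurrent.locks.ReentrantLock;

    // Even-numbered philosophers grab their right fork first,
    // odd-numbered ones their left fork first.
    class RLPhilosophers {
        private final int n;
        private final ReentrantLock[] forks;

        RLPhilosophers(int n) {
            this.n = n;
            forks = new ReentrantLock[n];
            for (int i = 0; i < n; i++) forks[i] = new ReentrantLock(true); // fair forks
        }

        void eat(int i) {
            ReentrantLock right = forks[i];
            ReentrantLock left  = forks[(i + 1) % n];
            ReentrantLock first  = (i % 2 == 0) ? right : left;   // R type vs L type
            ReentrantLock second = (i % 2 == 0) ? left  : right;
            first.lock();
            second.lock();          // held onto first fork until the second is taken
            try {
                // eat
            } finally {
                second.unlock();
                first.unlock();
            }
        }
    }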

27
Q

What is the cache coherence problem?

What is distributed shared memory (DSM)?

A

Problem: the caches of all processors should see the same view of shared data in memory.

DSM - each core has its own private memory (local). If it needs to access another core's memory, it goes to that core's private memory (remote).

28
Q
  • local access - cache
  • remote access - disk
  • Cache-coherent system: when does a remote access of v by p happen?
A
    1. The first time v is read.
    2. v has been written by another process since p’s last access.
29
Q

What is a local-spin algorithm?

A

An algorithm in which:

  1. All busy waiting is done by read-only loops of local accesses.
  2. No interconnect traffic is generated while busy waiting.
30
Q
  • Is Peterson’s 2-process algorithm local-spin on a DSM machine? No!
  • Is Peterson’s 2-process algorithm local-spin on a CC machine? Yes!
  • Why?
A

CC: each process busy-waits on a cached (local) copy of the shared variables. A remote access happens only the first time a variable is read, or after the other process has written it since the last access; the spin loop itself keeps hitting the cache, so the busy waiting generates no interconnect traffic.

DSM: the shared variables reside in the local memory (cache) of one specific core. When the other process busy-waits on them, every read in its spin loop is a remote access, so interconnect traffic is generated.
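For reference, a minimal sketch of Peterson's 2-process algorithm in Java (illustrative; volatile is used to provide the ordering the algorithm assumes). The while loop is the busy wait in question:

    // Peterson's 2-process mutual exclusion; the spin loop reads flag/turn repeatedly.
    class Peterson {
        private volatile boolean flag0 = false, flag1 = false; // "I want to enter"
        private volatile int turn = 0;                          // whose turn it is to wait

        void lock(int i) {                 // i is 0 or 1
            if (i == 0) {
                flag0 = true; turn = 1;
                while (flag1 && turn == 1) { /* busy wait */ }
            } else {
                flag1 = true; turn = 0;
                while (flag0 && turn == 0) { /* busy wait */ }
            }
        }

        void unlock(int i) {
            if (i == 0) flag0 = false; else flag1 = false;
        }
    }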

31
Q

The MCS queue-based algorithm maintains an order for the waiting processes using a queue and two atomic operations: Swap and Compare-And-Swap (attached).

How does it work?

  • Clue: the familiar efficient pattern repeats itself:
    1. Atomic operations help maintain the order of the requests, i.e. each process gets a “number”.
    2. If no one is currently in line, enter the CS.
    3. If there is a waiting line, the currently running process releases one specific process: the one that holds the consecutive number.
A
  • We have a queue. Each time a new process p asks for the lock, it checks the queue.
  • If the queue is empty, it enters the CS.
  • If it is not, it uses Swap to get the last process that asked for the lock (its predecessor) and to append itself at the tail of the queue. Then it sets:

p->locked = true; predecessor->next = p.

After that, it waits until the predecessor finishes its task, at which point the predecessor sets predecessor->next->locked = false, enabling p to enter the CS.
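A minimal Java sketch of the standard MCS lock (field and class names are illustrative; AtomicReference's getAndSet and compareAndSet play the roles of Swap and Compare-And-Swap):

    import java.util.concurrent.atomic.AtomicReference;

    class MCSLock {
        static class QNode {
            volatile boolean locked = false;
            volatile QNode next = null;
        }

        private final AtomicReference<QNode> tail = new AtomicReference<>(null);
        private final ThreadLocal<QNode> myNode = ThreadLocal.withInitial(QNode::new);

        void lock() {
            QNode node = myNode.get();
            node.next = null;
            QNode pred = tail.getAndSet(node);       // Swap: enqueue and learn predecessor
            if (pred != null) {                      // someone is ahead of us
                node.locked = true;
                pred.next = node;                    // link behind the predecessor
                while (node.locked) { /* local spin on our own node */ }
            }
            // pred == null: the queue was empty, we hold the lock
        }

        void unlock() {
            QNode node = myNode.get();
            if (node.next == null) {
                // no visible successor: try to reset the queue to empty
                if (tail.compareAndSet(node, null)) return;   // Compare-And-Swap
                while (node.next == null) { /* a successor is linking itself */ }
            }
            node.next.locked = false;                // hand the lock to the successor
        }
    }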