P3L4 - Synchronization Constructs Flashcards

1
Q

Why do you need synchronization constructs OTHER than simple mutexes and condition variables?

A

They are not error-proof!

2
Q

____ is one of the most basic synchronization constructs

A

Spinlock

3
Q

What is the purpose of a spinlock?

A

To provide mutual exclusion

4
Q

How are spinlocks different from mutexes, since both provide mutual exclusion?

A

When a spinlock is locked and another thread attempts to lock it, that thread is NOT blocked. The thread spins: it keeps running on the CPU, repeatedly checking whether the lock has become free
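
A minimal spinlock sketch in C, assuming C11 atomics (illustrative only; names like spinlock_t are not from the lecture):

```c
#include <stdatomic.h>

/* The lock is a single atomic flag; SPINLOCK_INIT starts it out clear (free). */
typedef struct { atomic_flag held; } spinlock_t;
#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void spin_lock(spinlock_t *l) {
    /* test-and-set returns the previous value; keep spinning (burning CPU)
       while the flag was already set, i.e. while the lock is busy */
    while (atomic_flag_test_and_set(&l->held))
        ;
}

static void spin_unlock(spinlock_t *l) {
    atomic_flag_clear(&l->held);   /* mark the lock free again */
}
```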

5
Q

What is the use of semaphores?

A

To express COUNT RELATED synchronization requirements

6
Q

What are semaphores initialized with?

A

An integer value

7
Q

What happens if a thread arrives at a semaphore with a value of 0?

A

It blocks

8
Q

What happens if a thread arrives at a semaphore with a nonzero value?

A

It decrements the value and proceeds with execution

9
Q

What do you call a semaphore initialized with a value of 1?

A

A binary semaphore - Will only allow one thread at a time to pass
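
A short POSIX-semaphore sketch (one common API for the construct; assumes Linux/pthreads, and the count of 2 is just an example):

```c
#include <semaphore.h>

sem_t slots;

void setup(void) {
    /* second argument 0 = shared between threads of one process; initial count = 2 */
    sem_init(&slots, 0, 2);
}

void worker(void) {
    sem_wait(&slots);   /* count > 0: decrement and proceed; count == 0: block */
    /* ... use one of the two available resources ... */
    sem_post(&slots);   /* increment the count, potentially waking a blocked thread */
}
```

Initializing the count to 1 instead gives the binary (mutex-like) semaphore from this card.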

10
Q

_____ is a construct that behaves similarly to a mutex but requires that the user only specify the type of access they wish to perform

A

Read/Write lock

11
Q

What is an alternative name for read/write locks?

A

Shared/Exclusive locks
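
A read/write-lock sketch using the POSIX pthread_rwlock API (one implementation of the construct; the shared counter is just an example):

```c
#include <pthread.h>

pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;
int counter;   /* shared state protected by the lock */

int read_counter(void) {
    pthread_rwlock_rdlock(&rwlock);   /* shared access: many readers at once */
    int v = counter;
    pthread_rwlock_unlock(&rwlock);
    return v;
}

void write_counter(int v) {
    pthread_rwlock_wrlock(&rwlock);   /* exclusive access: one writer, no readers */
    counter = v;
    pthread_rwlock_unlock(&rwlock);
}
```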

12
Q

T/F: Some implementations allow readers to convert their lock into a writer lock mid-execution?

A

True!

13
Q

____ are a higher level synchronization construct that allow us to avoid manually invoking lock/unlock operations

A

Monitors
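
C has no built-in monitors, but the idea can be approximated by hiding the mutex and condition variable inside the operations so callers never lock or unlock anything themselves (a sketch; the balance example is illustrative):

```c
#include <pthread.h>

static pthread_mutex_t m       = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonzero = PTHREAD_COND_INITIALIZER;
static int balance;   /* the monitor's protected state */

void monitor_deposit(int amount) {
    pthread_mutex_lock(&m);        /* "on entry" lock, hidden from the caller */
    balance += amount;
    pthread_cond_signal(&nonzero);
    pthread_mutex_unlock(&m);      /* "on exit" unlock, also hidden */
}

void monitor_withdraw(int amount) {
    pthread_mutex_lock(&m);
    while (balance < amount)
        pthread_cond_wait(&nonzero, &m);   /* wait inside the monitor */
    balance -= amount;
    pthread_mutex_unlock(&m);
}
```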

14
Q

T/F: We need the checking of the lock value and the setting of the lock value to happen atomically

A

True - We need to be able to guarantee that only one thread at a time can successfully obtain the lock

15
Q

List some examples of atomic operations.

A
  1. test_and_set
  2. read_and_increment
  3. compare_and_swap
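
Rough C11 equivalents of these operations, as a sketch; the lecture treats them as hardware-provided primitives whose exact names vary by architecture:

```c
#include <stdatomic.h>
#include <stdbool.h>

atomic_int lock_val;   /* e.g. 0 = free, 1 = busy */
atomic_int ticket;     /* e.g. a queue-lock ticket counter */

int test_and_set(void) {
    return atomic_exchange(&lock_val, 1);   /* write 1, return the old value */
}

int read_and_increment(void) {
    return atomic_fetch_add(&ticket, 1);    /* return the current value, then add 1 */
}

bool compare_and_swap(int expected, int desired) {
    /* if lock_val == expected, set it to desired and return true; else return false */
    return atomic_compare_exchange_strong(&lock_val, &expected, desired);
}
```
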
16
Q

What does it mean to say that an operation is atomic?

A

It will happen completely or not at all

17
Q

What guarantees are made by the hardware for atomic instructions?

A
  1. That the operations will happen atomically
  2. There will be mutual exclusion

18
Q

What are the two types of memory configurations for multiprocessor systems?

A
  1. Interconnect-based
  2. Bus-based

19
Q

What is the main difference between the bus-based and interconnect-based memory configurations?

A

In the interconnect-based configuration, multiple memory references can be in flight at a given moment, whereas in the bus-based configuration the shared bus can support only one memory reference at a time

20
Q

T/F: In general, access to the cache data is faster than access to data in main memory?

A

True

21
Q

Define non-cache coherent architecture

A

If one CPU writes a new version to its cache, the hardware will NOT update the value across the other CPU caches

22
Q

Define cache-coherent architectures

A

Hardware will take care of all the necessary steps to ensure that the caches are coherent

23
Q

What are the two basic strategies by which the hardware can achieve cache coherence?

A
  1. Write-Invalidate
  2. Write-Update

24
Q

_____ strategy will invalidate all cache entries once one CPU updates its copy. Future references to invalidated cache entries will have to pass through to main memory before being re-cached

A

Write-Invalidate

25
Q

_____ strategy will ensure that all cache entries are updated once one CPU updates its copy

A

Write-Update

26
Q

T/F: Atomic instructions on SMP systems are more expensive than on single-CPU systems because of bus or interconnect (I/C) contention?

A

True!

27
Q

Why would you ever want a spin lock instead of a mutex, ie, why would you rather spin than context switch/block?

A

If the owner of the lock is running on the same CPU as the waiting thread, the waiter should block (context switch): as long as it spins, the owner cannot run, so the lock will never be released.

If the owner is running on a different CPU, the lock can be released while the waiting thread spins, so spinning can be cheaper than paying for a context switch out and back in.

28
Q

Which lock spins on the cached value of the lock rather than on the atomic value?

A

test-and-test-and-set

29
Q

What is good about test-and-test-and-set?

A

+ Not as much contention as test-and-set

30
Q

What is bad about test-and-test-and-set?

A
- Worse latency than test-and-set
- Contention depends on the cache-coherence strategy:
  - no cache coherence: same as plain test-and-set
  - write-update: better
  - write-invalidate: much worse, because all threads see the lock become free at the same time and all try to grab it (see the sketch below)
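
A test-and-test-and-set sketch, assuming C11 atomics (illustrative only):

```c
#include <stdatomic.h>

atomic_int lock_val;   /* 0 = free, 1 = busy */

void ttas_lock(void) {
    while (1) {
        /* "test": a plain read, served from the local cache while the lock is busy */
        while (atomic_load(&lock_val) == 1)
            ;
        /* "test_and_set": only now issue the atomic, which goes to memory/the bus */
        if (atomic_exchange(&lock_val, 1) == 0)
            return;    /* previous value was 0, so this thread got the lock */
    }
}

void ttas_unlock(void) {
    atomic_store(&lock_val, 0);
}
```
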
31
Q

Which lock spins on the cached value with a delay?

A

Delay

32
Q

What is good about the delay lock?

A

+ Improves contention - delay means not every thread will execute the atomic at the same time.

33
Q

Which lock uses a random delay which is then tuned based on the perceived contention in the system?

A

Dynamic-Delay

34
Q

What is the rationale behind any delay lock?

A

Prevent every thread from trying to grab the lock at the same time when it is free
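
A delay-lock sketch (C11 atomics assumed; the doubling backoff capped at 1024 microseconds is just one illustrative policy, covering both the static- and dynamic-delay flavors):

```c
#include <stdatomic.h>
#include <unistd.h>

atomic_int lock_val;   /* 0 = free, 1 = busy */

void delay_lock(void) {
    useconds_t backoff = 1;   /* illustrative starting delay, in microseconds */
    while (atomic_exchange(&lock_val, 1) == 1) {
        /* lock was busy: spin on the cached value until it looks free */
        while (atomic_load(&lock_val) == 1)
            ;
        /* delay before retrying the atomic so waiters don't all pile on at once */
        usleep(backoff);
        if (backoff < 1024)
            backoff *= 2;     /* grow the delay if we keep losing (perceived contention) */
    }
}

void delay_unlock(void) {
    atomic_store(&lock_val, 0);
}
```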

35
Q

Which lock controls which thread(s) see that the lock is free?

A

Queuing lock

36
Q

What is good and bad about the queue lock?

A

+ Minimal delay
+ The best contention
- Must store an array element for each thread rather than just one element for a lock
- Must support read_and_increment, which is less common than test_and_set (sketched below)
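
An array-based queuing lock sketch in the spirit of Anderson's lock (C11 atomics assumed; the size and names are illustrative):

```c
#include <stdatomic.h>

#define NTHREADS 8             /* assumed maximum number of threads */
enum { MUST_WAIT, HAS_LOCK };

atomic_int flags[NTHREADS];    /* one element per thread/slot; zero-init = MUST_WAIT */
atomic_int queue_last;         /* next ticket to hand out */

void qlock_init(void) {
    atomic_store(&flags[0], HAS_LOCK);   /* first arrival gets the lock immediately */
}

int qlock_acquire(void) {
    /* read_and_increment: atomically claim the next slot in the queue */
    int my_slot = atomic_fetch_add(&queue_last, 1) % NTHREADS;
    while (atomic_load(&flags[my_slot]) == MUST_WAIT)
        ;                      /* spin only on my own element */
    return my_slot;            /* caller passes this back to qlock_release */
}

void qlock_release(int my_slot) {
    atomic_store(&flags[my_slot], MUST_WAIT);                  /* reset my slot */
    atomic_store(&flags[(my_slot + 1) % NTHREADS], HAS_LOCK);  /* hand the lock to the next waiter */
}
```

Because each waiter spins on a different array element, only the thread whose element is set to HAS_LOCK sees the lock become free.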

37
Q

Which locks perform the best and worst under high loads (more CPUs)?

A

best: The queue lock
worst: test-and-test-and-set (too much contention)

38
Q

Which locks perform the best and worst under low loads (fewer CPUs)?

A

best: test-and-test-and-set. (not enough CPUs to make contention bad)
worst: queue (the up-front cost of read_and_increment makes its many-CPU optimizations not worth it)

39
Q

What is latency where locks are concerned?

A

How long it takes to get the lock when it is free

40
Q

What is delay where locks are concerned?

A

If you’re waiting for a lock, how long after it is released do you get it.