Shared Memory Programming Flashcards

1
Q

#pragma omp parallel

A

Use the parallel directive to start multiple threads.
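
A minimal sketch (a hypothetical hello-world, assuming a compiler with OpenMP support, e.g. gcc -fopenmp):

#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel
    {
        // Each thread in the team executes this block once.
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}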

2
Q

#pragma omp critical

A

Protect sections using critical to ensure only one thread executes a block at a time.
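
A minimal sketch (hypothetical shared counter) protecting an update with critical:

int count = 0;
#pragma omp parallel
{
    #pragma omp critical
    {
        count++;   // only one thread at a time executes this block
    }
}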

3
Q

#pragma omp atomic

A

Use atomic for simple updates of a single memory location (e.g., x++ or sum += value) that can be performed atomically; it is typically cheaper than a full critical section.
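
A minimal sketch (hypothetical counter, same idea as the critical example but typically cheaper):

int hits = 0;
#pragma omp parallel
{
    #pragma omp atomic
    hits += 1;   // the single += update is performed atomically
}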

4
Q

omp_lock_t lock;
omp_init_lock(&lock);
#pragma omp parallel
{
    omp_set_lock(&lock);
    // Critical section
    omp_unset_lock(&lock);
}
omp_destroy_lock(&lock);

A

Manage access to critical sections more explicitly with locks.

5
Q

#pragma omp parallel for

A

Divide the iterations of the following loop among the threads; the loop body must be free of loop-carried dependencies.
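
A minimal sketch (hypothetical array a of length n, both assumed to be defined):

#pragma omp parallel for
for (int i = 0; i < n; i++) {
    a[i] = 2 * i;   // iterations are independent, so they can run in any order
}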

6
Q

#pragma omp parallel for private(x) shared(y)

A

Explicitly declare variable scope to avoid unintended data sharing.
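
A minimal sketch (hypothetical names x and y) of explicit scoping:

double y[100];   // shared: every thread sees the same array
double x;        // private: each thread gets its own copy
#pragma omp parallel for private(x) shared(y)
for (int i = 0; i < 100; i++) {
    x = i * 0.5;   // safe: x is private, no race between threads
    y[i] = x;      // safe: threads write to distinct elements of the shared array
}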

7
Q

#pragma omp parallel for reduction(+:sum)

A

Parallelize a reduction efficiently: each thread accumulates into a private copy of sum, and the private copies are combined with + at the end of the loop.
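
A minimal sketch (hypothetical array a of length n) of a parallel sum:

int sum = 0;
#pragma omp parallel for reduction(+:sum)
for (int i = 0; i < n; i++) {
    sum += a[i];   // each thread adds into its private sum; copies are combined at the end
}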

8
Q

#pragma omp barrier

A

Synchronize all threads at a certain point: no thread proceeds past the barrier until every thread in the team has reached it.
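
A minimal sketch (phase_one and phase_two are hypothetical helpers):

#pragma omp parallel
{
    phase_one();         // all threads work on phase one
    #pragma omp barrier  // no thread starts phase two until every thread finished phase one
    phase_two();
}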

9
Q

A structured parallel region

A
  • One point of entry: Execution can only start at the beginning of the block.
  • One point of exit: Execution leaves the block at one specific point, ensuring a clear and predictable flow of control.
  • Limited branching: The only allowed branching out of the structured block is through an
    exit() call or reaching the end of the block.
10
Q

#pragma omp parallel default(private)

A

By default, variables declared before a parallel block have shared scope among all threads, unless specified otherwise. However, you can change this default behavior using the default clause.

11
Q

#pragma omp parallel for schedule(static, 1)

A

Distributes the iterations in a round-robin fashion among the threads: with chunk size 1, iteration i goes to thread i mod num_threads.
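
A minimal sketch (hypothetical 8-iteration loop, assuming <omp.h> and <stdio.h> are included) showing the round-robin assignment:

#pragma omp parallel for schedule(static, 1)
for (int i = 0; i < 8; i++) {
    // With chunk size 1, iteration i goes to thread i % num_threads.
    printf("iteration %d ran on thread %d\n", i, omp_get_thread_num());
}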

12
Q

#pragma omp parallel for schedule(dynamic, 3)

A

Iterations are divided into chunks (here of size 3). When a thread finishes its chunk, it requests another, continuing until all iterations are completed.

13
Q

#pragma omp parallel for schedule(guided, 2)

A

Similar to dynamic, but the chunk size decreases as threads complete their tasks. This can lead to more efficient utilization of threads towards the end of the task list.

14
Q

#pragma omp parallel for schedule(runtime)

export OMP_SCHEDULE="dynamic,4"

A

The scheduling strategy is determined at runtime based on the OMP_SCHEDULE environment
variable. This allows flexibility as the same code can use different scheduling by just changing an environment variable.

15
Q

#pragma omp task

A

Tasks are usually created within a single construct so that only one thread creates them, avoiding unnecessary duplication of work; the tasks themselves may then be executed by any thread in the team.
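
A minimal sketch (process() is a hypothetical function) where one thread creates the tasks and the whole team executes them:

#pragma omp parallel
{
    #pragma omp single
    {
        // Only this one thread creates the tasks ...
        for (int i = 0; i < 10; i++) {
            #pragma omp task firstprivate(i)
            process(i);   // ... but any thread in the team may run each task
        }
    }
}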

16
Q

Deadlocks

A

A situation where a set of processes is blocked, each waiting for an event that only another process in the set can cause.
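
A minimal sketch (hypothetical, using two OpenMP locks) of how a deadlock can arise when two threads take the same locks in opposite order:

omp_lock_t a, b;
omp_init_lock(&a);
omp_init_lock(&b);
#pragma omp parallel num_threads(2)
{
    if (omp_get_thread_num() == 0) {
        omp_set_lock(&a);   // thread 0 holds a ...
        omp_set_lock(&b);   // ... and waits for b
        omp_unset_lock(&b);
        omp_unset_lock(&a);
    } else {
        omp_set_lock(&b);   // thread 1 holds b ...
        omp_set_lock(&a);   // ... and waits for a: circular wait, possible deadlock
        omp_unset_lock(&a);
        omp_unset_lock(&b);
    }
}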

17
Q

Conditions for Deadlock

A
  1. Mutual Exclusion: Only one process can use a resource at any given time.
  2. Hold and Wait: A process holding at least one resource is waiting to acquire additional
    resources held by other processes.
  3. No Preemption: A resource can be released only voluntarily by the process holding it.
  4. Circular Wait: There exists a set of processes {P1, P2, …, Pn} such that P1 is waiting for
    a resource held by P2, P2 is waiting for a resource held by P3, and so on until Pn is waiting for a resource held by P1.
18
Q

Deadlock Prevention

A
  • Eliminate Mutual Exclusion (not always possible).
  • Avoid Hold and Wait by requiring processes to request all needed resources at once.
  • Preempt resources when needed.
  • Prevent Circular Wait by imposing a total ordering of resource types (a lock-ordering sketch follows this list).
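
A minimal sketch (hypothetical, reusing the two locks a and b from the deadlock example) where every thread acquires the locks in the same global order, so no circular wait can form:

#pragma omp parallel num_threads(2)
{
    omp_set_lock(&a);   // every thread takes a first ...
    omp_set_lock(&b);   // ... then b, so a cycle of waits is impossible
    // use both resources
    omp_unset_lock(&b);
    omp_unset_lock(&a);
}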
19
Q

Deadlock Avoidance

A
  • Banker's Algorithm
  • Safety (safe-state) Algorithm
  • Resource-Allocation Graph Algorithm (when each resource type has a single instance)
20
Q

Deadlock Detection

A
  • Search for cycles: if each resource type has a single instance, transform the resource-allocation graph (RAG) into a wait-for graph and look for a cycle.
  • If resource types have multiple instances, use a detection algorithm similar to the Banker's algorithm.
21
Q

Recovery from Deadlock

A
  • Process Termination: Abort all deadlocked processes or abort them one at a time until
    the deadlock cycle is broken.
  • Resource Preemption: Take a resource away from one process and give it to another.
  • Selection and Rollback: Choose a “victim” to roll back (undo the process to a safe state) to break the deadlock, considering factors like the number of resources the process has consumed and the time the process has consumed resources.
  • Starvation: Ensuring that the same process is not always chosen as the victim during recovery from deadlock, to prevent it from being starved of resources.
22
Q

TestAndSet()

A

An atomic operation used to set a target boolean variable to TRUE
and return its previous value.
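
A minimal sketch (using C11's atomic_flag, whose test-and-set operation has exactly this semantics) of a spinlock:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;   // spin: the previous value was TRUE, so someone else holds the lock
}

void release(void) {
    atomic_flag_clear(&lock);   // set back to FALSE
}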

23
Q

Swap()

A

An atomic operation that exchanges the values of two boolean variables.

24
Q

Semaphore

A
  • Used like a lock, initialized to 1.
  • wait(S): Decrements the semaphore if it's positive, otherwise waits until it becomes positive.
  • signal(S): Increments the semaphore, potentially unblocking a waiting thread.
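
A minimal sketch (using POSIX semaphores, where sem_wait/sem_post play the role of wait/signal) protecting a critical section:

#include <semaphore.h>

sem_t S;
sem_init(&S, 0, 1);   // binary semaphore, initialized to 1

sem_wait(&S);         // wait(S): decrement, or block until it becomes positive
// critical section
sem_post(&S);         // signal(S): increment, possibly unblocking a waiting thread

sem_destroy(&S);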
25
Q

Counting Semaphore

A
  • Generalization of the binary semaphore, initialized to the number of available resources.
  • Used to control access to a set number of identical resources.
  • wait and signal operations are similar to binary semaphores but control multiple instances of a resource.
26
Q

Bounded-Buffer Problem
// Producer
while (true) {
    wait(empty);
    wait(mutex);
    // add item to buffer
    signal(mutex);
    signal(full);
}
// Consumer
while (true) {
    wait(full);
    wait(mutex);
    // remove item from buffer
    signal(mutex);
    signal(empty);
}

A

This scenario involves a fixed number of buffer slots. The producer can add an item if a slot is empty, and the consumer can remove an item if a slot is full. Semaphores used:
- mutex (initialized to 1): Ensures exclusive access to add or remove items.
- full (initialized to 0): Tracks the number of full buffer slots.
- empty (initialized to n): Indicates the number of empty buffer slots.

27
Q

Readers-Writers Problem
// Writer
while (true) {
    wait(wrt);
    // writing
    signal(wrt);
}
// Reader
while (true) {
    wait(mutex);
    readcount++;
    if (readcount == 1) wait(wrt);   // first reader locks out writers
    signal(mutex);
    // reading
    wait(mutex);
    readcount--;
    if (readcount == 0) signal(wrt); // last reader lets writers back in
    signal(mutex);
}

A

Involves shared data that cannot be accessed by others while being written but can be read by multiple readers simultaneously. Semaphores:
- wrt (initialized to 1): Allows one writer or multiple readers access.
- mutex (initialized to 1): Controls access to readcount variable.

28
Q

Dining Philosophers Problem

// Philosopher i
while (TRUE) {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
}

A

Five philosophers sit around a table with five chopsticks between them. Each must pick up the two nearest chopsticks to eat. Semaphores represent chopsticks (initialized to 1).