Process, Thread, and OpenMP Flashcards

1
Q

Flow dependency

A

The output of a previous instruction is needed as input (read after write).
I1: R1 <- R2 + R3
I2: R5 <- R1 + R4

2
Q

Anti dependency

A

A value read as input is overwritten by a later instruction (write after read).
I1: R1 <- R2 + R3
I2: R2 <- R4 + R5

3
Q

Output dependency

A

Two instructions write to the same output variable (write after write).
I1: R1 <- R2 + R3
I2: R1 <- R4 + R5

4
Q

Data parallelism

A

Distribute the data structure evenly among the processors and let each processor perform the operation on its assigned elements.

5
Q

Owner-compute rule

A

Partition the results of a computation and let the processor ‘owning’ a part of the results perform the corresponding operations that compute it.

6
Q

What is a process

A

A program in execution. A process comprises the executable together with all information necessary to run the program, including its own local address space (copies of data, etc.). Each process may consist of multiple threads.

7
Q

What is a thread

A

Multiple independent control flows within a process are called threads (a thread is a sequence of instructions). Usually several threads share a common address space.

8
Q

Data Race

A

Threads share memory and have no protection against each other, so they can overwrite each other's data. A data race occurs when, between two synchronization points, at least one thread writes to a memory location from which at least one other thread reads. The result is non-deterministic: this is a race condition.

9
Q

False Sharing

A

When different threads access data that resides in the same cache line, performance suffers: each time one thread writes its data, the entire cache line is invalidated, forcing the other threads to reload it even though they are not actually sharing any data. To mitigate false sharing, pad or align data structures so that each element accessed by a different thread resides in its own cache line, reducing cache contention.

10
Q

User CPU time, system CPU time, and wall-clock time

A

1. User CPU time: the time the CPU spends executing A itself.
2. System CPU time: the time the CPU spends executing A, including time spent in the operating system on A's behalf.
3. Wall-clock time: the elapsed real time from start to finish of A (including I/O and the execution of other programs due to time sharing).

11
Q

Amdahl’s law

A

Amdahl’s Law states that the speedup of a program when running on multiple processors is limited by the portion of the program that cannot be parallelized.

12
Q

Gustafson’s law

A

In Gustafson’s Law, the idea is that as the number of processors increases, the problem size or the amount of work to be done can be scaled up to make use of the available parallel processing power. As a result, the speedup is not limited by the inherently sequential portion but rather by the size of the problem.
