Correctness Flashcards

1
Q

What is the problem with round-off error in parallel programs?

A

In a parallel program the order of operations is not fixed, and because a floating-point type can hold only a limited number of significant digits, different orderings can round differently and produce different final results.

2
Q

How can we fight round-off errors?

A
• Use double-precision instead of single-precision floating point (errors are still likely to occur, but they are mitigated); fewer flop/s but more accurate
• Rearrange the calculations smallest to largest (time consuming, not really worth it)
• Accept some level of tolerance: define the size of error you’re willing to allow (usually very small) and stop the calculation when this happens
3
Q

What is Deadlock?

A

• Occurs when two or more tasks wait for each other, and none will resume until some action is taken.
• e.g., people meet in a corridor but cannot pass, and each waits for the other to move

4
Q

What is Livelock?

A

• Occurs when the tasks involved in a deadlock take action to resolve the original deadlock, but in such a way that another deadlock arises (and this repeats forever). No tasks are able to resume.
• e.g., people meet in a corridor but cannot pass; each moves to the same side of the corridor, so they still cannot pass; both move to the opposite side, so they still cannot pass

5
Q

When do Race Conditions occur?

A

• Race conditions occur when a program’s behaviour changes depending on the sequence or timing of events outside the control of the program itself, e.g. thread scheduling
• This is likely to lead to incorrect results (if not a program crash).

6
Q

What are the solutions for multiple write race conditions?

A

• Extra memory:
  • Thread-local copy of the variable(s), to be accumulated at the end. We will revisit this when studying OpenMP.
• Extra time:
  • Guard the variable with access-control mechanisms so that only one thread is allowed access at a time. We will look at these shortly.
• Use underlying support:
  • The programming-language runtime, the OS, or sometimes even the hardware might have support for “atomic” operations, i.e. the underlying system implements the access control to avoid race conditions.
• Frequently “the best” solution is to rethink the algorithm so this scenario never appears.

7
Q

What are low-level locks? Give an example.

A

Used for controlling concurrent access.
• OpenMP: omp_set_lock(…), omp_unset_lock(…) and others
• Fine-grained control (as above)
• Easy to end up with a deadlock

8
Q

What is the benefit of having a high-level access-control mechanism that implements locks underneath?

A

It controls access, but we don’t have to worry about deadlocks.

9
Q

What are OpenMP critical sections used for?

A

• A code section can be marked as critical
• #pragma omp critical (name) { code }
• Introduces locks internally, but we don’t have to worry about them
• Perhaps the easiest, but not necessarily the fastest, way to avoid race conditions

10
Q

What is OpenMP atomic used for?

A

• Works on a single assignment
• #pragma omp atomic, followed by the assignment
• Restricted to certain operations (=, +=, *=, etc.)
• Maps onto hardware when hardware support is available
• Much faster for these reasons
• Falls back to using an omp critical section if no hardware support is available

11
Q

What is the trade-off of using atomic or critical for correctness?

A

Atomics and critical sections give correctness, but they serialise a (hopefully small) portion of the code. These approaches trade extra time for resolving the concurrency issue.

12
Q

What is a Reduction operator?

A

An operator that can take multiple inputs and produce a single result.

Must be associative (strictly) and commutative (can sometimes be relaxed)

Can reduce an array to a scalar value. This can be done in partial steps: applying the operator to parts of the array generates intermediates that can then be reduced further to get the result (this follows from associativity and commutativity). E.g. sum (+ operator), product (* operator), logical AND, logical OR, etc.

13
Q

What is Single Instruction Single Data? (SISD)

A

• Assumes that every instruction in the instruction stream corresponds to a single (pair of) element(s) in the data stream.
• Offers no parallelism at this level.

14
Q

What is Single Instruction Multiple Data? (SIMD)

A

• The instruction stream can have a single instruction, that corresponds to multiple (pairs of) elements in the data stream.
• Provides data-parallelism right at the hardware level.
• We’ve seen examples of this for vector processing – Intel SSE/AVX

15
Q

What is Multiple Instruction, Multiple Data? (MIMD)

A

• Multiple autonomous processors simultaneously executing different instructions on different data.
• Multi-core/Multi-processor systems are all examples of this.
• Can be further divided into Single Program, Multiple Data (SPMD) and Multiple Program, Multiple Data (MPMD).
• Single Program, Multiple Data:
• This module is mainly concerned with this sort of programming.
• MPI/OpenMP
• Focus is on data parallelism.
• Multiple Program, Multiple Data:
• Usually done in a client/server or master/worker paradigm.
• Commonly used frameworks are Hadoop/Spark/Dask.

• Multiple Instruction, Single Data (MISD), the remaining category, is most commonly seen when fault tolerance is vital.

• In the Space Shuttle flight-control computer, designers were worried about cosmic rays affecting the result of computations, so they put in two processors that would redundantly carry out every computation, followed by a step that would ensure the two results were the same before proceeding.
