Lecture 3: Parallel Programming Using Shared Memory Flashcards

1
Q

What happens with virtual memory?

A

With virtual memory, each process gets the illusion that it has access to 100% of the memory, even though it shares the physical memory with other processes

2
Q

True or false

Two programs run in 2 different processes, i.e. 2 different address spaces

A

True

Each process has its own address space so there is no interference

3
Q

What is a thread?

A

A thread is a sequence of instructions executed on a CPU core. A process can consist of one or more threads

4
Q

True or false

Threads in the same process have different address spaces

A

False

All threads in the same process share the same address space

5
Q

How do threads communicate?

A

Threads communicate using shared memory

6
Q

Match the C/C++ thread functions

  1. pthread_create()
  2. pthread_exit()
  3. pthread_join()

A. to create and launch a thread
B. to wait for another thread to finish
C. to have the calling thread exit

A
  1. A
  2. C
  3. B
7
Q

What parameters does pthread_create() take?

A

It takes as parameters the function that the new (child) thread should run and, optionally, the arguments to pass to that function

8
Q

What are the two ways of defining a thread in Java?

A
  1. Class inherits from java.lang.Thread
  2. Class implements java.lang.Runnable. This lets the class inherit from something other than Thread
9
Q

Match the Java thread functions

  1. run()
  2. Thread.start()
  3. Thread.join()

A. to wait for the thread to complete
B. defines what the thread does when it starts running
C. gets the thread going

A
  1. B
  2. C
  3. A
10
Q

How is the order of execution of threads determined?

A

No control over the order of execution! The OS scheduler decides; it’s nondeterministic

11
Q

What is Data Parallelism? And when does it work best?

A

Divide computation into (nearly) equal sized chunks. Works best when there are no data dependencies between chunks.

12
Q

Where is Data Parallelism used?

A
  1. in multithreading
  2. at the instruction level in vector/array (SIMD) processors: CPUs (SSE, AVX)
  3. in GPGPUs
13
Q

Give an example of Data Parallelism

A

Matrix multiply of n x n matrices

14
Q

What is OpenMP used for and how is it useful?

A

OpenMP is used to achieve automatic parallelisation, with very low programmer effort

15
Q

Explain the difference between implicit and explicit parallelism

A

Explicit parallelism:
The programmer explicitly spells out what should be done in parallel and what in sequence, using threads or other high-level notations, e.g. OpenMP

Implicit parallelism:
No effort from the programmer; the system works out the parallelism by itself. This can be done, for example, by languages able to make strong assumptions about data sharing, e.g. pure functions in functional languages

16
Q

What are the issues with automatic parallelisation?

A
  1. It works for simple programs, but dependency analysis can be very hard.
  2. Automatic parallelisation can be limited by data dependencies.
  3. It must be conservative: if the compiler cannot be certain that the parallel version computes the correct result, it cannot parallelise.
17
Q

How can this loop be parallelised?

for (int i = 0; i < n-3; i++)
    a[i] = a[i+3] + b[i];

A
  1. Positive offset: at each iteration we read values that were in the array before the loop started.
  2. We never read a value computed by the loop itself.

We can parallelise by writing the results into a new version of array a:

parallel_for (int i = 0; i < n-3; i++)
    new_a[i] = a[i+3] + b[i];
a = new_a;

18
Q

How can this loop be parallelised?

for (int i = 5; i < n; i++) {
    a[i] += a[i-5] * 2;
}

A

Parallelism is limited to 5: iteration i depends on the value written by iteration i-5, so at most 5 consecutive iterations can run in parallel at a time

19
Q

How do threads use Shared Memory?

A

They read and write the same memory locations (e.g. a variable or a buffer), reached through:
1. Global variables
2. Pointers and references

20
Q

How can threads communicate if there is no shared memory?

A

They need to communicate via message passing, which is close to how a distributed system works

21
Q

Is communication through shared memory implicit or explicit?

A

Implicit: threads communicate simply by reading and writing memory, with no explicit send/receive operations