Lecture 3: Parallel Programming Using Shared Memory Flashcards
What happens with virtual memory?
With virtual memory, each process gets the illusion that it has the entire memory (its own full address space) to itself
True or false
Two programs run in 2 different processes, i.e. 2 different address spaces
True
Each process has its own address space so there is no interference
What is a thread?
A thread is a sequence of instructions executed on a CPU core. A process can consist of one or more threads
True or false
Threads in the same process have different address spaces
False
All threads in the same process share the same address space
How do threads communicate?
Threads communicate using shared memory
Match the C/C++ (pthread) thread functions
- pthread_create()
- pthread_exit()
- pthread_join()
A. to create and launch a thread
B. to wait for another thread to finish
C. to have the calling thread exit
- A
- C
- B
What parameter can pthread_create take?
Takes as parameters the function the new (child) thread should run and, optionally, an argument to pass to that function
What are the two ways of defining a thread in Java?
- Class inherits from java.lang.Thread
- Class implements java.lang.Runnable. Lets the class inherit from something other than Thread
Match the Java thread functions
- run()
- Thread.start()
- Thread.join()
A. to wait for the thread to complete
B. defines what the thread does when it starts running
C. gets the thread going
- B
- C
- A
How is the order of execution of threads determined?
No control over the order of execution! The OS scheduler decides; the order is nondeterministic
What is Data Parallelism? And when does it work best?
Divide the computation into (nearly) equal-sized chunks. Works best when there are no data dependencies between the chunks.
Where is Data Parallelism used?
- in multithreading
- at the instruction level in vector/array (SIMD) units of CPUs (SSE, AVX)
- in GPGPUs
Give an example of Data Parallelism
Matrix multiply of n x n matrices: each element of the result can be computed independently of the others
What is OpenMP used for and how is it useful?
OpenMP is used to achieve automatic parallelisation, typically by annotating loops with compiler directives (pragmas). Very low programmer effort
Explain the difference between implicit and explicit parallelism
Explicit parallelism:
The programmer explicitly spells out what should be done in parallel and what in sequence, using threads or other high-level notations, e.g. OpenMP
Implicit parallelism:
No effort from the programmer; the system works out the parallelism by itself. Possible, for example, in languages able to make strong assumptions about data sharing, e.g. pure functions in functional languages