L14 & L15 Flashcards
What are 2 advantages of std::thread over pthreads?
Type-safe argument passing (checked at compile time, unlike pthreads' void* arguments)
Less repetitive boilerplate code
How does the standard lock std::lock_guard behave?
It locks the mutex at construction and unlocks it at destruction (RAII)
What is parallel STL? What is an advantage of it?
Parallel STL provides high level parallelism through the C++ standard library. It provides parallelized algorithms for performing computations concurrently on multi-core processors.
It can help prevent data races
What are the 4 execution policies in Parallel STL?
sequenced_policy (std::execution::seq)
unsequenced_policy (std::execution::unseq, SIMD, C++20)
parallel_policy (std::execution::par)
parallel_unsequenced_policy (std::execution::par_unseq)
how are Parallel STL algorithms related to functional programming
STL algorithms are compositional, meaning composable transformations can be made on data, rather than sequences of instructions
Explain std::future from the C++ standard library.
Used for accessing results of asynchronous operations through shared state
Allows computation to be offloaded (std::async, std::packaged_task, std::promise)
What 3 abstractions offer an alternative to high-level and low-level standard parallelism? Outline an advantage of each.
Parallel libraries: low effort from the programmer
Domain-specific languages: easier to program, optimise and parallelise (ex: Matlab, TensorFlow)
Algorithmic skeletons: focus on how the computation flows, not what it does, utilising composability
Is OpenMP an algorithmic skeleton?
No, because it is not particularly composable
What are some disadvantages of algorithmic skeletons?
Some parallelism is not structured
Synchronisation automated but still an overhead
What is the main goal for OpenMP?
Parallel performance for serial code with minimal effort
What is used to start a parallel region in OpenMP?
#pragma omp parallel
- #pragma: tells the compiler this is a directive
- omp: pass the directive to the OpenMP system
- parallel: block that follows is a parallel region
What is reduction used for in a parallel for loop?
Reduction can be used to combine each thread’s results in a for loop
In OpenMP, what is used for data parallelisation? What is used for task parallelisation?
Loop splitting (parallel for)
Sections & tasks (task farm)
Are the following variables shared or private:
a. Variable declared inside parallel region
b. Variable declared outside parallel region
a. private
b. shared (by default)
What are some ways that OpenMP allows for coordination between threads?
barrier
nowait
omp_(un)set_lock()
critical
atomic