Shared memory Flashcards

1
Q

two concepts for memory models

A
  1. coherence: ordering of accesses to a single memory location
  2. consistency: ordering of accesses across several memory locations
2
Q

sequential consistency

A
  1. the result of any execution is the same as if all operations were executed in some sequential (interleaved) order
  2. the operations of each individual processor appear in this sequence in the order specified by its program
3
Q

why doesn't sequential consistency work well for parallel programs?

A

- write buffers reorder writes and thus violate it
- compiler optimizations that reorder memory accesses are restricted
- every write has to be propagated to all processors before the next access

we need relaxed consistency models to get good performance

4
Q

all types of consistency

A

processor
weak
release
sequential

5
Q

processor consistency

A

writes by one thread are seen by all threads in the order they were issued,
but writes from different threads may be observed in different orders;
ordering is enforced locally, per thread

6
Q

weak consistency

A

splits operations into synchronization and data operations;
a sync operation flushes the memory pipeline: all prior accesses must complete before it, and later accesses wait until it completes

7
Q

release consistency

A

splits sync operations into acquire and release, like a lock:
before accessing a shared variable, all acquire operations must have completed;
before a release completes, all prior reads/writes must be done

8
Q

what is OpenMP's memory model?

A

similar to weak consistency: each thread keeps a temporary view of memory that is made consistent with main memory at flush points (implied by barriers, locks, and other sync constructs)

9
Q

flush directive

A

#pragma omp flush(list)

synchronizes the thread's temporary view of the variables in list with main memory

10
Q

performance issues of shmem

A
  1. thread creation/management overhead
  2. too little work per thread; load imbalance
  3. synchronization overhead: use fewer locks, avoid global locks
  4. cache behaviour and locality: NUMA, bandwidth, cache lines
  5. thread and data locality
11
Q

false sharing

A

two threads access separate data that happen to share a cache line; a write by one thread invalidates the whole line, which then has to be re-fetched by the other thread, wasting bandwidth and cycles

fix: use padding so each thread's data gets its own cache line

12
Q

data locality

A

traverse arrays row-wise: C stores 2-D arrays row-major, so consecutive accesses stay within the same cache line

13
Q

thread data locality

A

NUMA:
first touch: the first thread that touches a data element causes its page to be allocated in the memory closest to that thread;
physical memory is allocated at first access, not at allocation time

14
Q

UMA procon

A

PRO
- uniform, predictable access times

CON
- central memory becomes a bottleneck
- requires high bandwidth to the shared memory

15
Q

NUMA procon

A

PRO
- more aggregate memory bandwidth
- fewer bottlenecks

CON
- the cache-coherence protocol has large overhead
- each memory access can potentially invalidate data in other caches
