Parallel Programming Questions Flashcards

1
Q
What is the correct order of operations for protecting a critical section using a binary semaphore?
A) release() followed by acquire()
B) acquire() followed by release()
C) wait() followed by signal()
D) signal() followed by wait()
A

C) wait() followed by signal()
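
A minimal sketch of the pattern with POSIX unnamed semaphores; the shared counter and worker thread are illustrative, not part of the question. Compile with -pthread.

```c
#include <pthread.h>
#include <semaphore.h>

sem_t sem;              /* binary semaphore: initial value 1 */
int counter = 0;        /* shared data (illustrative) */

void *worker(void *arg) {
    sem_wait(&sem);     /* wait(): decrement; block if the value is 0 */
    counter++;          /* critical section */
    sem_post(&sem);     /* signal(): increment; wake one waiter */
    return NULL;
}

int main(void) {
    sem_init(&sem, 0, 1);        /* initial value 1 makes it binary */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&sem);
    return 0;
}
```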

2
Q

A race condition ____
A) results when several threads try to access the same data concurrently
B) results when several threads try to access and modify the same data concurrently
C) will result only if the outcome of execution does not depend on the order in which instructions are executed
D) none of the above

A

B) results when several threads try to access and modify the same data concurrently

3
Q

An instruction that executes atomically ____
A) must consist of only one machine instruction
B) executes as a single, uninterruptible unit
C) cannot be used to solve the critical section problem
D) all of the above

A

B) executes as a single, uninterruptible unit

4
Q

A counting semaphore ____
A) is essentially an integer variable
B) is accessed through only one standard operation
C) can be modified simultaneously by multiple threads
D) cannot be used to control access to a thread’s critical sections

A

A) is essentially an integer variable

5
Q
A mutex lock ____
A) is exactly like a counting semaphore
B) is essentially a boolean variable
C) is not guaranteed to be atomic
D) can be used to eliminate busy waiting
A

B) is essentially a boolean variable

6
Q
In Peterson's solution, the ____ variable indicates whether process i is ready to enter its critical section.
A) turn
B) lock
C) flag[i]
D) turn[i]
A

C) flag[i]
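
For reference, the textbook two-process form of Peterson's solution; this is a sketch only, since on modern hardware the flag/turn accesses would need sequentially consistent atomics or memory barriers to be correct.

```c
int turn;        /* whose turn it is to enter the critical section */
int flag[2];     /* flag[i] == 1 means process i is ready to enter */

void enter_region(int i) {
    int j = 1 - i;            /* the other process */
    flag[i] = 1;              /* I am ready to enter */
    turn = j;                 /* ...but defer to the other process */
    while (flag[j] && turn == j)
        ;                     /* busy-wait while the other is ready and favored */
}

void leave_region(int i) {
    flag[i] = 0;              /* I am no longer ready */
}
```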

7
Q
A ___ type presents a set of programmer-defined operations that are provided mutual exclusion within it.
A) transaction
B) signal
C) binary
D) monitor
A

D) monitor

8
Q
____________ occurs when a higher-priority process needs to access a data structure that is currently being accessed by a lower-priority process.
A) Priority inversion
B) Deadlock
C) A race condition
D) A critical section
A

A) Priority inversion

9
Q
What is the correct order of operations for protecting a critical section using mutex locks?
A) release() followed by acquire()
B) acquire() followed by release()
C) wait() followed by signal()
D) signal() followed by wait()
A

B) acquire() followed by release()
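
A minimal pthreads sketch of the pattern; pthread_mutex_lock() and pthread_mutex_unlock() play the roles of acquire() and release(), and the shared variable is illustrative.

```c
#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared = 0;                   /* shared data (illustrative) */

void *worker(void *arg) {
    pthread_mutex_lock(&lock);    /* acquire(): block until the lock is free */
    shared++;                     /* critical section */
    pthread_mutex_unlock(&lock);  /* release(): let the next thread in */
    return NULL;
}
```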

10
Q

Does brainscape sometimes say it saved your cards, only for you to return later and find out they weren’t actually saved?

A

Yes. Fuck brainscape.

11
Q
_____ is not a technique for handling critical sections in operating systems.
A) Non-preemptive kernels
B) Preemptive kernels
C) Spinlocks
D) Peterson's solution
A

D) Peterson’s solution

12
Q
A solution to the critical section problem does not have to satisfy which of the following requirements?
A) mutual exclusion
B) progress
C) atomicity
D) bounded waiting
A

C) atomicity

13
Q
A(n) _______ refers to the segment of code in which a process is accessing/updating shared data.
A) critical section
B) entry section
C) mutex
D) test-and-set
A

A) critical section

14
Q

_____ can be used to prevent busy waiting when implementing a semaphore.
A) Spinlocks
B) Waiting queues
C) Mutex lock
D) Allowing the wait() operation to succeed

A

B) Waiting queues
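
One way a waiting queue eliminates busy waiting, sketched in user space with POSIX threads: a blocked caller sleeps on a condition variable until a signal() wakes it, rather than spinning. The csem type and function names are illustrative.

```c
#include <pthread.h>

typedef struct {
    int value;                /* the semaphore count */
    pthread_mutex_t lock;
    pthread_cond_t  queue;    /* waiting queue: blocked threads sleep here */
} csem;

void csem_init(csem *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->queue, NULL);
}

void csem_wait(csem *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)
        pthread_cond_wait(&s->queue, &s->lock);  /* sleep; no spinning */
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void csem_signal(csem *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->queue);              /* wake one sleeping waiter */
    pthread_mutex_unlock(&s->lock);
}
```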

15
Q

What is the purpose of the mutex semaphore in the implementation of the bounded-buffer problem using semaphores?
A) It indicates the number of empty slots in the buffer.
B) It indicates the number of occupied slots in the buffer.
C) It controls access to the shared buffer.
D) It ensures mutual exclusion.

A

D) It ensures mutual exclusion.
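
The standard three-semaphore structure, sketched with POSIX semaphores: mutex gives mutual exclusion on the buffer itself, while empty_slots and full_slots count free and filled slots. Buffer size and names are illustrative; init() is assumed to run before any producer or consumer starts.

```c
#include <semaphore.h>

#define N 10                    /* buffer capacity (illustrative) */
int buffer[N];
int in = 0, out = 0;

sem_t mutex;                    /* binary: mutual exclusion on the buffer */
sem_t empty_slots;              /* counts free slots, initially N */
sem_t full_slots;               /* counts filled slots, initially 0 */

void init(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
}

void producer_put(int item) {
    sem_wait(&empty_slots);     /* block if the buffer is full */
    sem_wait(&mutex);           /* enter critical section */
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&mutex);           /* leave critical section */
    sem_post(&full_slots);      /* one more item is available */
}

int consumer_get(void) {
    sem_wait(&full_slots);      /* block if the buffer is empty */
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty_slots);     /* one more slot is free */
    return item;
}
```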

16
Q
How many philosophers may eat simultaneously in the Dining Philosophers problem with 5 philosophers?
A) 1
B) 2
C) 3
D) 5
A

B) 2 (each eating philosopher needs two of the five chopsticks, so at most two can eat at once)

17
Q
_____ is not a technique for managing critical sections in operating systems.
A) Peterson's solution
B) Preemptive kernel
C) Non-preemptive kernel
D) Semaphores
A

A) Peterson’s solution

18
Q

Suppose that, when using semaphores to protect a critical section, the order of the two operations is reversed: signal() is called first, then wait(). What would be a possible outcome?
A) Starvation is possible.
B) Several processes could be active in their critical sections at the same time.
C) Mutual exclusion is still assured.
D) Deadlock is possible.

A

B) Several processes could be active in their critical sections at the same time. (Calling signal() first increments the semaphore, so multiple processes can pass the subsequent wait() together; mutual exclusion is lost.)

19
Q

Which of the following statements is true?
A) Operations on atomic integers do not require locking.
B) Operations on atomic integers do require additional locking.
C) Linux only provides the atomic_inc() and atomic_sub() operations.
D) Operations on atomic integers can be interrupted.

A

A) Operations on atomic integers do not require locking.
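
The kernel's atomic_t API is not usable from user space, but C11 <stdatomic.h> illustrates the same idea in runnable form: each increment is one indivisible read-modify-write, so no lock is needed and no update is lost.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_int counter = 0;          /* analogous to the kernel's atomic_t */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++)
        atomic_fetch_add(&counter, 1);  /* atomic, uninterruptible RMW */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%d\n", counter);     /* always 200000: no lost updates */
    return 0;
}
```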

20
Q

The OpenMP #pragma omp critical directive ___________.
A) behaves much like a mutex lock
B) does not require programmers to identify critical sections
C) does not guarantee prevention of race conditions
D) is similar to functional languages

A

A) behaves much like a mutex lock
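
A minimal illustration of the directive: entry to the block is serialized, much as if it were bracketed by acquire()/release() on a mutex. Compile with -fopenmp; the sum variable is illustrative.

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    int sum = 0;                  /* shared; concurrent += would race */
    #pragma omp parallel
    {
        #pragma omp critical      /* one thread at a time, like a mutex */
        sum += omp_get_thread_num();
    }                             /* leaving the block "releases the lock" */
    printf("sum = %d\n", sum);
    return 0;
}
```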

21
Q
Another problem related to deadlocks is ____________.
A) race conditions
B) critical sections
C) spinlocks
D) indefinite blocking
A

D) indefinite blocking

22
Q
According to Amdahl's Law, what is the speedup gain for an application that is 60% parallel when we run it on a machine with four processing cores?
A) 1.82
B) 0.7
C) 0.55
D) 1.43
A

A) 1.82
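
The arithmetic behind the answer, using the standard form of Amdahl's Law (S is the speedup, P the parallel fraction, N the number of cores):

```latex
S = \frac{1}{(1 - P) + P/N}
  = \frac{1}{(1 - 0.6) + 0.6/4}
  = \frac{1}{0.4 + 0.15}
  = \frac{1}{0.55} \approx 1.82
```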

23
Q
_________ involves distributing tasks across multiple computing cores.
A) Concurrency
B) Task parallelism
C) Data parallelism
D) Parallelism
A

B) Task parallelism

24
Q
___________ is a formula that identifies potential performance gains from adding additional computing cores to an application that has both a parallel and a serial component.
A) Task parallelism
B) Data parallelism
C) Data splitting
D) Amdahl's Law
A

D) Amdahl’s Law

25
Q

When OpenMP encounters the #pragma omp parallel directive, it ____.
A) constructs a parallel region
B) creates a new thread
C) creates as many threads as there are processing cores
D) parallelizes for loops

A

C) creates as many threads as there are processing cores
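
A minimal sketch: the directive constructs a parallel region and, by default, the runtime creates a team with one thread per processing core (the size can be overridden with a num_threads clause or OMP_NUM_THREADS). Compile with -fopenmp.

```c
#include <omp.h>
#include <stdio.h>

int main(void) {
    #pragma omp parallel          /* fork a team: one thread per core by default */
    {
        #pragma omp single        /* print once from the team */
        printf("team size = %d\n", omp_get_num_threads());
    }
    return 0;
}
```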

26
Q

______________ leads to concurrency.

a) Serialization
b) Parallelism
c) Serial processing
d) Distribution

A

b) Parallelism

27
Q

A form of parallelism based on increasing processor word size.

a) Increasing
b) Count based
c) Bit based
d) Bit level

A

d) Bit level

28
Q

A type of parallelism that uses microarchitectural techniques.

a) instructional
b) bit level
c) bit based
d) increasing

A

a) instructional

29
Q

What does MIPS stand for?

a) Mandatory Instructions/sec
b) Millions of Instructions/sec
c) Most of Instructions/sec
d) Many Instructions / sec

A

b) Millions of Instructions/sec

30
Q

The measure of the “effort” needed to maintain efficiency while adding processors.

a) Maintainability
b) Efficiency
c) Scalability
d) Effectiveness

A

c) Scalability

31
Q

In a distributed system, each processor has its own ___________

a) local memory
b) clock
c) both local memory and clock
d) none of the mentioned

A

c) both local memory and clock

32
Q

The capability of a system to adapt to increased service load is called ___________

a) scalability
b) tolerance
c) capacity
d) none of the mentioned

A

a) scalability

33
Q
The performance of a supercomputer is commonly measured in ________
A) FLOPS
B) MIPS
C) MIMD
D) Degree of multiprogramming
A

A) FLOPS

34
Q
The lowest-cost system among the following is __________
A) Supercomputer
B) Cloud server
C) Cluster
D) Grid computer
A

C) Cluster

35
Q
Which model is conceptually similar to cloud computing?
A) Supercomputer
B) Grid computing
C) Multi-core computing
D) Parallel computing
A

B) Grid computing

36
Q
The difference between parallel computing and distributed computing is ________
A) The interaction among processors
B) The memory’s architecture
C) CPU’s design
D) The instruction design
A

B) The memory’s architecture

37
Q
The cause of false sharing is _____
A) Memory consistency
B) Cache consistency
C) Sequential program
D) Code consistency
A

B) Cache consistency

38
Q

What is not true about a distributed system?

a) It is a collection of processors
b) All processors are synchronized
c) They do not share memory
d) None of the mentioned

A

b) All processors are synchronized

39
Q

What are the characteristics of processors in a distributed system?

a) They vary in size and function
b) They are the same in size and function
c) They are manufactured with a single purpose
d) They are real-time devices

A

a) They vary in size and function

40
Q

In a multiprocessor configuration, two coprocessors are connected to a host 8086 processor. The instruction sets of the two coprocessors

a) must be same
b) may overlap
c) must be disjoint
d) must be the same as that of host

A

c) must be disjoint (each coprocessor monitors the instruction stream and must be able to tell which escape instructions are its own)

41
Q

The main objective in building a multiprocessor is

a) greater throughput
b) enhanced fault tolerance
c) greater throughput and lower latency
d) none of the mentioned

A

c) greater throughput and lower latency

42
Q
Which level cache is the closest to the main memory?
A) L1 cache
B) L2 cache
C) L3 cache
D) memory-bus cache
A

C) L3 cache

43
Q
System Request Interface takes care of __________
A) Memory coherency
B) Signal synchronization
C) CPU response
D) Crossbar Interaction
A

A) Memory coherency

44
Q

Which statement is NOT correct about HyperTransport technology?
A) It is a variation of Hyper-Threading technology for AMD and Nvidia chipsets
B) A bidirectional serial/parallel high-bandwidth, low-latency point-to-point link
C) An interconnection technology for computer processors
D) Best known as the system bus architecture of modern CPUs

A

A) It is a variation of Hyper-Threading technology for AMD and Nvidia chipsets

45
Q

Which statement is NOT correct about Symmetric MultiProcessor architectures?
A) SMPs are parallel computers in which all processors access a single logical memory
B) A portion of the memory is sometimes physically near each processor on the same board
C) Each processor’s cache controller makes memory requests over the common memory bus
D) The common bus is shared among processors that utilize different instruction sets.

A

D) The common bus is shared among processors that utilize different instruction sets.

46
Q
The bottleneck of SMP architecture is _______
A) CPU instruction set
B) Memory size
C) Bus bandwidth
D) Cache size of processors
A

C) Bus bandwidth

47
Q

The bottleneck of SMP is measured by __________
A) CPU utilization
B) The number of memory requests per unit time
C) L2 hit ratio
D) The number of processor unit

A

B) The number of memory requests per unit time

48
Q

Which method can alleviate the bottleneck problem of SMP?
A) Utilizing multiple cores rather than a single core
B) Ample L2 caching
C) Increasing the CPU clock rate
D) Adding more connections to SMP processors

A

B) Ample L2 caching

49
Q
Hardware multithreading does not share ______ among hardware threads.
A) Memory bandwidth
B) TLB
C) Cache
D) Program counter
A

D) Program counter

50
Q

Snoopy bus protocols in SMP processors achieve __________
A) Data consistency between the cache memory and the shared memory
B) Bus clock synchronization between symmetric multiprocessor architectures
C) Register Consistency by adopting the write-invalidate and write-update policies
D) Detecting the intrusion, protecting the data integrity, and providing error tolerance

A

A) Data consistency between the cache memory and the shared memory

51
Q

Which statement is wrong about the crossbar interconnection?
A) Low latency and high throughput
B) More hardware circuits required
C) Easy expansion to large scales
D) Crossing wires do not connect unless a connection is shown

A

C) Easy expansion to large scales

52
Q
_________ is not a heterogeneous chip design.
A) Graphic processing units
B) Field Programmable Gate Arrays
C) AMD dual core 
D) Cell processor
A

C) AMD dual core

53
Q

Which statement is incorrect about the IBM Cell processor?
A) Very fast on memory communication
B) Mature SMP architecture
C) PowerPC cores and specialized cores are mixed
D) It has a dual memory bus connected to off-chip RAM

A

B) Mature SMP architecture

54
Q
How many neighbors are connected in a 3D torus for standard inter-processor connection?
A) 4
B) 6
C) 7
D) 8
A

B) 6 (one neighbor in each direction along each of the three dimensions)

55
Q
Which one is SISD?
A) Von Neumann computer
B) Pipelined computer 
C) Vector processor
D) Multiprocessor
A

A) Von Neumann computer

56
Q
A vector processor belongs to ______
A) SISD
B) MISD
C) SIMD
D) MIMD
A

C) SIMD

57
Q
In a 2-D interconnection network, how many adjacent vertices are there per processor?
A) 2
B) 4
C) 6
D) 8
A

B) 4

58
Q

What is the memory latency λ (lambda) used for?
A) Estimate local memory access time
B) Estimate the usual memory access time of the sequential processor
C) Estimate cache transmission rate
D) Estimate non-local memory access time

A

D) Estimate non-local memory access time

59
Q
The lower bound on the computing time in parallel computing (i.e., the best achievable) is
A) P fold
B) Ts/P
C) PTs
D) 1/P
A

B) Ts/P
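
The reasoning: with serial time T_s divided perfectly among P processors and zero overhead, the parallel time cannot drop below T_s/P, which is also why speedup is bounded by P-fold:

```latex
T_P \ge \frac{T_s}{P}
\qquad\Longrightarrow\qquad
S = \frac{T_s}{T_P} \le P
```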

60
Q
Which communication model is the fastest? 
A) Shared memory
B) Non-shared memory
C) Message passing
D) Pipe
A

A) Shared memory

61
Q
Which one is not a source of performance loss in parallel systems?
A) Non-parallelizable computation
B) CPU idling
C) Contention for resources
D) Payload
A

D) Payload

62
Q
Compared with a sequential program, which one is not an overhead of parallel computing?
A) Communication
B) Data volume 
C) Context switching
D) Thread synchronization
A

B) Data volume

63
Q
Which one is not a communication cost of shared memory?
A) Data marshalling
B) Coherency operation
C) Mutual exclusion
D) Contention
A

A) Data marshalling

64
Q
In a parallel environment, which one is not a cause of idling?
A) I/O bandwidth
B) Memory bound computation
C) Load imbalance
D) Semaphore queue
A

D) Semaphore queue

65
Q
Which pattern causes a flow (true) dependence?
A) Read after write
B) Write after read
C) Read after read
D) Write after write
A

A) Read after write

66
Q
Which one is not a metric for measuring the performance of parallel systems?
A) Execution time
B) Speedup
C) Efficiency
D) Effective access time (EAT)
A

D) Effective access time (EAT)

67
Q

Which statement is incorrect about measuring parallel performance?
A) Amdahl’s law is about speedup
B) Speedup is defined as the ratio of the old execution time to the new execution time
C) Speedup actually measures the efficiency of a parallel system
D) The speedup in Amdahl’s law is equivalent to the original definition of speedup

A

C) Speedup actually measures the efficiency of a parallel system

68
Q

Which statement is incorrect about speedup?
A) For the purpose of computing speedup, we always consider the best sequential program as the baseline
B) Candidate sequential algorithms may have different asymptotic runtimes and may be parallelizable to different degrees
C) The theoretical upper bound of speedup is p-fold
D) Superlinear speedup offers outstanding parallel performance

A

D) Superlinear speedup offers outstanding parallel performance

69
Q

Which statement is incorrect about superlinear speedup?
A) Since it seems counterintuitive, superlinear speedup barely occurs
B) One reason for superlinearity is that less work is done than in the corresponding serial algorithm
C) Superlinear speedup means running more than p times faster than the sequential counterpart
D) The higher aggregate cache/memory bandwidth can result in better cache-hit ratios, and therefore superlinearity

A

A) Since it seems counterintuitive, superlinear speedup barely occurs

70
Q

Which statement is incorrect about parallel efficiency?
A) Efficiency is a measure of the fraction of time for which a processing element is usefully employed
B) Mathematically, it is given by Efficiency = speedup / p
C) An ideal efficiency of p indicates linear speedup and that all processors are being used at full capacity
D) Extremely low efficiency shows the marginal benefit of scaling up the number of processors

A

C) An ideal efficiency of p indicates linear speedup and that all processors are being used at full capacity

71
Q

Which statement is correct about performance scalability?
A) Scalable performance is not hard to achieve in modern parallel systems
B) Increasing the number of processors often causes high efficiency
C) Decreasing the number of processors often causes low efficiency
D) Measuring scalability is often associated with the metric of efficiency

A

D) Measuring scalability is often associated with the metric of efficiency

72
Q

What is not implied by scalable performance?
A) Get better speedup and efficiency by using slower cores
B) Faster CPUs and more on-chip cores are efficient ways to scale up parallel performance
C) A trade-off between communication costs and computation costs will not become significant until the number of processors is quite large
D) Scaling up the number of cores doesn’t mean higher efficiency

A

B) Faster CPUs and more on-chip cores are efficient ways to scale up parallel performance