Threads Flashcards

1
Q

What is the flow of the multithreaded server architecture?

A

CLIENT > requests to > SERVER > creates thread to service request >
SERVER > continues to listen for additional client requests
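
A minimal sketch of this flow using POSIX sockets and Pthreads (illustrative only: the port number, the handle_client name, and the omitted error handling are assumptions, not part of the card):

/* One thread per client request; the main thread keeps listening. */
#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

static void *handle_client(void *arg)          /* worker thread services one request */
{
    int client = *(int *)arg;
    free(arg);
    /* ... read request, write response ... */
    close(client);
    return NULL;
}

int main(void)
{
    int server = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                         /* illustrative port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(server, (struct sockaddr *)&addr, sizeof(addr));
    listen(server, 16);

    for (;;) {                                            /* SERVER keeps listening */
        int *client = malloc(sizeof *client);
        *client = accept(server, NULL, NULL);             /* CLIENT request arrives */
        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, client);/* thread services request */
        pthread_detach(tid);
    }
}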

2
Q

Four benefits of Threads

A

Resource sharing
Responsiveness
Scalability
Economy

3
Q

A benefit of threading which may allow continued execution if part of the process is blocked (which is important for UI)

A

Responsiveness

4
Q

A benefit of threads wherein threads share the resources of the process
(Easier than shared memory or message passing)

A

Resource sharing

5
Q

Benefit of threads since

thread creation is CHEAPER than process creation;
Thread switching has LOWER OVERHEAD than context switching

A

Economy

6
Q

Benefit of threads wherein the process can take advantage of a multiprocessor architecture

A

Scalability

7
Q

_______ systems put pressure on the programmer

A

Multicore / multiprocessor

8
Q

Challenges faced because of using multicore / multiprocessor systems

A

Dividing activities
Balancing
Data dependency
Data splitting
Testing and debugging

9
Q

____ implies that a system can perform more than one task SIMULTANEOUSLY

A

Parallelism

10
Q

___ supports more than one task making progress
(On a single processor/core, only this is possible)
(The scheduler is responsible for this)

A

Concurrency

11
Q

A type of parallelism wherein you distribute SUBSETS of the SAME DATA across multiple cores and perform the SAME operation on each

A

Data parallelism
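
A small sketch of the idea with Pthreads (the array, its size, and the sum_range helper are illustrative assumptions): the SAME operation (summing) runs on different SUBSETS of the data.

#include <pthread.h>
#include <stdio.h>

#define N 1000
static int data[N];
static long partial[2];

struct range { int lo, hi, slot; };

static void *sum_range(void *arg)
{
    struct range *r = arg;
    long s = 0;
    for (int i = r->lo; i < r->hi; i++)
        s += data[i];
    partial[r->slot] = s;                 /* each thread writes only its own slot */
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++) data[i] = i;

    struct range halves[2] = { { 0, N / 2, 0 }, { N / 2, N, 1 } };
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, sum_range, &halves[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);

    printf("sum = %ld\n", partial[0] + partial[1]);
    return 0;
}

Task parallelism, by contrast, would give each thread a different function to run.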

12
Q

A type of parallelism wherein you distribute threads across cores and perform DIFFERENT OPERATIONS per thread

A

Task parallelism

13
Q

An increase in the number of threads implies that ___

A

Architectural support for threading also grows

14
Q

CPUs have cores and h____

E.g.: Oracle SPARC T4

A

hardware threads

15
Q

For every thread in a multithreaded process, there is a unique ____

A

Stack

Registers

16
Q

This identifies the PERFORMANCE GAINS from ADDING ADDITIONAL CORES to an application which has serial and parallel components

A

Amdahl’s Law

17
Q

The higher the serial portion of an application, the ___ the performance gained by adding additional cores

A

Lesser

18
Q

Formula for Amdahl’s law

A

Speed-up ≤ 1 / (S + (1 - S) / N), where S is the serial portion and N is the number of cores
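
A quick worked example (the numbers are assumptions for illustration): with a serial portion S = 0.25 and N = 4 cores, speed-up ≤ 1 / (0.25 + 0.75 / 4) = 1 / 0.4375 ≈ 2.29, well below the ideal 4×.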

19
Q

As N reaches infinity, speedup approaches __

A

1/serial portion


38
Q

Threads managed by USER LEVEL THREAD LIBRARIES

A

User threads

39
Q

Three primary thread libraries

A

POSIX pthreads
Windows threads
Java threads

40
Q

Threads supported by the kernel

Ex: all general purpose OS

A

Kernel threads

41
Q

Example of kernel threads

A

Windows
Mac OS
Tru64 UNIX

42
Q

A multithreading model wherein many user-level threads are mapped to a single kernel thread
Give examples

A

Many-to-one model

Solaris Green Threads
GNU Portable Threads

43
Q

A multithreading model wherein creating a user-level thread also creates a kernel thread

This model provides more concurrency than the many-to-one model, but the number of threads is RESTRICTED due to OVERHEAD

Give examples

A

One-to-one model

Windows, Linux, Solaris 9 and later

44
Q

A multithreading model wherein many user threads are mapped to many kernel threads, which allows the OS to create a SUFFICIENT number of kernel threads

Give examples

A

Many-to-many model

Solaris prior to version 9
Windows with the ThreadFiber package

45
Q

A multithreading model which is similar to many-to-many, except it allows a user thread to be BOUND to a kernel thread

Give examples

A

Two-level model

IRIX
HP-UX
Tru64 UNIX
Solaris 8 and earlier

46
Q

___ provides programmers with an API for creating and managing threads

A

Thread library

47
Q

Two ways of implementing thread libraries

A

Library is ENTIRELY in USER SPACE

KERNEL-LEVEL LIBRARY supported by the OS

48
Q

A POSIX standard API for thread creation and synchronization

SPECIFIES the behaviour of the thread library; the IMPLEMENTATION is up to the developers of the library

A

Pthreads
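
A minimal Pthreads sketch (the worker function and message are illustrative, not part of the standard):

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)                 /* runs in the newly created thread */
{
    printf("hello from thread %d\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, worker, &id);   /* create the thread */
    pthread_join(tid, NULL);                   /* wait for it to finish */
    return 0;
}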

49
Q

Threads managed by the JVM

What are two ways to implement this?

A

Java Threads

Extend the Thread class
Implement the Runnable interface

50
Q

Creation and management of threads are done by compilers and run-time libraries rather than programmers

What are three methods using this?

A

Implicit threading

Thread pools
OpenMP
Grand Central Dispatch

*Other methods include Intel Threading Building Blocks (TBB) and the java.util.concurrent package

51
Q

This method creates a number of threads IN A POOL where they await work

What are advantages of this method?

A

Thread Pools

  • slightly FASTER to service a request with an existing thread than to create a new one
  • allows the number of threads to be BOUND to the size of the pool
  • CAN SEPARATE the TASKS to be performed, which allows different strategies for running tasks

52
Q

What API supports thread pools

A

Windows API

53
Q

A set of COMPILER DIRECTIVES and an API for C, C++, and Fortran which provides support for parallel programming in SHARED-MEMORY environments

A

OpenMP

54
Q

Blocks of code that can run in parallel, as identified by OpenMP

A

Parallel regions

55
Q

This compiler directive creates as many threads as there are cores

A

#pragma omp parallel
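
A minimal sketch of the directive in use (assuming a compiler with OpenMP support, e.g. built with gcc -fopenmp):

#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* Creates as many threads as there are cores; each executes the parallel region. */
    #pragma omp parallel
    {
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}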

56
Q

This method is an Apple technology for Mac OS X and iOS which allows IDENTIFICATION OF PARALLEL SECTIONS and manages most of the details of THREADING

Where is the block placed if you were to use this method?

A

Grand Central Dispatch

Blocks, written as ^{ }, which are placed on a dispatch queue
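
A minimal sketch (assuming macOS with Clang, where blocks and libdispatch are available; the dispatch group is only there so the example terminates cleanly):

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void)
{
    /* One of the system-wide concurrent queues (default priority). */
    dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t g = dispatch_group_create();

    /* The block ^{ } is placed on the dispatch queue; GCD manages the threading. */
    dispatch_group_async(g, q, ^{
        printf("running on a GCD-managed thread\n");
    });

    dispatch_group_wait(g, DISPATCH_TIME_FOREVER);   /* wait for the block to finish */
    return 0;
}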

57
Q

A type of dispatch queue wherein blocks are removed in FIFO order; the queue is PER PROCESS and is called the ___

A

Serial dispatch queues; main queue

58
Q

Type of dispatch queue wherein blocks are removed in FIFO order, but several may be removed at a time

There are three system-wide queues with priorities __, __, __

A

Concurrent dispatch queue

Low, default, high

59
Q

5 Threading issues

A

Semantics of fork() and exec() system calls
Signal handling
Thread cancellation
Thread-local storage
Scheduler activations

60
Q

What is the threading issue with fork() and exec()?

A

fork() may either:

  1. Duplicate only the calling thread
  2. Duplicate all threads

exec() replaces the entire running process, including all threads
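
A small sketch of these semantics (illustrative; the "ls" command is an arbitrary choice):

#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();      /* in a multithreaded parent, whether only the calling
                                thread or all threads are duplicated is the question */
    if (pid == 0)
        execlp("ls", "ls", "-l", (char *)NULL);  /* replaces the whole process image,
                                                    all threads included */
    else
        wait(NULL);
    return 0;
}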

61
Q

These are used in UNIX systems to notify a process that a particular event has occurred

What is used to process these?

A

Signals

Signal handlers

62
Q

How does a signal reach a signal handler?

A

Signal is generated by an event > signal is delivered to a process > signal is handled by one of two signal handlers (default/kernel or user-defined)

63
Q

Every signal has a __ that the kernel runs when handling the signal; ___ can override these

A

Default handler; user-defined signal handlers
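
A minimal sketch of a user-defined handler overriding the default action (the handler name and message are illustrative):

#include <signal.h>
#include <unistd.h>

static void on_sigint(int sig)          /* user-defined signal handler */
{
    (void)sig;
    write(STDOUT_FILENO, "caught SIGINT\n", 14);
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);       /* override the default handler for SIGINT */

    pause();                            /* wait until a signal is delivered */
    return 0;
}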

64
Q

What is the threading issue of signal handling?

A

The issue is where the signal should be delivered in a multithreaded process:

  • deliver to the thread to which the signal applies
  • deliver to every thread in the process
  • deliver to certain threads in the process
  • assign a specific thread to receive all signals for the process

*If single-threaded, the signal is delivered to the process

65
Q

A threading issue wherein you terminate a thread before it has finished; the thread to be cancelled is called the ___

A

Thread cancellation; target thread

66
Q

A general approach to thread cancellation wherein you terminate the target thread immediately

A

Asynchronous cancellation

67
Q

A general approach to thread cancellation wherein the target thread periodically checks whether it should be cancelled

A

Deferred cancellation

68
Q

What is the issue on thread cancellation?

A

Invoking thread cancellation only REQUESTS cancellation; actual cancellation depends on the thread's state

If the thread's cancellation state is disabled, cancellation remains pending until the thread enables it

69
Q

What is the default type of thread cancellation, wherein the thread is cancelled only when it reaches a CANCELLATION POINT?

What handler is invoked by this type of cancellation?

Give an example

A

Deferred cancellation

Cleanup handler

pthread_testcancel()
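
A small sketch of deferred cancellation with Pthreads (the worker loop is illustrative):

#include <pthread.h>
#include <unistd.h>

static void *worker(void *arg)
{
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();      /* cancellation point: a pending cancel takes effect here */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(tid);           /* only REQUESTS cancellation */
    pthread_join(tid, NULL);       /* returns once the target thread has actually cancelled */
    return 0;
}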

70
Q

This allows each thread to have its own copy of data which is useful if you do not have control over the creation of threads

A

Thread local storage

71
Q

Difference between TLS and local variables

A

TLS data is visible across function invocations, while local variables are visible only during a single function invocation
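
A small sketch of the contrast (uses the GCC/Clang __thread keyword; C11 spells it _Thread_local):

#include <pthread.h>
#include <stdio.h>

static __thread int counter = 0;   /* thread-local: one copy per thread,
                                      and it persists across function calls */

static void bump(void)
{
    /* counter keeps its value across calls to bump() within the same thread;
       a plain local variable here would be recreated on every invocation */
    counter++;
}

static void *worker(void *arg)
{
    bump();
    bump();
    printf("this thread's counter = %d\n", counter);   /* prints 2 in each thread */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}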

72
Q

A form of communication used to MAINTAIN THE APPROPRIATE NUMBER OF KERNEL THREADS ALLOCATED TO AN APPLICATION

A

Scheduler activations

73
Q

___ are the communication mechanism used by scheduler activations, sent from the kernel to the ___ in the thread library

A

Upcalls

Upcall handler

74
Q

What is the issue in scheduler activations?

A

Scheduler activations use an intermediate data structure called a LIGHTWEIGHT PROCESS (LWP) between the user and kernel threads, which serves as a VIRTUAL PROCESSOR

  • Does each kernel thread need an LWP attached to it?
  • How many LWPs should be created?

75
Q

The primary API for Windows, which implements the one-to-one model with kernel-level threads

A

Windows API

76
Q

Each thread contains

A

  • thread ID
  • CONTEXT (register set, stacks, private storage area)

77
Q

What does the register set represent in a thread?

A

State of processor

78
Q

What is the purpose of stacks in a thread?

A

Separate user and kernel stacks are used when the thread runs in user or kernel mode

79
Q

What is the purpose of the private storage area in a thread?

A

This area is used by RUN-TIME LIBRARIES and DYNAMIC LINK LIBRARIES(DLLs)

80
Q

A primary data structure of a thread which has a POINTER TO

  • the process to which the thread belongs
  • the KTHREAD in kernel space

A

ETHREAD (executive thread block)

81
Q

A primary data structure of a thread which holds SCHEDULING AND SYNC info, the KERNEL-MODE STACK, and a pointer to the TEB; it resides in kernel space

A

KTHREAD (Kernel thread block)

82
Q

A primary data structure of a thread which contains the thread ID, USER-MODE STACK, and TLS; it resides in user space

A

TEB (Thread environment block)

83
Q

What is linux’s term for thread?

A

Task

84
Q

Thread creation in Linux is done by the __, which allows the child to share the address space of the parent task (process)

A

clone() system call

85
Q

Flag controlling clone() behaviour where file-system information is shared

A

CLONE_FS

86
Q

Flag controlling clone() behaviour where the same memory space is shared

A

CLONE_VM

87
Q

Flag controlling clone() behaviour where signal handlers are shared

A

CLONE_SIGHAND

88
Q

Flag controlling clone() behaviour where the set of open files is shared

A

CLONE_FILES
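
A minimal Linux sketch combining these flags (GNU extensions; the stack size and child function are illustrative assumptions):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int child_fn(void *arg)
{
    (void)arg;
    printf("child task shares memory, fs info, open files and signal handlers\n");
    return 0;
}

int main(void)
{
    size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);
    /* These flags make the new task behave like a thread of the parent. */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
    pid_t pid = clone(child_fn, stack + stack_size, flags, NULL);  /* stack grows down */
    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}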

89
Q

In Linux threads, ___ points to the PROCESS DATA STRUCTURES (SHARED OR UNIQUE)

A

struct task_struct