Multithreading / Concurrency Flashcards

1
Q

Concurrency

A

the execution of multiple instruction sequences at the same time. It occurs in the operating system when several process threads run in parallel. The running threads communicate with each other through shared memory or message passing. Because concurrency involves sharing resources, it can lead to problems such as deadlocks and resource starvation.

It underpins techniques such as coordinating the execution of processes, allocating memory, and scheduling execution to maximize throughput.
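A minimal Python sketch of the message-passing style of communication mentioned above; the queue, sentinel value, and thread names are illustrative assumptions, not part of the card:

```python
import queue
import threading

# Message passing between two concurrent threads via a thread-safe queue
# (the alternative to communicating through shared memory).
q = queue.Queue()
received = []

def producer():
    for i in range(3):
        q.put(i)       # send a message
    q.put(None)        # sentinel: no more messages

def consumer():
    while True:
        item = q.get() # blocks until a message arrives
        if item is None:
            break
        received.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(received)  # [0, 1, 2]
```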

2
Q

motivations for allowing concurrent execution

A

Physical resource sharing: multiple users must share limited hardware resources.

Logical resource sharing: several processes may need the same piece of information (e.g., a shared file).

Computation speedup: a task can be split into subtasks that execute in parallel.

Modularity: system functions can be divided into separate processes.

3
Q

Relationship Between Processes

       Processes executing in the operating system are of one of the following two types:
A

Independent Processes
Cooperating Processes

4
Q

Independent processes

A

Its state is not shared with any other process.

The result of execution depends only on the input state.

The result of execution will always be the same for the same input (deterministic).

The termination of an independent process does not terminate any other process.

5
Q

Cooperating processes

A

Its state is shared with other processes.

The result of execution depends on the relative execution sequence and cannot be predicted in advance (non-deterministic).

The result of execution will not always be the same for the same input.

The termination of a cooperating process may affect other processes.

6
Q

Operation on a process

A

Most systems support at least two operations that can be invoked on a process: process creation and process deletion (termination).

7
Q

Process creation

A

A parent process creates child processes, which may in turn create their own children. When a new process is created, several possible implementations exist:

The parent and child execute concurrently.

The parent waits until all of its children have terminated.

The parent and children share all resources in common.

The children share only a subset of their parent's resources.

The parent and children share no resources in common.
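A minimal sketch of two of the options above (parent and child executing concurrently, then the parent waiting for the child to terminate) using POSIX fork in Python; the pipe and the message text are illustrative assumptions, and this runs on Unix-like systems only:

```python
import os

# Parent/child process creation with fork (POSIX only).
r, w = os.pipe()          # a channel so the child can report back
pid = os.fork()           # creates the child; both now run concurrently
if pid == 0:
    # Child: inherits copies of the parent's resources at fork time.
    os.close(r)
    os.write(w, b"child done")
    os._exit(0)
else:
    # Parent: waits until its child has terminated.
    os.close(w)
    msg = os.read(r, 100)
    os.waitpid(pid, 0)
    print(msg.decode())   # child done
```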

8
Q

process termination

A

A child process can be terminated in the following ways:

A parent may terminate the execution of one of its children for the following reasons:

The child has exceeded its allocated resource usage.

The task assigned to the child is no longer required.

The parent itself has terminated, in which case its children must also be terminated (cascading termination).
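One of these cases (the task assigned to the child is no longer required) can be sketched with POSIX signals in Python; the sleep loop stands in for the child's work, and the whole example is an illustrative assumption that runs on Unix-like systems only:

```python
import os
import signal
import time

pid = os.fork()
if pid == 0:
    # Child: loops until the parent decides its task is no longer required.
    while True:
        time.sleep(0.05)
else:
    os.kill(pid, signal.SIGTERM)    # parent terminates the child
    _, status = os.waitpid(pid, 0)  # collect the child's exit status
    print(os.WIFSIGNALED(status))   # True: the child ended via a signal
```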

9
Q

Principles of concurrency

A

Both interleaved and overlapped processes can be viewed as examples of concurrent processes, and both present the same problems.
The relative speed of execution cannot be predicted; it depends on the following:

The activities of other processes

The way operating system handles interrupts

The scheduling policies of the operating system

10
Q

problems in concurrency

A

Sharing global resources –
Sharing global resources safely is difficult. If two processes both read and write a global variable, the order in which the reads and writes execute is critical.

Optimal allocation of resources –
It is difficult for the operating system to allocate resources optimally.

Locating programming errors –
Programming errors are very hard to locate because failures are usually not reproducible.

Locking the channel –
It may be inefficient for the operating system to simply lock a channel and prevent its use by other processes.
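The global-variable problem above can be made concrete in Python (the counter, thread count, and iteration count are illustrative). The increment is a read-modify-write; guarding it with a lock serializes the three steps so no update is lost:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # serialize the read-modify-write on the shared variable
            counter += 1  # without the lock, concurrent updates could be lost

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; without it, some increments may vanish
```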

11
Q

Advantages of concurrency

A

Running of multiple applications –
Concurrency makes it possible to run multiple applications at the same time.

Better resource utilization –
Resources left unused by one application can be used by other applications.

Better average response time –
Without concurrency, each application must run to completion before the next one can start; with it, short requests need not wait behind long ones.

Better performance –
If one application uses only the processor while another uses only the disk drive, running both concurrently to completion takes less time than running them consecutively.

12
Q

Disadvantages of concurrency

A

Multiple applications must be protected from one another.

Multiple applications must be coordinated through additional mechanisms.

The operating system incurs additional performance overhead and complexity for switching among applications.

Running too many applications concurrently can lead to severely degraded performance.

Issues of concurrency:

Non-atomic –
Operations that are non-atomic but can be interrupted by multiple processes can cause problems.

Race conditions –
A race condition occurs when the outcome depends on which of several processes reaches a point first.

Blocking –
Processes can block while waiting for resources. A process could be blocked for a long period waiting for input from a terminal; if the process must periodically update some data, this is very undesirable.

Starvation –
Starvation occurs when a process is perpetually denied the service it needs to make progress.

Deadlock –
Deadlock occurs when two (or more) processes are each blocked waiting for a resource held by the other, so neither can proceed.
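A common remedy for the deadlock case above is to make every process acquire locks in one agreed global order, which breaks the circular wait. This Python sketch (lock and thread names are illustrative) shows two threads that both follow the order "a before b" and therefore complete safely:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

def transfer(name):
    # Both threads acquire the locks in the same global order (a, then b).
    # If one thread took a-then-b and the other b-then-a, each could end up
    # holding one lock while waiting for the other: a deadlock.
    with lock_a:
        with lock_b:
            done.append(name)

t1 = threading.Thread(target=transfer, args=("t1",))
t2 = threading.Thread(target=transfer, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # ['t1', 't2']
```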

13
Q

multithreading

A

the ability of a central processing unit (CPU) (or a single core in a multi-core processor) to provide multiple threads of execution concurrently, supported by the operating system. This approach differs from multiprocessing. In a multithreaded application, the threads share the resources of a single or multiple cores, which include the computing units, the CPU caches, and the translation lookaside buffer (TLB).
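The resource sharing described above can be illustrated in Python: threads of one process operate on the same objects in the same address space, so nothing needs to be copied between them (the list and worker names are illustrative):

```python
import threading

# Threads of a single process share its address space:
# all workers append to the same list object.
shared = []

def worker(tag):
    shared.append(tag)  # list.append is atomic in CPython

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2]
```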

14
Q

Advantages of multithreading

A

If a thread gets a lot of cache misses, the other threads can continue taking advantage of the unused computing resources, which may lead to faster overall execution, as these resources would have been idle if only a single thread were executed. Also, if a thread cannot use all the computing resources of the CPU (because instructions depend on each other’s result), running another thread may prevent those resources from becoming idle.

15
Q

Disadvantages of multithreading

A

Multiple threads can interfere with each other when sharing hardware resources such as caches or translation lookaside buffers (TLBs). As a result, execution times of a single thread are not improved and can be degraded, even when only one thread is executing, due to lower frequencies or additional pipeline stages that are necessary to accommodate thread-switching hardware.

Overall efficiency varies; Intel claims up to 30% improvement with its Hyper-Threading Technology, while a synthetic program just performing a loop of non-optimized dependent floating-point operations actually gains a 100% speed improvement when run in parallel. On the other hand, hand-tuned assembly language programs using MMX or AltiVec extensions and performing data prefetches (as a good video encoder might) do not suffer from cache misses or idle computing resources. Such programs therefore do not benefit from hardware multithreading and can indeed see degraded performance due to contention for shared resources.

From the software standpoint, hardware support for multithreading is more visible to software, requiring more changes to both application programs and operating systems than multiprocessing. Hardware techniques used to support multithreading often parallel the software techniques used for computer multitasking. Thread scheduling is also a major problem in multithreading.

16
Q

Coarse-grained multithreading

A

The simplest type of multithreading occurs when one thread runs until it is blocked by an event that would normally create a long-latency stall. Such a stall might be a cache miss that has to access off-chip memory, which can take hundreds of CPU cycles for the data to return. Instead of waiting for the stall to resolve, a threaded processor switches execution to another thread that is ready to run. Only when the data for the previous thread has arrived is that thread placed back on the list of ready-to-run threads.

17
Q

Fine-grained multithreading

A

The purpose of fine-grained multithreading is to remove all data-dependency stalls from the execution pipeline. Since each thread is relatively independent of the others, there is less chance that an instruction in one pipeline stage will need an output from an older instruction still in the pipeline. Conceptually, it is similar to preemptive multitasking in operating systems; the analogy is that the time slice given to each active thread is one CPU cycle.

18
Q

Simultaneous multithreading

A

The most advanced type of multithreading applies to superscalar processors. Whereas a normal superscalar processor issues multiple instructions from a single thread every CPU cycle, in simultaneous multithreading (SMT) a superscalar processor can issue instructions from multiple threads every CPU cycle. Recognizing that any single thread has a limited amount of instruction-level parallelism, this type of multithreading tries to exploit parallelism available across multiple threads to decrease the waste associated with unused issue slots.