Part 2 Review Questions Flashcards
What is a thread?
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically part of the operating system.
What is a “heavy-weight process?”
A normal process under an Operating System (OS) is a “heavy-weight process.” The OS provides an independent address space for each such process to keep different users and services separated. Switching from one such process to another is time-consuming, partly because the address-space mapping maintained by the Memory Management Unit (MMU) must be switched as well.
Why do we call a thread a “light-weight process (LWP)?”
A thread is called a Light-Weight Process (LWP) because it runs under the address space of a regular (heavy-weight) process, and LWPs under the same process may share, e.g., variables. Switching from one LWP to another is much faster than switching from one heavy-weight process to another because there is less to manage, and the MMU is not involved.
What is the difference between a thread and a process?
A- Threads within the same process run in shared memory space, while processes run in separate memory spaces.
B- Processes are independent of one another: they do not share their code, data, or OS resources. Threads, by contrast, share with the other threads of their process the code section, the data section, and OS resources (such as open files and signals). But, like a process, each thread has its own program counter (PC), register set, and stack space.
Are there situations in which we should use “multithreading?”
Multithreading has many advantages, but in the following two cases in particular, multithreading is preferable to a single-threaded process:
A- Processing power: If you have a multi-core computer system, multithreading lets the work run on several cores at once.
B- Multithreading avoids priority inversion, where a low-priority activity, such as accessing the disk, blocks a high-priority activity, such as the user interface responding to a request.
What is an example where having a single thread is preferred over multithreading?
If we are waiting for a user response, or for data to arrive over the network, it is useless to assign several threads to wait for the same event.
How would a web server act under a multithreading system?
The server listens for a new client to request a transaction. The server then assigns a thread to the requesting client and goes back to listening for the next client.
What is the difference between running four threads on a single-core processor and running the same number of threads on a double-core processor?
On a single-core processor, all of the threads take a turn in a round-robin fashion. This is known as “concurrency.” On a double core processor, two threads run on one core, and the other two would run on the second core. This parallel running of threads on multiple cores is known as “parallelism.”
What are the four benefits of multithreading?
A- Responsiveness: If a process is divided among multiple threads, then when one part of the process is blocked, the other parts can go on.
B- Resource sharing: different threads of a process can share the code and memory of that process.
C- Economy: Starting a new thread is much easier and faster than creating a new process.
D- Scalability: A multithreaded process runs faster if we transfer it to a hardware platform with more processors.
What are the challenges that programmers face when they design the code for multiprocessors?
A- Dividing activities: finding areas that can be divided into separate and concurrent tasks.
B- Balance: programmers must ensure that the concurrent tasks do comparable amounts of work, in terms of both complexity and execution time.
C- Data splitting: Data should be split, in a balanced manner, among already split concurrent tasks.
D- Data dependency: The programmer should make sure that different tasks that are running concurrently do not have data dependence.
E- Testing and debugging: Many different execution paths are possible, which makes testing and debugging more complicated than for single-threaded applications.
What are the two types of parallelism?
A- Data parallelism: Data is divided into subsets, and each subset is sent to a different thread. Each thread performs the same operation on its subset.
B- Task parallelism: The whole data is sent to different threads, and each thread does a separate operation.
How do we compute speedup using Amdahl’s Law?
If a fraction s of the task is serial and the remaining fraction 1 − s can be divided among N cores, then speedup = 1 / (s + (1 − s)/N).
Suppose that 50% of a task can be divided equally among ten threads and each thread will run on a different core. A) What will be the speedup of this multithreading system as compared to running the whole task as a single thread? B) What will be the speedup of part (A) if we could send 90% of the job to ten threads?
A- 1.8
B- 5.26
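The two answers follow directly from Amdahl's law, worked out:

```latex
% Amdahl's law: with serial fraction s and parallel fraction 1 - s
% divided over N cores,
\[
  \text{speedup} = \frac{1}{s + \frac{1-s}{N}}
\]
% Part A: s = 0.5, N = 10
\[
  \frac{1}{0.5 + 0.5/10} = \frac{1}{0.55} \approx 1.8
\]
% Part B: s = 0.1, N = 10
\[
  \frac{1}{0.1 + 0.9/10} = \frac{1}{0.19} \approx 5.26
\]
```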
What is the upper bound in Amdahl’s law?
The upper bound means that no matter how much you increase the number of threads (N), the speedup would not go beyond speedup = 1/s. For example, if the serial part of the code is 1% at most, the speedup would be at most 1/0.01 or 100, no matter how many processors you use. Hence, if the serial part is 1%, the upper bound of speedup for such code is 100.
In the context of “Amdahl’s law,” what is the meaning of the “diminishing returns?”
The upper bound of the speedup, 1/s, is still an optimistic estimate. As the number of processors and threads increases, the overhead of managing them increases too. Beyond some point, adding threads can actually reduce performance, so the measured speedup falls well short of 1/s. This is known as diminishing returns: sometimes a smaller number of threads results in higher performance.
What are the three popular user-level thread libraries?
POSIX Pthreads, Windows threads, and Java threads.
What is the relationship between user threads and kernel threads?
User threads run within a user process. Kernel threads are used to provide privileged services to processes (such as system calls). The kernel also uses them to keep track of what is running on the system, how much of which resources are allocated to what process, and to schedule them. Hence, we do not need to have a one-to-one relationship between user threads and kernel threads.
A) In the relationship between user and kernel threads, what is the “many-to-one model?”
B) What is the shortcoming of this model?
A) Before the idea of threads became popular, OS kernels only knew about processes. The OS treated each process as a separate entity: each process was assigned its own working space and could make system calls to request services. Threading in user space was not handled by the OS. With user-mode threading, support for threads was provided by a programming library, and the thread scheduler was a subroutine in the user program itself. The operating system would see only the process, and the process would schedule its threads by itself.
B) If one of the user threads made a blocking system call, or caused a blocking event such as a page fault, all the other threads in the process were blocked as well.
What is the “one-to-one” threading model? What are its advantages and shortcomings?
Each user thread is assigned a kernel thread. Hence, we can achieve more concurrency, and threads can proceed while one thread is blocked. The disadvantage occurs when there are too many user threads, which may burden the performance of the operating system.
What is a “many-to-many” multithreading model?
The OS decides the number of kernel threads, and the user process determines the number of user threads. A process that runs on an eight-core processor would have more kernel threads than the one which runs on a quad-core processor. This model does not suffer from either of the shortcomings of the other two models.
What is pthread?
It is the POSIX (Portable Operating System Interface) thread library, which provides programmers with an application program interface (API) for creating and managing threads.
What is synchronous threading?
After creating the threads, the parent has to wait for the children to terminate before it can resume operation.
For thread programming in C or C++ using pthread what header file should be included?
#include <pthread.h> (and compile/link with the -pthread flag)
What does the following piece of code do?
[The code listing is not reproduced in this copy.]
It uses a function to perform summation. The main program sequentially calls the function.