Final Study Flashcards

1
Q

What is a thread?

A

In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.

2
Q

What is a “heavy-weight process?”

A

A normal process under an Operating System (OS) is a “heavy-weight process.” The OS provides an independent address space for each such process to keep different users and services separated. Switching from one such process to another is time-consuming, partly because the Memory Management Unit (MMU) must be reconfigured for the new address space.

3
Q

Why do we call a thread a “light-weight process (LWP)?”

A

A thread is called a Light-Weight Process (LWP) because it runs within the address space of a regular (heavy-weight) process, and LWPs under the same process may share, e.g., variables. Switching from one LWP to another is much faster than switching between heavy-weight processes because there is less state to manage and the address space, and hence the MMU configuration, does not change.

4
Q

What is the difference between a “Thread” and a “Process”?

A

Threads within the same process run in a shared memory space, while processes run in separate memory spaces. Processes are independent of one another; they do not share their code, data, or OS resources. Threads, in contrast, share with the other threads of their process the code section, the data section, and OS resources (like open files and signals). But, like a process, each thread has its own program counter (PC), register set, and stack space.

5
Q

Are there situations in which multithreading is preferable?

A

Multithreading has many advantages, but in the following two cases it is especially preferable to a single-threaded process: A- Processing power: if you have a multi-core computer system, multithreading can use all of the cores. B- Responsiveness: multithreading avoids a form of priority inversion, where a low-priority activity, such as accessing the disk, blocks a high-priority activity, such as the user interface responding to a request.

6
Q

What is an example where having a single thread is preferred over multithreading?

A

If we are waiting for a user response, or for data to arrive over the network, it is useless to assign several threads to wait for the same thing.

7
Q

How would a web server act under a multithreading system?

A

The server listens for a new client to request a transaction. The server then assigns a thread to the requesting client and goes back to listening for the next client.

8
Q

What is the difference between running four threads on a single-core processor and running the same number of threads on a dual-core processor?

A

On a single-core processor, all of the threads take turns in a round-robin fashion. This is known as “concurrency.” On a dual-core processor, two threads run on one core, and the other two run on the second core. This simultaneous running of threads on multiple cores is known as “parallelism.”

9
Q

What are the four benefits of multithreading?

A

A- Responsiveness: If a process is divided among multiple threads, then if one part of the process is blocked, the other parts could go on. B- Resource sharing: different threads of a process can share the code and memory of that process. C- Economy: Starting a new thread is much easier and faster than creating a new process. D- Scalability: A multithreaded process runs faster if we transfer it to a hardware platform with more processors.

10
Q

What are the challenges that programmers face when they design the code for multiprocessors?

A

A- Dividing activities: finding areas that can be divided into separate, concurrent tasks. B- Balance: programmers must ensure that the different tasks are of equal value in terms of complexity and execution time. C- Data splitting: data should be split, in a balanced manner, among the concurrent tasks. D- Data dependency: the programmer must make sure that tasks running concurrently do not depend on each other’s data. E- Testing and debugging: many different execution paths are possible, which makes testing and debugging more complicated than for single-threaded applications.

11
Q

What are the two types of parallelism?

A

A- Data parallelism: data is divided into subsets, and each subset is sent to a different thread. Each thread performs the same operation. B- Task parallelism: the whole data set is available to the different threads, and each thread performs a different operation.

12
Q

How do we compute “speedup” using Amdahl’s law?

A

Speedup = 1/(S+((1-S)/N)). In this formula, S is the portion of the task that has to be performed serially, and (1-S) is the part of the task that can be distributed on N processors. Speedup indicates how much faster the task is running on these N processors as compared to when it was running serially.

13
Q

Suppose that 50% of a task can be divided equally among ten threads and each thread will run on a different core. A) What will be the speedup of this multithreading system as compared to running the whole task as a single thread? B) What will be the speedup of part (A) if we could send 90% of the job to ten threads?

A
14
Q

What is the upper bound in Amdahl’s law?

A

The upper bound means that no matter how much you increase the number of processors (N), the speedup will not go beyond Speedup = 1/S. For example, if the serial part of a code is 1%, the speedup would be at most 1/0.01 = 100, no matter how many processors you use. Hence, if the serial part is 1%, the upper bound of speedup for such a code is 100.

15
Q

In the context of “Amdahl’s law,” what is the meaning of “diminishing returns?”

A

The upper bound of Speedup = 1/S is still an optimistic estimate. As the number of processors and threads increases, the overhead of handling them increases too. Too large an increase in the number of threads can cause a loss, and the speedup may fall below 1/S. This is known as diminishing returns, which means that sometimes a smaller number of threads results in higher performance.

16
Q

What are the three popular user-level thread libraries?

A

POSIX Pthreads, Windows threads, and Java threads.

17
Q

What is the relationship between user threads and kernel threads?

A

User threads run within a user process. Kernel threads are used to provide privileged services to processes (such as system calls). The kernel also uses them to keep track of what is running on the system, how much of which resources are allocated to what process, and to schedule them. Hence, we do not need to have a one-to-one relationship between user threads and kernel threads.

18
Q

A) In the relationship between the user and kernel threads, what is the “many-to-one model?”

B) What is the shortcoming of this model?

A

A) Before the idea of threads became popular, OS kernels only knew about processes. The OS considered each process a separate entity: each process was assigned a working space and could make system calls to request services. Threading in user space was not dealt with by the OS. With user-mode threading, support for threads was provided by a programming library, and the thread scheduler was a subroutine in the user program itself. The operating system saw only the process, and the process scheduled its threads by itself.

B) If one of the user threads made a blocking system call, or triggered an event such as a page fault, all of the other threads were blocked as well.

19
Q

What is the “one-to-one” threading model? What are its advantages and shortcomings?

A

Each user thread is assigned a kernel thread. Hence, we can achieve more concurrency, and threads can proceed while one thread is blocked. The disadvantage occurs when there are too many user threads, which may burden the performance of the operating system.

20
Q

What is a “many-to-many” multithreading model?

A

The OS decides the number of kernel threads, and the user process determines the number of user threads. A process that runs on an eight-core processor would have more kernel threads than the one which runs on a quad-core processor. This model does not suffer from either of the shortcomings of the other two models.

21
Q

What is pthread?

A

It is the POSIX (Portable Operating System Interface) thread library, which provides programmers with an application programming interface (API) for creating and managing threads.

22
Q

What is synchronous threading?

A

After creating the threads, the parent has to wait for the children to terminate before it can resume operation.

23
Q

For thread programming in C or C++ using pthreads, what header file should be included?

A

#include <pthread.h>

24
Q

What does the following piece of code do?

A

It uses a function to perform summation. The main program sequentially calls the function.

25
Q

What does the following piece of code do?

A
26
Q

What is the difference in the behavior of the code in the two previous questions?

A

The first program uses a function for summation. It is called, and the main program has to wait for it to finish; only then can the main program call it again. The second program uses threads: the main program creates all of the threads, and they run concurrently. The order in which the threads finish their work depends on the complexity of each job.

27
Q

What does the following instruction do?

pthread_create(id, attributes, function, argument);

A

It creates a thread with a given id and a set of attributes. The thread will run a function and can carry an argument into that function.

28
Q

What does the following instruction do?

pthread_join(id, ReturnedValue);

A

The calling process waits for the thread with the given id to finish; the thread’s return value is stored through ReturnedValue. If no value is to be brought back, NULL is used.

29
Q

Does Windows support multithreading?

A

Yes, Windows supports multithreading. It supports single or multiple thread creation and single or multiple thread joining.

30
Q

What does “detaching” of thread mean?

A

If pthread_detach() is used inside a thread, it means that as soon as the thread is exited, its resources are released, and the system should not wait for this thread to join with its parent.

31
Q

What is meant by implicit threading?

A

It is difficult for an application program to create and manage hundreds or thousands of threads explicitly. The solution is implicit threading, which transfers the creation and management of threads from application developers to compilers and run-time libraries.

32
Q

What are some famous “implicit threading” examples?

A

A) Thread Pools

B) Fork-Join

C) OpenMP

D) Grand Central Dispatch

E) Intel Threading Building Blocks.

33
Q

How is the OpenMP application programming interface used?

A

The programmer determines which parts of the program can be implemented in a parallel manner and marks them using the OpenMP API, which consists of compiler directives and library routines. The compiled version of the program will have multiple threads that perform the marked code in parallel.

34
Q

What happens when fork() is used in a program?

A

It is the UNIX way of creating a new separate duplicate process. The new process consists of a copy of the address space of the original process. Both processes (the parent and the child) continue execution at the instruction after the fork(). fork() returns a code. The child process receives zero from the fork(). The parent process receives a nonzero ID of the child.

35
Q

When a fork() is executed, do the threads running under the parent process get duplicated?

A

In many UNIX systems, only the thread that called fork() is duplicated in the child. Duplicating all threads would in any case be unnecessary when exec() is called immediately after fork(), because exec() replaces the whole process image with a new program.

36
Q

What is a “signal” in the context of threading?

A

Signals notify a process of occurrence of an event. A signal can be synchronous or asynchronous. A signal is generated, delivered, and then handled.

37
Q

What are two examples of synchronous and asynchronous signals?

A

If an instruction in a thread performs division by zero, it generates a synchronous signal. When a signal is generated by an event external to a running process, that process receives the signal asynchronously. An example of such a signal is terminating a process with specific keystrokes (such as Ctrl+C).

38
Q

Is it possible to create a thread and then cancel it?

A

Yes. The main thread can cancel a child thread unless the thread has disabled its cancelation. A thread can disable its cancelation capability while it is doing something critical. It is good practice to re-enable cancelation after a while, allowing the parent to cancel the thread if needed.

39
Q

In the context of scheduling of the user threads, and assigning each of them to a kernel thread, what is a light-weight process (LWP)?

A

To the user-thread library, the LWP appears to be a virtual processor on which the application can schedule a user thread to run.

40
Q

What is a cooperating process, and why is there a potential for data inconsistency?

A

A cooperating process can affect or be affected by other processes. They use shared memory to share data or to pass messages. The sequence of actions of cooperating processes could result in the wrong content of the shared memory. Wrong content of shared memory is referred to as data inconsistency.

41
Q

Why is there a need for synchronization?

A

Avoiding data inconsistency requires that the cooperating processes or threads access shared variables in an orderly manner; this ordering is called synchronization.

42
Q

What is the “consumer-producer” problem in the context of synchronization? Explain the simple solution of using a “counter.”

A

There are two processes. One process produces data, and the other process reads (consumes) it. There could be data inconsistency if the two participants do not follow a rhythm: the producer should first write into a buffer and increment the counter; then the consumer should read the data and decrement the counter.

43
Q

In the “producer-consumer” problem, for synchronous passing of data, what are the “blocking send” and “blocking receive” strategies?

A

If these two strategies are applied, then the sender is blocked until the receiver verifies the reception of the data. Also, the receiver is blocked from reading data until the sender verifies that new data is placed into the buffer.

44
Q

What is a “race condition” in the producer-consumer problem?

A

Both the producer and the consumer can access the shared variable “counter.” The producer will increment, and the consumer will decrement the variable, independent of each other. The final value of the counter depends on the order that these processes update the variable. The solution is not to allow both processes to access the shared variable concurrently.

45
Q

The “critical section” problem is a synchronization protocol. Explain its four steps.

A

1- Entry section: a process requests to enter its critical section.

2- Critical section: only one process can enter its critical section. A critical section is where the process shares or changes shared variables.

3- Exit section: after finishing its critical section, the process exits, and one of the other processes could enter its critical section.

4- Remainder section: A process could continue with other jobs following exiting the critical section. The remainder section of the processes could be done concurrently.

46
Q

What are the three conditions of the critical section protocol?

A

1- Mutual exclusion: only one process could be in its critical section.

2- Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

3- Bounded waiting: there must be a bound on the number of times that other processes can enter their critical sections after a process has requested to enter its critical section and before that request is granted.

47
Q

What is Peterson’s solution for synchronization?

A

Peterson’s solution is for two processes. It assumes an atomic “load” and “store.” It requires two shared variables: int turn and boolean flag[2]. Each process sets its flag bit to 1 to declare its intention to enter the critical section, and the turn variable holds the number of the process allowed to enter next. Process i sets its flag to 1 and sets turn to j (offering the other process the chance to run first). Process i then waits at the entry of its critical section, testing flag[j] and the “turn” variable, until it is its turn.

48
Q

What is Mutex, and how does it help synchronization?

A

Synchronization of threads can be implemented by the OS and hardware in complicated ways. A mutex is a simpler synchronization primitive available to programmers. A mutex has a binary “lock,” which is taken with acquire() and released with release().

49
Q

One thread is supposed to perform the following function:

A

A) This is thread ID. In this example, there are two threads.

B) A global variable called “lock” is generated of the type mutex. This variable is either 0 or 1.

C) This instruction reads the mutex lock, and if it is free, it will make it busy and starts its critical section.

D) This is the critical section of the thread. While this thread is busy doing its critical section, other threads cannot intervene because only this thread has the mutex lock.

E) When the thread finishes its critical section, it will release the lock, and other threads that are waiting for the lock can acquire it.

50
Q

What is a semaphore?

A

A semaphore is a signaling mechanism: a thread waiting on a semaphore can be signaled by another thread. A mutex only allows a thread to hold the lock and then release it; when the lock is released, any waiting thread may acquire it. Using a semaphore, a thread can instead pass the lock to a specific thread.

51
Q

What is an “atomic” operation, and why is it important to have atomic operations for synchronization?

A

Operations used for synchronization are more complex than usual operations. For example, the wait() operation in mutex-lock handling must read the mutex lock and, if it is “free,” change it to “unavailable.” A non-atomic operation may read the variable, and before it changes the value, another thread may have done the same thing; then each thread thinks it has changed the lock value and owns the lock. An atomic operation allows only one thread to first read and then change the variable, without interference from other threads.

52
Q

What are the purposes of wait() and signal() operations in the context of semaphores?

A

The wait() operation lets a thread check the semaphore lock until the holder releases it. The operation is atomic: if the lock that is read is free, it is changed to “unavailable” for the rest of the threads. The signal() operation releases the semaphore lock to a designated semaphore holder.

53
Q

Explain the purpose of parts A to E.

A

A) defines variable sem_example as a binary semaphore.

B) Delays thread 1 to show that the order of generation of threads is not essential and when one thread is finished the other thread receives the semaphore.

C) A thread waits here till the semaphore is released.

D) This is the critical section of the thread.

E) This thread signals the release of the semaphore to the other holder of the semaphore.

54
Q

Is it possible to have several threads and activate them in a specific order?

A

Yes. Using semaphores each thread after finishing its critical section can signal another thread. Hence, a particular sequence of events can be generated.

55
Q

In the followings, a program is shown with three threads. What are the roles of the sections of the program that are labeled as A to F?

A

A) Three semaphore IDs are defined.

B) The thread that is to run routine1 waits here to receive a start signal through semaphore sem1.

C) When routine1’s critical section is finished, a semaphore is released to whoever holds sem3.

D) This part is the critical section of routine2.

E) All three semaphores are initiated. Sem1 is initiated to 1, which means that it can start initially. The other two semaphores are initiated to 0, and they have to wait until they are signaled for activation.

F) All semaphores are destroyed after their threads are joined with the main process.

56
Q

What are deadlock and starvation when multithreading is performed?

A

A) Deadlock occurs when each thread holds a part of the resources that the other threads need. This causes an indefinite wait for all of the threads.

B) Starvation occurs when one thread is indefinitely kept from operating.

57
Q

What are the four conditions that create a deadlock where avoiding any of these conditions prevents deadlocks?

A

1) Mutual exclusion: at least one resource is held in a non-sharable mode, so only one process can use it at a time.
2) Hold & wait: each process is holding a part of its needed resources and is waiting for other processes to release the rest of its needed resources.
3) No-preemption: when a process is holding some resources, it cannot be forced to release them; it releases them only voluntarily, when it finishes its job.
4) Circular wait: among n processes, each process is waiting for another process, circularly.

58
Q

What are the three methods for handling deadlocks?

A

1) Never allow the processes to enter a deadlock.
2) Allow the processes to enter a deadlock and then handle the situation.
3) Ignore that there is a potential for deadlock occurrence.

59
Q

How could we guarantee that processes never enter a deadlock situation?

A

1) prevention 2) avoidance.

60
Q

How could we prevent deadlocks?

A

1) Do not apply “mutual exclusion” in all situations. For example, if several processes want to access shared variables only for reading, mutual exclusion is not used.
2) Avoid the hold & wait situation: a process may claim resources only when all of them are available. If a process is holding some resources while it is not running, the resources are requested back from it.

61
Q

How could we apply deadlock avoidance?

A

Information about the needed and currently held resources of the processes is kept. Other information, such as how long a process has been waiting for or holding resources, can also help avoid deadlock. Using this information, the OS can grant resources, stop processes, or take resources back from processes.

62
Q

What is the purpose of the computer network?

A

A computer network is a set of computers connected to share resources. The resources could be categorized into information, hardware, and software tools.

63
Q

What are examples of “networks?”

A

1- Worldwide telephone networks,

2- Networks of radio and television stations,

3- Networks of communication satellites,

4- The Internet.

64
Q

What are the Benefits of computer networks as compared to a single computer?

A

1) Resource sharing: hardware resources and software tools of many computers are shared between the users of the network with higher efficiency.
2) Improved Reliability: If one computer (one node in the network) fails, the rest of the network is still functional, which makes the whole network more reliable.
3) Reduced Costs: Rather than collecting all computing power and storage capacity in one place, the network allows using the capabilities of other nodes.
4) Scalability: a network of computers, for example, in a data center, can be expanded or can be shrunk, as needed.
5) Information Sharing: contents of the storage devices can be shared and can be accessed. Also, users can contribute to the expansion and improvement of the information.

65
Q

What are some recent applications of Computer Networks?

A

1- Social networking: users contribute new information (Facebook, Wikipedia, Instagram).

2- E-commerce: supply and demand on the Internet (eBay, Amazon, etc.).

3- Online entertainment: IPTV (IP Television).

4- Cloud computing: SaaS, PaaS, IaaS.

5- Ubiquitous computing: connection of embedded sensors to the Internet (sensors in the home for security, body area networks for health, sensors for gas/electricity metering, the Internet of Things (IoT)).

66
Q

How do we categorize networks based on the size of the region that they cover?

A

1- PAN (Personal Area Network),

2- LAN (Local Area Network),

3- MAN (Metropolitan Area Network),

4-WAN (Wide Area Network),

5-The Internet (the network of all networks)

67
Q

What are the two types of connections in a LAN? Explain each type.

A

1) Wireless connection of devices to an access point. Convenience is high in this type due to the mobility of the connected devices.
2) Wired connection of devices to an Ethernet switch. Performance is high in this type, but convenience is low due to the binding of devices to a fixed location.

68
Q

What is the standard of wireless communication?

A

IEEE 802.11.

69
Q

What is an example of MAN?

A

A network of fiber or coaxial cables is distributed underground in part of a city by a service provider, such as Comcast. Each house or apartment is connected to the network via a “switch box.”

70
Q

What is an example for WAN?

A

Different branches of a company in different parts of a country can connect by getting services from an ISP (Internet Service Provider).

71
Q

What is the semantic gap in networks?

A

There is a semantic difference between the complex user application and the simple signals that will physically carry the content from the source to the destination.

72
Q

How do we resolve the semantic gap in networks?

A

A multilayer structure is used to translate the application down to physical signals. The same multilayer structure is used at the receiver to convert the received signal back up to the user application.

73
Q

In the context of the network layer model, explain the following: peer, interface, services, and protocol.

A

1- Peer: two corresponding layers on the sender and receiver sides.

2- Interface: a connection between two consecutive layers.

3- Services: a service defines what operations the layer is prepared to perform on behalf of its users, but it says nothing about how these operations are implemented. The lower layer is the service provider, and the upper layer is the service user.

4- Protocol: A protocol is a set of rules governing the format and meaning of the packets or messages that are exchanged by the peer entities within a layer. Layers use protocols to implement their service definitions. Layers are free to change their protocols at will, provided they do not modify the service visible to their users.

74
Q

What is a “connectionless” service?

A

In a connectionless service, packets are injected into the network individually and routed independently of each other. No advance setup is needed. In this context, the packets are frequently called “datagrams” (to remind us of telegrams.) The intended content may be divided into packets, which are sent without a guarantee of delivery. The packets may reach their destination out of order. (Internet Protocol (IP) and User Datagram Protocol (UDP) are connectionless protocols.)

75
Q

What is a “connection-oriented” service?

A

A path from the source router to the destination router must be established before any data packets can be sent. This connection is called a VC (virtual circuit), in analogy with the physical circuits set up by the telephone system, and the network is called a virtual-circuit network. When a VC is established, all routers in the way should keep some information about this connection, which is in contrast with the connectionless service.

76
Q

Is transmission control protocol (TCP) a connection-oriented protocol?

A

The restricted definition of a connection-oriented protocol says that a virtual circuit is formed and the packets need only mention the connection number (virtual-circuit identifier). TCP runs over a packet-switched network, but it can act as a connection-oriented protocol: since sequence numbers are included and the packets are reassembled in order, we can say that TCP performs as a connection-oriented method.

77
Q

What primitives are used when a client machine wants to connect to a server machine? Draw a simple diagram.

A

The main primitives are: listen, accept, connect, send, receive, and disconnect.

78
Q

Explain the motivation behind the first ARPANET network and its main characteristics.

A

The telephone network was hierarchical, and users were at the end of the hierarchy. If a node in this tree-like structure failed, all users connected to it were disconnected from the network. ARPANET was created in 1969 as a “packet-switched” and “decentralized” network. The nodes were the computer centers of some universities and research centers. Each node was connected to at least two other nodes, so the failure of one link would not bring down the network.

79
Q

What is a simple description of Internet architecture in the United States?

A

1- ISPs used the existing cable TV networks to form the backbone of the Internet.

2- Data packets are transmitted through cables and are switched by routers within each network.

3- ISPs, through business agreements, connected their networks.

4- Customers are connected to the closest network by cable, fiber, dial-up, Wi-Fi, or mobile phone networks.

5- Datacenters connect their cluster of servers to a network.

80
Q

What is a “handoff” in a mobile phone network?

A

When a user of a mobile phone leaves the coverage area of one “base station,” it will lose connection to that station. Before losing the connection, the base station hands over the user to the next base station.

81
Q

Why does the wireless connection of a computer to the network have variable quality?

A

Fading occurs: the signal strength from the “access point” decreases as the distance from it increases. In addition, signals reflected from surrounding objects can interfere with the direct signal, which also causes fading. Hence, in some locations the signal strength can be high, while in others only a low-power signal is present.

82
Q

What is the OSI model, and what are the names of its layers?

A

The Open Systems Interconnection model lists the sequence of services performed in converting application data into signals for delivery over a network. The sequence of events is modeled as layers of the network. The layers are: 7- application, 6- presentation, 5- session, 4- transport, 3- network, 2- data link, 1- physical.

83
Q

What is an example of an application, and what is an example of the application layer (L7) protocol?

A

The Firefox browser is a network application. HTTP is an example of an application-layer protocol. The task of the application layer is to find the communication part of the Firefox program and convey that information to the next layer. For example, a Firefox screen contains plenty of data; when the user clicks on a link, the application layer delivers only that link address to the next layer.

84
Q

What are the primary responsibilities of the presentation layer (L6)?

A

The format of the characters is changed, and character-string conversion is performed; for example, ASCII characters are converted to 8-bit numbers. Encryption or decryption and data compression are also performed here.

85
Q

What is the primary task of the session layer (L5)?

A

The session layer provides the mechanism for opening, closing, and managing a session between end-user application processes. In case of a connection loss, this protocol may try to recover the connection. If a connection is not used for a long period, the session-layer protocol may close it and re-open it.

86
Q

What is the responsibility of the transport layer (L4)?

A

A header is attached to the content that is received from the session layer. This protocol performs connection-oriented communication. The transport layer sends sequence numbers and acknowledgment. Also, flow control and multiplexing are performed here.

87
Q

What does the network layer (L3) do?

A

Routing is the responsibility of the network layer. IP addresses of the sender and receiver are added to the content that is received from the transport layer.

88
Q

What is the purpose of the data-link layer (L2)?

A

Error detection and correction is performed in this layer. Also, source and destination MAC addresses are added to the content that is received from the network layer.

89
Q

What is the physical layer (L1)?

A

It is a protocol that sends signals into a physical medium. Physical media include fiber, wire, and wireless links. Protocols such as IEEE 802.11 for wireless and 802.3 for Ethernet are examples of this layer. A synchronization header (the preamble and SFD, the start frame delimiter) is also added in this layer.

90
Q

How is a “web address” (technically a URL) converted to an IP address?

A

A DNS (Domain Name System) server converts the hostname in a URL into an IP address.
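The lookup can be seen from Python, which asks the operating system's resolver; the resolver consults DNS for non-local names, while "localhost" is resolved locally:

```python
import socket

# Resolve a hostname to an IPv4 address via the system resolver.
# "localhost" is used here so the example works without Internet access;
# a real domain name would trigger an actual DNS query.
ip = socket.gethostbyname("localhost")
print(ip)  # 127.0.0.1
```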

91
Q

What are some examples of top-level domains?

A

.com, .edu, .gov, .net, .org

92
Q

What are some examples of top-level domains that indicate country codes?

A

.ca , .jp , .cn , .in

93
Q

The transport layer adds a header to the content that it receives from the session layer. What is the size of this header, and what are the major parts of it?

A

It consists of at least five 32-bit words. The major parts of the header consist of source and destination ports, sequence number, acknowledgment number, window size, and flags.
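The five-word layout can be packed byte-for-byte with Python's struct module; all field values below are made up for illustration:

```python
import struct

src_port, dst_port = 5000, 80   # illustrative port numbers
seq, ack = 1000, 2000           # sequence and acknowledgment numbers
data_offset = 5                 # header length in 32-bit words
flags = 0x18                    # PSH + ACK flag bits
window = 65535                  # window size
checksum, urgent = 0, 0         # checksum left zero in this sketch

header = struct.pack("!HHIIBBHHH",
                     src_port, dst_port,         # word 1: source/dest ports
                     seq,                        # word 2: sequence number
                     ack,                        # word 3: acknowledgment number
                     (data_offset << 4), flags,  # word 4: offset, flags,
                     window,                     #         window size
                     checksum, urgent)           # word 5: checksum, urgent ptr
print(len(header))  # 20 bytes = five 32-bit words
```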

94
Q

What is the sequence number?

A

The transmitter states the position of the first byte of the segment that it is sending to the receiver. This number equals the acknowledgment number that the receiver previously sent. Using the sequence number, the receiver knows where to place the received segment.

95
Q

What is the acknowledgment number?

A

The receiver sends the number of the next byte it expects (the starting position of the next segment) back to the sender. The transmitter uses this number as its following sequence number.
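A toy sketch of this bookkeeping (real TCP starts from a random initial sequence number rather than 0, and the segment sizes here are invented):

```python
# Each acknowledgment names the next expected byte; the sender adopts it
# as the sequence number of its next segment.
seq = 0
log = []
for size in [500, 500, 1000]:   # payload sizes of three segments, in bytes
    ack = seq + size            # receiver acks the next byte it expects
    log.append((seq, ack))
    seq = ack                   # the ack becomes the next sequence number
print(log)  # [(0, 500), (500, 1000), (1000, 2000)]
```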

96
Q

What is the window size in the transport layer header of the OSI model?

A

The receiver tells the transmitter how much data it is prepared to accept, i.e., the volume of data it expects the transmitter to send next.

97
Q

What is the meaning of “congestion control?”

A

The capacity of the receiver may be low, so it cannot absorb a large data segment. The network connection may also be handling a large number of users and heavy data traffic. Hence, algorithms are required to control the size of the data segments being sent.

98
Q

What is the “slow start algorithm?”

A

It is an algorithm for congestion control. It starts with a small window size. Each time an acknowledgment is received, the window size is doubled. If a transmission times out, the last successful window size is used.
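A toy simulation of this simplified description (real TCP slow start also involves a slow-start threshold and a congestion-avoidance phase; `max_ok`, the largest window the network carries without a timeout, is an invented parameter):

```python
def slow_start(max_ok: int, start: int = 1, rounds: int = 10) -> int:
    """Double the window on each acked round; on a timeout, fall back
    to the last window size that succeeded."""
    window, last_good = start, start
    for _ in range(rounds):
        if window <= max_ok:    # acknowledgment received for this window
            last_good = window
            window *= 2         # double on success
        else:                   # timeout: reuse the last successful size
            window = last_good
            break
    return window

print(slow_start(max_ok=16))  # 16: grows 1, 2, 4, 8, 16, then 32 times out
```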

99
Q

The network layer adds a header to the data content that it receives from the transport layer. What are the major parts of the IP header?

A

IP addresses of the source and destination, time to live, header checksum.
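The header checksum is the Internet checksum: the one's-complement sum of the header's 16-bit words, complemented. A sketch (the sample header bytes below use private addresses and illustrative field values):

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit big-endian words, complemented."""
    total = 0
    for i in range(0, len(data), 2):
        total += int.from_bytes(data[i:i+2], "big")
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF

# A 20-byte IPv4 header with its checksum field (bytes 10-11) zeroed:
hdr = bytes.fromhex("45000073" "00004000" "4011" "0000"
                    "c0a80001" "c0a800c7")
cks = internet_checksum(hdr)

# Recomputing over the header with the checksum filled in yields 0,
# which is how a router verifies the header.
full = hdr[:10] + cks.to_bytes(2, "big") + hdr[12:]
print(hex(cks), internet_checksum(full))  # 0xb861 0
```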

100
Q

What is the purpose of the “time to live” 8-bit field in the IP header?

A

It is set by the sender and gives the maximum number of routers that the packet can pass through. Each router decrements this number before passing the packet to the next router. If the TTL field reaches zero, the packet is discarded.
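A toy walk through a chain of routers (hop counts here are invented; this is the mechanism the traceroute tool exploits by sending packets with deliberately small TTLs):

```python
def forward(ttl: int, hops_to_dest: int) -> str:
    """Decrement TTL at each router; drop the packet if it hits zero
    before reaching the destination."""
    for hop in range(1, hops_to_dest + 1):
        ttl -= 1
        if ttl == 0 and hop < hops_to_dest:
            return f"discarded at router {hop}"
    return "delivered"

print(forward(ttl=64, hops_to_dest=10))  # delivered
print(forward(ttl=3, hops_to_dest=10))   # discarded at router 3
```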

101
Q

What is “fragmentation” in the IP protocol?

A

Routers may fragment an IP packet that is larger than the maximum transmission unit (MTU) of the next link. A 64 KB packet, for example, may be partitioned into several fragments. The router adds information to each fragment's header so that the receiver can put the pieces back together to form the original data packet.

102
Q

What are the two main IP protocols, and what are the main differences?

A

IPv4 and IPv6. The main difference is the length of the IP address: IPv4 addresses are 32 bits, while IPv6 addresses are 128 bits.

103
Q

In the following diagram, a fragmentation example is shown. A packet of 4000 bytes is fragmented into three pieces. Explain the offset, MF, and length of these three fragments.

A

The length of the packet is 4000 bytes: 3980 bytes of data plus a 20-byte header. When it is fragmented into three parts, each part carries its own 20-byte header, and the rest is data. The length field of each fragment gives its total size (header plus data); the offset field gives the position of the fragment's data within the original packet, measured in 8-byte units; and MF (more fragments) is 1 for every fragment except the last, where it is 0.
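Since the diagram is not reproduced here, the sketch below assumes the common textbook numbers: the 4000-byte packet crosses a link with a 1500-byte MTU, so each fragment carries at most 1480 data bytes (a multiple of 8, as the offset field requires):

```python
HEADER, MTU, DATA = 20, 1500, 3980
per_frag = (MTU - HEADER) // 8 * 8   # data bytes per fragment: 1480

# Each tuple is (total length, offset in 8-byte units, MF flag).
frags, offset, remaining = [], 0, DATA
while remaining > 0:
    data = min(per_frag, remaining)
    mf = 1 if remaining > data else 0   # MF=1 unless this is the last piece
    frags.append((HEADER + data, offset // 8, mf))
    offset += data
    remaining -= data

for length, off, mf in frags:
    print(f"length={length}, offset={off}, MF={mf}")
# length=1500, offset=0,   MF=1
# length=1500, offset=185, MF=1
# length=1040, offset=370, MF=0
```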

104
Q
A