Lesson 2: Transport and Application Layers Flashcards
Why do we have a Transport Layer?
Because the Network Layer provides only a best effort delivery service. Also, the Network Layer relies on IP addresses, which identify hosts but cannot distinguish between the individual processes running on the receiving host.
What is Multiplexing and Demultiplexing?
The Transport layer uses ports as an addressing mechanism to distinguish the many host processes that share the same IP address.
Demultiplexing: The job of examining a set of fields (e.g., source & destination port #s) in the transport layer segment and then delivering the segment to the appropriate socket.
Multiplexing: The job of gathering data from different sockets, encapsulating each data chunk with header information for each segment, and then forwarding the segment to the network layer.
Describe Connectionless and Connection Oriented when it comes to Multiplexing/Demultiplexing.
Connectionless:
- A UDP socket is identified by a two-tuple: destination IP address and destination port number.
- The Network Layer datagram is sent with best effort delivery.
- Demultiplexing involves looking at the destination port number of a received datagram and mapping it to the host's UDP socket. Each host process has its own UDP socket.
- It doesn't matter if the received segments come from different source hosts/port numbers.
Connection Oriented:
- TCP socket identifier is a four tuple that includes source IP, source port, destination IP, and destination port.
- Uses a three-way handshake between client/server sockets to establish a connection.
- The host creates a separate socket for each connection (identified by the full four-tuple), which allows it to demultiplex correctly even when the source port number is the same across different sending hosts.
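The two demultiplexing schemes above can be sketched as lookup tables keyed by the two-tuple (UDP) and the four-tuple (TCP). This is a minimal illustration; the IP addresses, port numbers, and socket names are invented for the example.

```python
# Illustrative demultiplexing tables (all addresses/names are hypothetical).

# UDP: a socket is identified by (destination IP, destination port).
udp_sockets = {
    ("10.0.0.5", 53): "dns_socket",
}

# TCP: a socket is identified by (src IP, src port, dest IP, dest port).
tcp_sockets = {
    ("192.168.1.2", 40000, "10.0.0.5", 80): "conn_A",
    ("192.168.1.3", 40000, "10.0.0.5", 80): "conn_B",  # same source port, different host
}

def demux_udp(dst_ip, dst_port):
    """Map a received datagram to a UDP socket by destination only."""
    return udp_sockets.get((dst_ip, dst_port))

def demux_tcp(src_ip, src_port, dst_ip, dst_port):
    """Map a received segment to a TCP socket by the full four-tuple."""
    return tcp_sockets.get((src_ip, src_port, dst_ip, dst_port))
```

Note how `conn_A` and `conn_B` share a source port yet still demultiplex to different sockets, because the source IP differs.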
What are the header similarities and differences between TCP and UDP?
- Both contain Source & Destination Port #s
- The UDP header is fixed at 64 bits (8 bytes), while the TCP header is at least 20 bytes because it also carries sequence/acknowledgment numbers, flags, and the receive window
- UDP additionally includes only Length and Checksum fields
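The UDP header's simplicity is easy to see in code: it is just four 16-bit fields. A small sketch parsing those 8 bytes (the sample port values are arbitrary):

```python
import struct

def parse_udp_header(segment: bytes):
    """Parse the fixed 8-byte UDP header: four 16-bit big-endian fields."""
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", segment[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "checksum": checksum}

# Build a sample header (arbitrary values) and parse it back.
hdr = parse_udp_header(struct.pack("!HHHH", 40000, 53, 8, 0))
```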
What are the pros/cons of UDP? What are the use cases?
UDP is a connectionless protocol: it doesn't require setting up a connection (e.g., a three-way handshake) before sending packets, and it is unreliable.
UDP offers lower delays and finer control over sending data because there is no congestion control or connection management overhead.
UDP is a better option for real-time applications that are sensitive to delays, such as DNS.
What are the stages for the TCP Three way Handshake and then the Connection Teardown?
Step 1: The client sends an empty segment with SYN bit = 1 and an initial sequence number (e.g., seq = client_seq).
Step 2: The server receives the client's segment, allocates resources for the connection, and sends back a SYNACK segment. The SYNACK segment has SYN bit = 1, an ack field containing ack = client_seq + 1, and the server's own initial sequence number (e.g., seq = server_seq).
Step 3: The client receives the server's SYNACK segment, allocates resources for the connection/buffer, and sends an acknowledgement. The acknowledgement has SYN bit = 0, ack = server_seq + 1, and seq = client_seq + 1.
Connection Teardown:
Step 1: The client sends a segment with FIN bit = 1 to the server.
Step 2: The server sends an ACK, starts closing the connection, and then sends its own FIN segment.
Step 3: The client sends an ACK to the server, then waits for a period of time so it can resend the ACK in case the first ACK was lost.
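The sequence/acknowledgment arithmetic of the three handshake steps can be sketched directly. The initial sequence numbers here (100 and 300) are arbitrary illustrative values:

```python
# Illustrative initial sequence numbers (real TCP picks these randomly).
client_seq, server_seq = 100, 300

# Step 1: client -> server: SYN, seq = client_seq
syn = {"SYN": 1, "seq": client_seq}

# Step 2: server -> client: SYNACK, seq = server_seq, ack = client_seq + 1
synack = {"SYN": 1, "seq": server_seq, "ack": syn["seq"] + 1}

# Step 3: client -> server: ACK, seq = client_seq + 1, ack = server_seq + 1
ack = {"SYN": 0, "seq": client_seq + 1, "ack": synack["seq"] + 1}
```

Each side acknowledges the other's sequence number plus one, which is how both ends confirm the handshake segments were received.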
Lesson 2 Quiz 1:
As we have seen, UDP and TCP use port numbers to identify the sending application and destination application. Why don’t UDP and TCP just use process IDs rather than define port numbers?
Process IDs are specific to operating systems, and therefore using process IDs rather than a specially defined port would make the protocol operating-system dependent. Also, a single process can set up multiple channels of communication, so using the process ID as the destination identifier wouldn't allow proper demultiplexing. Finally, having processes listen on well-known ports (like 80 for HTTP) is an important convention.
Lesson 2 Quiz 2:
UDP and TCP use 1’s complement for their checksums. But why is it that UDP takes the 1’s complement of the sum – why not just use the sum? Exploring this further, using 1’s complement, how does the receiver compute and detect errors? Using 1’s complement, is it possible that a 1-bit error will go undetected? What about a 2-bit error?
Taking the 1's complement (rather than sending the raw sum) means the receiver can simply add everything together and check the result, instead of recomputing the sum and comparing. To detect errors, the receiver adds the four words (the three original words and the checksum); the result should be all 1s. If the sum contains any zero bit, the receiver knows there has been an error. All one-bit errors will be detected, but two-bit errors can go undetected (e.g., if the last bit of the first word flips from 1 to 0 and the last bit of the second word flips from 0 to 1, the sum is unchanged).
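The checksum computation and the receiver's all-1s check can be demonstrated concretely. The sample words below are arbitrary 16-bit values chosen for illustration:

```python
def ones_complement_checksum(words):
    """1's-complement sum of 16-bit words, then complemented (the checksum)."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap any carry back in
    return (~total) & 0xFFFF

def verify(words, checksum):
    """Receiver side: add all words plus the checksum; result must be all 1s."""
    total = 0
    for w in list(words) + [checksum]:
        total += w
        total = (total & 0xFFFF) + (total >> 16)
    return total == 0xFFFF

# Arbitrary sample words; flipping one bit in any word makes verify() fail.
words = [0x4500, 0x0073, 0x0000]
c = ones_complement_checksum(words)
```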
Describe Automatic Repeat Request (ARQ), Go-back-N, and Selective ACK.
Automatic Repeat Request (ARQ): The receiver sends an ACK for each segment, and the sender resends a segment if it doesn't receive an ACK within a period of time. This specific form is called Stop-and-Wait ARQ. The problem with it is that it's slow, since the sender waits a full round trip for each segment.
Go-back-N: When the receiver doesn't receive a certain packet, it discards all subsequent packets, and the sender resends all packets starting from the one that wasn't received. The downside is that a single packet error can cause many unnecessary retransmissions.
Selective ACK: The receiver acknowledges a correctly received packet even if it's out of order. The out-of-order packets are buffered until all packets are received, then delivered to the application layer. A timeout is used to detect loss of packets. Also, duplicate ACKs are used to detect loss.
Fast retransmit: When a sender receives 3 duplicate ACKs for a prior packet, it considers the following packet lost and resends it without waiting for a timeout.
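The fast-retransmit trigger can be sketched as a duplicate-ACK counter. This is a simplified model (the ACK numbers are illustrative, and real TCP tracks bytes, not packet IDs):

```python
def fast_retransmit(acks, threshold=3):
    """Return the ACK value that triggered fast retransmit, or None.

    A repeated ACK means the receiver is still waiting for the same
    segment; after `threshold` duplicates, the sender retransmits it.
    """
    dup_count = 0
    last_ack = None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == threshold:
                return ack          # retransmit the segment the receiver wants
        else:
            last_ack, dup_count = ack, 0
    return None
```

For example, the ACK stream `[1, 2, 3, 4, 5, 5, 5, 5]` contains three duplicates of 5, so the sender would retransmit without waiting for a timeout.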
How does TCP Flow Control work?
Goal - Make sure the receiver buffer doesn’t overflow.
Solution:
Receiver monitors the receive window (rwnd) and sends it to the sender with every segment/ACK.
rwnd = RcvBuffer - [LastByteRcvd - LastByteRead]
RcvBuffer = size of the receive buffer
LastByteRcvd = last byte received into the buffer
LastByteRead = last byte read by the application
Sender monitors the amount of unACKed data in flight, which equals LastByteSent - LastByteAcked.
LastByteSent - LastByteAcked must be <= rwnd
Problem:
When rwnd = 0, the sender stops sending data, and if the receiver has nothing to send back, the sender would never learn that the window has reopened.
Solution: The sender continues to send segments of size 1 byte even when rwnd = 0, so the receiver's ACKs eventually report a nonzero window.
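The flow-control arithmetic above is simple enough to work through with concrete numbers (the byte counts below are invented for illustration):

```python
# Illustrative values, in bytes.
RcvBuffer = 65536       # size of the receive buffer
LastByteRcvd = 50000    # last byte received into the buffer
LastByteRead = 20000    # last byte the application has read

# Free space the receiver advertises back to the sender.
rwnd = RcvBuffer - (LastByteRcvd - LastByteRead)

# Sender-side constraint: unACKed bytes in flight must fit within rwnd.
LastByteSent, LastByteAcked = 50000, 30000
in_flight = LastByteSent - LastByteAcked
ok = in_flight <= rwnd
```

Here 30000 bytes sit unread in the buffer, leaving rwnd = 35536, and the sender's 20000 in-flight bytes fit comfortably within it.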
What is the objective of TCP Congestion Control? What are the goals of a good congestion control algorithm?
Objective - Control the transmission rate at the sender in order to avoid congestion in the network.
Goals of a good congestion control algorithm:
Efficiency - Get high throughput or network utilization
Fairness - Each user should get equal bandwidth.
Low delay - Low network delays
Fast convergence - Flow should be able to converge to its fair allocation fast.
What are the two approaches to TCP Congestion Control?
Network-assisted: Rely on the network layer to provide explicit feedback to the sender about network congestion.
For example: Routers could use ICMP to inform senders of congestion.
Under severe congestion, even these ICMP packets could be lost, disabling the network feedback.
End-to-end congestion control: Hosts infer congestion from the network behavior and adapt the transmission rate.
Approach chosen by TCP.
But modern networks can provide explicit feedback to end-hosts via protocols such as ECN and QCN.
What are the congestion signals used by TCP AIMD?
Congestion signals used are 3 duplicate ACKs or a timeout.
How does a TCP sender limit the sending rate?
TCP uses a congestion window which is similar to the receive window used in flow control.
Congestion window = the maximum amount of data that a sending host can have in transit (sent but not acknowledged)
Uses a Probe & Adapt approach:
- TCP increases the congestion window trying to achieve the available throughput.
- Once it detects congestion then the congestion window is decreased.
The amount of unacknowledged data a sender can have in flight = the minimum of the congestion window and the receive window.
- LastByteSent - LastByteAcked <= min(cwnd, rwnd)
Summary: A TCP sender cannot send faster than the slowest component whether it’s the network or the receiving host.
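The min(cwnd, rwnd) limit is a one-liner, shown here as a small helper (the function name is mine, not a real API):

```python
def max_in_flight(cwnd: int, rwnd: int) -> int:
    """A sender is limited by the slower of the network (cwnd)
    and the receiving host (rwnd)."""
    return min(cwnd, rwnd)

# A congested network caps the sender even with a large receive window,
# and a slow receiver caps it even on an uncongested network.
```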
How does Additive Increase/Multiplicative Decrease (AIMD) work?
Additive Increase: Starts with an initial congestion window (e.g., 2 packets) and increases the window by one packet every round-trip time (in practice, by 1/cwnd for each ACK that arrives).
Multiplicative Decrease: When TCP detects congestion, the congestion window is cut in half.
Also leads to fairness in bandwidth sharing.
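The AIMD sawtooth can be simulated in a few lines. This is a toy model, assuming +1 packet per RTT and ignoring slow start; the congestion timings are invented:

```python
def aimd(rtts, congestion_at, cwnd=2):
    """Toy AIMD trace: cwnd per RTT, halving whenever congestion is signaled."""
    trace = []
    for rtt in range(rtts):
        if rtt in congestion_at:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease
        else:
            cwnd += 1                  # additive increase (+1 per RTT)
        trace.append(cwnd)
    return trace

# Window grows linearly, then is cut in half at the congestion event,
# producing the characteristic sawtooth pattern.
trace = aimd(6, congestion_at={4})
```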