Transport Layer II Flashcards
• Connection-oriented transport • Transmission Control Protocol (TCP) • Segment structure • Reliable data transfer • Flow control • Connection management • Congestion control
Connection-oriented transport:
Transmission Control Protocol (TCP)
- Two processes must first “handshake” with each other before they can begin to send data to each other.
o That is, they must send some preliminary segments to each other to establish the parameters of the resultant data transfer. As part of TCP connection establishment, both sides of the connection will initialise many TCP state variables associated with the TCP connection.
- The TCP “connection” is a logical one, with common state residing only in the TCPs in the two communicating end systems.
o The TCP protocol runs only in the end systems and not in the intermediate network elements (routers and link layer switches), so these elements do not maintain TCP connection state. The intermediate routers are completely oblivious to TCP connections.
- A TCP connection provides a full-duplex service.
o If there is a TCP connection between Process A on one host and Process B on another host, then application layer data can flow from Process A to Process B at the same time as application layer data flows from Process B to Process A.
- A TCP connection is always point-to-point.
o That is, between a single sender and a single receiver.
- The connection-establishment procedure is often referred to as a three-way handshake (a socket-level sketch follows this list).
o Suppose a process running in one host wants to initiate a connection with a process in another host (recall that the client process initiates the connection). The client application process first informs its transport layer that it wants to establish a connection to a process in the server. The client TCP then sends a special TCP segment; the server responds with a second special TCP segment; and finally the client responds again with a third special segment. The first two segments carry no payload (no application layer data); the third of these segments may carry a payload.
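A minimal socket-level sketch of this behaviour in Python (the hostname and port are placeholders): calling connect() causes the client TCP to carry out the three-way handshake with the server before any application data is exchanged, and the established connection then carries data in both directions.

import socket

# Client side: connect() triggers the TCP three-way handshake under the hood.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("server.example.com", 12000))    # placeholder host and port

# Full duplex: the same connection carries data in both directions.
client.sendall(b"data from A to B")
reply = client.recv(2048)                        # data flowing back from B to A
client.close()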
The TCP connection
A TCP connection consists of buffers, variables, and a socket connection to a process in each of the two hosts. No buffers or variables are allocated to the connection in the network elements between the hosts. The client process passes data through its socket, where it is handled by TCP: TCP directs the data to the connection’s send buffer and from time to time grabs chunks of data from that buffer. The maximum amount of data that can be placed in a segment is limited by the maximum segment size (MSS), which is determined by the length of the largest link layer frame the local sending host can send. TCP segments are then passed down to the network layer, where they are encapsulated within IP datagrams.
TCP segment structure
The TCP segment structure includes fields for the source and destination port numbers and a checksum, as well as fields that support services such as reliable, in-order delivery and flow control. The 32-bit sequence number and acknowledgment number fields are used for reliable data transfer, while the 16-bit receive window field is used for flow control. The header length can vary because of the TCP options field, which can be used, for example, to negotiate the maximum segment size or a window-scaling factor for high-speed networks. The flag field contains 6 bits; the ACK bit indicates that the value carried in the acknowledgment number field is valid, i.e. that the segment acknowledges data that has been successfully received.
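As an illustration of the fixed part of this layout, here is a minimal sketch that packs a 20-byte TCP header (no options) with Python’s struct module; all of the field values are made up for the example.

import struct

# 20-byte TCP header without options: ports, sequence/acknowledgment numbers,
# header length and flags, receive window, checksum, urgent pointer.
src_port, dst_port = 3630, 80      # 16-bit source and destination port numbers
seq_num, ack_num = 42, 79          # 32-bit sequence and acknowledgment numbers
data_offset = 5                    # header length in 32-bit words (5 = no options)
flags = 0x10                       # 6 flag bits; 0x10 means only the ACK bit is set
recv_window = 65535                # 16-bit receive window used for flow control
checksum, urgent_ptr = 0, 0

header = struct.pack("!HHIIBBHHH",
                     src_port, dst_port, seq_num, ack_num,
                     data_offset << 4, flags,
                     recv_window, checksum, urgent_ptr)
assert len(header) == 20           # 20 bytes of header overhead per segment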
TCP sequence and acknowledgment numbers
The TCP segment header contains two crucial fields: the sequence number field and the acknowledgment number field. The sequence number of a segment is the byte-stream number of the first byte in the segment; for example, 500 KB of data sent with a 1 KB MSS is transmitted in 500 segments, with sequence numbers 0, 1,000, 2,000 and so on. The acknowledgment number is the sequence number of the next byte the receiving host expects from the sending host; for example, after receiving the first segment the receiving host sends a segment with acknowledgment number 1,000.
Seq and ACK numbers example
A client sends a character ‘C’ to a server, which echoes it back to the client. The starting sequence numbers for the client and server are 42 and 79, respectively. The first segment sent by the client has Seq 42, ACK 79, and payload ‘C’. The server replies with Seq 79, ACK 43, and payload ‘C’: having received byte 42, it now expects byte 43 (42 + 1) from the client. The client then sends a segment with Seq 43 and ACK 80, indicating that it is waiting for bytes 80 onwards from the server.
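The same exchange can be traced in a short Python sketch; the dictionaries are purely illustrative (not a real TCP implementation) and the numbers come straight from the example above.

# Telnet-style echo of one character ‘C’ (one byte of data per segment).
client_isn, server_isn = 42, 79

# Segment 1, client -> server: carries ‘C’.
seg1 = {"seq": client_isn, "ack": server_isn, "data": b"C"}                        # Seq 42, ACK 79

# Segment 2, server -> client: echoes ‘C’ and acknowledges byte 42,
# so it asks for byte 43 next (42 + 1 data byte).
seg2 = {"seq": server_isn, "ack": seg1["seq"] + len(seg1["data"]), "data": b"C"}   # Seq 79, ACK 43

# Segment 3, client -> server: acknowledges the echo and now expects byte 80.
seg3 = {"seq": seg1["seq"] + len(seg1["data"]),
        "ack": seg2["seq"] + len(seg2["data"]), "data": b""}                       # Seq 43, ACK 80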
TCP reliable data transfer
TCP provides a reliable data transfer service, ensuring that the data stream delivered to the receiving application is uncorrupted and in sequence. It uses timeouts to recover from lost segments: TCP starts a timer when a segment is passed to IP, and if the timeout occurs before that segment is acknowledged, TCP retransmits the segment and restarts the timer. An ACK with acknowledgment number y acknowledges receipt of all bytes before byte number y; when such an ACK arrives, the timer is restarted if there are still not-yet-acknowledged segments outstanding. However, the timeout period can be relatively long, so relying on timeout-triggered retransmissions alone can add considerable end-to-end delay before a lost segment is resent.
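A highly simplified sketch of this timer logic in Python (not real TCP code): a single outstanding segment and a fixed timeout are assumed, and send() and ack_received() are hypothetical callbacks standing in for handing the segment to IP and checking for its acknowledgment.

import time

TIMEOUT = 1.0   # assumed fixed timeout; real TCP derives it from RTT estimates

def send_with_timeout(segment, send, ack_received):
    send(segment)                        # pass the segment to IP ...
    timer_start = time.monotonic()       # ... and start the timer
    while not ack_received(segment):
        if time.monotonic() - timer_start >= TIMEOUT:
            send(segment)                        # timeout: retransmit the segment
            timer_start = time.monotonic()       # and restart the timer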
TCP reliable data transfer – fast retransmit
- The TCP sender uses duplicate acknowledgments in addition to timeouts.
- The sender can often detect packet loss well before the timeout event occurs by noting duplicate ACKs. A duplicate ACK is an ACK that reacknowledges a segment for which the sender has already received an earlier acknowledgment.
- So why does the receiver send a duplicate ACK? When the TCP receiver receives a segment with a sequence number that is larger than the next expected, in-order sequence number, it detects a gap in the data stream (a missing segment). It then reacknowledges (i.e. generates a duplicate ACK for) the last in-order byte of data it has received.
- A TCP sender often sends a large number of segments back to back, so if one segment is lost there will likely be many back-to-back duplicate ACKs.
- If the TCP sender receives three duplicate ACKs for the same data, it takes this as an indication that the segment following the segment that has been ACKed three times has been lost.
- In that case, the TCP sender performs a fast retransmit, retransmitting the missing segment before that segment’s timer expires (see the sketch after this list).
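A sketch of the duplicate-ACK rule described above, in Python. It is deliberately simplified: segments is assumed to be a dictionary of sent segments keyed by their starting byte number, retransmit() is a hypothetical helper, and the interaction with the retransmission timer is ignored.

dup_ack_count = 0
last_ack = None

def on_ack(ack_num, segments, retransmit):
    # Count duplicate ACKs: an ACK that reacknowledges already-acknowledged data.
    global dup_ack_count, last_ack
    if ack_num == last_ack:
        dup_ack_count += 1
        if dup_ack_count == 3:
            # Fast retransmit: resend the missing segment (the one starting at
            # byte ack_num) without waiting for its timer to expire.
            retransmit(segments[ack_num])
    else:
        # New ACK: remember it and reset the duplicate counter.
        last_ack, dup_ack_count = ack_num, 0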
TCP flow control
- If the application is relatively slow at reading the data, the sender can very easily overflow the connection’s receive buffer by sending too much data too quickly.
- Flow control is a speed-matching service – matching the rate at which the sender is sending against the rate at which the receiving application is reading.
- TCP provides flow control by having the sender maintain a variable called the receive window. The receive window gives the sender an idea of how much free buffer space is available at the receiver.
- TCP is full duplex, so both hosts allocate a receive buffer and define a receive window variable. The receive window is always less than or equal to the size of the receive buffer, and its value is carried in the TCP header (a calculation sketch follows this list).
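A sketch in Python of how the receiver could compute the window it advertises; the buffer size is an assumption and the variable names follow the usual LastByteRcvd / LastByteRead convention.

RCV_BUFFER = 4096                                # assumed receive buffer size in bytes

def receive_window(last_byte_rcvd, last_byte_read):
    buffered = last_byte_rcvd - last_byte_read   # data received but not yet read
    rwnd = RCV_BUFFER - buffered                 # spare room left in the buffer
    return rwnd                                  # always <= RCV_BUFFER

# The sender keeps the amount of unacknowledged data it has in flight
# (LastByteSent - LastByteAcked) no larger than the most recently advertised rwnd.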
TCP connection management: connection establishment
The TCP client sends a special segment to the TCP server; it contains no application layer data, but the SYN bit in its header is set to 1. The server allocates TCP buffers and variables to the connection and sends a connection-granted segment back to the client. This segment also has the SYN bit set to 1 in its header, together with an acknowledgment number. The client then allocates its own buffers and variables to the connection and sends a third segment acknowledging the server’s connection-granted segment. In this third segment the SYN bit is set to zero, as the connection is now established. This final stage of the three-way handshake may carry client-to-server data in the segment payload.
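The three segments can be written out explicitly; a sketch in Python, where client_isn and server_isn are made-up initial sequence numbers.

client_isn, server_isn = 1000, 5000      # made-up initial sequence numbers

# Step 1: client -> server, SYN segment, no application layer data.
syn    = {"SYN": 1, "seq": client_isn}

# Step 2: server -> client, connection-granted (SYNACK) segment;
# the server has allocated its buffers and variables.
synack = {"SYN": 1, "ACK": 1, "seq": server_isn, "ack": client_isn + 1}

# Step 3: client -> server, SYN bit now 0; this segment may already carry data.
ack    = {"SYN": 0, "ACK": 1, "seq": client_isn + 1, "ack": server_isn + 1,
          "data": b"optional client-to-server data"}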
TCP connection management: connection termination
- Either of the two processes participating in a TCP connection can end the connection. When a connection ends, the “resources” (i.e. the buffers and variables) in the hosts are deallocated.
- Suppose the client application process issues a close command. This causes the client TCP to send a special TCP segment to the server process with the FIN bit set to 1. When the server receives this segment, it sends an ACK. The server then sends its own shutdown segment (FIN bit set to 1). Finally, the client acknowledges the server’s shutdown segment. At this point, all the resources in the two hosts are deallocated.
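The four teardown segments can be sketched in the same style as the handshake above (purely illustrative; here the client closes first, as in the description, although either side may initiate the close).

# Client application issues close(); the client TCP sends a FIN segment.
client_fin = {"FIN": 1}
# The server acknowledges the client’s FIN.
server_ack = {"ACK": 1}
# The server then sends its own shutdown (FIN) segment.
server_fin = {"FIN": 1}
# The client acknowledges the server’s FIN; buffers and variables are deallocated.
client_ack = {"ACK": 1}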
TCP congestion control
TCP uses end-to-end congestion control rather than network-assisted congestion control, because the IP layer provides no explicit congestion feedback to the end systems. Each TCP sender limits the rate at which it sends traffic into its connection as a function of perceived network congestion: congestion is inferred from loss events (a timeout or duplicate acknowledgments), while a steady stream of arriving ACKs is taken to mean the path is congestion-free. The sender keeps a variable called the congestion window (cwnd), which limits the rate at which it can send traffic into the network. The sender increases its rate as ACKs arrive until a loss event occurs, at which point it decreases its transmission rate and then begins probing for bandwidth again (bandwidth probing).
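A rough sketch of this bandwidth-probing behaviour in Python, following the additive-increase, multiplicative-decrease idea; slow start, ssthresh and the precise rules are omitted, and MSS is a made-up constant.

MSS = 1000           # assumed maximum segment size in bytes

cwnd = MSS           # congestion window: limits the sender’s unacknowledged data in flight

def on_ack_received():
    # No congestion perceived: keep probing for bandwidth by growing cwnd,
    # by roughly one MSS per round-trip time (congestion avoidance).
    global cwnd
    cwnd += MSS * MSS // cwnd

def on_loss_event():
    # Loss perceived as congestion: cut the sending rate (here simply halve cwnd).
    global cwnd
    cwnd = max(cwnd // 2, MSS)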
Why would we use UDP?
UDP
- Application-level control over what data is sent, and when. Under UDP, as soon as an application process passes data to UDP, UDP will package the data inside a UDP segment and immediately pass the segment to the network layer.
- Real-time applications often require a minimum sending rate, do not want to overly delay segment transmission, and can tolerate some data loss.
- No connection establishment. UDP just blasts away without any formal preliminaries, so it does not introduce any delay to establish a connection.
- No connection state. UDP does not maintain connection state and does not track any of the parameters that TCP does. For this reason, a server devoted to a particular application can typically support many more active clients when the application runs over UDP rather than TCP.
- Small packet header overhead. The UDP segment has only 8 bytes of header overhead (see the sketch after this list).
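A minimal UDP sketch in Python (placeholder host and port): there is no handshake, and sendto() hands the datagram to the network layer straight away.

import socket

# No connection establishment: create the socket and send immediately.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"application data", ("server.example.com", 19000))   # placeholder address
# Each UDP segment carries only 8 bytes of header overhead.
udp.close()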
TCP
- TCP has a congestion control mechanism that throttles the transport layer TCP sender when one or more links between the source and destination hosts become excessively congested.
- TCP will continue to resend a segment until the receipt of the segment has been acknowledged by the destination, regardless of how long reliable delivery takes.
- TCP uses a three-way handshake before it starts to transfer data. For example, TCP connection-establishment delay in HTTP is an important contributor to the delays associated with downloading Web documents.
- TCP maintains connection state in the end systems. This connection state includes receive and send buffers, congestion-control parameters, and sequence and acknowledgment number parameters: information needed to implement TCP’s reliable data transfer service and to provide congestion control.
- The TCP segment has 20 bytes of header overhead in every segment.