Chapter 3: Transport Layer Flashcards
What is the role of the transport layer in the TCP/IP model?
It acts as a bridge between the network layer and the application layer,
transferring data between applications on different network nodes.
What are the key responsibilities of the transport layer?
- Providing efficient and reliable data transfer between application processes.
- Key services include:
- port addressing
- segmentation
- flow control
- error control
- congestion control
What is port addressing and why is it important?
The transport layer uses port numbers to identify the different applications (processes) running on a host.
Each application is assigned a unique port number,
* allowing multiple applications to communicate simultaneously on the same host
What is segmentation in the transport layer?
Breaking down large data streams into smaller, manageable packets for efficient transmission across networks
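The segmentation idea above can be sketched in a few lines of Python. This is an illustrative toy, not real TCP code; the function name and the use of a fixed MSS are assumptions for the example.

```python
# Illustrative sketch: splitting an application byte stream into
# MSS-sized segments, as the transport layer does before transmission.
MSS = 1460  # typical TCP maximum segment size (1500-byte Ethernet MTU minus 40 header bytes)

def segment_stream(data: bytes, mss: int = MSS) -> list[bytes]:
    """Break a byte stream into chunks of at most `mss` bytes."""
    return [data[i:i + mss] for i in range(0, len(data), mss)]

segments = segment_stream(b"x" * 4000)
# 4000 bytes -> three segments of 1460, 1460, and 1080 bytes
```

Reassembling the chunks in order recovers the original stream, which is why the receiver needs sequence information when segments arrive out of order.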
What is flow control in the transport layer?
Regulates the data flow between sender and receiver, preventing overwhelming the receiver’s processing capacity
What is error control in the transport layer?
Detects and corrects errors that occur during transmission using techniques like checksums and acknowledgements
What is congestion control in the transport layer?
Dynamically adjusts the data transfer rate based on network conditions to avoid congestion and performance degradation
What are the main characteristics of TCP?
- Connection-oriented
- Reliable
- Congestion control
- Flow control
What are the main characteristics of UDP?
- Connectionless
- Unreliable
- Prioritizes speed
How does the transport layer interact with the application layer?
- Receives data from applications in the form of application messages
- Segments the data into smaller, manageable packets.
- Adds header information, including source and destination port numbers.
- Hands over the packets to the network layer for routing
At a high level, what type of communication does the transport layer handle?
Process-to-process communication
What actions does the transport layer perform on the sender side?
- Application layer creates a message and drops it into the socket.
- Transport layer determines the segment header field values.
- Transport layer creates the segment and passes it to the network layer
What actions does the transport layer perform on the receiver side?
- Receives segment from network layer.
- Extracts application-layer message.
- Checks header values to ensure segment is not corrupted.
- Demultiplexes message up to application via socket
What is multiplexing?
Combining multiple data streams into a single stream for efficient transmission
What is demultiplexing?
Separating a combined data stream into individual data streams for specific applications.
Allows multiple applications on a host to share the same network connection by using port numbers to identify different application endpoints.
What are the benefits of multiplexing?
- Optimizes network bandwidth utilization.
- Enables efficient utilization of network resources.
- Reduces overall transmission time.
- Enables communication for multiple applications on a single connection
How does multiplexing work?
- Transport layer assigns unique identifiers (port numbers) to each data stream
- These identifiers are embedded within the data packets.
- Packets from different streams are interleaved and sent as a single data stream
What is demultiplexing?
Demultiplexing takes a combined data stream and separates it into individual data streams for specific applications
What are the benefits of demultiplexing?
- Efficient Resource Utilization:
- Improved Performance
- Scalability
- Security
How does demultiplexing work?
- Each data packet in the combined stream carries a unique identifier, typically a port number.
- The transport layer uses these port numbers to identify the destination application for each packet.
- Based on port number, packet is forwarded to the appropriate application on the receiving device
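The steps above can be sketched as a toy demultiplexer: a table mapping destination port numbers to per-application delivery queues. All names here (the `app_queues` table, the `demultiplex` function, the example ports) are illustrative, not from any real stack.

```python
# Toy sketch of port-based demultiplexing: the "transport layer" looks
# up each packet's destination port and hands the payload to the queue
# of the application bound to that port.
from collections import defaultdict

app_queues: dict[int, list[bytes]] = defaultdict(list)

def demultiplex(dest_port: int, payload: bytes) -> None:
    # Deliver the payload to the application bound to dest_port.
    app_queues[dest_port].append(payload)

# Packets for a DNS resolver (port 53) and a web server (port 80)
# arrive interleaved on the same host:
demultiplex(53, b"dns-query")
demultiplex(80, b"http-request")
demultiplex(53, b"dns-query-2")
# Each application's queue now holds only its own payloads, in order.
```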
What happens during multiplexing at the sender?
The **transport layer** multiplexes data from multiple processes, placing each chunk into a segment and adding a transport-layer header;
the port numbers in this header are later used during demultiplexing
What is the purpose of demultiplexing at the receiver?
Use header info to deliver received segments to correct socket
Demultiplexing at receiver: steps
- Host receives IP datagrams, each with source and destination IP addresses,
- each carrying one transport-layer segment with source and destination port numbers
- Host uses IP addresses and port numbers to direct segment to appropriate socket
How does demultiplexing work in UDP?
- Demultiplexing is based on the destination port number only.
- The transport layer uses the destination port number to deliver the data to the corresponding application
What is connectionless demultiplexing?
UDP sends data packets without establishing a connection, prioritizing speed over reliability
How does demultiplexing work in TCP?
Demultiplexing based on the 4-tuple: source and destination IP addresses and port numbers
What is connection-oriented demultiplexing?
TCP establishes a virtual connection before data transfer, ensuring reliable and ordered delivery
How is a TCP socket identified?
By a 4-tuple:
1. source IP address,
2. source port number,
3. destination IP address,
4. and destination port number
What values does the demux use to direct a segment to the appropriate socket?
All four values (4-tuple) to direct segment to appropriate socket
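A toy sketch of why all four values are needed: two clients connecting to the same server port still map to distinct sockets, because their 4-tuples differ. The dictionary, function name, and addresses below are illustrative, not a real connection table.

```python
# Illustrative sketch: a TCP demux table keyed by the full 4-tuple.
connections: dict[tuple, str] = {}

def tcp_demux_key(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> tuple:
    return (src_ip, src_port, dst_ip, dst_port)

# Two clients connect to the same web server (10.0.0.1:80), even using
# the same source port number on their own hosts:
connections[tcp_demux_key("192.168.1.5", 51000, "10.0.0.1", 80)] = "socket A"
connections[tcp_demux_key("192.168.1.6", 51000, "10.0.0.1", 80)] = "socket B"
# Same destination port, but distinct 4-tuples -> distinct sockets.
```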
What are port numbers?
Logical identifiers assigned to specific applications or services on a network device, differentiating between multiple programs running on the same device
What are well-known ports?
Standardized ports (0-1023) assigned by IANA to essential services like HTTP (80), FTP (21), and SSH (22)
What are registered ports?
Ports (1024-49151) assigned by specific organizations for commonly used applications and services
What are dynamic/private ports?
Ports (49152-65535) used by applications dynamically for temporary connections, often assigned by the operating system
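OS assignment of a dynamic port can be observed directly: binding to port 0 asks the operating system to pick an ephemeral port. A small sketch using Python's standard `socket` module:

```python
# Binding to port 0 lets the OS choose an ephemeral port, which is how
# dynamic/private ports are typically assigned to client sockets.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))            # port 0 = "let the OS choose"
assigned_port = s.getsockname()[1]  # the port the OS actually picked
s.close()
# assigned_port is a nonzero port; it is often, but not always, in the
# 49152-65535 dynamic range, depending on the OS configuration.
```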
Port number in a URL
can be explicitly specified in a URL, for example, http://example.com:8080/page.html uses port 8080 for HTTP [1].
Importance of Port Numbers
Security: filtering traffic based on authorized ports
Efficiency: directing incoming data to the correct application without ambiguity
Standardization: provide consistent access to essential services
When is UDP used?
- When speed is crucial and occasional data loss is acceptable
- For transaction-oriented protocols like DNS
- For stateless applications with a large number of clients, such as streaming multimedia (IPTV)
- For real-time applications like online games
- When multicasting is required [8]
- When error checking or correction is not required
Why UDP for DNS?
Speed is critical for fast lookups, ensuring smooth browsing [8].
Loss tolerance
Small data size: DNS responses are typically compact, minimizing the impact of potential loss [9].
Why UDP for SNMP?
Used for efficient data collection in network management
**Real-time monitoring:** Timely information is crucial for network troubleshooting [9].
Loss tolerance
**Large data volume:** UDP minimizes overhead when dealing with a high volume of network monitoring data [9].
Trade-offs and Considerations with UDP
Unreliability: no guarantees of delivery or ordering
Security concerns: lack of built-in encryption and authentication
Checksum: Included for error detection within the packet, but doesn’t guarantee delivery or order [10].
UDP Header Format
(components and bits)
- Source Port (16 bits): Identifies the sending application (optional; may be zero)
- Destination Port (16 bits): Identifies the receiving application
- Length (16 bits): Total length of the UDP datagram (header + data) in bytes
- Checksum (16 bits): Detects errors during transmission by calculating a value based on the packet’s data and header
- Data: Contains the payload to be transmitted. Its length is determined by the Length field minus the 8-byte UDP header
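The 8-byte layout above can be packed and unpacked with Python's `struct` module. The field values here are made up for illustration; note that a checksum of 0 means "not computed", which is legal for UDP over IPv4.

```python
# Sketch: building the 8-byte UDP header (four 16-bit fields, network
# byte order) described in the card above.
import struct

src_port, dst_port = 12345, 53
payload = b"dns-query"
length = 8 + len(payload)   # Length field = header (8 bytes) + data
checksum = 0                # 0 = checksum not computed (allowed in IPv4)

header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
datagram = header + payload

# Unpacking the first 8 bytes recovers the four 16-bit fields:
fields = struct.unpack("!HHHH", datagram[:8])
# fields == (12345, 53, 17, 0)
```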
TCP Features
- Connection-Oriented: establishes a connection before data transfer
- Reliability: guarantees delivery via acknowledgments and retransmission
- Flow Control: matches the sender's rate to the receiver's capacity
- Congestion Control: adapts the sending rate to network conditions
- Full-Duplex Communication: data flows in both directions simultaneously
- Ordered Data Delivery: data is delivered in the order it was sent
- Connection Termination: uses a four-way handshake to close connections gracefully
Ordered Data Delivery
TCP ensures data is delivered to the receiver in the same order it was sent [18].
It uses sequence numbers to track the order of data packets [18].
Out-of-order packets are reordered before delivery to the application [18].
Connection Termination
TCP connections are terminated using a four-way handshake process [19].
Both the sender and receiver exchange control packets (FIN and ACK) to close the connection gracefully
TCP Segment
- A unit of data transmitted over a TCP connection
- Breaks down large data streams into smaller, manageable units
- Consists of a TCP header and a data payload
TCP Header
Contains critical information for reliable data transfer [22].
Typically 20 bytes (160 bits) long, but can be longer due to options [23].
TCP Header Components and Bits:
- Source Port (16 bits): Identifies the sending application.
- Destination Port (16 bits): Identifies the receiving application.
- Sequence Number (32 bits): The byte-stream position of the first data byte in this segment.
- Acknowledgment Number (32 bits): Specifies the next sequence number the sender of this segment expects to receive [24].
- Data Offset (4 bits): Indicates the length of the TCP header in 32-bit words, pointing to the start of the data [24].
- Control Flags (9 bits): URG, ACK, PSH, RST, SYN, FIN, plus congestion-notification flags.
- Window Size (16 bits): The receive window, used for flow control.
- Checksum (16 bits): Error detection over the header, data, and pseudo-header.
- Urgent Pointer (16 bits): Marks urgent data; valid only when URG is set.
- Options (variable): e.g., MSS, Window Scale, Timestamps.
Control Flags (9 bits):
- URG: Urgent Pointer field significant [25].
- ACK: Acknowledgment field significant [25].
- PSH: Push Function [25].
- RST: Reset the connection [25].
- SYN: Synchronize sequence numbers (used for connection establishment) [25, 26].
- FIN: No more data from sender (used for connection termination) [20, 21, 25].
- Other flags for congestion notification (CWR, ECE)
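The flags above occupy individual bits of the header's flags field, so decoding them is a bitmask test. A small sketch (the dictionary and function names are illustrative; the bit positions follow the standard TCP header layout):

```python
# Sketch: decoding TCP control flags from the low byte of the header's
# flags field. Each flag is a single bit.
FLAGS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04,
         "PSH": 0x08, "ACK": 0x10, "URG": 0x20,
         "ECE": 0x40, "CWR": 0x80}

def decode_flags(flag_byte: int) -> set[str]:
    return {name for name, bit in FLAGS.items() if flag_byte & bit}

# A SYN-ACK segment (step 2 of the three-way handshake) has both the
# SYN and ACK bits set:
syn_ack = 0x12
# decode_flags(syn_ack) == {"SYN", "ACK"}
```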
Window Size
A 16-bit field in the TCP header [25].
* Specifies the size of the receive window
* Used for flow control to manage the amount of data the sender can send without acknowledgment
* Indicates the available buffer space on the receiver’s side
Checksum
A 16-bit field in the TCP header [25].
Provides error detection for the TCP header and data [25, 28].
Calculated over the TCP header, TCP data, and a pseudo-header
Urgent Pointer
A 16-bit field in the TCP header [30].
Indicates the end of urgent data, if present [30].
Only significant when the URG control flag is set
Options
A variable-length field in the TCP header [30].
Provides additional control information or parameters for the TCP connection [30].
Examples include Maximum Segment Size (MSS), Timestamps, and Window Scale
TCP Data
Contains the payload or application data to be transmitted [30].
The size can vary depending on the Maximum Segment Size (MSS) and other factors
Round Trip Time (RTT)
The time it takes for a TCP segment to travel from the sender to the receiver and for the acknowledgment (ACK) to return [27].
A measure of propagation delay and processing time [27].
Estimation
TCP implementations estimate the RTT by measuring the time between sending a segment and receiving its ACK [27].
Smoothed RTT (SRTT)
An exponentially weighted moving average of recent RTT samples [32].
Used to account for variations in RTT
RTT Variance (RTTVAR)
Measures the degree of variation or volatility in RTT samples [32].
TCP Timeout
The duration TCP waits for an acknowledgment before retransmitting a segment [32].
Dynamically adjusted based on RTT measurements and estimation
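The adjustment can be sketched along the lines of Jacobson's algorithm (RFC 6298): smooth the RTT samples, track their variability, and set the timeout to SRTT + 4·RTTVAR. This is a simplified illustration; real TCP also clamps the RTO to a minimum value, which is omitted here.

```python
# Sketch of RTO estimation in the style of RFC 6298.
ALPHA, BETA = 1 / 8, 1 / 4   # standard smoothing gains

def update_rto(srtt: float, rttvar: float, sample: float):
    """Fold one RTT sample (ms) into SRTT/RTTVAR and derive the RTO."""
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = srtt + 4 * rttvar
    return srtt, rttvar, rto

# First sample of 100 ms initializes SRTT = sample, RTTVAR = sample / 2
# (per RFC 6298); then a slower 180 ms measurement arrives:
srtt, rttvar = 100.0, 50.0
srtt, rttvar, rto = update_rto(srtt, rttvar, 180.0)
# srtt == 110.0, rttvar == 57.5, rto == 340.0
```

The 4·RTTVAR term makes the timeout conservative when the RTT is volatile, reducing spurious retransmissions.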
Retransmission Timer
Each TCP segment has an associated timer, initially set based on estimated RTT and RTTVAR [33].
If an ACK is not received within the timeout period, the segment is retransmitted [33].
Adaptive Timeout Mechanisms
TCP uses algorithms like Karn’s algorithm and Jacobson’s algorithm to dynamically adjust the timeout value based on RTT measurements and network conditions
Exponential Backoff
In case of multiple consecutive timeouts, TCP often increases the timeout value exponentially [33].
This reduces the frequency of retransmissions and helps mitigate congestion
Connection Establishment - TCP Sender
- The TCP sender initiates connection establishment by sending a SYN (synchronize) segment to the receiver.
- It waits for the receiver's SYN-ACK, confirming receipt of the SYN segment.
- Upon receiving the SYN-ACK, the sender replies with an ACK, completing the three-way handshake and establishing the TCP connection
Segment Transmission - TCP Sender
- Once the connection is established, the sender can start transmitting data segments to the receiver.
- The sender encapsulates application data into TCP segments and sends them over the network to the receiver’s IP address and port number.
- Each segment includes sequence numbers to allow the receiver to reconstruct the data in the correct order
Acknowledgement Reception - TCP Sender
- After sending each segment, the sender waits for an acknowledgment (ACK) from the receiver.
- The sender maintains a timer for each transmitted segment to detect lost or delayed acknowledgments.
- If an acknowledgment is not received within a certain timeout period, the sender may retransmit the segment
Connection Establishment - TCP Receiver
- The TCP receiver listens for incoming connection requests on a specific port.
- When a connection request (SYN segment) is received, the receiver responds with a SYN-ACK (synchronize-acknowledgment) segment to acknowledge the sender’s SYN segment and establish the connection.
- Once the three-way handshake is completed, the receiver is ready to receive data from the sender
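Both sides of the handshake can be seen with Python's standard `socket` module on the loopback interface: `connect()` triggers the SYN/SYN-ACK/ACK exchange under the hood, and `accept()` returns once the receiver's side of the handshake completes. A minimal sketch:

```python
# Minimal loopback sketch: connect() performs the three-way handshake;
# accept() returns a new per-connection socket once it completes.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # OS picks a free port
server.listen(1)
port = server.getsockname()[1]

received = []

def serve():
    conn, _addr = server.accept()    # handshake completes here
    received.append(conn.recv(1024))
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # SYN -> SYN-ACK -> ACK
client.sendall(b"hello")
client.close()
t.join()
server.close()
# received == [b"hello"]
```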
TCP Receiver Segment Reception
- After the connection is established, the receiver listens for incoming TCP segments sent by the sender.
- It receives data segments and acknowledges their receipt by sending acknowledgment (ACK) segments back to the sender.
- The receiver uses sequence numbers in the TCP header to reorder out-of-order segments and reconstruct the original data stream
TCP Receiver vs. TCP Sender Flow Control
- Receiver Controls Sender: The receiver controls the sender to prevent overwhelming its buffers by signaling how much data it can accept.
- Window Size: The receiver advertises its available buffer space to the sender using the window size field (rwnd) in the TCP header.
- Dynamic Adjustment: The receiver dynamically adjusts the window size based on its available buffer space.
- Sender Behavior: The sender regulates its transmission rate based on the receiver’s advertised window size, ensuring it doesn’t send more data than the receiver can handle
TCP Receiver Error Detection and Handling
- The receiver verifies the integrity of incoming TCP segments by calculating the checksum and comparing it with the checksum value provided in the segment.
- If errors are detected (e.g., corrupted segments, invalid checksums), the receiver may discard the segments or request their retransmission (through lack of acknowledgement).
Brief Comparison of TCP and UDP Protocols
- TCP: Reliable transport, connection-oriented (requires connection setup), provides flow control and congestion control, guarantees in-order delivery, more overhead. Suitable for applications requiring reliable data exchange like web browsing (HTTP), email (SMTP), and file transfer (FTP).
- UDP: Unreliable data transfer, connectionless (no connection setup), no flow control or congestion control, unordered delivery, less overhead. Prioritizes speed and is suitable for applications where speed is crucial and occasional data loss is acceptable, like online gaming and real-time audio/video conferencing
What happens if network layer delivers data faster than application layer removes data from socket buffers?
- Buffer Overflow and Data Loss: Incoming data packets may be dropped because there’s no space left in the buffer.
- Increased Latency: The network layer may need to wait for the application to consume data, leading to delays.
- Congestion: Can occur if routers’ buffers also become overwhelmed due to the sustained high rate of data.
- Resource Starvation: If buffers remain full, it can lead to other processes not having access to necessary resources.
- (Solution) Flow Control: Mechanisms like TCP flow control are designed to prevent this by allowing the receiver to signal the sender to slow down
Buffer Overflow (in the context of TCP Flow Control)
- Occurs when the receiver’s buffers become full because the sender is transmitting data faster than the application can consume it.
- Can lead to data loss as new incoming packets cannot be stored
Resource Starvation (in the context of TCP Flow Control)
A situation where, due to prolonged buffer overflow, other processes or applications on the receiving system may not have access to the resources they need to function properly.
Flow Control using Sliding Window Protocol
- A mechanism used by TCP to regulate the rate of data transmission between sender and receiver.
- The receiver advertises its available buffer space to the sender through the “window size” (rwnd) in TCP segments.
- The sender maintains a “send window” representing the range of unacknowledged data it can send, limited by the receiver’s advertised window and the sender’s congestion window (cwnd).
- The sender sends data within the send window and advances the window as acknowledgments are received
Sender Behavior (in Flow Control using Sliding Window)
- The sender regulates its rate of data transmission based on the receiver’s advertised window size (rwnd).
- The sender ensures that the amount of unacknowledged data it has sent does not exceed the current window size, thus preventing overwhelming the receiver.
- The “send window” at the sender determines which packets can be sent
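The constraint above — unacknowledged bytes never exceeding the advertised window — can be shown with a toy simulation. Everything here (the function, the ACK timing model) is a made-up illustration, not a protocol implementation.

```python
# Toy sliding-window sketch: the sender may have at most `rwnd` bytes
# unacknowledged ("in flight") at any time.
def simulate_send(total_bytes: int, seg_size: int, rwnd: int) -> list[int]:
    """Trace the in-flight byte count, assuming an ACK arrives whenever
    the window is full (a deliberately simple timing model)."""
    sent = acked = 0
    trace = []
    while acked < total_bytes:
        if sent < total_bytes and sent - acked + seg_size <= rwnd:
            sent += seg_size          # window has room: send a segment
        else:
            acked += seg_size         # window full (or done): take an ACK
        trace.append(sent - acked)
    return trace

trace = simulate_send(total_bytes=4000, seg_size=1000, rwnd=2000)
# In-flight bytes never exceed the 2000-byte advertised window:
# max(trace) == 2000, and the trace ends at 0 once all data is acked.
```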
Manifestations of Congestion
- Long delays (queueing in router buffers)
- Packet loss (buffer overflow at routers)
Congestion Control vs. Flow Control
- Congestion control: Addresses the issue of too many senders sending too much data too fast for the network to handle.
- Flow control: Addresses the issue of one sender transmitting too fast for one receiver
Congestion Control Causes/Costs
- Throughput can never exceed capacity.
- Delay increases as capacity is approached.
- Loss/retransmission decreases effective throughput.
- Unneeded duplicates further decrease effective throughput.
- Upstream transmission capacity and buffering are wasted on packets lost downstream
End-End Congestion Control Taken by TCP
- TCP operates with no explicit feedback from the network layer.
- The sender infers congestion from what it can observe end-to-end:
  - Timeout
  - Duplicate ACKs
  - Increasing measured RTT (a delay-based signal)
- When congestion is detected, the sender should decrease its sending rate
Network-Assisted Congestion Control
- Involves routers providing direct feedback to sending/receiving hosts with flows passing through a congested router.
- Routers may indicate the congestion level or explicitly set the sending rate
Explicit Congestion Notification (ECN)
- A network-assisted congestion control mechanism.
- Routers set a flag in the IP header of packets passing through congested areas, indicating congestion.
- Endpoints respond by reducing their transmission rates, helping to alleviate congestion before packet loss occurs
Asynchronous Transfer Mode (ATM) Congestion Control
- Primarily managed through network-assisted mechanisms.
- Relies on feedback and signaling from network elements like switches and routers.
- ATM provides a framework for implementing network-assisted congestion control techniques
AIMD Congestion Control
- Stands for Additive Increase, Multiplicative Decrease, a core algorithm used by TCP for congestion control.
- Aims to optimize data transmission rates while avoiding congestion.
- Key aspects:
  - Additive Increase
  - Multiplicative Decrease
  - Congestion Detection
  - Dynamic Adaptation
Additive Increase (in AIMD)
- While no congestion is detected, the sender gradually increases the congestion window (CWND).
- The increase is additive: typically one MSS per round-trip time.
- This allows the sender to probe the network’s available capacity
Multiplicative Decrease (in AIMD)
- When congestion is detected (usually by packet loss or timeouts), the sender sharply reduces the CWND by a multiplicative factor (TCP classically halves it).
- Aims to quickly back off from sending too much data
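The additive-increase/multiplicative-decrease rule produces TCP's characteristic sawtooth window pattern, which a few lines of Python can show. The loss schedule here is made up purely for illustration; this is not a TCP implementation.

```python
# Toy AIMD sketch: add one MSS per round while all is well, halve the
# congestion window (cwnd) when a loss is observed.
MSS = 1  # work in MSS units for readability

def aimd(rounds: int, loss_rounds: set[int], start: int = 1) -> list[int]:
    cwnd, trace = start, []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease
        else:
            cwnd += MSS                # additive increase
        trace.append(cwnd)
    return trace

trace = aimd(rounds=8, loss_rounds={4})
# cwnd climbs 2, 3, 4, 5, is halved to 2 at the loss, then climbs again:
# trace == [2, 3, 4, 5, 2, 3, 4, 5]
```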
Congestion Detection (in AIMD)
TCP relies on several methods:
- Packet loss: Interpreted as a sign of overload.
- Timeouts: Lack of acknowledgment suggests potential congestion.
- Fast Retransmit: Receipt of duplicate ACKs implies earlier segment loss due to congestion
Dynamic Adaptation (in AIMD)
The rate of increase and decrease in CWND can be adjusted based on:
- Network conditions: Faster increases when favorable, slower when congested.
- Round-trip time (RTT): Longer RTTs might lead to slower increases or quicker decreases.
- Fairness: Rate changes can be adjusted to ensure fair resource sharing
Benefits of AIMD
- Simple and efficient: Relatively easy to implement and computationally inexpensive.
- Adaptive: Adjusts to network conditions dynamically.
- Fairness: Promotes fair sharing of resources among multiple flows
Limitations of AIMD
- Slow convergence: Can take time to reach the optimal transmission rate, especially after congestion events.
- Sensitivity to loss events: Large rate reductions due to single losses can be inefficient