Chapter 3: Transport Layer Flashcards

1
Q

What is the role of the transport layer in the TCP/IP model?

A

Acts as a bridge between the network layer and the application layer, transferring data between applications on different network nodes.

2
Q

What are the key responsibilities of the transport layer?

A
  • Builds efficiency and reliability on top of the network layer.
  • Key services: port addressing, segmentation, flow control, error control, and congestion control
3
Q

What is port addressing and why is it important?

A

The transport layer uses port numbers to identify different applications on a host.

Each application is assigned a unique port number, allowing multiple applications to communicate simultaneously on the same host.
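
To make this concrete, here is a minimal Python sketch (the loopback address and ports 9001/9002 are arbitrary choices for illustration): two UDP sockets on the same host each bind their own port, and a sender reaches each application by its port number.

```python
import socket

# Two independent "applications" on the same host, each bound to its own UDP port.
app_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_a.bind(("127.0.0.1", 9001))

app_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_b.bind(("127.0.0.1", 9002))

# A sender addresses each application by (IP address, port number).
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello A", ("127.0.0.1", 9001))
sender.sendto(b"hello B", ("127.0.0.1", 9002))

# Each socket receives only the datagram addressed to its own port.
print(app_a.recvfrom(1024))   # (b'hello A', ...)
print(app_b.recvfrom(1024))   # (b'hello B', ...)
```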

4
Q

What is segmentation in the transport layer?

A

Breaking down large data streams into smaller, manageable packets for efficient transmission across networks

5
Q

What is flow control in the transport layer?

A

Regulates the flow of data between sender and receiver so that the sender does not overwhelm the receiver's processing capacity or buffer space

6
Q

What is error control in the transport layer?

A

Detects and corrects errors that occur during transmission using techniques like checksums and acknowledgements

7
Q

What is congestion control in the transport layer?

A

Dynamically adjusts the data transfer rate based on network conditions to avoid congestion and performance degradation

8
Q

What are the main characteristics of TCP?

A
  1. Connection-oriented
  2. Reliable
  3. Congestion control
  4. Flow control
9
Q

What are the main characteristics of UDP?

A

Connectionless
Unreliable
Prioritizes speed

10
Q

How does the transport layer interact with the application layer?

A
  1. Receives data from applications in the form of application messages
  2. Segments the data into smaller, manageable packets.
  3. Adds header information, including source and destination port numbers.
  4. Hands over the packets to the network layer for routing
11
Q

At a high level, what type of communication does the transport layer handle?

A

Process-to-process communication

12
Q

What actions does the transport layer perform on the sender side?

A
  1. Application layer creates a message and drops it into a socket.
  2. Transport layer determines segment header field values.
  3. Transport layer creates the segment and passes it to the network layer
13
Q

What actions does the transport layer perform on the receiver side?

A
  1. Receives segment from network layer.
  2. Extracts application-layer message.
  3. Checks header values to ensure segment is not corrupted.
  4. Demultiplexes message up to application via socket
14
Q

What is multiplexing ?

A

Combining multiple data streams into a single stream for efficient transmission

15
Q

What is demultiplexing?

A

Separating a combined data stream into individual data streams for specific applications.

Allows multiple applications on a host to share the same network connection by using port numbers to identify different application endpoints.

16
Q

What are the benefits of multiplexing?

A
  1. Optimizes network bandwidth utilization.
  2. Enables efficient utilization of network resources.
  3. Reduces overall transmission time.
  4. Enables communication for multiple applications on a single connection
17
Q

How does multiplexing work?

A
  1. Transport layer assigns unique identifiers (port numbers) to each data stream
  2. These identifiers are embedded within the data packets.
  3. Packets from different streams are interleaved and sent as a single data stream
18
Q

What is demultiplexing?

A

Demultiplexing takes a combined data stream and separates it into individual data streams for specific applications

19
Q

What are the benefits of demultiplexing?

A
  1. Efficient Resource Utilization:
  2. Improved Performance
  3. Scalability
  4. Security
20
Q

How does demultiplexing work?

A
  1. Each data packet in the combined stream carries a unique identifier, typically a port number.
  2. The transport layer uses these port numbers to identify the destination application for each packet.
  3. Based on port number, packet is forwarded to the appropriate application on the receiving device
21
Q

What happens during multiplexing at the sender?

A

The **transport layer** multiplexes data from multiple processes (sockets): it places each chunk of data in a segment and adds a transport-layer header containing the port numbers.

This header information is later used during demultiplexing at the receiver.

22
Q

What is the purpose of demultiplexing at the receiver?

A

Use header info to deliver received segments to correct socket

23
Q

Demultiplexing at receiver: steps

A
  1. Host receives IP datagrams, each with source and destination IP addresses,
  2. each carrying one transport-layer segment with source and destination port numbers
  3. Host uses IP addresses and port numbers to direct segment to appropriate socket
24
Q

How does demultiplexing work in UDP?

A
  • Demultiplexing is based on the destination port number only.
  • The transport layer checks the destination port number in the UDP segment and delivers the data to the corresponding application
25
Q

What is connectionless demultiplexing?

A

UDP sends data packets without establishing a connection, prioritizing speed over reliability

26
Q

How does demultiplexing work in TCP?

A

Demultiplexing is based on the 4-tuple: source IP address, source port number, destination IP address, and destination port number

27
Q

What is connection-oriented demultiplexing?

A

TCP establishes a virtual connection before data transfer, ensuring reliable and ordered delivery

28
Q

How is a TCP socket identified?

A

By a 4-tuple:
1. source IP address,
2. source port number,
3. destination IP address,
4. and destination port number
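
A small Python sketch of the 4-tuple (assumptions: a throwaway listening socket on the loopback address; port 0 tells the OS to pick any free port): after connect(), getsockname() and getpeername() expose the four values that identify the connection.

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))       # port 0: let the OS choose a free port
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())   # handshake completes against the listening socket

# The 4-tuple identifying this TCP connection, from the client's point of view.
src_ip, src_port = cli.getsockname()
dst_ip, dst_port = cli.getpeername()
print((src_ip, src_port, dst_ip, dst_port))
```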

29
Q

What values does the demux use to direct a segment to the appropriate socket?

A

All four values (4-tuple) to direct segment to appropriate socket

30
Q

What are port numbers?

A

Logical identifiers assigned to specific applications or services on a network device, differentiating between multiple programs running on the same device

31
Q

What are well-known ports?

A

Standardized ports (0-1023) assigned by IANA to essential services like HTTP (80), FTP (21), and SSH (22)

32
Q

What are registered ports?

A

Ports (1024-49151) assigned by specific organizations for commonly used applications and services

33
Q

What are dynamic/private ports?

A

Ports (49152-65535) used by applications dynamically for temporary connections, often assigned by the operating system

34
Q

Port number in a URL

A

A port number can be explicitly specified in a URL; for example, http://example.com:8080/page.html uses port 8080 (instead of the default port 80) for HTTP [1].
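
A short Python sketch using the standard urllib.parse module (the URL is the one from the card): it extracts the explicit port, and shows that the port is reported as absent when the scheme's default applies.

```python
from urllib.parse import urlsplit

parts = urlsplit("http://example.com:8080/page.html")
print(parts.hostname, parts.port)                      # example.com 8080

# Without an explicit port, the scheme's default applies (80 for http, 443 for https).
print(urlsplit("http://example.com/page.html").port)   # None -> use default 80
```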

35
Q

Importance of Port Numbers

A

Security: filtering traffic based on authorized ports

Efficiency: direct incoming traffic straight to the intended application process

Standardization: provide consistent access to essential services

36
Q

When is UDP used?

A

When speed is crucial and occasional data loss is acceptable:

* For transaction-oriented protocols like DNS
* For stateless applications with a large number of clients, such as streaming multimedia (IPTV)
* For real-time applications like online games
* When multicasting is required [8]
* When error checking or correction is not required

37
Q

Why UDP for DNS?

A

Speed is critical for fast lookups, ensuring smooth browsing [8].
Loss tolerance
Small data size: DNS responses are typically compact, minimizing the impact of potential loss [9].

38
Q

Why UDP for SNMP?

A

Used for efficient data collection in network management
**Real-time monitoring:** Timely information is crucial for network troubleshooting [9].
**Loss tolerance**
**Large data volume:** UDP minimizes overhead when dealing with a high volume of network monitoring data [9].

39
Q

Trade-offs and Considerations with UDP

A

Unreliability: no guarantees of delivery or ordering

Security concerns: Lack of encryption and authentication

Checksum: Included for error detection within the packet, but doesn’t guarantee delivery or order [10].

40
Q

UDP Header Format
(components and bits)

A
  1. Source Port (16 bits): Identifies the sending application
  2. Destination Port (16 bits): Identifies the receiving application
  3. Length (16 bits): Total length of the UDP datagram (header + data) in bytes
  4. Checksum (16 bits): Detects errors during transmission by calculating a value based on the packet’s data and header
  5. Data: Contains the payload to be transmitted. Its length is determined by the Length field minus the 8-byte UDP header
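
As an illustration, a minimal Python sketch that unpacks these four 16-bit fields from a raw datagram (the sample ports 5353 -> 53 and the 4-byte payload are made up for the demo):

```python
import struct

def parse_udp_header(datagram: bytes):
    """Split a raw UDP datagram into its four 16-bit header fields and its payload."""
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", datagram[:8])
    payload = datagram[8:length]          # Length counts the 8-byte header plus data
    return src_port, dst_port, length, checksum, payload

# Hypothetical datagram: ports 5353 -> 53, length 12 (8-byte header + 4-byte payload).
sample = struct.pack("!HHHH", 5353, 53, 12, 0) + b"test"
print(parse_udp_header(sample))
```
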
41
Q

TCP Features

A
  • Connection-Oriented:
  • Reliability:
  • Flow Control:
  • Congestion Control:
  • Full-Duplex Communication:
  • Ordered Data Delivery:
  • Connection Termination: Uses a four-way handshake to close connections gracefully
42
Q

Ordered Data Delivery

A

TCP ensures data is delivered to the receiver in the same order it was sent [18].
It uses sequence numbers to track the order of data packets [18].
Out-of-order packets are reordered before delivery to the application [18].

43
Q

Connection Termination

A

TCP connections are terminated using a four-way handshake process [19].
Both the sender and receiver exchange control packets (FIN and ACK) to close the connection gracefully

44
Q

TCP Segment

A
  • A unit of data transmitted over a TCP connection
  • Breaks down large data streams into smaller, manageable units
  • Consists of a TCP header and a data payload
45
Q

TCP Header

A

Contains critical information for reliable data transfer [22].
Typically 20 bytes (160 bits) long, but can be longer due to options [23].

46
Q

TCP Header Components and Bits:

A
  1. Source Port (16 bits):
  2. Destination Port (16 bits):
  3. Sequence Number (32 bits):
  4. Acknowledgment Number (32 bits): Specifies the next sequence number expected by the sender [24].
  5. Data Offset (4 bits): Indicates the length of the TCP header in 32-bit words, pointing to the start of the data [24].
  6. Control Flags (9 bits):
  7. Window Size (16 bits):
  8. Checksum (16 bits):
  9. Urgent Pointer (16 bits):
  10. Options (variable):
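
A minimal Python sketch of the fixed 20-byte header layout (the sample SYN segment and its port numbers are fabricated for illustration; options, if present, would follow the fixed part):

```python
import struct

def parse_tcp_header(segment: bytes):
    """Decode the 20-byte fixed part of a TCP header."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4   # Data Offset: header length in 32-bit words
    flags = offset_flags & 0x01FF           # low 9 bits: NS CWR ECE URG ACK PSH RST SYN FIN
    return {"src_port": src_port, "dst_port": dst_port, "seq": seq, "ack": ack,
            "header_len": header_len, "flags": flags, "window": window,
            "checksum": checksum, "urgent_ptr": urgent}

# Hypothetical SYN segment: 5-word (20-byte) header, SYN flag only (0x002).
syn = struct.pack("!HHIIHHHH", 12345, 80, 0, 0, (5 << 12) | 0x002, 65535, 0, 0)
print(parse_tcp_header(syn))
```
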
47
Q

Control Flags (9 bits):

A
  1. URG: Urgent Pointer field significant [25].
  2. ACK: Acknowledgment field significant [25].
  3. PSH: Push Function [25].
  4. RST: Reset the connection [25].
  5. SYN: Synchronize sequence numbers (used for connection establishment) [25, 26].
  6. FIN: No more data from sender (used for connection termination) [20, 21, 25].
  7. Other flags for congestion notification (CWR, ECE)
48
Q

Window Size

A

A 16-bit field in the TCP header [25].
* Specifies the size of the receive window
* Used for flow control to manage the amount of data the sender can send without acknowledgment
* Indicates the available buffer space on the receiver’s side

49
Q

Checksum

A

A 16-bit field in the TCP header [25].
Provides error detection for the TCP header and data [25, 28].
Calculated over the TCP header, TCP data, and a pseudo-header
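
A small Python sketch of the one's-complement Internet checksum this card refers to (computed here over an arbitrary byte string only; a real TCP checksum is computed over the header, data, and the pseudo-header mentioned above):

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, as used by TCP, UDP and IP."""
    if len(data) % 2:                              # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

# Verification property: summing the data together with its checksum yields 0.
payload = b"example payload!"                      # even-length demo data
csum = internet_checksum(payload)
print(internet_checksum(payload + csum.to_bytes(2, "big")) == 0)   # True
```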

50
Q

Urgent Pointer

A

A 16-bit field in the TCP header [30].
Indicates the end of urgent data, if present [30].
Only significant when the URG control flag is set

51
Q

Options

A

A variable-length field in the TCP header [30].
Provides additional control information or parameters for the TCP connection [30].
Examples include Maximum Segment Size (MSS), Timestamps, and Window Scale

52
Q

TCP Data

A

Contains the payload or application data to be transmitted [30].
The size can vary depending on the Maximum Segment Size (MSS) and other factors

53
Q

Round Trip Time (RTT)

A

The time it takes for a TCP segment to travel from the sender to the receiver and for the acknowledgment (ACK) to return [27].
A measure of propagation delay and processing time [27].

54
Q

RTT Estimation

A

TCP implementations estimate the RTT by measuring the time between sending a segment and receiving its ACK [27].

55
Q

Smoothed RTT (SRTT)

A

An exponentially weighted moving average of recent RTT samples [32].
Used to account for variations in RTT

56
Q

RTT Variance (RTTVAR)

A

Measures the degree of variation or volatility in RTT samples [32].

57
Q

TCP Timeout

A

The duration TCP waits for an acknowledgment before retransmitting a segment [32].
Dynamically adjusted based on RTT measurements and estimation

58
Q

Retransmission Timer

A

Each TCP segment has an associated timer, initially set based on estimated RTT and RTTVAR [33].
If an ACK is not received within the timeout period, the segment is retransmitted [33].

59
Q

Adaptive Timeout Mechanisms

A

TCP uses algorithms like Karn’s algorithm and Jacobson’s algorithm to dynamically adjust the timeout value based on RTT measurements and network conditions

60
Q

Exponential Backoff

A

In case of multiple consecutive timeouts, TCP often increases the timeout value exponentially [33].
This reduces the frequency of retransmissions and helps mitigate congestion
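
Pulling the last few cards together, here is a minimal Python sketch of RTT-based timeout estimation with exponential backoff, assuming the standard RFC 6298 constants (alpha = 1/8, beta = 1/4, K = 4, 1-second minimum RTO); the RTT samples are invented for the demo.

```python
class RtoEstimator:
    """Simplified retransmission-timeout estimator in the style of RFC 6298."""

    def __init__(self, first_rtt: float):
        self.srtt = first_rtt                  # smoothed RTT (EWMA of samples)
        self.rttvar = first_rtt / 2            # RTT variation estimate
        self.rto = self.srtt + 4 * self.rttvar

    def on_rtt_sample(self, rtt: float) -> float:
        """Update SRTT/RTTVAR with exponentially weighted moving averages."""
        self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
        self.srtt = 0.875 * self.srtt + 0.125 * rtt
        self.rto = max(1.0, self.srtt + 4 * self.rttvar)   # enforce 1 s minimum
        return self.rto

    def on_timeout(self) -> float:
        """Exponential backoff: double the timeout after a retransmission timeout."""
        self.rto *= 2
        return self.rto

est = RtoEstimator(first_rtt=0.100)            # first measured RTT: 100 ms
for sample in (0.110, 0.090, 0.300):           # made-up later samples (seconds)
    print("RTO after sample:", round(est.on_rtt_sample(sample), 3))
print("RTO after timeout:", round(est.on_timeout(), 3))
```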

61
Q

Connection Establishment - TCP Sender

A

* The TCP sender initiates connection establishment by sending a SYN (synchronize) segment to the receiver.
* It waits for the receiver's SYN-ACK segment, confirming receipt of the SYN.
* Upon receiving the SYN-ACK, the sender replies with an ACK, completing the three-way handshake and establishing the TCP connection.
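
A minimal Python sketch of the sender side (example.com:80 is just an illustrative, publicly reachable endpoint, and running this needs network access): connect() performs the three-way handshake in the kernel and returns once the connection is established.

```python
import socket

# create_connection() blocks while SYN, SYN-ACK and ACK are exchanged.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    # Once connected, application data can be handed to TCP for segmentation.
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(conn.recv(200).decode(errors="replace"))
```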

62
Q

Segment Transmission - TCP Sender

A

* Once the connection is established, the sender can start transmitting data segments to the receiver.
* The sender encapsulates application data into TCP segments and sends them over the network to the receiver’s IP address and port number.
* Each segment includes sequence numbers to allow the receiver to reconstruct the data in the correct order

63
Q

Acknowledgement Reception - TCP Sender

A

* After sending each segment, the sender waits for an acknowledgment (ACK) from the receiver.
* The sender maintains a timer for each transmitted segment to detect lost or delayed acknowledgments.
* If an acknowledgment is not received within a certain timeout period, the sender may retransmit the segment

64
Q

Connection Establishment - TCP Receiver

A

* The TCP receiver listens for incoming connection requests on a specific port.
* When a connection request (SYN segment) is received, the receiver responds with a SYN-ACK (synchronize-acknowledgment) segment to acknowledge the sender’s SYN segment and establish the connection.
* Once the three-way handshake is completed, the receiver is ready to receive data from the sender
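
A minimal Python sketch of the receiver side (the loopback address and port 9090 are arbitrary choices for illustration): the socket is bound, put into the listening state, and accept() hands back a new connected socket once a client's handshake completes.

```python
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind(("127.0.0.1", 9090))
    srv.listen()                      # ready to complete incoming three-way handshakes
    conn, addr = srv.accept()         # blocks until a client connects
    with conn:
        data = conn.recv(1024)        # receive data on the established connection
        conn.sendall(data)            # echo it back to the sender
```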

65
Q

TCP Receiver Segment Reception

A

* After the connection is established, the receiver listens for incoming TCP segments sent by the sender.
* It receives data segments and acknowledges their receipt by sending acknowledgment (ACK) segments back to the sender.
* The receiver uses sequence numbers in the TCP header to reorder out-of-order segments and reconstruct the original data stream

66
Q

TCP Receiver vs. TCP Sender Flow Control

A

* Receiver Controls Sender: The receiver controls the sender to prevent overwhelming its buffers by signaling how much data it can accept.
* Window Size: The receiver advertises its available buffer space to the sender using the window size field (rwnd) in the TCP header.
* Dynamic Adjustment: The receiver dynamically adjusts the window size based on its available buffer space.
* Sender Behavior: The sender regulates its transmission rate based on the receiver’s advertised window size, ensuring it doesn’t send more data than the receiver can handle

67
Q

TCP Receiver Error Detection and Handling

A

* The receiver verifies the integrity of incoming TCP segments by calculating the checksum and comparing it with the checksum value provided in the segment.
* If errors are detected (e.g., corrupted segments, invalid checksums), the receiver may discard the segments or request their retransmission (through lack of acknowledgement).

68
Q

Brief Comparison of TCP and UDP Protocols

A

* TCP: Reliable transport, connection-oriented (requires connection setup), provides flow control and congestion control, guarantees in-order delivery, more overhead. Suitable for applications requiring reliable data exchange like web browsing (HTTP), email (SMTP), and file transfer (FTP).
* UDP: Unreliable data transfer, connectionless (no connection setup), no flow control or congestion control, unordered delivery, less overhead. Prioritizes speed and is suitable for applications where speed is crucial and occasional data loss is acceptable, like online gaming and real-time audio/video conferencing

69
Q

What happens if network layer delivers data faster than application layer removes data from socket buffers?

A

* Buffer Overflow and Data Loss: Incoming data packets may be dropped because there’s no space left in the buffer.
* Increased Latency: The network layer may need to wait for the application to consume data, leading to delays.
* Congestion: Can occur if routers’ buffers also become overwhelmed due to the sustained high rate of data.
* Resource Starvation: If buffers remain full, it can lead to other processes not having access to necessary resources.
* (Solution) Flow Control: Mechanisms like TCP flow control are designed to prevent this by allowing the receiver to signal the sender to slow down

70
Q

Buffer Overflow (in the context of TCP Flow Control)

A

* Occurs when the receiver’s buffers become full because the sender is transmitting data faster than the application can consume it.
* Can lead to data loss as new incoming packets cannot be stored

71
Q

Resource Starvation (in the context of TCP Flow Control)

A

A situation where, due to prolonged buffer overflow, other processes or applications on the receiving system may not have access to the resources they need to function properly.

72
Q

Flow Control using Sliding Window Protocol

A

* A mechanism used by TCP to regulate the rate of data transmission between sender and receiver.
* The receiver advertises its available buffer space to the sender through the “window size” (rwnd) in TCP segments.
* The sender maintains a “send window” representing the range of unacknowledged data it can send, limited by the receiver’s advertised window and the sender’s congestion window (cwnd).
* The sender sends data within the send window and advances the window as acknowledgments are received (see the sketch below)
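
A toy Python simulation of the idea, with made-up numbers (10,000 bytes to send, a 1,000-byte MSS, and a fixed 3,000-byte rwnd) and one cumulative ACK simulated per round; real TCP adjusts rwnd dynamically and also respects cwnd.

```python
def sliding_window_demo(total_bytes=10_000, mss=1_000, rwnd=3_000):
    """Toy sender loop: never keep more than rwnd unacknowledged bytes in flight."""
    base = 0          # oldest unacknowledged byte (left edge of the send window)
    next_seq = 0      # next byte to transmit
    while base < total_bytes:
        # Fill the window: send while the unacknowledged data stays within rwnd.
        while next_seq < total_bytes and next_seq - base < rwnd:
            print(f"send bytes [{next_seq}, {min(next_seq + mss, total_bytes)})")
            next_seq += mss
        # Simulate a cumulative ACK for one segment; the window slides to the right.
        base += mss
        print(f"ACK up to {base}; window is now [{base}, {base + rwnd})")

sliding_window_demo()
```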

73
Q

Sender Behavior (in Flow Control using Sliding Window)

A

* The sender regulates its rate of data transmission based on the receiver’s advertised window size (rwnd).
* The sender ensures that the amount of unacknowledged data it has sent does not exceed the current window size, thus preventing overwhelming the receiver.
* The “send window” at the sender determines which packets can be sent

74
Q

Manifestations of Congestion

A

* Long delays (queueing in router buffers)
* Packet loss (buffer overflow at routers)

75
Q

Congestion Control vs. Flow Control

A

* Congestion control: Addresses the issue of too many senders sending too much data too fast for the network to handle.
* Flow control: Addresses the issue of one sender transmitting too fast for one receiver

76
Q

Congestion Control Causes/Costs

A

* Throughput can never exceed capacity.
* Delay increases as capacity is approached.
* Loss/retransmission decreases effective throughput.
* Unneeded duplicates further decrease effective throughput.
* Upstream transmission capacity / buffering wasted for packets lost downstream

77
Q

End-End Congestion Control Taken by TCP

A

* TCP operates with no explicit feedback from the network layer.
* The sender infers congestion from observed loss or delay, indicated by: timeouts, duplicate ACKs, or increased measured RTT.
* When congestion is detected, the sender should decrease its sending rate

78
Q

Network-Assisted Congestion Control

A

* Involves routers providing direct feedback to sending/receiving hosts with flows passing through a congested router.
* Routers may indicate congestion level or explicitly set the sending rate

79
Q

Explicit Congestion Notification (ECN)

A

* A network-assisted congestion control mechanism.
* Routers set a flag in the IP header of packets passing through congested areas, indicating congestion.
* Endpoints respond by reducing their transmission rates, helping to alleviate congestion before packet loss

80
Q

Asynchronous Transfer Mode (ATM) Congestion Control

A

* Primarily managed through network-assisted mechanisms.
* Relies on feedback and signaling from network elements like switches and routers.
* ATM provides a framework for implementing network-assisted congestion control techniques

81
Q

AIMD Congestion Control

A

* Stands for Additive Increase, Multiplicative Decrease, a core algorithm used by TCP for congestion control.
* Aims to optimize data transmission rates while avoiding congestion.
* Key aspects: Additive Increase, Multiplicative Decrease, Congestion Detection, and Dynamic Adaptation (a minimal sketch follows)
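
A minimal Python sketch of the AIMD window update (the MSS, starting window, and event trace are invented for illustration; real TCP additionally uses slow start and fast recovery).

```python
def aimd_update(cwnd: float, mss: float, event: str) -> float:
    """One AIMD step: additive increase per loss-free RTT, multiplicative decrease on loss."""
    if event == "ack_rtt":             # a window's worth of data was ACKed without loss
        return cwnd + mss              # additive increase: grow by one MSS per RTT
    if event == "loss":                # congestion detected (timeout or duplicate ACKs)
        return max(mss, cwnd / 2)      # multiplicative decrease: halve, keep at least 1 MSS
    return cwnd

# Trace a few rounds: grow, hit a loss, back off, then grow again.
cwnd = 10_000.0                        # congestion window in bytes
for event in ["ack_rtt", "ack_rtt", "loss", "ack_rtt"]:
    cwnd = aimd_update(cwnd, 1460.0, event)   # 1460 bytes: a typical Ethernet MSS
    print(event, int(cwnd))
```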

82
Q

Additive Increase (in AIMD)

A

* During idle periods or when no congestion is detected, the sender gradually increases the congestion window (CWND).
* The increase is typically by a fixed increment for each successfully acknowledged segment.
* Allows the sender to probe the network’s capacity

83
Q

Multiplicative Decrease (in AIMD)

A

* When congestion is detected (usually by packet loss or timeouts), the sender aggressively reduces the CWND by a multiplicative factor (e.g., 1/2 or 1/4).
* Aims to quickly back off from sending too much data

84
Q

Congestion Detection (in AIMD)

A

TCP relies on several methods:

Packet loss: Interpreted as a sign of overload.

Timeouts: Lack of acknowledgment suggests potential congestion.

Fast Retransmit: Receipt of duplicate ACKs implies earlier segment loss due to congestion

85
Q

Dynamic Adaptation (in AIMD)

A

The rate of increase and decrease in CWND can be adjusted based on:

Network conditions: Faster increases when favorable, slower when congested.

Round-trip time (RTT): Longer RTTs might lead to slower increases or quicker decreases.

Fairness: Rate changes can be adjusted to ensure fair resource sharing

86
Q

Benefits of AIMD

A

* Simple and efficient: Relatively easy to implement and computationally inexpensive.
* Adaptive: Adjusts to network conditions dynamically.
* Fairness: Promotes fair sharing of resources among multiple flows

87
Q

Limitations of AIMD

A

* Slow convergence: Can take time to reach optimal transmission rate, especially after congestion events.
* Sensitivity to loss events: Large rate reductions due to single losses can be inefficient