Chapter 6 Flashcards

1
Q

What is the primary goal of TCP congestion control?

A

To determine the available capacity in the network to avoid congestion

2
Q

Which mechanisms does TCP use to adjust the size of the congestion window?

A
  • Additive Increase/Multiplicative Decrease (AIMD)
  • Slow Start
3
Q

How does TCP infer the occurrence of network congestion?

A

By detecting timeouts and duplicate ACKs

4
Q

What happens during TCP’s slow start phase?

A

The congestion window increases exponentially

5
Q

What triggers the ‘Fast Retransmit’ mechanism in TCP?

A

The receipt of multiple duplicate ACKs

6
Q

Which TCP mechanism adjusts the congestion window in response to congestion signaled by duplicate ACKs, without waiting for a retransmission timeout?

A

Fast Recovery

7
Q

What does the TCP ‘Additive Increase, Multiplicative Decrease’ strategy help prevent?

A
  • Buffer overflow at the routers
  • Congestion collapse
  • Uncontrolled bandwidth usage
8
Q

During TCP congestion control, when is the ‘Slow Start’ algorithm re-invoked?

A

After a timeout indicating packet loss

9
Q

Which of the following best describes TCP’s Fast Retransmit?

A

It triggers retransmission of a packet if multiple duplicate ACKs are received.

10
Q

In TCP congestion control, what is the purpose of the ‘Congestion Window’?

A

To control the number of bytes the sender is allowed to transmit without an ACK

11
Q

Congestion Window (CWND)

A
  • The congestion window controls the amount of data TCP can send into the network before requiring an acknowledgment.
  • It starts small and adjusts dynamically based on network responses to find an optimal data flow rate without inducing congestion.
12
Q

Additive Increase/Multiplicative Decrease (AIMD):

A
  • AIMD is the primary mechanism of TCP congestion control.
  • The congestion window increases gradually (additively) to probe for usable bandwidth and decreases sharply (multiplicatively) when congestion is detected, usually through packet loss indicated by timeouts or duplicate acknowledgments.
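The AIMD rule on this card can be sketched in a few lines. This is a minimal illustration, not TCP's actual implementation; the fixed MSS value and function names are assumptions for the example.

```python
# Sketch of AIMD window updates, assuming a fixed MSS of 1460 bytes.
MSS = 1460

def on_ack(cwnd):
    """Additive increase: grow by roughly one MSS per RTT
    (so by MSS*MSS/cwnd per ACK)."""
    return cwnd + MSS * MSS / cwnd

def on_loss(cwnd):
    """Multiplicative decrease: halve the window, keeping at least one MSS."""
    return max(cwnd / 2, MSS)

cwnd = 10 * MSS
cwnd = on_loss(cwnd)  # congestion detected: window drops to 5 * MSS
```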
13
Q

Slow Start Phase

A
  • When a TCP connection begins, or after a timeout, TCP uses the slow start algorithm, where the congestion window size increases exponentially each round-trip time (RTT) until it detects packet loss or reaches a threshold.
  • This rapid increase helps quickly utilize available bandwidth.
14
Q

Fast Retransmit

A

Fast Retransmit triggers a retransmission of packets when multiple duplicate ACKs are received, suggesting that a packet has been lost but subsequent packets have been received.

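The duplicate-ACK trigger above can be sketched as a simple counter. This is a hypothetical simplification (real TCP tracks much more state); the threshold of three duplicates is the conventional one.

```python
# Sketch: count duplicate ACKs and trigger fast retransmit at the
# conventional threshold of three duplicates.
DUP_ACK_THRESHOLD = 3

def should_fast_retransmit(acks):
    """Return True once three ACKs repeating the same sequence number
    arrive after the original, suggesting a lost segment."""
    dup_count = 0
    last_ack = None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count >= DUP_ACK_THRESHOLD:
                return True
        else:
            dup_count = 0
            last_ack = ack
    return False

should_fast_retransmit([100, 200, 200, 200, 200])  # three dups of 200 -> True
```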
15
Q

Fast Recovery

A

Fast Recovery adjusts the congestion window in response to congestion detected via Fast Retransmit: instead of falling back to slow start, the sender resumes transmission from the halved window, thus speeding recovery.

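A simplified Reno-style sketch of the idea: on three duplicate ACKs the sender halves its window and continues from there, rather than collapsing to one segment as it would after a timeout. Values are in segments; this omits the window-inflation details of real fast recovery.

```python
# Sketch of fast recovery's response to three duplicate ACKs (simplified):
# halve ssthresh and resume sending from the halved window.
def on_triple_dup_ack(cwnd):
    ssthresh = max(cwnd // 2, 2)
    return ssthresh, ssthresh  # (new cwnd, new ssthresh)

on_triple_dup_ack(16)  # -> (8, 8): halved, not restarted at 1
```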
16
Q

Congestion Avoidance:

A

After the slow start phase, TCP enters congestion avoidance, where it increases the congestion window more slowly to avoid causing congestion. This phase continues until a packet loss is detected.

17
Q

Thresholds and Adjustments:

A

The threshold, or ssthresh, is a boundary between slow start and congestion avoidance. TCP uses this threshold to switch from the exponential growth of the slow start to the linear increase of congestion avoidance.

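The switch around ssthresh can be sketched as follows, with cwnd measured in segments for simplicity. A toy model, not a full TCP state machine: below ssthresh the window grows by one segment per ACK (doubling per RTT), above it by roughly one segment per RTT.

```python
# Sketch of the slow-start / congestion-avoidance switch around ssthresh.
def on_ack(cwnd, ssthresh):
    """Exponential growth below ssthresh (slow start),
    linear growth above it (congestion avoidance)."""
    if cwnd < ssthresh:
        return cwnd + 1           # one segment per ACK -> doubles per RTT
    return cwnd + 1.0 / cwnd      # ~one segment per RTT

def on_timeout(cwnd):
    """Timeout: ssthresh becomes half the window, cwnd restarts at one."""
    return 1, max(cwnd // 2, 2)   # (new cwnd, new ssthresh)

cwnd, ssthresh = on_timeout(16)   # -> cwnd=1, ssthresh=8
```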
18
Q

TCP Self-Clocking

A
  • TCP uses the principle of self-clocking, where the receipt of ACKs paces the sending of new data.
  • This mechanism ensures that data transmission is naturally regulated by the rate at which the network can handle the traffic.
19
Q

Interaction with Network Conditions:

A
  • TCP congestion control adapts not just to packet loss but also variations in round-trip time and other network conditions, adjusting its behavior to maintain efficient and stable data transfer.
20
Q

Protocols and Algorithms:

A

TCP employs several protocols and algorithms to handle various scenarios and network conditions, such as selective acknowledgments (SACK) to improve performance in networks with packet loss or significant reordering.

21
Q

TCP Timers and Their Role in Congestion Control:

A
  • TCP uses various timers, such as the Retransmission Timeout (RTO), to detect lost packets.
  • The behavior of these timers directly influences congestion control dynamics, as timeouts often trigger reductions in the congestion window.
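The RTO mentioned above is computed from smoothed RTT measurements. A sketch of the standard estimator (Jacobson/Karels, with the constants standardized in RFC 6298): the timeout tracks both the smoothed RTT and its variance, so noisy paths get a larger safety margin.

```python
# Sketch of the standard RTO estimator (RFC 6298 constants):
# RTO = SRTT + 4 * RTTVAR, updated with each new RTT sample.
ALPHA, BETA = 1 / 8, 1 / 4

def update_rto(srtt, rttvar, sample):
    """Fold one RTT sample (seconds) into the smoothed estimates."""
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = srtt + 4 * rttvar
    return srtt, rttvar, rto

srtt, rttvar, rto = update_rto(0.100, 0.025, 0.120)
```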
22
Q

Retransmission Strategies

A

TCP’s retransmission strategies, including Fast Retransmit, are crucial for maintaining throughput in the face of packet loss without waiting for the expiration of timers, which can significantly slow down transmission.

23
Q

Network Feedback Utilization

A
  • TCP interprets duplicate ACKs and timeouts as signals of potential network congestion.
  • This feedback mechanism allows TCP to adjust the sending rate preemptively before more severe network congestion occurs.
24
Q

Slow Start Restart (SSR)

A
  • If a connection goes idle for a period, TCP may restart the slow start process, assuming that the state of the network may have changed during the idle period.
  • This cautious approach helps avoid sudden bursts of data that might lead to packet loss.
25
Q

Interaction with Advertised Window

A
  • The congestion window operates in conjunction with the receiver’s advertised window, which is based on the available buffer space at the receiver.
  • This ensures that the sender does not overwhelm the receiver.
26
Q

TCP’s Reaction to Packet Loss:

A

Upon detecting packet loss, TCP not only reduces the congestion window but also enters into a recovery phase where it tries to retransmit lost packets and regain the lost throughput efficiently.

27
Q

Role of ACKs in Data Pacing

A
  • TCP uses the arrival of ACKs to pace out the sending of new packets.
  • This is known as ACK-clocking: each incoming ACK allows the sender to send one or more new packets.
  • This helps smooth the flow of data and adjust to the available network capacity.
28
Q

Congestion Window Validation (CWV)

A
  • TCP implementations may include mechanisms like Congestion Window Validation to reduce the size of the congestion window after a connection has been idle for an extended period, reflecting the assumption that network conditions may have changed.
29
Q

Sensitivity to Round Trip Time (RTT)

A

TCP’s performance is sensitive to the accurate measurement of RTT, as it influences the calculation of RTO and the pace at which the congestion window grows during the slow start and congestion avoidance phases

30
Q

Fine-Tuning Congestion Control Algorithms

A
  • Over the years, TCP’s congestion control algorithms have been fine-tuned to handle various network conditions.
  • Enhancements such as selective acknowledgments (SACK) and explicit congestion notification (ECN) help in environments with high packet loss or varying latency.
31
Q

Network Model - Packet-Switched Network

A
  • Definition: Resource allocation in a packet-switched network with multiple links and switches.
  • Key Point: A source may have enough capacity on the outgoing link but encounter congestion in the middle of the network.
  • Example: Two high-speed links feeding a low-speed link, creating a bottleneck.
  • Important Note: Congestion control is different from routing. Routing around a congested link does not always solve congestion.
32
Q

Network Model - Connectionless Flows

A
  • Definition: Network is assumed to be connectionless; connection-oriented service is in the transport protocol (e.g., TCP/IP).
  • Key Point: IP provides connectionless datagram delivery, TCP implements end-to-end connection abstraction.
  • Flow: Sequence of packets sent between a source and destination following the same route.
  • Pro of Flow Abstraction: Flows can be defined at different granularities (e.g., host-to-host, process-to-process).
33
Q

Connectionless Flows - Implicit vs Explicit Flows

A
  • Implicit Flow: Routers observe packets traveling between the same source/destination and treat them as part of the same flow.
  • Explicit Flow: Source sends a flow setup message declaring a flow of packets, helping in resource allocation without end-to-end semantics.
  • Benefit: Explicit flows allow resource reservation at each router, potentially avoiding congestion.
34
Q

Service Model - Best-Effort Service

A
  • Definition: All packets are treated equally, with no guarantees for preferential service.
  • Key Point: No opportunity for end hosts to ask for specific guarantees.
  • Contrast: A service model with multiple qualities of service (QoS) offers guarantees for specific flows, e.g., video streaming bandwidth.
  • QoS Example: Guaranteeing the bandwidth needed for a video stream.
35
Q

Taxonomy of Resource Allocation Mechanisms

A
  • Router-Centric: Routers decide packet forwarding, dropping, and inform hosts about allowable packets to send.
  • Host-Centric: End hosts observe network conditions and adjust their behavior accordingly.
  • Note: These two groups are not mutually exclusive.
36
Q

Taxonomy - Reservation-Based vs. Feedback-Based Mechanisms

A
  • Reservation-Based: End host requests capacity for a flow; routers allocate resources if possible.
    • Key Point: Ensures enough resources before data transmission.
    • Example: Reserving buffers at each router.
  • Feedback-Based: Hosts send data without reservations and adjust based on network feedback (explicit or implicit).
    • Explicit Feedback: Congested router sends a “please slow down” message to the host.
    • Implicit Feedback: End host adjusts based on observable network behavior, such as packet losses.
37
Q

Taxonomy - Window-based vs Rate-based Control

A
  • Window-Based Control: Limits the amount of unacknowledged data in transit using a window, e.g., one sized to the buffer space at the receiver.
    • Flow Control: Ensures sender does not overwhelm receiver’s buffer.
  • Rate-Based Control: Controls data transmission rate (bits per second) that the network can handle.
    • Application: Useful for multimedia applications that generate data at a steady rate.
38
Q

Evaluation Criteria - Effective Resource Allocation

A
  • Goal: Maximize power (Throughput/Delay).
  • Key Point: Balance between conservative packet sending and avoiding excessive queuing delays.
  • Optimal Load: Achieved when throughput and delay are balanced for maximum efficiency.
  • Illustration: Too few packets lead to underutilization, too many packets increase delays.
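The trade-off on this card can be made concrete with a toy model (an assumption for illustration, not from the source): if queuing delay grows as 1/(capacity − load), as in a simple M/M/1-style queue, then power = throughput/delay peaks at a moderate load rather than at maximum load.

```python
# Toy illustration of power = throughput / delay under a simple
# M/M/1-style delay model; valid for load < capacity.
def power(load, capacity=1.0):
    throughput = min(load, capacity)
    delay = 1.0 / (capacity - load)  # delay blows up as load nears capacity
    return throughput / delay

power(0.5) > power(0.9)  # aggressive load loses more to delay than it gains
```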
39
Q

Evaluation Criteria - Fair Resource Allocation

A
  • Goal: Ensure each flow receives an equal share of bandwidth.
  • Challenge: Fairness calculation when path lengths differ, e.g., one four-hop flow vs. three one-hop flows.
  • Fairness Index: Used to measure the fairness of resource allocation among flows.
40
Q

Effective Resource Allocation - Power Formula

A
  • Formula: Power = Throughput / Delay
  • Optimal Load: Achieved when throughput and delay are balanced.
  • Important Concept: Conservative sending leads to underutilization, aggressive sending increases delays due to queuing.
41
Q

Fair Resource Allocation - Jain’s Fairness Index

A
  • Definition: Measures fairness among flows.
  • Formula: (Σᵢ₌₁ⁿ xᵢ)² / (n · Σᵢ₌₁ⁿ xᵢ²)
  • Range: 0 to 1, where 1 represents perfect fairness.
  • Example Calculation: Given flow throughputs (x1, x2, …, xn), calculate the fairness index.
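The index is a direct translation of the formula into code; the throughput values are made up for the example.

```python
# Jain's fairness index over per-flow throughputs:
# (sum of x_i)^2 / (n * sum of x_i^2), ranging from 1/n up to 1.
def jain_index(throughputs):
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

jain_index([10, 10, 10, 10])  # equal shares -> 1.0
jain_index([40, 0, 0, 0])     # one flow takes everything -> 0.25
```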
42
Q

Queuing Disciplines - Introduction

A
  • Definition: Mechanisms that control how packets are buffered and transmitted by routers.
  • Key Point: Each router must implement some queuing discipline to manage bandwidth and buffer space, affecting latency.
43
Q

FIFO (First In, First Out)

A
  • Definition: Packets are processed in the order they arrive.
  • Key Point: Simple and straightforward, but can lead to inefficiencies.
  • Tail Drop: If the buffer is full, incoming packets are dropped.
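FIFO with tail drop is simple enough to sketch directly; the buffer size and packet names here are assumptions for the example.

```python
# Sketch of FIFO queuing with tail drop, assuming a 4-packet buffer.
from collections import deque

BUFFER_SIZE = 4

def enqueue(queue, packet):
    """Append in arrival order; drop the newcomer if the buffer is full."""
    if len(queue) >= BUFFER_SIZE:
        return False  # tail drop: the incoming packet is discarded
    queue.append(packet)
    return True

q = deque()
results = [enqueue(q, p) for p in ["p1", "p2", "p3", "p4", "p5"]]
# p5 is dropped; packets leave in arrival order: q.popleft() yields "p1" first
```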
44
Q

Priority Queuing

A
  • Definition: Packets are marked with priority levels and processed based on these levels.
  • Key Point: High-priority packets are transmitted first, potentially starving lower-priority packets.
  • Example: Marking packets in the IP header to indicate priority.
45
Q

Fair Queuing (FQ)

A
  • Definition: Each flow gets a separate queue, ensuring fair bandwidth distribution.
  • Key Point: Prevents ill-behaved traffic sources from affecting well-behaved ones.
  • Challenge: Packets of different lengths require consideration for fair bandwidth allocation.
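One standard way textbook treatments handle the packet-length challenge is to simulate bit-by-bit round robin with per-packet finish times: a packet's finish time is its length added to the later of its arrival time and the previous packet's finish time for that flow, and the router transmits the queued packet with the smallest finish time. A sketch of that computation for a single flow:

```python
# Finish-time computation for fair queuing (bit-by-bit round-robin
# simulation), in abstract time units per bit.
def finish_time(prev_finish, arrival, length):
    return max(prev_finish, arrival) + length

f1 = finish_time(0, 0, 100)    # first packet finishes at 100
f2 = finish_time(f1, 50, 200)  # queued behind the first -> finishes at 300
```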
46
Q

Work-Conserving vs. Non-Work-Conserving

A
  • Work-Conserving: Link is never left idle as long as there is a packet in the queue.
  • Non-Work-Conserving: Allows for controlled idle periods to manage queue lengths and delays.
47
Q

Weighted Fair Queuing (WFQ)

A
  • Definition: Each flow is assigned a weight, dictating how many bits are transmitted each time the queue is serviced.
  • Key Point: Provides more bandwidth to flows with higher weights.
  • Example: A flow with weight 2 gets twice the bandwidth of a flow with weight 1.
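A weighted round-robin sketch approximates the effect described above: per service round, each backlogged flow sends a number of packets proportional to its weight. This simplification ignores packet lengths (real WFQ accounts for them via finish times), and the flow names are assumptions for the example.

```python
# Weighted round-robin sketch of WFQ's per-round allocation:
# a flow with weight 2 gets twice the service of a flow with weight 1.
def service_round(queues, weights):
    """Dequeue up to `weight` packets from each flow's queue;
    return the transmission order for one round."""
    sent = []
    for flow, weight in weights.items():
        for _ in range(weight):
            if queues[flow]:
                sent.append(queues[flow].pop(0))
    return sent

queues = {"A": ["a1", "a2", "a3"], "B": ["b1", "b2", "b3"]}
service_round(queues, {"A": 2, "B": 1})  # -> ["a1", "a2", "b1"]
```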