Congestion Control Flashcards
Principles of Congestion Control
- Packet retransmission (last lecture) treats a symptom (packet loss) but does not treat the cause
- This cause is congestion: “too many sources sending too much data too fast for network to handle”
- Manifestations:
- Lost packets (buffer overflow at routers)
- Long delays (queueing in router buffers)
Causes/costs of congestion: scenario 1
Simplest scenario:
- one router, infinite buffers
- input, output link capacity: R
- Two flows
- no retransmissions needed
Causes/costs of congestion: scenario 2
- one router, finite buffers
Idealization: perfect knowledge
- Sender sends only when router buffer is available
- sender retransmits only lost, timed-out packet
- application-layer input = application-layer output: λin = λout
- transport-layer input includes retransmissions: λ′in ≥ λin
Causes/costs of congestion: scenario 2
Idealization: some perfect knowledge
- packets can be lost (dropped at router) due to full buffers
- sender knows when packet has been dropped: only resends if packet known to be lost
Causes/costs of congestion: scenario 2
Realistic scenario: un-needed duplicates
- packets can be lost (dropped at router due to full buffers), requiring retransmissions
- but the sender can also time out prematurely, sending two copies, both of which are delivered
“costs” of congestion:
- more work (retransmission) for given receiver throughput
- unneeded retransmissions: link carries multiple copies of a packet
* decreasing maximum achievable throughput
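The two costs above can be captured in a toy model (my own illustration, not from the lecture): once the link is saturated, every unit of capacity spent carrying retransmitted or duplicate copies is capacity not available for useful data.

```python
# Toy model (not from the lecture): goodput on a saturated link when a
# fraction of transmissions are retransmissions or duplicate copies.
def goodput(link_capacity: float, retransmit_fraction: float) -> float:
    """Useful (application-layer) throughput on a saturated link.

    retransmit_fraction is the share of link capacity spent carrying
    retransmitted or duplicate copies of packets.
    """
    if not 0.0 <= retransmit_fraction <= 1.0:
        raise ValueError("fraction must be in [0, 1]")
    return link_capacity * (1.0 - retransmit_fraction)

# With no duplicates the full capacity R is useful throughput;
# every duplicate copy carried lowers the achievable maximum.
print(goodput(1.0, 0.0))   # 1.0
print(goodput(1.0, 0.25))  # 0.75
```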
Causes/costs of congestion: scenario 3
- four senders
- multi-hop paths
- timeout/retransmit
Q: what happens as λin and λ′in increase?
A: as red λ′in increases, all arriving blue pkts at upper queue are dropped, blue throughput → 0
Causes/costs of congestion: scenario 3
another “cost” of congestion:
- when a packet is dropped, any upstream transmission capacity and buffering used for that packet was wasted!
TCP Congestion control
(High-level idea)
- Each sender limits the rate at which it sends traffic into its connection as a function of perceived network congestion
- If a sender perceives that there is little congestion on the path between itself and the destination, then the TCP sender increases its send rate;
- If a sender perceives that there is congestion along the path, then the sender reduces its send rate.
How to perceive congestion? (End-end congestion control)
- no explicit feedback from network
- congestion inferred from observed loss, delay
- approach taken by most TCP implementations
How to perceive congestion? (Network-assisted congestion control)
- routers provide direct feedback to sending/receiving hosts with flows passing through congested router
- may indicate congestion level or explicitly set sending rate
*TCP ECN, ATM, DECbit protocols
How does a sender limit its sending rate?
- TCP sender limits transmission:
LastByteSent − LastByteAcked ≤ cwnd
- cwnd is dynamically adjusted in response to observed network congestion (implementing TCP congestion control)
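The windowing rule can be sketched as follows (a minimal Python illustration; variable names mirror the flashcard's `LastByteSent`/`LastByteAcked` notation, and the class itself is my own):

```python
# Sketch: a TCP sender may only keep cwnd bytes of unACKed data in flight.
class CwndLimitedSender:
    def __init__(self, cwnd: int):
        self.cwnd = cwnd            # congestion window, in bytes
        self.last_byte_sent = 0
        self.last_byte_acked = 0

    def can_send(self, nbytes: int) -> bool:
        # LastByteSent - LastByteAcked <= cwnd must still hold after sending
        return (self.last_byte_sent + nbytes) - self.last_byte_acked <= self.cwnd

    def send(self, nbytes: int) -> bool:
        if not self.can_send(nbytes):
            return False            # window full: must wait for ACKs
        self.last_byte_sent += nbytes
        return True

    def ack(self, upto_byte: int) -> None:
        self.last_byte_acked = max(self.last_byte_acked, upto_byte)

s = CwndLimitedSender(cwnd=3000)
assert s.send(1500) and s.send(1500)
assert not s.send(1500)   # window full
s.ack(1500)
assert s.send(1500)       # an ACK freed window space
```

Note how ACKs, by advancing `last_byte_acked`, are what allow new data to enter the network: shrinking or growing `cwnd` directly throttles the send rate.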
Types of loss detection
Why 3 duplicates? Why not 1 or 10?
“Since TCP does not know whether a duplicate ACK is caused by a lost segment or just a reordering of segments, it waits for a small number of duplicate ACKs to be received (in this case, 3).
It is assumed that if there is just a reordering of the segments, there will be only one or two duplicate ACKs before the reordered segment is processed, which will then generate a new ACK. If three or more duplicate ACKs are received in a row, it is a strong indication that a segment has been lost (and Fast Retransmit is being requested by receiver).”
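The duplicate-ACK heuristic quoted above can be sketched in a few lines of Python (my own illustration of the threshold-of-3 rule, not code from the lecture):

```python
# Sketch of the duplicate-ACK heuristic: trigger Fast Retransmit only
# after the original ACK plus 3 duplicates of the same ACK value.
DUPACK_THRESHOLD = 3

def should_fast_retransmit(acks) -> bool:
    """Return True if some ACK value repeats often enough (original + 3
    duplicates) to suggest a lost, not merely reordered, segment."""
    dup_count = 0
    last_ack = None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count >= DUPACK_THRESHOLD:
                return True
        else:
            last_ack, dup_count = ack, 0
    return False

# Mild reordering: only 1-2 duplicates before a new ACK arrives.
print(should_fast_retransmit([100, 100, 100, 200]))        # False
# Segment lost: the receiver keeps re-ACKing 100.
print(should_fast_retransmit([100, 100, 100, 100, 100]))   # True
```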
How to adjust sending rate with perceived congestion?
- Basic approach: senders can increase sending rate until packet loss* (congestion) occurs, then decrease sending rate on loss event
TCP AIMD: more
Multiplicative decrease detail: sending rate is
- Cut in half on loss detected by triple duplicate ACK (TCP Reno)
- Cut to 1 MSS (maximum segment size) when loss is detected by timeout (both TCP Tahoe and TCP Reno)
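The AIMD update rules can be summarized as three event handlers (a sketch; the per-ACK formula `cwnd += MSS*MSS/cwnd`, which approximates +1 MSS per RTT during congestion avoidance, and the MSS value of 1460 bytes are standard conventions, not stated in these flashcards):

```python
MSS = 1460  # bytes; typical Ethernet-derived value (assumption)

def on_ack(cwnd: float) -> float:
    """Additive increase: ~1 MSS per RTT, approximated per ACK."""
    return cwnd + MSS * MSS / cwnd

def on_triple_dup_ack(cwnd: float) -> float:
    """Multiplicative decrease (TCP Reno): halve the window."""
    return cwnd / 2

def on_timeout(cwnd: float) -> float:
    """Severe congestion signal: restart from 1 MSS."""
    return float(MSS)

# When cwnd is exactly 1 MSS, one ACK grows it by a full MSS.
print(on_ack(1460.0))            # 2920.0
print(on_triple_dup_ack(2920.0)) # 1460.0
```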
Why AIMD?
- AIMD – a distributed, asynchronous algorithm – has been shown to:
- optimize congested flow rates network wide!
- have desirable stability properties
TCP slow start
- when connection begins, increase rate exponentially until first loss event:
- initially cwnd = 1 MSS
- double cwnd every RTT
- done by incrementing cwnd for every ACK received
- summary: initial rate is slow, but ramps up exponentially fast
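The doubling mechanism above can be sketched directly: every segment sent in an RTT produces an ACK, and every ACK adds 1 MSS, so the window doubles each round trip (my own illustration; cwnd is measured in whole segments for simplicity):

```python
# Sketch of slow start, with cwnd measured in segments (assumption).
def slow_start_rtt(cwnd: int) -> int:
    """One RTT of slow start: each of the cwnd in-flight segments is
    ACKed, and each ACK grows cwnd by 1 segment, so the window doubles."""
    acks = cwnd            # one ACK per segment sent this RTT
    return cwnd + acks

cwnd = 1                   # initially cwnd = 1 MSS
history = [cwnd]
for _ in range(4):
    cwnd = slow_start_rtt(cwnd)
    history.append(cwnd)
print(history)  # [1, 2, 4, 8, 16]: exponential growth per RTT
```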
TCP: from slow start to congestion avoidance
Q: when should the exponential increase switch to linear (i.e., AIMD)?
A: when cwnd gets to 1/2 of its value before timeout
Implementation:
- variable ssthresh
- on loss event, ssthresh is set to 1/2 of cwnd just before loss event
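The ssthresh mechanism can be sketched as a per-RTT growth rule (my own illustration; the detail of capping slow start exactly at ssthresh is a simplifying assumption):

```python
# Sketch: ssthresh marks the switch from slow start to congestion avoidance.
def on_loss(cwnd: float) -> float:
    """On a loss event, remember half the current window as ssthresh."""
    return cwnd / 2

def next_cwnd(cwnd: float, ssthresh: float, mss: float = 1.0) -> float:
    """Per-RTT growth: exponential below ssthresh (slow start),
    linear at or above it (congestion avoidance / AIMD)."""
    if cwnd < ssthresh:
        return min(2 * cwnd, ssthresh)   # slow start, capped at threshold
    return cwnd + mss                    # congestion avoidance

# Loss occurred at cwnd = 32 -> ssthresh = 16; growth restarts from 1,
# doubles up to 16, then switches to +1 MSS per RTT.
ssthresh = on_loss(32.0)
cwnd, trace = 1.0, [1.0]
for _ in range(5):
    cwnd = next_cwnd(cwnd, ssthresh)
    trace.append(cwnd)
print(trace)  # [1.0, 2.0, 4.0, 8.0, 16.0, 17.0]
```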
Detecting and Reacting to Loss
Loss indicated by timeout (TCP Reno and Tahoe)
- cwnd set to 1 MSS;
- Window then grows exponentially (as in Slow Start) to threshold, then grows linearly (as in Congestion Avoidance)
Detecting and Reacting to Loss (Loss indicated by 3 duplicate ACKs)
- Indicates network capable of delivering some segments
- TCP Reno and Tahoe do Fast Retransmit (see Lec. 10)
- TCP Reno: cwnd is cut in half (equal to ssthresh), window then grows linearly
- Starts Fast Recovery phase
- TCP Tahoe: always sets cwnd to 1 MSS (even on 3 duplicate ACKs), then grows via Slow Start
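The Reno-vs-Tahoe reactions above fit into one small decision table (a sketch of my own; cwnd is measured in MSS units for simplicity):

```python
# Sketch: how Tahoe and Reno react to the two kinds of loss events.
def react_to_loss(cwnd: float, kind: str, variant: str):
    """Return (new_cwnd, new_ssthresh) after a loss event.

    kind: 'timeout' or 'triple_dup_ack'; variant: 'tahoe' or 'reno'.
    cwnd is measured in MSS units (assumption for simplicity).
    """
    new_ssthresh = cwnd / 2            # always remember half the window
    if kind == 'timeout' or variant == 'tahoe':
        return 1.0, new_ssthresh       # back to 1 MSS, then Slow Start
    # Reno + triple duplicate ACK: Fast Recovery, window halved
    return new_ssthresh, new_ssthresh

print(react_to_loss(16.0, 'triple_dup_ack', 'reno'))   # (8.0, 8.0)
print(react_to_loss(16.0, 'triple_dup_ack', 'tahoe'))  # (1.0, 8.0)
print(react_to_loss(16.0, 'timeout', 'reno'))          # (1.0, 8.0)
```

The only cell where the two variants differ is triple duplicate ACKs: Reno trusts the "network is still delivering segments" signal and only halves, while Tahoe restarts from scratch.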
TCP Congestion Control
Versions
TCP Tahoe
* This is the original version of TCP congestion control
* Slow Start
* Fast Retransmit (see Lecture 10)
TCP Reno
* Same as Tahoe, but with Fast Recovery
TCP Vegas
* A completely new implementation based on delay variation (instead of packet loss)
TCP Tahoe vs TCP Reno
Summary: TCP congestion control (Reno)