Congestion Control Flashcards
Principles of Congestion Control
- Packet retransmission (last lecture) treats a symptom (packet loss) but does not treat the cause
- This cause is congestion: “too many sources sending too much data too fast for network to handle”
- Manifestations:
- Lost packets (buffer overflow at routers)
- Long delays (queueing in router buffers)
Causes/costs of congestion: scenario 1
Simplest scenario (see the sketch after this list):
- one router, infinite buffers
- input, output link capacity: R
- two flows
- no retransmissions needed
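A minimal sketch of what this scenario implies, assuming two identical flows share the router's output link of capacity R (the function names and the M/M/1-style delay approximation are illustrative, not from the slides): per-flow throughput saturates at R/2, while queueing delay grows without bound as the offered load approaches that limit.

```python
# Toy model of scenario 1 (illustrative, not from the slides): two identical
# flows share one router whose output link has capacity R and whose buffers
# are infinite, so nothing is lost -- but per-flow throughput can never
# exceed R/2, and queueing delay explodes as offered load approaches R/2.

R = 1.0  # output link capacity (normalized)

def per_flow_throughput(lambda_in: float) -> float:
    """Throughput (lambda_out) of one of the two identical flows."""
    return min(lambda_in, R / 2)

def approx_queueing_delay(lambda_in: float) -> float:
    """Rough M/M/1-style delay in arbitrary units: blows up as lambda_in -> R/2."""
    rho = per_flow_throughput(lambda_in) / (R / 2)  # utilization of the flow's share
    return float("inf") if rho >= 1.0 else 1.0 / (1.0 - rho)

for lam in (0.10, 0.30, 0.45, 0.49, 0.50):
    print(f"lambda_in={lam:.2f}  lambda_out={per_flow_throughput(lam):.2f}  "
          f"delay~{approx_queueing_delay(lam):.1f}")
```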
Causes/costs of congestion: scenario 2
- one router, finite buffers
Idealization: perfect knowledge
- Sender sends only when router buffer is available
- sender retransmits only lost, timed-out packet
- application-layer input = application-layer output: λ_in = λ_out
- transport-layer input includes retransmissions: λ'_in ≥ λ_in
Causes/costs of congestion: scenario 2
Idealization: some perfect knowledge
- packets can be lost (dropped at router) due to full buffers
- sender knows when packet has been dropped: only resends if packet known to be lost
Causes/costs of congestion: scenario 2
Realistic scenario: un-needed duplicates
- packets can be lost (dropped at the router due to full buffers), requiring retransmissions
- but the sender can also time out prematurely, sending two copies, both of which are delivered
“costs” of congestion (a small arithmetic sketch follows this list):
- more work (retransmission) for a given receiver throughput
- unneeded retransmissions: the link carries multiple copies of a packet, decreasing the maximum achievable throughput
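A toy arithmetic sketch of these costs (the fractions below are made up for illustration): even when the output link is fully busy, retransmissions and unneeded duplicates eat into the goodput seen by the application.

```python
# Illustrative numbers only: the link runs at full capacity R, but part of
# what it carries is retransmissions of lost packets and unneeded duplicate
# copies, so application-level goodput (lambda_out) is strictly less than R.

R = 1.0                     # output link capacity (normalized)
retransmit_fraction = 0.20  # share of transmissions that are retransmissions
duplicate_fraction = 0.10   # share that are unneeded duplicate copies

goodput = R * (1 - retransmit_fraction - duplicate_fraction)
print(f"link busy at {R:.2f}, but goodput is only {goodput:.2f}")
```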
Causes/costs of congestion: scenario 3
- four senders
- multi-hop paths
- timeout/retransmit
Q: what happens as λ_in and λ'_in increase?
A: as red λ'_in increases, all arriving blue packets at the upper queue are dropped, and blue throughput → 0
Causes/costs of congestion: scenario 3
another “cost” of congestion:
- when a packet is dropped, any upstream transmission capacity and buffering used for that packet was wasted!
Causes/costs of congestion: insights
TCP Congestion Control (high-level idea)
- Each sender limits the rate at which it sends traffic into its connection as a function of perceived network congestion:
- If a sender perceives that there is little congestion on the path between itself and the destination, then the TCP sender increases its send rate;
- If a sender perceives that there is congestion along the path, then the sender reduces its send rate.
How to perceive congestion? (End-end congestion control)
- no explicit feedback from network
- congestion inferred from observed loss, delay
- approach taken by most TCP implementations (a sketch follows this list)
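A minimal sketch of how a sender might infer congestion end-to-end, assuming it tracks a baseline (minimum) RTT and watches for loss; the helper name and the 1.5x delay threshold are illustrative and not part of any real TCP implementation.

```python
# Hypothetical end-to-end inference: no feedback from routers, so the sender
# infers congestion from what it can observe itself -- loss events (timeouts
# or duplicate ACKs) and RTT samples well above the baseline RTT.

def perceives_congestion(loss_detected: bool,
                         rtt_sample_s: float,
                         base_rtt_s: float,
                         delay_factor: float = 1.5) -> bool:
    """Return True if the sender should treat the path as congested."""
    if loss_detected:                        # timeout or triple duplicate ACK
        return True
    # Delay-based hint: growing router queues show up as inflated RTTs.
    return rtt_sample_s > delay_factor * base_rtt_s

print(perceives_congestion(False, rtt_sample_s=0.120, base_rtt_s=0.050))  # True
print(perceives_congestion(False, rtt_sample_s=0.055, base_rtt_s=0.050))  # False
```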
How to perceive congestion? (Network-assisted congestion control)
- routers provide direct feedback to sending/receiving hosts with flows passing through congested router
- may indicate congestion level or explicitly set sending rate
- examples: TCP ECN, ATM, DECbit protocols (an ECN-style sketch follows this list)
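A hedged sketch of the network-assisted path in the TCP ECN style: a congested router marks packets instead of dropping them, the receiver echoes the mark back to the sender (the ECE flag), and the sender reacts roughly as it would to loss. The function and constants below are illustrative, not a real stack.

```python
# Illustrative ECN-style reaction (not a real TCP stack): on an ACK carrying
# the ECN-Echo (ECE) flag the sender halves cwnd, as it would for a loss;
# otherwise it keeps probing with a per-ACK additive increase.

MSS = 1460  # assumed maximum segment size in bytes

def on_ack(cwnd_bytes: int, ece_flag: bool) -> int:
    """Return the new cwnd after processing one ACK."""
    if ece_flag:                                   # router marked congestion
        return max(MSS, cwnd_bytes // 2)           # multiplicative decrease
    return cwnd_bytes + MSS * MSS // cwnd_bytes    # ~ +1 MSS per RTT overall

cwnd = 10 * MSS
cwnd = on_ack(cwnd, ece_flag=False)   # no mark: grow a little
cwnd = on_ack(cwnd, ece_flag=True)    # ECN-Echo seen: cut cwnd in half
print(cwnd)
```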
How does a sender limit its sending rate?
- TCP sender limits transmission: LastByteSent - LastByteAcked < cwnd
- cwnd is dynamically adjusted in response to observed network congestion (implementing TCP congestion control); a sketch of the window check follows
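A minimal sketch of the constraint above, with variable names mirroring the slide; the helper is hypothetical and only checks whether one more segment fits inside the congestion window.

```python
# Hypothetical window check: data "in flight" (sent but not yet ACKed) must
# stay within cwnd before the sender may push another segment.

def can_send(last_byte_sent: int, last_byte_acked: int,
             cwnd: int, segment_size: int) -> bool:
    in_flight = last_byte_sent - last_byte_acked
    return in_flight + segment_size <= cwnd

print(can_send(last_byte_sent=20_000, last_byte_acked=10_000,
               cwnd=14_600, segment_size=1_460))   # True: one more segment fits
print(can_send(last_byte_sent=24_000, last_byte_acked=10_000,
               cwnd=14_600, segment_size=1_460))   # False: window is full
```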
Types of loss detection
- timeout: no ACK arrives for a segment before the retransmission timer expires
- three duplicate ACKs: the sender fast-retransmits the missing segment before the timer expires
Why 3 duplicates? Why not 1 or 10?
“Since TCP does not know whether a duplicate ACK is caused by a lost segment or just a reordering of segments, it waits for a small number of duplicate ACKs to be received (in this case, 3).
It is assumed that if there is just a reordering of the segments, there will be only one or two duplicate ACKs before the reordered segment is processed, which will then generate a new ACK. If three or more duplicate ACKs are received in a row, it is a strong indication that a segment has been lost (and Fast Retransmit is being requested by receiver).”
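An illustrative sketch of the rule described in the quote: count duplicate ACKs for the same acknowledgment number and trigger a fast retransmit on the third one. This is a toy counter, not a real TCP state machine.

```python
# Toy duplicate-ACK counter (illustrative): a fast retransmit fires when the
# third duplicate ACK for the same acknowledgment number arrives.

DUP_ACK_THRESHOLD = 3

class DupAckDetector:
    def __init__(self) -> None:
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_no: int) -> bool:
        """Return True exactly when a fast retransmit should be triggered."""
        if ack_no == self.last_ack:
            self.dup_count += 1
            return self.dup_count == DUP_ACK_THRESHOLD
        self.last_ack = ack_no      # a new ACK resets the duplicate count
        self.dup_count = 0
        return False

detector = DupAckDetector()
acks = [1000, 2000, 2000, 2000, 2000]        # three duplicates of ACK 2000
print([detector.on_ack(a) for a in acks])    # True appears on the third duplicate
```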
How to adjust sending rate with perceived congestion?
- Basic approach: senders increase their sending rate until packet loss (congestion) occurs, then decrease the sending rate on a loss event (see the AIMD-style sketch below)
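A sketch of this probing behaviour in the AIMD style used by TCP Reno's congestion-avoidance phase, assuming one cwnd update per RTT; the parameters are illustrative.

```python
# Illustrative AIMD loop: grow cwnd by one MSS per RTT while no loss is
# perceived, halve it when a loss event is perceived, then resume probing.

MSS = 1460  # assumed maximum segment size in bytes

def on_rtt(cwnd: int, loss_event: bool) -> int:
    """One AIMD update of cwnd per round-trip time."""
    if loss_event:
        return max(MSS, cwnd // 2)   # multiplicative decrease on loss
    return cwnd + MSS                # additive increase: +1 MSS per RTT

cwnd = 10 * MSS
for loss in (False, False, False, True, False):
    cwnd = on_rtt(cwnd, loss)
print(cwnd)   # probed up, halved once at the loss, then resumed probing
```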