Lesson 6 - Congestion Control Deck 2 Flashcards
Resource control deals with
handling bandwidth constraints on links
The goal of congestion control
fill the internet’s pipes without overflowing them
Hosts are _______ of other hosts and their current state (i.e. how much other traffic is on the network)
unaware
Congestion collapse
Throughput drops below the bottleneck link capacity. Congestive collapse (or congestion collapse) is the condition in which congestion prevents or limits useful communication. It generally occurs at "choke points" in the network, where incoming traffic exceeds outgoing bandwidth.
Saturation
point at which increasing the load no longer results in useful work getting done
Collapse
increasing the traffic load can cause the amount of work done or amount of traffic forwarded to actually decrease
Possible causes of congestion collapse
-Spurious retransmissions of packets still in flight
-Undelivered packets
Spurious transmissions
when senders don't receive acknowledgments for packets in a timely fashion, they may spuriously retransmit, resulting in many copies of the same packets being outstanding in the network at any one time
Undelivered packets
Packets consume resources and are dropped elsewhere in the network
Solution to congestion collapse
Congestion control
-For spurious retransmissions: better timers and TCP congestion control
Two approaches to congestion control
-End-to-end: the network provides no explicit feedback to senders about when they should slow down their rates. Instead, congestion is inferred, typically from packet loss and potentially also from increased delay.
-Network-assisted: routers provide explicit feedback about the rates at which end systems should send
Two parts of TCP congestion control
-Increase algorithm
-Decrease algorithm
TCP increase algorithm
the sender must test the network to determine whether the network can sustain a higher sending rate
TCP decrease algorithm
the senders react to congestion to achieve optimal loss rates, delays, and sending rates
2 approaches of adjusting rate
-Window-based algorithm
-Rate-based algorithm
Window-based algorithm
-The sender can only have a certain number of packets outstanding (in flight). The sender uses acknowledgments from the receiver to clock the transmission of new data.
-When the sender receives an ACK from the receiver, it can send another packet.
-If a sender wants to increase its sending rate, it simply needs to increase the window size. Example: with a window size of 4, if 4 packets are in flight, the sender waits for an ACK for one of them before sending another.
-Window-based congestion control (using AIMD) is the common way of performing congestion control in today's networks
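The ACK-clocking behavior described above can be sketched as a toy model in Python (the function name `window_send` and the packet-count bookkeeping are mine; real TCP tracks bytes and sequence numbers, and ACKs arrive asynchronously):

```python
from collections import deque

def window_send(num_pkts, window):
    """Toy ACK clocking: at most `window` packets in flight at once;
    each ACK for the oldest packet releases one new packet."""
    in_flight = deque()
    events = []
    next_pkt = 0
    while next_pkt < num_pkts or in_flight:
        # Fill the window with new packets.
        while next_pkt < num_pkts and len(in_flight) < window:
            in_flight.append(next_pkt)
            events.append(("send", next_pkt))
            next_pkt += 1
        # Receiver ACKs the oldest in-flight packet, opening a slot.
        acked = in_flight.popleft()
        events.append(("ack", acked))
    return events

# With window 4, packets 0-3 go out back-to-back; packet 4 waits for ACK 0.
events = window_send(6, 4)
```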
Rate-based algorithm
-Explicitly rate-based congestion control
-Monitor the loss rate
-Use a timer to modulate the transmission rate
If RTT is 100 milliseconds, packet is 1 KB, and window size is 10 packets, what is the sending rate?
window / RTT = 10 packets / 0.1 s = 100 pkts/sec; 100 pkts/sec * 8,000 bits/packet = 800,000 bps, or 800 Kbps
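The card's arithmetic can be packaged as a small helper (a sketch; the function name is mine, and 1 KB is taken as 1,000 bytes, matching the card's 8,000 bits/packet):

```python
def sending_rate_bps(window_pkts, pkt_bytes, rtt_s):
    """Window-based sending rate: (window / RTT) pkts/sec * bits/packet."""
    pkts_per_sec = window_pkts / rtt_s
    return pkts_per_sec * pkt_bytes * 8

# 10-packet window, 1 KB (1,000-byte) packets, 100 ms RTT -> 800,000 bps
print(sending_rate_bps(10, 1000, 0.1))
```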
2 goals of TCP Congestion Control
-Fairness: every sender gets its fair share of network resources
-Efficiency: network resources are used well
In multiplicative decrease, each sender decreases its rate by _______
Some constant factor of its current sending rate (e.g. by half)
3 traits of AIMD
-Distributed (all senders can act independently)
-Fair
-Efficient
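AIMD's characteristic sawtooth can be simulated in a few lines (a toy model, names mine: the window grows by one packet per round and is halved when it exceeds a hypothetical capacity, standing in for packet loss):

```python
def aimd(rounds, capacity, cwnd=1.0):
    """Toy AIMD: additive increase of 1 packet per round; multiplicative
    decrease (halving) whenever the window exceeds `capacity`."""
    history = []
    for _ in range(rounds):
        if cwnd > capacity:   # loss signal: window overshot capacity
            cwnd /= 2         # multiplicative decrease
        else:
            cwnd += 1         # additive increase
        history.append(cwnd)
    return history

# With capacity 10, the window climbs to 11, halves to 5.5, and repeats.
history = aimd(40, 10.0)
```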
TCP Incast
-The throughput collapse that results from high fan-in, high-bandwidth/low-latency links, many small parallel requests, and small buffers in data-center switches
-A drastic reduction in application throughput that results when servers using TCP all simultaneously request data, leading to gross underutilization of network capacity in many-to-one communication patterns such as a data center. The filling of the switch buffers results in bursty retransmissions that overfill the switch buffers further.
TCP timeout
-Filling of the switch buffers results in bursty retransmissions that overfill the switch buffers
-A timeout can last hundreds of milliseconds, but the RTT in a data center is typically < 1 ms, often just hundreds of microseconds
-Because RTTs are so much shorter than TCP timeouts, senders must wait for the TCP timeout before they retransmit, and application throughput can be reduced by as much as 90% as a result of link idle time
Barrier Synchronization
-A common request pattern in data centers today
-A client or application might have many parallel threads, and no forward progress can be made until all the responses for those threads are satisfied
Solutions to barrier synchronization
-Finer TCP timer granularity (retransmission timers on the order of microseconds rather than milliseconds); reducing TCP's retransmission timeout thus improves system throughput
-Have the client acknowledge every other packet rather than every packet, reducing the overall network load
-Premise: the timers need to operate on a granularity close to the RTT of the network; in a data center, that is hundreds of microseconds or less
What are solutions to the TCP Incast problem? Choices:
-smaller packets
-finer granularity timers
-fewer acknowledgments
-more senders
-finer granularity timers
-fewer acknowledgments
4 challenges of multimedia and streaming
-Large volume of data
-Data volume varies over time
-Low tolerance for delay variation
-Low tolerance for delay, period (some loss IS acceptable)
What is the bit rate with 8,000 samples/sec and 8 bits/sample (speech, for example)?
64 Kbps, which is a common bit rate for audio
Suppose we have an MP3 with 10,000 samples/sec and 16 bits/sample. What's the bit rate?
160 Kbps
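Both bit-rate cards follow the same formula, samples/sec times bits/sample, which a one-line helper (name mine) makes explicit:

```python
def bitrate_bps(samples_per_sec, bits_per_sample):
    """Uncompressed audio bit rate = sampling rate * bits per sample."""
    return samples_per_sec * bits_per_sample

print(bitrate_bps(8_000, 8))    # speech card: 64,000 bps = 64 Kbps
print(bitrate_bps(10_000, 16))  # MP3 card: 160,000 bps = 160 Kbps
```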
Temporal redundancy
Compression across successive images (frames) in a video
Playout buffer
Client stores data as it arrives from the server and plays it for the user in a continuous fashion. Thus, data might arrive more slowly or more quickly from the server, but as long as the client is playing data out of the buffer at a continuous rate, the user sees a smooth playout. A client may wait a few seconds before it starts playing a stream to allow data to be built up in this buffer, to account for cases where the server might have times where it’s not sending at a rate that’s sufficient to satisfy the client’s playout rate.
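The playout-buffer idea can be illustrated with a toy simulation (names and units are mine: data arrives in arbitrary bursts per time step, and playback drains a fixed amount per step after a startup delay):

```python
def playout(arrivals, playout_rate, startup_delay):
    """Toy playout buffer: arrivals[t] units arrive at step t; playback
    drains `playout_rate` units per step once `startup_delay` steps have
    passed. Returns the number of underruns (playback glitches)."""
    buffered = 0
    underruns = 0
    for t, arrived in enumerate(arrivals):
        buffered += arrived
        if t >= startup_delay:           # buffering phase is over
            if buffered >= playout_rate:
                buffered -= playout_rate  # smooth playback from the buffer
            else:
                underruns += 1            # buffer ran dry
                buffered = 0
    return underruns

# Bursty arrivals at the right average rate: waiting one step before
# playback absorbs the variation; starting immediately causes a glitch.
print(playout([0, 3, 0, 3], playout_rate=1, startup_delay=1))  # 0
print(playout([0, 3, 0, 3], playout_rate=1, startup_delay=0))  # 1
```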
Which pathologies can streaming audio/video tolerate? Choices:
-loss
-delay
-variation in delay
-loss
-delay
UDP
(User Datagram Protocol)
-Does not retransmit lost packets and does not automatically adapt the sending rate; it also has a smaller header
-Because it doesn't retransmit or adapt the sending rate, many decisions are left to higher layers, potentially the application: when to transmit the data, how to encapsulate it, whether to retransmit, and whether to adapt the sending rate or the quality of the audio/video encoding
-Higher layers must solve these problems; in particular, the sending rate still needs to be friendly (fair) to other TCP senders that may be sharing a link
-A variety of audio/video streaming transport protocols built on top of UDP allow senders to figure out when and how to retransmit lost packets and how to adjust sending rates
YouTube
-All uploaded videos are converted to Flash (or HTML5), and nearly every browser has a Flash plugin, so essentially every browser can play these videos; all browsers implement HTTP/TCP
-TCP is sub-optimal for streaming but keeps things simple
Skype, VOIP
-An analog signal is digitized through A/D conversion, and the resulting bitstream is sent over the internet. In the case of Skype, this A/D conversion happens in the application. In the case of VOIP, it might be performed by a phone adapter that you plug your phone into (Vonage, for example).
-Long propagation delays, high congestion, or disruptions as a result of routing changes can all degrade the quality of a VOIP call
QoS
(Quality of Service)
Ensure that some streams achieve acceptable performance levels
2 ways of achieving QoS
-Explicit reservations
-Mark certain packet streams as higher priority than others
Marking and Policing (and scheduling)
-Marking: packets are marked to indicate their priority class
-Policing: the network enforces that a flow stays within its agreed rate (e.g., with a token bucket)
-Scheduling: routers decide which queued packets to serve next, so higher-priority traffic gets better service
What are commonly used QoS for streaming audio/video?Choices:-Marking packets-Scheduling-Admission control-Fixed allocations
-Marking packets-Scheduling
Admission control
An application declares its needs in advance, and the network may block the application's traffic if the network can't satisfy those needs
Example of admission control
A busy signal on a telephone network
Why not use admission control in internet applications?
Bad UX (trying to go to a website and it’s blocked temporarily)
Goal of TCP
Prevent congestion collapse
Calculate sender rate
pkts/sec * bits/pkt
-To get pkts/sec, divide the window size by the RTT
-(Remember, 8 bits in a byte!)
If you want to traffic shape a variable bit rate (VBR) video stream with average bit rate 6 Mbps and maximum bit rate 10 Mbps, should you use a leaky bucket or a token bucket? What values should you use for rho and beta if you want to allow bursts of up to 500 ms?
Token bucket, because it allows bursts.
rho should be 6 Mbps (it should match the average rate).
beta should be (Max - Avg) * burst duration: (10 Mbps - 6 Mbps)(0.5 s) = 2 Mb.
(Remember, 8 bits in a byte when converting: 2 Mb = 250 KB.)
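The card's rho and beta can be sanity-checked with a small token-bucket conformance function (a sketch; the function name and the bucket-starts-full assumption are mine):

```python
def token_bucket_conforms(sizes_bits, times_s, rho_bps, beta_bits):
    """True if every packet finds enough tokens in a bucket that fills at
    rho_bps bits/sec and holds at most beta_bits (bucket starts full)."""
    tokens = beta_bits
    last = times_s[0] if times_s else 0.0
    for t, size in zip(times_s, sizes_bits):
        tokens = min(beta_bits, tokens + rho_bps * (t - last))  # refill
        last = t
        if size > tokens:
            return False      # packet exceeds available tokens
        tokens -= size        # packet consumes tokens
    return True

RHO, BETA = 6e6, 2e6  # 6 Mbps average rate, 2 Mb bucket depth (from the card)
# A 2 Mb instantaneous burst is allowed; a 3 Mb burst is not.
print(token_bucket_conforms([2e6], [0.0], RHO, BETA))  # True
print(token_bucket_conforms([3e6], [0.0], RHO, BETA))  # False
```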