6 - Congestion Control Flashcards
Congestion Control objective
Fill the Internet’s pipes without overflowing them.
Congestion Collapse
A sudden increase in network load results in a decrease in the useful work done.
causes for congestion collapse
Spurious retransmissions of packets whose ACKs were not received, and undelivered packets: packets consume resources in the network and are then dropped.
Congestion control main goals
Use network resources efficiently, preserve fair allocation of resources (all senders get a fair share), and avoid congestion collapse.
Approaches to congestion control
End-to-end: the network supplies no feedback to senders about whether they should slow down their rates; congestion is inferred from packet loss and delay. Network-assisted congestion control: routers provide feedback about the rates at which end systems should send. A router might set a single bit indicating congestion, as in TCP’s ECN (Explicit Congestion Notification) extension.
TCP Congestion Control
Senders continue to increase their rate until packets drop. Drops occur because the send rate exceeds the rate at which a router can drain its buffer; when a buffer fills up, TCP interprets the packet loss as congestion and slows down.
Approaches to adjusting rates
Window-based algorithm: a sender may only have a certain number of packets outstanding (in flight). The sender uses the ACKs it receives to clock the transmission of new packets, and can’t send more packets until it receives ACKs. The sender increases its sending rate by increasing its window size each time it receives an ACK; it uses AIMD.
Rate based congestion Control
The sender monitors the loss rate and uses a timer to modulate the transmission rate.
Fairness in congestion control
Every sender gets its fair share of network resources.
Efficiency in congestion control
Network resources are used well: there should be no spare capacity while senders still have data to send but can’t send it.
Optimal point
The network is neither under- nor over-utilized, and the allocation is fair.
Multiplicative decrease
The sender decreases its rate by some constant factor of its current sending rate.
AIMD
converges to fairness and efficiency
AIMD purpose
Handles TCP congestion control.
AIMD algorithm
Distributed: all senders act independently. The sending rate over time looks like a sawtooth pattern. The sender periodically probes for available bandwidth by increasing with additive increase; once the path is saturated and packet loss occurs, it reduces the sending rate by half.
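The sawtooth can be sketched in a few lines. This is a toy model, not TCP itself: the window is in packets, and the hypothetical `capacity` parameter stands in for the point where the router’s buffer overflows.

```python
def aimd(rtts, capacity, cwnd=1.0):
    """Return the congestion window after each RTT in a toy AIMD model.

    Additive increase: grow cwnd by 1 packet per RTT.
    Multiplicative decrease: halve cwnd when a loss is detected,
    which here happens whenever cwnd reaches `capacity`.
    """
    trace = []
    for _ in range(rtts):
        if cwnd >= capacity:       # buffer overflow -> packet loss
            cwnd = cwnd / 2        # multiplicative decrease
        else:
            cwnd = cwnd + 1        # additive increase
        trace.append(cwnd)
    return trace

print(aimd(10, capacity=8))        # rises to 8, halves to 4, rises again
```

The printed trace shows the sawtooth: a linear climb up to the loss point, a drop to half, then another climb.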
Throughput relationship to RTT
Throughput is inversely proportional to both the RTT and the square root of the loss rate.
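This relationship is the well-known square-root approximation for steady-state TCP throughput, roughly (MSS · C) / (RTT · √p) for a constant C. A quick sketch (the function name and parameter choices are illustrative):

```python
from math import sqrt

def tcp_throughput(mss_bytes, rtt_s, loss_rate, c=sqrt(1.5)):
    """Approximate steady-state TCP throughput in bytes/second.

    throughput ~ (MSS * C) / (RTT * sqrt(p)): doubling the RTT, or
    quadrupling the loss rate, each cut throughput in half.
    """
    return (mss_bytes * c) / (rtt_s * sqrt(loss_rate))

# 1460-byte MSS, 100 ms RTT, 1% loss
print(tcp_throughput(1460, 0.1, 0.01))
```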
Data Centers and TCP Incast
Data centers have many racks of servers, with switches that connect them to the other servers. Workloads are high-bandwidth and low-latency, many clients make requests in parallel, and the buffers in the switches are small. Under these conditions there is a throughput collapse called the TCP incast problem, which happens when many servers all send data simultaneously: the switch buffer overflows, and because the RTTs are so much shorter than TCP’s retransmission timeouts, senders must wait out the TCP timeout before retransmitting. This can reduce throughput by as much as 90%.
Barrier Synchronization and Idle Time
A common request pattern: a client or application has parallel threads, and no forward progress can be made until the responses to a synchronized read have all returned. For example, a client might issue a synchronized read with 4 parallel requests, but the 4th is dropped. The request is sent at time zero and responses arrive less than a millisecond later; threads 1 to 3 complete, but TCP may time out on the 4th. The link is then idle for a very long time while that 4th connection waits out the timeout. Adding more servers to the network induces overflow of the switch buffer, packet loss, and throughput collapse. Solutions include fine-grained TCP retransmission timers, or having the client ACK every other packet instead of every packet.
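The idle-time effect is easy to see numerically. In this toy model (all names hypothetical), the barrier-synchronized read finishes only when the slowest response arrives, and a dropped response costs a full retransmission timeout:

```python
def response_time(per_thread, dropped, rto):
    """Completion time of a barrier-synchronized read.

    per_thread: normal response time of each parallel request.
    dropped: indices of requests whose response was lost; those
    requests pay the full retransmission timeout (RTO) on top.
    The barrier means overall time = the slowest request.
    """
    times = [rto + t if i in dropped else t
             for i, t in enumerate(per_thread)]
    return max(times)

fast = [0.001, 0.001, 0.001, 0.001]              # sub-millisecond RTTs
print(response_time(fast, dropped=set(), rto=0.2))   # ~1 ms: all arrive
print(response_time(fast, dropped={3}, rto=0.2))     # ~201 ms: link idle
```

With a 200 ms RTO and 1 ms responses, one lost packet makes the whole read ~200x slower, which is the throughput collapse in miniature.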
Multimedia and Streaming
Digital audio and video data
Multimedia transfer type
Multimedia transfers over a “best effort” network with no quality-of-service guarantees; it can tolerate some packet loss and jitter. Multimedia streaming, especially video, is frequent in today’s Internet.
Challenges of media streaming
The large volume of data: each sample is a sound or image, and there are many samples per second. Also, because of the way the data is compressed, the volume of data being sent varies over time rather than flowing at a constant rate, yet playout must be smooth.
Digitizing audio and video
We sample the audio signal at fixed intervals and represent the amplitude of each sample with a fixed number of bits. For example, if our dynamic range is from 0 to 15, we can quantize the amplitude of the signal so each sample is represented with four bits.
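The 4-bit example works out like this (a minimal sketch; the function name is made up, and samples are assumed already normalized to the range 0.0–1.0):

```python
def quantize(samples, bits=4):
    """Map amplitudes in [0.0, 1.0] onto 2**bits discrete levels.

    With bits=4 there are 16 levels, 0 through 15, matching the
    dynamic range 0-15 in the flashcard example.
    """
    levels = 2 ** bits - 1            # 4 bits -> top level is 15
    return [round(s * levels) for s in samples]

print(quantize([0.0, 0.5, 1.0]))      # silence, mid amplitude, full scale
```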
Video compression
Works in a slightly different way: it uses spatial redundancy. Each video is a sequence of images, and each image can be compressed by exploiting spatial redundancy, i.e., aspects that humans tend not to notice.
Compression across images
Temporal redundancy: between any two video images, or frames, there might be very little difference. If a person is walking towards a tree, successive frames are almost the same, just slightly different.
Video Compression
Uses a combination of static image compression on what are called reference (or anchor, or I) frames, and derived frames (P frames). A P frame can be represented in terms of a compressed I frame: if you divide the I frame into blocks, the P frame is almost the same except for a few blocks, which can be represented in terms of the original I-frame blocks plus some motion vectors.
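A toy version of the I-frame/P-frame idea: store only the blocks that changed. This sketch treats a frame as a list of opaque block values and omits motion vectors and block search, which real codecs use to shrink the deltas further.

```python
def encode_p_frame(i_frame, p_frame):
    """Toy P-frame encoding: keep only blocks that differ from the I frame."""
    return {idx: blk
            for idx, (ref, blk) in enumerate(zip(i_frame, p_frame))
            if blk != ref}

def decode_p_frame(i_frame, delta):
    """Rebuild the P frame by patching the I frame with the stored blocks."""
    return [delta.get(idx, ref) for idx, ref in enumerate(i_frame)]

i_frame = ["sky", "tree", "grass", "person@x0"]
p_frame = ["sky", "tree", "grass", "person@x1"]   # only one block moved
delta = encode_p_frame(i_frame, p_frame)
print(delta)                                      # just the changed block
```

One changed block is stored instead of four, which is the whole point: most of the P frame rides for free on the I frame.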
A common video compression format
MPEG
Streaming Video
A server streams stored audio and video: the server stores the audio or video files, and the client requests the files and plays them as they download. The server divides the data into segments and labels each with a timestamp indicating when it should be played. On the client side, the solution is a playout buffer: it stores data as it arrives and plays the data out in a continuous fashion.
Playout Delay
Packets are received at slightly different times depending on network delay. To keep these delays from disrupting playout, we wait to receive several packets and fill the buffer before we start playing; then playout is smooth regardless of sporadic data arrival caused by network delays.
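A tiny model of the trade-off (all names and numbers illustrative): packet i is needed at playback tick i, and playback begins after an initial playout delay. A larger delay absorbs more jitter.

```python
def playout_schedule(arrivals, delay):
    """Return True if every packet arrives before its playback deadline.

    arrivals: packet id -> network arrival time; packet i carries
    timestamp i, i.e. one packet is consumed per time unit.
    delay: initial playout delay before playback starts.
    """
    start = delay
    return all(arrivals[i] <= start + i for i in sorted(arrivals))

jittery = {0: 0.0, 1: 2.5, 2: 2.6, 3: 3.1}   # bursty arrivals
print(playout_schedule(jittery, delay=0.5))  # buffer too small: underrun
print(playout_schedule(jittery, delay=1.6))  # enough buffering: smooth
```

The cost of the larger delay is a longer wait before playback starts, which is why interactive applications keep playout buffers small and streaming ones keep them large.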
TCP is not a good fit
TCP is not a good fit for streaming video or audio. TCP retransmits lost packets, which isn’t useful here; it slows the sending rate, starving the client; and there’s a lot of overhead (a 20-byte TCP header on every packet, plus sending ACKs). UDP is a better fit because it doesn’t retransmit packets, doesn’t adapt the sending rate, and has a smaller header.
UDP
Because UDP doesn’t retransmit or adjust the sending rate, those tasks are left to the application layer: transmitting and encapsulating the data, adapting the sending rate, and adapting the quality of the media. The application also needs to be fair to TCP senders.
More Streaming
YouTube videos are converted to Flash or HTML5, and YouTube uses TCP anyway. When accessing YouTube you’re redirected to a content distribution server in a content distribution network, either a third party’s (like Limelight) or Google’s own.
Skype
Skype has a central login server but then uses peer-to-peer connections to exchange the actual voice streams, and compresses the audio to reduce the bit rate. Good compression and avoiding unnecessary hops improve audio quality. Long propagation delays, high congestion, and disruption from routing changes all degrade VoIP quality.
Marking (and Policing)
Mark the audio packets as they arrive at the router so they receive higher priority than, say, a file transfer, then serve them with priority queues. An alternative is to allocate fixed bandwidth per application, but that results in inefficiency if one flow doesn’t use its full allocation. Another alternative is admission control, where an application declares its needs in advance and the network may block the application’s traffic, like a busy signal on a telephone; this would be very annoying for web browsing.
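Strict priority queuing can be sketched with a heap (a toy model; real routers implement this in hardware per output port, and marking is done in the packet header, e.g. DSCP bits):

```python
import heapq

def drain(queue):
    """Serve packets strictly by priority class (lower number = higher)."""
    out = []
    while queue:
        _, _, pkt = heapq.heappop(queue)
        out.append(pkt)
    return out

q = []
# mark audio with priority 0, bulk file transfer with priority 1;
# the sequence number keeps arrival order within a class
arrivals = [(1, "file-1"), (0, "audio-1"), (1, "file-2"), (0, "audio-2")]
for seq, (prio, pkt) in enumerate(arrivals):
    heapq.heappush(q, (prio, seq, pkt))

order = drain(q)
print(order)    # all audio packets leave before any file-transfer packet
```

Even though the file-transfer packet arrived first, both audio packets are served ahead of it, which is exactly the latency benefit marking buys.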