Congestion Control and Streaming Flashcards
Goals of Congestion Control
- Use Network Resources Efficiently
- Preserve Fair Allocation of Resources
- Avoid Congestion Collapse
What causes congestion collapse to occur?
Congestion collapse occurs when the dropped packets and excessive queuing delays caused by congestion in turn exacerbate the problem, producing more drops and delays, and so on.
Dropped packets cause retransmissions that add additional traffic to the congested path, while excessive delays can cause spurious retransmissions (i.e., a timeout occurs when the packet was merely delayed, not lost)
Note that the normal traffic that contributes to congestion is not the cause of collapse; it is the extra traffic generated in response to congestion that leads to collapse.
What is the difference between fairness and efficiency in a congestion control scheme?
Efficiency is how much of the available bandwidth is used, i.e., efficient congestion control leaves little or no bandwidth wasted.
Fairness is how bandwidth is allocated among the different flows. Two common definitions of fairness are that all flows get equal throughput, or that each flow gets throughput proportional to its demand.
Assuming traditional TCP Reno with AIMD behavior, given a TCP flow’s bottleneck link bandwidth, what is the average throughput of the flow?
Additive increase grows the sending rate until throughput reaches the bottleneck bandwidth. At that point packet loss occurs, triggering multiplicative decrease, which cuts the rate to 1/2 of the bandwidth.
The average throughput is the average of 1/2 bandwidth and the full bandwidth, which is 3/4 of the bottleneck bandwidth.
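Treating the sending rate as a sawtooth that ramps linearly between these two extremes, the time average is just the midpoint:

\[ \text{average throughput} = \frac{\tfrac{1}{2}B + B}{2} = \frac{3}{4}B \]

where B is the bottleneck link bandwidth.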
What is Incast?
A drastic reduction in application throughput that results when servers using TCP all simultaneously request data.
What circumstances lead to the incast problem?
Incast occurs when collective (many-to-one) communication takes place across high fan-in switches: many small packets arrive at the switch at the same time, and some are lost.
The timeout delay that follows the packet loss is far longer than the round-trip time of the network, because the data center operates as a low-latency network.
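To see the scale of the mismatch, here is a back-of-envelope sketch in Python; the numbers (1 Gbps link, ~100 µs RTT, 200 ms minimum RTO, 256 KB response block) are illustrative assumptions, not measurements from the source:

```python
# Back-of-envelope sketch of why a single timeout hurts so much in a data
# center. All constants below are assumed, illustrative values.
LINK_BPS   = 1e9              # 1 Gbps link
BLOCK_BITS = 256 * 1024 * 8   # one 256 KB response block
RTT_S      = 100e-6           # ~100 microsecond data-center RTT
RTO_S      = 200e-3           # typical minimum TCP retransmission timeout (~200 ms)

transfer_s = BLOCK_BITS / LINK_BPS                    # ~2 ms to send the block
goodput_with_timeout = BLOCK_BITS / (transfer_s + RTO_S)

print(f"RTO is {RTO_S / RTT_S:.0f}x the RTT")         # ~2000x
print(f"Goodput after one timeout: {goodput_with_timeout / 1e6:.1f} Mbps "
      f"({100 * goodput_with_timeout / LINK_BPS:.1f}% of the link)")
```

A single timeout stalls the transfer for thousands of RTTs, which is why incast collapses application throughput even though the link itself is fast.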
Suppose you are working on some live video call software (think Skype or Google Hangouts) and decide to build your application protocol on top of UDP (rather than TCP). Give as many different points as you can (minimum two) to help justify that decision.
- Latency is critical – retransmissions are pointless since they will arrive too late anyway
- Dropped frames aren’t a big deal – the next frame will advance the video state before a retransmitted frame could arrive anyway
- Congestion control and flow control could cause unacceptable delays, as video frames get backed up in the sender host’s local buffer
What are some possible solutions to the TCP Incast problem?
Finer Granularity Timers: Use TCP retransmission timers on the order of microseconds rather than milliseconds. Reducing the retransmission timeout improves system throughput; timers need to operate at a granularity close to the RTT of the network.
Fewer Acknowledgements: Having the client acknowledge fewer packets reduces overall network load.
Barrier Synchronization
A client or application may have many parallel threads, and no forward progress can be made until the responses for all of those threads have been received.
What are some commonly used QoS techniques for streaming audio and video?
Marking Packets: Mark packets as higher priority and place them in a higher-priority queue.
Scheduling: Schedule the higher-priority queue so it is serviced more frequently.
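A minimal sketch of these two steps (marking into a higher-priority queue, then scheduling that queue more often), assuming a simple 3:1 weighted round-robin rather than any particular router's scheduler:

```python
from collections import deque

# Two queues: packets are "marked" into the high-priority queue, and the
# scheduler services that queue more frequently (assumed 3:1 weighting).
high, low = deque(), deque()

def enqueue(packet, high_priority):
    """Marking step: classified packets go into the appropriate queue."""
    (high if high_priority else low).append(packet)

def schedule(weight=3):
    """Scheduling step: serve up to `weight` high-priority packets for every
    low-priority packet, so delay-sensitive traffic waits less."""
    sent = []
    while high or low:
        for _ in range(weight):
            if high:
                sent.append(high.popleft())
        if low:
            sent.append(low.popleft())
    return sent

# Example: audio frames marked high priority are sent ahead of bulk data.
for i in range(2):
    enqueue(f"bulk-{i}", high_priority=False)
for i in range(4):
    enqueue(f"audio-{i}", high_priority=True)
print(schedule())   # audio-0, audio-1, audio-2, bulk-0, audio-3, bulk-1
```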
What are Two Approaches to Congestion Control?
End to End (used by TCP): No explicit feedback from the network to senders; congestion is inferred from loss and delay.
Network-Assisted: Routers provide explicit feedback about the rates at which end systems should send.
What are Two Approaches to adjusting sending rate in TCP Congestion control?
Window-Based: A sender can only have a certain number of packets outstanding; another packet cannot be sent until an ACK is received for an outstanding packet. Increasing the window size increases the transmission rate. In TCP this corresponds to the AIMD principle, where the congestion window increases by one packet per round trip and is cut in half when packet loss occurs.
Rate-Based: The sender monitors the loss rate and uses a timer to modulate the transmission rate.
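A minimal sketch of the window-based AIMD update described above (illustrative Python, not actual TCP Reno source):

```python
# AIMD congestion-window update: +1 packet per RTT without loss, halve on loss.
def aimd_update(cwnd, loss_detected, alpha=1.0, beta=0.5):
    """Return the new congestion window, in packets."""
    if loss_detected:
        return max(1.0, cwnd * beta)   # multiplicative decrease
    return cwnd + alpha                # additive increase (per RTT)

# Example: the window grows linearly, then is cut in half when a loss occurs.
cwnd = 10.0
for rtt in range(5):
    cwnd = aimd_update(cwnd, loss_detected=False)
print(cwnd)                       # 15.0
print(aimd_update(cwnd, True))    # 7.5
```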
How to Digitize Audio?
Sample audio signal at fixed intervals and represent the amplitude at each sample with a fixed number of bits.
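A minimal sketch of that process, assuming an amplitude range of [-1, 1] and illustrative telephone-quality parameters (8 kHz sampling, 8 bits per sample, i.e., 64 kbps):

```python
import math

def digitize(signal, sample_rate_hz, duration_s, bits):
    """Sample an analog signal (here, a Python function of time) at fixed
    intervals and quantize each amplitude to a fixed number of bits.
    Assumes the signal's amplitude lies in [-1.0, 1.0]."""
    levels = 2 ** bits
    samples = []
    n = int(sample_rate_hz * duration_s)
    for i in range(n):
        t = i / sample_rate_hz
        amplitude = signal(t)
        # Map [-1, 1] onto the integer levels 0 .. levels-1.
        samples.append(int((amplitude + 1.0) / 2.0 * (levels - 1)))
    return samples

# Example: 10 ms of a 440 Hz tone sampled at 8 kHz with 8 bits per sample.
pcm = digitize(lambda t: math.sin(2 * math.pi * 440 * t), 8000, 0.01, 8)
print(len(pcm), pcm[:5])
```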