Midterm 2 Flashcards
What is congestion control?
Goal: ‘Fill the internet pipes without overflowing them’
‘Watch the sink, slow down the water flow if it begins to overflow’
What is congestion collapse?
Increase in load -> decrease in useful work done: throughput falls below the capacity of the bottleneck link.
What are the causes of congestion collapse?
Spurious Retransmission: Senders don’t receive an ACK in a reasonable time, so they retransmit the packet. This results in many copies of the same packet being outstanding.
Undelivered Packets: Packets consume resources upstream and are then dropped elsewhere in the network, wasting the work already done.
Solution: apply TCP congestion control to all traffic!
What are the goals of congestion control?
- Use network resources efficiently
- Preserve fair allocation of resources
- Avoid congestion collapse
What are the two approaches to congestion control?
End to End (What TCP uses)
- No feedback from network
- Congestion inferred from loss and delay
Network-Assisted
- Routers provide feedback
- set a single bit indicating congestion (ECN)
- tell sender explicit rate they should send at
How does TCP congestion occur?
- Senders increase rate until packets are dropped
* TCP interprets packet loss as congestion and slows down
What is the window-based approach to adjusting transmission rates?
- The sender may have up to WINDOW_SIZE packets outstanding (unACKed) at a time
- Once the window is full, stop sending until an ACK comes back
- After a full window of packets is ACKed (roughly once per RTT), increase the window size by 1 and keep sending until the window is full again
- If you fail to receive an ACK for a packet, cut the window size in half
- This is TCP's Additive Increase, Multiplicative Decrease (AIMD)
- Window-based control is the common approach (minimal sketch below)
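A minimal sketch of the window logic above, assuming one simulated "RTT round" per loop iteration (the function name and loss callback are illustrative, not from the lecture):

```python
# AIMD sketch: cwnd is the congestion window in packets.
# One loop iteration stands in for one RTT round.

def aimd(rounds, loss_in_round):
    """loss_in_round(r) -> True if a loss was detected in round r (assumed callback)."""
    cwnd = 1.0
    history = []
    for r in range(rounds):
        if loss_in_round(r):
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease: halve the window
        else:
            cwnd += 1.0                # additive increase: +1 packet per RTT
        history.append(cwnd)
    return history

# A loss every 10th round produces the classic AIMD sawtooth.
print(aimd(30, lambda r: r % 10 == 9))
```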
What is Rate Based congestion control (as opposed to window based congestion control)?
- Monitor loss rate
* Uses timer to modulate transmission rate
Calculate the Sending Rate for the following:
RTT = 100ms
Packet: 1kB
Window Size: 10 Packets
- 10 packets / 100ms = 100 packets / second
- 1000B * 8 b/B = 8000b
- 8000 b/packet * 100 packets / sec = 800kbps
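The same arithmetic as a quick sanity check in code (values are the ones given above):

```python
# Sending rate = (window size * packet size) / RTT
rtt_s = 0.100            # 100 ms
packet_bits = 1000 * 8   # 1 kB packet = 8000 bits
window_pkts = 10

packets_per_s = window_pkts / rtt_s       # 100 packets/s
rate_bps = packets_per_s * packet_bits    # 800,000 bps
print(rate_bps / 1e3, "kbps")             # -> 800.0 kbps
```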
What is fairness and efficiency in congestion control?
Fairness:
* everyone gets their fair share of resources
Efficiency:
- network resources are used well.
- No spare capacity in network
How does AIMD converge to optimal fairness and efficiency?
- Efficiency: x1 + x2 = C (network capacity, Constant)
- Fairness: x1 = x2
- Below/left of the line x1 + x2 = C -> underutilization
- Above/right of the line x1 + x2 = C -> overutilization
- The optimal point is where the network is neither under- nor over-utilized and you are on the x1 = x2 fairness line
- Additive increase moves the operating point parallel to the fairness line, increasing efficiency until the network becomes overloaded
- Multiplicative decrease moves the point back toward the origin; each halving also halves the gap between x1 and x2, so repeated AIMD cycles converge to the fair, efficient point (toy simulation below)
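A toy simulation of the phase-plot argument, assuming two flows that both see a loss whenever their combined rate exceeds C (a deliberate simplification):

```python
# Two AIMD flows sharing capacity C, starting from an unfair allocation.
C = 100.0
x1, x2 = 80.0, 10.0

for _ in range(200):
    if x1 + x2 > C:                 # overload: multiplicative decrease
        x1, x2 = x1 / 2, x2 / 2     # each halving also halves the gap x1 - x2
    else:                           # spare capacity: additive increase
        x1, x2 = x1 + 1, x2 + 1     # moves parallel to the fairness line

print(round(x1, 1), round(x2, 1))   # the two rates end up nearly equal
```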
Describe TCP Incast
Drastic reduction in application-level throughput when a large number of servers using TCP all make simultaneous requests.
Results b/c of:
- High Fan In
- High bandwidth, low latency
- lots of parallel requests each w/ small amount of data
Results in bursty retransmissions and a collapse in throughput.
When a TCP timeout occurs, the sender must wait hundreds of ms before retransmitting, but RTTs inside a data center are < 1 ms (even microseconds), so links sit idle waiting for the timeout and throughput can drop by up to 90%.
How do you calculate average AIMD throughput?
(3/4) * (Wm/RTT), where Wm is the maximum window size reached just before a loss (throughput is in packets/s if Wm is in packets).
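A worked example of the formula; the window, packet size, and RTT values here are made up for illustration:

```python
# Average AIMD throughput ≈ (3/4) * Wm / RTT. Example numbers are illustrative only.
w_max_pkts = 20           # max window before a loss, in packets
packet_bits = 1500 * 8    # 1500-byte packets
rtt_s = 0.100             # 100 ms

avg_bps = 0.75 * (w_max_pkts * packet_bits) / rtt_s
print(avg_bps / 1e6, "Mbps")   # -> 1.8 Mbps
```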
What are solutions to TCP incast?
- Microsecond (µs) granularity retransmission timers (i.e., reduce the retransmission timeout)
- Timers need to operate on a granularity close to the data center RTT
- ACK every other packet instead of every packet
What are some challenges of streaming data?
- Large volume of data
- Data volume varies over time
- Low tolerance for delay variation (don’t like buffering)
- Low tolerance for delay, period
- Some loss is acceptable
How is video compression performed?
- Image Compression -> Spatial Redundancy
* Compression across images -> temporal redundancy
Why is TCP not a good fit for congestion control for streaming data?
- Reliable Delivery (could resend packets, wasting time and causing buffering)
- Slowing down upon loss (could slow down too much and starve buffer)
- Protocol overhead (extra bytes to send)
UDP:
+ No Retransmission
+ No sending rate adaptation
How are YouTube and Skype implemented?
YouTube: Decided to keep it simple and still use HTTP/TCP
Skype: Peer to Peer (P2P)
* Individual users route voice traffic through one another
How does QoS work?
Marking and Policing
* Marking and Scheduling - mark packets with a higher priority so they can be put into a higher-priority queue; schedule that queue so it is serviced more often than lower-priority queues (toy example below)
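A toy illustration of marking plus priority scheduling, assuming two queues and an arbitrary 3:1 service weighting (the queue setup and names are illustrative, not from the lecture):

```python
from collections import deque

# Marked (high-priority) packets go to one queue, best-effort to another.
# Each scheduling round services the high-priority queue more often (3:1).
high = deque(["voip-1", "voip-2", "voip-3", "voip-4"])
low = deque(["bulk-1", "bulk-2"])

def serve_round(high, low, weight=3):
    served = []
    for _ in range(weight):
        if high:
            served.append(high.popleft())
    if low:
        served.append(low.popleft())
    return served

print(serve_round(high, low))   # ['voip-1', 'voip-2', 'voip-3', 'bulk-1']
print(serve_round(high, low))   # ['voip-4', 'bulk-2']
```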
Describe the different traffic classifications
CBR - Constant bit rate (audio)
VBR - variable bit rate (video, data)
What is leaky bucket traffic shaping?
(Isochronous: output leaves the bucket at a smooth, constant rate)
Beta = size of bucket
rho = drain rate from bucket (average rate)
Sender can send bursty traffic as long as it doesn’t overfill the bucket
Larger bucket = larger burst allowed
Larger drain rate = faster sustained output rate
Example Audio Stream
- 16KB Bucket
- Packet size = 1KB
- Rho = 8 packets / s
So the bucket could accept a burst of up to 16 packets, but it will steadily deliver 8 packets/s out of the bucket (sketch below).
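A minimal leaky bucket sketch using the audio-stream numbers from this card (the class and method names are illustrative):

```python
# Leaky bucket sketch: bursts accumulate in the bucket (up to beta),
# but packets drain out at the fixed rate rho.
class LeakyBucket:
    def __init__(self, beta_pkts, rho_pkts_per_s):
        self.beta = beta_pkts        # bucket size (max burst), in packets
        self.rho = rho_pkts_per_s    # drain (output) rate, packets/s
        self.level = 0.0             # packets currently queued in the bucket

    def arrive(self, n_pkts):
        """Accept up to n_pkts; anything that would overflow the bucket is dropped."""
        accepted = min(n_pkts, self.beta - self.level)
        self.level += accepted
        return accepted

    def drain(self, dt_s):
        """Release packets at rate rho for dt_s seconds."""
        sent = min(self.level, self.rho * dt_s)
        self.level -= sent
        return sent

# Audio example from the card: 16-packet bucket, constant 8 packets/s out.
lb = LeakyBucket(beta_pkts=16, rho_pkts_per_s=8)
print(lb.arrive(20))     # burst of 20 -> only 16 accepted
print(lb.drain(1.0))     # 8 packets delivered in the next second
```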
What is (r, T) traffic shaping?
Typically limited to fixed rate flows.
- can send at most r bits in any time frame of length T
- can send a certain number of bits every time unit
If flow exceeds rate, those bits are assigned a lower priority and may be dropped.
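A rough sketch of the per-frame accounting described above (the function name and the idea of returning the demoted bits separately are illustrative):

```python
# (r, T) shaping sketch: at most r bits per frame go out at normal priority;
# anything beyond r in that frame is demoted to a lower priority.
def rt_shape(frame_bits, r):
    in_profile = min(frame_bits, r)
    demoted = frame_bits - in_profile   # lower priority, may be dropped downstream
    return in_profile, demoted

print(rt_shape(12_000, r=10_000))   # -> (10000, 2000)
```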
What is a token bucket?
Permits bursty traffic, but bounds it: the long-term rate is limited by rho and the burst size by beta.
- Tokens arrive in the bucket at rate rho
- Bucket size is beta
If ‘b’ is the packet size (b < beta):
- if the bucket has at least b tokens, the packet is sent and b tokens are removed
- if the bucket has fewer than b tokens, the packet must wait until b tokens have accumulated
In general, you must wait until a packet's worth of tokens (bits) is present in the bucket, at which point the packet is sent.
It is difficult to police traffic sent by token buckets.
The bound for a token bucket is:
Over any interval of length T, the amount of traffic sent is ≤ beta + T*rho bits
You can always send at rate rho; to burst above rho you draw on the bucket. So if rho = 6 Mbps and you want to send at 10 Mbps for 0.5 s, you need beta = (10 - 6) Mbps * 0.5 s = 2 Mb = 250 KB.
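A sketch of the token bucket logic plus the beta sizing calculation from the example above (class and method names are illustrative):

```python
# Token bucket sketch: tokens accrue at rate rho up to beta; a packet of size b
# can only be sent once b tokens are available.
class TokenBucket:
    def __init__(self, beta_bits, rho_bps):
        self.beta = beta_bits
        self.rho = rho_bps
        self.tokens = beta_bits      # start with a full bucket

    def tick(self, dt_s):
        self.tokens = min(self.beta, self.tokens + self.rho * dt_s)

    def try_send(self, b_bits):
        if self.tokens >= b_bits:    # enough tokens: send and spend them
            self.tokens -= b_bits
            return True
        return False                 # otherwise the packet must wait

# Sizing the bucket for the example: sustain 6 Mbps, burst at 10 Mbps for 0.5 s.
rho, burst_rate, burst_dur = 6e6, 10e6, 0.5
beta_bits = (burst_rate - rho) * burst_dur
print(beta_bits / 8 / 1e3, "KB")     # -> 250.0 KB

tb = TokenBucket(beta_bits=beta_bits, rho_bps=rho)
print(tb.try_send(1500 * 8))         # True: the full bucket covers a 1500-byte packet
```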
How do you police a token bucket shaper?
Use a composite shaper: a token bucket shaper followed by a leaky bucket shaper (the token bucket feeds the leaky bucket).
What is a power boost?
Allows a subscriber to send at a higher rate for a short period of time, drawing on spare capacity; intended for users who do not put a sustained load on the network.
Time allowed in the boost: d = Beta / (r_boost - r_sustained), where Beta is the boost bucket size and r_boost is the sending rate during the boost.
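Plugging illustrative numbers into the duration formula above (the bucket size and rates here are made up):

```python
# PowerBoost duration d = beta / (r_boost - r_sustained). Example values are made up.
beta_bits = 10e6 * 8     # 10 MB boost bucket, in bits
r_boost = 20e6           # 20 Mbps sending rate during the boost
r_sustained = 10e6       # 10 Mbps sustained rate

d_s = beta_bits / (r_boost - r_sustained)
print(d_s, "seconds")    # -> 8.0 seconds
```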
What is buffer bloat?
When there is too much buffering in the network, packets sit in large, full queues, so latency (RTT) can increase significantly.
Solutions:
1) smaller buffers (not realistic - too much equipment is already deployed)
2) traffic shaping - don't send at a rate higher than the upload rate provided by your ISP, so the buffer never fills
Why measure network traffic?
- Security - looking for rogue behavior (botnets, DoS attacks)
- Billing - something like the 95th percentile