6 - Congestion control & streaming Flashcards

1
Q

Congestion Collapse

A

Increase in load -> Decrease in useful work

Cause of congestion collapse

  • Spurious retransmissions (solution: better timers and TCP congestion control)
  • Undelivered packets (solution: apply congestion control to all traffic)
2
Q

Goals of Congestion Control

A
  • Use network resources efficiently
  • Preserve fair allocation of resources
  • Avoid congestion collapse
3
Q

End-to-end (one approach to congestion control)

A
  • No feedback from the network
  • Congestion inferred from loss and delay
  • Approach taken by TCP congestion control
4
Q

Network-assisted (one approach to congestion control)

A
  • Routers provide feedback
    • a single bit (congestion indication)
    • explicit rates
5
Q

TCP Congestion Control

A
  • Senders increase rate until packets are dropped
  • TCP interprets packet loss as congestion and slows down.
  • Congestion control has 2 parts:
    1. Increase algorithm - the sender must probe the network to determine whether it can sustain a higher sending rate
    2. Decrease algorithm - the sender reacts to congestion to achieve acceptable loss rates, delays, and sending rates
6
Q

Window-based aka AIMD (Approach to adjusting rates)

A

A sender can only have a certain number of packets outstanding or in flight

Sender uses ACKs from the receiver to clock the transmission of new data

If the sender wants to increase its rate, it must increase the window size

In TCP, every time a sender receives an acknowledgement, it increases the window size

Success: window increased by one packet per round trip (“additive increase”)

Failure: window size reduced by half (“multiplicative decrease”)

This is the most common way of performing congestion control.
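The window updates above can be sketched as a tiny function (a minimal illustration with assumed names and values, not TCP's actual implementation):

```python
# Sketch of the AIMD congestion-window update, assuming a window
# measured in packets and one update decision per round trip.
def aimd_update(cwnd: float, loss: bool) -> float:
    """Return the new congestion window after one round trip."""
    if loss:
        return max(1.0, cwnd / 2)  # multiplicative decrease: halve the window
    return cwnd + 1.0              # additive increase: one more packet per RTT

# Example: grow from 4 packets for three round trips, then hit a loss.
w = 4.0
for _ in range(3):
    w = aimd_update(w, loss=False)  # 5.0, 6.0, 7.0
w = aimd_update(w, loss=True)       # 3.5
```

Repeating this cycle produces the sawtooth rate-vs-time curve described below.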

7
Q

Rate-based (Approach to adjusting rates)

A
  • Monitor the loss rate
  • Use a timer to modulate the sending rate

8
Q

What does AIMD converge to?

A

Fairness and efficiency
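A toy two-flow simulation (illustrative only; the capacity and starting rates are assumed) shows why: additive increase preserves the gap between the flows' rates, while each multiplicative decrease halves it, so the rates converge toward the fair share.

```python
# Two AIMD flows sharing a link of (assumed) capacity C. When total
# demand exceeds C, both flows see loss and halve their rates;
# otherwise each adds one unit per round trip.
C = 100.0
x, y = 80.0, 10.0  # deliberately unfair starting rates

for _ in range(2000):
    if x + y > C:
        x, y = x / 2, y / 2  # multiplicative decrease halves the gap
    else:
        x, y = x + 1, y + 1  # additive increase keeps the gap constant

# After many cycles the two rates oscillate around the fair share C/2.
```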

9
Q

Additive increase is applied

A

Increases efficiency

10
Q

Multiplicative decrease is applied

A

Increases fairness

11
Q

AIMD algorithm (TCP Congestion Control)

A
  • Distributed, fair, efficient
  • Sawtooth behavior (rate vs. time)

12
Q

Data Center & TCP “Incast”

A

Racks of servers connected to switches

  • High “fan-in”
  • High bandwidth, low latency
  • lots of parallel requests each with small amount of data
  • small switch buffers

The throughput collapse that results from this phenomenon is called TCP Incast

Incast is a drastic reduction in application throughput that results when servers using TCP all simultaneously request data, leading to a gross underutilization of network capacity in many-to-one communication networks like a data center.

When the switch buffers fill, packets are dropped and TCP timeouts fire; the resulting bursty retransmissions overfill the switch buffers again.

TCP timeouts can last hundreds of milliseconds.

Roundtrip time in a data center network is typically less than a millisecond, often microseconds.

Because the roundtrip times are so much shorter than TCP timeouts, the senders must wait for the TCP timeout before they retransmit

Application throughput can be reduced by as much as 90% as a result of this link idle time

13
Q

Barrier Synchronization & Idle Time

A

A common request pattern in which a client or application has many parallel threads, and no forward progress can be made until all the responses for those threads are satisfied

The addition of more servers in the network induces an overflow of the switch buffer causing severe packet loss and inducing throughput collapse

Solution 1: Use fine-grained TCP retransmission timers on the order of microseconds rather than milliseconds. Reducing the retransmission timeout for TCP improves system throughput.

Basic idea is that the timers need to operate on a granularity that’s close to the round-trip time of the network.

Solution 2: Have the client acknowledge every other packet rather than every packet, thus reducing the overall network load
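Some back-of-the-envelope arithmetic (the numbers are assumed, typical values from the discussion above) shows why coarse timers hurt so much:

```python
# Assumed typical values: data-center round-trip time of ~100 microseconds
# versus a conventional minimum TCP retransmission timeout of ~200 ms.
rtt = 100e-6
coarse_rto = 200e-3

# A single timeout stalls the sender for this many round trips,
# during which the link sits idle.
idle_rtts = coarse_rto / rtt
print(idle_rtts)  # on the order of 2000 round trips wasted per timeout
```

A timer granularity close to the network's actual round-trip time shrinks that idle window by orders of magnitude.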

14
Q

Challenges for Media Streaming

A
  • Large volume of data
  • Data volume varies over time
  • Low tolerance for delay variation
  • Low tolerance for delay, period
    (some loss is acceptable)
15
Q

Video Compression

A
  • Image compression -> spatial redundancy
  • Compression across images -> temporal redundancy

Video compression uses a combination of static image compression on reference frames (also called anchor frames or I-frames) and derived frames (P-frames)

MPEG
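A minimal sketch of exploiting temporal redundancy (toy frames as lists of pixel values; not how MPEG actually encodes): store the reference frame in full and each derived frame as a difference from it.

```python
# Encode: keep the reference (I) frame whole; store each derived (P)
# frame as a per-pixel difference from the I-frame.
def encode(frames):
    i_frame = frames[0]
    deltas = [[p - r for r, p in zip(i_frame, f)] for f in frames[1:]]
    return i_frame, deltas

# Decode: rebuild each P-frame by adding its delta back to the I-frame.
def decode(i_frame, deltas):
    return [i_frame] + [[r + d for r, d in zip(i_frame, ds)] for ds in deltas]

frames = [[10, 20, 30], [12, 20, 29], [15, 25, 35]]
i_frame, deltas = encode(frames)
assert decode(i_frame, deltas) == frames  # lossless round trip
```

When successive frames are similar, the deltas are mostly near zero and compress far better than the raw frames.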

16
Q

Streaming Video

A
  • Server stores audio/video
  • Client requests
    • Playout at the “right time”

Server can divide data into segments and label each segment with a timestamp indicating when that segment should be played, so that the client knows when to play the data.

The data must arrive at the client quickly enough for smooth playout.

Solution: Playout Buffer
Client stores data as it arrives from the server and plays it out for the user in a continuous fashion. Data may arrive more slowly or more quickly from the server, but as long as the client plays data out of the buffer at a continuous rate, the user sees a smooth playout.

A client typically waits a few seconds before it starts playing a stream, allowing data to build up in the buffer to cover periods when the server is not sending at a rate sufficient to satisfy the client’s playout rate.
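The buffering idea can be sketched as a toy simulation (the arrival pattern, playout rate, and startup delay are all assumed values):

```python
# Bursty arrivals (units of data per tick) are drained at a constant
# playout rate after a short startup delay.
arrivals = [1, 0, 2, 1, 0, 3, 1, 2]
playout_rate = 1
startup_delay = 2  # buffer for two ticks before starting playout

buffered = 0
underruns = 0
for t, arrived in enumerate(arrivals):
    buffered += arrived
    if t >= startup_delay:
        if buffered >= playout_rate:
            buffered -= playout_rate  # smooth, continuous playout
        else:
            underruns += 1            # empty buffer: playout would stall
```

Waiting before playout trades a little startup latency for robustness to jitter in the arrival rate.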

17
Q

Playout Delay

A

We might see packets generated at a particular rate and the packets might be received at slightly different times, depending on network delay.

We want to smooth out these delay variations during playout.

Playout delay allows client to achieve a smooth playout.

Some delay at the beginning of the playout is acceptable

18
Q

Pathologies streaming audio/video tolerate

A

Loss and Delay

19
Q

Why TCP is not a good fit for streaming video or audio

A
  • Reliable delivery
  • Slowing down upon loss
  • Protocol overhead

Consider using UDP instead:
  • No retransmissions
  • No sending-rate adaptation

Higher layers must solve the above problems and remain “TCP friendly”
20
Q

Skype

A
  • Central Login Server
  • P2P data exchange
  • Compression
  • Encryption
  • Delays
  • Congestion
  • Disruption
21
Q

Commonly used QoS for streaming audio/video

A
  • Marking packets (by priority)
  • Scheduling