Lesson 6 - Congestion Control Deck 2 Flashcards

1
Q

Resource control deals with

A

handling bandwidth constraints on links

2
Q

The goal of congestion control

A

fill the internet’s pipes without overflowing them

3
Q

Hosts are _______ of other hosts and their current state (i.e. how much other traffic is on the network)

A

unaware

4
Q

Congestion collapse

A

Throughput is less than the bottleneck link capacity. Congestive collapse (or congestion collapse) is the condition in which congestion prevents or limits useful communication. Congestion collapse generally occurs at “choke points” in the network, where incoming traffic exceeds outgoing bandwidth.

5
Q

Saturation

A

point at which increasing the load no longer results in useful work getting done

6
Q

Collapse

A

increasing the traffic load can cause the amount of work done or amount of traffic forwarded to actually decrease

7
Q

Possible causes of congestion collapse

A

- Spurious transmissions of packets still in flight
- Undelivered packets

8
Q

Spurious transmissions

A

when senders don’t receive acknowledgments for packets in a timely fashion, they may spuriously retransmit, resulting in many copies of the same packets being outstanding in the network at any one time

9
Q

Undelivered packets

A

Packets consume resources and are dropped elsewhere in the network

10
Q

Solution to congestion collapse

A

Congestion control. For spurious transmissions: better timers and TCP congestion control.

11
Q

Two approaches to congestion control

A

- End-to-end: the network provides no explicit feedback to the senders about when they should slow down their rates. Instead, congestion is inferred, typically from packet loss, and potentially also from increased delay.
- Network-assisted: routers provide explicit feedback about the rates at which end systems should send.

12
Q

Two parts of TCP congestion control

A
1. Increase algorithm
2. Decrease algorithm

13
Q

TCP increase algorithm

A

the sender must test the network to determine whether the network can sustain a higher sending rate

14
Q

TCP decrease algorithm

A

the senders react to congestion to achieve optimal loss rates, delays, and sending rates

15
Q

2 approaches of adjusting rate

A

- Window-based algorithm
- Rate-based algorithm

16
Q

Window-based algorithm

A

- The sender can only have a certain number of packets outstanding (in flight). The sender uses acknowledgments from the receiver to clock the transmission of new data.
- When the sender receives an ACK from the receiver, it can send another packet.
- If a sender wants to increase the rate at which it’s sending, it simply needs to increase the window size. Example: if the window size is 4 and 4 packets are in flight, the sender waits for an ACK for one of them, then sends another.
- Window-based congestion control (or AIMD) is the common way of performing congestion control in today’s networks (a small sketch follows this card)

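A minimal sketch of the ACK-clocking idea above, assuming a fixed window and an idealized in-order network; the function and variable names are invented for illustration and are not real TCP code.

```python
# Toy ACK-clocked sender: at most `window_size` packets are in flight,
# and each ACK releases ("clocks out") the next transmission.
from collections import deque

def simulate_window(window_size=4, total_packets=10):
    in_flight = deque()
    next_seq = 0
    while next_seq < total_packets or in_flight:
        # Fill the window.
        while next_seq < total_packets and len(in_flight) < window_size:
            in_flight.append(next_seq)
            print(f"send packet {next_seq} ({len(in_flight)} in flight)")
            next_seq += 1
        # The ACK for the oldest outstanding packet lets us send again.
        acked = in_flight.popleft()
        print(f"ACK {acked} -> window slides")

simulate_window()
```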
17
Q

Rate-based algorithm

A

- Explicitly rate-based congestion control algorithm
- Monitor the loss rate
- Use a timer to modulate the transmission rate

18
Q

If RTT is 100 milliseconds, packet is 1 KB, and window size is 10 packets, what is the sending rate?

A

Window / RTT = 10 packets / 0.1 sec = 100 pkts/sec; 100 pkts/sec * 8,000 bits/packet = 800,000 bps, or 800 Kbps

19
Q

2 goals of TCP Congestion Control

A

- Fairness: every sender gets their fair share of network resources
- Efficiency: network resources are used well

20
Q

In multiplicative decrease, each sender decreases its rate by _______

A

Some constant factor of its current sending rate (e.g. by half)

21
Q

3 traits of AIMD

A

- Distributed (all senders can act independently)
- Fair
- Efficient
(A rough AIMD sketch follows this card.)

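A rough AIMD sketch to go with the two cards above; the additive-increase/multiplicative-decrease update is the standard textbook rule, while the random-loss feedback is an invented stand-in for real network signals.

```python
# AIMD: grow the congestion window by `alpha` each round without loss,
# cut it by the factor `beta` when a loss is detected.
import random

def aimd(rounds=50, loss_prob=0.1, alpha=1.0, beta=0.5):
    cwnd = 1.0  # congestion window, in packets
    history = []
    for _ in range(rounds):
        if random.random() < loss_prob:
            cwnd = max(1.0, cwnd * beta)   # multiplicative decrease
        else:
            cwnd += alpha                  # additive increase
        history.append(cwnd)
    return history

print([round(w, 1) for w in aimd()])  # sawtooth-shaped window sizes
```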
22
Q

TCP Incast

A

- The throughput collapse that results from high fan-in, high bandwidth/low latency, many small/parallel requests, and small buffers in switches in a data center
- It is a drastic reduction in application throughput that results when servers using TCP all simultaneously request data, leading to a gross underutilization of network capacity in many-to-one communication patterns such as a data center. The filling up of the buffers in the switches results in bursty retransmissions that overfill the switch buffers.

23
Q

TCP timeout

A

- Filling up of the buffers in the switches results in bursty retransmissions that overfill the switch buffers.
- Timeouts can last 100s of milliseconds, but the RTT in a data center is typically < 1 ms, often just 100s of microseconds.
- Because the RTTs are so much less than TCP timeouts, the senders have to wait for the TCP timeout before they retransmit, and application throughput can be reduced by as much as 90% as a result of link idle time.

24
Q

Barrier Synchronization

A

- Common request pattern in data centers today
- A client/application might have many parallel threads, and no forward progress can be made until all the responses for those threads are satisfied

25
Q

Solutions to barrier synchronization

A

- Finer TCP granularity (retransmission timers on the order of microseconds rather than milliseconds). Reducing the retransmission timeout for TCP thus improves system throughput.
- Have the client acknowledge every other packet, rather than every packet, thus reducing the overall network load.
- Premise: the timers need to operate on a granularity that’s close to the RTT of the network. In the case of a data center, that’s 100s of microseconds or less.

26
Q

What are solutions to the TCP Incast problem? Choices:
- smaller packets
- finer granularity timers
- fewer acknowledgments
- more senders

A

- finer granularity timers
- fewer acknowledgments

27
Q

4 challenges of multimedia and streaming

A

- Large volume of data
- Data volume varies over time
- Low tolerance for delay variation
- Low tolerance for delay, period. Some loss IS acceptable.

28
Q

What is the bit rate with 8,000 samples/sec and 8 bits/sample (speech, for example)?

A

64 Kbps, which is a common bit rate for audio

29
Q

Suppose we have an MP3 with 10,000 samples/sec and 16 bits/sample. What’s the bit rate?

A

160 Kbps

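A quick check of the two bit-rate calculations in the cards above (bit rate = samples/sec * bits/sample); the helper name is arbitrary.

```python
def bit_rate_kbps(samples_per_sec, bits_per_sample):
    # bit rate = sampling rate * bits per sample
    return samples_per_sec * bits_per_sample / 1000

print(bit_rate_kbps(8_000, 8))    # 64.0  -> 64 Kbps (speech)
print(bit_rate_kbps(10_000, 16))  # 160.0 -> 160 Kbps (MP3 example)
```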
30
Q

Temporal redundancy

A

Compression across images in a video

31
Q

Playout buffer

A

Client stores data as it arrives from the server and plays it for the user in a continuous fashion. Thus, data might arrive more slowly or more quickly from the server, but as long as the client is playing data out of the buffer at a continuous rate, the user sees a smooth playout. A client may wait a few seconds before it starts playing a stream to allow data to be built up in this buffer, to account for cases where the server might have times where it’s not sending at a rate that’s sufficient to satisfy the client’s playout rate.

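A toy sketch of the playout-buffer behavior described above, assuming one media unit is played per time step; the arrival pattern and startup threshold are invented for illustration.

```python
# The client buffers until `startup_threshold` units are stored, then plays
# one unit per step; if the buffer empties, playback stalls.
def simulate_playout(arrivals, startup_threshold=3):
    buffered = 0
    playing = False
    for t, arrived in enumerate(arrivals):
        buffered += arrived
        if not playing and buffered >= startup_threshold:
            playing = True  # startup delay is over
        if not playing:
            print(f"t={t}: buffering ({buffered}/{startup_threshold})")
        elif buffered >= 1:
            buffered -= 1
            print(f"t={t}: playing ({buffered} buffered)")
        else:
            print(f"t={t}: stall (buffer empty)")

simulate_playout([1, 2, 1, 0, 0, 0, 0, 2, 2, 1])  # bursty arrivals
```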
32
Q

Which pathologies can streaming audio/video tolerate? Choices:
- loss
- delay
- variation in delay

A

- loss
- delay

33
Q

UDP

A

(User Datagram Protocol)
- Does not retransmit lost packets, and does not automatically adapt the sending rate. It also has a smaller header.
- Because it doesn’t retransmit or adapt the sending rate, many things are left to higher layers, potentially the application: when to transmit the data, how to encapsulate it, whether to retransmit, and whether to adapt the sending rate or the quality of the audio or video encoding.
- Higher layers must solve these problems. In particular, the sending rate still needs to be friendly or fair to other TCP senders which may be sharing a link.
- There are a variety of video/audio streaming transport protocols built on top of UDP that allow senders to figure out when and how to retransmit lost packets and how to adjust sending rates.

34
Q

YouTube

A

- All uploaded videos are converted to Flash (or HTML5), and nearly every browser has a Flash plugin. Thus, every browser can essentially play these videos. All browsers implement HTTP/TCP.
- TCP is sub-optimal for streaming, but keeps things simple

35
Q

Skype, VOIP

A

- The analog signal is digitized through an A/D conversion. Then the resulting digitized bitstream is sent over the internet. In the case of Skype, this A/D conversion happens by way of the application. In the case of VOIP, it might be performed with some kind of phone adapter that you plug your phone into (Vonage, for example).
- Long propagation delays, high congestion, or disruptions as a result of routing changes can all degrade the quality of a VOIP call

36
Q

QoS

A

(Quality of Service): ensure that some streams achieve acceptable performance levels

37
Q

2 ways of achieving QoS

A

- Explicit reservations
- Mark certain packet streams as higher priority than others

38
Q

Marking and Policing (and scheduling)

A

Rewatch this lecture

39
Q

What are commonly used QoS techniques for streaming audio/video? Choices:
- Marking packets
- Scheduling
- Admission control
- Fixed allocations

A

- Marking packets
- Scheduling

40
Q

Admission control

A

An application declares its needs in advance, and the network may block the application’s traffic if the network can’t satisfy those needs

41
Q

Example of admission control

A

A busy signal on a telephone network

42
Q

Why not use admission control in internet applications?

A

Bad UX (trying to go to a website and it’s blocked temporarily)

43
Q

Goal of TCP

A

Prevent congestion collapse

44
Q

Calculate sender rate

A

pkts/sec * bits/pkt
- To get pkts/sec, divide the window size by the RTT.
- (Remember, 8 bits in a byte!)
(A small helper sketch follows this card.)

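A small helper following the recipe above; the function name and argument units (packets, seconds, bytes) are my own choices.

```python
def sending_rate_bps(window_pkts, rtt_sec, packet_bytes):
    pkts_per_sec = window_pkts / rtt_sec       # window / RTT
    return pkts_per_sec * packet_bytes * 8     # 8 bits in a byte

# Card 18's numbers: window 10 pkts, RTT 100 ms, 1 KB packets.
print(sending_rate_bps(10, 0.1, 1000))  # 800000.0 bps = 800 Kbps
```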
45
Q

If you want to traffic shape a variable bit rate (VBR) video stream with average bit rate 6 Mbps and maximum bit rate 10 Mbps, should you use a leaky bucket or a token bucket? What values should you use for rho and beta if you want to allow bursts of up to 500 ms?

A

Token bucket, because the stream is bursty.
- rho should be 6 Mbps (it should match the average rate).
- beta should be (Max - Avg) * burst duration: (10 - 6 Mbps)(0.5 s) = 2 Mb.
- (Remember, 8 bits in a byte!)
(A rough sketch of this shaper follows below.)
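A rough sketch of a token-bucket shaper with rho = 6 Mbps and beta = 2 Mb, matching the answer above; the per-tick discretization and variable names are my own simplification, not a definitive implementation.

```python
def token_bucket(arrivals_mbit, rho=6.0, beta=2.0, tick=0.1):
    """arrivals_mbit[i] = Mb arriving during tick i; returns Mb sent per tick."""
    tokens = beta  # bucket starts full (beta Mb)
    sent = []
    for arrived in arrivals_mbit:
        available = tokens + rho * tick        # tokens accrue at rho Mbps
        out = min(arrived, available)          # send only what tokens allow
        tokens = min(beta, available - out)    # leftover tokens, capped at beta
        sent.append(out)
    return sent

# A 0.5 s burst at 10 Mbps (1 Mb per 0.1 s tick) gets through intact, since
# beta = 2 Mb covers the (10 - 6 Mbps) * 0.5 s excess; afterwards only the
# 6 Mbps average (0.6 Mb per tick) can be sustained.
print(token_bucket([1.0] * 5 + [0.6] * 5))
```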