Midterm 2 Flashcards

1
Q

What is Congestion Collapse?

A

Throughput < Bottleneck Link

Essentially, an increase in load leads to a decrease in useful work

2
Q

What are 2 causes of Congestion Collapse?

A
  1. Spurious Retransmission of packets still in flight
  2. Undelivered Packets that consume resources but are dropped elsewhere in the network

Note: Normal traffic that contributes to congestion is not the cause of collapse, it is the extra traffic that is caused by congestion that leads to collapse

3
Q

What are 3 goals of Congestion Control?

A
  1. Use network efficiently
  2. Preserve fair allocation of resources
  3. Avoid Congestion Collapse
4
Q

What are 2 approaches to Congestion Control?

A

E2E (TCP) and Network Assisted (Routers provide feedback)

5
Q

What are the 2 ways rates are adjusted in TCP Congestion Control?

A
  1. Window-based (AIMD)
  2. Rate-based (monitor loss rate, use timer to modulate)

6
Q

What leads to “TCP Incast”?

A
  • High Fan In
  • Workloads that are high bandwidth and low latency
  • Many parallel requests
  • Small Buffer in switches
7
Q

What is “TCP Incast”?

A

Incast is a drastic reduction in application throughput that results when servers using TCP all simultaneously respond to a request for data, leading to gross underutilization of network capacity in many-to-one communication networks (e.g. data centers). In other words, it occurs when collective communication (many-to-one or many-to-many patterns) takes place through high fan-in switches.

Overflow of the small switch buffers causes packet losses and TCP timeouts, and the resulting bursty retransmissions overfill the switch buffers again. TCP timeouts can last 100 ms or more, while the RTT is often much shorter, so senders sit idle waiting through each timeout.

8
Q

What are the solutions to “TCP Incast”?

A

Finer-granularity retransmission timers and fewer acknowledgments

9
Q

What are 3 traffic shaping approaches?

A
  • Leaky Bucket
  • Token Bucket
  • (r,T) Traffic Shaper

Bonus: Composite

10
Q

What are the 2 classes of traffic?

A

Constant Bit Rate (CBR): audio
shaped by peak rate

Variable Bit Rate (VBR): video, data
shaped by avg. rate and peak rate

11
Q

What is a Power Boost?

A

Allows a subscriber to send at a higher rate for a brief time

12
Q

What is Buffer Bloat?

A

The buffer fills up with packets but can only drain at the sustained rate (Rsus), adding large queuing delays; these large buffers are very bad for time-critical apps, e.g. video and voice

13
Q

What are the 2 types of Network Measurement?

A

Passive Measurement: collection of packet and flow statistics from traffic already on the network, e.g. Simple Network Management Protocol (SNMP), packet monitoring, flow monitoring

Active Measurement: inject additional traffic to measure various statistics e.g. ping and traceroute

14
Q

Why do CDNs like to peer with ISPs?

A

Peering with an ISP where a customer is located provides:

  • better throughput (lower latency - not as many hops)
  • reliability
  • bursty traffic handled locally -> lower transit costs
15
Q

Why do ISPs like to peer with CDNs?

A
  • Good performance for customers
  • Lower transit costs

16
Q

What is the motivation of Chord?

A

Scalable location of data in a large distributed system

17
Q

What are 2 reasons why over-buffering is a bad idea?

A
  1. Complicates design of high-speed routers leading to higher power usage, more board space, and lower density
  2. Increases E2E delay in the presence of congestion
18
Q

What is “Fairness”?

A

Fairness describes how bandwidth is allocated among the different flows. Two common definitions of fair are that all flows get equal throughput, or that each flow gets throughput proportionate to its demand (i.e., how much it wants to send).

19
Q

What is “Efficiency”?

A

Efficiency is how much of the available bandwidth is used, i.e., efficient congestion control leaves little or no bandwidth wasted.

20
Q

What are 2 solutions to Buffer Bloat?

A
  1. Smaller buffers
  2. Shape traffic such that the rate of traffic coming into the access link never exceeds the uplink rate the ISP has provided

21
Q

What is the difference between a Leaky and Token Bucket?

A
  • Both easy to implement, but token bucket is more flexible since it has additional parameters to configure burst size
  • Policing traffic sent by token buckets can be difficult
  • Token buckets allow for long bursts, and if the bursts are of high priority traffic, they are difficult to police and may interfere with other high priority traffic
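
The token-bucket behavior above can be sketched in a few lines. This is a minimal illustration, not any particular implementation; the rate and burst values are hypothetical:

```python
class TokenBucket:
    """Token-bucket shaper sketch: tokens accrue at `rate` per second,
    capped at `burst`; a packet of `size` tokens conforms only if enough
    tokens are available, which is where burst tolerance comes from."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0  # start with a full bucket

    def allow(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=100, burst=300)  # 100 tokens/s, bursts up to 300
print([tb.allow(100, t) for t in (0.0, 0.0, 0.0, 0.0)])
# → [True, True, True, False]: a 3-packet burst passes, the 4th must wait
```

A leaky bucket, by contrast, releases traffic at a fixed rate regardless of how long the link has been idle, which is why it has no burst-size parameter.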
22
Q

Data (Source Classification)

A

bursty, periodic, regular

23
Q

Audio (Source Classification)

A

continuous, periodic

24
Q

Video (Source Classification)

A

continuous, bursty (compression), periodic

25
Q

Suppose a TCP flow’s bottleneck link has 1Gbps capacity and assuming TCP Reno with AIMD behavior, what will the average throughput of that flow be in Mbps?

A

Additive increase grows throughput until it equals the bottleneck bandwidth, triggering multiplicative decrease, which drops it to 1/2 the bandwidth. This repeats, producing a sawtooth.

Average Throughput = (1/2 bandwidth + 1 bandwidth) / 2 = 3/4 bandwidth

Therefore, 3/4 (1 Gbps) = 750 Mbps
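
The 3/4 result can be checked with a tiny AIMD simulation (a sketch; the bottleneck is in arbitrary units and the loop models one additive increment per RTT):

```python
def aimd_average(bottleneck=1000, rtts=100_000):
    """Average AIMD sawtooth rate: additive increase of 1 unit per RTT,
    multiplicative decrease to 1/2 when the bottleneck is exceeded."""
    rate, total = bottleneck / 2, 0.0
    for _ in range(rtts):
        total += rate
        rate += 1                  # additive increase each RTT
        if rate > bottleneck:      # loss at the bottleneck
            rate = bottleneck / 2  # multiplicative decrease
    return total / rtts

print(aimd_average() / 1000)  # ≈ 0.75 of the bottleneck capacity
```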

26
Q

Why does the linear growth of TCP Reno (one segment per RTT) perform poorly for short-lived flows in networks with large bandwidth-delay products?

A

The time required for the congestion window to reach its maximum value is very large (on the order of minutes to hours) for TCP Reno on paths with large bandwidth-delay products. Short-lived flows may never reach a congestion event, meaning such a flow transmits more slowly than necessary over its entire lifetime.

27
Q

Describe how BIC-TCP works

A

At a high level, when BIC-TCP experiences a packet loss event, it records the window size at which the loss occurred as WMAX and multiplicatively reduces the window to WMIN, the largest window size known to be loss-free. The congestion window is then set to the midpoint between WMIN and WMAX. This is often referred to as a binary search, as it follows intuitively that the maximum possible stable window value is somewhere between a value known to be stable and the value achieved just prior to the loss event. The algorithm “searches” for this maximum stable window value by effectively halving the range of possible values each round: if the midpoint stays loss-free for an RTT it becomes the new WMIN, and if it suffers loss it becomes the new WMAX.
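
The binary-search update can be sketched as follows (simplified: real BIC-TCP also clamps each step between SMIN and SMAX and adds a max-probing phase, and the 1/2 reduction factor here is an assumption):

```python
def bic_on_loss(cwnd, beta=0.5):
    """On loss: WMAX is the window just before the loss,
    WMIN the multiplicatively reduced window (assumed beta = 0.5)."""
    return cwnd, cwnd * beta  # (w_max, w_min)

def bic_grow(w_min, w_max):
    """Each loss-free RTT, jump to the midpoint; it becomes the new floor."""
    return (w_min + w_max) / 2

w_max, w = bic_on_loss(100)   # loss at cwnd = 100 -> search in [50, 100]
for _ in range(4):            # four loss-free RTTs
    w = bic_grow(w, w_max)
print(w)  # → 96.875, converging toward w_max
```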

28
Q

How does BIC-TCP react to changes in available bandwidth?

A

If there is a sudden increase in available bandwidth, the max probing phase of BIC-TCP will rapidly increase the window beyond the value of WMAX until another loss event occurs, which resets the value of WMAX. If a sudden decrease in available bandwidth occurs, and loss happens below the value of WMAX, then the window size is reduced by a multiplicative factor (B), enabling a safe reaction to the lower saturation point.

29
Q

What improvements does CUBIC have over BIC-TCP?

A
  1. On short-RTT and low-speed networks, BIC-TCP’s growth function is too aggressive, making it unfriendly to other TCP flows competing for bandwidth
  2. CUBIC’s growth function is simpler than BIC-TCP’s, eliminating the need for multiple growth phases and for maintaining extra values, e.g. SMAX/SMIN
30
Q

What is the purpose of CUBIC’s concave portion?

A

The concave region rapidly increases the congestion window back toward the previous value at which a congestion event occurred (WMAX), allowing for quick recovery and high utilization of available bandwidth following a congestion event.

31
Q

What is the purpose of CUBIC’s plateau?

A

This is the TCP Friendly Region.

The congestion window here is nearly constant as it approaches and potentially exceeds WMAX. This achieves stability, as WMAX represents the point where network utilization was at its highest under steady-state conditions.

32
Q

What is the purpose of CUBIC’s convex portion?

A

This portion exists to rapidly converge on a new value of WMAX following a change in available bandwidth. When the congestion window exceeds WMAX and continues to increase past the end of the plateau, it likely indicates that some competing flows have terminated and more bandwidth is available (the max probing phase).
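
All three regions (concave, plateau, convex) come from a single cubic curve. A sketch of the window function, using the C = 0.4 and beta = 0.7 defaults from the CUBIC RFC (assumed here):

```python
def cubic_window(t, w_max, C=0.4, beta=0.7):
    """CUBIC window t seconds after a loss: W(t) = C*(t - K)^3 + Wmax,
    with K chosen so that W(0) equals the post-loss window beta * Wmax."""
    K = (w_max * (1 - beta) / C) ** (1 / 3)
    return C * (t - K) ** 3 + w_max

K = (100 * (1 - 0.7) / 0.4) ** (1 / 3)     # ≈ 4.2 s for Wmax = 100
print(round(cubic_window(0, 100), 6))      # → 70.0 (concave region starts)
print(round(cubic_window(K, 100), 6))      # → 100.0 (plateau at Wmax)
print(round(cubic_window(2 * K, 100), 6))  # → 130.0 (convex max probing)
```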

33
Q

What kinds of web traffic benefit most from utilizing TCP Fast Open?

A

Short lived TCP connections (small data sizes) on links with large propagation delays.

34
Q

What is the difference between RED and CoDel in terms of dropping packets?

A

RED determines whether to drop a packet statistically based on how close to full the buffer is.

CoDel calculates the queuing delay of packets that it forwards and drops packets if the queuing delay is too long.

By dropping packets early, senders are made to reduce their sending rates at the first signs of congestion problems, rather than waiting for buffers to fill.
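
RED's statistical drop decision can be sketched as a piecewise-linear probability. The thresholds and max_p below are illustrative, and real RED applies this to an exponentially weighted average of the queue length, not the instantaneous one:

```python
def red_drop_prob(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """RED-style drop probability: never drop below min_th, always drop
    at or above max_th, rise linearly up to max_p in between."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

print(red_drop_prob(4), red_drop_prob(10), red_drop_prob(20))
# → 0.0 0.05 1.0
```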

35
Q

Why do short-lived flows (< 100 packets) not affect the finding that small buffer sizes are sufficient to maintain link utilization as the number of long-lived flows increases?

A

Even when the majority of flows are short-lived, the flow length distribution remains dominated by long-lived flows, meaning the majority of packets on the link at any given time belong to long-lived flows.

The buffer size required for short-lived flows depends on the actual load on the link and the length of the flows, not on the number of flows or propagation delays. This means that roughly the same amount of buffering required for desynchronized long-lived flows will also be sufficient for short-lived flows.

36
Q

What is a Standing Queue?

A

A mismatch between the sender's congestion window and the path's bandwidth-delay product (bottleneck link speed × RTT) results in a certain number of packets consistently occupying the buffer until the flow completes; this is referred to as the Standing Queue.

Standing queues are not congestion, because they result from this window/BDP mismatch rather than from competing traffic. A standing queue can develop in single-flow environments, and under usage limits that would eliminate actual congestion.
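
The size of a standing queue falls out of simple arithmetic: whatever part of the congestion window exceeds the bandwidth-delay product sits in the buffer. A sketch with purely illustrative numbers:

```python
# Hypothetical path: 10 Mbit/s bottleneck, 50 ms RTT, 1250-byte packets.
link_rate = 10e6 / 8          # bytes per second
rtt = 0.050                   # seconds
packet_size = 1250            # bytes

bdp_packets = link_rate * rtt / packet_size  # packets the path itself can hold
cwnd_packets = 80                            # sender's window (hypothetical)
standing_queue = cwnd_packets - bdp_packets  # packets parked in the buffer

print(bdp_packets, standing_queue)  # → 50.0 30.0
```

With a window of 80 packets against a 50-packet BDP, 30 packets occupy the buffer for the life of the flow even though nothing else is competing for the link.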

37
Q

How does CoDel work?

A

CoDel assumes that a standing queue of the target size is acceptable, and that at least one maximum transmission unit (MTU) worth of data must be in the buffer before it prevents packets from entering the queue (by dropping them). CoDel monitors the minimum queuing delay experienced by admitted packets as they traverse the queue (by adding a timestamp upon arrival). If this metric exceeds the target value for at least one set interval, then packets are dropped according to a control law until the queuing delay falls below the target, or the data in the buffer drops below one MTU.

Dropping a flow’s packet triggers a congestion window reduction by the TCP sender, which helps eliminate buffer bloat.
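
The key idea (reacting to a persistent minimum delay, not a momentary one) can be sketched as follows; the 5 ms target is CoDel's usual default, assumed here, and the real algorithm's control law and interval bookkeeping are omitted:

```python
TARGET = 0.005  # 5 ms acceptable standing-queue delay (assumed default)

def codel_should_drop(sojourn_times):
    """Toy CoDel check over one interval's worth of per-packet queue
    delays: drop only if even the *minimum* delay stayed above the
    target, i.e. a standing queue, not a transient burst."""
    return min(sojourn_times) > TARGET

print(codel_should_drop([0.030, 0.002, 0.040]))  # → False (burst drained)
print(codel_should_drop([0.030, 0.012, 0.040]))  # → True  (standing queue)
```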

38
Q

What HTTP method could you use to find out if a page is still fresh and can be used (from the cache) or needs to be refreshed?

A

HEAD method because you don’t have to send all the data for the document and can grab the Last-Modified field in the response header to find out if it’s been changed

39
Q

Data can be classified as what type of source?

A

bursty, periodic, regular

40
Q

Audio can be classified as what type of source?

A

continuous, periodic

41
Q

Video can be classified as what type of source?

A

continuous, bursty (compression), periodic