Test 2 Key Concepts Flashcards
Congestion Control
TCP uses a congestion window on the sender side to perform congestion control. The congestion window indicates the maximum amount of data that can be outstanding (sent but not yet acknowledged) on a connection. TCP detects congestion when it fails to receive an acknowledgement for a packet within the estimated timeout.
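A minimal sketch of the idea (hypothetical names like cwnd and send_segment, not a real TCP stack): the window caps how many bytes may be unacknowledged at once.

```python
# Hypothetical sender loop: cwnd caps bytes in flight (sent but unacked).
MSS = 1460          # bytes per segment
cwnd = 10 * MSS     # congestion window
in_flight = 0       # unacknowledged bytes

def send_segment():
    """Stub for transmitting one MSS-sized segment."""
    global in_flight
    in_flight += MSS

# Send only while the window permits; then the sender must wait for ACKs.
while in_flight + MSS <= cwnd:
    send_segment()
print(in_flight // MSS)  # 10 segments outstanding
```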
Congestion Control Goals
Use network resources efficiently
Preserve fair allocation of resources
Avoid congestion collapse
Congestion Control Approaches
End-to-end congestion control (hosts infer congestion from loss and delay; the network provides no explicit feedback)
Network-assisted congestion control (routers provide explicit feedback; see below)
Fairness
Fairness is how bandwidth is allocated among different flows. Two common definitions of fair are that all flows get equal throughput, or that all flows get throughput proportional to their demand (i.e., how much they want to send).
Efficiency
Efficiency is how much of the available bandwidth is used, i.e., efficient congestion control
leaves little or no bandwidth wasted. (Some definitions of efficiency may refer specifically to
bandwidth used to do “productive work”, thus excluding overhead traffic.)
Additive Increase
Additive increase raises the throughput until it equals the bandwidth, at which point a packet loss occurs and triggers multiplicative decrease: throughput immediately drops to ½ the bandwidth. Additive increase then resumes, raising throughput linearly until it reaches the total bandwidth again. Thus the average throughput is the average of ½ bandwidth and 1x bandwidth, i.e., ¾ of the bandwidth.
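A toy simulation (assumed parameters, not a real TCP trace) confirms the ¾ average:

```python
# Toy AIMD sawtooth: rate climbs linearly to the bandwidth, then halves.
bandwidth = 100.0
rate = bandwidth / 2
samples = []
for _ in range(10_000):
    samples.append(rate)
    rate += 0.01 * bandwidth      # additive increase each RTT
    if rate >= bandwidth:         # loss at saturation
        rate = bandwidth / 2      # multiplicative decrease
print(sum(samples) / len(samples) / bandwidth)  # ~0.75
```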
TCP AIMD
Additive Increase Multiplicative Decrease (AIMD)
A graph of rate over time shows the TCP sawtooth: TCP increases its rate additively until it reaches the saturation point, sees a packet loss, and cuts its sending rate in half.
Number of packets sent per packet loss is the area of one sawtooth triangle: Wm^2/8.
Loss rate is therefore p = 8/Wm^2.
Throughput = 3/4*Wm/RTT
Since Wm = sqrt(8/p), throughput is inversely proportional to RTT and to the square root of the loss rate.
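A quick check of these relations with assumed numbers (window in packets, RTT in seconds):

```python
import math

Wm, RTT = 80, 0.1            # assumed: max window 80 packets, RTT 100 ms
pkts_per_loss = Wm**2 / 8    # area of one sawtooth triangle
p = 1 / pkts_per_loss        # loss rate
thr = 0.75 * Wm / RTT        # packets per second

# Same throughput via the loss rate: Wm = sqrt(8/p),
# so thr = (3/4)*sqrt(8/p)/RTT, i.e. proportional to 1/(RTT*sqrt(p)).
thr_from_p = 0.75 * math.sqrt(8 / p) / RTT
print(thr, thr_from_p)       # both 600.0 packets/sec
```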
TCP incast
The incast problem occurs when:
- collective communication (i.e., many-to-one or many-to-many patterns) occurs on high fan-in switches
- many small packets arrive at the switch at the same time, causing some of the packets to be lost
- the network has low latency, so the retransmission timeout is much longer than the round-trip time of the network
Consequently, large delays occur in which the system is simply waiting for timeouts to expire. This slows the whole application, since hearing from all the senders in collective communication is usually necessary before the application can proceed.
TCP fit for streaming applications
Audio/video can tolerate some loss, but not delay or variability in delay.
So TCP is not a good fit for congestion control in streaming audio or streaming video:
- TCP retransmits lost packets, which is not always useful for real-time media
- TCP slows down its sending rate after a packet loss
- Protocol overhead (a 20-byte TCP header and an ACK for every packet aren't needed)
Network Assisted Congestion Control
Routers provide explicit feedback about the rates at which end systems should send.
E.g., set a single bit indicating congestion (ECN, Explicit Congestion Notification).
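A sketch of the marking idea (a pure toy simulation, not a real router or the actual ECN wire format): once the queue passes a threshold, packets are marked instead of dropped.

```python
# Simulated router queue that marks packets (ECN-style) when congested.
QUEUE_LIMIT, MARK_THRESHOLD = 100, 60
queue = []

def enqueue(pkt):
    if len(queue) >= QUEUE_LIMIT:
        return "dropped"              # only when completely full
    if len(queue) >= MARK_THRESHOLD:
        pkt["ce"] = True              # set Congestion Experienced bit
    queue.append(pkt)
    return "marked" if pkt["ce"] else "queued"

statuses = [enqueue({"seq": i, "ce": False}) for i in range(70)]
print(statuses[-1])   # packets above the threshold come back "marked"
```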
Solution to TCP Incast
Barrier Synchronization
Client/application issues many parallel requests and can't progress without responses to all of them. Adding more servers overflows the switch buffer, causing severe packet loss and inducing throughput collapse.
Solution: use fine-grained TCP retransmission timeouts (microseconds) to reduce the wait time after a loss.
Could also reduce network load by having the client acknowledge only every other packet.
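Rough arithmetic with assumed values shows why fine-grained timeouts matter in a low-latency network:

```python
# Assumed: datacenter RTT ~100 microseconds, coarse TCP RTO ~200 ms
# (a common default minimum), fine-grained RTO ~1 ms.
rtt = 100e-6
coarse_rto = 200e-3
fine_rto = 1e-3

# One incast loss stalls the barrier for a whole RTO.
print(coarse_rto / rtt)   # 2000.0 -> a single timeout costs ~2000 RTTs
print(fine_rto / rtt)     # 10.0   -> only ~10 RTTs
```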
Media Streaming Challenges
Large volume of data
Data volume varies over time
Low tolerance for delay variation
Low tolerance for delay, period (but some loss acceptable)
UDP (User Datagram Protocol)
UDP is a better fit for streaming video and streaming audio:
No automatic retransmission
No sending rate adaptation
Smaller header
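A minimal sender using Python's standard socket API illustrates all three points: nothing is retransmitted, and nothing slows the loop down on loss (the address is illustrative).

```python
import socket

# Fire-and-forget datagrams: no ACKs, no retransmission, no rate control.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(5):
    payload = seq.to_bytes(4, "big") + b"frame-data"
    sock.sendto(payload, ("127.0.0.1", 5004))   # 8-byte UDP header
sock.close()
```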
Leaky bucket
The leaky bucket takes data and collects it up to a maximum capacity. Data is released from the bucket only at a set rate and packet size. When the bucket runs out of data, the leaking stops. If incoming data would overfill the bucket, the packet is considered non-conformant and is not added to the bucket. Data is added to the bucket as space becomes available for conforming packets.
- Smooths out traffic by releasing packets only at the fixed drain rate. Does not permit burstiness.
- Discards packets that arrive when the bucket is full.
- Application: Traffic shaping or traffic policing.
Traffic arrives into a bucket of size Beta and drains from the bucket at a rate Rho.
Rho controls the average rate: data can arrive faster or slower, but cannot drain faster than Rho.
So the maximum average rate at which traffic can be sent is the smooth rate Rho.
The size of the bucket controls the maximum burst size: even though the average rate cannot exceed Rho, the sender can at times exceed it, as long as the total size of the burst does not overflow the bucket.
The leaky bucket thus allows flows to burst periodically, while the regulator ensures the average rate does not exceed the drain rate of the bucket.
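A small simulation sketch of this behavior (parameter values are assumed):

```python
# Leaky bucket: arrivals fill the bucket (up to beta); output drains at rho.
beta, rho = 10_000, 1_000   # bucket size (bytes), drain rate (bytes/tick)
level, sent = 0, []

def arrive(nbytes):
    """Admit a conformant packet; discard one that would overflow."""
    global level
    if level + nbytes > beta:
        return False
    level += nbytes
    return True

def tick():
    """Drain at most rho bytes per interval, so output stays smooth."""
    global level
    out = min(level, rho)
    level -= out
    sent.append(out)

arrive(8_000)               # a burst fills most of the bucket...
for _ in range(10):
    tick()                  # ...but leaves at only 1,000 bytes per tick
print(sent)                 # [1000]*8 + [0, 0]
```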
Token Bucket
The token bucket traffic shaper handles bursty traffic patterns while still ensuring the flow does not exceed some average rate.
- The token bucket smooths traffic too, but permits burstiness, bounded by the number of tokens accumulated in the bucket.
- Discards tokens when the bucket is full, but never discards packets (infinite queue).
- Application: network traffic shaping or rate limiting.
Rho is the rate at which tokens are added to the bucket, so it should match the average bit rate.
Beta determines how large and how long a burst is allowed.
Tokens arrive in the bucket at a rate Rho, and Beta is again the capacity of the bucket. Traffic arrives at an average rate Lambda average and a peak rate Lambda peak. Traffic can be sent by the regulator as long as there are tokens in the bucket.
Difference from the leaky bucket: when a packet of size b arrives and the bucket holds at least b tokens, the packet is sent immediately and b tokens are removed. If the bucket is empty, the packet must wait until b tokens arrive. If the bucket is partially full, the packet is sent only if at least b tokens are available; otherwise it waits.
Limitation: in any traffic interval of length T, the flow can send Beta + T*Rho worth of data. If the network tries to police flows by measuring traffic over intervals of length T, a flow can cheat by sending this amount in each interval: over 2T it consumes 2*(Beta + T*Rho), which is greater than the Beta + 2T*Rho it is supposed to consume.
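A sketch of a token-bucket regulator under these definitions (illustrative, not a production shaper):

```python
# Token bucket: tokens accrue at rho up to capacity beta; a packet of
# size b is sent immediately iff at least b tokens are available.
rho, beta = 1_000, 5_000
tokens = beta               # start full: permits an initial burst

def tick():
    global tokens
    tokens = min(beta, tokens + rho)   # overflow tokens are discarded

def try_send(b):
    """Send a b-byte packet if enough tokens; otherwise caller waits."""
    global tokens
    if tokens >= b:
        tokens -= b
        return True
    return False

print(try_send(4_000))   # True: burst allowed, 1,000 tokens remain
print(try_send(4_000))   # False: must wait for tokens to accumulate
for _ in range(3):
    tick()               # +3,000 tokens
print(try_send(4_000))   # True
```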
Difference in Token Bucket and Leaky Bucket
Token Bucket
- permits burstiness, but bounds it: in any interval T, data sent < Beta + T*Rho (Beta = max tokens that can accumulate in the bucket); the long-term rate is always less than Rho
- no discard or priority policies
Leaky Bucket
- smooths bursty traffic
- supports priority policies
Both are easy to implement, but the token bucket is more flexible since it has an additional parameter to configure burst size.
Power Boost
PowerBoost allows a subscriber to send at a higher rate for a brief time.
It targets spare capacity in the network for use by subscribers who do not put a sustained load on the network.
Two types:
- Capped: the rate the user can achieve during the burst window may not exceed a particular rate. To cap, apply a second token bucket with another value of Rho to limit the peak sending rate for PowerBoost-eligible packets to Rho C.
- Uncapped: configuration is simple; the area above the average rate and below the PowerBoost rate is the PowerBoost bucket size. The maximum sustained traffic rate is Rho.
Power boost: how long can a sender send at a rate r that exceeds the sustained rate?
Sending rate r > Rsus
PowerBoost bucket size Beta
Beta = d*(r - Rsus)
d = Beta/(r - Rsus)
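Plugging assumed numbers into d = Beta/(r - Rsus):

```python
# Assumed values: how long can PowerBoost sustain the higher rate?
beta = 10e6            # PowerBoost bucket size: 10 MB
r = 20e6 / 8           # sending rate: 20 Mbps, in bytes/sec
r_sus = 10e6 / 8       # sustained rate: 10 Mbps, in bytes/sec

d = beta / (r - r_sus)
print(d)               # 8.0 seconds of boosted sending
```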
Power boost effect on latency
Even though PowerBoost lets users send at a higher rate, they can still experience high latency and loss while sending at that rate.
Reason: the access link cannot support the higher rate, so buffers fill up and introduce delays, even though there is no packet loss.
Solution: the sender shapes its rate so it never exceeds the sustained rate.
Buffer Bloat
If the buffer can absorb a higher rate, it fills with packets but still drains only at the sustained rate.
Even though the sender can send at a higher rate for a brief period, packets queue up in the buffer, so they see higher delays than if they had arrived at an empty queue and been delivered immediately.
delay = amount of data in buffer / rate at which the buffer drains (worked example below)
Ruins performance for voice and video.
Shows up in home routers, home APs, hosts, and switches/routers.
Senders send at increasingly faster rates until they see a loss, but a large buffer keeps filling because it drains more slowly than it fills, so no packet loss appears for a long time.
Solution:
- smaller buffers, but this is a tall order
- shape traffic such that the rate of traffic coming into the access link never exceeds the uplink rate the ISP has provided; then the buffer will never fill. Shape traffic at the home router to prevent exceeding the uplink rate.
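A worked example of the delay formula above, with assumed numbers:

```python
# Assumed: a 1 MB modem buffer drained by a 1 Mbps uplink.
buffer_bytes = 1e6
drain_rate = 1e6 / 8           # 1 Mbps in bytes/sec

delay = buffer_bytes / drain_rate
print(delay)                   # 8.0 seconds of queueing delay when full
```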
Buffer Bloat Example
Example of an upload:
Plot RTT (latency) to a nearby server on the y-axis vs. time on the x-axis: modems show a huge increase in latency during uploads. The modem itself has a buffer, the ISP sits upstream of that buffer, and the access link drains the buffer at a fixed rate.
TCP senders in the home send until they see lost packets, but if the buffer is large, they won't see losses until it is full.
Senders therefore continue to send at increasingly faster rates.
As a result, packets arriving at the buffer see increasing delays, and senders keep speeding up because, without a loss, there is no signal to slow down.
Hypertext Transfer Protocol (HTTP)
- application layer protocol to transfer web content
- protocol browser uses to request webpages
- protocol to return objects to browser
- layered on top of a byte-stream protocol like TCP
HTTP Requests
Request line
- indicates method of request (GET retrieves a resource; POST sends data to the server; HEAD returns only the headers of a GET response)
- includes URL
- includes version number
Optional headers
- Referer: what caused the page to be requested
- User-Agent: the client software
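A sketch of such a request using Python's standard http.client (the host and header values are illustrative):

```python
import http.client

# Issue a GET showing the request line plus optional headers.
conn = http.client.HTTPConnection("example.com", 80)
conn.request(
    "GET", "/index.html",       # method + URL; version is HTTP/1.1
    headers={
        "Referer": "http://example.com/home",  # what caused the request
        "User-Agent": "flashcard-demo/1.0",    # client software
    },
)
resp = conn.getresponse()
print(resp.status, resp.reason)  # e.g., 200 OK
conn.close()
# On the wire the request line reads: GET /index.html HTTP/1.1
```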