Key Concepts Flashcards
TCP Incast Solutions (2)
- Fine-grained TCP timeouts (microseconds)
- Have client only acknowledge every other packet
TCP Incast Causes (3)
- Collective communication (e.g., many-to-one or many-to-many patterns) takes place through switches with high fan-in.
- Many small packets arrive at the switch at the same time, overflowing its buffers and causing some packets to be dropped.
- The last necessary factor is a low-latency network, where the TCP retransmission timeout (RTO) is much larger than the network round-trip time (RTT), so senders sit idle waiting for timeouts.
Congestion Control Goals (3)
- Efficiency: Use network resources efficiently
- Fairness: Preserve fair allocation of resources
- Congestion Collapse: Avoid congestion collapse
UDP Traits (4)
- Ideal for streaming video/audio
- No automatic retransmission of packets
- No sending rate adaptation
- Smaller header size
Token Bucket differences (3)
- Permits burstiness, but bounds it
- Discards tokens when bucket is full, but never discards packets (infinite queue).
- More flexible (configurable burst size)
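A minimal sketch of a token-bucket shaper matching the properties above (bounded bursts, tokens discarded when full, packets queued rather than dropped); the class name, units, and queue structure are illustrative assumptions, not from the source.

```python
import time
from collections import deque

class TokenBucketShaper:
    """Token bucket sketch: bounded bursts (Beta), long-term rate Rho, no packet drops."""

    def __init__(self, rho, beta):
        self.rho = rho                # token fill rate (bytes per second)
        self.beta = beta              # bucket capacity (bytes) = maximum burst size
        self.tokens = beta            # start with a full bucket
        self.last = time.monotonic()
        self.queue = deque()          # excess packets wait here; they are never discarded

    def _add_tokens(self):
        now = time.monotonic()
        # Tokens that would overflow the bucket are discarded; packets never are.
        self.tokens = min(self.beta, self.tokens + self.rho * (now - self.last))
        self.last = now

    def send(self, packet_size):
        """Queue a packet, then release as many queued packets as the tokens allow."""
        self.queue.append(packet_size)
        self._add_tokens()
        released = []
        while self.queue and self.tokens >= self.queue[0]:
            size = self.queue.popleft()
            self.tokens -= size
            released.append(size)
        return released               # packets released now; the rest stay queued

# e.g. TokenBucketShaper(rho=125_000, beta=500_000) shapes to ~1 Mbps with 500 kB bursts.
```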
Leaky Bucket differences (2)
- Smooths bursty traffic
- Priority policies
PowerBoost: How long can a sender send at a rate r that exceeds the sustained rate?
Sending rate r > Rsustained
PowerBoost bucket size: Beta
Beta = d(r - Rsustained)
d = Beta / (r - Rsustained)
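A small worked example of the duration formula; the rates and bucket size are made-up numbers chosen only to exercise d = Beta / (r - Rsustained).

```python
# Hypothetical values: bucket size in megabits, rates in Mbps.
beta = 80.0            # PowerBoost bucket size (80 Mbit, i.e. 10 MB)
r = 20.0               # boosted sending rate
r_sustained = 12.0     # sustained rate

d = beta / (r - r_sustained)
print(d)               # 80 / (20 - 12) = 10 seconds at the boosted rate
```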
PowerBoost description
PowerBoost allows a subscriber to send at a higher rate for a brief period of time.
It targets spare capacity in the network for use by subscribers who do not put a sustained load on the network.
PowerBoost types (2)
- Capped: the rate a user can achieve during the burst window is set so that it does not exceed a particular value. To cap, apply a second token bucket with another token rate, Rho_C, which limits the peak sending rate for PowerBoost-eligible packets to Rho_C.
- Uncapped: configuration is simple; the area above the average rate and below the PowerBoost rate is the PowerBoost bucket size, and the maximum sustained traffic rate is Rho.
Leaky bucket description
Arriving data collects in a bucket up to a maximum capacity, and data drains (leaks) from the bucket at a set rate. When the bucket runs out of data, the leaking stops. If incoming data would overfill the bucket, the packet is considered non-conformant and is not added to the bucket; conforming packets are added as space becomes available.
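A minimal sketch of the behaviour described above, with illustrative names and units assumed here (capacity Beta in bytes, drain rate Rho in bytes per second).

```python
import time

class LeakyBucketPolicer:
    """Leaky bucket sketch: contents drain at a constant rate; overflow is non-conformant."""

    def __init__(self, rho, beta):
        self.rho = rho                # drain rate (bytes per second)
        self.beta = beta              # bucket capacity (bytes)
        self.level = 0.0              # bytes currently held in the bucket
        self.last = time.monotonic()

    def _drain(self):
        now = time.monotonic()
        # When the bucket runs out of data, the leaking simply stops at zero.
        self.level = max(0.0, self.level - self.rho * (now - self.last))
        self.last = now

    def offer(self, packet_size):
        """Return True if the packet conforms (fits in the bucket), False if it is dropped."""
        self._drain()
        if self.level + packet_size > self.beta:
            return False              # would overfill the bucket: non-conformant, discarded
        self.level += packet_size     # conforming data is added as space becomes available
        return True
```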
Leaky bucket: Application
Traffic shaping or traffic policing.
Leaky bucket: Does it discard packets?
Yes. It discards non-conformant packets that would overflow the bucket (there is no queue for excess traffic).
Leaky bucket: Effect on traffic
Smooths out traffic by releasing packets from the bucket at a constant rate (Rho).
Leaky bucket: traffic arrives in a bucket of size __ and drains from bucket at a rate of __.
Beta; Rho
Leaky bucket: __ controls average rate. Data can arrive faster or slower but cannot drain at a rate faster than this.
Rho
Buffer bloat description
Big buffers absorb excess packets instead of dropping them, so the sender sees no loss signal and keeps increasing its send rate, which fills the buffers further and causes ever greater queuing delays.
HTTP properties (4)
- Application layer protocol to transfer web content
- Protocol browser uses to request webpages
- Protocol to return objects to browser
- Layered on top of byte stream protocol like TCP
HTTP Request Line Parts (3)
- Method (GET, POST, etc)
- URL
- HTTP Version
HTTP Optional Headers (2)
- Referer: What caused the page to be requested
- User Agent: Client-software/browser
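A hedged example of sending a request line plus these optional headers with Python's standard http.client; the host, path, and header values are placeholders.

```python
import http.client

# Request line on the wire: GET /index.html HTTP/1.1
conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/index.html", headers={
    "Referer": "http://example.com/links.html",  # what caused this page to be requested
    "User-Agent": "ExampleBrowser/1.0",          # client software / browser
})
resp = conn.getresponse()
print(resp.status, resp.reason)                  # e.g. 200 OK (or 404 for a missing page)
conn.close()
```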
HTTP Response Headers (9)
- HTTP Version
- Response code (200, 404, etc)
- Server
- Location
- Allow
- Content-Encoding
- Content-Length
- Expires
- Last-Modified
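A sketch of reading the status line and some of these response headers with http.client; which headers actually appear depends on the server, and example.com is a placeholder.

```python
import http.client

conn = http.client.HTTPConnection("example.com", 80)
conn.request("GET", "/")
resp = conn.getresponse()

# Status line: HTTP version plus the response code, e.g. 200 OK
print(resp.version, resp.status, resp.reason)

# A few of the optional response headers; any of them may be absent.
for name in ("Server", "Location", "Allow", "Content-Encoding",
             "Content-Length", "Expires", "Last-Modified"):
    print(name, "=", resp.getheader(name))
conn.close()
```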
PowerBoost: Reason users still experience high latency/loss during the boost
The access link can't support the higher rate, so buffers fill up and introduce delays.
PowerBoost: Latency solution
The sender should shape its traffic so that its sending rate never exceeds the sustained rate.
Network Assisted Congestion Control properties (2)
- Routers provide explicit feedback about the rates that end systems should be sending.
- Routers set a single bit indicating congestion (Explicit Congestion Notification, ECN), which the receiver echoes back to the sender.
Buffer bloat solutions (2)
- Smaller buffers (but this is a tall order)
- Shape traffic such that the rate of traffic coming into the access link never exceeds ISP uplink rate
HTTP Head method
Requests a document just like the GET method minus the data (headers only).
Faster; allows checking the Last-Modified header to determine whether a cached copy is still valid.
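A sketch of that cache-validity check using a HEAD request; the URL and the cached timestamp are hypothetical.

```python
import http.client
from email.utils import parsedate_to_datetime

conn = http.client.HTTPConnection("example.com", 80)
conn.request("HEAD", "/index.html")   # same request as GET, but only headers come back
resp = conn.getresponse()

last_modified = resp.getheader("Last-Modified")
# Hypothetical timestamp remembered from when the document was cached.
cached_at = parsedate_to_datetime("Mon, 01 Jan 2024 00:00:00 GMT")

if last_modified and parsedate_to_datetime(last_modified) <= cached_at:
    print("Cached copy still valid; skip the full GET.")
else:
    print("Cache is stale (or no Last-Modified header); re-fetch with GET.")
conn.close()
```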
HTTP Response: 200
OK/Success
HTTP Response: 100
Informational
Additive Increase
Increase the sending rate linearly until it reaches the available bandwidth and packet loss occurs.
AIMD: Average bandwidth
3/4 of the peak: the rate oscillates between the full bandwidth (just before loss) and half of it (the low point after multiplicative decrease), so it averages 3/4.
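A tiny simulation sketch of one AIMD sawtooth cycle showing where the 3/4 figure comes from; the peak window value is arbitrary.

```python
# One AIMD cycle: the window climbs linearly from W/2 back up to W, then a loss halves it.
W = 100                                   # arbitrary peak window (packets) just before loss
windows = list(range(W // 2, W + 1))      # additive increase: +1 per RTT from W/2 up to W
average = sum(windows) / len(windows)
print(average / W)                        # ~0.75, i.e. the average rate is 3/4 of the peak
```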
TCP Congestion Control Window
The congestion window indicates the maximum amount of data that can be sent out on a connection without being acknowledged.
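A minimal sketch of how a sender could enforce that limit; the function, variable names, and byte counts are illustrative assumptions, not real TCP code.

```python
def can_send(next_seq, last_acked, cwnd, rwnd, segment_size):
    """Allow a new segment only if the unacknowledged ('in flight') bytes stay
    within both the congestion window and the receiver's advertised window."""
    bytes_in_flight = next_seq - last_acked
    return bytes_in_flight + segment_size <= min(cwnd, rwnd)

# Hypothetical numbers: 7,000 B already unacknowledged, 10,000 B congestion window.
print(can_send(next_seq=17_000, last_acked=10_000,
               cwnd=10_000, rwnd=65_535, segment_size=1_460))   # True: room for one more segment
```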
HTTP Response: 300
Redirect
HTTP Response: 400
Error (client), e.g., 404 Not Found
HTTP Response: 500
Error (server)
Congestion Control Approaches (2)
- End-to-end
- Network-assisted