Key Concepts Flashcards

1
Q

TCP Incast Solutions (2)

A
  • Fine-grained TCP retransmission timeouts (microsecond granularity)
  • Have the client acknowledge only every other packet

2
Q

TCP Incast Causes (3)

A
  • Collective communication (i.e., many-to-one or many-to-many patterns) occurs on high fan-in switches.
  • This results in many small packets arriving at the switch at the same time, thus causing some of the packets to be lost.
  • The last necessary factor is a low-latency network: the retransmission timeout is then much longer than the network's round-trip time, so a timeout idles the link for many RTTs.
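The RTO/RTT mismatch in the third cause can be made concrete with a quick calculation. The numbers below are illustrative assumptions (a typical datacenter RTT and TCP's classic 200 ms minimum retransmission timeout), not figures from the cards:

```python
# Illustrative values, not from the cards: datacenter RTT vs. TCP's
# classic minimum retransmission timeout.
RTT_S = 100e-6        # 100 microseconds round-trip time
MIN_RTO_S = 200e-3    # 200 ms default minimum retransmission timeout
FINE_RTO_S = 200e-6   # hypothetical microsecond-granularity RTO (card 1)

def idle_rtts(rto_s, rtt_s):
    """How many round trips the link sits idle while waiting out one timeout."""
    return rto_s / rtt_s

coarse = idle_rtts(MIN_RTO_S, RTT_S)   # thousands of wasted RTTs per timeout
fine = idle_rtts(FINE_RTO_S, RTT_S)    # only a couple with fine-grained timers
```

A single coarse timeout wastes ~2000 RTTs of link time here, which is why fine-grained timers are listed as an incast solution.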
3
Q

Congestion Control Goals (3)

A
  • Efficiency: Use network resources efficiently
  • Fairness: Preserve fair allocation of resources
  • Congestion Collapse: Avoid congestion collapse
4
Q

UDP Traits (4)

A
  • Ideal for streaming video/audio
  • No automatic retransmission of packets
  • No sending rate adaptation
  • Smaller header size
5
Q

Token Bucket differences (3)

A
  • Permits burstiness, but bounds it
  • Discards tokens when bucket is full, but never discards packets (infinite queue).
  • More flexible (configurable burst size)
6
Q

Leaky Bucket differences (2)

A
  • Smooths bursty traffic
  • Priority policies

7
Q

Power boost: How long can sender send at the rate r that exceeds the sustained rate?

A

Sending rate: r > Rsustained
PowerBoost bucket size: Beta

Beta = d * (r - Rsus), so the boost duration is

d = Beta / (r - Rsus)

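The d = Beta / (r - Rsus) relation can be sanity-checked numerically. The plan values below are made up for illustration:

```python
def powerboost_duration(beta_bits, r_bps, r_sustained_bps):
    """Boost duration d from Beta = d * (r - Rsus), i.e. d = Beta / (r - Rsus)."""
    return beta_bits / (r_bps - r_sustained_bps)

# Hypothetical plan: 10 Mbit PowerBoost bucket, 20 Mbps boosted rate,
# 10 Mbps sustained rate.
d = powerboost_duration(beta_bits=10e6, r_bps=20e6, r_sustained_bps=10e6)
# -> 1.0 second of sending above the sustained rate
```

Doubling the excess rate (r - Rsus) halves the boost duration, since the same bucket drains twice as fast.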
8
Q

Powerboost description

A

PowerBoost allows a subscriber to send at a higher rate for a brief time.

It targets spare capacity in the network for use by subscribers who do not put a sustained load on the network.

9
Q

Powerboost types (2)

A
  • Capped: the rate a user can achieve during the burst window is limited so it does not exceed a particular rate. To cap, apply a second token bucket with another token rate Rho_c, limiting the peak sending rate of PowerBoost-eligible packets to Rho_c.
  • Uncapped: configuration is simple. The area above the average rate and below the PowerBoost rate is the PowerBoost bucket size; the maximum sustained traffic rate is Rho.
10
Q

Leaky bucket description

A

Takes data and collects it up to a maximum capacity. Data is released from the bucket only at a set rate and packet size. When the bucket runs out of data, the leaking stops. If incoming data would overfill the bucket, the packet is considered non-conformant and is not added to the bucket. Data is added to the bucket as space becomes available for conforming packets.

11
Q

Leaky bucket: Application

A

Traffic shaping or traffic policing.

12
Q

Leaky bucket: Does it discard packets?

A

Yes. It discards packets for which no tokens are available (no concept of queue)

13
Q

Leaky bucket: Effect on traffic

A

Smooths out traffic by passing packets only when there is a token.

14
Q

Leaky bucket: traffic arrives in a bucket of size __ and drains from bucket at a rate of __.

A

Beta; Rho

15
Q

Leaky bucket: __ controls average rate. Data can arrive faster or slower but cannot drain at a rate faster than this.

A

Rho

16
Q

Buffer bloat description

A

Large buffers fill up with packets. The sender doesn't notice congestion because packets are queued rather than dropped, so it keeps increasing its send rate, causing ever-greater delays.

17
Q

HTTP properties (4)

A
  • Application layer protocol to transfer web content
  • Protocol browser uses to request webpages
  • Protocol to return objects to browser
  • Layered on top of byte stream protocol like TCP
18
Q

HTTP Request Line Parts (3)

A
  • Method (GET, POST, etc)
  • URL
  • HTTP Version
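The three request-line parts can be illustrated with a tiny parser (`parse_request_line` is a hypothetical helper written for this sketch, not part of any library):

```python
def parse_request_line(line):
    """Split an HTTP request line into its three parts:
    method, URL, and HTTP version."""
    method, url, version = line.split(" ", 2)
    return {"method": method, "url": url, "version": version}

req = parse_request_line("GET /index.html HTTP/1.1")
# -> {"method": "GET", "url": "/index.html", "version": "HTTP/1.1"}
```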
19
Q

HTTP Optional Headers (2)

A
  • Referer: what caused the page to be requested
  • User-Agent: client software/browser

20
Q

HTTP Response Headers (9)

A
  • HTTP Version
  • Response code (200, 404, etc)
  • Server
  • Location
  • Allow
  • Content Encoding
  • Content Length
  • Expires
  • Last-Modified
21
Q

Powerboost: Reason users still experience high latency/loss over duration

A

Access link can’t support the higher rate, so buffers fill up and introduce delays

22
Q

Powerboost: Latency solution

A

The sender should shape its sending rate so that it never exceeds the sustained rate

23
Q

Network Assisted Congestion Control properties (2)

A
  • Routers provide explicit feedback about the rates that end systems should be sending.
  • Routers may set a single bit indicating congestion (TCP ECN, Explicit Congestion Notification)
24
Q

Buffer bloat solutions (2)

A
  • Smaller buffers (but this is a tall order)
  • Shape traffic so that the rate of traffic entering the access link never exceeds the ISP uplink rate

25
HTTP Head method
Requests a document just like the GET method but without the body (headers only). Faster; allows checking the Last-Modified header to determine whether a cached copy is still valid.
26
HTTP Response: 200
OK/Success
27
HTTP Response: 100
Informational
28
Additive Increase
Increase the throughput linearly until it equals the bandwidth and packet loss occurs
29
AIMD: Average bandwidth
3/4 of peak: the average sits between the full window (peak) and 1/2 the window (the trough after multiplicative decrease).
30
TCP Congestion Control Window
The congestion window indicates the maximum amount of data that can be sent out on a connection without being acknowledged.
31
HTTP Response: 300
Redirect
32
HTTP Response: 400
Error (client) e.g. 404 not found
33
HTTP Response: 500
Error (server)
34
Congestion Control Approaches (2)
- End-to-end | - Network-assisted
35
Early HTTP v0.9/1.0
One request/response per TCP Connection
36
Early HTTP advantages (1)
- Simple to implement
37
Early HTTP disadvantages (3)
- TCP three-way handshake for every request | - TCP slow start for every new connection | - Servers must reserve resources for many connections that haven't timed out yet
38
Improvement on early HTTP inefficiency
Persistent connections
39
Persistent Connections
Multiple HTTP requests/responses are multiplexed on a single TCP connection
40
Web Content Distribution Networks (CDN)
Overlay network of web caches designed to deliver content to a client from optimal location, often through many geographically disparate servers.
41
CDNs aim to place cache as close to __ as possible.
Users
42
CDN owners (3)
- Content providers, e.g. Google | - Networks, e.g. AT&T | - Independent, e.g. Akamai
43
Non-network CDNs typically place servers in other __ or __
Autonomous systems; ISPs
44
CDN server selection criteria (3)
- Least loaded server - Lowest latency (most typical) - Any alive server
45
Token bucket application
Network traffic shaping or rate limiting
46
Token bucket: Rho
Rate of tokens being added to the bucket (should match average bit rate)
47
Token bucket: Beta
How large/long a burst is allowed
48
CDN content routing types (3)
- Routing systems - Application-based (e.g. HTTP redirect) - Naming system (e.g. DNS)
49
CDN content routing: Routing systems
Routing systems (e.g. anycast): give all replicas the same IP address, then let routing deliver clients to the closest replica
50
CDN content routing: Application-based
Requires the client to first go to the origin server to get a redirect, increasing latency (simple, but adds delay)
51
CDN content routing: Naming system
The client looks up a domain and the response contains the IP address of a nearby cache. Allows significant flexibility in directing different clients to different server replicas (fine-grained control and fast)
52
CDN relationship with ISPs
Symbiotic peering relationship.
53
Why CDNs like to peer with ISPs (3)
- Better throughput: no intermediate AS hops and lower network latency | - Redundancy: more vectors to deliver content, increasing reliability | - Burstiness: during large request events, connectivity to multiple networks where content is hosted lets the ISP spread traffic across multiple transit links, reducing the 95th-percentile utilization and lowering transit costs
54
Why ISPs like peering with CDNs (2)
- Closer content improves performance for customers | - Lower transit costs by avoiding traffic across costly links
55
BitTorrent
Peer-to-Peer CDN used for file sharing and distribution of large files
56
P2P advantages (2)
- Reduce congestion | - Prevent overload at network where content is hosted
57
BitTorrent publishing steps (4)
- A peer creates a torrent - Seeders create the initial copy (from a complete copy) - A client starts downloading pieces of the file from a seeder - Clients swap chunks of the file until each has the complete file
58
BitTorrent: Leechers
Clients with incomplete copies of the file
59
BitTorrent: Trackers
Allow peers to find each other and return random list of peers that leechers can use to swap parts of the file
60
BitTorrent: Freeloading
Client leaves network as soon as it finishes downloading a file
61
BitTorrent: Freeloading solution
Choking (tit-for-tat): temporary refusal to upload chunks to another peer. If you can't download from a peer, don't upload to it
62
BitTorrent: chunk swapping problem
If all clients receive the same chunks, no one has a complete copy and clients won't swap
63
BitTorrent: Rarest piece first
The client determines which pieces are rarest and downloads those first. It begins with a random piece, since rarity isn't a good basis when few peers are known.
64
BitTorrent: Tit-for-tat algorithm
A BitTorrent client sends data only to the top N peers that are sending to it, plus one peer that is optimistically unchoked. For example, with N=4 the client picks the 4 peers sending to it at the fastest rate and uploads to them in return, plus a temporary 5th peer that is optimistically unchoked.
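The tit-for-tat selection rule can be sketched in a few lines (`choose_unchoked` is a hypothetical helper for this card, not real BitTorrent client code):

```python
import random

def choose_unchoked(download_rates, n=4, rng=random):
    """Unchoke the n peers sending to us fastest, plus one random
    optimistic unchoke from the remaining peers (tit-for-tat)."""
    top = sorted(download_rates, key=download_rates.get, reverse=True)[:n]
    rest = [p for p in download_rates if p not in top]
    optimistic = [rng.choice(rest)] if rest else []
    return set(top), set(optimistic)

rates = {"a": 50, "b": 40, "c": 30, "d": 20, "e": 10, "f": 5}
top, opt = choose_unchoked(rates)
# top == {"a", "b", "c", "d"}; opt is one of {"e"} or {"f"}
```

The optimistic unchoke is what lets a new peer with nothing to offer bootstrap into the swap.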
65
CDN: Advantages of DNS redirection (3)
- Faster than HTTP redirection, which requires extra RTTs - More control over who gets redirected where (vs IP anycast) - Simple to implement (DNS works out of the box)
66
Distributed Hash Table (DHT): Main motivation
Scalable location of data in a large distributed system, implemented by e.g. the Chord protocol
67
DHT key problem
Lookup: hash table is distributed across the network
68
DHT advantages (3)
- Scalable - Provable correctness - Reasonably good performance
69
Consistent hashing
Keys and nodes map to same ID space.
70
Consistent hashing provides (2)
- Load balance: all nodes receive roughly same number of keys - Flexibility: When nodes join/leave the network, only a small fraction of keys need to be moved
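Both consistent-hashing properties can be demonstrated with a tiny ring (the ID-space size, node names, and key names below are arbitrary choices for the sketch):

```python
import hashlib

M = 2 ** 16  # small ID space for illustration (Chord uses 2^160)

def ident(name):
    """Hash keys and nodes into the same ID space (consistent hashing)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(ids, k):
    """Owner of key k: the first node ID clockwise at or after k."""
    ids = sorted(ids)
    return next((n for n in ids if n >= k), ids[0])

nodes = [ident(f"node{i}") for i in range(4)]
keys = [ident(f"key{i}") for i in range(100)]
before = {k: successor(nodes, k) for k in keys}
after = {k: successor(nodes + [ident("node4")], k) for k in keys}
moved = sum(before[k] != after[k] for k in keys)
# only keys now owned by the new node change hands; all others stay put
```

Every key that moves lands on the newly joined node, which is exactly the flexibility property: a join disturbs only the keys in one arc of the ring.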
71
TCP AIMD
Additive Increase Multiplicative Decrease (AIMD). A graph of rate over time shows the TCP sawtooth: TCP increases its rate additively until it reaches the saturation point, sees packet loss, and halves its sending rate.
72
AIMD throughput is inversely proportional to __ and the square root of the __.
RTT; Loss rate
73
AIMD loss rate
One loss per cycle of roughly Wm^2/8 packets, so the loss rate is about 8/Wm^2
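The AIMD cards above (the sawtooth and the 3/4 average) can be checked with a small simulation. This is an idealized model: one segment of additive increase per RTT, an instant halving at the peak, and no slow start:

```python
def aimd_sawtooth(w_max, cycles=1000):
    """Average congestion window of the idealized AIMD sawtooth.

    Each cycle the window climbs 1 segment per RTT from w_max/2 to w_max,
    then a loss halves it (multiplicative decrease)."""
    windows = []
    for _ in range(cycles):
        w = w_max // 2
        while w <= w_max:
            windows.append(w)
            w += 1  # additive increase: +1 segment per RTT
    return sum(windows) / len(windows)

avg = aimd_sawtooth(100)
# the average sits at 3/4 of the peak, between w_max and w_max/2
```

The average of 75 for a peak of 100 matches the 3/4 figure: halfway between the peak and the post-decrease trough.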
74
Efficiency
How much of the available bandwidth is used, i.e., efficient congestion control leaves little or no bandwidth wasted
75
Problems with TCP streaming (4)
- Audio/video can tolerate loss and delay but not variability in delay - TCP retransmits lost packets, which isn't always useful - TCP slows down rate after packet loss - Protocol overhead (TCP header of 20 bytes and ack not needed)
76
Methods for implementing consistent hashing (3)
- Every node knows the location of every other node - Each node only knows location of immediate successor - Finger tables
77
Consistent Hashing: Every node knows the location of every other node
Lookups are fast - O(1). Routing table must be large - O(n).
78
Consistent Hashing: Each node only knows location of immediate successor
Small table, size O(1). Requires O(n) lookups.
79
Finger tables
Every node knows the location of a small number of other nodes (its fingers), and the distance of the nodes it knows about increases exponentially around the ring.
80
Finger tables: lookup steps (3)
- Finger i points to the successor of n + 2^i - Find the predecessor of a particular ID and ask for that node's successor - Keep asking each node for its successor, moving around the ring until reaching the node whose successor's ID is bigger than the data ID
81
Finger tables: Requires _ hops, and _ messages per lookup. Size of finger table is _ per node
O(log n); O(log n); O(log n)
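The O(log n) hop count can be seen in an idealized sketch that assumes a node at every ID: with fingers at clockwise distances +2^i, greedy routing peels off one set bit of the remaining distance per hop, so it never needs more than log2(space) hops.

```python
M_BITS = 10  # assumed ID space of 2^10 for illustration

def hops_to_reach(distance, m=M_BITS):
    """Greedy finger routing: take the largest finger (+2^i) that doesn't
    overshoot the remaining clockwise distance. The hop count equals the
    number of set bits in the distance, at most m = log2(ID space)."""
    hops = 0
    for i in reversed(range(m)):
        if distance >= 2 ** i:
            distance -= 2 ** i  # this finger covers 2^i of the distance
            hops += 1
    return hops

worst = max(hops_to_reach(d) for d in range(2 ** M_BITS))
# worst case (all bits set) is exactly M_BITS hops, i.e. O(log n)
```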
82
Finger tables: What happens when a node joins a network
Initialize fingers of node and update fingers of existing nodes, transfer keys from successor to new node
83
Finger tables: What happens when a node leaves a network
Each node keeps track of its successor's fingers, so a predecessor can still reach the nodes corresponding to the failed/departed node's fingers
84
Given a DHT with finger tables of a constant size (e.g. 1), how many hops are required per lookup?
O(n) - each node only knows how to find the next one.
85
BIC-TCP
Approximates a cubic function through additive increase, binary search, and max probing.
86
BIC-TCP can be too aggressive (unfriendly) on networks with a short __ or low __.
RTT; speed
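The binary-search component of BIC-TCP can be sketched as follows. This is a simplified model: real BIC falls back to additive increase when the jump would be too large and probes past the old maximum (max probing) once it reaches it:

```python
def bic_window_trace(w_max, w_min, steps=6):
    """Binary-search window growth: after a loss, probe the midpoint between
    the last safe window (w_min) and the window where loss occurred (w_max),
    halving the remaining gap on each loss-free RTT."""
    w = w_min
    trace = [w]
    for _ in range(steps):
        w = (w + w_max) / 2  # jump halfway to the remembered maximum
        trace.append(w)
    return trace

trace = bic_window_trace(100, 50)
# gap to w_max halves each step: 50, 75, 87.5, 93.75, ...
```

The rapid early jumps followed by tiny steps near the old maximum are what make BIC aggressive on short-RTT or low-speed paths, as the card above notes.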