Key Concepts Flashcards

1
Q

TCP Incast Solutions (2)

A
  • Fine-grained TCP timeouts (microseconds)
  • Have the client acknowledge only every other packet

2
Q

TCP Incast Causes (3)

A
  • Collective communication (i.e., many-to-one or many-to-many patterns) occurs on high fan-in switches.
  • This results in many small packets arriving at the switch at the same time, thus causing some of the packets to be lost.
  • The last necessary factor is a low-latency network, which means the retransmission timeout is much longer than the round-trip time of the network.
3
Q

Congestion Control Goals (3)

A
  • Efficiency: Use network resources efficiently
  • Fairness: Preserve fair allocation of resources
  • Congestion Collapse: Avoid congestion collapse
4
Q

UDP Traits (4)

A
  • Ideal for streaming video/audio
  • No automatic retransmission of packets
  • No sending rate adaptation
  • Smaller header size
5
Q

Token Bucket differences (3)

A
  • Permits burstiness, but bounds it
  • Discards tokens when bucket is full, but never discards packets (infinite queue).
  • More flexible (configurable burst size)
6
Q

Leaky Bucket differences (2)

A
  • Smooths bursty traffic
  • Priority policies

7
Q

Powerboost: How long can a sender send at a rate r that exceeds the sustained rate?

A

Sending rate r > Rsustained
Powerboost bucket size: Beta

Beta = d(r - Rsustained)

so the duration is d = Beta/(r - Rsustained)
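A worked example makes the rearrangement concrete; the numbers below are illustrative assumptions, not values from the card:

```python
# Rearranging Beta = d * (r - Rsustained) for the boost duration d.
# All numbers below are assumed for illustration.
beta = 10e6          # Powerboost bucket size: 10 megabits
r = 20e6             # boosted sending rate: 20 Mbps
r_sustained = 10e6   # sustained rate: 10 Mbps

d = beta / (r - r_sustained)  # seconds the sender can exceed Rsustained
print(d)  # 1.0
```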

8
Q

Powerboost description

A

Powerboost allows a subscriber to send at a higher rate for a brief time.

It targets spare capacity in the network for use by subscribers who do not put a sustained load on the network.

9
Q

Powerboost types (2)

A
  • Capped: the rate a user can achieve during the burst window is set so as not to exceed a particular rate. To cap, apply a second token bucket with another token rate to limit the peak sending rate for Powerboost-eligible packets to Rho-C.
  • Uncapped: simple configuration. The area above the average rate and below the Powerboost rate is the Powerboost bucket size. The maximum sustained traffic rate is Rho.
10
Q

Leaky bucket description

A

Takes data and collects it up to a maximum capacity. Data is released from the bucket only at a set rate and packet size. When the bucket runs out of data, the leaking stops. If incoming data would overfill the bucket, the packet is considered non-conformant and is not added to the bucket; data is added to the bucket as space becomes available for conforming packets.
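The description above can be sketched as a small simulation; the capacity, drain rate, and packet sizes are arbitrary assumptions for illustration:

```python
class LeakyBucket:
    # Arrivals fill the bucket up to capacity; the bucket drains at a
    # fixed rate. A packet that would overfill the bucket is
    # non-conformant and is dropped.
    def __init__(self, capacity, drain_rate):
        self.capacity = capacity      # maximum bucket size (bytes)
        self.drain_rate = drain_rate  # bytes drained per tick
        self.level = 0                # bytes currently in the bucket

    def arrive(self, packet_size):
        # Add the packet only if it fits; otherwise it is non-conformant.
        if self.level + packet_size > self.capacity:
            return False  # dropped
        self.level += packet_size
        return True

    def tick(self):
        # Drain at the fixed rate; leaking stops when the bucket is empty.
        self.level = max(0, self.level - self.drain_rate)

bucket = LeakyBucket(capacity=1500, drain_rate=500)
accepted = [bucket.arrive(600) for _ in range(4)]  # 3rd/4th overfill
print(accepted)      # [True, True, False, False]
bucket.tick()        # one tick drains 500 bytes
print(bucket.level)  # 700
```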

11
Q

Leaky bucket: Application

A

Traffic shaping or traffic policing.

12
Q

Leaky bucket: Does it discard packets?

A

Yes. It discards packets that arrive when the bucket is full (there is no separate queue).

13
Q

Leaky bucket: Effect on traffic

A

Smooths out traffic by releasing packets only at the fixed drain rate.

14
Q

Leaky bucket: traffic arrives in a bucket of size __ and drains from bucket at a rate of __.

A

Beta; Rho

15
Q

Leaky bucket: __ controls average rate. Data can arrive faster or slower but cannot drain at a rate faster than this.

A

Rho

16
Q

Buffer bloat description

A

Big buffers fill up with packets. The sender doesn’t notice congestion because packets are queued rather than dropped, so it keeps increasing its send rate, causing ever-greater delays.

17
Q

HTTP properties (4)

A
  • Application layer protocol to transfer web content
  • Protocol browser uses to request webpages
  • Protocol to return objects to browser
  • Layered on top of byte stream protocol like TCP
18
Q

HTTP Request Line Parts (3)

A
  • Method (GET, POST, etc)
  • URL
  • HTTP Version
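As a quick illustration, the three parts are simply whitespace-separated; the request line below is a hypothetical example:

```python
# Hypothetical HTTP request line split into its three parts:
# method, URL, and HTTP version.
request_line = "GET /index.html HTTP/1.1"
method, url, version = request_line.split(" ")
print(method)   # GET
print(url)      # /index.html
print(version)  # HTTP/1.1
```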
19
Q

HTTP Optional Headers (2)

A
  • Referrer: What caused the page to be requested
  • User Agent: Client software/browser

20
Q

HTTP Response Headers (9)

A
  • HTTP Version
  • Response code (200, 404, etc)
  • Server
  • Location
  • Allow
  • Content Encoding
  • Content Length
  • Expires
  • Last-Modified
21
Q

Powerboost: Reason users still experience high latency/loss over duration

A

Access link can’t support the higher rate, so buffers fill up and introduce delays

22
Q

Powerboost: Latency solution

A

The sender’s shaped rate should never exceed the sustained rate.

23
Q

Network Assisted Congestion Control properties (2)

A
  • Routers provide explicit feedback about the rates at which end systems should be sending.
  • Routers may set a single bit indicating congestion (ECN, Explicit Congestion Notification, used by TCP).
24
Q

Buffer bloat solutions (2)

A
  • Smaller buffers (but this is a tall order)
  • Shape traffic such that the rate of traffic coming into the access link never exceeds the ISP uplink rate

25
Q

HTTP Head method

A

Requests a document just like the GET method, minus the data (headers only).

Faster: allows checking the Last-Modified header to determine whether a cached copy is still valid.

26
Q

HTTP Response: 200

A

OK/Success

27
Q

HTTP Response: 100

A

Information

28
Q

Additive Increase

A

Increase the throughput linearly until it equals the bandwidth and packet loss occurs

29
Q

AIMD: Average bandwidth

A

3/4 of the peak: the sending rate oscillates between 1x bandwidth (peak) and 1/2 (the low point after MD), so it averages 3/4.

30
Q

TCP Congestion Control Window

A

The congestion window indicates the maximum amount of data that can be sent out on a connection without being acknowledged.

31
Q

HTTP Response: 300

A

Redirect

32
Q

HTTP Response: 400

A

Error (client) e.g. 404 not found

33
Q

HTTP Response: 500

A

Error (server)

34
Q

Congestion Control Approaches (2)

A
  • End-to-end
  • Network-assisted

35
Q

Early HTTP v0.9/1.0

A

One request/response per TCP Connection

36
Q

Early HTTP advantages (1)

A
  • Simple to implement
37
Q

Early HTTP disadvantages (3)

A
  • TCP three way handshake for every request
  • TCP slow start for every new connection
  • Servers must reserve resources for many connections that haven’t timed out yet
38
Q

Improvement on early HTTP inefficiency

A

Persistent connections

39
Q

Persistent Connections

A

Multiple HTTP requests/responses are multiplexed on a single TCP connection

40
Q

Web Content Distribution Networks (CDN)

A

Overlay network of web caches designed to deliver content to a client from optimal location, often through many geographically disparate servers.

41
Q

CDNs aim to place cache as close to __ as possible.

A

Users

42
Q

CDN owners (3)

A
  • Content providers, e.g., Google
  • Networks, e.g., AT&T
  • Independent, e.g., Akamai
43
Q

Non-network CDNs typically place servers in other __ or __

A

Autonomous systems; ISPs

44
Q

CDN server selection criteria (3)

A
  • Least loaded server
  • Lowest latency (most typical)
  • Any alive server
45
Q

Token bucket application

A

Network traffic shaping or rate limiting

46
Q

Token bucket: Rho

A

Rate of tokens being added to the bucket (should match average bit rate)

47
Q

Token bucket: Beta

A

How large/long a burst is allowed
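The two parameters above (Rho and Beta) show up directly in a minimal token-bucket sketch. This is a tick-based simulation with arbitrary units, an assumption for illustration rather than an implementation from the cards:

```python
class TokenBucket:
    # Tokens arrive at rate rho and accumulate up to beta; a packet
    # conforms only if enough tokens are available. Tokens that would
    # overflow the bucket are discarded, bounding bursts at beta.
    def __init__(self, rho, beta):
        self.rho = rho      # token fill rate (tokens per tick)
        self.beta = beta    # bucket depth: maximum burst size
        self.tokens = beta  # start with a full bucket

    def tick(self):
        # Add rho tokens; tokens beyond beta are discarded.
        self.tokens = min(self.beta, self.tokens + self.rho)

    def conforms(self, packet_size):
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True
        return False

tb = TokenBucket(rho=1, beta=3)
burst = [tb.conforms(1) for _ in range(4)]  # burst bounded by beta
print(burst)  # [True, True, True, False]
tb.tick()
refill_ok = tb.conforms(1)
print(refill_ok)  # True
```

Note the contrast with the leaky bucket: here burstiness up to Beta is permitted, and it is tokens (not packets) that get discarded when the bucket is full.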

48
Q

CDN content routing types (3)

A
  • Routing systems
  • Application-based (e.g. HTTP redirect)
  • Naming system (e.g. DNS)
49
Q

CDN content routing: Routing systems

A

Routing systems (e.g., anycast): give all replicas the same IP address, then let routing deliver each client to the closest replica.

50
Q

CDN content routing: Application-based

A

Requires the client to first go to the origin server to get a redirect, increasing latency (simple, but adds delay).

51
Q

CDN content routing: Naming system

A

The client looks up a domain and the response contains the IP address of a nearby cache. This gives significant flexibility in directing different clients to different server replicas (fine-grained control, and fast).

52
Q

CDN relationship with ISPs

A

Symbiotic peering relationship.

53
Q

Why CDNs like to peer with ISPs (3)

A
  • Better throughput since no intermediate AS hops and network latency lower
  • Redundancy: more vectors to deliver content; increases reliability
  • Burstiness: during large request events, having connectivity to multiple networks where content is hosted allows ISP to spread traffic across multiple transit links thereby reducing 95th percentile and lowering transit costs
54
Q

Why ISPs like peering with CDNs (2)

A
  • Closer content improves performance for customers
  • Lower transit costs by avoiding traffic across costly links

55
Q

BitTorrent

A

Peer-to-Peer CDN used for file sharing and distribution of large files

56
Q

P2P advantages (2)

A
  • Reduce congestion
  • Prevent overload at the network where content is hosted

57
Q

BitTorrent publishing steps (4)

A
  • A peer creates a torrent
  • Seeders make the initial (complete) copy available
  • Clients start to download pieces of the file from seeders
  • Clients swap chunks of the file until each has the complete file
58
Q

BitTorrent: Leechers

A

Clients with incomplete copies of the file

59
Q

BitTorrent: Trackers

A

Allow peers to find each other and return random list of peers that leechers can use to swap parts of the file

60
Q

BitTorrent: Freeloading

A

Client leaves network as soon as it finishes downloading a file

61
Q

BitTorrent: Freeloading solution

A

Choking (tit-for-tat): a temporary refusal to upload chunks to another peer. If a peer can’t download from another peer, it won’t upload to it.

62
Q

BitTorrent: chunk swapping problem

A

If all clients receive the same chunks, no one has a complete copy and clients won’t swap.

63
Q

BitTorrent: Rarest piece first

A

The client determines which pieces are rarest and downloads those first. It begins with random downloads, since rarity isn’t a meaningful basis initially.

64
Q

BitTorrent: Tit-for-tat algorithm

A

A BitTorrent client sends data only to the top N peers that are sending to it, plus one peer that is optimistically unchoked. For example, with N=4, the client chooses the 4 peers sending to it at the fastest rate and sends data to them in return, plus a temporary 5th peer that is optimistically unchoked.
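The N=4 selection above can be sketched in a few lines; the peer names and rates are made up for illustration, and a fixed random seed stands in for the periodic optimistic-unchoke rotation:

```python
import random

def choose_unchoked(peer_rates, n=4, rng=random.Random(0)):
    # Tit-for-tat: unchoke the n peers uploading to us fastest,
    # plus one randomly chosen peer (optimistic unchoke).
    top = sorted(peer_rates, key=peer_rates.get, reverse=True)[:n]
    rest = [p for p in peer_rates if p not in top]
    optimistic = rng.choice(rest) if rest else None
    return top, optimistic

# Hypothetical observed download rates from each peer.
rates = {"A": 50, "B": 10, "C": 80, "D": 30, "E": 5, "F": 60}
top, opt = choose_unchoked(rates)
print(top)  # ['C', 'F', 'A', 'D']
```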

65
Q

CDN: Advantages of DNS redirection (3)

A
  • Faster than HTTP redirection, which requires extra RTTs
  • More control over who gets redirected where (vs IP anycast)
  • Simple to implement (DNS works out-of-the-box)
66
Q

Distributed Hash Table (DHT): Main motivation

A

Scalable location of data in a large distributed system, implemented by, e.g., the Chord protocol.

67
Q

DHT key problem

A

Lookup: the hash table is distributed across the network

68
Q

DHT advantages (3)

A
  • Scalable
  • Provable correctness
  • Reasonably good performance
69
Q

Consistent hashing

A

Keys and nodes map to the same ID space.

70
Q

Consistent hashing provides (2)

A
  • Load balance: all nodes receive roughly same number of keys
  • Flexibility: When nodes join/leave the network, only a small fraction of keys need to be moved
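A minimal sketch of the shared ID space, assuming SHA-1 truncated to a small ring (the node and key names are hypothetical):

```python
import hashlib

def ring_id(name, bits=8):
    # Hash a node name or key into the same 2**bits ID space.
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** bits)

def successor(key_id, node_ids):
    # Consistent hashing: a key is stored on the first node at or
    # after its ID, wrapping around the ring.
    ring = sorted(node_ids)
    for nid in ring:
        if nid >= key_id:
            return nid
    return ring[0]  # wrap around

nodes = [ring_id(n) for n in ("node-a", "node-b", "node-c")]
owner = successor(ring_id("some-key"), nodes)
print(owner in nodes)  # True
```

When a node joins or leaves, only the keys between it and its neighbor change owners, which is the flexibility property above.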
71
Q

TCP AIMD

A

Additive Increase, Multiplicative Decrease (AIMD). A graph of rate over time shows the TCP sawtooth: TCP increases its rate additively until it reaches the saturation point, sees packet loss, and then halves its sending rate.
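The sawtooth can be reproduced with a few lines; the capacity and window units here are arbitrary assumptions:

```python
def aimd(rounds, capacity, window=1, alpha=1):
    # Additive increase of alpha per RTT; on reaching capacity
    # (a loss), multiplicative decrease halves the window.
    history = []
    for _ in range(rounds):
        history.append(window)
        if window >= capacity:
            window //= 2   # multiplicative decrease after loss
        else:
            window += alpha  # additive increase
    return history

print(aimd(10, capacity=8))  # [1, 2, 3, 4, 5, 6, 7, 8, 4, 5]
```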

72
Q

AIMD throughput is inversely proportional to __ and the square root of the __.

A

RTT; Loss rate

73
Q

AIMD loss rate

A

One loss per sawtooth cycle of roughly Wm^2/8 packets sent, i.e., a loss rate of about 8/Wm^2 (where Wm is the maximum window size).
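Plugging in an assumed maximum window makes the arithmetic concrete, and connects to the previous card: throughput scales with 1/(RTT * sqrt(loss rate)). The window size and RTT below are illustrative assumptions:

```python
import math

Wm = 16                        # assumed maximum window, in packets
packets_per_cycle = Wm**2 / 8  # packets sent between losses
p = 1 / packets_per_cycle      # loss rate: 8 / Wm**2
print(packets_per_cycle, p)    # 32.0 0.03125

rtt = 0.1                      # assumed RTT, in seconds
throughput = 1 / (rtt * math.sqrt(p))  # proportional form only
```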

74
Q

Efficiency

A

How much of the available bandwidth is used, i.e., efficient congestion control leaves little or no bandwidth wasted

75
Q

Problems with TCP streaming (4)

A
  • Audio/video can tolerate loss and delay but not variability in delay
  • TCP retransmits lost packets, which isn’t always useful
  • TCP slows down rate after packet loss
  • Protocol overhead (the 20-byte TCP header and ACKs aren’t needed)
76
Q

Methods for implementing consistent hashing (3)

A
  • Every node knows the location of every other node
  • Each node only knows location of immediate successor
  • Finger tables
77
Q

Consistent Hashing: Every node knows the location of every other node

A

Lookups are fast - O(1). Routing table must be large - O(n).

78
Q

Consistent Hashing: Each node only knows location of immediate successor

A

Small table, size O(1). Requires O(n) lookups.

79
Q

Finger tables

A

Every node knows the location of a small set of other nodes, and the distance to the nodes it knows increases exponentially (powers of two around the ring).

80
Q

Finger tables: lookup steps (3)

A
  • Finger i points to the successor of n + 2^i
  • To find the node responsible for a particular ID, find its predecessor and ask for that node’s successor
  • Keep asking each predecessor for its successor, moving around the ring until reaching the node whose successor’s ID is bigger than the data’s ID
81
Q

Finger tables: Requires _ hops, and _ messages per lookup. Size of finger table is _ per node

A

O(log n); O(log n); O(log n)
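A quick sketch of why the table is O(log n): with a ring of size 2^m, each node keeps one finger per power of two. Indexing fingers from 0 is an assumption for simplicity (Chord itself indexes from 1):

```python
def finger_targets(n, m):
    # Finger i points at the successor of (n + 2**i) mod 2**m,
    # so each node keeps m = log2(ring size) fingers.
    return [(n + 2**i) % (2**m) for i in range(m)]

print(finger_targets(0, 4))       # [1, 2, 4, 8]
print(len(finger_targets(5, 7)))  # 7 fingers for a 128-ID ring
```

Each hop at least halves the remaining ID distance to the target, which is where the O(log n) hops per lookup comes from.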

82
Q

Finger tables: What happens when a node joins a network

A

Initialize the new node’s fingers, update the fingers of existing nodes, and transfer keys from the successor to the new node.

83
Q

Finger tables: What happens when a node leaves a network

A

Each node keeps track of its successor’s fingers, so a predecessor can reach the nodes corresponding to a failed or departed node’s fingers.

84
Q

Given a DHT with finger tables of a constant size (e.g. 1), how many hops are required per lookup?

A

O(n) - each node only knows how to find the next one.

85
Q

BIC-TCP

A

Approximates a cubic function through additive increase, binary search, and max probing.

86
Q

BIC-TCP can be too aggressive (unfriendly) on networks with a short __ or low __.

A

RTT; speed.