Congestion Control and Streaming Flashcards
How does the problem of senders lacking knowledge of a shared downstream bottleneck manifest itself?
- lost packets
- long delays
- congestion collapse
Congestion Collapse (short definition)
throughput is less than the bottleneck link capacity
packets consume network resources only to get dropped later at a downstream link
Congestion Collapse causes
- spurious retransmission
- undelivered packets
Solution to spurious retransmission
- better timers
- TCP congestion control
How does TCP interpret packet loss? What does it do as a result?
As congestion; it slows down (reduces its sending rate) as a result
What do senders do if no packets are dropped?
Increase sending rate
TCP increase algorithm behavior
Sender probes the network to determine whether it can sustain a higher sending rate
TCP decrease algorithm behavior
Senders react to congestion to achieve optimal loss rates, delays, and sending rates
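A minimal sketch of the AIMD (additive-increase, multiplicative-decrease) idea behind these two behaviors, assuming one congestion-window update per RTT; the function name and constants are illustrative, not TCP's exact implementation:

```python
# Illustrative AIMD window update (not a full TCP implementation).
# Assumes one update per RTT and a boolean loss signal.

def aimd_update(cwnd, loss_detected, alpha=1.0, beta=0.5):
    """Return the new congestion window, in packets."""
    if loss_detected:
        # Multiplicative decrease: back off sharply when loss signals congestion.
        return max(1.0, cwnd * beta)
    # Additive increase: keep probing for spare capacity, one packet per RTT.
    return cwnd + alpha

# Example: the window grows by 1 each RTT, then halves on a loss.
cwnd = 10.0
for loss in [False, False, False, True, False]:
    cwnd = aimd_update(cwnd, loss)
    print(cwnd)  # 11.0, 12.0, 13.0, 6.5, 7.5
```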
RTT = 100 milliseconds
packet size = 1 KB (kilobyte)
window size = 10 packets
What is the transmission rate in kbps?
800 kbps
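Worked out, assuming the sender transmits one full window per RTT and 1 KB = 8,000 bits:

```latex
\text{rate} = \frac{\text{window size} \times \text{packet size}}{\text{RTT}}
            = \frac{10 \times 8{,}000\ \text{bits}}{0.1\ \text{s}}
            = 800{,}000\ \text{bits/s}
            = 800\ \text{kbps}
```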
Rate Based Approach to Rate Adjustment
- Sender monitors loss rate
- sender uses a timer to modulate the sending rate
(less common method)
Fairness vs Efficiency
Fairness is every flow getting its ‘fair share’ of the bottleneck; efficiency is the network’s resources being used well.
Where does high ‘fan in’ occur?
at the links between the leaves and the root of a data center topology
data center attributes
- high ‘ fan in’
- high bandwidth, low latency workloads
- many parallel requests
TCP incast problem
throughput collapse resulting from many parallel requests in a data center: switch buffers overflow, packets are dropped, and the network is underutilized.
This is a many-to-one communication problem.
It causes bursty retransmissions due to TCP timeouts.
bursty retransmission cause
caused by TCP timeouts in the TCP incast scenario
incast
drastic reduction in application throughput caused when servers all simultaneously request data
barrier synchronization
client/app may have many parallel threads, and no forward progress can be made until all the responses for those threads have been received.
solution to idle time in barrier synchronization
granular retransmission timers that operate in microseconds
another option is for the client to acknowledge every other packet (not the main solution)
basic goal of TCP
prevent congestion collapse
challenges of streaming
- Large volume of data
- Data volume varies over time
- Low tolerance for delay variation (video)
- Low tolerance for delay, period (games, VOIP)
analog to digital audio sampling explained
samples taken of audio at fixed intervals, with each sample being a fixed size in bits
video compression techniques
- spatial redundancy
- temporal redundancy
spatial redundancy
compression within a single image that exploits visual detail humans tend not to notice
temporal redundancy
compression across images via reference anchor and derived frames
reference anchor
“I” frame. Used as reference frame in video compression. Divided into grid.
derived frame
“P” frame
motion vectors
difference between the I frame blocks and the P frame blocks in video compression
how does TCP know when to stop increasing rate?
when sender notices packet drops
causes of packet drops OTHER than congestion
in wireless networks, interference may corrupt a packet and cause it to be dropped
how does a TCP sender increase its sending rate?
by increasing the window size
every time additive increase is applied, what is increasing (other than the window size)?
efficiency
every time multiplicative decrease is applied, what is increasing?
fairness
This is because you get closer to the x1=x2 fairness line in the phase plot
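A tiny simulation of two AIMD flows sharing one bottleneck illustrates this convergence; the capacity, starting rates, and step sizes below are made-up values, not anything from the course:

```python
# Two AIMD flows sharing a bottleneck of capacity C (illustrative values).
# Each RTT both flows add 1; when their combined rate exceeds C, both halve.
# The allocation drifts toward the x1 = x2 fairness line.

C = 100.0            # bottleneck capacity (made-up units)
x1, x2 = 70.0, 10.0  # deliberately unfair starting rates

for _ in range(200):
    if x1 + x2 > C:            # congestion detected: multiplicative decrease
        x1, x2 = x1 / 2, x2 / 2
    else:                      # otherwise: additive increase
        x1, x2 = x1 + 1, x2 + 1

print(round(x1, 1), round(x2, 1))  # the two rates end up nearly equal
```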
throughput collapse cause (and what example was used in class?)
caused by switch buffer overflow (the example used in class was the barrier synchronization problem)
8,000 samples/sec
8 bits/sample
…what is the resulting bit rate?
64 kbps
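A quick check of the arithmetic:

```latex
8{,}000\ \frac{\text{samples}}{\text{s}} \times 8\ \frac{\text{bits}}{\text{sample}}
= 64{,}000\ \text{bits/s} = 64\ \text{kbps}
```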
playout delay
acceptable delay at the beginning of a stream while waiting for initial packets to fill the playout buffer; the buffer absorbs delay variation (jitter) so playout can proceed smoothly
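A minimal sketch of the playout-buffer idea, assuming packets are generated at a fixed interval and packet i is scheduled for playout at first_arrival + playout_delay + i * interval; the function name and all delay values are illustrative:

```python
# Illustrative playout-buffer logic: packets arriving before their scheduled
# playout time are played; packets arriving after it are treated as lost.

def schedule_playout(arrival_times, playout_delay, interval):
    """arrival_times[i] = arrival time of packet i, in send order (seconds)."""
    start = arrival_times[0] + playout_delay   # wait for the buffer to fill
    played, late = 0, 0
    for i, arrival in enumerate(arrival_times):
        playout_time = start + i * interval
        if arrival <= playout_time:
            played += 1
        else:
            late += 1          # missed its playout slot; skipped
    return played, late

# Packets sent every 20 ms but arriving with jitter (made-up times).
arrivals = [0.100, 0.118, 0.195, 0.150, 0.240]
print(schedule_playout(arrivals, playout_delay=0.05, interval=0.02))  # (3, 2)
```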
why is TCP bad for streaming?
- reliable delivery
- slows down upon loss
- protocol overhead (headers, ACKs)
why is UDP good for streaming?
- no retransmission
- no sending rate adaptation
- smaller headers
what is delegated to higher layers if UDP is used?
- when to transmit
- how to encapsulate
- whether to retransmit
- whether to adapt sending rate
what property must UDP have when sharing a link with TCP traffic?
UDP must be ‘TCP friendly’
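One common way to make this concrete (not stated in these cards; this is the Mathis et al. steady-state TCP throughput approximation) is to keep the UDP sending rate at or below what a TCP flow would achieve under the same conditions:

```latex
\text{rate} \;\le\; \frac{\text{MSS}}{\text{RTT}} \cdot \sqrt{\frac{3}{2p}}
```

where MSS is the packet size, RTT the round-trip time, and p the observed loss rate.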
QoS (quality of service) techniques
- explicit reservations
- mark certain packet streams as high priority
weighted fair queueing
in the network there are multiple queues, and queues with higher priority (weight) are serviced more frequently
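A simplified sketch of the idea, assuming each flow's packets accrue a virtual finish time of size/weight and the scheduler always sends the head packet with the smallest finish time; flow names, weights, and packet sizes are made up (real WFQ also tracks a system-wide virtual clock, omitted here):

```python
# Simplified weighted-fair-queueing sketch for continuously backlogged queues.
from collections import deque

def wfq_order(queues, weights):
    """queues: {flow: deque of packet sizes}. Returns the send order."""
    finish = {f: 0.0 for f in queues}   # last virtual finish time per flow
    order = []
    while any(queues.values()):
        # virtual finish time of the head packet of each non-empty queue
        candidates = {f: finish[f] + q[0] / weights[f]
                      for f, q in queues.items() if q}
        f = min(candidates, key=candidates.get)   # earliest finish is sent next
        finish[f] = candidates[f]
        order.append((f, queues[f].popleft()))
    return order

queues = {"gold": deque([100] * 6), "bronze": deque([100] * 2)}
print(wfq_order(queues, {"gold": 3, "bronze": 1}))
# "gold" (weight 3) packets are sent about three times as often as "bronze"
```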
alternatives to weighted fair queueing
fixed bandwidth per app (bad because this is inefficient from a network-utilization perspective)
admission control, where the app declares its needs in advance and the network blocks contending traffic to accommodate it (analogous to a busy signal on a telephone call)