slides25 Flashcards
retransmission timeout (RTO)
timer that determines when to resend in the absence of an ACK
the solution is dynamic behaviour that adapts to changing conditions, rather than a simple fixed timeout
Jacobson gave a simple algorithm: keep a variable RTT, a smoothed round-trip time estimate, for each connection
RTT is the best current estimate for the time of a segment going out and the ACK returning
If we haven’t received an ACK in approximately this time, deem it lost
If the ACK returns before the timeout, TCP looks at the actual round trip time M and updates RTT using
RTT = αRTT + (1 − α)M (an exponentially weighted moving average)
α is a smoothing factor, usually 7/8 for easy arithmetic
then keep a smoothed estimate of how much RTT varies (Jacobson used the mean deviation as a cheap stand-in for the standard deviation) to decide whether ACKs return after similar amounts of time or the delays are all over the place; the timeout is set to RTT plus a multiple of this deviation
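The two smoothed estimates and the resulting timeout can be sketched as follows (a minimal illustration; the variable names, the initial values, and the sample measurements are made up, and real implementations add clamping and clock-granularity details):

```python
# Sketch of Jacobson-style RTT estimation: a smoothed RTT plus a
# smoothed deviation, with the timeout set to RTT + 4 deviations.

ALPHA = 7 / 8   # smoothing factor for the RTT estimate
BETA = 3 / 4    # smoothing factor for the deviation estimate

def update_rtt(srtt, rttvar, m):
    """Fold one new round-trip measurement m into both estimates."""
    rttvar = BETA * rttvar + (1 - BETA) * abs(srtt - m)
    srtt = ALPHA * srtt + (1 - ALPHA) * m
    return srtt, rttvar

def rto(srtt, rttvar):
    """Retransmission timeout: the estimate plus four deviations."""
    return srtt + 4 * rttvar

srtt, rttvar = 100.0, 0.0      # illustrative initial estimates, in ms
for m in (110, 90, 105):       # three measured round trips
    srtt, rttvar = update_rtt(srtt, rttvar, m)
print(round(srtt, 2), round(rto(srtt, rttvar), 2))
```

Note how a stable connection drives the deviation term down, pulling the timeout close to the RTT estimate, while jittery measurements push it up.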
retransmission ambiguity problem
What if the timer expires before the ACK is received?
• we resend the segment, of course
• but we also need to update RTT somehow
But we can't use the measured RTT of the resent segment: the ACK that arrives might be the (somewhat delayed) ACK of the original segment, not of the retransmission
Karn’s algorithm
Karn’s algorithm is to double the timeout T on each failure, but do not adjust RTT
When segments start getting through, normal RTT updates resume and RTT quickly converges to the appropriate value
This doubling is called exponential backoff
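The two halves of Karn's rule can be summed up in a toy sketch (all names illustrative; a real stack would also cap the backoff):

```python
# Karn's algorithm, in miniature: double the timeout on each failure,
# and never feed an ambiguous (retransmitted-segment) ACK into the
# RTT estimator.

def backoff(rto, failures):
    """Exponential backoff: the timeout doubles once per failure."""
    return rto * (2 ** failures)

def should_update_rtt(segment_was_retransmitted):
    """Karn's rule: never sample RTT from a retransmitted segment."""
    return not segment_was_retransmitted

print(backoff(200, 3))            # a 200 ms timeout doubled three times
print(should_update_rtt(True))    # ambiguous ACK -> skip the sample
```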
persistence timer
Its role is to prevent deadlock through the loss of window update segments
A waits for B to finish reading and free buffer space. If B's window update announcing the reopened window is lost, A would wait forever; so when the persist timer expires, A sends a nudge (a window probe) asking B for the current window, avoiding deadlock
The persist timer is unset when a non-zero window is received
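The arm/probe/disarm cycle can be modelled in a few lines (a toy state machine, not a real TCP implementation; the class and method names are made up):

```python
# Toy model of the persist timer: a zero window arms it, each expiry
# sends a window probe, and any non-zero window update disarms it.

class PersistTimer:
    def __init__(self):
        self.armed = False
        self.probes_sent = 0

    def on_window_update(self, window):
        if window == 0:
            self.armed = True          # receiver has no buffer space
        else:
            self.armed = False         # window reopened: timer unset
            self.probes_sent = 0

    def on_timer_expiry(self):
        if self.armed:
            self.probes_sent += 1      # send a window probe segment
            return "probe"
        return None

t = PersistTimer()
t.on_window_update(0)      # zero window arrives: arm the timer
print(t.on_timer_expiry()) # timer fires: a probe goes out
t.on_window_update(4096)   # non-zero window update: timer unset
print(t.on_timer_expiry()) # nothing more to do
```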
keepalive timer
If the client has crashed, these resources (the server's connection state and buffers) could be better used elsewhere
To do this the server sets a keepalive timer when the connection goes idle
A typical value is 2 hours
When the timer expires, the server can send a keepalive probe. This is simply an empty segment (i.e., no data)
If the server gets an ACK, everything is OK
If not, the server might conclude the client is no longer active
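In practice an application opts in to this mechanism through socket options. A sketch (SO_KEEPALIVE is portable; TCP_KEEPIDLE, which shortens the classic two-hour idle period, is a Linux-specific knob and the 60 s value here is purely for demonstration):

```python
# Enabling TCP keepalive on a socket via the standard socket API.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)  # turn probes on
if hasattr(socket, "TCP_KEEPIDLE"):
    # Idle time before the first probe (default is typically 2 hours).
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)
s.close()
```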
why not to use keepalive
they can cause a generally good connection to be closed because of an intermittent failure of a router
SACK
TCP may experience poor performance when multiple packets are lost from one window of data. With the limited information available from cumulative acknowledgments, a TCP sender can only learn about a single lost packet per round trip time. An aggressive sender could choose to retransmit packets early, but such retransmitted segments may have already been successfully received.
SACK is a strategy which corrects this behavior in the face of multiple dropped segments. With selective acknowledgments, the data receiver can inform the sender about all segments that have arrived successfully, so the sender need retransmit only the segments that have actually been lost.
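What the receiver reports can be sketched as follows (illustrative code, not a real TCP implementation: given the out-of-order ranges that arrived above the cumulative ACK point, compute the contiguous SACK blocks; the gaps between blocks are what the sender must retransmit):

```python
# Sketch of a receiver building SACK blocks: merge the byte ranges
# that arrived out of order into contiguous blocks above the
# cumulative ACK point.

def sack_blocks(received, cum_ack):
    """received: list of (start, end) byte ranges that have arrived."""
    blocks = []
    for start, end in sorted(received):
        if blocks and start <= blocks[-1][1]:
            # Overlaps or touches the previous block: merge them.
            blocks[-1] = (blocks[-1][0], max(blocks[-1][1], end))
        else:
            blocks.append((start, end))
    return [b for b in blocks if b[0] > cum_ack]

# Bytes up to 1000 are ACKed cumulatively; 1000-2000 and 3000-4000
# were lost, while two later ranges arrived out of order.
print(sack_blocks([(2000, 3000), (4000, 5000)], cum_ack=1000))
```

From these two blocks the sender can infer in a single round trip that exactly the ranges 1000-2000 and 3000-4000 are missing, instead of discovering one hole per RTT.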
Multipath TCP (MPTCP)
Multipath TCP (MPTCP) has been suggested for extra performance, for failover, and for mobile hosts that roam between, say, 4G and Wi-Fi (used since iOS 7). It layers one MPTCP connection over one or more TCP connections, e.g., using both the 4G and Wi-Fi links simultaneously
T/TCP
This protocol is faster than TCP and delivery reliability is comparable to that of TCP. T/TCP suffers from several major security problems as described by Charles Hannum in September 1996.[1][2] It has not gained widespread popularity.
SCTP
SCTP provides some of the features of both UDP and TCP: it is message-oriented like UDP and ensures reliable, in-sequence transport of messages with congestion control like TCP. It differs from those protocols by providing multi-homing and redundant paths to increase resilience and reliability.
DCCP
DCCP provides a way to gain access to congestion-control mechanisms without having to implement them at the application layer. It allows for flow-based semantics like in Transmission Control Protocol (TCP), but does not provide reliable in-order delivery. Sequenced delivery within multiple streams as in the Stream Control Transmission Protocol (SCTP) is not available in DCCP. A DCCP connection contains acknowledgment traffic as well as data traffic.
QUIC
QUIC (“Quick UDP Internet Connections”) is a Google-developed alternative to TCP, primarily aimed at being a better transport layer for HTTP (expected to be the basis for HTTP/3)
It is reliable, connection oriented, has congestion control, is encrypted and authenticated and is transmitted within UDP datagrams
The last point is important, as routers have a tendency to mess with (or drop) packets whose protocol they don't recognise
OVERHEAD REDUCTION
The first change is to greatly reduce overhead during connection setup. As most HTTP connections will demand TLS and compression, QUIC makes the exchange of setup keys and supported protocols part of the initial handshake process. When a client opens a connection, the response packet includes the data needed for future packets to use the encryption or any desired compression. This eliminates the need to set up the TCP connection and then negotiate the transmission protocols via additional packets
HEAD OF LINE
Head-of-line blocking (HOL blocking) in computer networking is a performance-limiting phenomenon that occurs when a line of packets is held up by the first packet.
This organization allows future changes to be made more easily as it does not require changes to the kernel for updates. One of QUIC’s longer-term goals is to add new systems for forward error correction (FEC) and improved congestion control. One reason for the use of FEC is that QUIC currently uses HTTP/2 header compression, which includes head-of-line blocking for header frames. By adopting FEC, such errors can be eliminated before they reach the HTTP level, and this source of blocking removed without changing the underlying HTTP/2 protocol.[15]
SPDY AND ITS RELATION TO HTTP
SPDY does not replace HTTP; it modifies the way HTTP requests and responses are sent over the wire.[1] This means that all existing server-side applications can be used without modification if a SPDY-compatible translation layer is put in place.
SPDY is effectively a tunnel for the HTTP and HTTPS protocols. When sent over SPDY, HTTP requests are processed, tokenized, simplified and compressed. For example, each SPDY endpoint keeps track of which headers have been sent in past requests and can avoid resending the headers that have not changed; those that must be sent are compressed.
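The "avoid resending unchanged headers" idea boils down to a per-endpoint delta against what was sent before. A toy sketch (illustrative only, nothing like the actual SPDY wire format or its compression dictionary):

```python
# Sketch of stateful header handling: each endpoint remembers the
# headers it already sent and emits only the new or changed ones.

def headers_to_send(previous, current):
    """Return only the headers that are new or whose value changed."""
    return {k: v for k, v in current.items() if previous.get(k) != v}

first = {"host": "example.com", "user-agent": "demo/1.0", "accept": "*/*"}
second = {"host": "example.com", "user-agent": "demo/1.0",
          "accept": "text/html"}

print(headers_to_send({}, first))       # first request: everything goes
print(headers_to_send(first, second))   # repeat request: only the delta
```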
The IETF working group for HTTPbis has released the draft of HTTP/2.[22] SPDY (draft-mbelshe-httpbis-spdy-00) was chosen as the starting point.[23][24]
middlebox router problem
Application interference
Some middleboxes interfere with application functionality, restricting or preventing end host applications from performing properly.
Network Address Translators present a challenge in that NAT devices divide traffic destined to a public IP address across several receivers. When connections between a host on the Internet and a host behind the NAT are initiated by the host behind the NAT, the NAT learns that traffic for that connection belongs to the local host. Thus, when traffic coming from the Internet is destined to the public (shared) address on a particular port, the NAT can direct the traffic to the appropriate host. However, connections initiated by a host on the Internet do not present the NAT any opportunity to “learn” which internal host the connection belongs to. Moreover, the internal host itself may not even know its own public IP address to announce to potential clients what address to connect to. To resolve this issue, several new protocols have been proposed.[8][9][10]
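The "learning" asymmetry described above can be modelled in a few lines (a toy port-mapping table with made-up names, ignoring timeouts, protocols, and per-flow 5-tuples):

```python
# Toy NAT model: outbound traffic installs a mapping from a public
# port to the internal host; inbound traffic is deliverable only if
# such a mapping already exists.

class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}            # public port -> (internal host, port)
        self.next_port = 40000

    def outbound(self, internal_host, internal_port):
        """Connection initiated from inside: the NAT learns a mapping."""
        port = self.next_port
        self.next_port += 1
        self.table[port] = (internal_host, internal_port)
        return self.public_ip, port

    def inbound(self, public_port):
        """Traffic from the Internet: delivered only if mapped."""
        return self.table.get(public_port)

nat = Nat("203.0.113.5")
addr = nat.outbound("10.0.0.7", 5000)   # inside host opens a connection
print(nat.inbound(addr[1]))             # replies find their way back in
print(nat.inbound(12345))               # unsolicited inbound: no mapping
```

The `None` on the unsolicited port is precisely the problem that hole-punching and relay protocols were proposed to solve.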
Other common middlebox-induced application challenges include web proxies serving “stale” or out of date content,[11] and firewalls rejecting traffic on desired ports.[12]
UDP-Lite
UDP-Lite (Lightweight User Datagram Protocol,[1] sometimes UDP Lite) is a connectionless protocol that allows a potentially damaged data payload to be delivered to an application rather than being discarded by the receiving station. This is useful as it allows decisions about the integrity of the data to be made in the application layer (application or the codec), where the significance of the bits is understood
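The partial-coverage idea can be illustrated with a toy receiver check (the checksum here is a deliberately fake one-liner, not the Internet checksum, and the layout is not the real UDP-Lite header; only the first `coverage` bytes are verified):

```python
# Sketch of UDP-Lite's partial checksum coverage: damage in the
# uncovered tail does not cause the datagram to be dropped, while
# damage in the covered prefix still does.

def checksum(data):
    return sum(data) & 0xFFFF      # toy checksum for illustration

def accept(datagram, coverage, expected):
    """Deliver the datagram if its covered prefix is intact."""
    return checksum(datagram[:coverage]) == expected

payload = bytes([1, 2, 3, 4, 5, 6, 7, 8])
cksum = checksum(payload[:4])          # sender covers only 4 bytes

damaged = payload[:4] + bytes([9, 9, 9, 9])    # tail corrupted
print(accept(damaged, 4, cksum))       # still delivered to the codec

damaged_head = bytes([0]) + payload[1:]        # covered part corrupted
print(accept(damaged_head, 4, cksum))  # rejected as usual
```

This matches the use case above: a video codec may prefer a payload with a few flipped bits over no payload at all, as long as the headers it needs are intact.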