Transport Layer Flashcards

1
Q

What are the differences between connectionless and connection-oriented transport?

A

Connectionless:

  • Fast and simple communication
  • No connection setup
  • Limited error control (simple checksums)
  • No flow control

Connection-oriented:

  • Reliable communication
  • Separation of concerns
  • 3 phases: Establishment, data transfer and release
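A minimal sketch of this contrast using Python's socket API; the host name and ports below are placeholders for illustration only:

    import socket

    # Connection-oriented (TCP): explicit establishment, transfer and release phases.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 80))          # 1. establishment (three-way handshake)
    tcp.sendall(b"GET / HTTP/1.0\r\n\r\n")    # 2. data transfer (reliable, ordered)
    tcp.close()                               # 3. release

    # Connectionless (UDP): no setup; each datagram stands on its own.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("example.com", 9))  # just send; no delivery guarantee
    udp.close()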
2
Q

What’s the difference between the data link layer and the transport layer?

A

The data link layer provides communication between two hosts on the same physical link, while hosts at the transport layer may be separated by an entire network.

3
Q

Discuss addressing

A

Transport Service Access Points (TSAPs) define an endpoint for transport layer traffic. They are necessary because several processes run on the same host, so a network address alone cannot identify which process the data is meant for. In the Initial Connection Protocol, a process server listens for connections on well-known TSAPs.

4
Q

Discuss connection management

A

Transport layer protocols operate over a network, so segments are carried inside network layer packets. There are different end-to-end paths through the network, some faster than others, which causes packets to be received out of order or delayed. A bounded packet lifetime is enforced so that after this time a packet is known to be either lost or delivered, rather than still travelling along a slower path. A sequence number is also added to each segment for duplicate detection.

5
Q

Discuss error control

A

By feeding the transferred data through a hash (checksum) function, we get a simple checksum that shows whether the data has been corrupted: if it has, recomputing the checksum yields a different value than the one that was sent along with the data.
The sender knows a segment has been received when it gets an acknowledgement for that segment.
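A minimal sketch of this idea in Python, using a 16-bit ones'-complement sum in the style of the Internet checksum; the function name and sample data are illustrative:

    def checksum16(data: bytes) -> int:
        """Fold the data into a 16-bit ones'-complement sum (Internet-checksum style)."""
        if len(data) % 2:                             # pad odd-length data with a zero byte
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # wrap carries back into the sum
        return ~total & 0xFFFF

    segment = b"hello, transport layer"
    sent_checksum = checksum16(segment)

    # The receiver recomputes the checksum; a mismatch means the data was corrupted.
    assert checksum16(segment) == sent_checksum
    assert checksum16(b"hellp, transport layer") != sent_checksum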

6
Q

Discuss Stop-and-Wait

A

The window size of the sender is equal to 1. The sender stores the latest segment until an ACK for that segment is received, and the receiver keeps track of the next segment it wants to receive. When the sender doesn’t receive an acknowledgement in time, a timeout occurs and the last segment is resent until an acknowledgement arrives. Stop-and-Wait can be optimized for bidirectional connections by piggybacking acknowledgements on data segments. The downside of Stop-and-Wait is that the sender sits idle for a full round-trip time per segment, so throughput is poor when the round-trip time is large compared to the transmission time.
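A minimal sketch of the sender side in Python; channel.send and channel.recv_ack are hypothetical helpers for an unreliable channel, not a real API:

    import itertools

    TIMEOUT = 1.0  # seconds to wait for an ACK before retransmitting (illustrative value)

    def stop_and_wait_send(channel, segments):
        """Send segments one at a time, alternating sequence numbers 0 and 1."""
        for seq, payload in zip(itertools.cycle([0, 1]), segments):
            while True:
                channel.send(seq, payload)               # (re)transmit the current segment
                ack = channel.recv_ack(timeout=TIMEOUT)  # returns None on timeout
                if ack == seq:                           # expected ACK: move to the next segment
                    break
                # timeout, duplicate or corrupted ACK: resend the same segment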

7
Q

Discuss Go-Back-N

A

The sender’s window size is N; the receiver’s window remains 1. The sender can send multiple segments in one round trip, and the sender window starts sliding when ACKs are received. The acknowledgements are cumulative: suppose we send segments 0, 1 and 2 and receive ACK 2 before ACK 0 and ACK 1; we still know that the receiver got segments 0 and 1, because otherwise segment 2 would have been discarded. When the ACK for a segment isn’t received before that segment’s timer runs out, the segment and all the following ones in the current sender window are retransmitted. The downside of Go-Back-N is the repeated transmission of segments that were already received correctly.
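A minimal sketch of the sender side in Python, with the same hypothetical channel helpers as in the Stop-and-Wait card and the retransmission timer modelled by the blocking recv_ack call:

    TIMEOUT = 1.0  # retransmission timer for the oldest unacknowledged segment (illustrative)

    def go_back_n_send(channel, segments, window=4):
        """Go-Back-N sender with cumulative acknowledgements."""
        base = 0       # oldest unacknowledged segment
        next_seq = 0   # next segment to transmit
        while base < len(segments):
            # Fill the sender window.
            while next_seq < min(base + window, len(segments)):
                channel.send(next_seq, segments[next_seq])
                next_seq += 1
            ack = channel.recv_ack(timeout=TIMEOUT)  # cumulative ACK number, or None on timeout
            if ack is not None and ack >= base:
                base = ack + 1                       # ACK n confirms everything up to n
            else:
                next_seq = base                      # timeout: go back and resend the whole window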

8
Q

Discuss Selective Repeat

A

Now the sender AND the receiver have a buffer, which means the receiver can accept packets out of order. If the acknowledgement of a packet isn’t received before the packet’s timer runs out, only that packet is retransmitted. The window size has to be at most half of the sequence number space; otherwise, the receiver could accept retransmitted old segments as new data (duplicates).
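A small worked check of the window-size rule, assuming 3-bit sequence numbers (a space of 8) purely for illustration:

    def max_selective_repeat_window(seq_bits: int) -> int:
        """Largest safe Selective Repeat window for seq_bits-bit sequence numbers."""
        return (1 << seq_bits) // 2   # half of the sequence number space

    # With 3-bit sequence numbers (0..7) the window may be at most 4.
    # With a window of 5 the receiver first expects 0..4; if all ACKs are lost,
    # the sender retransmits old segment 0 while the receiver's window has
    # advanced to 5, 6, 7, 0, 1, so the duplicate 0 is wrongly accepted as new.
    assert max_selective_repeat_window(3) == 4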

9
Q

Why can’t we use all available bandwidth while transmitting?

A

When we use almost all of the bandwidth, bursts of higher traffic cause losses in network buffers. These losses cause more retransmissions, initiating congestion collapse.

10
Q

Discuss convergence in congestion control

A

Any approach to fair allocation must converge quickly to the ideal operating point. TCP uses Additive Increase Multiplicative Decrease (AIMD), which converges in a sawtooth fashion. It is easy to drive the network into congestion, but difficult to recover from it.
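A minimal sketch of the AIMD rule applied to a congestion window, with an illustrative segment size:

    MSS = 1460  # maximum segment size in bytes (typical value, for illustration only)

    def aimd_update(cwnd: int, loss_detected: bool) -> int:
        """One AIMD step: grow by one segment per round trip, halve on loss."""
        if loss_detected:
            return max(MSS, cwnd // 2)  # multiplicative decrease
        return cwnd + MSS               # additive increase

    # Applying this repeatedly gives the sawtooth: the window climbs linearly,
    # then drops by half each time the network signals congestion through loss.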

11
Q

Discuss UDP

A

UDP is a connectionless transport protocol, avoiding connection overhead while transmitting segments. UDP also provides ports and checksums. These checksums are end-to-end, which the network layer cannot provide. However, there is no flow or congestion control, and no ordering of the sent data.

12
Q

Discuss the working of RTP

A

The Real-time Transport Protocol takes multimedia streams and multiplexes them. The different streams need to be synchronized so that the order of the streams at the receiving side is the same as at the sending side. RTP transmits its packets over UDP, because multicast (which UDP allows) is important for efficient media distribution. Since RTP runs on top of UDP in user space, it can also be seen as an application protocol.

13
Q

Discuss jitter

A

Packets can take multiple paths to the receiver. This results in out-of-order packets and a variable delay between sender and receiver. This variability of the delay (jitter) has a big impact on the quality of the stream. To minimize the effect, a buffer is used to store the received packets before playback. If a packet’s delay is too long, the buffer needs to be bigger or the packet has to be dropped.
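A minimal sketch of such a playout buffer in Python; the class, its method names and the 150 ms delay are illustrative assumptions, not an RTP API:

    import heapq

    class PlayoutBuffer:
        """Reorder incoming packets by media timestamp and release them at playback time."""

        def __init__(self, playback_delay=0.150):
            self.playback_delay = playback_delay  # fixed delay added to absorb jitter
            self.heap = []                        # min-heap of (media_timestamp, payload)

        def insert(self, timestamp, payload):
            heapq.heappush(self.heap, (timestamp, payload))

        def pop_due(self, elapsed_since_stream_start):
            """Return the packets whose playback point has been reached, in timestamp order."""
            playback_point = elapsed_since_stream_start - self.playback_delay
            due = []
            while self.heap and self.heap[0][0] <= playback_point:
                due.append(heapq.heappop(self.heap))
            return due

Packets that arrive after their playback point would, in a real player, be dropped rather than played late.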

14
Q

What is the Modifying Playback Point?

A

The playback point determines how long the receiver waits before playing incoming RTP packets. If the playback point is chosen too late, an unnecessarily high delay occurs. If it’s chosen too early, a lot of packets arrive too late and are lost.

15
Q

Discuss TCP

A

TCP is a reliable, connection-oriented transport protocol. The connections are full duplex, but only unicast is possible. TCP also provides ordering. A downside is the setup cost and slower transfer rates for new connections.

16
Q

What are empty TCP segments good for?

A

Setting up a connection: exchanging the initial sequence numbers and acknowledgement numbers
Tearing down a connection when there’s nothing left to send
Window advertisement

17
Q

Discuss the TCP three-way handshake

A

To establish a connection, the initiator sends SYN(SEQ=x). If the receiver is willing to connect, it responds with SYN(SEQ=y, ACK=x+1). The initiator then responds with an acknowledgement (ACK=y+1) and the connection is established.
To disconnect, one of the two sides transmits a FIN segment. After the FIN segment is acknowledged, the connection is closed. However, the FIN segment can get lost; therefore, a timer is started when the FIN segment is transmitted. If the timer runs out before an acknowledgement is received, the connection is closed anyway.
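A minimal sketch of the establishment exchange in Python; the Segment type and the initial sequence numbers x and y are illustrative only:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Segment:
        seq: int
        ack: Optional[int] = None
        syn: bool = False
        fin: bool = False

    def three_way_handshake(x: int, y: int) -> list:
        """The three segments that establish a TCP connection."""
        return [
            Segment(seq=x, syn=True),             # client -> server: SYN(SEQ=x)
            Segment(seq=y, ack=x + 1, syn=True),  # server -> client: SYN(SEQ=y, ACK=x+1)
            Segment(seq=x + 1, ack=y + 1),        # client -> server: ACK(y+1), connection open
        ]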

18
Q

Discuss the TCP sliding window

A

The receiver acknowledges the segments it received and advertises the remaining space in its receive window. When the receive window is 0, no transmission can occur. There are 2 exceptions:
Urgent traffic
Window probe: a window probe is necessary if a window update got lost. The window probe is an empty TCP segment (= window advertisement).

19
Q

What is the tinygram syndrome?

A

The tinygram syndrome represents the worst-case overhead of sending one character over a 100% reliable TCP connection. The transmission of the character takes 1 byte for the character itself, 20 bytes for the TCP header and 20 bytes for the IP header. The transmission needs to be acknowledged, requiring another 20 bytes of TCP header and 20 bytes of IP header. In the absolute worst case, a window update also needs to be sent, requiring once again 20 bytes of TCP header and 20 bytes of IP header (this window update is, however, very unlikely). In total, it takes 121 bytes to transmit one character.
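The worked arithmetic behind the 121-byte figure:

    TCP_HEADER = 20  # bytes
    IP_HEADER = 20   # bytes

    data_segment  = 1 + TCP_HEADER + IP_HEADER  # the character plus headers      = 41
    ack_segment   = TCP_HEADER + IP_HEADER      # empty acknowledgement           = 40
    window_update = TCP_HEADER + IP_HEADER      # worst case only, usually absent = 40

    assert data_segment + ack_segment + window_update == 121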

20
Q

What’s a solution to the tinygram syndrome?

A

Delayed acknowledgements: until a timeout, the receiver waits for data going in the other direction; if there is such data, the acknowledgement is piggybacked on it.
Nagle’s algorithm maximizes the used bandwidth. If the window size and the available data are at least one MSS, the data is sent immediately. Otherwise, while there is unacknowledged data in the pipe, new data is buffered until an acknowledgement is received.
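A minimal sketch of the two rules in Python; the function names, parameters and the 200 ms delayed-ACK timeout are illustrative, not a real TCP stack API:

    def nagle_should_send(buffered_bytes, window, mss, unacked_data_in_flight):
        """Nagle's rule: send full segments at once, otherwise hold small data back."""
        if buffered_bytes >= mss and window >= mss:
            return True   # a full segment fits: transmit immediately
        if not unacked_data_in_flight:
            return True   # the pipe is empty: a small segment is acceptable
        return False      # buffer until the outstanding data has been acknowledged

    def delayed_ack_should_wait(time_since_data_arrived, ack_timeout=0.2):
        """Delayed ACK: hold the ACK briefly, hoping to piggyback it on outgoing data."""
        return time_since_data_arrived < ack_timeout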

21
Q

What’s the silly window syndrome?

A

This syndrome occurs when the sender has a lot of data, but the receiving application reads only one byte at a time. At some point, the receiver window will be full. Once a byte is read, a window advertisement is sent (20 + 20 bytes of headers). The sender can then send one byte to the receiver, making the receiver window full again.
Conclusion: for every byte read by the receiver a window advertisement has to be sent, resulting in a lot of overhead.

22
Q

What’s a solution to the silly window syndrome?

A

Clark’s algorithm delays window updates until the receiver window can accept a full MSS of data or until the buffer is half empty. This is complementary to Nagle’s algorithm.
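A minimal sketch of the rule on the receiver side; the function name and parameters are illustrative:

    def should_advertise_window(free_space, mss, buffer_size):
        """Clark's rule: only advertise new window space once it is worth filling."""
        return free_space >= mss or free_space >= buffer_size // 2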

23
Q

Discuss RTO

A

The Retransmission Time Out in TCP determines how long to wait for an ACK before retransmitting a segment. If the RTO is too long, unnecessary pauses occur. If it’s too short, segments that are still in the network are retransmitted, wasting a lot of bandwidth. Therefore, TCP dynamically adapts the RTO.

24
Q

How can the round trip time in TCP be estimated?

A

The best estimate for the current round-trip time is the Smoothed Round-Trip Time: SRTT = alpha * SRTT + (1 - alpha) * RTT
However, when the load reaches capacity, the variability of the delays increases a lot. Therefore, the Round-Trip Time Variance is also tracked: RTTVAR = beta * RTTVAR + (1 - beta) * |SRTT - RTT|
The retransmission timeout is then: RTO = SRTT + 4 * RTTVAR
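A minimal sketch of one update step in Python, with typical smoothing factors (alpha = 7/8, beta = 3/4) used as illustrative constants:

    ALPHA = 7 / 8  # weight of the old SRTT
    BETA = 3 / 4   # weight of the old RTTVAR

    def update_rto(srtt, rttvar, rtt_sample):
        """Update the smoothed RTT, its variance and the resulting retransmission timeout."""
        rttvar = BETA * rttvar + (1 - BETA) * abs(srtt - rtt_sample)
        srtt = ALPHA * srtt + (1 - ALPHA) * rtt_sample
        rto = srtt + 4 * rttvar
        return srtt, rttvar, rto

    # Example: a 100 ms estimate that suddenly sees a 300 ms sample.
    srtt, rttvar, rto = update_rto(0.100, 0.010, 0.300)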

25
Q

What is the persistence timer?

A

The persistence timer is used to determine when to send a window probe. When a window size of 0 is reported, the timer starts running. If a non-zero window update arrives, the timer is cancelled. When the timer runs out, a window probe is sent.

26
Q

What’s the difference between the TCP sliding and congestion window?

A

The sliding window is responsible for flow control: it makes sure that the hosts aren’t overwhelmed.
The congestion window is responsible for congestion control: it makes sure that the network doesn’t get overwhelmed. The congestion window determines how many bytes a sender may have in transit, while the sliding window determines how many bytes a sender can send or a receiver can accept.
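A minimal sketch of how the two limits combine at the sender; the names are illustrative:

    def usable_window(cwnd: int, rwnd: int, bytes_in_flight: int) -> int:
        """Bytes the sender may still transmit: bounded by both windows."""
        return max(0, min(cwnd, rwnd) - bytes_in_flight)

    # The receiver's advertised window (rwnd, flow control) protects the host;
    # the congestion window (cwnd, congestion control) protects the network.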

27
Q

Is packet loss a good signal for congestion?

A

For wired routers it is, but for wireless links it isn’t, since they drop packets unpredictably. If we use packet loss as a signal, we need a good retransmission timer (RTO = SRTT + 4 * RTTVAR).

28
Q

Why is the timing of packet transmissions important for congestion control?

A

The timing of segment transmissions must match the rate at which they can cross the slowest link; otherwise bursty traffic periodically overloads that link. This rate is discovered by sending a small burst of traffic to the receiver: the ACKs come back at the slow link’s rate, so we inject new traffic into the network only as fast as ACKs are received (ACK clocking). With this in place, bursts don’t cause congestion.

29
Q

Discuss slow start and fast recovery

A

AIMD is too slow for high-capacity connections. Slow start increases the congestion window exponentially until a certain threshold is reached; initially this threshold is the size of the flow control window. When packet loss occurs, the threshold is set to half the size of the congestion window and TCP switches to AIMD. After a loss we want to resume AIMD at the new threshold instead of going through another slow start. This is done with fast recovery: by counting duplicate ACKs, we can estimate the number of packets still in the network, and once the window reaches the threshold, TCP continues with AIMD.
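A minimal sketch of the window updates in Python, in the style of TCP Reno; the constants and function names are illustrative:

    MSS = 1460  # bytes per segment (typical value, for illustration)

    def on_ack(cwnd, ssthresh):
        """Window growth: exponential below the threshold, additive (AIMD) above it."""
        if cwnd < ssthresh:
            return cwnd + MSS            # slow start: +1 MSS per ACK, doubling per RTT
        return cwnd + MSS * MSS // cwnd  # congestion avoidance: roughly +1 MSS per RTT

    def on_loss(cwnd, triple_duplicate_ack):
        """Loss handling: halve the threshold; fast recovery avoids a new slow start."""
        ssthresh = max(2 * MSS, cwnd // 2)
        if triple_duplicate_ack:
            return ssthresh, ssthresh    # fast recovery: resume AIMD at the new threshold
        return ssthresh, MSS             # timeout: fall back to slow start from one segment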

30
Q

Discuss selective acknowledgements

A

Selective acknowledgements contain the byte ranges received above the cumulative acknowledgement. They are implemented using TCP header options (for backward compatibility). With this information, the sender can selectively retransmit only the segments that are actually missing.