S10. How are bit errors detected in link vs network vs transport layer protocols? Flashcards
What are bit errors?
Bit errors occur when bits in a data stream transmitted over a communication channel are altered by noise, interference, distortion, or bit synchronization errors
Why is bit error detection important?
Bit error detection is crucial for ensuring data integrity during transmission.
How are bit errors detected at the link layer?
At the link layer, error detection is handled by frame-level error checking mechanisms like Cyclic Redundancy Check (CRC).
How does CRC work?
When a frame is transmitted, the sender calculates a CRC value (a 32-bit number in Ethernet) over the frame's contents and places it in the frame's trailer. On receipt, the receiver recalculates the CRC over the frame and compares it to the transmitted value. If the two match, the frame is considered error-free; otherwise the frame is discarded.
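A minimal sketch of this exchange, using Python's zlib.crc32 (the same CRC-32 polynomial Ethernet uses); the frame layout here is simplified, not a real Ethernet frame:

```python
import zlib

def attach_crc(content: bytes) -> bytes:
    # Sender: compute CRC-32 over the frame content and append it as a 4-byte trailer
    crc = zlib.crc32(content)
    return content + crc.to_bytes(4, "big")

def check_crc(frame: bytes) -> bool:
    # Receiver: recompute CRC-32 over the content and compare with the trailer
    content, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(content) == int.from_bytes(trailer, "big")

frame = attach_crc(b"hello, link layer")
assert check_crc(frame)                              # intact frame passes

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]     # flip a single bit
assert not check_crc(corrupted)                      # mismatch detected -> frame discarded
```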
What is CRC?
Cyclic Redundancy Check, a bit-error detection mechanism used by link-layer protocols like Ethernet and Wi-Fi
What happens when a bit error is detected at the link layer?
The frame is discarded. Some link technologies (e.g., Wi-Fi) retransmit corrupted frames at the link layer, while others (e.g., Ethernet) leave recovery to higher layers
How are bit errors detected at the network layer?
IPv4 includes a header checksum field. The sender calculates the checksum over the packet's header only, not its payload. Each router along the path and the final destination verify the checksum (and each router recomputes it, since decrementing the TTL changes the header). If the checksum does not match, the packet is considered corrupt and is typically discarded. IPv6 has no header checksum; it relies on link-layer and transport-layer checks to detect errors, which reduces per-hop overhead and improves forwarding performance because routers no longer need to verify and recompute a checksum for every packet.
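A rough illustration of the IPv4 header checksum, i.e. the 16-bit one's complement sum from RFC 1071; the header bytes and addresses below are made-up example values:

```python
def internet_checksum(data: bytes) -> int:
    # 16-bit one's complement sum over all 16-bit words (RFC 1071)
    if len(data) % 2:
        data += b"\x00"                              # pad to an even length
    total = 0
    for i in range(0, len(data), 2):
        total += int.from_bytes(data[i:i + 2], "big")
        total = (total & 0xFFFF) + (total >> 16)     # fold carries back in
    return ~total & 0xFFFF

# Example 20-byte IPv4 header with its checksum field (bytes 10-11) zeroed
header = bytearray.fromhex("4500003c1c4640004006" + "0000" + "ac100a63ac100a0c")
header[10:12] = internet_checksum(bytes(header)).to_bytes(2, "big")

# A router re-verifying the header: summing it including the checksum yields 0
assert internet_checksum(bytes(header)) == 0
```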
How does bit error detection differ in IPv4 vs IPv6?
IPv4 includes a header checksum field that is verified by each router along the packet's path and at the final destination. IPv6 does not include a header checksum and instead relies on link-layer and transport-layer protocols to detect errors
How does IPv6 improve on IPv4 in terms of bit error detection?
It reduces per-hop overhead and improves forwarding performance, since routers and other network devices no longer need to verify and recompute a header checksum for each packet.
How are bit errors detected at the transport layer?
TCP and UDP use checksums to verify data integrity. The checksum covers the transport header, the payload, and a pseudo-header containing the IP source and destination addresses; it is calculated by the sender and verified by the receiver. If a mismatch is detected, the segment or datagram is discarded. A discarded TCP segment is never acknowledged, so the sender eventually retransmits it, providing reliable delivery. UDP, on the other hand, performs no retransmission, making it suitable for applications where speed is more critical than reliability.
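A sketch of the transport-layer checksum for UDP, covering a pseudo-header (IP addresses, protocol number, length) plus the header and payload; the ports, addresses, and payload are hypothetical:

```python
import struct

def internet_checksum(data: bytes) -> int:
    # Same 16-bit one's complement sum used by IPv4, TCP, and UDP
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += int.from_bytes(data[i:i + 2], "big")
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    # The pseudo-header ties the checksum to the IP addresses and protocol (17 = UDP)
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(segment))
    return internet_checksum(pseudo + segment)

payload = b"hello"
udp_header = struct.pack("!HHHH", 5000, 53, 8 + len(payload), 0)   # checksum field zeroed
csum = udp_checksum(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]), udp_header + payload)
print(hex(csum))   # the receiver recomputes this; a mismatch means the datagram is dropped
```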
How do TCP and UDP differ in their handling of bit errors?
TCP never acknowledges corrupted segments, so the sender retransmits them, ensuring reliable data delivery. UDP, on the other hand, does not retransmit, making it suitable for applications where speed is more critical than reliability.
What are the benefits of UDP over TCP in terms of handling bit errors?
UDP simply drops corrupt datagrams without triggering retransmission, avoiding retransmission delays and making it suitable for applications (e.g., live audio or video) where speed is more critical than reliability