Lower Layers Part 1 Flashcards
What are the three ideal goals when linking network elements?
To exchange data:
1. In any chosen amount
2. At any chosen speed
3. With zero error rates
Why can’t networks achieve zero error rates and infinite speeds?
Data travels at bounded speeds through networks with finite capacity and experiences non-zero error rates due to real-world constraints.
What metrics are used to characterise a flow of data?
- Bandwidth: Volume of data per unit time (commonly Mbps).
- Latency: Delay in transmission (usually milliseconds).
- Error rate: Errors per gigabit or gigabits per error.
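These three metrics combine into a simple back-of-the-envelope model of how long a delivery takes. Below is a minimal Python sketch; the function name and the 1 GB / 100 Mbps / 20 ms figures are illustrative assumptions, not values from the cards.

```python
# Rough model: delivery time ≈ latency + volume / bandwidth.

def transfer_time(volume_bits: float, bandwidth_bps: float, latency_s: float) -> float:
    """Approximate time to deliver a given volume over a link."""
    return latency_s + volume_bits / bandwidth_bps

# Illustrative example: a 1 GB file over a 100 Mbps link with 20 ms latency.
volume = 8e9           # 1 GB expressed in bits
bandwidth = 100e6      # 100 Mbps
latency = 0.020        # 20 ms

print(f"{transfer_time(volume, bandwidth, latency):.2f} s")  # ~80.02 s, bandwidth-dominated
```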
Why is “speed” not always the best measure of network performance?
Speed alone can be misleading: a car at 60 mph and a lorry at 60 mph travel equally fast, but the lorry carries far more per trip.
What matters more is bandwidth (how much data the link can carry) and how much can be transmitted without loss.
How has data transmission capacity evolved over time?
Commodity server network links have gone from 10 Mbps to 40 Gbps in about 30 years.
This represents a 4000x increase in bandwidth capacity.
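A quick arithmetic check of that factor (only the 10 Mbps and 40 Gbps figures come from the card above):

```python
old_link_bps = 10e6   # 10 Mbps commodity server link ~30 years ago
new_link_bps = 40e9   # 40 Gbps today
print(new_link_bps / old_link_bps)  # 4000.0 -> a 4000x increase
```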
What is latency in networking?
Latency is the time it takes for a bit, or group of bits, to travel through the network.
It includes delays caused by the speed of light, clocking the packet at each end, and delays at intermediate stages due to packet size and bitrate.
Why does the speed of light impose a lower bound on latency?
The speed of light sets a fixed minimum time for data to cover a given distance, and over long distances this propagation delay dominates all the other delays.
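A small sketch of that lower bound, assuming signals in optical fibre travel at roughly two-thirds of c and an illustrative ~8,600 km London-to-California path (both figures are assumptions, not from the slides):

```python
C = 299_792_458          # speed of light in a vacuum, m/s
FIBRE_SPEED = 2 * C / 3  # approximate signal speed in fibre (assumption)

def propagation_delay_ms(distance_km: float) -> float:
    """Minimum one-way delay imposed by the medium alone."""
    return distance_km * 1_000 / FIBRE_SPEED * 1_000

print(f"{propagation_delay_ms(8_600):.0f} ms one way")  # ~43 ms before any other delay
```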
What is Bit Error Rate (BER), and what causes it?
Bit Error Rate (BER) measures the proportion of transmitted bits that arrive corrupted. Causes include:
1. Data loss due to interference.
2. Cosmic rays, impulse noise, cable, and connector issues.
How does CPU power affect error correction in 2024?
CPU power is cheap, so adding more or longer error detection and correction is rarely costly.
This enables better error handling without significant overhead.
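As a concrete illustration of how cheap detection is in software today, here is a minimal sketch using Python's standard-library CRC-32 (the card does not name a specific code; CRC-32 is just one example):

```python
import zlib

payload = b"example packet payload" * 1_000

# Sender attaches a checksum to the data.
checksum = zlib.crc32(payload)

# Receiver recomputes it; any mismatch reveals corruption in transit.
corrupted = bytearray(payload)
corrupted[10] ^= 0x01  # flip a single bit to simulate a transmission error
detected = zlib.crc32(bytes(corrupted)) != checksum
print("corruption detected" if detected else "payload accepted")
```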
What prediction did Ian make in the slides about memory errors in the next few years?
The prediction is that memory errors in CPU caches will become a significant problem as memory ages and as systems process increasingly large data volumes, putting more intense workloads on those caches.
What units are currently used to measure volume, latency, and error rates on long-haul networks?
- Volume: Gigabits or terabits per second.
- Latency: Measured in milliseconds.
- Undetected error rates: 1 bit per terabyte.
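To put "1 bit per terabyte" in perspective, a small sketch (the 100 Gbps line rate is an assumed example, not a figure from the card):

```python
TERABYTE_BITS = 8e12     # 1 TB expressed in bits
line_rate_bps = 100e9    # assumed 100 Gbps long-haul link

# At full load, roughly one undetected errored bit slips through this often:
seconds_per_undetected_error = TERABYTE_BITS / line_rate_bps
print(f"~1 undetected error every {seconds_per_undetected_error:.0f} s")  # ~80 s
```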
Why has latency become a significant issue despite bandwidth improvements?
While bandwidth has increased by 3–5 orders of magnitude (10^3–10^5), latency has not improved much in the last 30 years, making it a major bottleneck today.
What trade-offs do different types of data require? (voice, file transfers and streaming)
- Voice: Tolerates low bandwidth and high errors but cannot tolerate high latency.
- File Transfers (e.g., OS images): Prioritise reliability above all else.
- Streaming and recordings: Tolerate some buffering, unlike live content.
What are the latency and error-rate requirements of voice, file transfers, and streaming?
- Voice and live sport require low latency but tolerate higher error rates.
- Streaming and recordings tolerate latency but require low error rates.
- File transfers and OS images prioritise reliability (low errors) over latency.
Why is latency due to the speed of light significant for long distances?
Because even with infinite bandwidth, latency caused by the speed of light is unavoidable and becomes dominant over long distances, like from London to California.
What is circuit switching, and how did it work in old telephone systems?
Circuit switching connected physical wires so that one microphone was continuously linked to a speaker at the other end using amplifiers and multiplexors.
What are the problems with circuit switching?
- Inefficient: It ties up a duplex circuit even if one or both parties are silent.
- Complexity: Multiplexing was very complicated and expensive, especially with 1950s technology.
- Reliability: Treating the wire as a radio with multiple carriers (for increased efficiency, FDM) was difficult to do reliably.
What is packet switching, and how does it work?
Packet switching divides data into small units called “packets,” adds identifying information, and switches packets over the network to their destination.
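A minimal sketch of that idea (the field names and 1500-byte packet size are illustrative choices, not a real protocol format):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # identifying information: who sent it
    dst: str        # where the network should deliver it
    seq: int        # position in the original stream, used for reassembly
    payload: bytes  # the chunk of user data carried by this packet

def packetise(data: bytes, src: str, dst: str, size: int = 1500) -> list[Packet]:
    """Split a byte stream into packets ready to be switched independently."""
    return [Packet(src, dst, i, data[off:off + size])
            for i, off in enumerate(range(0, len(data), size))]

packets = packetise(b"x" * 4_000, src="hostA", dst="hostB")
print(len(packets), [len(p.payload) for p in packets])  # 3 [1500, 1500, 1000]
```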
What happens when no data is being sent in packet switching?
When no data is being sent, no (or at least few) resources are consumed.
What are the advantages of packet switching over circuit switching?
- Multiplexing happens in the time domain instead of the frequency domain.
- Resources are not wasted when no data is sent.
- You achieve a statistical gain on bandwidth, because bursty users can efficiently share the same capacity (see the sketch after this list).
- Data can be re-routed around failed switches, improving resilience.
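A toy simulation of the statistical gain mentioned above (the 50 sources, 10% activity, and slot counts are all assumptions): if each bursty source only transmits occasionally, the shared link rarely needs anywhere near the sum of the peak rates.

```python
import random

random.seed(1)
N_SOURCES, ACTIVE_PROB, PEAK_RATE = 50, 0.10, 1.0  # per-source peak, arbitrary units

# Sample how many sources are transmitting in each of 10,000 time slots.
demand = [sum(random.random() < ACTIVE_PROB for _ in range(N_SOURCES)) * PEAK_RATE
          for _ in range(10_000)]

average = sum(demand) / len(demand)            # ~5 units on average
worst_seen = max(demand)                       # well below 50 in practice
reserved_if_circuits = N_SOURCES * PEAK_RATE   # 50 units if every source got a circuit

print(f"average {average:.1f}, worst seen {worst_seen:.0f}, "
      f"circuit reservation {reserved_if_circuits:.0f}")
```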
What is the concept of Time-Division Multiplexing (TDM) for data streams?
TDM buffers multiple lower-rate data streams (e.g., two 8 kbps streams) and interleaves them onto a single higher-rate line (e.g., 16 kbps).
The line alternates between the streams, which adds latency while each stream waits for its turn and for its bits to be buffered.
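A toy sketch of TDM interleaving, assuming two equal-rate input streams sharing one line that runs twice as fast (the labels are illustrative placeholders for buffered bits or frames):

```python
from itertools import chain

stream_a = ["A0", "A1", "A2", "A3"]  # buffered data from the first 8 kbps source
stream_b = ["B0", "B1", "B2", "B3"]  # buffered data from the second 8 kbps source

# The 16 kbps line alternates between the two sources, slot by slot.
line = list(chain.from_iterable(zip(stream_a, stream_b)))
print(line)  # ['A0', 'B0', 'A1', 'B1', 'A2', 'B2', 'A3', 'B3']
```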
Does the Time-Division Multiplexing (TDM) approach for data streaming prioritise efficiency or latency?
It prioritises efficiency over latency.
What is the alternative to Time-Division Multiplexing (TDM), and which approach is usually preferred?
The alternative is Frequency-Division Multiplexing (FDM), where data streams are sent simultaneously on different carrier frequencies.
The time domain (TDM) is usually preferred because it is simpler to implement with modern digital electronics and reduces complexity.
Is frequency domain multiplexing faster than time domain?
No, FDM does not make data travel any faster; it simply shares the link in the frequency domain rather than the time domain.