Ch 14: QoS Flashcards

1
Q

Which of the following are the leading causes of quality of service issues? (Choose all that apply.)

  1. Bad hardware
  2. Lack of bandwidth
  3. Latency and jitter
  4. Copper cables
  5. Packet loss
A

2, 3, and 5. The leading causes of quality of service issues are lack of bandwidth, latency and jitter, and packet loss.

2
Q

Network latency can be broken down into which of the following types? (Choose all that apply.)

  1. Propagation delay (fixed)
  2. Time delay (variable)
  3. Serialization delay (fixed)
  4. Processing delay (fixed)
  5. Packet delay (fixed)
  6. Delay variation (variable)
A

1, 3, 4, and 6.

Network latency can be broken down into propagation delay, serialization delay, processing delay, and delay variation.

3
Q

Which of the following is not a QoS implementation model?

  1. IntServ
  2. Expedited forwarding
  3. Best effort
  4. DiffServ
A

2.

Best effort, IntServ, and DiffServ are the three QoS implementation models.

4
Q

Which of the following is the QoS implementation model that requires a signaling protocol?

  1. IntServ
  2. Best Effort
  3. DiffServ
  4. RSVP
A

1.

IntServ uses Resource Reservation Protocol (RSVP) to reserve resources throughout a network for a specific application and to provide Call Admission Control (CAC) to guarantee that no other IP traffic can use the reserved bandwidth.

5
Q

Which of the following is the most popular QoS implementation model?

  1. IntServ
  2. Best effort
  3. DiffServ
  4. RSVP
A

3.

DiffServ is the most popular and most widely deployed QoS model. It was designed to address the limitations of the best-effort and IntServ models.

6
Q

T/F: Traffic classification should always be performed in the core of the network.

A

False.

Packet classification should take place at the network edge, as close to the source of the traffic as possible, in an effort to provide an end-to-end QoS experience.

7
Q

The 16-bit TCI field is composed of which fields? (Choose three.)

  1. Priority Code Point (PCP)
  2. Canonical Format Identifier (CFI)
  3. User Priority (PRI)
  4. Drop Eligible Indicator (DEI)
  5. VLAN Identifier (VLAN ID)
A

1, 4, and 5.

The TCI (Tag Control Information) field is a 2-byte field composed of the 3-bit Priority Code Point (PCP) field (formerly PRI), the 1-bit Drop Eligible Indicator (DEI) field (formerly CFI), and the 12-bit VLAN Identifier (VLAN ID) field. This field is part of 802.1Q.

IEEE 802.1Q, often referred to as Dot1q, is the networking standard that supports virtual LANs (VLANs) on an IEEE 802.3 Ethernet network. The standard defines a system of VLAN tagging for Ethernet frames and the accompanying procedures to be used by bridges and switches in handling such frames. The standard also contains provisions for a quality-of-service prioritization scheme commonly known as IEEE 802.1p and defines the Generic Attribute Registration Protocol.

Portions of the network which are VLAN-aware (i.e., IEEE 802.1Q conformant) can include VLAN tags. When a frame enters the VLAN-aware portion of the network, a tag is added to represent the VLAN membership. Each frame must be distinguishable as being within exactly one VLAN. A frame in the VLAN-aware portion of the network that does not contain a VLAN tag is assumed to be flowing on the native VLAN.

8
Q

T/F: The one byte DiffServ (DS or Differentiated Services) field contains a 6-bit Differentiated Services Code Point (DSCP) field that allows for classification of up to 64 values (0 to 63).

A

True.

The DS field replaces the outdated IPv4 ToS field and the IPv6 Traffic Class field. Both were redefined as an 8-bit Differentiated Services (DiffServ) field.

The DiffServ field is composed of a 6-bit Differentiated Services Code Point (DSCP) field that allows for classification of up to 64 values (0 to 63) and a 2-bit Explicit Congestion Notification (ECN) field.

9
Q

Which of the following is not a QoS PHB?

  1. Best Effort (BE)
  2. Class Selector (CS)
  3. Default Forwarding (DF)
  4. Assured Forwarding (AF)
  5. Expedited Forwarding (EF)
A

1.

Four PHBs (Per-Hop Behaviors) have been defined and characterized for general use:

  1. Class Selector (CS) PHB: The first 3 bits of the DSCP field are used as CS bits; the class selector bits make DSCP backward compatible with IP Precedence because IP Precedence uses the same 3 bits to determine class.
  2. Default Forwarding (DF) PHB: Used for best-effort service.
  3. Assured Forwarding (AF) PHB: Used for guaranteed bandwidth service.
  4. Expedited Forwarding (EF) PHB: Used for low-delay service.
10
Q

Which traffic conditioning tool can be used to drop or mark down traffic that goes beyond a desired traffic rate?

  1. Policers
  2. Shapers
  3. WRR
  4. None of the above
A

1.

Policers drop or re-mark incoming or outgoing traffic that goes beyond a desired traffic rate.

11
Q

What does Tc stand for? (Choose two.)

  1. Committed time interval
  2. Token credits
  3. Bc bucket token count
  4. Traffic control
A

1 and 3.

The Committed Time Interval (Tc) is the time interval in milliseconds (ms) over which the Committed Burst (Bc) is sent. Tc can be calculated with the formula Tc = (Bc [bits] / CIR [bps]) × 1000.

For single-rate three-color markers/policers (srTCMs) and two-rate three-color markers/policers (trTCMs), Tc can also refer to the Bc Bucket Token Count (Tc), which is the number of tokens in the Bc bucket.
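The Tc formula above can be checked with a short, illustrative Python helper (the function name and the sample Bc/CIR values are hypothetical, not from the card):

```python
# Worked example of Tc = (Bc [bits] / CIR [bps]) * 1000 from the card above.

def committed_time_interval_ms(bc_bits: int, cir_bps: int) -> float:
    """Time interval (ms) over which the committed burst (Bc) is sent."""
    return bc_bits / cir_bps * 1000

# e.g. an 8,000-bit Bc policed at a 64 kbps CIR (illustrative numbers):
print(committed_time_interval_ms(8_000, 64_000))  # 125.0 ms
```

Rearranging the same relationship gives the card's companion formula, Bc = CIR × (Tc / 1000).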

12
Q

Which of the following are the recommended congestion management mechanisms for modern rich-media networks? (Choose two.)

  1. Class-based weighted fair queuing (CBWFQ)
  2. Priority queuing (PQ)
  3. Weighted RED (WRED)
  4. Low-latency queuing (LLQ)
A

1 and 4.

CBWFQ and LLQ provide real-time, delay-sensitive traffic bandwidth and delay guarantees while not starving other types of traffic.

13
Q

Which of the following is a recommended congestion-avoidance mechanism for modern rich-media networks?

  1. Weighted RED (WRED)
  2. Tail drop
  3. FIFO
  4. RED
A

1.

WRED provides congestion avoidance by selectively dropping packets before the queue buffers are full. Packet drops can be manipulated by traffic weights denoted by IP Precedence or DSCP.

14
Q

What are the top three leading causes of quality issues?

A
  1. Lack of bandwidth: The available bandwidth on the data path from a source to a destination equals the capacity of the lowest-bandwidth link.
  2. Latency and jitter:
    • One-way end-to-end delay, also referred to as network latency, is the time it takes for packets to travel across a network from a source to a destination.
    • Delay variation, also referred to as jitter, is the difference in the latency between packets in a single flow.
  3. Packet loss: Packet loss is usually a result of congestion on an interface. Packet loss can be prevented by implementing one of the following approaches:​​
    • Increase link speed.
    • Implement QoS congestion-avoidance and congestion-management mechanisms.
    • Implement traffic policing to drop low-priority packets and allow high-priority traffic through.
    • Implement traffic shaping to delay packets instead of dropping them since traffic may burst and exceed the capacity of an interface buffer. Traffic shaping is not recommended for real-time traffic because it relies on queuing that can cause jitter.
15
Q

Network latency can be broken down into fixed and variable latency. Give a brief definition of each of the following:

  1. Propagation delay (fixed)
  2. Serialization delay (fixed)
  3. Processing delay (fixed)
  4. Delay variation (variable)
A

Propagation delay is the time it takes for a packet to travel from the source to a destination at the speed of light over a medium such as fiber-optic cables or copper wires.

Serialization delay is the time it takes to place all the bits of a packet onto a link. It is a fixed value that depends on the link speed; the higher the link speed, the lower the delay.

Processing delay is the fixed amount of time it takes for a networking device to take the packet from an input interface and place the packet onto the output queue of the output interface.

Delay variation, also known as jitter, is the difference in the latency between packets in a single flow.

16
Q

What does Cisco consider an acceptable latency for real-time traffic?

A

200 ms.

ITU Recommendation G.114 recommends that, regardless of the application type, a network latency of 400 ms should not be exceeded, and for real-time traffic, network latency should be less than 150 ms.

However, ITU and Cisco have demonstrated that real-time traffic quality does not begin to significantly degrade until network latency exceeds 200 ms.

17
Q

What is the average refractive index of fiber optic cable?

A

The speed of light is 299,792,458 meters per second in a vacuum. The lack of vacuum conditions in a fiber-optic cable or a copper wire slows down the speed of light by a ratio known as the refractive index; the larger the refractive index value, the slower light travels.

The average refractive index value of an optical fiber is about 1.5. The speed of light through a medium v is equal to the speed of light in a vacuum c divided by the refractive index n, or v = c / n. This means the speed of light through a fiber-optic cable with a refractive index of 1.5 is approximately 200,000,000 meters per second (that is, 300,000,000 / 1.5).
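The v = c / n relationship above can be sketched in a few lines of Python; the 3,000 km sample distance and the helper names are assumptions for illustration, not from the card:

```python
# Propagation speed through a medium: v = c / n (from the answer above).
C_VACUUM = 299_792_458  # speed of light in a vacuum, m/s

def propagation_speed(refractive_index: float) -> float:
    """Speed of light through a medium with the given refractive index, m/s."""
    return C_VACUUM / refractive_index

def propagation_delay_ms(distance_m: float, refractive_index: float = 1.5) -> float:
    """One-way propagation delay over a link of the given length, in ms."""
    return distance_m / propagation_speed(refractive_index) * 1000

print(round(propagation_speed(1.5)))              # 199861639 m/s (~200,000,000)
print(round(propagation_delay_ms(3_000_000), 1))  # 15.0 ms over 3,000 km of fiber
```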

18
Q

What is the serialization delay for a 1500-byte packet over a 1Gbps interface?

A

Serialization delay is the time it takes to place all the bits of a packet onto a link. It is a fixed value that depends on the link speed; the higher the link speed, the lower the delay.

The serialization delay, s, is equal to the packet size in bits divided by the line speed in bits per second.

For example, the serialization delay for a 1500-byte packet over a 1 Gbps interface is 12 μs and can be calculated as follows:

s = packet size in bits / line speed in bps
s = (1500 bytes × 8) / 1 Gbps
s = 12,000 bits / 1,000,000,000 bps = 0.000012 s = 0.012 ms = **12 μs**
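The same arithmetic, as a minimal Python helper (the function name is invented for illustration):

```python
# Serialization delay: s = packet size in bits / line speed in bps (see above).

def serialization_delay_us(packet_bytes: int, line_speed_bps: int) -> float:
    """Time to place all bits of a packet onto the link, in microseconds."""
    return packet_bytes * 8 / line_speed_bps * 1_000_000

print(serialization_delay_us(1500, 1_000_000_000))  # 12.0 μs
```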
19
Q

The processing delay depends on all of the following factors, except one. Which is incorrect?

  1. CPU speed (for software-based platforms)
  2. CPU utilization (load)
  3. IP packet switching mode (process switching, software CEF, or hardware CEF)
  4. Bandwidth of the circuit connecting the endpoints
  5. Router architecture (centralized or distributed)
  6. Configured features on both input and output interfaces
A
4.

Bandwidth of the circuit connecting the endpoints is not a factor in processing delay.

Processing delay is the fixed amount of time it takes for a networking device to take the packet from an input interface and place the packet onto the output queue of the output interface. The processing delay depends on factors such as the following:

  1. CPU speed (for software-based platforms)
  2. CPU utilization (load)
  3. IP packet switching mode (process switching, software CEF, or hardware CEF)
  4. Router architecture (centralized or distributed)
  5. Configured features on both input and output interfaces
20
Q

What causes jitter?

A

Jitter is caused by the queuing delay that packets experience during periods of network congestion. Queuing delay depends on the number and sizes of the packets already in the queue, the link speed, and the queuing mechanism. Queuing introduces unequal delays for packets of the same flow, thus producing jitter.

21
Q

What is a de-jitter buffer?

A

Voice and video endpoints typically come equipped with de-jitter buffers that can help smooth out changes in packet arrival times due to jitter. A de-jitter buffer is often dynamic and can adjust for approximately 30 ms changes in arrival times of packets. If a packet is not received within the 30 ms window allowed for by the de-jitter buffer, the packet is dropped, and this affects the overall voice or video quality.

22
Q

What is LLQ and what is it useful for?

A

To prevent jitter for high-priority real-time traffic, it is recommended to use a queuing mechanism such as low-latency queuing (LLQ), which allows matching packets to be forwarded ahead of any other lower-priority traffic during periods of network congestion.

23
Q

What is the usual cause of packet loss? What are some solutions to packet loss?

A

Packet loss is usually a result of congestion on an interface. Packet loss can be prevented by implementing one of the following approaches:

  1. Increase link speed.
  2. Implement QoS congestion-avoidance and congestion-management mechanisms.
  3. Implement traffic policing to drop low-priority packets and allow high-priority traffic through.
  4. Implement traffic shaping to delay packets instead of dropping them since traffic may burst and exceed the capacity of an interface buffer. Traffic shaping is not recommended for real-time traffic because it relies on queuing that can cause jitter.
24
Q

Give a brief definition of the three different QoS implementation models.

  1. Best effort
  2. Integrated Services (IntServ)
  3. Differentiated Services (DiffServ)
A

There are three different QoS implementation models:

Best effort: QoS is not enabled for this model. It is used for traffic that does not require any special treatment.

Integrated Services (IntServ): Applications signal the network to make a bandwidth reservation and to indicate that they require special QoS treatment.

Differentiated Services (DiffServ): The network identifies classes that require special QoS treatment.

25
Q

In this model, applications signal their requirements to the network to reserve the end-to-end resources (such as bandwidth) they require to provide an acceptable user experience.

Which QoS model is this?

A

The IntServ model was created for real-time applications such as voice and video that require bandwidth, delay, and packet-loss guarantees to ensure both predictable and guaranteed service levels.

26
Q

What is RSVP in QoS?

A

IntServ uses Resource Reservation Protocol (RSVP) to reserve resources throughout a network for a specific application and to provide call admission control (CAC) to guarantee that no other IP traffic can use the reserved bandwidth. The bandwidth reserved by an application that is not being used is wasted.

27
Q

What is the biggest drawback/limitation with the IntServ model of QoS?

A

The biggest drawback of IntServ is that it cannot scale well on large networks that might have thousands or millions of flows due to the large number of RSVP flows that would need to be maintained.

To be able to provide end-to-end QoS, all nodes, including the endpoints running the applications, need to support, build, and maintain RSVP path state for every single flow.

28
Q

With this model, there is no need for a signaling protocol, and there is no RSVP flow state to maintain on every single node, which makes it highly scalable; QoS characteristics (such as bandwidth and delay) are managed on a hop-by-hop basis with QoS policies that are defined independently at each device in the network.

Which QoS model is this?

A

DiffServ is not considered an end-to-end QoS solution because end-to-end QoS guarantees cannot be enforced.

DiffServ divides IP traffic into classes and marks it based on business requirements so that each of the classes can be assigned a different level of service. As IP traffic traverses a network, each of the network devices identifies the packet class by its marking and services the packets according to this class.

Many levels of service can be chosen with DiffServ. For example, IP phone voice traffic is very sensitive to latency and jitter, so it should always be given preferential treatment over all other application traffic. Email, on the other hand, can withstand a great deal of delay and could be given best-effort service, and non-business, non-critical scavenger traffic (such as from YouTube) can either be heavily rate limited or blocked entirely. The DiffServ model is the most popular and most widely deployed QoS model and is covered in detail in this chapter.

29
Q

What is Packet Classification in QoS?

A

Packet classification is a QoS mechanism responsible for distinguishing between different traffic streams. It uses traffic descriptors to categorize an IP packet within a specific class.

30
Q

T/F: Packet classification should take place at the network edge, as close to the destination of the traffic as possible.

A

False.

Packet classification should take place at the network edge, as close to the source of the traffic as possible.

Once an IP packet is classified, packets can then be marked/re-marked, queued, policed, shaped, or any combination of these and other actions.

31
Q

What is NBAR2?

A

NBAR2 classification is considered Layer 7 classification.

NBAR2 is a deep packet inspection engine that can classify and identify a wide variety of protocols and applications using Layer 3 to Layer 7 data, including difficult-to-classify applications that dynamically assign Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) port numbers.

NBAR2 can recognize more than 1000 applications, and monthly protocol packs are provided for recognition of new and emerging applications, without requiring an IOS upgrade or router reload.

32
Q

NBAR2 has two modes of operation. Give a brief description of them.

  1. Protocol Discovery
  2. Modular QoS CLI (MQC)
A

NBAR2 has two modes of operation:

  1. Protocol Discovery: Protocol Discovery enables NBAR2 to discover and get real-time statistics on applications currently running in the network. These statistics from the Protocol Discovery mode can be used to define QoS classes and policies using MQC configuration.
  2. Modular QoS CLI (MQC): Using MQC, network traffic matching a specific network protocol such as Cisco Webex can be placed into one traffic class, while traffic that matches a different network protocol such as YouTube can be placed into another traffic class. After traffic has been classified in this way, different QoS policies can be applied to the different classes of traffic.
33
Q

What is Packet Marking?

A

Packet marking is a QoS mechanism that colors a packet by changing a field within a packet or a frame header with a traffic descriptor so it is distinguished from other packets during the application of other QoS mechanisms (such as re-marking, policing, queuing, or congestion avoidance).

The following traffic descriptors are used for marking traffic:

  1. Internal: QoS groups
  2. Layer 2: 802.1Q/p Class of Service (CoS) bits
  3. Layer 2.5: MPLS Experimental (EXP) bits
  4. Layer 3: Differentiated Services Code Points (DSCP) and IP Precedence (IPP)

For enterprise networks, the most commonly used traffic descriptors for marking traffic include the Layer 2 and Layer 3 traffic descriptors mentioned in the previous list.

34
Q

T/F: QoS groups are used to mark packets as they are received and processed internally within the router and are automatically removed when packets egress the router.

A

True.

QoS groups are used to mark packets as they are received and processed internally within the router and are automatically removed when packets egress the router. They are used only in special cases in which traffic descriptors marked or received on an ingress interface would not be visible for packet classification on egress interfaces due to encapsulation or de-encapsulation.

35
Q

Diagram a L2 ethernet frame with 802.1Q marking.

A

The 802.1Q standard is an IEEE specification for implementing VLANs in Layer 2 switched networks.

The 802.1Q specification defines two 2-byte fields, which are inserted within an Ethernet frame following the Source Address field, as illustrated in Figure 14-2.

  1. Tag Protocol Identifier (TPID)
  2. Tag Control Information (TCI)
36
Q

What is the TPID field in a L2 marked frame?

A

The Tag Protocol Identifier (TPID) value is a 16-bit field assigned the value 0x8100 that identifies it as an 802.1Q tagged frame.

37
Q

What is a TCI field?

A

The Tag Control Information (TCI) field is a 16-bit field composed of the following three fields:

  1. Priority Code Point (PCP) field (3 bits)
    • The specifications of the 3-bit PCP field are defined by the IEEE 802.1p specification. This field is used to mark packets as belonging to a specific CoS. The CoS marking allows a Layer 2 Ethernet frame to be marked with eight different levels of priority values.
  2. Drop Eligible Indicator (DEI) field (1 bit)
    • The DEI field is a 1-bit field that can be used independently or in conjunction with PCP to indicate frames that are eligible to be dropped during times of congestion.
  3. VLAN Identifier (VLAN ID) field (12 bits)
    • The VLAN ID field is a 12-bit field that defines the VLAN used by 802.1Q.
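The TCI layout described above lends itself to simple bit arithmetic. This Python sketch (helper names invented for illustration) packs and unpacks the three subfields:

```python
# 16-bit TCI layout from the card: PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits)

def pack_tci(pcp: int, dei: int, vlan_id: int) -> int:
    """Assemble a 16-bit TCI value from its three subfields."""
    assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vlan_id <= 4095
    return (pcp << 13) | (dei << 12) | vlan_id

def unpack_tci(tci: int) -> tuple[int, int, int]:
    """Split a 16-bit TCI value back into (PCP, DEI, VLAN ID)."""
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF

tci = pack_tci(pcp=5, dei=0, vlan_id=100)  # CoS 5 on VLAN 100 (sample values)
print(hex(tci))         # 0xa064
print(unpack_tci(tci))  # (5, 0, 100)
```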
38
Q

What is the PCP field in an ethernet header?

A

It is part of the TCI, Tag Control Information.

The specifications of the 3-bit PCP field are defined by the IEEE 802.1p specification. This field is used to mark packets as belonging to a specific CoS. The CoS marking allows a Layer 2 Ethernet frame to be marked with eight different levels of priority values, 0 to 7, where 0 is the lowest priority and 7 is the highest. Table 14-2 includes the IEEE 802.1p specification standard definition for each CoS.

39
Q

What is a drawback of CoS?

A

One drawback of using CoS markings is that frames lose their CoS markings when traversing a non-802.1Q link or a Layer 3 network.

For this reason, packets should be marked with other higher-layer markings whenever possible so the marking values can be preserved end-to-end. This is typically accomplished by mapping a CoS marking into another marking. For example, the CoS priority levels correspond directly to IPv4’s IP Precedence Type of Service (ToS) values so they can be mapped directly to each other.

40
Q

What is the DEI field?

A

The DEI field is a 1-bit field that can be used independently or in conjunction with PCP to indicate frames that are eligible to be dropped during times of congestion.

The default value for this field is 0, and it indicates that this frame is not drop eligible; it can be set to 1 to indicate that the frame is drop eligible.

41
Q

How many VLANs can be supported by the VLAN ID field of 802.1Q?

A

The VLAN ID field is a 12-bit field that defines the VLAN used by 802.1Q. Since this field is 12 bits, it restricts the number of VLANs supported by 802.1Q to 4096 (of which 4094 are usable, because VLAN IDs 0 and 4095 are reserved), which may not be sufficient for large enterprise or service provider networks.

42
Q

What is an advantage of L3 marking over L2 marking?

A

As a packet travels from its source to its destination, it might traverse non-802.1Q trunked, or non-Ethernet links that do not support the CoS field. Using marking at Layer 3 provides a more persistent marker that is preserved end-to-end.

43
Q

What is a ToS field in an IPv4 header? How many bits is it? How many are used? and for what?

A

The ToS field is an 8-bit field where only the first 3 bits of the ToS field, referred to as IP Precedence (IPP), are used for marking, and the rest of the bits are unused.

IPP values, which range from 0 to 7, allow the traffic to be partitioned into up to six usable classes of service; IPP 6 and 7 are reserved for internal network use.

Figure 14-3 illustrates the ToS/DiffServ field within an IPv4 header.

44
Q

What is the DiffServ field used for?

A

Newer standards have redefined the IPv4 ToS and the IPv6 Traffic Class fields as an 8-bit Differentiated Services (DiffServ) field.

The DiffServ field uses the same 8 bits that were previously used for the IPv4 ToS and the IPv6 Traffic Class fields, and this allows it to be backward compatible with IP Precedence.

The DiffServ field is composed of two parts:

  1. a 6-bit Differentiated Services Code Point (DSCP) field that allows for classification of up to 64 values (0 to 63)
  2. a 2-bit Explicit Congestion Notification (ECN) field.
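The split of the 8-bit DiffServ field into DSCP and ECN can be illustrated with a couple of lines of Python (helper name and sample value invented for the example):

```python
# DiffServ field layout from the card: DSCP (upper 6 bits) | ECN (lower 2 bits)

def split_ds_field(ds_byte: int) -> tuple[int, int]:
    """Split the 8-bit DS byte into (DSCP, ECN)."""
    dscp = ds_byte >> 2    # upper 6 bits: values 0-63
    ecn = ds_byte & 0b11   # lower 2 bits
    return dscp, ecn

# e.g. a DS byte of 0xB8 carries DSCP 46 (EF) with ECN 0:
print(split_ds_field(0xB8))  # (46, 0)
```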
45
Q

T/F: Packets are classified and marked to receive a particular per-hop forwarding behavior (that is, expedited, delayed, or dropped) on network nodes along their path to the destination.

A

True.

Packets are classified and marked to receive a particular per-hop forwarding behavior (that is, expedited, delayed, or dropped) on network nodes along their path to the destination.

46
Q

What is a DiffServ BA?

A

The DiffServ field is used to mark packets according to their classification into DiffServ Behavior Aggregates (BAs).

A DiffServ BA is a collection of packets with the same DiffServ value crossing a link in a particular direction.

47
Q

What is PHB in QoS?

A

Per-hop behavior (PHB) is the externally observable forwarding behavior (forwarding treatment) applied at a DiffServ-compliant node to a collection of packets with the same DiffServ value crossing a link in a particular direction (DiffServ BA).

48
Q

T/F: PHB is expediting, delaying, or dropping a collection of packets by one or multiple QoS mechanisms on a per-hop basis, based on the DSCP value.

A

True.

A DiffServ BA (Behavior Aggregation) could be multiple applications—for example, SSH, Telnet, and SNMP all aggregated together and marked with the same DSCP value. This way, the core of the network performs only simple PHB, based on DiffServ BAs, while the network edge performs classification, marking, policing, and shaping operations. This makes the DiffServ QoS model very scalable.

49
Q

Four PHBs have been defined and characterized for general use. What are they used for?

  1. Class Selector (CS) PHB
  2. Default Forwarding (DF) PHB
  3. Assured Forwarding (AF) PHB
  4. Expedited Forwarding (EF) PHB
A

Four PHBs have been defined and characterized for general use:

  1. Class Selector (CS) PHB: The first 3 bits of the DSCP field are used as CS bits. The CS bits make DSCP backward compatible with IP Precedence because IP Precedence uses the same 3 bits to determine class.
  2. Default Forwarding (DF) PHB: Used for best-effort service.
  3. Assured Forwarding (AF) PHB: Used for guaranteed bandwidth service.
  4. Expedited Forwarding (EF) PHB: Used for low-delay service.
50
Q

T/F: DiffServ made the ToS field obsolete.

A

True.

RFC 2474 made the ToS field obsolete by introducing the DiffServ field, and the Class Selector (CS) PHB was defined to provide backward compatibility for DSCP with IP Precedence. Fun facts:

  • Packets with higher IP Precedence should be forwarded in less time than packets with lower IP Precedence.
  • The last 3 bits of the DSCP (bits 2 to 4), when set to 0, identify a Class Selector PHB, but the Class Selector bits 5 to 7 are the ones where IP Precedence is set. Bits 2 to 4 are ignored by non-DiffServ-compliant devices performing classification based on IP Precedence.
  • There are eight CS classes, ranging from CS0 to CS7, that correspond directly with the eight IP Precedence values.

Figure 14-4 illustrates the CS PHB.
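Because the CS classes reuse the 3 IP Precedence bits (DSCP bits 5 to 7, per the bullets above), each CSx corresponds to the DSCP value x × 8. A minimal sketch, with an invented helper name:

```python
# Class Selector backward compatibility: CSx occupies DSCP bits 5-7,
# so its decimal DSCP value is the IP Precedence value shifted left 3 bits.

def cs_dscp(ip_precedence: int) -> int:
    """DSCP value of the Class Selector PHB for a given IP Precedence (0-7)."""
    assert 0 <= ip_precedence <= 7
    return ip_precedence << 3

print([cs_dscp(p) for p in range(8)])  # [0, 8, 16, 24, 32, 40, 48, 56]
```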

51
Q

What is the DF PHB?

A

Default Forwarding (DF) and Class Selector 0 (CS0) provide best-effort behavior and use the DS value 000000.

Default best-effort forwarding is also applied to packets that cannot be classified by a QoS mechanism such as queueing, shaping, or policing. This usually happens when a QoS policy on the node is incomplete or when DSCP values are outside the ones that have been defined for the CS, AF, and EF PHBs.

Figure 14-5 illustrates the DF PHB (Per-hop Behavior).

52
Q

What is the role of the AF PHB?

A

The AF PHB (Assured Forwarding Per-Hop Behavior) guarantees a minimum amount of bandwidth to an AF class and allows access to extra bandwidth, if available.

Packets requiring AF PHB should be marked with DSCP value aaadd0, where aaa (bits 5 to 7) is the binary value of the AF class, dd (bits 3 and 4) is the drop probability, and bit 2 is unused and always set to 0.

Note: AF uses WRED to drop packets preemptively.

Figure 14-6 illustrates the AF PHB.
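The aaadd0 layout above implies the usual AFxy codepoint arithmetic, DSCP = 8x + 2y (class x in bits 5 to 7, drop precedence y in bits 3 and 4). A small illustrative helper, with an assumed name:

```python
# AF codepoints: DSCP = 8 * class + 2 * drop_precedence (derived from aaadd0).

def af_dscp(af_class: int, drop_precedence: int) -> int:
    """Decimal DSCP value for AF class 1-4 with drop precedence 1-3."""
    assert 1 <= af_class <= 4 and 1 <= drop_precedence <= 3
    return 8 * af_class + 2 * drop_precedence

print(af_dscp(4, 1))  # AF41 -> DSCP 34
print(af_dscp(1, 3))  # AF13 -> DSCP 14
```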

53
Q

T/F: WRED uses the AF Drop Probability value within each class—where 1 is the lowest possible value, and 3 is the highest possible—to determine which packets should be dropped first during periods of congestion.

A

True.

An AF implementation must detect and respond to long-term congestion within each class by dropping packets using a congestion-avoidance algorithm such as weighted random early detection (WRED).

WRED uses the AF Drop Probability value within each class—where 1 is the lowest possible value, and 3 is the highest possible—to determine which packets should be dropped first during periods of congestion.

It should also be able to handle short-term congestion resulting from bursts if each class is placed in a separate queue, using a queueing algorithm such as class-based weighted fair queueing (CBWFQ). The AF specification does not define the use of any particular algorithms to use for queueing and congestions avoidance, but it does specify the requirements and properties of such algorithms.

54
Q

What is the EF PHB?

A

The EF (Expedited Forwarding) PHB can be used to build a low-loss, low-latency, low-jitter, assured bandwidth, end-to-end service. The EF PHB guarantees bandwidth by ensuring a minimum departure rate and provides the lowest possible delay to delay-sensitive applications by implementing low-latency queueing. It also prevents starvation of other applications or classes that are not using the EF PHB by policing EF traffic when congestion occurs.

55
Q

T/F: When you create a new WLAN, its QoS policy defaults to Silver, or best-effort handling.

A

True.

56
Q

What is the difference between a traffic policer and a shaper?

A

Both are traffic-conditioning QoS mechanisms used to classify traffic and enforce other QoS mechanisms such as rate limiting. They classify traffic in an identical manner but differ in their implementation:

  • Policers: Drop or re-mark incoming or outgoing traffic that goes beyond a desired traffic rate.
  • Shapers: Buffer and delay egress traffic rates that momentarily peak above the desired rate until the egress traffic rate drops below the defined traffic rate. If the egress traffic rate is below the desired rate, the traffic is sent immediately.

Figure 14-9 illustrates the difference between traffic policing and shaping. Policers drop or re-mark excess traffic, while shapers buffer and delay excess traffic.

57
Q

Where should policers for incoming traffic be placed? And for outbound traffic?

A

Policers for incoming traffic are most optimally deployed at the edge of the network to keep traffic from wasting valuable bandwidth in the core of the network.

Policers for outbound traffic are most optimally deployed at the edge of the network or core-facing interfaces on network edge devices.

58
Q

What is a downside of policing?

A

A downside of policing is that it causes TCP retransmissions when it drops traffic.

59
Q

T/F: Cisco IOS policers and shapers are based on token bucket algorithms.

A

True.

The following list includes definitions that are used to explain how token bucket algorithms operate:

  • Committed Information Rate (CIR): The policed traffic rate, in bits per second (bps), defined in the traffic contract.
  • Committed Time Interval (Tc): The time interval, in milliseconds (ms), over which the committed burst (Bc) is sent. Tc can be calculated with the formula Tc = (Bc [bits] / CIR [bps]) × 1000.
  • Committed Burst Size (Bc): The maximum size of the CIR token bucket, measured in bytes, and the maximum amount of traffic that can be sent within a Tc. Bc can be calculated with the formula Bc = CIR × (Tc / 1000).
  • Token: A single token represents 1 byte or 8 bits.
  • Token bucket: A bucket that accumulates tokens until a maximum predefined number of tokens is reached (such as the Bc when using a single token bucket); these tokens are added into the bucket at a fixed rate (the CIR). Each packet is checked for conformance to the defined rate and takes tokens from the bucket equal to its packet size; for example, if the packet size is 1500 bytes, it takes 12,000 bits (1500 × 8) from the bucket. If there are not enough tokens in the token bucket to send the packet, the traffic conditioning mechanism can take one of the following actions:
    • Buffer the packets while waiting for enough tokens to accumulate in the token bucket (traffic shaping)
    • Drop the packets (traffic policing)
    • Mark down the packets (traffic policing)
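The definitions above can be sketched as a short Python model. This is an illustrative sketch, not Cisco's implementation: the CIR and Bc values are made up, and one token is treated as 1 bit (the text notes a token can represent 1 byte or 8 bits).

```python
def tc_ms(bc_bits, cir_bps):
    """Committed time interval: Tc = (Bc [bits] / CIR [bps]) x 1000, in ms."""
    return bc_bits / cir_bps * 1000

# Example: a CIR of 128 kbps with an 8,000-bit Bc gives Tc = 62.5 ms.
print(tc_ms(8000, 128000))  # 62.5

class SingleTokenBucket:
    """Single token bucket policer sketch: conform or exceed (drop/mark down)."""
    def __init__(self, cir_bps, bc_bytes):
        self.cir = cir_bps             # token refill rate, bits per second
        self.capacity = bc_bytes * 8   # bucket size in bits (1 token = 1 bit here)
        self.tokens = self.capacity    # bucket starts full
        self.last = 0.0                # time of the previous packet, seconds

    def police(self, now, packet_bytes):
        # Tokens accumulate at the CIR, capped at the bucket size (Bc).
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.cir)
        self.last = now
        needed = packet_bytes * 8      # a packet takes tokens equal to its size
        if needed <= self.tokens:
            self.tokens -= needed
            return "conform"           # transmit
        return "exceed"                # drop or mark down
```

A 1000-byte packet against a 1000-byte Bc conforms once, then exceeds until the bucket refills.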
60
Q

What are the three types of policer algorithms? (think rate and color)

A

There are different policing algorithms, including the following:

  1. Single-rate two-color marker/policer
  2. Single-rate three-color marker/policer (srTCM)
  3. Two-rate three-color marker/policer (trTCM)
61
Q

Figure 14-12 illustrates different actions that the ______________ policer can take.

A

Figure 14-12 illustrates different actions that the single-rate two-color policer can take.

The section above the dotted line on the left side of the figure represents traffic that exceeded the CIR and was marked down. The section above the dotted line on the right side of the figure represents traffic that exceeded the CIR and was dropped.

The earliest policers implemented used a single-rate two-color model based on a single token bucket algorithm. For this type of policer, traffic can either conform to or exceed the CIR. Marking-down or dropping actions can be performed for each of the two states.

62
Q

Figure 14-13 illustrates different actions that a ________________ policer can take.

A

Single-rate three-color policer algorithms are based on RFC 2697. This type of policer uses two token buckets, and traffic can be classified as conforming to, exceeding, or violating the CIR. Marking-down or dropping actions are performed for each of the three states of traffic.

The first token bucket operates much like the single-rate two-color system; the difference is that any tokens left over in the bucket after a time period due to low or no activity, instead of being discarded (overflow), are placed into a second bucket to be used later for temporary bursts that might exceed the CIR. Tokens placed in this second bucket are referred to as the excess burst (Be), and Be is the maximum number of bits that can exceed the Bc burst size.

The exceeding and violating traffic rates vary because they rely on random tokens spilling over from the Bc bucket into the Be bucket. In Figure 14-13, the section right above the straight dotted line on the right side represents traffic that exceeded the CIR and was marked down, and the top section represents traffic that violated the CIR and was dropped.

63
Q

With the two token-bucket mechanism, traffic can be classified in three colors or states. How do these states function?

  1. Conform:
  2. Exceed:
  3. Violate:
A

Conform: Traffic under Bc is classified as conforming and green. Conforming traffic is usually transmitted and can be optionally re-marked.

Exceed: Traffic over Bc but under Be is classified as exceeding and yellow. Exceeding traffic can be dropped or marked down and transmitted.

Violate: Traffic over Be is classified as violating and red. This type of traffic is usually dropped but can be optionally marked down and transmitted.
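The two-bucket, three-color behavior can be sketched in Python along the lines of RFC 2697's color-blind mode. This is a simplified illustration, not the RFC's exact metering procedure: token counts are kept in bytes, and the CIR here is expressed in bytes per second for brevity.

```python
class SrTCM:
    """Single-rate three-color marker sketch (after RFC 2697, color-blind mode)."""
    def __init__(self, cir_Bps, bc, be):
        self.cir = cir_Bps            # fill rate for both buckets, bytes/sec
        self.bc, self.be = bc, be     # bucket capacities, bytes
        self.tc, self.te = bc, be     # both buckets start full
        self.last = 0.0

    def _refill(self, now):
        tokens = (now - self.last) * self.cir
        self.last = now
        # The Bc bucket fills first; overflow spills into the Be bucket.
        spill = max(0.0, self.tc + tokens - self.bc)
        self.tc = min(self.bc, self.tc + tokens)
        self.te = min(self.be, self.te + spill)

    def mark(self, now, size):
        self._refill(now)
        if size <= self.tc:
            self.tc -= size
            return "green"    # conform: transmit
        if size <= self.te:
            self.te -= size
            return "yellow"   # exceed: mark down and transmit
        return "red"          # violate: usually drop
```

With both buckets sized at 1500 bytes, two back-to-back 1500-byte packets come out green then yellow, and a third is red.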

64
Q

The single-rate three-color marker/policer uses the following parameters to meter the traffic stream. What is the significance of these?

  • Committed Information Rate (CIR):
  • Committed Burst Size (Bc):
  • Excess Burst Size (Be):
  • Bc Bucket Token Count (Tc):
  • Be Bucket Token Count (Te):
  • Incoming Packet Length (B):
A

The single-rate three-color marker/policer uses the following parameters to meter the traffic stream:

  • Committed Information Rate (CIR): The policed rate.
  • Committed Burst Size (Bc): The maximum size of the CIR token bucket, measured in bytes. Referred to as Committed Burst Size (CBS) in RFC 2697.
  • Excess Burst Size (Be): The maximum size of the excess token bucket, measured in bytes. Referred to as Excess Burst Size (EBS) in RFC 2697.
  • Bc Bucket Token Count (Tc): The number of tokens in the Bc (committed burst) bucket. Not to be confused with the committed time interval Tc.
  • Be Bucket Token Count (Te): The number of tokens in the Be (excess burst) bucket.
  • Incoming Packet Length (B): The packet length of the incoming packet, in bits.
65
Q

The ___________________ marker/policer is based on RFC 2698 and is similar to the single-rate three-color policer.

The difference is that ____________________ policers rely on excess tokens from the Bc bucket, which introduces a certain level of variability and unpredictability in traffic flows; the _____________________ marker/policers address this issue by using two distinct rates, the CIR and the Peak Information Rate (PIR).

A

The two-rate three-color marker/policer is based on RFC 2698 and is similar to the single-rate three-color policer.

The difference is that single-rate three-color policers rely on excess tokens from the Bc bucket, which introduces a certain level of variability and unpredictability in traffic flows; the two-rate three-color marker/policers address this issue by using two distinct rates, the CIR and the Peak Information Rate (PIR).
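The two-rate scheme can be sketched in the style of RFC 2698's color-blind mode: two independent buckets, one filled at the CIR and one at the PIR, with the PIR bucket checked first. As with the earlier sketch, this is an illustration (bytes-per-second rates, made-up sizes), not the RFC's exact procedure.

```python
class TrTCM:
    """Two-rate three-color marker sketch (after RFC 2698, color-blind mode)."""
    def __init__(self, cir, pir, bc, bpeak):
        self.cir, self.pir = cir, pir   # fill rates, bytes/sec (PIR >= CIR)
        self.bc, self.bp = bc, bpeak    # bucket capacities, bytes
        self.tc, self.tp = bc, bpeak    # both buckets start full
        self.last = 0.0

    def mark(self, now, size):
        dt = now - self.last
        self.last = now
        # Each bucket refills independently at its own rate.
        self.tc = min(self.bc, self.tc + dt * self.cir)
        self.tp = min(self.bp, self.tp + dt * self.pir)
        if size > self.tp:
            return "red"      # above the PIR: violate, drop
        if size > self.tc:
            self.tp -= size
            return "yellow"   # above the CIR but within the PIR: exceed
        self.tc -= size
        self.tp -= size
        return "green"        # within the CIR: conform
```

Because the buckets fill at two distinct rates rather than relying on spillover tokens, bursts up to the PIR are admitted predictably instead of depending on leftover Bc tokens.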

66
Q

Congestion management involves a combination of __________ and _____________.

A

Congestion management involves a combination of queuing and scheduling.

Queuing (also known as buffering) is the temporary storage of excess packets. Queuing is activated when an output interface is experiencing congestion and deactivated when congestion clears.

67
Q

Congestion is detected by the queuing algorithm when a Layer 1 hardware queue present on physical interfaces, known as the ___________ (Tx-ring or TxQ), is full.

A

Congestion is detected by the queuing algorithm when a Layer 1 hardware queue present on physical interfaces, known as the transmit ring (Tx-ring or TxQ), is full.

When the Tx-ring is not full anymore, this indicates that there is no congestion on the interface, and queueing is deactivated. Congestion can occur for one of these two reasons:

  1. The input interface is faster than the output interface.
  2. The output interface is receiving packets from multiple input interfaces.
68
Q

What is FIFO queueing?

A

First-in, first-out queuing (FIFO): FIFO involves a single queue where the first packet to be placed on the output interface queue is the first packet to leave the interface (first come, first served). In FIFO queuing, all traffic belongs to the same class.

69
Q

What is Round-Robin queuing?

A

Round robin: With round robin, queues are serviced in sequence one after the other, and each queue processes one packet only. No queues starve with round robin because every queue gets an opportunity to send one packet every round.

No queue has priority over the others, and if the packet sizes from all queues are about the same, the interface bandwidth is shared equally across the round-robin queues. A limitation of round robin is that it does not include a mechanism to prioritize traffic.

70
Q

What is WRR queuing?

A

Weighted round robin (WRR): WRR was developed to provide prioritization capabilities for round robin. It allows a weight to be assigned to each queue, and based on that weight, each queue effectively receives a portion of the interface bandwidth that is not necessarily equal to the other queues’ portions.
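The weighting idea can be sketched in a few lines of Python. The queue names and weights are illustrative; the point is that each pass services up to `weight` packets per queue, so bandwidth is shared roughly in proportion to the weights when packet sizes are similar.

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """WRR sketch: drain the queues, sending up to `weight` packets from
    each queue per round. Returns the overall transmit order."""
    out = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    out.append(q.popleft())
    return out

voice = deque(["v1", "v2", "v3"])
data = deque(["d1", "d2", "d3"])
order = weighted_round_robin([voice, data], weights=[2, 1])
# Each round sends up to two voice packets for every data packet.
print(order)  # ['v1', 'v2', 'd1', 'v3', 'd2', 'd3']
```

Note that no queue starves: even the weight-1 queue is guaranteed a turn every round.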

71
Q

What is CQ?

A

Custom queuing (CQ): CQ is a Cisco implementation of WRR that involves a set of 16 queues with a round-robin scheduler and FIFO queueing within each queue. Each queue can be customized with a portion of the link bandwidth for each selected traffic type.

If a particular type of traffic is not using the bandwidth reserved for it, other traffic types may use the unused bandwidth. CQ causes long delays and also suffers from all the same problems as FIFO within each of the 16 queues that it uses for traffic classification.

72
Q

What is PQ in QoS?

A

Priority queuing (PQ): With PQ, a set of four queues (high, medium, normal, and low) are served in strict-priority order, with FIFO queueing within each queue.

The high-priority queue is always serviced first, and lower-priority queues are serviced only when all higher-priority queues are empty.

For example, the medium queue is serviced only when the high queue is empty; the normal queue is serviced only when the high and medium queues are empty; and the low queue is serviced only when all the other queues are empty. At any point in time, if a packet arrives for a higher-priority queue, it is processed before any packets in lower-priority queues. For this reason, if the higher-priority queues are continuously being serviced, the lower-priority queues are starved.
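The strict-priority servicing order can be sketched as follows (queue names are illustrative). The scheduler always pulls from the highest-priority non-empty queue, which is exactly why lower queues can starve.

```python
from collections import deque

def strict_priority_dequeue(queues):
    """PQ sketch: serve the highest-priority non-empty queue.
    `queues` is ordered high -> low; returns None if everything is empty."""
    for q in queues:
        if q:
            return q.popleft()
    return None

high, medium, normal, low = deque(["h1"]), deque(["m1"]), deque(), deque(["l1"])
sent = [strict_priority_dequeue([high, medium, normal, low]) for _ in range(3)]
print(sent)  # ['h1', 'm1', 'l1']
```

The low queue's packet is sent only after every higher queue has drained; if `high` were refilled continuously, `l1` would never be transmitted.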

73
Q

What is WFQ in QoS?

A

Weighted fair queuing (WFQ): The WFQ algorithm automatically divides the interface bandwidth by the number of flows (weighted by IP Precedence) to allocate bandwidth fairly among all flows. This method provides better service for high-priority real-time flows but can’t provide a fixed-bandwidth guarantee for any particular flow.

74
Q

The current queuing algorithms recommended for rich-media networks (and supported by MQC) combine the best features of the legacy algorithms. These algorithms provide real-time, delay-sensitive traffic bandwidth and delay guarantees while not starving other types of traffic. The recommended queuing algorithms include the following. What are they?

  1. CBWFQ
  2. LLQ
A

Class-based weighted fair queuing (CBWFQ): CBWFQ enables the creation of up to 256 queues, serving up to 256 traffic classes. Each queue is serviced based on the bandwidth assigned to that class. It extends WFQ functionality to provide support for user-defined traffic classes.

With CBWFQ, packet classification is done based on traffic descriptors such as QoS markings, protocols, ACLs, and input interfaces. After a packet is classified as belonging to a specific class, it is possible to assign bandwidth, weight, queue limit, and maximum packet limit to it. The bandwidth assigned to a class is the minimum bandwidth delivered to the class during congestion.

The queue limit for that class is the maximum number of packets allowed to be buffered in the class queue. After a queue has reached the configured queue limit, excess packets are dropped. CBWFQ by itself does not provide a latency guarantee and is only suitable for non-real-time data traffic.

Low-latency queuing (LLQ): LLQ combines CBWFQ with a strict-priority queue (LLQ = CBWFQ + PQ) and was developed to meet the requirements of real-time traffic, such as voice. Traffic assigned to the strict-priority queue is serviced up to its assigned bandwidth before other CBWFQ queues are serviced.

All real-time traffic should be configured to be serviced by the priority queue. Multiple classes of real-time traffic can be defined, and separate bandwidth guarantees can be given to each, but a single priority queue schedules all the combined traffic. If a traffic class is not using the bandwidth assigned to it, it is shared among the other classes.

This algorithm is suitable for combinations of real-time and non-real-time traffic. It provides both latency and bandwidth guarantees to high-priority real-time traffic. In the event of congestion, real-time traffic that goes beyond the assigned bandwidth guarantee is policed by a congestion-aware policer to ensure that the non-priority traffic is not starved.

75
Q

T/F: LLQ allows for two different traffic classes to be assigned to it so that different policing rates can be applied to different types of high-priority traffic.

A

True.

For example, voice traffic could be policed during times of congestion to 10 Mbps, while video could be policed to 100 Mbps. This would not be possible with only one traffic class and a single policer.

76
Q

What is a tail drop and why is it bad?

A

Congestion-avoidance techniques monitor network traffic loads to anticipate and avoid congestion by dropping packets. The default packet dropping mechanism is tail drop.

Tail drop treats all traffic equally and does not differentiate between classes of service. With tail drop, when the output queue buffers are full, all packets trying to enter the queue are dropped, regardless of their priority, until congestion clears up and the queue is no longer full.

Tail drop should be avoided for TCP traffic because it can cause TCP global synchronization, which results in significant link underutilization.

77
Q

What is WRED?

A

The Cisco implementation of RED is known as weighted RED (WRED). The difference between RED and WRED is that the randomness of packet drops can be manipulated by traffic weights denoted by either IP Precedence (IPP) or DSCP.

Packets with a lower IPP value are dropped more aggressively than packets with higher IPP values; for example, IPP 3 would be dropped more aggressively than IPP 5. For DSCP, AFx3 would be dropped more aggressively than AFx2, and AFx2 more aggressively than AFx1.

WRED can also be used to set the IP Explicit Congestion Notification (ECN) bits to indicate that congestion was experienced in transit. ECN is an extension to WRED that allows for signaling to be sent to ECN-enabled endpoints, instructing them to reduce their packet transmission rates.
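The shape of the WRED drop curve can be sketched as a small function. The threshold and probability values are illustrative, not Cisco defaults: below the minimum threshold nothing is dropped, above the maximum threshold the queue behaves like tail drop, and in between the drop probability rises linearly. Per-class weighting (by IPP or DSCP drop precedence) corresponds to giving aggressive classes lower thresholds and/or a higher maximum probability.

```python
def wred_drop_probability(avg_qlen, min_th, max_th, max_p):
    """WRED sketch: drop probability as a function of average queue depth."""
    if avg_qlen < min_th:
        return 0.0          # no congestion-avoidance drops yet
    if avg_qlen >= max_th:
        return 1.0          # queue full enough to behave like tail drop
    # Linear ramp from 0 to max_p between the two thresholds.
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

# A higher drop precedence (e.g., AF13 vs. AF11) would be modeled with a
# lower min_th/max_th, so it starts dropping earlier at the same queue depth.
print(wred_drop_probability(30, min_th=20, max_th=40, max_p=0.1))  # 0.05
```

Dropping a random subset of packets early, rather than everything at once, is what lets WRED avoid the TCP global synchronization that tail drop causes.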

78
Q

What is DiffServ?

A

DiffServ is a coarse-grained, class-based mechanism for traffic management. In contrast, IntServ is a fine-grained, flow-based mechanism. DiffServ relies on a mechanism to classify and mark packets as belonging to a specific class. DiffServ-aware routers implement per-hop behaviors (PHBs), which define the packet-forwarding properties associated with a class of traffic. Different PHBs may be defined to offer, for example, low-loss or low-latency service.

Rather than differentiating network traffic based on the requirements of an individual flow, DiffServ operates on the principle of traffic classification, placing each data packet into one of a limited number of traffic classes.

Each router on the network is then configured to differentiate traffic based on its class. Each traffic class can be managed differently, ensuring preferential treatment for higher-priority traffic on the network. The premise of DiffServ is that complicated functions such as packet classification and policing can be carried out at the edge of the network by edge routers.

Since no classification or policing is required in core routers, functionality there can be kept simple. Core routers simply apply PHB treatment to packets based on their markings. PHB treatment is achieved by core routers using a combination of scheduling policy and queue management policy.

79
Q

How many of the 64 possible values in the DSCP(Differentiated Services Code Point) field are commonly used?

A

In theory, a network could have up to 64 different traffic classes using the 64 available DSCP values. The DiffServ RFCs recommend, but do not require, certain encodings. This gives a network operator great flexibility in defining traffic classes. In practice, however, most networks use the following commonly defined per-hop behaviors:

  1. Default Forwarding (DF) PHB: typically best-effort traffic.
  2. Expedited Forwarding (EF) PHB: dedicated to low-loss, low-latency traffic.
  3. Assured Forwarding (AF) PHB: gives assurance of delivery under prescribed conditions.
  4. Class Selector (CS) PHBs: maintains backward compatibility with the IPP (IP Precedence) field.
80
Q

T/F: Under DiffServ, all the policing and classifying are done at the boundaries between DiffServ domains.

A

True.
This means that in the core of the Internet, routers are unhindered by the complexities of collecting payment or enforcing agreements. That is, in contrast to IntServ, DiffServ requires no advance setup, no reservation, and no time-consuming end-to-end negotiation for each flow.