Ch 14: QoS Flashcards
Which of the following are the leading causes of quality of service issues? (Choose all that apply.)
- Bad hardware
- Lack of bandwidth
- Latency and jitter
- Copper cables
- Packet loss
2, 3, and 5. The leading causes of quality of service issues are lack of bandwidth, latency and jitter, and packet loss.
Network latency can be broken down into which of the following types? (Choose all that apply.)
- Propagation delay (fixed)
- Time delay (variable)
- Serialization delay (fixed)
- Processing delay (fixed)
- Packet delay (fixed)
- Delay variation (variable)
1, 3, 4, and 6.
Network latency can be broken down into propagation delay, serialization delay, processing delay, and delay variation.
Which of the following is not a QoS implementation model?
- IntServ
- Expedited forwarding
- Best effort
- DiffServ
2.
Best effort, IntServ, and DiffServ are the three QoS implementation models.
Which of the following is the QoS implementation model that requires a signaling protocol?
- IntServ
- Best Effort
- DiffServ
- RSVP
1.
IntServ uses Resource Reservation Protocol (RSVP) to reserve resources throughout a network for a specific application and to provide Call Admission Control (CAC) to guarantee that no other IP traffic can use the reserved bandwidth.
Which of the following is the most popular QoS implementation model?
- IntServ
- Best effort
- DiffServ
- RSVP
3.
DiffServ is the most popular and most widely deployed QoS model. It was designed to address the limitations of the best-effort and IntServ models.
T/F: Traffic classification should always be performed in the core of the network.
False. Packet classification should take place at the network edge, as close to the source of the traffic as possible, to provide an end-to-end QoS experience.
The 16-bit TCI field is composed of which fields? (Choose three.)
- Priority Code Point (PCP)
- Canonical Format Identifier (CFI)
- User Priority (PRI)
- Drop Eligible Indicator (DEI)
- VLAN Identifier (VLAN ID)
1, 4, and 5.
The TCI (Tag Control Information) field is a two-byte field composed of the 3-bit Priority Code Point (PCP) field (formerly PRI), the 1-bit Drop Eligible Indicator (DEI) field (formerly CFI), and the 12-bit VLAN Identifier (VLAN ID) field. This field is part of 802.1Q.
IEEE 802.1Q, often referred to as Dot1q, is the networking standard that supports virtual LANs (VLANs) on an IEEE 802.3 Ethernet network. The standard defines a system of VLAN tagging for Ethernet frames and the accompanying procedures to be used by bridges and switches in handling such frames. The standard also contains provisions for a quality-of-service prioritization scheme commonly known as IEEE 802.1p and defines the Generic Attribute Registration Protocol.
Portions of the network which are VLAN-aware (i.e., IEEE 802.1Q conformant) can include VLAN tags. When a frame enters the VLAN-aware portion of the network, a tag is added to represent the VLAN membership. Each frame must be distinguishable as being within exactly one VLAN. A frame in the VLAN-aware portion of the network that does not contain a VLAN tag is assumed to be flowing on the native VLAN.
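To make the bit layout concrete, here is a minimal Python sketch (illustrative only, not part of any particular library) that splits a 16-bit TCI value into its PCP, DEI, and VLAN ID subfields:

```python
# Split a 16-bit 802.1Q TCI value into its three subfields.
def parse_tci(tci: int) -> dict:
    return {
        "pcp": (tci >> 13) & 0x7,   # top 3 bits: Priority Code Point
        "dei": (tci >> 12) & 0x1,   # next bit: Drop Eligible Indicator
        "vlan_id": tci & 0xFFF,     # low 12 bits: VLAN Identifier
    }

# Example value (hypothetical): PCP 5, DEI 0, VLAN 100 -> 0xA064
print(parse_tci(0xA064))  # {'pcp': 5, 'dei': 0, 'vlan_id': 100}
```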
T/F: The one-byte DiffServ (DS or Differentiated Services) field contains a 6-bit Differentiated Services Code Point (DSCP) field that allows for classification of up to 64 values (0 to 63).
True.
The DS field replaces the outdated IPv4 ToS field and the IPv6 Traffic Class field; both were redefined as the 8-bit Differentiated Services (DiffServ) field.
The DiffServ field is composed of a 6-bit Differentiated Services Code Point (DSCP) field that allows for classification of up to 64 values (0 to 63) and a 2-bit Explicit Congestion Notification (ECN) field.
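As a companion to the TCI sketch above, this minimal Python snippet (again illustrative only) shows how the one-byte DS field divides into the 6-bit DSCP and the 2-bit ECN field:

```python
# Split the one-byte DS field into DSCP (upper 6 bits) and ECN (lower 2 bits).
def parse_ds_field(ds_byte: int) -> dict:
    return {"dscp": ds_byte >> 2, "ecn": ds_byte & 0x3}

# Example: EF (DSCP 46) with ECN 00 is carried as 0xB8.
print(parse_ds_field(0xB8))  # {'dscp': 46, 'ecn': 0}
```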
Which of the following is not a QoS PHB?
- Best Effort (BE)
- Class Selector (CS)
- Default Forwarding (DF)
- Assured Forwarding (AF)
- Expedited Forwarding (EF)
1.
Four PHBs (Per-Hop Behaviors) have been defined and characterized for general use:
- Class Selector (CS) PHB: The first 3 bits of the DSCP field are used as CS bits; the class selector bits make DSCP backward compatible with IP Precedence because IP Precedence uses the same 3 bits to determine class.
- Default Forwarding (DF) PHB: Used for best-effort service.
- Assured Forwarding (AF) PHB: Used for guaranteed bandwidth service.
- Expedited Forwarding (EF) PHB: Used for low-delay service.
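The DSCP numbering behind these PHBs follows a simple pattern (DF = 0, CSx = 8x, AFxy = 8x + 2y, EF = 46); the short sketch below is just a reminder of that arithmetic, not an exhaustive marking table:

```python
# Standard DSCP arithmetic for the PHB classes listed above.
def cs(x: int) -> int:
    """Class Selector CSx; reuses the top 3 bits, so it maps to IP Precedence x."""
    return 8 * x

def af(x: int, y: int) -> int:
    """Assured Forwarding AFxy: class x (1-4), drop precedence y (1-3)."""
    return 8 * x + 2 * y

DF, EF = 0, 46  # Default Forwarding and Expedited Forwarding code points

print(cs(5), af(4, 1), EF)  # 40 34 46
```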
Which traffic conditioning tool can be used to drop or mark down traffic that goes beyond a desired traffic rate?
- Policers
- Shapers
- WRR
- None of the above
1.
Policers drop or re-mark incoming or outgoing traffic that goes beyond a desired traffic rate.
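For intuition, here is a toy single-rate, two-color policer in Python; the class and parameter names are made up for illustration, and it glosses over the single-rate and two-rate three-color variants:

```python
# Toy token-bucket policer: tokens accumulate at the CIR up to Bc,
# and a packet conforms only if enough tokens are available.
class Policer:
    def __init__(self, cir_bps: int, bc_bits: int):
        self.cir = cir_bps
        self.bc = bc_bits
        self.tokens = float(bc_bits)
        self.last = 0.0

    def police(self, now: float, packet_bits: int) -> str:
        # Replenish tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.bc, self.tokens + (now - self.last) * self.cir)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return "conform"   # forward (or re-mark) the packet
        return "exceed"        # drop or mark down the packet

p = Policer(cir_bps=1_000_000, bc_bits=8_000)
print(p.police(0.0, 12_000))  # exceed: bigger than the Bc bucket
print(p.police(0.1, 8_000))   # conform
```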
What does Tc stand for? (Choose two.)
- Committed time interval
- Token credits
- Bc bucket token count
- Traffic control
1 and 3.
The Committed Time Interval (Tc) is the time interval in milliseconds (ms) over which the Committed Burst (Bc) is sent. Tc can be calculated with the formula Tc = (Bc [bits] / CIR [bps]) × 1000. For single-rate three-color markers/policers (srTCMs) and two-rate three-color markers/policers (trTCMs), Tc can also refer to the Bc Bucket Token Count, which is the number of tokens in the Bc bucket.
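The Tc formula is easy to sanity-check in a couple of lines of Python (the Bc and CIR numbers below are illustrative):

```python
# Tc (ms) = Bc (bits) / CIR (bps) * 1000
def committed_time_interval_ms(bc_bits: int, cir_bps: int) -> float:
    return bc_bits / cir_bps * 1000

print(committed_time_interval_ms(8_000, 64_000))  # 125.0 ms
```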
Which of the following are the recommended congestion management mechanisms for modern rich-media networks? (Choose two.)
- Class-based weighted fair queuing (CBWFQ)
- Priority queuing (PQ)
- Weighted RED (WRED)
- Low-latency queuing (LLQ)
1 and 4.
CBWFQ and LLQ provide real-time, delay-sensitive traffic bandwidth and delay guarantees while not starving other types of traffic.
Which of the following is a recommended congestion-avoidance mechanism for modern rich-media networks?
- Weighted RED (WRED)
- Tail drop
- FIFO
- RED
1.
WRED provides congestion avoidance by selectively dropping packets before the queue buffers are full. Packet drops can be manipulated by traffic weights denoted by either IP Precedence or DSCP.
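The core RED/WRED idea can be sketched in a few lines of Python (a simplification with made-up thresholds, not a vendor implementation): below a minimum threshold nothing is dropped, above a maximum threshold everything is dropped, and in between the drop probability rises linearly, with lower-priority traffic typically given a lower minimum threshold so it is dropped first.

```python
# Simplified RED-style drop probability for one traffic class.
def drop_probability(avg_queue: float, min_th: float, max_th: float,
                     max_prob: float = 0.1) -> float:
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_prob * (avg_queue - min_th) / (max_th - min_th)

print(drop_probability(30, min_th=20, max_th=40))  # 0.05
```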
What are the top three leading causes of quality issues?
- Lack of bandwidth: The available bandwidth on the data path from a source to a destination equals the capacity of the lowest-bandwidth link.
- Latency and jitter:
  - One-way end-to-end delay, also referred to as network latency, is the time it takes for packets to travel across a network from a source to a destination.
  - Delay variation, also referred to as jitter, is the difference in the latency between packets in a single flow.
- Packet loss: Packet loss is usually a result of congestion on an interface. Packet loss can be prevented by implementing one of the following approaches:
  - Increase link speed.
  - Implement QoS congestion-avoidance and congestion-management mechanisms.
  - Implement traffic policing to drop low-priority packets and allow high-priority traffic through.
  - Implement traffic shaping to delay packets instead of dropping them, since traffic may burst and exceed the capacity of an interface buffer. Traffic shaping is not recommended for real-time traffic because it relies on queuing that can cause jitter.
Network latency can be broken down into fixed and variable latency. Give a brief definition of each of the following:
- Propagation delay (fixed)
- Serialization delay (fixed)
- Processing delay (fixed)
- Delay variation (variable)
Propagation delay is the time it takes for a packet to travel from the source to a destination at the speed of light over a medium such as fiber-optic cables or copper wires.
Serialization delay is the time it takes to place all the bits of a packet onto a link. It is a fixed value that depends on the link speed; the higher the link speed, the lower the delay.
Processing delay is the fixed amount of time it takes for a networking device to take the packet from an input interface and place the packet onto the output queue of the output interface.
Delay variation, also known as jitter, is the difference in the latency between packets in a single flow.
What does Cisco consider an acceptable latency for real-time traffic?
200 ms.
ITU Recommendation G.114 recommends that, regardless of the application type, a network latency of 400 ms should not be exceeded, and for real-time traffic, network latency should be less than 150 ms.
However, ITU and Cisco have demonstrated that real-time traffic quality does not begin to significantly degrade until network latency exceeds 200 ms.
What is the average refractive index of fiber-optic cable?
The speed of light is 299,792,458 meters per second in a vacuum. The lack of vacuum conditions in a fiber-optic cable or a copper wire slows down the speed of light by a ratio known as the refractive index; the larger the refractive index value, the slower light travels.
The average refractive index value of an optical fiber is about 1.5. The speed of light through a medium v is equal to the speed of light in a vacuum c divided by the refractive index n, or v = c / n. This means the speed of light through a fiber-optic cable with a refractive index of 1.5 is approximately 200,000,000 meters per second (that is, 300,000,000 / 1.5).
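The same relationship (v = c / n) also gives the propagation delay over a given distance; the 3,000 km in this Python sketch is just an example figure:

```python
C = 299_792_458   # speed of light in a vacuum, m/s
N_FIBER = 1.5     # approximate refractive index of optical fiber

v = C / N_FIBER                       # ~200,000,000 m/s through fiber
delay_ms = 3_000_000 / v * 1000       # 3,000 km of fiber -> ~15 ms one way
print(round(v), round(delay_ms, 1))   # 199861639 15.0
```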
What is the serialization delay for a 1500-byte packet over a 1Gbps interface?
Serialization delay is the time it takes to place all the bits of a packet onto a link. It is a fixed value that depends on the link speed; the higher the link speed, the lower the delay.
The serialization delay, s, is equal to the packet size in bits divided by the line speed in bits per second.
For example, the serialization delay for a 1500-byte packet over a 1 Gbps interface is 12 μs and can be calculated as follows:
s = packet size in bits / line speed in bps
s = (1500 bytes × 8) / 1 Gbps
s = 12,000 bits / 1,000,000,000 bps
s = 0.000012 s = 0.012 ms = **12 μs**
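The same arithmetic as a small Python helper, for checking other packet sizes and link speeds:

```python
# Serialization delay in microseconds for a packet of a given size.
def serialization_delay_us(packet_bytes: int, line_speed_bps: int) -> float:
    return packet_bytes * 8 / line_speed_bps * 1_000_000

print(serialization_delay_us(1500, 1_000_000_000))  # 12.0 us on 1 Gbps
print(serialization_delay_us(1500, 100_000_000))    # 120.0 us on 100 Mbps
```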
The processing delay depends on all of the following factors, except one. Which is incorrect?
- CPU speed (for software-based platforms)
- CPU utilization (load)
- IP packet switching mode (process switching, software CEF, or hardware CEF)
- Bandwidth of the circuit connecting the endpoints
- Router architecture (centralized or distributed)
- Configured features on both input and output interfaces
4. Bandwidth is not a factor in processing delay.
Processing delay is the fixed amount of time it takes for a networking device to take the packet from an input interface and place the packet onto the output queue of the output interface. The processing delay depends on factors such as the following:
- CPU speed (for software-based platforms)
- CPU utilization (load)
- IP packet switching mode (process switching, software CEF, or hardware CEF)
- Router architecture (centralized or distributed)
- Configured features on both input and output interfaces
What causes jitter?
Jitter is caused by the queuing delay that packets experience during periods of network congestion. Queuing delay depends on the number and sizes of packets already in the queue, the link speed, and the queuing mechanism. Queuing introduces unequal delays for packets of the same flow, thus producing jitter.
What is a de-jitter buffer?
Voice and video endpoints typically come equipped with de-jitter buffers that can help smooth out changes in packet arrival times due to jitter. A de-jitter buffer is often dynamic and can adjust for approximately 30 ms changes in arrival times of packets. If a packet is not received within the 30 ms window allowed for by the de-jitter buffer, the packet is dropped, and this affects the overall voice or video quality.
What is LLQ and what is it useful for?
To prevent jitter for high-priority real-time traffic, it is recommended to use queuing mechanisms such as low-latency queuing (LLQ), which allows matching packets to be forwarded ahead of lower-priority traffic during periods of network congestion.
What is the usual cause of packet loss? What are some solutions to packet loss?
Packet loss is usually a result of congestion on an interface. Packet loss can be prevented by implementing one of the following approaches:
- Increase link speed.
- Implement QoS congestion-avoidance and congestion-management mechanisms.
- Implement traffic policing to drop low-priority packets and allow high-priority traffic through.
- Implement traffic shaping to delay packets instead of dropping them, since traffic may burst and exceed the capacity of an interface buffer. Traffic shaping is not recommended for real-time traffic because it relies on queuing that can cause jitter.
Give a brief definition of the three different QoS implementation models.
- Best effort
- Integrated Services (IntServ)
- Differentiated Services (DiffServ)
There are three different QoS implementation models:
Best effort: QoS is not enabled for this model. It is used for traffic that does not require any special treatment.
Integrated Services (IntServ): Applications signal the network to make a bandwidth reservation and to indicate that they require special QoS treatment.
Differentiated Services (DiffServ): The network identifies classes that require special QoS treatment.











