Understanding Queuing Parameters
Queuing manages congestion on traffic leaving a Cisco IOS router by determining the order in which to
send packets out over an interface, based on priorities you assign to those packets. Queuing makes it
possible to prioritize traffic to satisfy time-critical applications, such as desktop video conferencing,
while still addressing the needs of less time-dependent applications, such as file transfer.
During periods of light traffic, that is, when no congestion exists, packets are sent out as soon as they
arrive at an interface. However, during periods of transmission congestion at the outgoing interface,
packets arrive faster than the interface can send them. Congestion management features such as
queuing hold the packets that accumulate at the interface in queues until the interface is free to send them.
The packets are then scheduled for transmission according to their assigned priority and the queuing mechanism
configured for the interface. The router determines the order of packet transmission by controlling which
packets are placed in which queue and how queues are serviced with respect to one another.
Security Manager uses a form of queuing called Class-Based Weighted Fair Queuing (CBWFQ). With
CBWFQ, you define traffic classes based on match criteria. Packets matching the criteria constitute the
traffic for this class. A queue is reserved for each class, containing the traffic belonging to that class. You
assign characteristics to each queue, such as the bandwidth (fixed or minimum) assigned to it and the queue
limit, which is the maximum number of packets allowed to accumulate in the queue.
When you use CBWFQ, the sum of all bandwidth allocations on an interface cannot exceed 75 percent of
the total available interface bandwidth. The remaining 25 percent is used for other overhead, including
Layer 2 overhead, routing traffic, and best-effort traffic. Bandwidth for the CBWFQ default class, for
instance, is taken from the remaining 25 percent.
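The queuing characteristics described here correspond to the standard IOS Modular QoS CLI (MQC) commands that are ultimately deployed to the router. The following is a minimal sketch of such a CBWFQ configuration; the class name, match criterion, bandwidth, and queue limit are illustrative assumptions, not values taken from this guide.

! Classify traffic by match criteria (DSCP AF41 video is an illustrative choice)
class-map match-all VIDEO-CLASS
 match ip dscp af41
!
! Reserve a queue for the class: a minimum bandwidth guarantee (in kbps)
! and a queue limit (the maximum number of packets allowed to accumulate)
policy-map CBWFQ-EXAMPLE
 class VIDEO-CLASS
  bandwidth 512
  queue-limit 64
 class class-default
  fair-queue
!
! Attach the policy to outbound traffic on an interface
interface Serial0/0
 service-policy output CBWFQ-EXAMPLE

In this sketch, the VIDEO-CLASS queue is guaranteed 512 kbps during congestion and holds at most 64 packets, while unclassified traffic falls into the default class.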
For more information about queuing, see:
• Tail Drop vs. WRED, page 63-4
• Low-Latency Queuing, page 63-5
• Default Class Queuing, page 63-6
For information about defining queuing parameters in a QoS policy, see Defining QoS Class Queuing
Parameters, page 63-16.
Related Topics
• Understanding Marking Parameters, page 63-3
• Understanding Policing and Shaping Parameters, page 63-6
• Defining QoS Policies, page 63-10
• Quality of Service on Cisco IOS Routers, page 63-1

Tail Drop vs. WRED

After a queue reaches its configured queue limit, the arrival of additional packets causes either tail drop
or random packet drop (WRED) to take effect, depending on how you configured the QoS policy. Tail drop, which is the
default response, treats all traffic equally and does not differentiate between different classes of service.
When tail drop is in effect, packets are dropped from full queues until the congestion is eliminated and
the queue is no longer full. This often leads to global synchronization, in which a period of congestion
is followed by a period of underutilization, as multiple TCP hosts reduce their transmission rates
simultaneously.
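In the deployed IOS configuration, tail drop is what a class uses by default when only a queue limit is set; the WRED alternative described next is enabled per class with the random-detect command. The class names and values in the following sketch are illustrative assumptions, not values from this guide.

policy-map DROP-EXAMPLE
 class BULK-DATA
  bandwidth percent 20
  ! With only a queue limit configured, packets arriving at a full
  ! queue are tail dropped (the default behavior)
  queue-limit 64
 class CRITICAL-DATA
  bandwidth percent 30
  ! random-detect replaces tail drop with WRED for this class;
  ! the dscp-based keyword weights drop probability by DSCP value
  random-detect
  random-detect dscp-based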
A more sophisticated approach to managing queue congestion is offered by Cisco’s implementation of
Random Early Detection, called Weighted Random Early Detection, or WRED. As shown in
Figure 63-1, WRED reduces the chances of tail drop by selectively dropping packets when the output