In some cases, it might be desirable to discard all traffic of a specific ingress class. This can be
accomplished by using a police command of the following form with the class: police 96000
conform-action drop exceed-action drop.
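The following is a minimal sketch of such a configuration. The class-map and policy-map names, the CoS match criterion, and the interface are illustrative assumptions; only the police command itself is taken from the text above.

class-map match-any discard-class
 match cos 1
!
policy-map drop-ingress
 class discard-class
  ! Both conforming and exceeding traffic are dropped,
  ! discarding the class entirely.
  police 96000 conform-action drop exceed-action drop
!
interface GigabitEthernet0
 service-policy input drop-ingress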
If a marked packet has a provider-supplied Q-tag inserted before transmission, the marking only affects
the provider Q-tag. If a Q-tag is received, it is re-marked. If a marked packet is transported over the Cisco
proprietary RPR ring, the marking also affects the RPR-CoS bit.
If a Q-tag is inserted (QinQ), the marking affects the added Q-tag. If the ingress packet contains a Q-tag
and is transparently switched, the existing Q-tag is marked. In the case of a packet without any Q-tag,
the marking does not have any significance.
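As a point of reference, marking on ingress is applied through the policer's conform or exceed action. The following sketch, with assumed names, rates, and match criteria, re-marks conforming traffic to CoS 4; per the rules above, the new value is written to the added or existing Q-tag, and to the RPR-CoS bit when the packet crosses the Cisco proprietary RPR ring.

class-map match-any inbound-data
 match cos 2
!
policy-map mark-ingress
 class inbound-data
  ! Conforming traffic is re-marked to CoS 4; exceeding traffic is dropped.
  police 5000000 conform-action set-cos-transmit 4 exceed-action drop
!
interface GigabitEthernet0
 service-policy input mark-ingress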
The local scheduler treats all nonconforming packets as discard eligible regardless of their CoS setting
or the global CoS commit definition. For the Cisco proprietary RPR implementation, the discard eligible
(DE) packets are marked using the DE bit on the Cisco proprietary RPR header. The discard eligibility
based on the CoS commit or the policing action is local to the ML-Series card scheduler, but it is global
for the Cisco proprietary RPR ring.
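A policer can also transmit nonconforming traffic instead of dropping it; such packets are then treated as discard eligible by the local scheduler and, on a Cisco proprietary RPR ring, carry the DE bit. A sketch with assumed names and rates:

class-map match-any best-effort
 match cos 0
!
policy-map de-marking
 class best-effort
  ! Traffic above the 96-kbps commitment is still transmitted,
  ! but it is treated as discard eligible.
  police 96000 conform-action transmit exceed-action transmit
!
interface GigabitEthernet0
 service-policy input de-marking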
Queuing
ML-Series card queuing uses a shared buffer pool to allocate memory dynamically to different traffic
queues. The ML-Series card uses a total of 12 MB of memory for the buffer pool. Ethernet ports share
6 MB of the memory, and packet-over-SONET/SDH (POS) ports share the remaining 6 MB of memory.
Memory space is allocated in 1500-byte increments.
Each queue has an upper limit on the allocated number of buffers based on the class bandwidth
assignment of the queue and the number of queues configured. This upper limit is typically 30 percent
to 50 percent of the shared buffer capacity. Dynamic buffer allocation to each queue can be reduced
based on the number of queues that need extra buffering. The dynamic allocation mechanism provides
fairness in proportion to service commitments as well as optimization of system throughput over a range
of system traffic loads.
The Low Latency Queue (LLQ) is defined by setting the weight to infinity or by committing 100 percent
of the bandwidth. When an LLQ is defined, a policer should also be defined on the ingress for that specific
class to limit the maximum bandwidth consumed by the LLQ; otherwise, there is a potential risk of the
LLQ occupying the whole bandwidth and starving the other unicast queues.
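The following sketch pairs an LLQ definition with the recommended ingress policer. The names, rates, and interfaces are assumptions for illustration, and the exact parameters accepted by the priority command can vary by release.

class-map match-any voice
 match cos 5
!
! Egress: the priority command makes this class the low latency queue.
policy-map llq-out
 class voice
  priority
!
! Ingress: cap the LLQ traffic so it cannot starve the other queues.
policy-map llq-in
 class voice
  police 1000000 conform-action transmit exceed-action drop
!
interface GigabitEthernet0
 service-policy input llq-in
!
interface POS0
 service-policy output llq-out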
The ML-Series includes support for 400 user-definable queues, which are assigned according to the
classification and bandwidth allocation definition. The classification used for scheduling classifies the
frames/packets after the policing action, so if the policer is used to mark or change the CoS bits of the
ingress frames/packets, the new values apply to the classification of traffic for queuing and
scheduling. The ML-Series provides buffering for 4000 packets.
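To illustrate that queuing and scheduling classify on post-policing values, the following sketch assumes an ingress policer, such as the one in the earlier marking example, has re-marked conforming traffic to CoS 4; the egress policy then matches that new value. All names and numbers are illustrative.

class-map match-any cos4-class
 match cos 4
!
! The egress bandwidth assignment keys on the re-marked CoS value,
! not the CoS the frames carried when they arrived at the ingress policer.
policy-map queue-out
 class cos4-class
  bandwidth 10000
!
interface POS0
 service-policy output queue-out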
Scheduling
Scheduling is provided by a series of schedulers that perform weighted deficit round robin (WDRR), as
well as by priority scheduling mechanisms, on the queued traffic associated with each egress port.
Though ordinary round-robin servicing of queues can be done in constant time, unfairness occurs when
different queues use different packet sizes. Deficit Round Robin (DRR) scheduling solves this problem.
Each queue receives a fixed amount of credit (the quantum) in each round; if a queue cannot send a
packet in a given round because the packet is larger than its accumulated credit, the unused credit is
added to the quantum for the next round. For example, with a quantum of 1500 bytes, a queue holding
a 2000-byte packet cannot send it in the first round; the 1500 bytes of credit carry over, so in the next
round the queue has 3000 bytes of credit and the packet is sent.