11-6
Cisco ONS 15310-CL and Cisco ONS 15310-MA Ethernet Card Software Feature and Configuration Guide R8.5
78-18133-01
Chapter 11 Configuring Quality of Service on the ML-Series Card
ML-Series QoS
In some cases, it might be desirable to discard all traffic of a specific ingress class. This can be
accomplished by using a police command of the following form with the class: police 96000
conform-action drop exceed-action drop.
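As an illustration, a policy that drops all traffic in a hypothetical class named drop-class might look like the following (the policy-map and class names are examples only):

```
Router(config)# policy-map drop-all
Router(config-pmap)# class drop-class
Router(config-pmap-c)# police 96000 conform-action drop exceed-action drop
```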
If a marked packet has a provider-supplied Q-tag inserted before transmission, the marking only affects
the provider Q-tag. If a Q-tag is received, it is re-marked. If a marked packet is transported over the RPR
ring, the marking also affects the RPR-CoS bit.
If a Q-tag is inserted (QinQ), the marking affects the added Q-tag. If the ingress packet contains a Q-tag
and is transparently switched, the existing Q-tag is marked. For a packet without any Q-tag, the
marking has no significance.
The local scheduler treats all nonconforming packets as discard eligible regardless of their CoS setting
or the global cos commit definition. For RPR implementation, the discard eligible (DE) packets are
marked using the DE bit on the RPR header. The discard eligibility based on the CoS commit or the
policing action is local to the ML-Series card scheduler, but it is global for the RPR ring.
Queuing
ML-Series card queuing uses a shared buffer pool to allocate memory dynamically to different traffic
queues. The ML-100T-8 has 1.5 MB of packet buffer memory.
Each queue has an upper limit on the allocated number of buffers based on the class bandwidth
assignment of the queue and the number of queues configured. This upper limit is typically 30 percent
to 50 percent of the shared buffer capacity. Dynamic buffer allocation to each queue can be reduced
based on the number of queues needing extra buffering. The dynamic allocation mechanism provides
fairness in proportion to service commitments as well as optimization of system throughput over a range
of system traffic loads.
The Low Latency Queue (LLQ) is defined by setting the weight to infinity or by committing 100 percent
of the bandwidth. When an LLQ is defined, a policer should also be defined on the ingress for that specific
class to limit the maximum bandwidth consumed by the LLQ; otherwise, the LLQ risks occupying the
whole bandwidth and starving the other unicast queues.
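For example, an ingress policer could cap a hypothetical voice class that is serviced by an LLQ. The class and policy names and the 10-Mbps rate below are assumptions for illustration; the LLQ itself is defined through the weight or bandwidth commitment described above:

```
Router(config)# policy-map voice-ingress
Router(config-pmap)# class voice
Router(config-pmap-c)# police 10000000 conform-action transmit exceed-action drop
```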
The ML-Series includes support for 400 user-definable queues, which are assigned per the classification
and bandwidth allocation definition. The classification used for scheduling classifies the frames/packets
after the policing action, so if the policer is used to mark or change the CoS bits of the ingress
frames/packets, the new values apply to the classification of traffic for queuing and scheduling.
The ML-Series provides buffering for 4000 packets.
Scheduling
Scheduling is provided by a series of schedulers that apply weighted deficit round robin (WDRR) as well
as priority scheduling mechanisms to the queued traffic associated with each egress port.
Though ordinary round robin servicing of queues can be done in constant time, unfairness occurs when
different queues use different packet sizes. Deficit Round Robin (DRR) scheduling solves this problem.
If a queue was not able to send a packet in its previous round because the packet at the head of the queue
was larger than its available credit, the unused remainder of the credit (quantum) granted in that round
is carried over and added to the quantum for the next round.
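The deficit mechanism can be sketched in a few lines of Python. This is an illustration of the general DRR technique, not the ML-Series implementation; queues hold packet sizes in bytes, and the function names are invented for the example.

```python
from collections import deque

def drr_round(queues, deficits, quantum):
    """Run one DRR round; return a list of (queue_index, packet_size) sent."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            deficits[i] = 0  # an idle queue does not accumulate credit
            continue
        deficits[i] += quantum  # this round's quantum plus any carried-over remainder
        # Send packets while the head packet fits within the accumulated credit
        while q and q[0] <= deficits[i]:
            size = q.popleft()
            deficits[i] -= size
            sent.append((i, size))
    return sent
```

A 700-byte packet that does not fit in a 500-byte quantum waits one round; its queue then has 1000 bytes of credit and the packet is sent, preserving long-run fairness in bytes rather than in packets.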
WDRR extends the quantum idea from the DRR to provide weighted throughput for each queue.
Different queues have different weights, and the quantum assigned to each queue in its round is
proportional to the relative weight of the queue among all the queues serviced by that scheduler.
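The weighted extension can be sketched by scaling each queue's quantum by its weight. As above, this is a minimal Python illustration under assumed names, not the ML-Series scheduler; queues hold packet sizes in bytes.

```python
from collections import deque

def wdrr_round(queues, deficits, weights, base_quantum):
    """Run one WDRR round; each queue's quantum is base_quantum * weight."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            deficits[i] = 0  # an idle queue does not accumulate credit
            continue
        deficits[i] += base_quantum * weights[i]  # weight-scaled quantum
        # Send packets while the head packet fits within the accumulated credit
        while q and q[0] <= deficits[i]:
            size = q.popleft()
            deficits[i] -= size
            sent.append((i, size))
    return sent
```

With weights of 3 and 1 and equal-sized packets, the first queue sends three packets for every one packet of the second, so per-round throughput tracks the configured weight ratio.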