Mechanisms Providing QoS

queue-limit value for the queue size. Be aware that if you set the queue size smaller than the shaper burst, the shaper will not be able to achieve the configured average rate. When the queue-limit command is not configured, the queue size is determined solely by the shaper burst.

Congestion Control & Avoidance

Describing Queue Size Control (Drop Tail)

Delay control and congestion avoidance let you manage how many packets build up in a queue. If an outgoing queue is empty when a packet is ready to be sent, the packet can be forwarded to the line immediately with minimal delay. But if 20 packets are already queued when a new packet arrives, the new packet must wait until those 20 packets are sent before it can go.
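
To give a rough sense of the delay involved, the following Python sketch estimates how long a new arrival waits behind 20 queued packets; the 1500-byte average packet size and 512 kbps link rate are assumed values chosen only for illustration.

# Rough queuing-delay estimate for a drop-tail queue (illustrative numbers only).
PACKET_SIZE_BYTES = 1500      # assumed average packet size
LINK_RATE_BPS = 512_000       # assumed link speed: 512 kbps
QUEUED_PACKETS = 20           # packets already waiting in the outgoing queue

# Time to serialize one packet onto the line.
per_packet_delay = (PACKET_SIZE_BYTES * 8) / LINK_RATE_BPS

# The new arrival must wait for every queued packet to be transmitted first.
waiting_time = QUEUED_PACKETS * per_packet_delay
print(f"Estimated wait: {waiting_time:.3f} s")    # about 0.469 s at 512 kbps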

Depending on the average size of the queued packets and the speed of the link, this last packet could be delayed considerably. When the queue limit is reached, newly arriving packets are not accepted into the queue and are dropped. The limit of the queue is set with the queue-limit command, as shown in the following example:

XSR(config)#policy-map droptail
XSR(config-pmap<droptail>)#class the_heat
XSR(config-pmap-c<the_heat>)#queue-limit 50
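
For illustration only, the following Python sketch models how a drop-tail queue configured with a limit of 50 packets behaves; it is a simplified model, not XSR code.

# Simplified drop-tail (tail-drop) queue model; not actual XSR code.
from collections import deque

QUEUE_LIMIT = 50              # matches the queue-limit 50 example above
queue = deque()

def enqueue(packet):
    """Accept the packet if there is room; otherwise drop it (tail drop)."""
    if len(queue) < QUEUE_LIMIT:
        queue.append(packet)
        return True
    return False              # queue full: the newly arriving packet is dropped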

Describing Random Early Detection

Random Early Detection (RED) is a congestion avoidance mechanism for adaptive applications (e.g., TCP/IP), which adjust their bandwidth usage based on network conditions. TCP uses a slow-start feature that initially sends only a few packets to test network conditions.

If the acknowledgements return indicating no packet loss, TCP considers the network capable of handling more traffic and increases its output rate. The protocol continues to do so until it detects that packets have been dropped and not delivered, at which point it considers the network congested and begins cutting back its output rate.
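
The following toy Python sketch caricatures this probe-and-back-off behavior for a single sender; the function name and window values are illustrative and do not come from any real TCP implementation.

# Toy model of TCP's probe-and-back-off behavior; not a real TCP implementation.
def next_window(cwnd, ssthresh, loss_detected):
    """Return the sender's next congestion window, in segments."""
    if loss_detected:
        return max(cwnd // 2, 1)      # congestion detected: cut the output rate back
    if cwnd < ssthresh:
        return cwnd * 2               # slow start: ramp up quickly
    return cwnd + 1                   # congestion avoidance: probe gently

# For example, next_window(8, 16, False) returns 16 (still in slow start).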

Because of TCP’s slow-start/fast-drop-off response to congestion, TCP/IP performance is choppy when the node or network is heavily loaded and no congestion avoidance is applied. When the node is congested and the outgoing queue fills up, subsequent packets (very likely from multiple TCP sessions) are dropped, and these drops in turn cause the corresponding TCP sessions to cut their output dramatically.

After a short delay, all sessions try to ramp up again using slow start, a process called Global Synchronization. The queue grows, congestion and packet drops recur, and the undesirable global synchronization repeats. The end result is a distinctive “peak and trough” traffic pattern in which the outgoing queue is full just before packets are dropped, and delay throughout the network is high and varies widely.

RED tries to avoid congestion by proactively dropping packets at random at an early sign of congestion (when the average queue size rises above a threshold). Because packets are dropped randomly, all TCP/IP sessions are eventually affected, and the treatment is fair to all sessions.

By dropping packets early, before the queue reaches its limit, RED starts to “throttle” the traffic sources before the queue grows too large. This helps limit delay, which is proportional to the number of packets in the queue, and helps avoid queue overflow and global TCP synchronization.

The random-detect command includes three parameters to configure RED for a queue: minimum threshold (MinThres), maximum threshold (MaxThres), and maximum drop probability (MaxProb). The drop probability of a packet is based on the average queue size and these three parameters. The calculation of the drop probability is illustrated below.
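
The following Python sketch shows the standard RED drop-probability calculation that matches this description; the exact averaging and drop behavior on the XSR may differ.

# Standard RED drop-probability calculation (sketch; XSR details may differ).
def red_drop_probability(avg_queue, min_thres, max_thres, max_prob):
    if avg_queue < min_thres:
        return 0.0                    # below MinThres: no early drops
    if avg_queue >= max_thres:
        return 1.0                    # above MaxThres: drop every arriving packet
    # Between the thresholds, the probability rises linearly from 0 to MaxProb.
    return max_prob * (avg_queue - min_thres) / (max_thres - min_thres)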
