
TNETX4090

ThunderSWITCH II 9-PORT 100-/1000-MBIT/S ETHERNET SWITCH

SPWS044E – DECEMBER 1997 – REVISED AUGUST 1999

adaptive performance optimization (APO)

Each Ethernet MAC incorporates APO logic, which can be enabled on an individual-port basis. When enabled, the MAC uses transmission pacing to enhance performance when connected to networks of other transmit-pacing-capable MACs. Adaptive performance pacing introduces delays into the normal transmission of frames, spacing out transmission attempts between stations and reducing the probability of collisions during heavy traffic (as indicated by frame deferrals and collisions), thereby increasing the chance of successful transmission.

When a frame is deferred or suffers a single, multiple, or excessive collision, the pacing counter is loaded with an initial value of 31. When a frame is transmitted successfully (without a deferral or collision), the pacing counter is decremented by 1, down to 0.

With pacing enabled, a new frame is permitted to attempt transmission immediately [that is, after one interpacket gap (IPG)] only if the pacing counter is 0. If the pacing counter is not 0, the transmission is delayed by the pacing delay, which is approximately four interframe gap delays.
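The pacing behavior can be summarized with a small software model. The following C sketch is illustrative only (the actual logic is implemented in each MAC's hardware); names such as apo_update, apo_pre_tx_gap, and the constants are assumptions made for the example.

    /* Hypothetical software model of the APO pacing rules described above. */
    #include <stdint.h>
    #include <stdbool.h>

    #define IPG_BIT_TIMES 96u   /* standard interpacket gap              */
    #define PACING_LOAD   31u   /* initial pacing-counter value          */
    #define PACING_IPGS    4u   /* pacing delay is ~4 interframe gaps    */

    typedef enum { TX_OK, TX_DEFERRED, TX_ONE_COLLISION,
                   TX_MANY_COLLISIONS, TX_EXCESSIVE_COLLISIONS } tx_result_t;

    static uint8_t pacing_counter;   /* one per MAC, 0..31 */

    /* Update the counter after each transmission completes. */
    void apo_update(tx_result_t result)
    {
        if (result == TX_OK) {
            if (pacing_counter > 0)
                pacing_counter--;          /* clean transmission: count down */
        } else {
            pacing_counter = PACING_LOAD;  /* deferral or collision: reload  */
        }
    }

    /* Gap to wait before the FIRST attempt of the next frame. */
    uint32_t apo_pre_tx_gap(bool apo_enabled)
    {
        if (!apo_enabled || pacing_counter == 0)
            return IPG_BIT_TIMES;               /* one normal IPG         */
        return PACING_IPGS * IPG_BIT_TIMES;     /* pacing delay (~4 IPGs) */
    }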

NOTE:

APO affects only the IPG preceding the first attempt at transmitting a frame. It does not affect the backoff algorithm for retransmitted frames.

interframe gap enforcement

The measurement reference for the 96-bit-time interpacket gap changes depending on frame traffic conditions. If a frame is transmitted successfully (without a collision), the 96 bit times are measured from Mxx_TXEN. If the frame suffered a collision, the 96 bit times are measured from Mxx_CRS.
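A minimal sketch of this rule is shown below; the timestamp inputs (txen_deassert_time, crs_deassert_time) are hypothetical placeholders for the instants at which Mxx_TXEN and Mxx_CRS go inactive, expressed in bit times.

    #include <stdint.h>
    #include <stdbool.h>

    #define IPG_BIT_TIMES 96u

    /* Returns the earliest bit-time at which the next transmission may begin. */
    uint32_t next_tx_time(bool prev_frame_collided,
                          uint32_t txen_deassert_time,
                          uint32_t crs_deassert_time)
    {
        /* No collision: measure 96 bit times from Mxx_TXEN going inactive.
         * Collision:    measure 96 bit times from Mxx_CRS going inactive.  */
        uint32_t reference = prev_frame_collided ? crs_deassert_time
                                                 : txen_deassert_time;
        return reference + IPG_BIT_TIMES;
    }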

backoff

The device implements the IEEE Std 802.3 binary exponential backoff algorithm.
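For reference, a minimal sketch of the standard truncated binary exponential backoff follows; the datasheet states only that the device implements the IEEE 802.3 algorithm in hardware, so the function name, slot-time constant, and use of rand() are assumptions for illustration.

    #include <stdint.h>
    #include <stdlib.h>

    #define SLOT_TIME_BITS 512u   /* one slot time, in bit times */

    /* Backoff delay (in bit times) before retransmission attempt
     * number 'attempt' (1-based), per truncated binary exponential backoff:
     * wait a random number of slot times in 0 .. 2^min(attempt,10) - 1.   */
    uint32_t backoff_bit_times(unsigned attempt)
    {
        unsigned k = attempt < 10 ? attempt : 10;        /* truncate exponent  */
        uint32_t slots = (uint32_t)(rand() % (1u << k)); /* 0 .. 2^k - 1 slots */
        return slots * SLOT_TIME_BITS;
    }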

receive versus transmit priority

The queue manager prioritizes receive and transmit traffic as follows:

•	Highest priority is given to frames that currently are being transmitted. This ensures that transmitting frames do not underrun.

•	Next priority is given to frames that are received if the free-buffer stack is not empty. This ensures that received frames are not dropped unless it is impossible to receive them.

•	Lowest priority is given to frames that are queued for transmission but have not yet started to transmit. These frames are promoted to the highest priority only when there is spare capacity on the memory bus.

•	The NM port receives the lowest priority to prevent frame loss during busy periods.

The memory bus has enough bandwidth to support the two highest priorities. The untransmitted-frame queues grow when frames received on different ports require transmission on the same port(s), and when frames are repeatedly received on ports that run at a higher speed than the ports on which they are transmitted. This is likely to be exacerbated by the reception of multicast frames, which typically require transmission on several ports. When the backlog grows to the point that the free-buffer stack is nearly empty, flow control is initiated (if it has been enabled) to limit further frame reception.
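The arbitration order described above can be modeled in a few lines of C. This is an illustrative sketch only; the enum names and the grant function are assumptions, not the device's internal implementation.

    typedef enum {
        REQ_NONE = 0,
        REQ_TX_NOT_STARTED,   /* queued for transmission, not yet started    */
        REQ_RX_FRAME,         /* received frame, free-buffer stack not empty */
        REQ_TX_IN_PROGRESS    /* frame currently being transmitted           */
    } mem_request_t;

    /* Grant the memory bus to the highest-priority outstanding request:
     * active transmissions first (to avoid underrun), then receptions
     * (to avoid drops), then not-yet-started transmissions.               */
    mem_request_t grant(int tx_in_progress, int rx_pending, int tx_queued)
    {
        if (tx_in_progress) return REQ_TX_IN_PROGRESS;
        if (rx_pending)     return REQ_RX_FRAME;
        if (tx_queued)      return REQ_TX_NOT_STARTED;
        return REQ_NONE;
    }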
