Single BUS vs. Crossbar Matrix

A single-BUS architecture is pretty simple: One BUS connects all the ports together. This setup creates a bandwidth problem called a blocking architecture, or what the networking industry calls over-subscription. Over-subscription is a condition in which the total bandwidth of all the ports on the switch is greater than the capacity of the switching fabric or backplane. As a result, data is held up at the port because the path through the switch is too small to carry it all. Examples of Cisco switches with a single-BUS architecture are the Cisco Catalyst 1900, 2820, 3000, and 5000 series.
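
To see the arithmetic behind over-subscription, consider the short Python sketch below. It simply compares the aggregate bandwidth of the ports against the capacity of the switching fabric; the port count, port speed, and fabric capacity are hypothetical values chosen for illustration, not the specifications of any particular Catalyst model.

# Hypothetical example: 24 ports at 100 Mbps feeding a 1200 Mbps backplane.
port_count = 24
port_speed_mbps = 100          # bandwidth of each port
fabric_capacity_mbps = 1200    # capacity of the switching fabric (backplane)

aggregate_port_bandwidth = port_count * port_speed_mbps   # 2400 Mbps
oversubscription_ratio = aggregate_port_bandwidth / fabric_capacity_mbps

print(f"Aggregate port bandwidth: {aggregate_port_bandwidth} Mbps")
print(f"Fabric capacity:          {fabric_capacity_mbps} Mbps")
print(f"Over-subscription ratio:  {oversubscription_ratio:.1f}:1")
# A ratio greater than 1:1 means the ports can offer more traffic than the
# fabric can carry, which is what makes the architecture blocking.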

A crossbar matrix is used to solve the problems of a single-BUS architecture by creating a multiple-BUS architecture in which more than one BUS services the switch ports. In this architecture, the buses can handle all the data the ports can possibly send, and more. It is sometimes referred to as a non-blocking architecture, and it requires a very sophisticated arbitration scheme.

Tip The switching fabric is the “highway” the data takes from the point of entry to the port or ports from which the data exits.

Each switch employs some kind of queuing method in order to solve blocking problems. An Ethernet interface may receive data when the port does not have access to the BUS. In this situation, the port has a buffer in which it stores the frame it receives until the BUS can process it. The switch uses queuing to determine which frame will be processed next. Let’s look at the three queuing components: input queuing, output queuing, and shared buffering.

Input Queuing

Input queuing is the simpler of the two basic forms of queuing. The frame is held in the port’s buffer until its turn to enter the BUS arrives. When the frame enters the BUS, the exit port must be free to allow the frame to leave the switch. If another frame is already exiting that port, a condition called head-of-line blocking occurs: the frame is dropped because it was blocked by other data.
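
To make head-of-line blocking concrete, here is a minimal Python sketch of an input-queued port. The frame format and the set of busy exit ports are invented for illustration; the point is simply that a frame stuck at the head of the FIFO prevents the frames behind it from reaching exit ports that are actually free.

from collections import deque

# Hypothetical input queue: each frame records only its destination (exit) port.
input_queue = deque([
    {"id": 1, "exit_port": 2},   # head of line, wants port 2
    {"id": 2, "exit_port": 3},   # wants port 3, which is free
    {"id": 3, "exit_port": 4},   # wants port 4, which is free
])

busy_exit_ports = {2}            # port 2 is currently sending another frame

def service_input_queue(queue, busy_ports):
    """Forward frames strictly in FIFO order, as pure input queuing does."""
    forwarded = []
    while queue:
        frame = queue[0]                       # only the head may be serviced
        if frame["exit_port"] in busy_ports:
            # Head-of-line blocking: frames 2 and 3 are stuck behind frame 1
            # even though their own exit ports (3 and 4) are idle.
            break
        forwarded.append(queue.popleft())
    return forwarded

sent = service_input_queue(input_queue, busy_exit_ports)
print("Forwarded:", [f["id"] for f in sent])                  # []
print("Stuck in queue:", [f["id"] for f in input_queue])      # [1, 2, 3]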

Output Queuing

Output queuing can be used with input queuing; it allows the frame to be buffered on the outbound port if other data is in the way. This approach resolves head-of-line blocking, but head-of-line blocking can still occur if a large burst of frames arrives. The problem of large bursts can be resolved by using shared buffering. All the Cisco Catalyst switches (with the exception of the 1900 and 2820 series) use both input and output queuing.
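
The following sketch extends the previous one with a hypothetical per-port output buffer: instead of dropping a frame whose exit port is busy, the switch parks it in that port’s outbound queue and lets the frames behind it proceed. The buffer depth of four frames is an arbitrary value chosen for illustration; a real switch sizes these buffers in hardware.

from collections import deque

OUTPUT_BUFFER_DEPTH = 4          # arbitrary illustrative limit per exit port
output_queues = {port: deque() for port in range(1, 9)}

def forward_frame(frame, busy_ports):
    """Queue the frame on its exit port; drop it only if that buffer is full."""
    exit_port = frame["exit_port"]
    queue = output_queues[exit_port]
    if exit_port in busy_ports:
        if len(queue) >= OUTPUT_BUFFER_DEPTH:
            return "dropped"     # a large enough burst can still overflow the buffer
        queue.append(frame)
        return "buffered"
    return "transmitted"

busy = {2}
print(forward_frame({"id": 1, "exit_port": 2}, busy))  # buffered, no longer blocks others
print(forward_frame({"id": 2, "exit_port": 3}, busy))  # transmitted immediately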

Shared Buffering

Although there is no sure way to stop head-of-line blocking, shared buffering can be used in a switch as a safeguard. Shared buffering is a derivative of output queuing and provides each port with access to one large common buffer instead of smaller, individual buffering spaces. When a frame is placed in this buffer, the frame is later extracted from the shared memory buffer and forwarded out its exit port. This method is used on the 1900 and 2820 series of Cisco Catalyst switches.
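
As a final sketch, assume a single memory pool shared by all ports rather than a fixed buffer per port; the pool size, frame lengths, and helper functions below are hypothetical and exist only to illustrate the idea. A burst aimed at one port can borrow buffer space that idle ports are not using, which is what makes shared buffering more tolerant of bursts than fixed per-port output queues.

from collections import deque

SHARED_POOL_BYTES = 64_000       # hypothetical size of the shared buffer pool
pool_free = SHARED_POOL_BYTES
shared_buffer = deque()          # frames from all ports wait in one common pool

def buffer_frame(frame_id, exit_port, length):
    """Store a frame in the shared pool until its exit port is free."""
    global pool_free
    if length > pool_free:
        return "dropped"         # the whole pool, not one port's slice, must fill first
    pool_free -= length
    shared_buffer.append((frame_id, exit_port, length))
    return "buffered"

def drain_to_port(port):
    """Extract and forward every buffered frame destined for a now-free port."""
    global pool_free
    for entry in list(shared_buffer):
        if entry[1] == port:
            shared_buffer.remove(entry)
            pool_free += entry[2]            # reclaim the memory after forwarding

print(buffer_frame(1, exit_port=2, length=1500))   # buffered
drain_to_port(2)                                   # frame 1 forwarded, memory reclaimed
print(pool_free == SHARED_POOL_BYTES)              # True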

ASICs

The ASICs shown in Figure 4.1 are used in the Catalyst 5000 series Supervisor Engine and an Ethernet Module. Let’s take a look at each:

Encoded Address Recognition Logic (EARL) ASIC

Encoded Address Recognition Logic Plus (EARL+) ASIC

Synergy Advanced Interface and Network Termination (SAINT) ASIC
