Combining Switching Methods
To resolve the problems associated with the switching methods discussed so far, a new method was
developed. Some switches, such as the Cisco Catalyst 1900, 2820, and 3000 series, begin with either
cut-through or FragmentFree switching. Then, as frames are received and forwarded, the switch also checks
each frame’s CRC. Because forwarding begins as soon as the destination MAC address has been read, a frame
is forwarded before its CRC is checked, even if the CRC turns out to be bad. The switch performs this check
so that, if too many bad frames are forwarded, it can take a proactive role and change from cut-through mode
to store-and-forward mode. This method, along with the development of high-speed processors, has reduced
many of the problems associated with switching.
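To make the idea concrete, here is a minimal sketch in Python of this error-sensing behavior. The class, the helper functions, and the threshold and window values are all hypothetical placeholders; real Catalyst switches implement the equivalent logic in hardware and firmware.

# Illustrative sketch of adaptive (error-sensing) switching.
# The threshold, window size, and all names here are assumptions for
# illustration; real Catalyst switches do this in hardware and firmware.

CRC_ERROR_THRESHOLD = 20   # bad frames tolerated per sampling window (assumed)
SAMPLE_WINDOW = 1000       # frames examined per sampling window (assumed)

def crc_ok(frame):
    """Placeholder CRC check; a real switch computes this in hardware."""
    return frame.get("crc_valid", True)

def transmit(frame):
    """Placeholder for putting the frame on the egress wire."""
    pass

class AdaptiveSwitchPort:
    def __init__(self):
        self.mode = "cut-through"   # start in the faster mode
        self.frames_seen = 0
        self.crc_errors = 0

    def forward_frame(self, frame):
        if self.mode == "cut-through":
            # Forward as soon as the destination MAC is read, then verify
            # the CRC after the frame has already gone out.
            transmit(frame)
            if not crc_ok(frame):
                self.crc_errors += 1
        else:
            # Store-and-forward: buffer the whole frame and forward it
            # only if the CRC check passes.
            if crc_ok(frame):
                transmit(frame)

        self.frames_seen += 1
        if self.frames_seen >= SAMPLE_WINDOW:
            # Too many bad frames forwarded: fall back to store-and-forward.
            if self.crc_errors > CRC_ERROR_THRESHOLD:
                self.mode = "store-and-forward"
            self.frames_seen = 0
            self.crc_errors = 0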
Only the Catalyst 1900, 2820, and 3000 series switches support cut-through and FragmentFree switching.
You might wonder why the faster Catalyst series switches do not support these seemingly faster methods of
switching. The answer is that store-and-forward switching is not necessarily slower than cut-through
switching. When switches were first introduced, the two modes performed quite differently; with better
processors and integrated-circuit technology, store-and-forward switching can now operate at the physical
limits of the wire. As a result, the end user sees no difference between the switching methods.
Switched Network Bottlenecks
This section will take you step by step through how bottlenecks affect performance, some of the causes of
bottlenecks, and things to watch out for when designing your network. A bottleneck is a point in the network
at which data slows due to collisions and too much traffic directed to one resource node (such as a server). In
these examples, I will use fairly small, simple networks so that you can learn the basic strategies and then
apply them to larger, more complex networks.
Let’s start small and slowly increase the network size. We’ll take a look at a simple way of understanding how
switching technology increases the speed and efficiency of your network. Bear in mind, however, that
increasing the speed of your physical links increases the throughput delivered to your resource nodes; it
doesn’t always increase the overall speed of your network. This increase in traffic to your resource nodes may
itself create a bottleneck.
Figure 1.6 shows a network that has been upgraded to 100Mbps links to and from the switch for all the nodes.
Because every device can send data to and from the switch at 100Mbps, or wire speed, any link that receives
data from multiple nodes must run at a higher speed than the other links in order to process and fulfill the
data requests without creating a bottleneck. In this network, however, all the nodes, including the file
servers, are connected at 100Mbps, so the links to the file servers, which are the target of the data transfers
from all the other devices, become a bottleneck in the network.
Figure 1.6: A switched network with only two servers. Notice that the sheer number of clients sending data to
the servers can overwhelm the cable and slow the data traffic.
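To put rough numbers on the problem, the following Python sketch estimates the load converging on one server link. The client count, link speeds, and average utilization are assumed values chosen only to illustrate the arithmetic.

# Rough oversubscription estimate for a single server link.
# Every value below is an assumption used for illustration, not a measurement.

num_clients = 20            # clients sending data to the same server
client_link_mbps = 100      # each client's link speed to the switch
server_link_mbps = 100      # the server's link speed to the switch
avg_utilization = 0.25      # fraction of each client link actually in use

offered_load_mbps = num_clients * client_link_mbps * avg_utilization
oversubscription = offered_load_mbps / server_link_mbps

print(f"Offered load toward the server: {offered_load_mbps:.0f} Mbps")
print(f"Server link capacity:           {server_link_mbps} Mbps")
print(f"Oversubscription ratio:         {oversubscription:.1f}:1")

# With these assumed numbers, the clients offer 500 Mbps toward a
# 100 Mbps server link, a 5:1 bottleneck at the server connection.

The arithmetic makes the earlier point clear: only the links that aggregate traffic from many nodes, such as the server links, need to be faster than the client links; upgrading every link equally just pushes more traffic toward the same choke point.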