We call a switch a blocking switch when its bus or internal components cannot handle the theoretical
maximum throughput of all the input ports combined. There is ongoing debate over whether every switch
should be designed as a non-blocking switch; for now, though, that remains a dream, given the
current pricing of non-blocking switches.
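The test reduces to simple arithmetic: add up the rated speed of every port and compare the total with the capacity of the switching fabric. The short Python sketch below makes that comparison explicit; the port counts and fabric figures are hypothetical, chosen only for illustration.

# A rough check for whether a switch is blocking: sum the rated speed of
# every input port and compare it with the capacity of the switching fabric.
# Port counts and fabric capacities here are hypothetical.

def is_blocking(ports, port_speed_mbps, fabric_capacity_mbps):
    aggregate = ports * port_speed_mbps   # theoretical maximum of all ports combined
    return aggregate > fabric_capacity_mbps

# 24 ports of 100BaseT can generate up to 2400Mbps, so a 1200Mbps fabric blocks:
print(is_blocking(24, 100, 1200))   # True  -> blocking
print(is_blocking(24, 100, 2400))   # False -> non-blocking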
Let’s get more sophisticated and introduce another solution: implement two physical links between
the two switches and use full-duplex technology. Full duplex essentially means that each port has
separate transmit and receive paths; data is sent on one pair of wires and received on another. This setup not
only virtually guarantees a collision-free connection, but can also push utilization to almost 100 percent on
each link.
Each full-duplex link now gives you 200 percent of its half-duplex throughput. If you had 10Mbps on the wire
at half duplex, implementing full duplex gives you 20Mbps flowing through the wires. The same goes for a
100BaseT network: instead of 100Mbps, you now have a 200Mbps link.
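To make the arithmetic concrete, here is a minimal Python sketch of that doubling. The link speeds are the figures used above; the function itself is only an illustration.

# Effective throughput of a link: half duplex shares one path for send and
# receive, while full duplex carries the rated speed in each direction at once.

def effective_mbps(link_speed_mbps, full_duplex):
    return link_speed_mbps * (2 if full_duplex else 1)

print(effective_mbps(10, full_duplex=False))   # 10.0  -> half-duplex 10BaseT
print(effective_mbps(10, full_duplex=True))    # 20.0  -> full duplex doubles it
print(effective_mbps(100, full_duplex=True))   # 200.0 -> 100BaseT becomes a 200Mbps link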
Tip If the interfaces on your resource nodes support full duplex, enabling it there can also serve as a
secondary solution for relieving congestion at your servers.
Almost every Cisco switch offers acceptable throughput and will perform well at the layer of the Cisco
hierarchical switching model for which it was designed. Implementing VLANs has become a popular
solution for breaking a large, flat network into smaller broadcast domains.
Internal Route Processor vs. External Route Processor
Routing between VLANs has been a challenging problem to overcome. To route between VLANs,
you must use a Layer 3 route processor, or router. There are two types of route processors: external
and internal. An external route processor uses a separate, external router to route data from one VLAN
to another (for example, a router attached to the switch by a trunk link). An internal route processor uses
modules and cards located in the switch itself to route between VLANs.
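To see why a Layer 3 device is needed at all, recall that each VLAN normally maps to its own IP subnet. The short Python sketch below (the addresses and VLAN numbers are hypothetical) shows the decision a sender faces: destinations inside its own subnet can be reached directly, while anything else must be handed to a route processor.

import ipaddress

# Hypothetical addressing: VLAN 10 -> 10.1.10.0/24, VLAN 20 -> 10.1.20.0/24.
vlan10_subnet = ipaddress.ip_network("10.1.10.0/24")

# A sender in VLAN 10 checks whether each destination lies in its own subnet.
for dst in ("10.1.10.5", "10.1.20.7"):
    if ipaddress.ip_address(dst) in vlan10_subnet:
        print(dst, "-> deliver directly within VLAN 10")
    else:
        print(dst, "-> hand off to the route processor (default gateway)")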
Now that you have a pretty good idea how a network should be designed and how to monitor and control
bottlenecks, let’s take a look at the general traffic rule and how it has changed over time.
The Rule of the Network Road
Network administrators and designers have traditionally strived to build networks around the 80/20 rule.
Under this rule, a designer would lay out the network so that 80 percent of the traffic stayed on
local segments and only 20 percent of the traffic crossed the network backbone.
This was an effective design during the early days of networking, when the majority of LANs were
departmental and most traffic was destined for data that resided on the local servers. However, it is not a good
design in today’s environment, where the majority of traffic is destined for enterprise servers or the Internet.
A switch’s ability to create multiple data paths and provide swift, low-latency connections allows network
administrators to put up to 80 percent of the traffic on the backbone without massively overloading
the network. This ability allows for the introduction of many bandwidth-intensive applications, such as network
video, video conferencing, and voice communications.
Multimedia and video applications can demand 1.5Mbps or more of continuous bandwidth. In a
typical environment, users can rarely obtain this much bandwidth when they share an average 10Mbps network
with dozens of other people, and video looks jerky if the data rate is not sustained. Supporting these
applications requires a means of providing greater throughput, and the ability of switches to provide dedicated
bandwidth at wire speed meets this need.
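A quick back-of-the-envelope calculation in Python shows the problem. The 1.5Mbps figure is the one cited above; the user counts are hypothetical.

SHARED_MBPS = 10.0    # the shared 10Mbps segment described above
STREAM_MBPS = 1.5     # continuous bandwidth one video stream may demand

for users in (2, 6, 24):
    share = SHARED_MBPS / users            # best-case even split, ignoring collisions
    verdict = "sustains" if share >= STREAM_MBPS else "cannot sustain"
    print(f"{users:2d} users -> {share:.2f}Mbps each: {verdict} a {STREAM_MBPS}Mbps stream")

Even in the best case, two dozen users sharing one 10Mbps segment get well under half a megabit each, whereas a switched port dedicates the full 10 or 100Mbps to a single node.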