Chapter 2 The Cluster Platform 4500/3 System
■ Two FC-100 FC-AL hubs, with three installed GBICs in each hub
Connectivity
■ Cluster interconnect: The cluster interconnect uses Ethernet patch cables (no
Ethernet switch required), with redundant qfe 100BASE-T ports (qfe0 and qfe4)
on two separate SBus controllers to avoid a controller single point of failure.
■ Public networks: The cluster nodes' main network connection is implemented
using the on-board hme0 (100BASE-T) port as the primary interface, with hme1
(100BASE-T) as the failover interface; the primary and failover interfaces
reside on separate controllers. Two more hme ports, hme2 and hme3, are
available to implement additional production networks.
■ Storage access: Access to the disk arrays is achieved through proper
configuration of the FC-AL from each node.
■ Administration networks: Sun StorEdge™ Component Manager 2.1, included
with your Cluster Platform 4500/3, is used to administer the disk arrays.
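As a hedged illustration only, the interface roles described above would typically be reflected in the Solaris /etc/hostname.* files that plumb each port at boot on one node. The interface names (hme0, qfe0, qfe4) come from this chapter; the hostnames below are placeholder assumptions, not values shipped with the platform:

```shell
# Hypothetical /etc/hostname.* files for one cluster node (Solaris).
# Interface roles follow this chapter; the hostnames are assumptions.

# /etc/hostname.hme0  - primary public network interface
node1

# /etc/hostname.qfe0  - first private interconnect port (SBus controller 1)
node1-priv1

# /etc/hostname.qfe4  - second private interconnect port (SBus controller 2)
node1-priv2
```

In a failover arrangement such as the one described, the backup port (hme1) would normally not get its own /etc/hostname.* file; it stands by to take over the primary interface's address.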
Network Administration
The administration network provides access to the Ethernet hub.
Miscellaneous Hardware
The following miscellaneous hardware is used:
■ Two one-meter RS-232C (DB-25/RJ-45) serial cables (Part No. 530-2151) for
the cluster node consoles
■ One one-meter RS-232C (RJ-45/RJ-45) serial cable (Part No. 530-9524) to
connect the terminal concentrator to the management server
■ Power cables provided with the power sequencers in the expansion cabinet
■ Terminal concentrator mounting bracket
■ Ethernet hub mounting brackets
By default, the I/O board slots in the servers are labeled 1, 3, 5, and 7 (FIGURE 2-2).