CHAPTER 2

Cabling the Cluster Hardware

This chapter provides instructions on how to cable your system hardware for a cluster configuration.

NOTE: The Peripheral Component Interconnect (PCI) slot placements for the network interface controllers (NICs), host bus adapters, and redundant arrays of independent disks (RAID) controllers shown in the illustrations in this chapter are examples only. See Appendix A, “Upgrading to a Cluster Configuration,” for specific recommendations for placing PCI expansion cards in your nodes.

Cluster Cabling

The Dell PowerEdge Cluster FE100 consists of two PowerEdge 6300, 6350, or 4300 server systems and one PowerVault 65xF storage system. These components are interconnected with the following cables:

A copper or optical fiber cable connects the QLogic host bus adapter(s) in each PowerEdge system to the PowerVault 65xF storage system.

For QLogic QLA-2100 host bus adapters: A copper cable containing a high-speed serial data connector (HSSDC) on one end and a DB-9 connector on the other connects the host bus adapter to the storage processor.

For QLogic QLA-2100F host bus adapters: An optical fiber cable containing SC connectors on each end connects the host bus adapter to a media interface adapter (MIA) attached to the storage processor.

If you are using Disk-Array Enclosures (DAEs) with your PowerVault system, 0.3-meter (m) serial cables with DB-9–to–DB-9 connectors are required to connect the storage processors to the DAE(s).

NOTE: Do not connect an unused interface cable to a DAE’s link control card (LCC) port. Unnecessary connections can add excess noise to the system’s signal loop.

A crossover Category 5 Ethernet cable connects the NICs in each PowerEdge system (a simple way to check this link after cabling is sketched at the end of this list).

Power cables are connected according to the safety requirements for your region. Contact your Dell sales representative for specific power cabling and distribution requirements for your region.
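
Once the crossover Ethernet cable is connected, it can be useful to confirm that the two nodes can actually reach each other over the private link before you install the cluster software. The short Python sketch below is illustrative only and is not part of the Dell installation procedure; the peer address (10.0.0.2) and port (5001) are placeholder values standing in for whatever static addresses you assign to the cluster NICs.

    # Illustrative link check for the private cluster interconnect (not part of
    # the Dell procedure). Run with --listen on one node first, then run the
    # script without arguments on the other node. Addresses are placeholders.
    import socket
    import sys

    PEER_NODE_IP = "10.0.0.2"   # example static address of the other node's cluster NIC
    PORT = 5001                 # arbitrary free TCP port used only for this test

    def run_listener():
        # First node: wait for a single connection over the crossover link.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("", PORT))
            srv.listen(1)
            conn, addr = srv.accept()
            with conn:
                print("Private link OK: connection from", addr[0])
                conn.sendall(b"cluster-link-ok")

    def run_client():
        # Second node: connect to the listener across the crossover cable.
        with socket.create_connection((PEER_NODE_IP, PORT), timeout=5) as conn:
            print("Private link OK:", conn.recv(64).decode())

    if __name__ == "__main__":
        run_listener() if "--listen" in sys.argv else run_client()

If the client prints the confirmation message, the private interconnect is cabled correctly; if the connection times out, recheck the crossover cable and the NIC addresses.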
