

CHAPTER 2

Cabling the Cluster Hardware

This chapter provides instructions on how to cable your system hardware for a cluster configuration.

NOTE: The Peripheral Component Interconnect (PCI) slot placements shown for the network interface controllers (NICs), host bus adapters, and redundant arrays of independent disks (RAID) controllers in the illustrations in this chapter are examples only. See Appendix A, “Upgrading to a Cluster Configuration,” for specific recommendations on placing PCI expansion cards in your nodes.

Cluster Cabling

The Dell PowerEdge Cluster FE100 consists of two PowerEdge 6300, 6350, or 4300 server systems and one PowerVault 65xF storage system. These components are interconnected with the following cables:

•   A copper or optical fiber cable connects the QLogic host bus adapter(s) in each PowerEdge system to the PowerVault 65xF storage system:

    •   For QLogic QLA-2100 host bus adapters: A copper cable with a high-speed serial data connector (HSSDC) on one end and a DB-9 connector on the other connects the host bus adapter to the storage processor.

    •   For QLogic QLA-2100F host bus adapters: An optical fiber cable with SC connectors on each end connects the host bus adapter to a media interface adapter (MIA) attached to the storage processor.

•   If you are using Disk-Array Enclosures (DAEs) with your PowerVault system, 0.3-meter (m) serial cables with DB-9–to–DB-9 connectors are required to connect the storage processors to the DAE(s).

NOTE: Do not connect an unused interface cable to a DAE’s link control card (LCC) port. Unnecessary connections can add excess noise to the system’s signal loop.

•   A crossover Category 5 Ethernet cable connects the NICs in each PowerEdge system. (A quick way to verify this link after cabling is sketched after this list.)

•   Power cables are connected according to the safety requirements for your region. Contact your Dell sales representative for specific power cabling and distribution requirements for your region.
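Once the crossover cable is connected and each node’s cluster NIC has been assigned a static IP address (see “Configuring the Cluster NICs”), you can confirm that the private link is live. The following minimal Python sketch is illustrative only and is not part of Dell’s documented procedure; the address 10.0.0.2 is an assumption, and you should substitute the static IP address you assigned to the other node’s cluster NIC.

    # Illustrative sketch only -- not part of Dell's documented procedure.
    # Checks whether the peer node answers over the private (crossover)
    # Ethernet link once the NICs are cabled and addressed.
    import subprocess

    PRIVATE_PEER_IP = "10.0.0.2"  # assumed address; use the static IP
                                  # assigned to the peer node's cluster NIC

    def link_is_up(peer_ip: str) -> bool:
        """Return True if the peer answers one ICMP echo request."""
        # "-n 1" is the Windows ping option for a single echo request;
        # on Unix-like systems use "-c 1" instead.
        result = subprocess.run(["ping", "-n", "1", peer_ip],
                                capture_output=True, text=True)
        return result.returncode == 0

    if __name__ == "__main__":
        state = "up" if link_is_up(PRIVATE_PEER_IP) else "down"
        print(f"Private cluster link to {PRIVATE_PEER_IP} appears {state}")

A failed check at this stage usually points to a cabling problem (for example, a straight-through cable used where a crossover cable is required) rather than a software fault.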

