2. Cable the system hardware for clustering.
If you are using Dell PowerVault Fibre Channel switches, see the Dell PowerEdge Cluster
3. If you are using
4. Perform the
5. Install and configure the Windows NT 4.0 Server, Enterprise Edition operating system on each node.
6. Configure the public and private NIC interconnects in each node, and place the interconnects on separate Internet Protocol (IP) subnetworks using static IP addresses.
NOTES: “Public” refers to the NIC used for client connections. “Private” refers to the dedicated cluster interconnect.
If you are using Giganet cluster local area network (cLAN) Host Adapters or a Giganet cLAN Cluster Switch, see “Using Giganet for Cluster Interconnect (PowerEdge Cluster FL100/FL200)” in Chapter 4 for more information.
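The separate-subnet requirement in step 6 can be sanity-checked with a short Python sketch. The addresses below are hypothetical examples, not values from this guide; substitute the static IP addresses assigned in your own cluster.

```python
import ipaddress

# Hypothetical static addresses for illustration only.
public_nic = ipaddress.ip_interface("192.168.1.10/24")   # NIC used for client connections
private_nic = ipaddress.ip_interface("10.0.0.10/24")     # dedicated cluster interconnect

# The public and private interconnects must sit on separate IP subnetworks.
assert public_nic.network != private_nic.network, \
    "Public and private NICs must be on different subnets"

print(public_nic.network)   # 192.168.1.0/24
print(private_nic.network)  # 10.0.0.0/24
```

Run the same check with node B's addresses; both nodes' private NICs share one subnet, both public NICs share another.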
7. Install the device driver for the integrated video controller.
8. Install Windows NT Service Pack 6a or later.
NOTE: See the Dell PowerEdge Cluster FE100/FL100 and FE200/FL200 Platform Guide for more information on the latest supported service pack.
9. Install the miniport driver for the Fibre Channel HBAs in each node.
10. Install the QLogic Fibre Channel Configuration software.
11. Install the storage management software and failover driver that is appropriate for your storage system.
For the PowerEdge Cluster FE100/FL100, perform the following steps:
a. Install Dell OpenManage Application Transparent Failover (ATF) software on each node and reboot.
b. Install Dell OpenManage Managed Node (Data Agent) on each node.
c. Install Dell OpenManage Data Supervisor or Dell OpenManage Data Administrator on node A.
For the PowerEdge Cluster FE200/FL200, perform the following steps:
a. Install QLDirect on each node and reboot.
b. Set the failover path in the QLogic Fibre Channel Configuration software on each node.
c. Reboot each node.