Dell 4200 Manual

High-Level Software Configuration

When the SCSI drives and RAID levels have been set up, Windows NT Server Enterprise Edition can be installed and configured. A number of operating system configurations must be set during the installation to enable clustering. These configuration requirements are described in the Microsoft Windows NT Server Enterprise Edition Administrator's Guide and Release Notes. The following subsections briefly discuss these configurations.

Installing Intel LANDesk® Server Manager

After installing the Windows NT Enterprise Edition operating system, install LANDesk prior to applying the Service Pack to your system. Refer to the LANDesk Server Manager Setup Guide for installation instructions.

Choosing a Domain Model

Cluster nodes can be set up in three possible configurations: as two stand-alone member servers, as two backup domain controllers (BDCs), or as a primary domain controller (PDC) and a BDC. The first two configurations require an existing domain for the servers to join. The PDC/BDC configuration establishes a new domain in which one server is the primary domain controller and the other server is the backup domain controller. Any of the three configurations can be chosen for clustering, but the recommended default is to make each cluster server a member server in an existing domain. This relieves the cluster nodes of the processing overhead involved in authenticating user logons.

Static IP Addresses

The Microsoft Cluster Server software requires one static Internet Protocol (IP) address for the cluster and one static IP address for each disk resource group. A static IP address is an Internet address that a network administrator assigns exclusively to a system or a resource. The address assignment remains in effect until the network administrator changes it.

IPs and Subnet Masks

For the node-to-node network interface controller (NIC) connection on the PowerEdge Cluster, the default IP address 10.0.0.1 is assigned to the first node, and the default address 10.0.0.2 is assigned to the second node. The default subnet mask is 255.0.0.0.
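The arithmetic behind these defaults can be checked with Python's standard `ipaddress` module. This is an illustrative sketch (the module choice is ours, not the manual's) showing that the two default node addresses fall in the same subnet:

```python
import ipaddress

# Default node-to-node addresses and subnet mask from the text above
node1 = ipaddress.ip_interface("10.0.0.1/255.0.0.0")
node2 = ipaddress.ip_interface("10.0.0.2/255.0.0.0")

# With the 255.0.0.0 mask, both addresses fall in the same network,
# so the nodes can reach each other directly over the dedicated link.
assert node1.network == node2.network
print(node1.network)  # 10.0.0.0/8
```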

Configuring Separate Networks on a Cluster

Two network interconnects are strongly recommended for a cluster configuration to eliminate any single point of failure that could disrupt intracluster communication.

Separate networks can be configured on a cluster by redefining the network segment of the IP address assigned to the NICs residing in the cluster nodes.

For example, suppose two NICs reside in each of two cluster nodes. The NICs in the first node have the following IP addresses and configuration:

NIC1:
    IP address: 143.166.110.2
    Default gateway: 143.166.111.3

NIC2:
    IP address: 143.166.111.3
    Default gateway: 143.166.110.2

The NICs in the second node have the following IP addresses and configuration:

NIC1:
    IP address: 143.166.110.4
    Default gateway: 143.166.111.5

NIC2:
    IP address: 143.166.111.5
    Default gateway: 143.166.110.4

IP routing is enabled, and the subnet mask is 255.255.255.0 on all NICs.

The NIC1s of the two machines establish one network segment, and the NIC2s create another. In each system, one NIC is defined as the default gateway for the other NIC.
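The segment arrangement described above can be verified with Python's standard `ipaddress` module. This sketch (our illustration, not part of the manual) applies the 255.255.255.0 subnet mask to the example addresses and confirms that the NIC1s share one network, the NIC2s share another, and the two networks are distinct:

```python
import ipaddress

MASK = "255.255.255.0"  # subnet mask used on all NICs in the example

# Addresses from the example above
node1_nic1 = ipaddress.ip_interface(f"143.166.110.2/{MASK}")
node1_nic2 = ipaddress.ip_interface(f"143.166.111.3/{MASK}")
node2_nic1 = ipaddress.ip_interface(f"143.166.110.4/{MASK}")
node2_nic2 = ipaddress.ip_interface(f"143.166.111.5/{MASK}")

# The NIC1s share one network segment and the NIC2s share another...
assert node1_nic1.network == node2_nic1.network
assert node1_nic2.network == node2_nic2.network
# ...and the two segments are distinct, so no single network failure
# severs both intracluster communication paths.
assert node1_nic1.network != node1_nic2.network

print(node1_nic1.network)  # 143.166.110.0/24
print(node1_nic2.network)  # 143.166.111.0/24
```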
