
Cluster Nodes

Table 1-1 lists the hardware requirements for the cluster nodes.

Table 1-1. Cluster Node Requirements

Cluster nodes: A minimum of two identical Dell™ PowerEdge™ servers is required. The maximum number of supported nodes depends on the variant of the Windows Server operating system used in your cluster and on the physical topology in which the storage system and nodes are interconnected.
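Because the nodes must be identical, a pre-deployment inventory comparison can catch mismatches early. The sketch below is illustrative only; the field names and sample values (model, ram_gb, hba_ports) are hypothetical and would in practice come from your own systems-management tooling:

```python
# Minimal sketch: flag hardware differences between candidate cluster
# nodes. Field names and sample values are hypothetical; collect real
# inventories from each node with your systems-management tooling.

def find_mismatches(inventories):
    """Compare each node's inventory against the first node's."""
    baseline = next(iter(inventories.values()))
    return [
        (node, field, value, baseline.get(field))
        for node, inv in inventories.items()
        for field, value in inv.items()
        if value != baseline.get(field)
    ]

nodes = {
    "node1": {"model": "PowerEdge 2950", "ram_gb": 16, "hba_ports": 2},
    "node2": {"model": "PowerEdge 2950", "ram_gb": 8,  "hba_ports": 2},
}

for node, field, got, want in find_mismatches(nodes):
    print(f"{node}: {field} is {got!r} but the first node has {want!r}")
```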

RAM: The variant of the Windows Server operating system that is installed on your cluster nodes determines the minimum required amount of system RAM.
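Because the RAM minimum depends on the operating system variant, this guide gives no single number. As a minimal pre-flight sketch, a check could compare installed memory against whatever minimum applies to your variant; the REQUIRED_GIB threshold below is a placeholder, not a value from this guide, and the script assumes the third-party psutil package is installed:

```python
# Sketch: check installed RAM against a required minimum. The threshold
# is a placeholder; substitute the minimum for your Windows Server variant.
import psutil

REQUIRED_GIB = 4  # placeholder; see your OS variant's documentation

total_gib = psutil.virtual_memory().total / 2**30
if total_gib < REQUIRED_GIB:
    print(f"FAIL: {total_gib:.1f} GiB installed, {REQUIRED_GIB} GiB required")
else:
    print(f"OK: {total_gib:.1f} GiB installed")
```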

HBA ports: Two Fibre Channel HBAs per node, unless the server employs an integrated or supported dual-port Fibre Channel HBA. Where possible, place the HBAs on separate PCI buses to improve availability and performance.
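A quick sanity check can confirm the port count and flag separate HBAs that share a PCI bus. The port list below is hypothetical; a real check would read bus assignments from the server's slot documentation or OS device inventory:

```python
# Sketch: basic checks on Fibre Channel HBA ports. The port list is
# hypothetical. Two ports may come from two single-port HBAs or from
# one supported dual-port HBA.

ports = [
    {"hba": "HBA0", "pci_bus": 1},
    {"hba": "HBA1", "pci_bus": 1},  # same bus as HBA0 in this sample
]

if len(ports) < 2:
    print("FAIL: two Fibre Channel HBA ports per node are required")
elif len({p["pci_bus"] for p in ports}) == 1 and len({p["hba"] for p in ports}) > 1:
    print("WARNING: separate HBAs share one PCI bus; placing them on "
          "separate buses improves availability and performance")
else:
    print("OK: HBA port layout looks reasonable")
```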

NICs (public and private networks): At least two NICs: one NIC for the public network and another NIC for the private network.
NOTE: It is recommended that the NICs on each public network be identical and that the NICs on each private network be identical.
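A minimal sketch of the NIC count check follows, again assuming the third-party psutil package; which interface serves the public network and which serves the private network remains an administrative choice:

```python
# Sketch: confirm a node has at least two active NICs, one each for the
# public and private networks. Loopback naming varies by OS, so the
# filter below is deliberately crude.
import psutil

up_nics = [
    name for name, stats in psutil.net_if_stats().items()
    if stats.isup and name != "lo" and not name.startswith("Loopback")
]

if len(up_nics) < 2:
    print(f"FAIL: found {len(up_nics)} active NIC(s); at least two are "
          "required (public + private)")
else:
    print(f"OK: active NICs: {', '.join(up_nics)}")
```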

Internal disk controller: One controller connected to at least two internal hard drives for each node. Use any supported RAID controller or disk controller. Two hard drives are required for mirroring (RAID 1), and at least three are required for disk striping with parity (RAID 5).
NOTE: It is strongly recommended that you use hardware-based RAID or software-based disk fault tolerance for the internal drives.
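The drive counts follow from the arithmetic of each RAID level: RAID 1 keeps a full mirror, so usable capacity equals one drive, while RAID 5 spends one drive's worth of capacity on parity, leaving n - 1 drives of usable space. A small worked sketch, assuming equal-sized drives:

```python
# Sketch: minimum drive counts and usable capacity for the internal-drive
# RAID levels mentioned above. Assumes all drives are the same size.

def raid_usable_gb(level, drives, drive_gb):
    if level == 1:
        if drives < 2:
            raise ValueError("RAID 1 (mirroring) requires two drives")
        return drive_gb                  # one full copy of the data
    if level == 5:
        if drives < 3:
            raise ValueError("RAID 5 (striping with parity) requires "
                             "at least three drives")
        return (drives - 1) * drive_gb   # one drive's worth goes to parity
    raise ValueError("only RAID 1 and RAID 5 are covered here")

print(raid_usable_gb(1, 2, 146))  # 146: a mirrored pair yields one drive
print(raid_usable_gb(5, 3, 146))  # 292: two data drives plus one parity
```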

NOTE: For more information about supported systems, HBAs and operating system variants, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha.
