

Cluster Nodes

Table 1-1 lists the hardware requirements for the cluster nodes.

Table 1-1. Cluster Node Requirements

Component           Minimum Requirement

Cluster nodes       A minimum of two identical PowerEdge servers is
                    required. The maximum number of supported nodes
                    depends on the variant of the Windows Server
                    operating system used in your cluster and on the
                    physical topology in which the storage system and
                    the nodes are interconnected.

RAM                 The variant of the Windows Server operating system
                    that is installed on your cluster nodes determines
                    the minimum RAM required.

Host Bus Adapter    Two Fibre Channel HBAs per node, unless the server
(HBA) ports         employs an integrated or supported dual-port Fibre
                    Channel HBA. Where possible, place the HBAs on
                    separate PCI buses to improve availability and
                    performance.

NICs                At least two NICs: one NIC for the public network
                    and another NIC for the private network.
                    NOTE: It is recommended that the NICs on each public
                    network are identical and that the NICs on each
                    private network are identical.

Internal disk       One controller connected to at least two internal
controller          hard drives for each node. Use any supported RAID
                    controller or disk controller.
                    Two hard drives are required for mirroring (RAID 1),
                    and at least three are required for disk striping
                    with parity (RAID 5).
                    NOTE: It is strongly recommended that you use
                    hardware-based RAID or software-based disk fault
                    tolerance for the internal drives.

NOTE: For more information about supported systems, HBAs and operating system variants, see the Dell Cluster Configuration Support Matrix on the Dell High Availability website at www.dell.com/ha.
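
The requirements in Table 1-1 should be verified on each node before you install the cluster software. As an illustration only (not a Dell-provided tool), the short Python sketch below reports a node's installed RAM and the network interfaces it detects so they can be compared with the minimums above; it relies on the third-party psutil package, and the two-NIC and RAM thresholds coded here are assumptions for the example rather than values taken from this guide.

    # Illustrative pre-installation check; not part of the Dell documentation.
    # Assumes Python 3 with the psutil package installed on the node.
    import psutil

    MIN_NICS = 2                 # one public NIC + one private NIC (Table 1-1)
    MIN_RAM_BYTES = 4 * 1024**3  # placeholder; substitute the minimum RAM for
                                 # your Windows Server variant

    def check_node():
        ram = psutil.virtual_memory().total
        # net_if_addrs() also lists loopback and virtual adapters, so review
        # the interface names manually rather than trusting the raw count.
        nics = list(psutil.net_if_addrs())
        print("Installed RAM: %.1f GiB (%s)" % (
            ram / 1024**3,
            "OK" if ram >= MIN_RAM_BYTES else "below assumed minimum"))
        print("Network interfaces detected: %d (%s)" % (
            len(nics),
            "OK" if len(nics) >= MIN_NICS else "fewer than two"))
        for name in nics:
            print("  - " + name)

    if __name__ == "__main__":
        check_node()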
