Dell CX4 manual EMC PowerPath


2 Add the following two lines to the agentID.txt file, with no special formatting:

First line: Fully qualified hostname. For example, enter node1.domain1.com if the host name is node1 and the domain name is domain1.

Second line: IP address that you want the agent to register and use to communicate with the storage system.
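The two lines described above can also be written with a short script. The following is a minimal sketch; the file path, hostname, and IP address are placeholders for illustration, not values from this manual:

```python
def write_agent_id(path, fqdn, ip_address):
    """Write agentID.txt for the Navisphere Agent.

    Line 1: fully qualified hostname of the host.
    Line 2: IP address the agent registers and uses to
            communicate with the storage system.
    """
    with open(path, "w") as f:
        f.write(fqdn + "\n")
        f.write(ip_address + "\n")

# Placeholder values -- substitute your host's actual FQDN and IP address.
write_agent_id("agentID.txt", "node1.domain1.com", "192.168.1.10")
```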

EMC PowerPath

EMC PowerPath® automatically reroutes Fibre Channel I/O traffic between the host system and a Dell/EMC CX4-series storage system to any available path if a primary path fails for any reason. Additionally, PowerPath provides multiple-path load balancing, allowing you to balance the I/O traffic across multiple SP ports.

Enabling Access Control and Creating Storage Groups Using Navisphere

The following subsections provide the procedures required to create storage groups and connect your storage systems to the host systems.

CAUTION: Before enabling Access Control, ensure that no hosts are attempting to access the storage system. Enabling Access Control prevents all hosts from accessing any data until they are explicitly granted access to a LUN in the appropriate storage group. Stop all I/O before enabling Access Control, and turn off all hosts connected to the storage system during this procedure; otherwise, data loss may occur. After you enable the Access Control software, it cannot be disabled.

1 Ensure that Navisphere Agent is started on all host systems.

a Click the Start button, select Programs→ Administrative Tools, and then select Services.

b In the Services window, verify the following:

In the Name column, Navisphere Agent appears.

In the Status column, Navisphere Agent is set to Started.

In the Startup Type column, Navisphere Agent is set to Automatic.

2 Open a Web browser.
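The three checks in step 1 b can also be expressed programmatically. The sketch below assumes you have already collected the service list (for example, from the Services snap-in or a management API) as name-to-(status, startup type) pairs; the helper name and data shape are illustrative, not part of the manual:

```python
def navisphere_agent_ok(services):
    """Return True if Navisphere Agent is listed, Started, and Automatic.

    `services` maps service name -> (status, startup_type), mirroring
    the Name, Status, and Startup Type columns of the Services window.
    """
    entry = services.get("Navisphere Agent")
    if entry is None:  # Name column: the agent must appear at all
        return False
    status, startup_type = entry
    # Status column must be "Started"; Startup Type must be "Automatic".
    return status == "Started" and startup_type == "Automatic"

# Placeholder data illustrating a correctly configured host:
example = {"Navisphere Agent": ("Started", "Automatic")}
```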

Preparing Your Systems for Clustering
