
Disaster Tolerance and Recovery in a Serviceguard Cluster

Understanding Types of Disaster Tolerant Clusters

Benefits of Continentalclusters

You can build data centers virtually anywhere and still have them provide disaster tolerance for each other. Because Continentalclusters uses two separate clusters, there is theoretically no limit to the distance between them. In practice, the distance is dictated by the required rate of data replication to the remote site, the level of data currency you must maintain, and the quality of the networking links between the two data centers.
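
To make these trade-offs concrete, consider a rough, illustrative sizing check (the figures here are hypothetical, not taken from this manual). An application that modifies 9 GB of data per hour generates a sustained replication stream of about

9 GB/hour × 8 bits/byte ÷ 3600 s/hour ≈ 20 Mbit/s

If the inter-site link cannot sustain at least this rate, the remote copy falls progressively behind, and the accumulated backlog divided by the spare link capacity determines how non-current the recovery site's data becomes.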

In addition, inter-cluster communication can be implemented with either a WAN or a LAN topology. LAN support is advantageous when you have data centers in close proximity to each other but do not want them configured into a single cluster. For example, you may already operate two Serviceguard clusters close to each other that, for business reasons, cannot be merged into one. If you are concerned about one of the centers becoming unavailable, you can add Continentalclusters to provide disaster tolerance. Furthermore, Continentalclusters can be layered onto an existing Serviceguard cluster architecture while both clusters keep running, and it provides flexibility by supporting disaster recovery failover between clusters that are on the same subnet or on different subnets.
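
The operational flow is the same with either topology: each cluster monitors the other, and an administrator initiates recovery after a failure is confirmed. As a minimal sketch of that flow, assuming the cmviewconcl and cmrecovercl commands described in the Continentalclusters product documentation (cluster and package names and command output are omitted):

Check the status of both member clusters and their recovery groups from a node in the recovery cluster:

# cmviewconcl -v

After independently confirming that the primary data center is actually down, and not merely unreachable, start the recovery groups on the recovery cluster:

# cmrecovercl

Keeping this final step under operator control prevents the recovery cluster from starting a second active instance of an application when only the inter-site network has failed.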

You can integrate Continentalclusters with any storage component of your choice that is supported by Serviceguard. Continentalclusters provides a structure that works with any type of data replication mechanism. A set of guidelines for integrating other data replication schemes with Continentalclusters is included in the Designing Disaster Tolerant HA Clusters Using Metrocluster and Continentalclusters user’s guide.
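
To give a sense of what such an integration involves, the sketch below shows a hypothetical pre-startup check wired into a legacy Serviceguard package control script. The customer_defined_run_cmds function is the standard customization point in such scripts; the check_replica_status command is invented for illustration and would be replaced by whatever status query your replication product provides:

function customer_defined_run_cmds
{
    # Hypothetical guard: do not start the application unless the
    # local replica of its data is consistent and sufficiently current.
    if ! /usr/local/bin/check_replica_status -q
    then
        echo "ERROR: replica is not current; aborting package startup"
        exit 1
    fi
}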

Besides selecting your own storage and data replication solution, you can also take advantage of the following HP pre-integrated solutions:

• Storage subsystems supported by CLX are also pre-integrated with Continentalclusters. Continentalclusters uses the same data replication integration module that CLX implements to check the status of the replicated data before the application package starts up.

• If Oracle DBMS is used and logical data replication is the preferred method, then, depending on the version, either Oracle 8i Standby or Oracle 9i Data Guard with log shipping is used to maintain a replica of the database at the recovery site, as sketched below.
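
As a rough illustration of the logical replication path (a sketch only; how these statements are invoked from a package is site-specific), redo logs shipped from the primary database are continuously applied to a standby database at the recovery site, and during a recovery the standby is activated as the new primary. The statements below are standard Oracle 8i/9i standby database commands:

# sqlplus /nolog <<EOF
CONNECT / AS SYSDBA
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE ACTIVATE STANDBY DATABASE;
EOF

Activating the standby ends log apply and converts the replica into a read-write database, so this step is taken only as part of an actual failover, not as a test.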
