HP Serviceguard Manual (T2808-90006): Continental Cluster With Cascading Failover


Disaster Tolerance and Recovery in a Serviceguard Cluster

Understanding Types of Disaster Tolerant Clusters

replicate the data between two data centers. HP provides a supported integration toolkit for the Oracle 8i Standby Database in the Enterprise Cluster Master Toolkit (ECMT).

Oracle RAC is supported by Continentalclusters through integration with Serviceguard Extension for RAC (SGeRAC). In this configuration, multiple nodes in a single cluster can access the database simultaneously (that is, the nodes in one data center can access the database). If that site fails, the RAC instances can be recovered at the second site.

Continentalclusters supports a maximum of four clusters, each with up to 16 nodes (64 nodes in total), configured as up to three primary clusters and one recovery cluster.

Failover in Continentalclusters is semi-automatic. If a data center fails, the administrator is notified and must take action to bring the application up on the surviving cluster.
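For example, when the recovery cluster's monitor reports that the primary cluster is unreachable, the administrator verifies the alert and then starts recovery manually. The following is a minimal sketch of such a session, assuming the standard Continentalclusters commands cmviewconcl and cmrecovercl; verify the exact command names and options against the man pages for your release:

# cmviewconcl
(Confirm that the primary cluster is reported as down or unreachable.)

# cmrecovercl
(Start the configured recovery packages on the recovery cluster.)

Depending on the release, cmrecovercl prompts for confirmation before starting the recovery packages, so the administrator remains the final decision point.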

Continental Cluster With Cascading Failover

A continental cluster with cascading failover uses three main data centers distributed between a metropolitan cluster, which serves as the primary cluster, and a standard cluster, which serves as the recovery cluster.

Cascading failover means that applications are configured to fail over from one data center to another in the primary cluster and then to a third (recovery) cluster if the entire primary cluster fails. Data replication also follows the cascading model. Data is replicated from the primary disk array to the secondary disk array in the Metrocluster, then replicated to the third disk array in the Serviceguard recovery cluster.

For more information on Cascading Failover configuration, maintenance, and recovery procedures, see the “Cascading Failover in a Continental Cluster” white paper on the high availability documentation web site at http://docs.hp.com -> High Availability -> Continentalcluster.

Comparison of Disaster Tolerant Solutions

Table 1-1 summarizes and compares the disaster tolerant solutions that are currently available:
