
Building an Extended Distance Cluster Using Serviceguard and Software RAID

Guidelines on DWDM Links for Network and Data

Network latency between the data centers must be less than 200 milliseconds.
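A simple way to sanity-check this requirement is an ICMP round-trip measurement from a node in one data center to a node in the other; the hostname below is hypothetical:

# ping -c 10 node1-dc2.example.com

The avg figure in the rtt summary that ping prints should be comfortably below 200 ms.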

No routing is allowed for the networks between the data centers.

Routing is allowed to the third data center if a Quorum Server is used in that data center.
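When a Quorum Server is placed in the third data center, it is named in the Serviceguard cluster configuration file. The lines below are a minimal sketch; the hostname is hypothetical and the timing values (in microseconds) are illustrative, not values mandated by this manual:

# Quorum Server in the third data center (hostname is hypothetical)
QS_HOST                 qs.dc3.example.com
# Polling interval and timeout extension, in microseconds (illustrative values)
QS_POLLING_INTERVAL     300000000
QS_TIMEOUT_EXTENSION    2000000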

The maximum distance supported between the data centers for DWDM configurations is 100 kilometers.

Both the networking and the Fibre Channel data replication can go through the same DWDM box; separate DWDM boxes are not required.

Since DWDM converters are typically designed to be fault tolerant, it is acceptable to use only one DWDM box in each data center for the inter-site links. For the highest availability, however, it is recommended to use two separate DWDM boxes in each data center. If a single DWDM box is used for the links between data centers, its redundant standby fibre link feature must be configured; if the DWDM box supports multiple active DWDM links, that feature can be used instead of the redundant standby feature.

At least two dark fibre optic links are required between the two Primary data centers, with each link routed differently to guard against the "backhoe problem" (a single physical mishap, such as an errant excavation, severing both links at once). It is allowable to have only a single fibre link routed from each Primary data center to the third location; however, to survive the loss of a link between a Primary data center and the third data center, the network routing should be configured so that a Primary data center can also reach the Arbitrator via a route passing through the other Primary data center, as sketched below.
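On Linux cluster nodes, the intended fallback path can be illustrated with a static route that reaches the Arbitrator's subnet through a gateway in the other Primary data center. All addresses below are hypothetical, and in practice such failover is usually handled by dynamic routing on the site routers rather than by host static routes:

# route add -net 192.168.3.0 netmask 255.255.255.0 gw 10.0.2.1 metric 2

Here 192.168.3.0/24 stands for the Arbitrator's network and 10.0.2.1 for a router in the other Primary data center; the higher metric keeps the direct route preferred while it is available.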

The network switches in the configuration must support DLPI (link-level) packets. The network switch can be 100BaseT (TX or FX), 1000BaseT (TX or FX), or FDDI. The connection between the network switch and the DWDM box must be fibre optic.
