Configuring the remote array RAID Manager instance for automatic startup

After editing the remote array RAID Manager configuration files on the nodes, you must configure the remote array RAID Manager instance to start automatically during node boot and package startup.

Complete the following procedure to configure the remote array RAID Manager instance for automatic startup:

1. Edit the following parameters in the configuration file /etc/rc.config.d/raidmgr:

START_RAIDMGR

Set this parameter to 1.

RAIDMGR_INSTANCE

Specify all the RAID Manager instances that must be started during node boot-up. Include the remote array RAID Manager instance number as the value for this parameter.

For example, if instance number 0 is the local array RAID Manager instance and instance number 1 is the remote array RAID Manager instance, specify both instances as a comma-separated list for the RAIDMGR_INSTANCE parameter in the /etc/rc.config.d/raidmgr configuration file, as follows:

RAIDMGR_INSTANCE="0,1"
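Putting both parameters together, the relevant portion of the file might look like the following sketch (the instance numbers are examples; use the numbers assigned in your own configuration):

```shell
# Excerpt from /etc/rc.config.d/raidmgr (sketch)
START_RAIDMGR=1          # start RAID Manager instances during node boot
RAIDMGR_INSTANCE="0,1"   # 0 = local array instance, 1 = remote array instance
```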

Restrictions

Consider the following restrictions before configuring a remote command device in a Metrocluster:

Always use a dedicated command device for the remote array RAID Manager instance.

When using Extended SAN:

Ensure that only the command device is presented to the remote nodes.

Do not present any of the replicated LUNs to the nodes at the remote site.

When editing the Continuous Access device group configuration, make the changes in the configuration files of both RAID Manager instances, on all the cluster nodes.

In a 3-Data center environment, the remote command device must only be configured between DC1 (Primary Site) and DC2 (Hot Standby Site) XP arrays in the Metrocluster.
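As an illustration of the device group restriction above, a change such as adding a device pair must appear in the HORCM_DEV section of both instances' configuration files on every node. The file names, device group name, and port values below are hypothetical:

```
# Fragment of /etc/horcm0.conf and /etc/horcm1.conf (sketch; names are examples)
HORCM_DEV
#dev_group   dev_name   port#    TargetID   LU#
pkg1dg       disk1      CL1-A    0          1
```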

Defining Storage Units

Both LVM and VERITAS VxVM storage can be used in disaster tolerant clusters. The following sections show how to set up each type of volume group:

Creating and Exporting LVM Volume Groups using Continuous Access P9000 or XP

Use the following procedure to create and export volume groups:

1. NOTE: If you are using the March 2008 version or later of HP-UX 11i v3, skip step 1; vgcreate(1M) creates the device file.

Define the appropriate Volume Groups on each host system that might run the application package.

# mkdir /dev/vgxx

# mknod /dev/vgxx/group c 64 0xnn0000

where the name /dev/vgxx and the number nn are unique within the entire cluster.
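As a worked example of how the minor number is composed, assume the hypothetical volume group number nn=01 (it must be unused anywhere in the cluster). Substituting nn into the 0xnn0000 pattern gives the minor number for the group file:

```shell
# Sketch: derive the group-file minor number for a hypothetical VG number 01.
VG_NUM=01
GROUP_MINOR=$(printf '0x%s0000' "$VG_NUM")
# On a cluster node you would then run (as root):
#   mkdir /dev/vg01
#   mknod /dev/vg01/group c 64 0x010000
echo "$GROUP_MINOR"   # prints 0x010000
```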

2. Create the Volume Group on the source volumes.

# pvcreate -f /dev/rdsk/cxtydz

176 Building Disaster Recovery Serviceguard Solutions Using Metrocluster with Continuous Access for P9000 and XP