After the above command completes, start the cluster and create disk groups for shared use as described in the following sections.

Starting the Cluster and Identifying the Master Node

Run the cluster to activate the special CVM package:

# cmruncl

After the cluster is started, it runs a special system multi-node package named VxVM-CVM-pkg on all nodes. This package appears in the following output of the cmviewcl -v command:

CLUSTER        STATUS
bowls          up

  NODE         STATUS       STATE
  spare        up           running
  split        up           running
  strike       up           running

SYSTEM_MULTI_NODE_PACKAGES:

  PACKAGE        STATUS       STATE
  VxVM-CVM-pkg   up           running

When CVM starts up, it selects a master node. From this node, you must issue the disk group configuration commands. To determine the master node, issue the following command from each node in the cluster:

# vxdctl -c mode

One node will identify itself as the master. Create disk groups from this node.
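For example, the command on the master node might return output similar to the following (the exact wording can vary with the VxVM version); the remaining nodes report a slave role:

# vxdctl -c mode
mode: enabled: cluster active - MASTER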

Converting Disks from LVM to CVM

You can use the vxvmconvert utility to convert LVM volume groups into CVM disk groups. Before you can do this, the volume group must be deactivated—any package that uses the volume group must be halted. This procedure is described in the latest edition of the Managing Serviceguard user guide, Appendix G.
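As an illustration of the preparation steps, suppose a hypothetical package pkg_oracle uses the volume group /dev/vg_ops (both names are placeholders). You would halt the package and deactivate the volume group with the standard Serviceguard and HP-UX LVM commands before running the conversion:

# cmhaltpkg pkg_oracle
# vgchange -a n /dev/vg_ops

Then run vxvmconvert and follow its prompts to convert the volume group to a CVM disk group.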

Initializing Disks for CVM

You need to initialize the physical disks that will be employed in CVM disk groups. If a physical disk has been previously used with LVM, you should use the pvremove command to delete the LVM header data from all the disks in the volume group (this is not necessary if you have not previously used the disk with LVM).
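For example, to clear the LVM header from one disk that was previously part of an LVM volume group, you would run pvremove against its raw device file (the device file shown is a placeholder):

# pvremove /dev/rdsk/c0t3d2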

To initialize a disk for CVM, log on to the master node, then use the vxdiskadm program to initialize multiple disks, or use the vxdisksetup command to initialize one disk at a time, as in the following example:

# /usr/lib/vxvm/bin/vxdisksetup -i /dev/dsk/c0t3d2
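To confirm that the disk is now under VxVM control, you can list the disks known to VxVM:

# vxdisk list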

Creating Disk Groups for RAC

Use the vxdg command to create disk groups. Use the -s option to specify shared mode, as in the following example:

# vxdg -s init ops_dg c0t3d2
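If the disk group requires more than one disk, additional disks can be added after it is created. In this sketch the disk media name and device file are placeholders:

# vxdg -g ops_dg adddisk ops_dg02=c1t2d0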

Verify the configuration with the following command:

# vxdg list

NAME           STATE                    ID
rootdg         enabled                  971995699.1025.node1
ops_dg         enabled,shared           972078742.1084.node2
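For more detail on a single disk group, you can also pass its name to the same command; for a shared disk group the flags field includes shared:

# vxdg list ops_dg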
