never completes, the monitor service will terminate after six such consecutive read attempts (a duration of up to six poll intervals).
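For example, at the default polling interval of 60 seconds, the monitor service terminates after at most 6 × 60 = 360 seconds of unsuccessful reads.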

volume_path

Specifies a VxVM volume or LVM logical volume device file to be monitored. At least one volume path is required, and you can specify more than one. Each pathname must identify a block device file.

Examples

/usr/sbin/cmvolmond -O /log/monlog.log -D 3 /dev/vx/dsk/cvm_dg0/lvol2

This command monitors a single VxVM volume, /dev/vx/dsk/cvm_dg0/lvol2, at log level 3, with a polling interval of 60 seconds, and prints all log messages to /log/monlog.log.

/usr/sbin/cmvolmond /dev/vg01/lvol1 /dev/vg01/lvol2

This command monitors two LVM logical volumes at the default log level of 0, with a polling interval of 60 seconds, and prints all log messages to the console.

/usr/sbin/cmvolmond -t 10 /dev/vg00/lvol1

This command monitors the LVM root logical volume at log level 0, with a polling interval of 10 seconds, and prints all log messages to the console (package log).
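In a cluster, the Volume Monitor normally runs as a service within the package whose storage it monitors, so that failure of a monitored volume causes the package to fail over. The following fragment is a minimal sketch of such a service definition in a modular package configuration file; the service name is hypothetical, and the cmvolmond options are those from the first example above. Confirm the service parameters against the template generated by cmmakepkg for your release.

# Hypothetical service definition for the Volume Monitor (sketch only).
# The service name and timeout values are illustrative, not prescribed.
service_name                  pkg1_volume_monitor
service_cmd                   "/usr/sbin/cmvolmond -O /log/monlog.log -D 3 /dev/vx/dsk/cvm_dg0/lvol2"
service_restart               none
service_fail_fast_enabled     no
service_halt_timeout          300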

Scope of Monitoring

The Volume Monitor detects the following failures:

Failure of the last link to a storage device or set of devices critical to volume operation

Failure of a storage device or set of devices critical to volume operation

An unexpected detachment, disablement, or deactivation of a volume

The Volume Monitor does not detect the following failures:

Failure of a redundant link to a storage device or set of devices where a working link remains

Failure of a mirror or mirrored plex within a volume (assuming at least one mirror or plex is functional)

Corruption of data on a monitored volume

Planning for NFS-mounted File Systems

As of Serviceguard A.11.20, you can use NFS-mounted (imported) file systems as shared storage in packages.

The same package can mount more than one NFS-imported file system, and can use both cluster-local shared storage and NFS imports.
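For illustration, the following is a minimal sketch of how an NFS-imported file system might be specified in a modular package configuration file. The server name, exported directory, mount point, and mount options are hypothetical; confirm the exact parameter names and values against the package template generated by cmmakepkg for your Serviceguard release.

# Hypothetical NFS-imported file system entry (sketch only).
fs_name         /var/opt/nfs/share1
fs_server       nfs1.example.com
fs_directory    /pkg1/nfs_mount
fs_type         "nfs"
# Choose mount options appropriate to your environment.
fs_mount_opt    "-o llock"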

The following rules and restrictions apply.

NFS mounts are supported for modular failover packages. As of the A.11.20 April 2011 patch release, you can also create a multi-node package that uses an NFS file share; this is useful if you want to run an HP Integrity Virtual Machine (HPVM) in a Serviceguard package, with the virtual machine itself using a remote NFS share as backing store.

For details on how to configure NFS as a backing store for HPVM, see the HP Integrity Virtual Machines 4.3: Installation, Configuration, and Administration guide at http://www.hp.com/go/virtualization-manuals —> HP Integrity Virtual Machines and Online VM Migration.

See Chapter 6 (page 227) for a discussion of types of packages.

So that Serviceguard can ensure that all I/O from a node on which a package has failed is flushed before the package restarts on an adoptive node, all the network switches and routers between the NFS server and client must support a worst-case timeout, after which packets and frames are dropped. This timeout is known as the Maximum Bridge Transit Delay (MBTD).
