
Installing and Configuring MC/ServiceGuard NFS

Configuring an MC/ServiceGuard NFS Package

The advantage of server-to-server cross-mounting is that every server has an identical view of the file systems. The disadvantage is that, on the node where a file system is locally mounted, the file system is accessed through an NFS mount, which has poorer performance than a local mount.

To make an MC/ServiceGuard file system available to all servers, all servers must NFS-mount the file system. That way, access to the file system is not interrupted when the package fails over to an adoptive node. An adoptive node cannot access the file system through the local mount, because it would have to unmount the NFS-mounted file system before it could mount it locally, and to unmount the NFS-mounted file system it would have to kill all processes using the file system.
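
For illustration only, consider the example file system used in the steps below. The node that currently runs the package mounts and exports /hanfs/nfsu011 locally (the package control scripts do this), and every server, including that node, then accesses it through an NFS mount of the package's relocatable name. The NFS mount that the nfs_xmnt script sets up corresponds roughly to a command of this form (the exact options used by the toolkit script may differ):

mount -F nfs nfs1:/hanfs/nfsu011 /nfs/nfsu011

Because all applications use the NFS mount point /nfs/nfsu011 rather than /hanfs/nfsu011, they do not need to be stopped when the package moves to an adoptive node.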

Follow these steps to set up an NFS package with file systems that are NFS-mounted by MC/ServiceGuard NFS servers:

1. Make a copy of the /etc/cmcluster/nfs/nfs_xmnt script.

cd /etc/cmcluster/nfs
cp nfs_xmnt nfs1_xmnt

2. In the copy of the nfs_xmnt script, create an SNFS[n] and CNFS[n] variable for each file system in the package that will be NFS-mounted by servers. The SNFS[n] variable is the server location of the file system, and the CNFS[n] variable is the client mount point of the file system.

SNFS[0]="nfs1:/hanfs/nfsu011";CNFS[0]="/nfs/nfsu011"

In this example, “nfs1” is the name that maps to the package’s relocatable IP address. It must be configured in the name service used by the server (DNS, NIS, or the /etc/hosts file).

If a server for the package will NFS-mount the package’s file systems, the client mount point (CNFS) must be different from the server location (SNFS).
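
If the package exports more than one file system, define one SNFS[n]/CNFS[n] pair per file system, incrementing the index for each. The following sketch assumes a hypothetical second exported directory, /hanfs/nfsu012, in the same package:

SNFS[0]="nfs1:/hanfs/nfsu011";CNFS[0]="/nfs/nfsu011"
SNFS[1]="nfs1:/hanfs/nfsu012";CNFS[1]="/nfs/nfsu012"

As with the first entry, each CNFS[n] mount point must differ from its SNFS[n] server location.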

3. Copy the script you have just modified to all the servers that will NFS-mount the file systems in the package.

4. After the package is active on the primary node, execute the nfs_xmnt script on each server that will NFS-mount the file systems.

/etc/cmcluster/nfs/nfs1_xmnt start
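
To confirm on a client node that the cross-mount succeeded, you can list the mounted file systems and look for the NFS mount point; for example (a simple check, not part of the toolkit):

mount | grep nfsu011

The output should show /nfs/nfsu011 mounted from nfs1:/hanfs/nfsu011.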
