Cluster Volume Manager Administration

Overview of Cluster Volume Management

Tightly coupled cluster systems have become increasingly popular in enterprise-scale, mission-critical data processing. The main advantage clusters offer is protection against hardware failure. If the master node fails or otherwise becomes unavailable, applications can continue to run by transferring their execution to standby nodes in the cluster. This ability to provide continuous availability of service by switching to redundant hardware is commonly termed failover.

Another major advantage of clustered systems is their ability to reduce contention for system resources caused by activities such as backup, decision support, and report generation. Enhanced value can be derived from cluster systems by performing such operations on lightly loaded nodes in the cluster instead of on the heavily loaded nodes that answer requests for service. This ability to perform some operations on the lightly loaded nodes is commonly termed load balancing.

To implement cluster functionality, VxVM works together with the cmvx daemon provided by HP. The cmvx daemon informs VxVM of changes in cluster membership. Each node starts up independently and has its own copies of HP-UX, Serviceguard, and CVM. A node joins a cluster when the cluster monitor is started on that node. When a node joins a cluster, it gains access to shared disks. When a node leaves a cluster, it no longer has access to those shared disks.

IMPORTANT

The cluster functionality of VxVM is supported only when used in conjunction with the cmvx daemon.

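As an illustration only (the exact commands available depend on the installed VxVM/CVM release), once a node has joined the cluster you can typically confirm its CVM role and the cluster membership from that node:

    # vxdctl -c mode
    # vxclustadm nidmap

The first command reports whether CVM is active on the node and whether the node is currently the master or a slave; the second lists the nodes known to CVM together with their membership state.
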
Figure 4-1, “Example of a 4-Node Cluster,” illustrates a simple cluster arrangement consisting of four nodes with similar or identical hardware characteristics (CPUs, RAM, and host adapters), and configured with identical software (including the operating system). The nodes are fully connected by a private network, and they are also separately connected to shared external storage (either disk arrays or JBODs) via Fibre Channel. Each node has two independent paths to these disks, which are configured in one or more cluster-shareable disk groups.

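For example (the disk group name datadg is only a placeholder for illustration), a cluster-shareable disk group is typically imported from the CVM master node using the -s option to vxdg, after which it becomes available to the other nodes in the cluster:

    # vxdg -s import datadg
    # vxdg list

In the vxdg list output, cluster-shareable disk groups are flagged as shared.
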
The private network allows the nodes to share information about system resources and about each other’s state. Using the private network, any node can recognize which other nodes are currently active, which are joining or leaving the cluster, and which have failed. The private network requires at least two communication channels to provide redundancy against one of the channels failing. If only one channel were used, its failure would be indistinguishable from node failure, a condition known as network partitioning.

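As a further illustration (output details vary with the Serviceguard release and cluster configuration), the state of the cluster nodes and of the heartbeat networks that make up the private interconnect can usually be inspected with Serviceguard's cmviewcl command:

    # cmviewcl -v

The verbose output includes the status of each node and of the network interfaces configured for cluster communication.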
