
About Veritas Cluster Volume Manager Functionality


CVM allows up to 8 nodes in a cluster to simultaneously access and manage a set of disks under VxVM control (VM disks). The same logical view of the disk configuration and any changes are available on each node. When the cluster functionality is enabled, all cluster nodes can share VxVM objects. Features provided by the base volume manager, such as mirroring, fast mirror resync, and dirty region logging, are also supported in the cluster environment.

NOTE: RAID-5 volumes are not supported on a shared disk group.
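As an illustration, a cluster-shareable disk group and a mirrored volume with a dirty region log can be created on the CVM master node with standard VxVM commands. The disk group, disk, and volume names used below (sharedg, c4t1d0, vol01) are placeholders for this sketch; actual device names will differ.

To confirm that the current node is the CVM master:

# vxdctl -c mode

To initialize a cluster-shareable disk group on a shared disk:

# vxdg -s init sharedg c4t1d0

To create a 2 GB mirrored volume with dirty region logging in that group:

# vxassist -g sharedg make vol01 2g layout=mirror nmirror=2 logtype=drl

The vxdg list output on any node should then show the disk group with the shared flag.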

To implement cluster functionality, VxVM works together with the cmvx daemon provided by HP. This daemon informs VxVM of changes in cluster membership. Each node starts up independently and has its own copies of HP-UX, Serviceguard, and CVM. When a node joins the cluster, it gains access to the shared disks; when it leaves the cluster, it no longer has access to them. A node joins the cluster when Serviceguard is started on that node.
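Membership changes can be observed with standard Serviceguard and CVM utilities; the following is a minimal sketch, and the node name node2 is a placeholder.

To start Serviceguard on a node so that it joins the running cluster:

# cmrunnode node2

To verify that the node is up in the cluster:

# cmviewcl

To display the CVM node ID map and confirm that the node has joined at the CVM level:

# vxclustadm nidmap

To see whether the local node is the CVM master or a slave:

# vxdctl -c mode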

Figure 2-1 illustrates a simple cluster consisting of four nodes with similar or identical hardware characteristics (CPUs, RAM and host adapters), and configured with identical software (including the operating system). The nodes are fully connected by a private network and they are also separately connected to shared external storage (either disk arrays or JBODs) via Fibre Channel. Each node has two independent paths to these disks, which are configured in one or more cluster-shareable disk groups.
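The independent paths from each node to the shared storage are typically managed by VxVM Dynamic Multipathing (DMP). As a sketch, the paths visible to a node can be inspected with the DMP administration utility; the controller name c4 is only an example.

To list the host controllers known to DMP:

# vxdmpadm listctlr all

To list the paths reachable through a particular controller:

# vxdmpadm getsubpaths ctlr=c4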

The private network allows the nodes to share information about system resources and about each other’s state. Using the private network, any node can recognize which nodes are currently active, which are joining or leaving the cluster, and which have failed. The private network requires at least two communication channels to provide redundancy against one of the channels failing. If only one channel is used, its failure will be indistinguishable from node failure—a condition known as network partitioning.
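The health of the private network links can be checked with the LLT and GAB utilities that are described later in this guide. A minimal sketch (output formats vary by release):

To show the state of each configured link on every node:

# lltstat -nvv | more

To show which GAB ports currently have cluster membership:

# gabconfig -a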

Figure 2-1 Example of a Four-Node Cluster

[Figure: four nodes (Node 0 is the master; Nodes 1, 2, and 3 are slaves) connected by a redundant private network and, through redundant Fibre Channel connectivity, to cluster-shareable disks organized into cluster-shareable disk groups.]
