Cluster File System Architecture

About Veritas Cluster Volume Manager Functionality

 

To the cmvx daemon, all nodes are the same. VxVM objects configured within shared disk groups can potentially be accessed by all nodes that join the cluster. However, the cluster functionality of VxVM requires one node to act as the master node; all other nodes in the cluster are slave nodes. Any node is capable of being the master node, which is responsible for coordinating certain VxVM activities.

NOTE

You must run commands that configure or reconfigure VxVM objects on the master node. Tasks that must be initiated from the master node include setting up shared disk groups and creating and reconfiguring volumes.

 

VxVM designates the first node to join a cluster as the master node. If the master node leaves the cluster, one of the slave nodes is chosen to be the new master node. In the preceding example, node 0 is the master node and nodes 1, 2, and 3 are slave nodes.
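For example, to determine whether the node you are logged in to is currently the master or a slave, you can run the vxdctl command (a minimal illustration; the exact output wording varies by VxVM release):

# vxdctl -c mode

The output reports whether cluster functionality is active on the node and whether the node is acting as the master or as a slave.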

Private and Shared Disk Groups

There are two types of disk groups:

• Private (non-CFS) disk groups, which belong to only one node.

A private disk group is only imported by one system. Disks in a private disk group may be physically accessible from one or more systems, but import is restricted to one system only. The root disk group is always a private disk group.

• Shared (CFS) disk groups, which are shared by all nodes.

A shared (or cluster-shareable) disk group is imported by all cluster nodes. Disks in a shared disk group must be physically accessible from all systems that may join the cluster.

Disks in a shared disk group are accessible from all nodes in a cluster, allowing applications on multiple cluster nodes to simultaneously access the same disk. A volume in a shared disk group can be simultaneously accessed by more than one node in the cluster, subject to licensing and disk group activation mode restrictions.

You can use the vxdg command to designate a disk group as cluster-shareable. When a disk group is imported as cluster-shareable for one node, each disk header is marked with the cluster ID. As each node subsequently joins the cluster, it recognizes the disk group as being cluster-shareable and imports it. You can also import or deport a shared disk group at any time; the operation takes place in a distributed fashion on all nodes. See “Cluster File System Commands” on page 34 for more information.
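For example, the following sequence run from the master node creates a shared disk group, imports an existing disk group as cluster-shareable, and later deports it (an illustrative sketch; the disk group name mydg and the device name c1t1d0 are placeholders, not names taken from this manual):

# vxdg -s init mydg c1t1d0
# vxdg -s import mydg
# vxdg list
# vxdg deport mydg

The -s option marks the disk group as cluster-shareable; the vxdg list output flags shared disk groups as shared in its state column.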

Each physical disk is marked with a unique disk ID. When cluster functionality for VxVM starts on the master node, it imports all shared disk groups (except for any that have the noautoimport attribute set). When a slave node tries to join a cluster, the master node sends it a list of the disk IDs that it has imported, and the slave node checks to see if it can access all of them. If the slave node cannot access one of the listed disks, it abandons its attempt to join the cluster. If it can access all of the listed disks, it imports the same shared disk groups as the master node and joins the cluster. When a node leaves the cluster, it deports all of its imported shared disk groups, but they remain imported on the surviving nodes.
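For example, before a node attempts to join the cluster, you can check from that node which disks and disk groups it can actually see (a minimal sketch; output formats vary by release):

# vxdisk -o alldgs list
# vxdg list

The first command lists all disks visible to the node together with the disk groups they belong to, including disk groups that are not currently imported; the second lists the disk groups that are imported on the node.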

Reconfiguring a shared disk group is performed with the co-operation of all nodes. Configuration changes to the disk group happen simultaneously on all nodes and the changes are identical. Such changes are atomic in nature, which means that they either occur simultaneously on all nodes, or not at all.
