HP-UX Serviceguard Storage Management Software


Technical Overview

Overview of Cluster File System Architecture


CFS allows clustered servers to mount and use the same file system simultaneously, as if all applications using the file system are running on the same server. CVM makes logical volumes and raw device applications accessible throughout a cluster.
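
For example, a shared volume can be registered and mounted on every node in the cluster with the CFS administration commands. The following sequence is illustrative only; the disk group (cfsdg), volume (vol1), and mount point (/mnt1) are placeholder names to be replaced with the names used at your site:

# cfsmntadm add cfsdg vol1 /mnt1 all=rw
# cfsmount /mnt1

The first command adds the mount configuration with read-write access on all nodes; the second mounts the file system on every node configured for it.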

Cluster File System Design

Beginning with version 5.0, CFS uses a symmetric architecture in which all nodes in the cluster can simultaneously function as metadata servers. Some remnants of the master/slave node concept remain from version 4.1, but the behavior and the naming convention have changed in version 5.0. The first node to mount each cluster file system becomes the primary CFS node for that file system; all other nodes in the cluster are secondary CFS nodes. Applications access user data directly from the node on which they are running. Each CFS node has its own intent log. File system operations, such as allocating or deleting files, can originate from any node in the cluster.
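
Which node is currently the primary for a particular cluster file system can be checked with the fsclustadm command; the mount point /mnt1 below is a placeholder:

# fsclustadm -v showprimary /mnt1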

The master/slave node naming convention continues to be used when referring to Veritas Cluster Volume Manager (CVM) nodes.
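
On each node, the CVM role can be confirmed with vxdctl, which reports whether the local node is currently the CVM master or a slave:

# vxdctl -c mode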

Cluster File System Failover

If the server designated as the CFS primary node fails, the remaining nodes in the cluster elect a new primary node. The new primary node reads the intent log of the old primary node and completes any metadata updates that were in process at the time of the failure.
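
Primaryship can also be moved deliberately, for example before shutting down the current primary node for maintenance. The following illustrative sequence, run on the node that is to become the new primary and using a placeholder mount point, changes the primary and then verifies the result:

# fsclustadm -v setprimary /mnt1
# fsclustadm -v showprimary /mnt1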

Failure of a secondary node does not require metadata repair, because nodes using a cluster file system in secondary mode do not update file system metadata directly. The Multiple Transaction Server distributes file locking ownership and metadata updates across all nodes in the cluster, enhancing scalability without requiring unnecessary metadata communication throughout the cluster. CFS recovery from secondary node failure is therefore faster than from primary node failure.

Group Lock Manager

CFS uses the Veritas Group Lock Manager (GLM) to reproduce UNIX single-host file system semantics in clusters. This is most important for write behavior. UNIX file systems make writes appear atomic: when an application writes a stream of data to a file, any subsequent application that reads from the same area of the file retrieves the new data, even if it has been cached by the file system and not yet written to disk. Applications never retrieve stale data or partial results from a previous write.

To reproduce single-host write semantics, system caches must be kept coherent and each must instantly reflect any updates to cached data, regardless of the cluster node from which they originate. GLM locks a file so that no other node in the cluster can simultaneously update it, or read it before the update is complete.
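
The effect is visible from any two nodes sharing a mount. In the illustrative session below (the node names and file path are placeholders), data written on one node is immediately returned by a read on another node, even before it has been flushed to disk:

node1# echo "new data" > /mnt1/datafile
node2# cat /mnt1/datafile
new data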
