Cluster File System Architecture
About CFS
If the CFS primary node fails, the remaining cluster nodes elect a new primary node. The new primary node reads the file system intent log and completes any metadata updates that were in progress at the time of the failure. Application I/O from other nodes may block during this process, causing a delay. When the file system is consistent again, application processing resumes.
Failure of a secondary node does not require metadata repair, because nodes using a cluster file system in secondary mode do not update file system metadata directly. The Multiple Transaction Server distributes file locking ownership and metadata updates across all nodes in the cluster, enhancing scalability without requiring unnecessary metadata communication throughout the cluster. CFS recovery from secondary node failure is therefore faster than from primary node failure.
See “Distributing Load on a Cluster” on page 20.
Cluster File System and the Group Lock Manager
CFS uses the Veritas Group Lock Manager (GLM) to reproduce UNIX single-host file system semantics in clusters. This is most important for write behavior. UNIX file systems make writes appear atomic: when an application writes a stream of data to a file, any subsequent application reading from the same area of the file retrieves the new data, even if it has been cached by the file system and not yet written to disk. Applications cannot retrieve stale data or partial results from a previous write.
To reproduce single-host write semantics in a cluster, the caches on all nodes must be kept coherent, so that each node immediately sees any updates to cached data, regardless of the node on which the update originated. GLM provides this coherency by coordinating cluster-wide granting of read and write access to file data and metadata.
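The single-host guarantee that GLM extends across the cluster can be seen on any one node: a write is visible to the very next read, even though the data may still be in the page cache rather than on disk. A minimal sketch (the temporary file path is illustrative):

```shell
# Single-host write semantics: a write is immediately visible to a
# subsequent read, even before the data reaches disk. GLM's role is to
# preserve this guarantee when the reader is on a different cluster node.
tmp=$(mktemp)
printf 'new data' > "$tmp"   # write lands in the page cache
cat "$tmp"                   # a subsequent read already sees "new data"
rm -f "$tmp"
```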
Asymmetric Mounts
A Veritas™ File System (VxFS) file system mounted with the mount -o cluster option is a cluster, or shared, mount, as opposed to a non-shared, or local, mount. A file system mounted in shared mode must be on a VxVM shared volume.
Asymmetric mounts allow shared file systems to be mounted with different read/write capabilities: one node in the cluster can mount the file system read/write, while other nodes mount it read-only.
You can specify the cluster read-write (crw) option when you first mount the file system, or change the options later by remounting (mount -o remount).
See the mount_vxfs(1M) manual page for more information.
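As a sketch of an asymmetric configuration, the primary node might mount a shared volume read/write while a secondary mounts the same volume read-only. The device and mount point names below are illustrative, and mount syntax varies by platform (for example, -F vxfs on Solaris and HP-UX versus -t vxfs on Linux); consult the mount_vxfs(1M) manual page for the options supported on your system.

```sh
# On the primary node: shared (cluster) mount with read/write access.
mount -F vxfs -o cluster /dev/vx/dsk/sharedg/vol01 /mnt/cfs

# On a secondary node: the same shared volume, mounted read-only.
mount -F vxfs -o cluster,ro /dev/vx/dsk/sharedg/vol01 /mnt/cfs
```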