
Cluster File System Architecture

About CFS

 

Cluster File System Backup Strategies

The same backup strategies used for standard VxFS can be used with CFS, because the APIs and commands for accessing the namespace are the same. File system checkpoints provide an on-disk, point-in-time copy of the file system. HP recommends file system checkpoints over file system snapshots (described below) for obtaining a frozen image of the cluster file system, because the performance characteristics of a checkpointed file system are better in certain I/O patterns.
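
For example, assuming a cluster file system mounted at /mnt1 on a shared volume vol1 in disk group cfsdg (all names here are illustrative), a Storage Checkpoint could be created and then mounted through its pseudo device along the following lines; see the fsckptadm(1M) and mount_vxfs(1M) manpages for the exact options supported by your release:

   # fsckptadm create thu_8pm /mnt1
   # fsckptadm list /mnt1
   # mkdir /mnt1_thu_8pm
   # mount -F vxfs -o ckpt=thu_8pm /dev/vx/dsk/cfsdg/vol1:thu_8pm /mnt1_thu_8pm

The checkpoint then provides a frozen, on-disk image of /mnt1 that a backup utility can read while the primary file system remains in use.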

 

 

NOTE

See the Veritas File System Administrator's Guide, HP-UX, 5.0 for a detailed explanation and comparison of checkpoints and snapshots.

A file system snapshot is another method for obtaining an on-disk frozen image of the file system. The frozen image is non-persistent, in contrast to the checkpoint feature. A snapshot can be accessed as a read-only mounted file system to perform efficient online backups. Snapshots implement “copy-on-write” semantics that incrementally copy data blocks when they are overwritten on the “snapped” file system. Snapshots for cluster file systems extend the same copy-on-write mechanism to I/O originating from any cluster node.
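
As an illustration only (the device, size, and mount point names below are hypothetical), a snapshot of a file system mounted at /mnt1 could be created by mounting an unused volume as its snapshot with the snapof (and optionally snapsize, in sectors) mount options:

   # mkdir /mnt1snap
   # mount -F vxfs -o snapof=/mnt1,snapsize=262144 /dev/vx/dsk/cfsdg/vol1snap /mnt1snap

From this point on, a write to a block of /mnt1 causes the original block contents to be copied to the snapshot volume first, so reads from /mnt1snap continue to see the frozen image.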

 

Mounting a snapshot file system for backups increases the load on the system because of the resources used to perform copy-on-writes and to read data blocks from the snapshot. In this situation, cluster snapshots can be used to perform off-host backups. Off-host backups reduce the load of a backup application on the primary server. The additional overhead of a snapshot mounted on a remote node is small compared to the overall snapshot overhead. Therefore, running a backup application against a snapshot mounted on a relatively less loaded node benefits overall cluster performance.
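
A minimal off-host backup sketch, assuming a snapshot of /mnt1 has been mounted at /mnt1snap on a relatively less loaded node (the mount command from the previous example run on that node), with the dump written to a hypothetical local path; vxdump is used here only as an example of a backup utility:

   # vxdump -0 -f /backup/mnt1_full.vxdump /mnt1snap
   # umount /mnt1snap

Most of the copy-on-write and backup read activity is borne by the node holding the snapshot mount, which is why placing the snapshot on a lightly loaded node benefits overall cluster performance.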

 

There are several characteristics of a cluster snapshot, including:

• A snapshot for a cluster-mounted file system can be mounted on any node in the cluster. The file system can be a primary, secondary, or secondary-only. A stable image of the file system is provided for writes from any node.

• Multiple snapshots of a cluster file system can be mounted on the same or different cluster nodes.

• A snapshot is accessible only on the node where it is mounted. The snapshot device cannot be mounted on two nodes simultaneously.

• The device for mounting a snapshot can be a local disk or a shared volume. A shared volume is used exclusively by a snapshot mount and is not usable from other nodes as long as the snapshot is active on that device.

• On the node mounting a snapshot, the snapped file system cannot be unmounted while the snapshot is mounted.

• A CFS snapshot ceases to exist if it is unmounted or if the node mounting the snapshot fails. However, a snapshot is not affected if any other node leaves or joins the cluster.

• A snapshot of a read-only mounted file system cannot be taken. A snapshot of a cluster file system can be mounted only if the snapped cluster file system is mounted with the crw option.

In addition to file-level frozen images, there are volume-level alternatives available for shared volumes using mirror split and rejoin. Features such as Fast Mirror Resync and Space Optimized snapshot are also available.
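
For instance, a volume-level frozen image of a shared volume could be taken with a mirror break-off using VxVM commands along these lines (the disk group and volume names are illustrative, and for a shared disk group these operations are typically run from the CVM master node):

   # vxassist -g cfsdg snapstart vol1
   # vxassist -g cfsdg snapshot vol1 vol1_snap
   (back up the data on vol1_snap)
   # vxassist -g cfsdg snapback vol1_snap

With FastResync enabled on the volume, the snapback rejoin resynchronizes only the regions that changed while the mirror was split. See the Veritas Volume Manager documentation for the Fast Mirror Resync and Space Optimized snapshot procedures.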
