HP Serviceguard manual

Arbitration for Data Integrity in Serviceguard Clusters

Use of a Lock LUN as the Cluster Lock

The lock LUN is similar to the HP-UX cluster lock disk but differs in certain respects. As with the lock disk, a lock LUN is marked when a node obtains the cluster lock, so that other nodes see the lock as “taken.” Unlike a SCSI disk reservation, this mark survives an off-on power cycle of the disk device. As with the HP-UX lock disk, the lock LUN can be used in clusters of up to four nodes. The lock LUN is not mirrored.

The important differences between the lock disk in HP-UX and the lock LUN in Linux or HP-UX are as follows:

Only a single lock LUN can be configured; dual cluster locking with a lock LUN is not supported. Therefore, extended-distance disaster-tolerant configurations require a Quorum Server.

The lock LUN is created directly on a Linux partition, or on an HP-UX partition or disk, not through LVM; the lock LUN is not part of a volume group.

The lock LUN partition is dedicated to cluster lock use; however, in Linux clusters and HP-UX Integrity clusters, other partitions on the same storage unit can be used for shared storage. A lock LUN requires about 100 KB.

In clusters consisting of HP-UX Integrity servers only, you can use the idisk(1m) utility to create a partition for the lock LUN. In clusters that include HP 9000 servers, you must use an entire disk or LUN. On Linux systems, use the fdisk command to define the partition as type Linux (83).
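The Linux partitioning step above can also be scripted non-interactively with sfdisk (from util-linux). This is a minimal sketch, not the manual's literal procedure: the disk image stands in for the shared device so the commands can run without real storage, and any real device name is an assumption to be replaced.

```shell
#!/bin/sh
# Minimal sketch: create a single type-83 (Linux) partition suitable
# for use as a lock LUN. "lock-lun.img" is a stand-in disk image; on a
# cluster you would operate on the shared device node instead (the
# device name varies by system and is an assumption here).
DEV=${DEV:-lock-lun.img}
truncate -s 16M "$DEV"             # stand-in for the shared LUN
# One primary partition spanning the device, MBR type 83 (Linux):
echo 'type=83' | sfdisk "$DEV"
sfdisk -d "$DEV"                   # dump the resulting partition table
```

Interactively, fdisk reaches the same result: create a new primary partition (n), set its type to 83 (t), and write the table (w).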

Serviceguard periodically checks the health of the lock LUN and writes messages to the syslog file when the lock LUN fails a health check. Monitor this file for early detection of lock LUN problems.
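The monitoring described above can be approximated with a simple log scan. This is a sketch under assumptions: the syslog path and the exact wording of Serviceguard's health-check messages vary by platform, so the "lock lun" match string is illustrative rather than the literal logged text.

```shell
#!/bin/sh
# Minimal sketch: report recent lock LUN health-check messages.
# SYSLOG path and the 'lock lun' pattern are assumptions; adjust
# both to match your platform's syslog location and message text.
SYSLOG=${SYSLOG:-/var/log/syslog}
grep -i 'lock lun' "$SYSLOG" | tail -n 5
```

In practice such a check would be run periodically (for example from cron) and fed into whatever alerting the site already uses.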
