
Overview of cluster volume management

Note: The activation mode of a disk group controls volume I/O from different nodes in the cluster. It is not possible to activate a disk group on a given node if it is activated in a conflicting mode on another node in the cluster. When enabling activation using the defaults file, it is recommended that this file be made identical on all nodes in the cluster. Otherwise, the results of activation are unpredictable.
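
For example, a defaults file entry might look like the following. This is a sketch that assumes the defaults file is /etc/default/vxdg (its usual location, though this can vary by platform) and that sharedwrite is the desired mode:

   # Activation mode applied to shared disk groups on this node
   default_activation_mode=sharedwrite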

If the defaults file is edited while the vxconfigd daemon is already running, run the vxconfigd -k command on all nodes to restart the process.
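
For example, run the following on each node to kill and restart the daemon so that it rereads the defaults file:

   # vxconfigd -k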

If the default activation mode is anything other than off, activation following a cluster join, disk group creation, or disk group import can fail if another node in the cluster has already activated the disk group in a conflicting mode.

To display the activation mode for a shared disk group, use the vxdg list diskgroup command as described in “Listing shared disk groups” on page 421.
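
For example, using mydg as a placeholder disk group name:

   # vxdg list mydg

The activation mode is included in the detailed output that this command displays for the disk group.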

You can also use the vxdg command to change the activation mode on a shared disk group as described in “Changing the activation mode on a shared disk group” on page 425.
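
As a sketch, assuming the activation attribute described in the referenced section and mydg as a placeholder disk group name, the local activation mode could be changed as follows:

   # vxdg -g mydg set activation=exclusivewrite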

For a description of how to configure a volume so that it can only be opened by a single node in a cluster, see “Creating volumes with exclusive open access by a node” on page 426 and “Setting exclusive open access to a volume by a node” on page 426.
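
As a sketch, assuming the exclusive attribute described in the referenced sections and the placeholder names mydg and vol01, a volume could be created with exclusive open access, or the attribute could be set on an existing volume, as follows:

   # vxassist -g mydg make vol01 10g exclusive=on
   # vxvol -g mydg set exclusive=on vol01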

Connectivity policy of shared disk groups

A shared disk group provides concurrent read and write access to the volumes that it contains for all nodes in a cluster. A shared disk group can only be created on the master node. This has the following advantages and implications:

All nodes in the cluster see exactly the same configuration.

Only the master node can change the configuration.

Any changes on the master node are automatically coordinated and propagated to the slave nodes in the cluster.

Any failure that requires a configuration change must be reported to the master node so that it can be resolved correctly.

As the master node resolves failures, all the slave nodes are correctly updated. This ensures that all nodes have the same view of the configuration.

The practical implication of this design is that I/O failure on any node results in the configuration of all nodes being changed. This is known as the global detach policy.
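
As a brief sketch, the detach policy for a shared disk group is set through the vxdg set command; the following assumes a diskdetpolicy attribute and mydg as a placeholder disk group name:

   # vxdg -g mydg set diskdetpolicy=global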