Using the Quorum Disk for Cluster Integrity
The quorum disk is also used to ensure cluster integrity by performing the following functions:
• Maintaining the cluster node database
• Ensuring cluster unity
When a node joins or forms a cluster, the Cluster Service must update the node's private copy of the cluster database. When a node joins an existing cluster, the Cluster Service can retrieve the data from the other active nodes. However, when a node forms a cluster, no other node is available. The Cluster Service uses the quorum disk's recovery logs to update the node's cluster database, thereby maintaining the correct version of the cluster database and ensuring that the cluster is intact.
For example, suppose node 1 fails while node 2 continues to operate, writing changes to the cluster database. Then, before node 1 can be restarted, node 2 also fails. When node 1 becomes active again, it uses the quorum disk's recovery logs to update its private copy of the cluster database with the changes made by node 2.
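The recovery described above can be sketched as a simple log replay, in which a restarting node applies every quorum-disk log entry newer than its own last-known change. This is an illustrative sketch only, not MSCS code; the function, key, and sequence names are hypothetical.

```python
# Illustrative sketch (not MSCS internals): replaying quorum-disk
# recovery logs to bring a stale node's private cluster database
# up to date. All names here are hypothetical.

def replay_recovery_log(local_db, local_sequence, recovery_log):
    """Apply all log entries newer than the node's last-known sequence."""
    for entry in recovery_log:
        if entry["sequence"] > local_sequence:
            local_db[entry["key"]] = entry["value"]
            local_sequence = entry["sequence"]
    return local_db, local_sequence

# Node 1 went down after sequence 2; node 2 kept logging changes.
db = {"GroupA.owner": "Node1"}
log = [
    {"sequence": 3, "key": "GroupA.owner", "value": "Node2"},
    {"sequence": 4, "key": "GroupB.state", "value": "Online"},
]
db, seq = replay_recovery_log(db, 2, log)
```

After the replay, node 1's copy reflects every change node 2 recorded while node 1 was down, which is how the quorum disk keeps the cluster database correct across overlapping failures.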
To ensure cluster unity, the operating system uses the quorum disk to ensure that only one set of active, communicating nodes is allowed to operate as a cluster. A node can form a cluster only if it can gain control of the quorum disk. A node can join a cluster or remain in an existing cluster only if it can communicate with the node that controls the quorum disk.
For example, if the private network (cluster interconnect) between cluster nodes 1 and 2 fails, each node assumes that the other node has failed, and both nodes attempt to continue operating as the cluster. If both nodes were allowed to do so, the result would be two separate clusters using the same cluster name and competing for the same resources. To prevent this, MSCS uses ownership of the quorum disk to maintain cluster unity: the node that gains control of the quorum disk is allowed to form the cluster, and the other node fails over its resources and becomes inactive.
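The arbitration above can be illustrated with a small sketch in which a shared, atomically claimed token stands in for the disk reservation that MSCS actually places on the quorum disk. This is a simplified model, not the real arbitration protocol; the class and function names are hypothetical.

```python
# Illustrative sketch (not MSCS internals): when the interconnect
# fails, each node tries to reserve the quorum disk; only the winner
# may form the cluster. A lock-protected owner field stands in for
# the physical reservation on the disk.
import threading

class QuorumDisk:
    def __init__(self):
        self._lock = threading.Lock()
        self.owner = None

    def try_reserve(self, node):
        """Atomically claim the quorum disk; the first caller wins."""
        with self._lock:
            if self.owner is None:
                self.owner = node
                return True
            return False

def arbitrate(node, disk):
    # The node that reserves the disk forms the cluster; the loser
    # fails over its resources and becomes inactive.
    return "forms cluster" if disk.try_reserve(node) else "goes inactive"

disk = QuorumDisk()
results = {n: arbitrate(n, disk) for n in ("Node1", "Node2")}
```

Whatever the timing, exactly one node wins the reservation, so only one set of nodes can ever operate under the cluster name.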
Resource Failure
A failed resource is not operational on its current host node. At periodic intervals, the Cluster Service invokes the Resource Monitor to check whether each resource is operational. The Resource Monitor uses each resource's resource DLL to determine whether the resource is functioning properly and reports the results back to the Cluster Service.
Adjusting the Poll Intervals
You can specify how frequently the Cluster Service checks for failed resources by setting the Looks Alive (general resource check) and Is Alive (detailed resource check) poll intervals. The Cluster Service requests a more thorough check of the resource's state at each Is Alive interval than it does at each Looks Alive interval; therefore, the Is Alive poll interval is typically longer than the Looks Alive poll interval.
NOTE: Do not adjust the Looks Alive and Is Alive settings unless instructed by technical support.
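The two-interval scheme can be sketched as follows. The interval values shown are the common MSCS defaults (in milliseconds) for many resource types, and the schedule function is a hypothetical stand-in for the Resource Monitor's timer logic, not actual product code.

```python
# Illustrative sketch (not the Resource Monitor's code): two poll
# intervals, with the cheap Looks Alive check running more often
# than the thorough Is Alive check. 5000 ms and 60000 ms are the
# common MSCS defaults for many resource types.
LOOKS_ALIVE_MS = 5000    # quick, general resource check
IS_ALIVE_MS = 60000      # slower, detailed resource check

def poll_schedule(elapsed_ms):
    """Return which checks are due at a given elapsed time."""
    checks = []
    if elapsed_ms % LOOKS_ALIVE_MS == 0:
        checks.append("LooksAlive")
    if elapsed_ms % IS_ALIVE_MS == 0:
        checks.append("IsAlive")
    return checks

# Over one minute, the general check runs 12 times, the detailed
# check once.
ticks = [poll_schedule(t) for t in range(5000, 60001, 5000)]
```

Because the detailed check is more expensive, keeping its interval long limits the monitoring overhead on the host node, which is why the Is Alive interval is typically the longer of the two.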