After failover, the Cluster Administrator can reset the following recovery policies:
• Application dependencies
• Application restart on the same cluster node
• Workload rebalancing (or failback) when a failed cluster node is repaired and brought back online
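
These policies can be pictured as a small set of per-group settings. The following Python sketch is purely illustrative; the class and field names are invented here and do not correspond to the actual Cluster Service configuration:

from dataclasses import dataclass, field

@dataclass
class GroupRecoveryPolicy:
    # Hypothetical model of the per-group recovery policies listed above.
    dependencies: list = field(default_factory=list)  # resources the application requires
    restart_attempts_on_same_node: int = 3            # restarts tried before failing over
    allow_failback: bool = True                       # rebalance when the repaired node returns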
Failover Process
The Cluster Service attempts to fail over a group when any of the following conditions occur:
• The node currently hosting the group becomes inactive for any reason.
• One of the resources within the group fails, and it is configured to affect the group.
• Failover is forced by the System Administrator.
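
For illustration only, these three trigger conditions could be evaluated as in the Python sketch below; the attribute names are assumptions made for the sketch, not the actual Cluster Service API:

def should_fail_over(group):
    # Sketch of the three failover triggers listed above.
    if not group.hosting_node.is_active:    # the hosting node became inactive
        return True
    if any(r.failed and r.affects_group for r in group.resources):
        return True                         # a group-affecting resource failed
    return group.failover_forced_by_admin   # failover forced by the administrator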
When a failover occurs, the Cluster Service attempts to perform the following steps, as sketched in the example after this list:
• The group’s resources are taken offline.
The resources in the group are taken offline by the Cluster Service in the order determined by the group's dependency hierarchy: dependent resources first, followed by the resources on which they depend.
For example, if an application depends on a Physical Disk resource, the Cluster Service takes the application offline first, allowing the application to write changes to the disk before the disk is taken offline.
• Each resource is taken offline in turn.
The Cluster Service takes a resource offline by invoking, through the Resource Monitor, the resource DLL that manages the resource. If the resource does not shut down within a specified time limit, the Cluster Service forces it to shut down.
• The group is transferred to the next preferred host node.
When all of the resources are offline, the Cluster Service attempts to transfer the group to the node that is listed next on the group's list of preferred host nodes.
For example, if cluster node 1 fails, the Cluster Service moves the group to cluster node 2, the next node on the list.
• The group’s resources are brought back online.
If the Cluster Service successfully moves the group to another node, it tries to bring all of the group's resources online. Failover is complete when all of the group's resources are online on the new node.
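
The complete sequence (dependency-ordered shutdown, forced shutdown on timeout, transfer to the next preferred host, and dependency-ordered startup) is summarized in the following Python sketch. The classes and method names are hypothetical stand-ins for the Cluster Service, the Resource Monitor, and the resource DLLs, not the real interfaces:

import time

class Resource:
    # Hypothetical stand-in for a clustered resource and its resource DLL.
    def __init__(self, name, depends_on=()):
        self.name = name
        self.depends_on = list(depends_on)   # resources this one requires
        self.online = True

    def request_offline(self):
        # A real resource DLL may take time to stop; this stub stops at once.
        print("offline:", self.name)
        self.online = False

    def force_offline(self):
        print("forced offline:", self.name)
        self.online = False

    def bring_online(self):
        print("online:", self.name)
        self.online = True

def dependency_order(resources):
    # Topological order: each resource follows the resources it depends on.
    # Bringing resources online uses this order; taking them offline uses
    # its reverse, so dependents go first.
    ordered, seen = [], set()
    def visit(res):
        if res.name in seen:
            return
        seen.add(res.name)
        for dep in res.depends_on:
            visit(dep)
        ordered.append(res)
    for res in resources:
        visit(res)
    return ordered

def fail_over(group_resources, preferred_nodes, failed_node, timeout_s=180):
    online_order = dependency_order(group_resources)
    # 1. Take the group's resources offline, dependents first.
    for res in reversed(online_order):
        res.request_offline()
        deadline = time.monotonic() + timeout_s
        while res.online and time.monotonic() < deadline:
            time.sleep(0.1)                  # wait for a graceful shutdown
        if res.online:
            res.force_offline()              # time limit exceeded
    # 2. Transfer the group to the next node on the preferred host list.
    target = next(n for n in preferred_nodes if n != failed_node)
    print("moving group from", failed_node, "to", target)
    # 3. Bring the resources online in dependency order; failover is
    #    complete when every resource is online on the new node.
    for res in online_order:
        res.bring_online()

disk = Resource("Physical Disk")
app = Resource("Application", depends_on=[disk])
fail_over([app, disk], ["cluster node 1", "cluster node 2"], failed_node="cluster node 1")

Running the sketch prints the application going offline before the disk, the move to cluster node 2, and the disk coming online before the application, matching the ordering described in the steps above.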