4 Cluster administration
One important feature of HP StorageWorks P4000 G2 Unified NAS Gateways is that they can operate either as a single node or as part of a cluster. This chapter discusses cluster installation and management issues.
Cluster overview
Up to eight server nodes can be connected to each other and deployed as a no single point of failure (NSPOF) cluster. The nodes communicate over a private network, which allows the cluster to track the state of each node. Each node sends periodic messages, called heartbeats, to the other nodes. If a node stops sending heartbeats, the cluster service fails over any resources that the node owns to another node. For example, if the node that owns the Quorum disk is shut down for any reason, its heartbeat stops; the other nodes detect the missing heartbeat, and one of them takes over ownership of the Quorum disk and the cluster.
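The heartbeat and failover behavior described above can be illustrated with a minimal Python sketch. This is not HP's implementation; the class names, the interval values, and the failover policy (move resources to the first surviving node) are illustrative assumptions.

```python
import time

# Illustrative values only; the real cluster service uses its own timing.
FAILURE_THRESHOLD = 3.0  # seconds of silence before a node is considered down


class Node:
    """A cluster member that periodically reports a heartbeat."""

    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()
        self.resources = []  # resources this node currently owns

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()


def fail_over(nodes, now=None):
    """Move resources off any node whose heartbeat has gone silent.

    Returns the list of nodes judged to have failed.
    """
    now = time.monotonic() if now is None else now
    alive = [n for n in nodes if now - n.last_heartbeat < FAILURE_THRESHOLD]
    dead = [n for n in nodes if n not in alive]
    for node in dead:
        if alive:
            # Hand the failed node's resources to a surviving node.
            target = alive[0]
            target.resources.extend(node.resources)
            node.resources = []
    return dead
```

For example, if the node owning the Quorum disk stops heartbeating, a call to `fail_over` transfers that resource to a surviving node, mirroring the takeover described above.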
Clustering servers greatly enhances the availability of file serving by enabling file shares to fail over to an alternative server if problems arise. Clients see only a brief interruption of service as the file share resource transitions from one server node to another.
Cluster terms and components
Nodes
The most basic parts of a cluster are the servers, referred to as nodes. A server node is any individual server that is a member of a cluster.
Resources
Hardware and software components that are managed by the cluster service are called cluster resources. Cluster resources have three defining characteristics:
•They can be brought online and taken offline.
•They can be managed in a cluster.
•They can be owned by only one node at a time.
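The three defining characteristics can be captured in a small Python sketch. This is a conceptual model, not the product's implementation; the class and method names are assumptions made for illustration.

```python
class ClusterResource:
    """Illustrative model of a cluster resource.

    Captures the three defining characteristics: it can be brought
    online and taken offline, it can be managed (here, moved between
    nodes), and it is owned by at most one node at a time.
    """

    def __init__(self, name):
        self.name = name
        self.online = False
        self.owner = None  # at most one owning node at any time

    def bring_online(self, node):
        if self.owner is not None and self.owner != node:
            raise RuntimeError(f"{self.name} is already owned by {self.owner}")
        self.owner = node
        self.online = True

    def take_offline(self):
        self.online = False

    def move_to(self, node):
        """Transfer ownership: take offline, release, bring online elsewhere."""
        self.take_offline()
        self.owner = None
        self.bring_online(node)
```

Attempting to bring a resource online on a second node while it is still owned raises an error, reflecting the single-owner rule; `move_to` models the orderly transfer that occurs during failover.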
Some resources are created automatically by the system and other resources must be set up manually. Resource types include:
•IP address resource
•Cluster name resource
•Cluster quorum disk resource
•Physical disk resource
•Virtual server name resource
•CIFS file share resource
•NFS file share resource
P4000 G2 Unified NAS Gateway User Guide 39