Later, when the power outage in the primary datacenter is resolved and the storage array is brought back online, the remote copy volume group status changes to failsafe. If you then fail back the corresponding cluster application role from the secondary datacenter to one of the hosts in the primary datacenter, the physical disk resources may fail to come online even though the CLX resource comes online successfully.

This issue occurs if the LUN WWNs of the virtual volumes in the RC groups are the same on both the primary and secondary storage arrays.

To avoid this situation (physical disk resources not coming online), manually rescan the disks on the Microsoft failover cluster hosts connected to the primary datacenter storage array as soon as the storage array is brought back online, and before you fail back the corresponding cluster application role from the secondary datacenter to a host in the primary datacenter.
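As a reference, one way to perform the manual rescan on a cluster host is with the diskpart utility from an elevated command prompt. This is a minimal sketch; the DISKPART> prompt is shown only to indicate the interactive session:

diskpart
DISKPART> rescan
DISKPART> exit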

Another way to avoid this situation is to configure a CLX preexec script that rescans the disks. The preexec script can be configured in CLX as DiskRescanDiskpart.bat in the Windows folder, with the following line in the script:

echo rescan | diskpart

NOTE: The syntax of the command in the preexec script must be correct; otherwise, the preexec script operation fails and the CLX failover operation also fails.
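For reference, a minimal sketch of what the complete DiskRescanDiskpart.bat file could look like; the @echo off line and the comments are illustrative additions, and the script must run with administrative privileges for diskpart to succeed:

@echo off
rem DiskRescanDiskpart.bat - rescans disks before CLX brings resources online
rem Pipe the rescan command into diskpart so that it runs non-interactively
echo rescan | diskpart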

If the storage array in the secondary datacenter is brought back online or started after an array shutdown caused by a datacenter disaster or an InForm OS upgrade, the disks presented to the hosts in the secondary datacenter are not recognized by the Windows operating system. As a result, if a failover of the cluster resources to the secondary datacenter is triggered, the disk cluster resource does not come online even though the corresponding CLX resource comes online.

To avoid this situation (physical disk resources not coming online), manually rescan the disks on the Microsoft failover cluster hosts connected to the secondary datacenter storage array as soon as the storage array is brought back online, and before you fail over the corresponding cluster application role to the secondary datacenter hosts.

Another way to avoid this situation is to configure a CLX preexec script that rescans the disks. The preexec script can be configured in CLX as DiskRescanDiskpart.bat in the Windows folder, with the following line in the script:

echo rescan | diskpart

NOTE: The syntax of the command in the preexec script must be correct; otherwise, the preexec script operation fails and the CLX failover operation also fails.

Cannot connect to HP 3PAR storage system

During HP 3PAR Cluster Extension configuration, if you are unable to connect to the 3PAR storage system, ensure that the storage system is up and running and that its network ports are functioning properly. To check for a response from the storage system over the network, use the ping command from the cluster nodes to the storage system's network name or IP address:

ping <storage system network name or IP address>

If you are using the storage system network name, verify that it resolves to the proper IP address by using the nslookup command from the cluster nodes:

nslookup <storage system network name>
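For convenience, both checks can be combined in a small batch file run from each cluster node. The following is a minimal sketch; the file name CheckArrayConnectivity.bat and the host name 3par-array.example.com are placeholders, so substitute your storage system's network name or IP address:

@echo off
rem CheckArrayConnectivity.bat - verifies that the 3PAR storage system name
rem resolves and that the array responds over the network from this node
set ARRAY=3par-array.example.com
nslookup %ARRAY%
ping %ARRAY%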


Cluster Software specifications

HP Cluster Software is a robust solution designed to enhance the reliability, availability, and scalability of computing environments in enterprise settings. This software is instrumental in managing clusters of servers, providing a unified framework that allows for efficient resource management, workload distribution, and high availability.

One of the main features of HP Cluster Software is its ability to deliver high availability through failover mechanisms. In the event of a hardware or software failure, the software automatically shifts workloads from the affected node to a standby node within the cluster, minimizing downtime. This feature is critical for organizations that require continuous access to their data and applications.

Scalability is another significant characteristic of HP Cluster Software. Organizations can easily add or remove nodes from the cluster without disrupting ongoing operations. This flexibility ensures that enterprises can adapt to changing workloads and resource demands efficiently, making it suitable for environments ranging from small businesses to large data centers.

Load balancing is a key technology employed by HP Cluster Software. It intelligently distributes workloads across the available nodes, optimizing resource utilization and ensuring that no single server is overwhelmed. By balancing the load, organizations can achieve better performance and enhance the response times of applications, which are essential for user satisfaction.

HP Cluster Software supports various clustering topologies, including active-active and active-passive configurations. This versatility allows organizations to choose the architecture that best fits their operational requirements. Additionally, the software integrates seamlessly with various HP and third-party hardware and software solutions, thus providing a holistic environment for managing IT resources.

Moreover, HP Cluster Software offers centralized management tools that simplify cluster administration. Administrators can monitor cluster performance, manage workloads, and configure settings all from a single interface. This ease of use reduces the complexity often associated with managing large clusters and empowers IT teams to respond rapidly to issues.

In summary, HP Cluster Software is an essential tool for organizations looking to enhance their IT infrastructure's availability, reliability, and performance. With its failover capabilities, scalability options, load balancing technology, and centralized management features, it stands out as a comprehensive solution for modern computing challenges.