IMPORTANT: Although a cross-subnet topology can be implemented at a single site, it is most commonly used by extended-distance clusters, and specifically by site-aware disaster-tolerant clusters, which require Metrocluster (additional HP software).

Design and configuration of such clusters are covered in the disaster-tolerant documentation delivered with Serviceguard. For more information, see the following documents at http://www.hp.com/go/hpux-serviceguard-docs:

Understanding and Designing Serviceguard Disaster Tolerant Architectures

Designing Disaster Tolerant HA Clusters Using Metrocluster and Continentalclusters

The white paper Configuration and Administration of Oracle 10g R2 RAC Database in HP Metrocluster

Replacing Failed Network Cards

Depending on the system configuration, it is possible to replace failed network cards while the cluster is running. The process is described under “Replacement of LAN Cards” in the chapter “Troubleshooting Your Cluster.” With some restrictions, you can also add and delete LAN interfaces to and from the cluster configuration while the cluster is running; see “Changing the Cluster Networking Configuration while the Cluster Is Running” (page 297).
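As a sketch, an online change to the cluster networking configuration typically follows the export-edit-apply pattern with the Serviceguard configuration commands. The cluster name and file path below are hypothetical; see the chapter cited above for the exact procedure and its restrictions.

```shell
# Export the current cluster configuration to an ASCII file
# (cluster name "cluster1" is hypothetical).
cmgetconf -c cluster1 /tmp/cluster1.ascii

# Edit the NETWORK_INTERFACE entries in /tmp/cluster1.ascii to add
# or delete the LAN interface, then verify the edited file.
cmcheckconf -C /tmp/cluster1.ascii

# Apply the verified configuration to the running cluster.
cmapplyconf -C /tmp/cluster1.ascii
```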

Redundant Disk Storage

Each node in a cluster has its own root disk, but each node is also physically connected to several other disks in such a way that more than one node can obtain access to the data and programs associated with a package it is configured for. This access is provided by a storage manager such as Logical Volume Manager (LVM), Veritas Volume Manager (VxVM), or Veritas Cluster Volume Manager (CVM). LVM and VxVM disk storage groups can be activated by no more than one node at a time; when a failover package is moved, however, its storage group can be activated by the adoptive node. All of the disks in the storage group owned by a failover package must be connected to the original node and to all possible adoptive nodes for that package. Disk storage is made redundant by using RAID or software mirroring.
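A minimal sketch of preparing a shared, software-mirrored LVM volume group on HP-UX follows. The device files, volume group name, and sizes are hypothetical, and software mirroring (`lvcreate -m`) requires the MirrorDisk/UX product; consult the LVM and Serviceguard configuration chapters for the full procedure.

```shell
# Hypothetical device files; real paths depend on your hardware.
# Initialize two physical volumes on disks reachable from all nodes,
# one on each of two buses for redundancy.
pvcreate /dev/rdsk/c1t2d0
pvcreate /dev/rdsk/c2t2d0

# Create the volume group device file and the volume group itself.
mkdir /dev/vgpkg
mknod /dev/vgpkg/group c 64 0x010000
vgcreate /dev/vgpkg /dev/dsk/c1t2d0 /dev/dsk/c2t2d0

# Create a logical volume with one mirror copy
# (requires MirrorDisk/UX).
lvcreate -L 500 -m 1 -n lvdata /dev/vgpkg

# Mark the volume group cluster-aware so Serviceguard can activate
# it exclusively on one node at a time.
vgchange -c y /dev/vgpkg
vgchange -a e /dev/vgpkg
```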

Supported Disk Interfaces

The following interfaces are supported by Serviceguard for disks that are connected to two or more nodes (shared data disks):

Single-ended SCSI

SCSI

Fibre Channel

Not all SCSI disks are supported. See the HP Unix Servers Configuration Guide (available through your HP representative) for a list of currently supported disks.

NOTE: In a cluster that contains systems with PCI SCSI adapters, you cannot attach both PCI and NIO SCSI adapters to the same shared SCSI bus.

External shared Fast/Wide SCSI buses must be equipped with in-line terminators for disks on a shared bus. Refer to the “Troubleshooting” chapter for additional information.

When planning and assigning SCSI bus priority, remember that one node can dominate a bus shared by multiple nodes, because the device with the highest SCSI address wins bus arbitration; the addresses assigned to each node's interface card therefore determine that node's priority on the shared bus. All SCSI addresses, including the addresses of all interface cards, must be unique for all devices on a shared bus.
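To illustrate the uniqueness requirement, a quick shell check can flag duplicate SCSI addresses in a planned bus layout before any hardware is cabled. The address list below is hypothetical.

```shell
# Hypothetical planned SCSI addresses for one shared bus
# (interface cards and disks together).
addrs="7 6 5 4 3 2"

# The layout is valid only if every address on the bus is unique.
dups=$(printf '%s\n' $addrs | sort | uniq -d)
if [ -z "$dups" ]; then
    echo "bus layout OK: all SCSI addresses unique"
else
    echo "conflict on SCSI address(es): $dups"
fi
```

The same one-liner can be rerun whenever a node or disk is added to the shared bus.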

32 Understanding Serviceguard Hardware Configurations
