Planning the Fabric

Hyper Messaging Protocol (HMP)

Switch Management: Not Supported

Switch management is not supported and will not operate properly if it is enabled on a HyperFabric cluster.

Diagnostics: Supported

Diagnostics can be run to obtain information about many of the HyperFabric components via the clic_diag, clic_probe, and clic_stat commands, as well as the Support Tools Manager (STM).

For more detailed information on HyperFabric diagnostics, see “Running Diagnostics” on page 149.

Configuration Parameters

This section describes the general maximum limits for HMP HyperFabric configurations. Numerous variables can affect the performance of any particular HyperFabric configuration. See the “HMP Supported Configurations” section for guidance on specific HyperFabric configurations for HMP applications.

HyperFabric is supported only on HP 9000 series UNIX servers and workstations.

HMP is supported only on the PCI 4X adapters, A6092A and A6386A.

Although HMP is supported on A6092A HF1 (copper) adapters, its performance advantages will not be fully realized unless it is used with A6386A HF2 (fibre) adapters and related fibre hardware. See Table 2-6 on page 41 for details.

Maximum Supported Nodes and Adapter Cards:

HyperFabric clusters running HMP applications are limited to supporting a maximum of 64 adapter cards.

In point-to-point configurations running HMP applications, the complexity and performance limitations of connecting a large number of nodes make it necessary to include switching in the fabric. Typically, point-to-point configurations consist of only two or three nodes.

In switched configurations running HMP applications, HyperFabric supports a maximum of 64 interconnected adapter cards.

A maximum of 8 HyperFabric adapter cards are supported per instance of the HP-UX operating system. The actual number of adapter cards a particular node is able to accommodate also depends on slot availability and system resources. See node specific documentation for details.

A maximum of 8 configured IP addresses are supported by the HyperFabric subsystem per instance of the HP-UX operating system.
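The adapter and IP address limits above can be checked mechanically when planning a cluster. The following sketch is a hypothetical planning aid, not an HP tool; it simply encodes the limits stated in this section (64 adapters per switched cluster, 8 adapters and 8 configured IP addresses per HP-UX instance):

```python
# Hypothetical planning check (not an HP utility): validates a proposed
# HMP HyperFabric cluster against the limits stated in this section.
MAX_ADAPTERS_PER_CLUSTER = 64   # switched HMP configurations
MAX_ADAPTERS_PER_NODE = 8       # per instance of HP-UX
MAX_IPS_PER_NODE = 8            # configured IP addresses per HP-UX instance

def check_cluster(nodes):
    """nodes: list of (adapter_count, ip_count) pairs, one per HP-UX instance.
    Returns a list of limit violations; an empty list means the plan is
    within the documented limits (slot availability and system resources
    must still be checked against node-specific documentation)."""
    problems = []
    total_adapters = sum(adapters for adapters, _ in nodes)
    if total_adapters > MAX_ADAPTERS_PER_CLUSTER:
        problems.append(
            f"{total_adapters} adapters exceeds the cluster limit of "
            f"{MAX_ADAPTERS_PER_CLUSTER}")
    for i, (adapters, ips) in enumerate(nodes):
        if adapters > MAX_ADAPTERS_PER_NODE:
            problems.append(
                f"node {i}: {adapters} adapters exceeds the per-node "
                f"limit of {MAX_ADAPTERS_PER_NODE}")
        if ips > MAX_IPS_PER_NODE:
            problems.append(
                f"node {i}: {ips} IP addresses exceeds the per-node "
                f"limit of {MAX_IPS_PER_NODE}")
    return problems

# Example: an 8-node cluster, each node with 2 adapters and 2 IP addresses
print(check_cluster([(2, 2)] * 8))  # within limits -> []
```

Note that passing this check is necessary but not sufficient: as stated above, the number of adapters a node can actually accommodate also depends on slot availability and system resources.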

Maximum Number of Switches:

Up to 4 switches (16-port copper, 16-port fibre, or mixed 8 fibre ports / 4 copper ports) can be interconnected (meshed) in a single HyperFabric cluster.
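When sizing a meshed fabric, the ports consumed by inter-switch links reduce the ports available for nodes. The sketch below is an illustrative estimate, not from the manual: it assumes a full mesh (every switch pair linked) and that each inter-switch link consumes one port on each end.

```python
from itertools import combinations

# Port counts per switch model, per this section
# (the "mixed" switch has 8 fibre ports plus 4 copper ports).
PORTS = {"copper16": 16, "fibre16": 16, "mixed": 12}
MAX_SWITCHES = 4  # maximum meshed switches per HyperFabric cluster

def usable_node_ports(switch_types, links_per_pair=1):
    """Estimate ports left for node connections after fully meshing the
    switches. Assumes a full mesh with each link consuming one port on
    each end -- an illustrative assumption, not a statement from the
    manual."""
    if len(switch_types) > MAX_SWITCHES:
        raise ValueError("HyperFabric supports at most 4 meshed switches")
    total_ports = sum(PORTS[t] for t in switch_types)
    pairs = len(list(combinations(range(len(switch_types)), 2)))
    mesh_ports = 2 * pairs * links_per_pair  # two ports per link
    return total_ports - mesh_ports

# Example: four 16-port fibre switches, fully meshed with single links:
# 64 ports total, 6 links consume 12 ports, leaving 52 for nodes.
print(usable_node_ports(["fibre16"] * 4))  # -> 52
```

Using links_per_pair greater than 1 models the trunked inter-switch connections discussed below, which consume additional ports without (in the current HMP release) adding redundancy or performance.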

Trunking Between Switches (Multiple Connections):

HMP is supported in configurations where switches are interconnected through multiple cables. However, with the current release of HMP software, this configuration neither eliminates a single point of failure nor increases performance.
