
Planning the Fabric
TCP/IP
•Load Balancing: Supported
When an HP 9000 HyperFabric cluster is running TCP/IP applications, the HyperFabric driver balances the load across all available resources in the cluster, including nodes, adapter cards, links, and multiple links between switches.
•Switch Management: Not Supported
Switch management will not operate properly if it is enabled on a HyperFabric cluster.
•Diagnostics: Supported
Diagnostics can be run to obtain information on many of the HyperFabric components via the clic_diag, clic_probe, and clic_stat commands, as well as through the Support Tools Manager (STM).
For more detailed information on HyperFabric diagnostics, see “Running Diagnostics” on page 103.
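As a brief illustration, the diagnostic commands named above are run from an HP-UX shell. The invocations below are a minimal sketch; no options are shown because the supported options vary by HyperFabric software release, so consult the man pages on your system for the exact syntax:

    # Probe the fabric and report the HyperFabric hardware it can reach
    clic_probe

    # Display HyperFabric driver and adapter statistics
    clic_stat

    # Run diagnostics on the HyperFabric adapter cards
    clic_diag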
Configuration Parameters
This section describes the general maximum limits for TCP/IP HyperFabric configurations. Numerous variables can affect the performance of any particular HyperFabric configuration; see the “TCP/IP Supported Configurations” section for guidance on specific HyperFabric configurations for TCP/IP applications.
•HyperFabric is supported only on HP 9000 series UNIX servers and workstations.
•TCP/IP is supported for all HyperFabric hardware and software.
•Maximum Supported Nodes and Adapter Cards:
In point-to-point configurations, the complexity and performance limitations of connecting a large number of nodes make it necessary to include switching in the fabric. Typically, point-to-point configurations consist of only 2 or 3 nodes.
In switched configurations, HyperFabric supports a maximum of 64 interconnected adapter cards.
A maximum of 8 HyperFabric adapter cards are supported per instance of the HP-UX operating system.
A maximum of 8 configured IP addresses are supported by the HyperFabric subsystem per instance of the HP-UX operating system. (An example of configuring one of these addresses appears after this list.)
•Maximum Number of Switches:
Up to 4 switches (16-port copper, 16-port fibre, or mixed 8 fibre ports / 4 copper ports) can be interconnected (meshed) in a single HyperFabric cluster.
•Trunking Between Switches (Multiple Connections):
Trunking between switches can be used to increase bandwidth and cluster throughput. Trunking is also a way to eliminate a possible single point of failure. The number of trunked cables between switches is limited only by port availability. To assess the effects of trunking on the performance of any particular HyperFabric configuration, consult your HP representative.
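As an illustration of the per-node IP address limit above, each configured HyperFabric IP address is assigned to a HyperFabric network interface, which can be brought up with the standard HP-UX ifconfig command. The following is a minimal sketch; the clic0 interface name and the addresses shown are assumptions for illustration only, so verify the actual interface names on your system (for example, with lanscan):

    # Hypothetical example: assign an IP address to the first
    # HyperFabric interface and bring it up (interface name and
    # addresses are assumptions; use lanscan to list real interfaces)
    ifconfig clic0 192.168.10.1 netmask 255.255.255.0 up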