Table 1-4 Supported Interconnects

                                                          INTERCONNECT FAMILIES
Cluster Platform and   Gigabit     InfiniBand    InfiniBand PCI Express     ConnectX InfiniBand      Myrinet (Rev.     QsNetII
Hardware Model         Ethernet    PCI-X         Single Data Rate and       Double Data Rate         D, E, and F)
                                                 Double Data Rate (DDR)     (DDR)(1)
CP3000                 X           X             X                          X                        X
CP3000BL               X                         X                          X
CP4000                 X           X             X                          X                        X                 X(2)
CP4000BL               X                         X                          X
CP6000                 X           X             X(3)                       X(3)                                       X

(1) Mellanox ConnectX InfiniBand cards require OFED Version 1.2.5 or later.

(2) The HP ProLiant DL385 G2 and DL145 G3 servers require a PCI Express card to use this interconnect.

(3) This interconnect is supported on CP6000 hardware models with PCI Express.
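
The support matrix in Table 1-4 can also be expressed as data. The following is a minimal Python sketch, not part of any HP tool; the family names and the supports() helper are our own shorthand for the table's columns.

    # Illustrative encoding of Table 1-4; names are shorthand, not HP identifiers.
    SUPPORTED_INTERCONNECTS = {
        "CP3000":   {"GigE", "IB PCI-X", "IB PCIe SDR/DDR", "ConnectX IB DDR", "Myrinet"},
        "CP3000BL": {"GigE", "IB PCIe SDR/DDR", "ConnectX IB DDR"},
        "CP4000":   {"GigE", "IB PCI-X", "IB PCIe SDR/DDR", "ConnectX IB DDR", "Myrinet", "QsNetII"},
        "CP4000BL": {"GigE", "IB PCIe SDR/DDR", "ConnectX IB DDR"},
        "CP6000":   {"GigE", "IB PCI-X", "IB PCIe SDR/DDR", "ConnectX IB DDR", "QsNetII"},
    }

    def supports(model, family):
        """Return True if the hardware model supports the interconnect family."""
        return family in SUPPORTED_INTERCONNECTS.get(model, set())

    print(supports("CP3000BL", "Myrinet"))  # False: blade platforms lack Myrinet support
    print(supports("CP4000", "QsNetII"))    # True (see footnote 2 for the PCIe requirement)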

Mixing Adapters

Within a given interconnect family, several different adapters can be supported. However, HP requires that all adapters be from the same interconnect family; a mix of adapters from different interconnect families is not supported.
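
As an illustration of this rule, here is a minimal Python sketch (the inventory mapping and the function name are hypothetical, not an HP XC utility) that verifies all adapters in a cluster come from a single interconnect family:

    # Hypothetical inventory check: every adapter must share one interconnect family.
    def single_family(adapters_by_node):
        """adapters_by_node maps a node name to its adapters' family names."""
        families = {fam for fams in adapters_by_node.values() for fam in fams}
        return len(families) <= 1

    nodes = {"n1": ["InfiniBand"], "n2": ["InfiniBand"], "n3": ["Myrinet"]}
    print(single_family(nodes))  # False: InfiniBand and Myrinet are mixed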

InfiniBand Double Data Rate

All components in a network must be DDR to achieve DDR performance levels.

ConnectX InfiniBand Double Data Rate

Currently ConnectX adapters cannot be mixed with other types of adapters.

Myrinet Adapters

The Myrinet adapters can be either the single-port M3F-PCIXD-2 (Rev. D) or the dual-port M3F2-PCIXE-2 (Rev. E and Rev. F); mixing adapter types is not supported.

QsNetII

The QsNetII high-speed interconnect from Quadrics, Ltd. is the only version of the Quadrics interconnect that is supported.

1.9 Large-Scale Systems

A typical HP XC system contains from 5 to 512 nodes. To support larger systems, an HP XC system can be arranged into a large-scale configuration with up to 1024 compute nodes (HP might consider larger systems as special cases).

This configuration arranges the HP XC system as a collection of hardware regions that are tied together through a ProCurve 2848 Ethernet switch.

The nodes of the large-scale system are divided as equally as possible among the individual HP XC systems, which are known as regions. The head node for a large-scale HP XC system is always the head node of region 1.
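
To make "as equally as possible" concrete, the following is a minimal Python sketch (the function is ours for illustration, not an HP XC utility) that splits a node count across regions so that region sizes differ by at most one:

    def region_sizes(total_nodes, regions):
        """Split nodes across regions; sizes differ by at most one node."""
        base, extra = divmod(total_nodes, regions)
        # The first `extra` regions each receive one additional node.
        return [base + 1 if i < extra else base for i in range(regions)]

    print(region_sizes(1024, 4))  # [256, 256, 256, 256]
    print(region_sizes(1000, 3))  # [334, 333, 333]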
