1 Introduction to Technology

Understanding the Fabric Clustering System

HP Fabric Clustering System

The HP Fabric Clustering System provides a high-performance InfiniBand computing environment intended for applications that are sensitive to latency, bandwidth, and CPU consumption.

HP Fabric is an RDMA interconnect. Remote Direct Memory Access (RDMA) technology allows cooperating endnodes to expose their local memory buffers for direct data placement by peer endnodes (including loopback operation) over reliable communication fabrics. These fabrics may be internal, such as an endnode backplane, or external, such as an I/O-attached device or fabric. RDMA technology reduces the performance constraints that memory-subsystem bottlenecks impose on networked computing systems.

Readers are also advised to consult the InfiniBand specifications at http://www.infinibandta.org for more information on the technology.

Understanding InfiniBand™

This section provides a brief and high-level overview of the InfiniBand architecture.

What is InfiniBand?

The HP Fabric Clustering System supports InfiniBand as a fabric both for high-performance technical computing clusters and for commercial database clusters.

InfiniBand is a switched I/O fabric architecture. It was created to meet the increasing demands of the data center, and it allows data centers to harness server computing power by delivering an I/O fabric that provides reliable, low-latency communication both from one server to another and between servers and their shared I/O resources.

InfiniBand technology refers to both a communications and management infrastructure that increases the communication speed between CPUs, devices within servers, and subsystems located throughout a network. It defines a switched communications fabric that allows many devices to communicate concurrently with high bandwidth and low latency in a protected, easily managed environment.

InfiniBand Advantages

InfiniBand addresses four problems:

Application-to-application latency.

Application CPU consumption.

Network bandwidth.

Fabric management.

InfiniBand Capabilities

InfiniBand is a standards-based interconnect that enables:

High-bandwidth, 10 Gbps connectivity

Extremely low-latency Remote Direct Memory Access (RDMA)

HP-UX 11i v2 Networking Software manual