Components of the HP Cluster Platform

Because the SVA is an extension of the HP Cluster Platform, it helps to first understand the platform's base components without any visualization nodes. The key architectural components of an HP Cluster Platform system without visualization nodes are as follows:

Compute Nodes and Administrative/Service Nodes
The compute cluster consists of compute nodes and administrative or service nodes. Parallel applications are allocated exclusive use of the compute nodes on which they run. The other nodes provide administration, software installation, remote login, file I/O, external network access, and so on. These nodes are shared by multiple jobs and are not allocated to individual jobs.

System Interconnect (SI)
A high-bandwidth, low-latency network that connects all nodes. The SI supports communication among the compute nodes (for example, MPI and sockets) and file I/O between compute nodes and a shared file system.
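The text mentions sockets (alongside MPI) as one form of node-to-node communication over the SI. The following is a minimal sketch of that kind of traffic: one node sends a message to another over a TCP socket. Localhost addressing and the message text are illustrative assumptions; a real SVA job would address its peers over the System Interconnect and would typically use MPI rather than raw sockets.

```python
import socket
import threading

def receive_one_message(listener, out):
    """Accept one connection and store the decoded message it carries."""
    conn, _ = listener.accept()
    with conn:
        out.append(conn.recv(1024).decode())

def demo():
    # Localhost stands in for a peer node on the SI (an assumption for
    # illustration); port 0 lets the OS pick a free ephemeral port.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))
    listener.listen(1)
    port = listener.getsockname()[1]

    out = []
    t = threading.Thread(target=receive_one_message, args=(listener, out))
    t.start()

    # The "sending node" connects and transmits a status message,
    # e.g. a render node reporting a finished tile.
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"tile 0 composited")
    t.join()
    listener.close()
    return out[0]
```

Running `demo()` returns the message as received on the listening side, showing the round trip a real inter-node message would make across the interconnect.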

Administrative Network
An Administrative Network connects all nodes in the cluster. In an HP XC compute cluster, it consists of two branches, the Administrative Network and the Console Network. This private local Ethernet network runs TCP/IP. The Administrative Network is Gigabit Ethernet (GigE); the Console Network is 10/100BaseT. (Because visualization nodes do not support console functions, they are not connected to a console branch.)

Linux
The nodes of the cluster run a derivative of 64-bit Red Hat® Enterprise Linux Advanced Server.

Note

All nodes must attach to two networks using different ports, one for the SI and one for the Administrative Network.
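The dual attachment described in the note can be sketched as a node opening separate sockets, each bound to the local address of one network. Both addresses below are stand-ins (assumptions for illustration); on a real node they would be the addresses of two different physical ports, one on the SI and one on the Administrative Network.

```python
import socket

SI_ADDR = "127.0.0.1"      # assumed stand-in for the node's SI address
ADMIN_ADDR = "127.0.0.1"   # assumed stand-in for its Administrative Network address

def bound_socket(local_addr):
    """Create a TCP socket whose traffic originates from the given local interface."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((local_addr, 0))  # port 0: let the OS choose an ephemeral port
    return s

si_sock = bound_socket(SI_ADDR)        # would carry MPI and file-I/O traffic
admin_sock = bound_socket(ADMIN_ADDR)  # would carry administration traffic
```

Binding each socket to a specific local address is what keeps the two classes of traffic on their intended networks.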

Main Visualization Cluster Tasks

The SVA performs a number of tasks that are unique to a visualization cluster. It accomplishes them using a set of node types whose hardware configurations differ, making each suited to different functions. The main tasks are as follows:

Render images.
A node must have a graphics card to render images. A visualization job uses multiple nodes to render image data in parallel. A render node typically communicates over the SI with other render and display nodes to composite and display images.

Display images.
The final output of a visualization application is a complete displayed image, the result of the parallel rendering that takes place during an application job. To make this possible, a display node must contain a graphics card connected to a display device. The display can show images integrated with the application user interface, or full-screen images. The output can be a complete display or one tile of an aggregate display.
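The mapping from one tile to its region of an aggregate display can be sketched as a small geometry calculation. The 2x2 layout and 1280x1024 tile size used below are assumptions chosen for illustration, not SVA defaults.

```python
def tile_viewports(cols, rows, tile_w, tile_h):
    """Return (tile_index, x_offset, y_offset) for each display node's tile,
    numbering tiles left to right, top to bottom across the aggregate display."""
    return [
        (row * cols + col, col * tile_w, row * tile_h)
        for row in range(rows)
        for col in range(cols)
    ]
```

For a 2x2 wall of 1280x1024 tiles, `tile_viewports(2, 2, 1280, 1024)` assigns tile 3 (bottom right) the region starting at offset (1280, 1024) within the 2560x2048 aggregate image.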

Remote images.
The SVA also supports transmitting a complete image over an external network to a system outside the cluster for remote viewing; for example, to an office workstation outside the lab. A node with a port connected to the external network is recommended. Alternatively, you can reach the external network by routing through another cluster node that has such a port.

Integrate an application user interface.
An application user interface (UI) usually runs on a cluster node. The UI typically controls the parts of the distributed application running on other nodes. A node that provides users with access to the UI can have an attached keyboard, mouse, and monitor for user interaction. Alternatively, the node can export the application UI to an external node using the X protocol or the HP Remote Graphics Software.
