Assumptions and Goal

This example assumes that you have a visualization application that currently runs on a single workstation, and that you have not specifically modified it to take advantage of the parallel features of a cluster.

This example also assumes that your goal is to run the application on the SVA and to take advantage of the multi-tile capabilities of the cluster.

Chromium Overview and Usage Notes

Chromium enables many programs that use the OpenGL standard to take advantage of cluster technology by automatically distributing their OpenGL command streams. Chromium provides a common parallel graphics programming interface to support clusters such as the SVA. In addition, it enables many existing applications to display on multiple tiles without modification.

Chromium provides the following features:

• A method for synchronizing parallel graphics commands.

• A streaming graphics pipeline based on the industry-standard OpenGL API.

• Support for multiple physical display devices clustered together, such as powerwall displays.

• Support for aggregating the output of multiple graphics cards to drive a single display at higher levels of performance and capability.

Chromium is installed and configured automatically on the SVA; several aspects of that configuration are of interest to application developers:

• Autostart is not used.

• The CR servers and the CR mothership are launched by the SVA launch script. See “Launch Script” (pg. 42).

• Tile information is taken from the SVA Configuration Data Files, which eliminates the need to hard-code this information in the Chromium configuration files.

• Chromium uses tilesort and TCP/IP over the SI for DMX and Chromium connections.

• There is a ten-second delay between the launch of the mothership and the launch of the clients, which adds briefly to application startup time.

Although Chromium has several configuration files that you would typically need to edit, the SVA launch script eliminates this need by using configuration data from the SVA Configuration Data Files. For orientation, the sketch below shows the kind of hand-written configuration the launch script makes unnecessary.
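A Chromium mothership configuration is an ordinary Python script. The following minimal sketch configures a tilesort SPU that splits an application's OpenGL stream across two render servers. The host names (display1, display2), the tile geometry, and the application name are illustrative assumptions, not SVA defaults.

    # Minimal tilesort sketch (illustrative only; on the SVA, the launch
    # script builds the equivalent configuration from the SVA Configuration
    # Data Files). Assumes Chromium's mothership module is on the Python path.
    from mothership import *

    cr = CR()

    # A tilesort SPU on the application node splits the OpenGL stream by tile.
    tilesortspu = SPU('tilesort')
    appnode = CRApplicationNode()
    appnode.AddSPU(tilesortspu)
    appnode.SetApplication('my_gl_app')    # hypothetical application name

    # Two 1280x1024 tiles side by side, each rendered by its own node.
    for i, host in enumerate(['display1', 'display2']):   # hypothetical hosts
        renderspu = SPU('render')
        servernode = CRNetworkNode(host)
        servernode.AddTile(i * 1280, 0, 1280, 1024)
        servernode.AddSPU(renderspu)
        cr.AddNode(servernode)
        tilesortspu.AddServer(servernode, protocol='tcpip')

    cr.AddNode(appnode)
    cr.Go()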

A link to the Chromium documentation is available from the SVA Documentation Library.

Distributed Multi-Head X (DMX)

Xdmx is a proxy X Server that provides multi-head support across displays attached to different machines (each of which runs a typical X Server). A simple application of Xdmx provides multi-head support using two desktop machines, each with a single display device attached. A more complex application combines a four-by-four grid of 1280x1024 displays, each attached to one of 16 computers, into a single 5120x4096 display.

The front-end proxy X Server removes the limit on the number of physical display devices that can coexist in a single machine (imposed, for example, by the number of PCI-Express slots available for graphics cards). Large tiled displays thus become possible.
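As a concrete illustration, the following sketch launches Xdmx as a front-end proxy that unifies two back-end X Servers into one logical display. The host names and display numbers are assumptions for illustration; on the SVA, DMX is set up for you by the launch script.

    # Illustrative sketch: start Xdmx as a proxy X server that unifies two
    # back-end displays into one logical screen (host names are hypothetical).
    import subprocess

    backends = ['tile1:0', 'tile2:0']    # one back-end X server per display
    cmd = ['Xdmx', ':1', '+xinerama']    # ':1' is the new unified display
    for display in backends:
        cmd += ['-display', display]

    xdmx = subprocess.Popen(cmd)
    # X clients started with DISPLAY=:1 now see a single large screen that
    # spans both back-end displays.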

A link to the DMX documentation is available from the SVA Documentation Library.

Location for Application Execution and Control

Although an application can run on any node in the SVA, HP recommends that you run it on one of the display nodes. Each Display Surface has a default Execution Host, which is the default location for running an application; the SVA uses the Execution Host of the Display Surface you choose when launching the visualization job. You can find the default Execution Host by reading the value of the SVA_EXECUTION_HOST tag in the Site Configuration File, /opt/sva/etc/sva.conf.

Each instance of a named Display Surface in the Site Configuration File has an associated default Execution Host. You can override the default by setting the SVA_EXECUTION_HOST tag in your User Configuration File.
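If you need to look up the default Execution Host from a script, the following sketch shows one possible approach. It assumes the Site Configuration File stores tags as simple TAG=value lines; verify the actual file format on your system, as this layout is an assumption.

    # Sketch: read the SVA_EXECUTION_HOST tag from the Site Configuration
    # File. Assumes a simple TAG=value line format (an assumption, not a
    # documented format).
    def read_tag(path, tag):
        with open(path) as conf:
            for line in conf:
                line = line.strip()
                if line.startswith(tag + '='):
                    return line.split('=', 1)[1].strip()
        return None

    host = read_tag('/opt/sva/etc/sva.conf', 'SVA_EXECUTION_HOST')
    print('Default Execution Host:', host)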
