HP Scalable Visualization Array (SVA) Software Manual


Assumptions and Goal

This example assumes you have a visualization application that currently runs on a single workstation. It also assumes that you have not specifically modified it to take advantage of the parallel features of a cluster.

This example also assumes that your goal is to run the application on the SVA and to take advantage of the multi-tile capabilities of the cluster.

Chromium Overview and Usage Notes

Chromium lets many programs that use the standard OpenGL API take advantage of cluster technology by automatically distributing OpenGL rendering. Chromium provides a common parallel graphics programming interface for clusters such as the SVA. In addition, it enables many existing applications to display across multiple tiles without modification.

Chromium provides the following features:

A method for synchronizing parallel graphics commands.

A streaming graphics pipeline based on the industry-standard OpenGL API.

Support for multiple physical display devices clustered together, such as powerwall displays.

Support for aggregation of the output of multiple graphics cards to drive a single display at higher levels of performance and capability.

Chromium is automatically installed and configured on the SVA. Several aspects of this configuration are of interest to application developers:

Autostart is not used.

CR-Servers and CR Mothership are launched by the SVA launch script. See “Launch Script” (pg. 42).

Tile information is taken from the SVA Configuration Data Files, which eliminates the need to hard code this information in the Chromium configuration files.

Chromium uses tilesort and TCP/IP over the SI for DMX and Chromium connections.

There is a ten-second delay between the launch of the Mothership and the launch of the clients, which adds briefly to startup time.

Although Chromium has several configuration files that you typically need to edit, the SVA launch script eliminates this need by using configuration data from the SVA Configuration Data Files.
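For orientation, the steps that the launch script performs can be sketched as a dry run. The configuration name and the use of a fixed delay are assumptions for illustration; Chromium mothership configurations are Python scripts, and the application to run is named inside the configuration rather than on the command line.

```shell
CONF=tilesort.conf   # hypothetical mothership configuration (a Python script)

# Dry run: print each step instead of executing it; drop the
# run() wrapper to launch a session by hand.
run() { echo "$@"; }

run python "$CONF" "&"   # 1. start the CR Mothership
run crserver "&"         # 2. start a CR-Server on each display node
run sleep 10             # 3. give the Mothership time to come up
run crappfaker           # 4. start the OpenGL faker, which runs the
                         #    application named in the configuration
```

On the SVA you never need to issue these commands yourself; the launch script performs the equivalent sequence using the SVA Configuration Data Files.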

A link to the Chromium documentation is available from the SVA Documentation Library.

Distributed Multi-Head X (DMX)

Xdmx is a proxy X Server that provides multi-head support across displays attached to different machines (each of which runs a typical X Server). A simple application of Xdmx provides multi-head support using two desktop machines, each with a single display device attached. A complex application of Xdmx unifies a four-by-four grid of 1280x1024 displays, each attached to one of 16 computers, into a single 5120x4096 display.

The front-end proxy X Server removes the limit on the number of physical display devices that can coexist in a single machine (for example, the limit imposed by the number of PCI-Express slots available for graphics cards). Thus, large tiled displays are possible.
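As a concrete sketch of the simple two-machine case above: each back-end X server is passed to Xdmx with a -display option, and +xinerama presents them as one logical screen. The host names here are hypothetical, and the command is only printed (a dry run) rather than executed.

```shell
# Hypothetical back-end hosts, each running a normal X server on :0.
hosts="tile0 tile1"

# Assemble one -display option per back-end X server.
args=""
for h in $hosts; do
    args="$args -display ${h}:0"
done

# Dry run: print the front-end proxy command instead of starting it.
# Remove the echo to start the unified display on :1.
echo Xdmx :1$args +xinerama
```

On the SVA, the launch script constructs the equivalent Xdmx invocation from the SVA Configuration Data Files, so the tile layout never needs to be typed by hand.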

A link to the DMX documentation is available from the SVA Documentation Library.

Location for Application Execution and Control

Although an application can run on any node in the SVA, HP recommends running it on one of the display nodes. When you launch a visualization job, the SVA uses the default Execution Host for the Display Surface you choose; the Execution Host for a Display Surface is the default location for running an application. You can find the default Execution Host by reading the value of the SVA_EXECUTION_HOST tag in the Site Configuration File, /opt/sva/etc/sva.conf.
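The tag can be pulled out of the file with standard tools. A minimal sketch follows, using a sample fragment in place of the real /opt/sva/etc/sva.conf; the tag=value layout and the host name shown are assumptions for illustration.

```shell
# Sample fragment standing in for /opt/sva/etc/sva.conf;
# the tag=value layout is assumed for illustration.
cat > sva.conf.sample <<'EOF'
SVA_DISPLAY_SURFACE=wall0
SVA_EXECUTION_HOST=dnode1
EOF

# Print the default Execution Host value.
grep '^SVA_EXECUTION_HOST' sva.conf.sample | cut -d= -f2
```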

Each instance of a named Display Surface in the Site Configuration File has an associated default Execution Host. You can override the default by setting the SVA_EXECUTION_HOST tag in your User Configuration File.
