Example 8-1 shows how to perform system interconnect selection.

Example 8-1: Performing System Interconnect Selection

% export MPI_IC_ORDER="elan:TCP:gm:itapi"

% export MPIRUN_SYSTEM_OPTIONS="-subnet 192.168.1.1"

% export MPIRUN_OPTIONS="-prot"

% mpirun -srun -n4 ./a.out

With these environment variables set, the command line appears to mpirun as:

$ mpirun -subnet 192.168.1.1 -prot -srun -n4 ./a.out

The system interconnect decision looks for the presence of Elan and uses it if found. Otherwise, TCP/IP is used, and the communication path is on the subnet 192.168.1.*.
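Because MPI_IC_ORDER is searched from left to right, an alternative to the explicit -TCP option shown in the next example is to place TCP first in the list. A sketch, under the same subnet assumption as Example 8-1:

% export MPI_IC_ORDER="TCP:elan:gm:itapi"

% export MPIRUN_SYSTEM_OPTIONS="-subnet 192.168.1.1"

% mpirun -prot -srun -n4 ./a.out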

Example 8-2 illustrates using TCP/IP over Gigabit Ethernet, assuming Gigabit Ethernet is installed and 192.168.1.1 corresponds to the Gigabit Ethernet interface. Note that the implicit use of -subnet 192.168.1.1 (by way of MPIRUN_SYSTEM_OPTIONS) is required to get TCP/IP over the proper subnet.

Example 8-2: Using TCP/IP over Gigabit Ethernet

% export MPI_IC_ORDER="elan:TCP:gm:itapi"

% export MPIRUN_SYSTEM_OPTIONS="-subnet 192.168.1.1"

% mpirun -prot -TCP -srun -n4 ./a.out
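To confirm that 192.168.1.1 is in fact assigned to the Gigabit Ethernet interface before launching, the interface configuration can be inspected; a sketch, where eth1 is a hypothetical interface name:

% /sbin/ifconfig eth1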

Example 8-3 illustrates using TCP/IP over Elan4, assuming Elan4 is installed and configured. The subnet information is omitted, and TCP/IP is explicitly requested by means of -TCP.

Example 8-3: Using TCP/IP over Elan4

% export MPI_IC_ORDER="elan:TCP:gm:itapi"

% export MPIRUN_SYSTEM_OPTIONS=" "

% $MPI_ROOT/bin/mpirun -prot -TCP -srun -n4 ./a.out

In this case, the protocol map reported by -prot shows that TCP is being used, but it is TCP running over the Elan4 interconnect.

8.3.4 Using LSF and HP-MPI

HP-MPI jobs can be submitted using LSF. LSF uses the SLURM srun launching mechanism; because of this, HP-MPI jobs must specify the -srun option when LSF is used. This section provides a brief overview of using LSF with HP-MPI in the HP XC environment.

A full description of using LSF with HP XC is provided in Chapter 7. In addition, for your convenience, the HP XC documentation CD contains HP XC LSF manuals from Platform Computing.

In Example 8-4, LSF is used to create an allocation of two processors, and -srun is used to attach to it.

Example 8-4: Allocating and Attaching Processors

$ bsub -I -n2 $MPI_ROOT/bin/mpirun -srun ./a.out
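The -I option requests an interactive job. For a batch submission, output can be directed to a file instead; a sketch, where out.%J is a hypothetical file name (LSF expands %J to the job ID):

$ bsub -o out.%J -n2 $MPI_ROOT/bin/mpirun -srun ./a.out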

In Example 8-5, LSF creates an allocation of twelve processors and -srun uses one CPU per node (six nodes). The example assumes two CPUs per node.
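A sketch of such a command, assuming the srun options -n6 (six tasks) and -N6 (six nodes) produce the one-CPU-per-node placement described:

$ bsub -I -n12 $MPI_ROOT/bin/mpirun -srun -n6 -N6 ./a.out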
