The following command runs a.out with four ranks, two ranks per node, ranks are block allocated, and two nodes are used:

$ mpirun -srun -n4 ./a.out
host1 rank1
host1 rank2
host2 rank3
host2 rank4

The following command runs a.out with six ranks (oversubscribed), three ranks per node, ranks are block allocated, and two nodes are used:

$ mpirun -srun -n6 -O -N2 -m block ./a.out
host1 rank1
host1 rank2
host1 rank3
host2 rank4
host2 rank5
host2 rank6

The following command runs a.out with six ranks (oversubscribed), three ranks per node, ranks are cyclically allocated, and two nodes are used:

$ mpirun -srun -n6 -O -N2 -m cyclic ./a.out
host1 rank1
host2 rank2
host1 rank3
host2 rank4
host1 rank5
host2 rank6

8.3.3.2 Creating Subshells and Launching Jobsteps

Another form of usage is to first allocate the nodes you wish to use, which creates a subshell. Jobsteps can then be launched within that subshell until the subshell is exited.

The following commands demonstrate how to create a subshell and launch jobsteps.

This command allocates six nodes and creates a subshell:

$ mpirun -srun -A -N6

This command launches four ranks, which are allocated cyclically across four nodes even though block allocation was requested:

$ mpirun -srun -n4 -m block ./a.out
host1 rank1
host2 rank2
host3 rank3
host4 rank4

This command launches four ranks on two nodes, block allocated. Note that this was forced to happen within the allocation by using oversubscription:

$ mpirun -srun -n4 -N2 -O -m block ./a.out
host1 rank1
host1 rank2
host2 rank3
host2 rank4

Requesting cyclic allocation when the number of ranks exceeds the number of nodes is not supported and fails:

$ mpirun -srun -n4 -N2 -O -m cyclic ./a.out
MPI_Init: cyclic node allocation not supported for ranks > # of nodes
MPI_Init: Cannot set srun startup protocol

8.3.3.3 System Interconnect Selection

This section provides examples of how to perform system interconnect selection.
