2.3.5.2 Submitting a Non-MPI Parallel Job

Submitting non-MPI parallel jobs is discussed in detail in Section 7.4.4. The LSF bsub command format to submit a simple non-MPI parallel job is:

bsub -n num-procs [bsub-options] srun [srun-options] executable [executable-options]

The bsub command submits the job to LSF-HPC.

The -n num-procs parameter specifies the number of processors requested for the job. This parameter is required for parallel jobs.

The SLURM srun command must be included in the LSF-HPC command line; it distributes the tasks across the allocated compute nodes in the LSF partition.

The executable parameter is the name of an executable file or command.
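For illustration, a command assembled from this format might look like the following sketch. The application name myapp, its --verbose option, and the output file name are placeholders, not names defined by HP XC; -o is the standard bsub output-file option and --label is the standard srun option that prefixes each output line with its task number:

$ bsub -n8 -o myjob.out srun --label myapp --verbose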

Consider an HP XC configuration where lsfhost.localdomain is the LSF-HPC execution host and nodes n[1-10] are compute nodes in the SLURM lsf partition. All nodes contain two processors, providing 20 processors for use by LSF-HPC jobs. The following example shows one way to submit a non-MPI parallel job on this system:

Example 2-2: Submitting a Non-MPI Parallel Job

$ bsub -n4 -I srun hostname

Job <21> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n1
n1
n2
n2

In the above example, the job output shows that the job “srun hostname” was launched from the LSF execution host lsfhost.localdomain, and that it ran on four processors across the allocated nodes n1 and n2.

Refer to Section 7.4.4 for an explanation of the options used in this command, and for full information about submitting a parallel job.

Using SLURM Options with the LSF External Scheduler

An important option when submitting parallel jobs is LSF-HPC’s external scheduler option. The LSF-HPC external SLURM scheduler provides additional capabilities at the job and queue levels by allowing the inclusion of several SLURM options in the LSF-HPC command line. For example, it can be used to submit a job to run one task per node, or to submit a job to run on only specified nodes.

The format for this option is:

-ext"SLURM[slurm-arguments]"

The slurm-arguments can consist of one or more srun allocation options (in long format).
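For example, the srun long-format allocation options --nodes and --nodelist (illustrative choices; see the references below for the supported set) would be passed inside the brackets without the leading dashes:

-ext "SLURM[nodes=4]"

-ext "SLURM[nodelist=n1,n2]"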

Refer to Section 7.4.2 for additional information about using the LSF-HPC external scheduler. The Platform Computing LSF documentation provides more information on general external scheduler support. Also see the lsf_diff(1) manpage for information on the specific srun options available with the external SLURM scheduler.

The following example uses the external SLURM scheduler to submit one task per node (on SMP nodes):
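One possible form of such a command is sketched below; the nodes=4 argument is an illustrative choice that forces the four requested processors onto four different nodes, so that each node runs a single task (actual job numbers and node names depend on the system):

$ bsub -n4 -ext "SLURM[nodes=4]" -I srun hostname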
