The output for this command could also have been 1 core on each of 4 compute nodes in the SLURM allocation.

Submitting a Non-MPI Parallel Job

Use the following format of the LSF bsub command to submit a parallel job that does not make use of HP-MPI:

bsub -n num-procs [bsub-options] srun [srun-options] jobname [job-options]

The bsub command submits the job to LSF-HPC.

The -n num-procs parameter, which is required for parallel jobs, specifies the number of cores requested for the job.

The SLURM srun command is the user job launched by the LSF bsub command. SLURM launches the jobname in parallel on the reserved cores in the lsf partition.

The jobname parameter is the name of an executable file or command to be run in parallel.

Example 5-5 illustrates a non-MPI parallel job submission. The job output shows that the job “srun hostname” was launched from the LSF execution host lsfhost.localdomain, and that it ran on 4 cores from the compute nodes n1 and n2.

Example 5-5 Submitting a Non-MPI Parallel Job

$ bsub -n4 -I srun hostname
Job <21> is submitted to default queue <normal>
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n1
n1
n2
n2

You can use the LSF-SLURM external scheduler to specify additional SLURM options on the command line. As shown in Example 5-6, it can be used to submit a job to run one task per compute node (on SMP nodes):

Example 5-6 Submitting a Non-MPI Parallel Job to Run One Task per Node

$ bsub -n4 -ext "SLURM[nodes=4]" -I srun hostname
Job <22> is submitted to default queue <normal>
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
n1
n2
n3
n4

Submitting a Parallel Job That Uses the HP-MPI Message Passing Interface

Use the following format of the LSF bsub command to submit a parallel job that makes use of HP-MPI:

bsub -n num-procs [bsub-options] mpijob

The bsub command submits the job to LSF-HPC.

The -n num-procs parameter, which is required for parallel jobs, specifies the number of cores requested for the job.

The mpijob argument has the following format:

mpirun [mpirun-options] [-srun [srun-options]] mpi-jobname

See the mpirun(1) manpage for more information on this command.
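As an illustrative sketch (the hello_world executable and its path are hypothetical, not taken from this document), an HP-MPI submission requesting 4 cores might look like the following:

$ bsub -n4 -I mpirun -srun ./hello_world    # hello_world is a hypothetical MPI program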

The mpirun command's -srun option is required if the MPI_USESRUN environment variable is not set, or if you want to use additional srun options to execute your job. The srun command, used by the mpirun command to launch the MPI tasks in parallel in the lsf partition, determines the number of tasks to launch from the SLURM_NPROCS environment variable that was set by LSF-HPC; this environment variable is equivalent to the number of cores requested with the bsub command's -n num-procs option.
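For example, assuming MPI_USESRUN is set in your environment (the value shown and the hello_world executable are illustrative assumptions), the -srun option can be omitted:

$ export MPI_USESRUN=1
$ bsub -n4 -I mpirun ./hello_world    # -srun is implied because MPI_USESRUN is set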
