LSF-HPC node allocation (compute nodes). An LSF-HPC node allocation is created by the -n num-procs parameter, which specifies the number of cores the job requests. The num-procs parameter may be expressed as minprocs[,maxprocs], where minprocs specifies the minimum number of cores and the optional value maxprocs specifies the maximum number of cores. Refer to "Submitting a Non-MPI Parallel Job" for information about running jobs. Refer to "Submitting a Batch Job or Job Script" for information about running scripts.
bsub -n num-procs [bsub-options] srun [srun-options] jobname [job-arguments]

This is the bsub command format to submit a parallel job to an LSF-HPC node allocation (compute nodes). An LSF-HPC node allocation is created by the -n num-procs parameter, which specifies the minimum number of cores the job requests. The num-procs parameter may be expressed as minprocs[,maxprocs], where minprocs specifies the minimum number of cores and the optional value maxprocs specifies the maximum number of cores. An srun command is required to run jobs on an LSF-HPC node allocation. Refer to "Submitting a Non-MPI Parallel Job".
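For example, the following command line is an illustrative sketch of such a submission; it requests four cores and runs the hostname command as the job (the core count, the -I interactive option, and the choice of hostname as the jobname are assumptions for illustration only):

bsub -n 4 -I srun hostname

The -I option runs the job interactively so that its output is returned directly to the terminal.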

bsub -n num-procs [bsub-options] mpirun [mpirun-options] \
-srun [srun-options] mpi-jobname [job-options]

This is the bsub command format to submit an HP-MPI job. The -srun option is required. Refer to "Submitting a Parallel Job That Uses the HP-MPI Message Passing Interface".
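As an illustrative sketch, an HP-MPI program might be submitted on eight cores as follows; the program name ./hello_world and the output file name hello.out are hypothetical:

bsub -n 8 -o hello.out mpirun -srun ./hello_world

Here mpirun launches the hypothetical executable across the allocation through its -srun option, and the job's output is written to hello.out.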

bsub -n num-procs -ext "SLURM[slurm-arguments]" \
[bsub-options] [srun [srun-options]] jobname [job-options]

This is the bsub command format to submit a parallel job to an LSF-HPC node allocation (compute nodes) using the external scheduler option. The external scheduler option provides additional capabilities at the job level and queue level by allowing the inclusion of several SLURM options in the LSF-HPC command line. Refer to "LSF-SLURM External Scheduler".

LSF-SLURM External Scheduler

An important option that can be included when submitting parallel jobs with LSF-HPC is the external scheduler option. The external scheduler option provides application-specific external scheduling capabilities and enables the inclusion of several SLURM options in the LSF command line. For example, this option could be used to submit a job to run one task per node when you have a resource-intensive job that needs sole access to the full resources of a node. If your job needs particular resources found only on a specific set of nodes, this option could be used to submit the job to those specific nodes. Several options are available for use with the external scheduler; refer to the list in this section.

The format for the external scheduler is:

-ext "SLURM[slurm-arguments]"

The slurm-arguments value can consist of one or more of the following srun options, separated by semicolons:

SLURM Arguments and Functions

nodes=min[-max]
    Minimum and maximum number of nodes allocated to the job. The job allocation will contain at least the minimum number of nodes.

mincpus=<ncpus>
    Specify the minimum number of cores per node. The default value is 1.

mem=<value>
    Specify a minimum amount of real memory on each node.

tmp=<value>
    Specify a minimum amount of temporary disk space on each node.

constraint=<value>
    Specify a list of constraints. The list may include multiple features separated by "&" or "|"; "&" means the features are AND-ed, and "|" means they are OR-ed.

nodelist=<list of nodes>
    Request a specific list of nodes. The job allocation will at least contain these nodes. The list may be specified as a comma-separated list of nodes or as a range of nodes.

exclude=<list of nodes>
    Request that a specific list of nodes not be included in the resources allocated to this job. The list may be specified as a comma-separated list of nodes or as a range of nodes.

contiguous=yes
    Request a mandatory contiguous range of nodes.

When this option is added to an LSF command line, it looks like the following:

bsub -n num-procs -ext "SLURM[slurm-arguments]" [bsub-options] [srun [srun-options]] jobname [job-options]
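For instance, the following sketch requests an allocation of exactly four nodes with one core on each, so that the job runs one task per node; the core count, node count, and the choice of hostname as the jobname are assumptions for illustration:

bsub -n 4 -ext "SLURM[nodes=4]" -I srun hostname

Multiple SLURM arguments can be combined in one external scheduler option by separating them with semicolons, for example "SLURM[nodes=4;contiguous=yes]".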
