comfortable interactive session, but every job submitted to this queue is executed on the LSF execution host instead of the first allocated node.

Example 7-23 shows this subtle difference. Note that the LSF execution host in this example is n20:

Example 7-23: Submitting an Interactive Shell Program on the LSF Execution Host

$ bsub -Is -n4 -ext "SLURM[nodes=4]" -q noscript /bin/bash
Job <96> is submitted to default queue <noscript>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
$ hostname
n20
$ srun hostname
n1
n1
n2
n2
$ exit
$

7.7 LSF Equivalents of SLURM srun Options

Table 7-2 describes the srun options and lists their LSF equivalents.

Table 7-2: LSF Equivalents of SLURM srun Options

srun Option: -n, --ntasks=ntasks
Description: Number of processes (tasks) to run.
LSF Equivalent: bsub -n <num>

srun Option: -c, --cpus-per-task=ncpus
Description: Number of CPUs per task. Min CPUs per node = MAX(ncpus, mincpus).
LSF Equivalent: HP XC does not provide this option because its meaning is covered by "bsub -n" and "mincpus=n".

srun Option: -N, --nodes=min[-max]
Description: Minimum and maximum number of nodes allocated to the job. The job allocation contains at least the minimum number of nodes.
LSF Equivalent: -ext "SLURM[nodes=min[-max]]"

srun Option: --mincpus=n
Description: Specify the minimum number of CPUs per node. Min CPUs per node = MAX(-c ncpus, --mincpus=n). The default value is 1.
LSF Equivalent: -ext "SLURM[mincpus=n]"

srun Option: --mem=MB
Description: Specify a minimum amount of real memory on each node. By default, the job does not require it.
LSF Equivalent: -ext "SLURM[mem=MB]"

srun Option: --tmp=MB
Description: Specify a minimum amount of temporary disk space on each node. By default, the job does not require it.
LSF Equivalent: -ext "SLURM[tmp=MB]"

srun Option: -C, --constraint=list
Description: Specify a list of constraints. The list may include multiple features separated by "&" or "|"; "&" represents AND, "|" represents OR. By default, the job does not require it.
LSF Equivalent: -ext "SLURM[constraint=list]"
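As an illustration of how the equivalents in Table 7-2 combine, the following sketch builds (but does not submit) the LSF command line corresponding to a hypothetical srun request for 8 tasks on 4 nodes with at least 2 CPUs and 1024 MB of memory per node. The resource values and the job command (srun hostname) are invented for illustration, and the semicolon-separated "SLURM[...]" syntax is an assumption that should be checked against your LSF version:

```shell
# Hypothetical srun request: srun -n 8 -N 4 --mincpus=2 --mem=1024 hostname
# Each variable below maps one srun option to its LSF equivalent from Table 7-2.
NTASKS=8    # srun -n / --ntasks   -> bsub -n
NODES=4     # srun -N / --nodes    -> -ext "SLURM[nodes=...]"
MINCPUS=2   # srun --mincpus       -> -ext "SLURM[mincpus=...]"
MEM=1024    # srun --mem (MB)      -> -ext "SLURM[mem=...]"

# Echo the assembled bsub command line so the translation can be inspected
# on a machine without an LSF installation; remove the echo to submit.
echo "bsub -I -n ${NTASKS} -ext \"SLURM[nodes=${NODES};mincpus=${MINCPUS};mem=${MEM}]\" srun hostname"
```

Running the sketch prints a single bsub command line in which all per-node requirements are collected inside one -ext "SLURM[...]" string.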

 

Using LSF 7-23