To illustrate how the external scheduler is used to launch an application, consider the following command line, which launches an application on ten nodes with one task per node:
$ bsub -n 10 -ext "SLURM[nodes=10]" srun myapp
The following command line launches the same application, also on ten nodes, but stipulates that node n16 should not be used:
$ bsub -n 10 -ext "SLURM[nodes=10;exclude=n16]" srun myapp
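After the job is submitted, you can verify where it is running. As a minimal illustration (the job ID 1234 is hypothetical), the bjobs command with the -l option displays detailed information about the job, including the host on which it executes:

$ bjobs -l 1234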
7.1.3 Notes on LSF-HPC
The following are noteworthy items for users of LSF-HPC on an HP XC system:
•You must run jobs as a non-root user.
•A SLURM partition named lsf is used to manage LSF jobs. You can view information about this partition with the sinfo command.
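For example, the following command restricts the sinfo report to the lsf partition (the node count, node names, and state shown are illustrative and depend on your system's configuration):

$ sinfo -p lsf
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
lsf          up   infinite     10   idle n[1-10]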
•LSF daemons run on only one node in the HP XC system. As a result, the lshosts and bhosts commands list only one host, which represents all the resources of the HP XC system. The total number of CPUs for that host should be equal to the total number of CPUs in the nodes assigned to the SLURM lsf partition.
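As an illustrative sketch of what this looks like (the host name and resource counts shown are hypothetical):

$ bhosts
HOST_NAME           STATUS   JL/U  MAX  NJOBS  RUN  SSUSP  USUSP  RSV
lsfhost.localdomain ok          -   20      0    0      0      0    0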
•When a job is submitted and the resources are available, LSF-HPC creates a SLURM allocation for the job and adds the following environment variables to the job's environment:
SLURM_JOBID     This environment variable is created so that subsequent
                srun commands make use of the SLURM allocation created by
                LSF-HPC for the job. You can use this value to query
                information about the SLURM allocation, as shown here:

                $ squeue -j $SLURM_JOBID

SLURM_NPROCS    This environment variable passes along the total number of
                tasks requested with the bsub -n option to subsequent
                srun commands. User scripts can override this value with
                the srun -n option, but the new value must be less than or
                equal to the original number of requested tasks.
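To illustrate how a job script might use these variables, here is a minimal sketch (the script contents, the application name ./myapp, and the task count in the final srun command are hypothetical):

#!/bin/sh
# Runs under LSF-HPC, so SLURM_JOBID and SLURM_NPROCS are already set.
echo "SLURM allocation ID:   $SLURM_JOBID"
echo "Total tasks requested: $SLURM_NPROCS"
# This srun inherits the allocation and launches all requested tasks.
srun ./myapp
# This srun overrides the task count; the new value (2) must be less
# than or equal to $SLURM_NPROCS.
srun -n 2 ./myapp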
•A job starter script can be configured for each LSF queue; LSF-HPC uses it to launch the user job on the allocated compute nodes with srun.
If this job starter script is not configured for a queue, the user jobs begin execution locally on the LSF execution host node rather than on the compute nodes allocated by SLURM.
The bqueues command with the -l option displays the job starter script, if any, that is configured for a queue.
For example, consider an LSF queue for which no job starter script is configured: jobs submitted to that queue run locally on the LSF execution host node rather than on compute nodes.
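As an illustrative check (the queue name normal and the script path shown are hypothetical), the JOB_STARTER field in the bqueues -l output identifies the configured job starter script:

$ bqueues -l normal | grep JOB_STARTER
JOB_STARTER:  /opt/lsf/bin/job_starter.sh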