LSF integrated with SLURM runs its daemons on only one node within the HP XC system. This node hosts the HP XC LSF Alias, an IP address and corresponding host name established specifically for use by LSF integrated with SLURM on HP XC. Within LSF, the HP XC system is known by this HP XC LSF Alias.

Various LSF commands, such as lsid, lshosts, and bhosts, display the HP XC LSF Alias in their output. The default value of the HP XC LSF Alias, lsfhost.localdomain, is shown in the following examples:

$ lsid
Platform LSF HPC version, Update n, build date stamp
Copyright 1992-2008 Platform Computing Corporation
My cluster name is hptclsf
My master name is lsfhost.localdomain

$ lshosts
HOST_NAME   type    model    cpuf ncpus maxmem maxswp server RESOURCES
lsfhost.loc SLINUX6 Opteron8 60.0     8  2007M      -    Yes  (slurm)

 

$ bhosts
HOST_NAME          STATUS JL/U MAX NJOBS RUN SSUSP USUSP RSV
lsfhost.localdomai ok        -   8     0   0     0     0   0

All HP XC nodes are dynamically configured as “LSF Floating Client Hosts,” so you can execute LSF commands from any HP XC node. When you execute an LSF command from an HP XC node, an entry in the lshosts command output confirms that the node is licensed to run LSF commands.

In the following example, node n15 is configured as an LSF Client Host, not as the LSF execution host. This is shown in the output when the lshosts command is run on that node: the values for type and model are UNKNOWN, and the value for server is No.

$ lshosts
HOST_NAME   type    model    cpuf ncpus maxmem maxswp server RESOURCES
lsfhost.loc SLINUX6 Opteron8 60.0     8  2007M      -    Yes  (slurm)

$ ssh n15 lshosts
HOST_NAME   type    model    cpuf ncpus maxmem maxswp server RESOURCES
lsfhost.loc SLINUX6 Opteron8 60.0     8  2007M      -    Yes  (slurm)
n15         UNKNOWN UNKNOWN_  1.0     -      -      -     No   ()

Job-level run-time limits enforced by LSF integrated with SLURM are not supported.

LSF integrated with SLURM does not support parallel or SLURM-based interactive jobs in PTY mode (bsub -Is and bsub -Ip). However, after LSF dispatches a user job on the HP XC system, you can use the srun or ssh command to access the job's allocated resources directly. For more information, see “Working Interactively Within an Allocation”.
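As an illustrative sketch only (the job script, node name, and SLURM job ID below are hypothetical, not taken from this document), a session might submit a batch job through LSF and then attach to the resulting allocation directly:

$ bsub -n 4 -o myjob.out ./myscript.sh     # submit a 4-slot parallel job; LSF creates a SLURM allocation
$ bjobs -l                                  # inspect the dispatched job to find its SLURM job ID
$ srun --jobid=12345 hostname               # run a command on the nodes of that existing allocation
$ ssh n10                                   # or log in to one of the allocated nodes directly

The srun --jobid option runs a command within an existing SLURM allocation rather than creating a new one, which is what makes this kind of direct access possible after LSF has dispatched the job.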

LSF integrated with SLURM does not support user-account mapping and system-account mapping.

LSF integrated with SLURM does not support chunk jobs. If a job is submitted to a chunk queue, the job pends.

LSF integrated with SLURM does not support topology-aware advanced reservation scheduling.

10.4 Job Terminology

The following terms are used to describe jobs submitted to LSF integrated with SLURM:

Batch job
    A job submitted to LSF or SLURM that runs without any I/O connection back to the terminal from which the job was submitted. This job may run immediately, or it may run
