Preemption

LSF-HPC uses the SLURM "node share" feature to facilitate preemption. When a low-priority job is preempted, its processes are suspended on the allocated nodes, and LSF-HPC places the high-priority job on the same nodes. After the high-priority job completes, LSF-HPC resumes the suspended low-priority jobs.
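A quick way to see whether any jobs have been preempted is to look for the SSUSP (system-suspended) state in bjobs output. The following shell sketch parses a canned sample in place of a live bjobs call; the job names, queue names, and host names are hypothetical:

```shell
# Sample bjobs output standing in for a live "bjobs -u all" call.
# Job 102 has been suspended by the system (SSUSP), i.e. preempted.
bjobs_output='JOBID   USER    STAT  QUEUE      FROM_HOST   EXEC_HOST   JOB_NAME   SUBMIT_TIME
101     lsfadm  RUN   priority   n16         lsfhost     hijob      Dec  1 10:00
102     user1   SSUSP normal     n16         lsfhost     lowjob     Dec  1 09:00'

# Count jobs in the SSUSP state (skip the header line).
n_susp=$(echo "$bjobs_output" | awk 'NR > 1 && $3 == "SSUSP" { n++ } END { print n+0 }')
echo "suspended jobs: $n_susp"
```

On a live system, the same awk filter applied to real bjobs output counts the low-priority jobs currently suspended by preemption.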

Determining the LSF Execution Host

The lsid command displays the name of the HP XC system, and the name of the LSF execution host, along with some general LSF-HPC information.

$ lsid
Platform LSF HPC 6.1 for SLURM, date and time stamp
Copyright 1992-2005 Platform Computing Corporation
My cluster name is hptclsf
My master name is lsfhost.localdomain

In this example, hptclsf is the LSF cluster name (where the user is logged in and which contains the compute nodes), and lsfhost.localdomain is the virtual IP name of the node where LSF-HPC is installed and running (the LSF execution host).
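If a script needs the cluster name or execution host name, they can be pulled out of the lsid output shown above. This is a sketch that parses a canned copy of that output in place of a live lsid call:

```shell
# The lsid output from the example above, stored for parsing.
lsid_output='Platform LSF HPC 6.1 for SLURM, date and time stamp
Copyright 1992-2005 Platform Computing Corporation
My cluster name is hptclsf
My master name is lsfhost.localdomain'

# The cluster and master names are the last field of their respective lines.
cluster=$(echo "$lsid_output" | awk '/^My cluster name is/ { print $NF }')
master=$(echo "$lsid_output" | awk '/^My master name is/ { print $NF }')
echo "cluster: $cluster, master: $master"
```

On a live system, replace the canned string with `lsid_output=$(lsid)`.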

Determining Available LSF-HPC System Resources

For best use of system resources when launching an application, it is useful to know beforehand what system resources are available for your use. This section describes how to obtain information about system resources such as the number of cores available, LSF execution host node information, and LSF-HPC system queues.

Getting the Status of LSF-HPC

The bhosts command displays LSF-HPC resource usage information and is useful for examining the status of the system cores. It provides a summary of the jobs on the system and information about the current state of LSF-HPC; for example, you can use it to determine whether LSF-HPC is ready to start accepting batch jobs.

LSF-HPC daemons run on only one node in the HP XC system, so the bhosts command lists a single host, which represents all the resources of the HP XC system. The total number of cores for that host should equal the total number of cores assigned to the SLURM lsf partition.

By default, this command returns the host name, host status, and job state statistics.

The following example shows the output from the bhosts command:

$ bhosts
HOST_NAME           STATUS   JL/U  MAX  NJOBS  RUN  SSUSP  USUSP  RSV
lsfhost.localdomain ok       -     16   0      0    0      0      0

Of note in the bhosts output:

The HOST_NAME column displays the name of the LSF execution host.

The MAX column displays the total core count (usable cores) of all available compute nodes in the lsf partition.

The STATUS column shows the state of LSF-HPC and displays a status of either ok or closed.

The NJOBS column displays the number of jobs. Note that in LSF terminology, a parallel job with 10 tasks counts as 10 jobs.
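The check that the bhosts MAX value matches the SLURM lsf partition can be scripted. The following sketch compares the two counts using canned sample outputs in place of live bhosts and sinfo calls (the `sinfo -p lsf -h -o %C` form prints allocated/idle/other/total CPU counts for the partition; the sample values are assumptions matching the example above):

```shell
# Sample bhosts output standing in for a live "bhosts" call.
bhosts_output='HOST_NAME           STATUS   JL/U  MAX  NJOBS  RUN  SSUSP  USUSP  RSV
lsfhost.localdomain ok       -     16   0      0    0      0      0'

# Sample output standing in for "sinfo -p lsf -h -o %C"
# (format: allocated/idle/other/total CPUs in the lsf partition).
sinfo_output='0/16/0/16'

# MAX is the fourth field of the bhosts data line; total CPUs is the
# fourth slash-separated field of the sinfo output.
lsf_max=$(echo "$bhosts_output" | awk 'NR == 2 { print $4 }')
slurm_total=$(echo "$sinfo_output" | awk -F/ '{ print $4 }')

if [ "$lsf_max" -eq "$slurm_total" ]; then
    echo "LSF-HPC sees all $slurm_total cores in the lsf partition"
else
    echo "mismatch: bhosts MAX=$lsf_max, lsf partition total=$slurm_total"
fi
```

A mismatch here usually means some nodes in the lsf partition are down or not yet reported to LSF-HPC.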

Getting Information About the LSF Execution Host Node

The lshosts command displays resource information about the LSF-HPC cluster. This command is useful for verifying machine-specific information.

LSF-HPC daemons run on only one node in the HP XC system, so the lshosts command will list one host, which represents all the resources assigned to it by the HP XC system. The total number of cores for that host should be equal to the total number of cores assigned to the SLURM lsf partition.

By default, lshosts returns the following information: host name, host type, host model, core factor, number of cores, total memory, total swap space, server information, and static resources.
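Individual fields can be extracted from the default lshosts output by position. The sketch below parses a canned sample in place of a live lshosts call; the sample values and the truncated host name are illustrative, and the field positions follow the default column order listed above (ncpus is the fifth column, maxmem the sixth):

```shell
# Sample lshosts output standing in for a live "lshosts" call.
lshosts_output='HOST_NAME      type    model         cpuf  ncpus  maxmem  maxswp  server  RESOURCES
lsfhost.locald SLINUX6 DEFAULT       1.0   16     2048M   4096M   Yes     (slurm)'

# Pull the core count (ncpus) and total memory (maxmem) from the data line.
cores=$(echo "$lshosts_output" | awk 'NR == 2 { print $5 }')
mem=$(echo "$lshosts_output" | awk 'NR == 2 { print $6 }')
echo "cores: $cores, memory: $mem"
```

On a live system, replace the canned string with `lshosts_output=$(lshosts)`.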

Using LSF-HPC 75