This bsub command launches a request for four cores. The myscript file contains the following:

    #!/bin/sh
    hostname
    srun hostname
    mpirun -srun ./hellompi
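The bsub command line itself is not reproduced here; a hypothetical invocation consistent with this example (four cores, interactive output) would be:

```shell
# Illustrative only; exact options depend on the local LSF configuration.
# -n 4 requests four processors; -I runs the job interactively so the
# aggregated output is returned to the submitting terminal.
bsub -n 4 -I ./myscript
```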
3. In this example, four cores spread over four nodes (n1, n2, n3, and n4) are allocated for myscript, and the SLURM job ID of 53 is assigned to the allocation.
4. SLURM_JOBID is the SLURM job ID of the job. Note that this is not the same as the LSF job ID. SLURM_NPROCS is the number of processes allocated.
These environment variables are intended for use by the user's job, whether explicitly (user scripts may read these variables as necessary) or implicitly (the srun commands in the user's job use these variables to determine their allocation of resources).
In this example, the value of SLURM_NPROCS is 4 and the value of SLURM_JOBID is 53.
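As a minimal sketch of explicit use (the echo text and the "unset" fallback are illustrative, not from the manual), a job script can read these variables directly:

```shell
#!/bin/sh
# Illustrative script: print the environment variables that SLURM
# sets for the allocation (SLURM_JOBID and SLURM_NPROCS).
# The "unset" fallback only appears when run outside an allocation.
echo "SLURM job ID: ${SLURM_JOBID:-unset}"
echo "Processes allocated: ${SLURM_NPROCS:-unset}"
```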
5. The user job myscript begins execution on compute node n1. The first line in myscript is the hostname command; it executes locally and returns the name of the node, n1.
6. The second line in the myscript script is the srun hostname command. The srun command in myscript inherits SLURM_JOBID and SLURM_NPROCS from the environment and executes the hostname command on each compute node in the allocation.
7. The output of the hostname tasks (n1, n2, n3, and n4) is aggregated back to the srun launch command (shown as dashed lines in the figure).
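Run by hand inside the same allocation, steps 6 and 7 would resemble the following session (illustrative; the ordering of node names in the output is not guaranteed):

```shell
$ srun hostname
n1
n2
n3
n4
```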
8. The last line in myscript is the mpirun -srun ./hellompi command. This command executes hellompi on the allocated compute nodes n1, n2, n3, and n4. The output of the hellompi tasks is aggregated back to the srun launch command, where it is collected.
9. When the job finishes, the SLURM allocation is released.
Notes About Using LSF

This section provides some additional information that should be noted about using LSF.
Job Startup and Job Control
When
74 Using LSF