

4 Developing Applications

This chapter discusses topics associated with developing applications in the HP XC environment. Before reading this chapter, you should read and understand Chapter 1: Overview of the User Environment and Chapter 2: Using the System.

This chapter addresses the following topics:

Application Development Environment Overview (page 37)

Compilers (page 37)

Examining Nodes and Partitions Before Running Jobs (page 38)

Interrupting a Job (page 38)

Setting Debugging Options (page 39)

Developing Serial Applications (page 39)

Developing Parallel Applications (page 40)

Developing Libraries (page 43)

Application Development Environment Overview

The HP XC system provides an application development environment for developing, building, and running applications that use multiple nodes with multiple cores. These applications can be parallel applications that use many cores or serial applications that use a single core.

The HP XC system is made up of nodes, each of which is assigned one or more roles. Nodes with the login role (login nodes) and nodes with the compute role (compute nodes) are important to the application developer:

Compute nodes run user applications.

Login nodes are where you log in and interact with the system to perform various tasks, such as executing commands, compiling and linking applications, and launching applications. A login node can also execute single-core applications and commands, just as on any other standard Linux system.

Applications are launched from login nodes, and then distributed and run on one or more compute nodes.

The HP XC environment uses the LSF-HPC batch job scheduler to launch and manage parallel and serial applications. When a job is submitted, LSF-HPC places the job in a queue and allows it to run when the necessary resources become available. When a job completes, LSF-HPC returns the job output, job information, and any errors. In addition to batch jobs, LSF-HPC can also run interactive batch jobs and interactive jobs. An LSF-HPC interactive batch job is a batch job that allows you to interact with the application while still taking advantage of LSF-HPC scheduling policies and features. An LSF-HPC interactive job runs without using the batch processing features of LSF-HPC; it is dispatched immediately by LSF-HPC on the LSF execution host node. LSF-HPC is described in detail in Chapter 9: Using LSF.
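As a minimal illustration (the application name my_app is a placeholder), a batch job and an interactive batch job might be submitted as follows; Chapter 9 describes the bsub options in detail:

$ bsub -n4 mpirun -srun ./my_app       # batch job: queued and run when resources become available
$ bsub -n4 -I mpirun -srun ./my_app    # interactive batch job: scheduled by LSF-HPC, output returned to your terminal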

Regardless of whether an application is parallel or serial, or whether it is run interactively or as a batch job, the general steps to developing an HP XC application are as follows:

1. Build the code by compiling and linking with the correct compiler. Compiler selection, and the setup of appropriate parameters for specific compilers, is made easier by the use of modules.

2. Launch the application with the bsub, srun, or mpirun command.

The build and launch commands are executed from the node to which you are logged in.
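The following sketch shows both steps for a simple HP-MPI program. The modulefile name mpi and the source file hello.c are illustrative; actual names depend on your installation:

$ module load mpi                      # step 1: select the compiler and MPI environment
$ mpicc -o hello hello.c               # step 1: compile and link with the HP-MPI compiler utility
$ bsub -n4 -I mpirun -srun ./hello     # step 2: launch on four cores through LSF-HPC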

Compilers

You can use compilers acquired from other vendors on an HP XC system; for example, the Intel C/C++ and Fortran compilers for the 64-bit architecture, and the Portland Group C/C++ and Fortran compilers on the CP4000 platform.

Intel, PGI, and PathScale compilers are not supplied with the HP XC system.

You can use other compilers and libraries on the HP XC system as on any other system, provided they contain single-core routines and have no dependencies on another message-passing system.
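For example, assuming the Intel C/C++ compiler is installed and its modulefile is named icc (modulefile names vary by installation), you could load it and build a program directly:

$ module load icc          # set up paths and environment variables for the Intel compiler
$ icc -o myprog myprog.c   # compile and link, as on any standard Linux system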

