
3.2.5 Notes on compiling and linking on the Power series

The Power series processors (PowerPC, POWER4, and POWER5) are both 32-bit and 64-bit capable. SUSE and Red Hat provide only 64-bit versions of Linux for these processors, and only a 64-bit OS is supported by Scali. However, the Power families can run 32-bit programs at full speed under a 64-bit OS, so Scali supports running both 32-bit and 64-bit MPI programs.

Note that gcc generates 32-bit code by default on Power; use the gcc/g77 flags -m32 and -m64 to select the code generation explicitly.
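For example, the same source file (a hypothetical hello.c) can be compiled to a 32-bit and a 64-bit object as follows:

gcc -m32 -c hello.c -o hello32.o
gcc -m64 -c hello.c -o hello64.o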

The PowerPC and POWER4/POWER5 share a common core instruction set but have different extensions; for optimal performance, be sure to read the specifics of the code generation flags in your compiler's documentation.

It is not possible to link 32-bit and 64-bit object code into one executable (cross dynamic linking is not possible either), so two sets of libraries are required. By common convention on ppc64 systems, all 32-bit libraries are placed in lib directories and all 64-bit libraries in lib64 directories. This means that when linking a 64-bit application with Scali MPI, you must use the -L$MPI_HOME/lib64 argument instead of the normal -L$MPI_HOME/lib.
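For example, a 64-bit link step could look like the following sketch, where <CC> and <program> are placeholders as in the command lines shown later in this section:

<CC> -m64 <program>.o -L$MPI_HOME/lib64 -lmpi -o <program>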

3.2.6 Notes on compiling with MPI-2 features

To compile and link with the Scali MPI-IO features, do the following, depending on whether the program is written in C or Fortran:

For C programs, mpio.h must be included in your program, and you must link with the libmpio shared library in addition to the Scali MPI 1.2 C shared library (libmpi):

<CC> <program>.o -I/opt/scali/include -L/opt/scali/lib -lmpio -lmpi -o <program>
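The following minimal C program is a sketch of how these pieces fit together, assuming the standard MPI-IO interface is available through mpio.h; the file name out.dat is arbitrary:

#include <mpi.h>
#include <mpio.h>   /* Scali MPI-IO declarations, as described above */

int main(int argc, char *argv[])
{
    int rank;
    MPI_File fh;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI process writes its rank into a shared file at its
       own offset, so the file holds one int per process. */
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at(fh, (MPI_Offset)(rank * sizeof(int)), &rank, 1,
                      MPI_INT, &status);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}

Compile and link it with the C command line shown above.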

For Fortran programs, you will need to include mpiof.h in your program and link with the libmpio shared library in addition to the Scali MPI 1.2 C and Fortran shared libraries (libmpi and libmpif):

<F77> <program>.o -I/opt/scali/include -L/opt/scali/lib -lmpio -lmpif -lmpi -o <program>

3.3 Running Scali MPI Connect programs

Note that executables issuing SMC calls cannot be started directly from a shell prompt. SMC programs can be started with the MPI monitor program mpimon, with the wrapper script mpirun, or from the Scali Manage GUI (see the Scali Manage User Guide for details).
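As an illustration only (the node names and process counts here are assumptions, and the authoritative option syntax is described in the mpimon and mpirun sections of this guide), a four-process job might be launched in either of these two ways:

mpimon <userprogram> -- node1 2 node2 2
mpirun -np 4 <userprogram>

The first line names the nodes explicitly; the second leaves node selection to the wrapper script.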

3.3.1 Naming conventions

When an application program is started, Scali MPI Connect modifies the program name (argv[0]) to help identify the individual instances. The following convention is used for the executable name, as reported on the command line by the Unix utility ps:

<userprogram>-<rank number>(mpi:<pid>@<nodename>)

where:

<userprogram> is the name of the application program.
<rank number> is the MPI-process rank number of the instance.
<pid> is the Unix process identifier of the instance.
<nodename> is the name of the node on which the instance runs.
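For example, rank 3 of a program named a.out, running with process id 1234 on a node named n0, would appear in ps output as follows (all names and numbers hypothetical):

a.out-3(mpi:1234@n0)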
