Scali MPI Connect 4.4 manual, Appendix C: Per node installation of Scali MPI Connect

Appendix C

Install Scali MPI Connect

Scali MPI Connect can be installed on a cluster in one of two ways: as part of building the cluster from scratch with Scali Manage, or by installing it manually on each node in systems that do not use Scali Manage. In the first case, Scali MPI Connect is included by default when the cluster is built; in the second case, the cluster is typically managed with some other suite of tools that does not integrate with Scali MPI Connect. The following sections detail the steps needed to install Scali MPI Connect manually.

C-1 Per node installation of Scali MPI Connect

Scali MPI Connect must be installed on every node in the cluster. When running smcinstall, give arguments to specify your interconnects.

The -h option gives you details on the installation command and shows you which options you need to specify in order to install the software components you want:

root# ./smcinstall -h

This is the Scali MPI Connect (SMC) installation program. The script will install and configure Scali MPI Connect at the current node.

Usage: smcinstall [-atemszulixVh?]

-a                  Automatically accept license terms.
-t                  Install Scali MPI Connect for TCP/IP.
-e <eth devs>       Install Scali MPI Connect for Direct Ethernet.
                    Use comma separated list for channel aggregation, and
                    additional -e options for multiple providers.
-m <filename|path>  Install Scali MPI Connect for Myrinet.
                    <filename> is gm-2.x source file package (.tar.gz).
                    <path> is path to pre-installed GM-2 software.
-b <filename|path>  Install Scali MPI Connect for Infiniband.
                    <filename> is Mellanox SDK-3.x or IBGD source
                    file package (.tar.gz).
                    Please make sure to use the correct driver-kernel version!
                    Consult the Mellanox Release Notes if in doubt.
                    <path> is path to pre-installed Mellanox compatible
                    software (ex. software from InfiniCon, TopSpin or Voltaire).
-s                  Install Scali MPI Connect for SCI.
-z                  Install and configure SCI management software.
-u <licensefile>    Install/upgrade license file and software.
-n <hostname>       Specify hostname of Scali license server.
-l                  Create license request (only necessary on license server).
-i                  Ignore SSP check.
-x                  Ignore errors.
-V                  Print version.
-h/-?               Show this help message.

Note: You must have root privileges to install SMC.

One or more of the product selection options (-t, -e, -m, -b and -s) must be specified to install a working MPI environment. The -u option can be used to install the license manager software on a license server, and the -z option can be used to install the SCI management software.
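As a sketch of a typical per-node invocation (the device names and license file path below are hypothetical examples, not values from this manual), a node could be set up for TCP/IP plus Direct Ethernet with channel aggregation over two devices like this:

```shell
# Hypothetical example values -- substitute your own devices and license path.
ETH_DEVS="eth1,eth2"           # comma separated list => channel aggregation
LICENSE="/root/scali.license"  # license file to install with -u

# On each node, as root, one would run:
#   ./smcinstall -t -e "$ETH_DEVS" -a -u "$LICENSE"
# (-t: TCP/IP provider, -e: Direct Ethernet, -a: accept license terms)
echo "./smcinstall -t -e $ETH_DEVS -a -u $LICENSE"
```

Repeating this command on every node (for example from a loop over the cluster's host list) yields a uniform installation; the -a flag keeps the run non-interactive.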

Scali MPI Connect Release 4.4 Users Guide
