
5.2 How to optimize MPI performance

There is no universal recipe for getting good performance out of a message passing program. Here are some dos and don’ts for SMC.

5.2.1 Performance analysis

Learn about the performance behaviour of your particular MPI applications on a Scali System by using a performance analysis tool.

5.2.2 Using processor-power to poll

To maximize performance, ScaMPI polls when waiting for communication to complete, instead of using interrupts. Polling means that the CPU busy-waits (loops) while waiting for data to arrive over the interconnect. All the exotic interconnects require polling.

Some applications create threads, and may end up with more active threads than there are CPUs. This has a huge impact on MPI performance, since a busy-waiting MPI thread consumes cycles that other threads could use. In threaded applications with irregular communication patterns, there are probably other threads that could make use of the processor. To increase performance in this case, ScaMPI provides a “backoff” feature. With backoff enabled, ScaMPI still polls when waiting for data, but starts to sleep at intervals when no data arrives. The algorithm is as follows: ScaMPI polls for a short time (the idle time), then sleeps for a period, and polls again.

The sleep period starts at a parameter-controlled minimum and is doubled every time until it reaches the maximum value. The following environment variables set the parameters:

SCAMPI_BACKOFF_ENABLE (turns the mechanism on)
SCAMPI_BACKOFF_IDLE=n (defines the idle period as n ms [Default = 20 ms])
SCAMPI_BACKOFF_MIN=n (defines the minimum backoff time in ms [Default = 10 ms])
SCAMPI_BACKOFF_MAX=n (defines the maximum backoff time in ms [Default = 100 ms])
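For example, to enable backoff and raise the maximum sleep period before launching a hypothetical application myapp (the mpimon invocation and node names are illustrative, and assume the launcher propagates these variables to the MPI processes):

    export SCAMPI_BACKOFF_ENABLE=1
    export SCAMPI_BACKOFF_MAX=200
    mpimon ./myapp -- node1 2 node2 2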

5.2.3 Reorder network traffic to avoid conflicts

Many-to-one communication may introduce bottlenecks. Zero-byte messages are low-cost. In a many-to-one communication, performance may improve if the receiver sends ready-to-receive tokens (in the form of a zero-byte message) to the MPI process wanting to send data.
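A minimal sketch of this pattern in C; the tags TOKEN_TAG and DATA_TAG and the helper gather_paced() are hypothetical, not part of SMC. Rank 0 hands out one zero-byte token at a time, so only one sender transmits to it at any moment:

    #include <mpi.h>

    #define TOKEN_TAG 100  /* hypothetical tag for the zero-byte token */
    #define DATA_TAG  101  /* hypothetical tag for the payload */

    /* Rank 0 collects one buffer from every other rank, handing out
       zero-byte ready-to-receive tokens so that only one sender
       transmits at a time. */
    void gather_paced(double *buf, int count, MPI_Comm comm)
    {
        int rank, size, src;

        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        if (rank == 0) {
            for (src = 1; src < size; src++) {
                /* Signal one sender, then receive its data. */
                MPI_Send(NULL, 0, MPI_BYTE, src, TOKEN_TAG, comm);
                MPI_Recv(buf, count, MPI_DOUBLE, src, DATA_TAG,
                         comm, MPI_STATUS_IGNORE);
            }
        } else {
            /* Wait for the token before sending the payload. */
            MPI_Recv(NULL, 0, MPI_BYTE, 0, TOKEN_TAG, comm,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, count, MPI_DOUBLE, 0, DATA_TAG, comm);
        }
    }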

5.3 Benchmarking

Benchmarking is the part of performance evaluation that deals with measuring and analysing computer performance using various kinds of test programs. Benchmark figures should always be handled with care when compared with similar results.

5.3.1 How to get expected performance

Caching the application program on the nodes.

For benchmarks with a short execution time, total execution time may be reduced by running the program repeatedly, since the first run leaves the executable cached on the nodes. For large configurations, copying the application to the local file system on each node will reduce startup latency and improve disk I/O bandwidth, as sketched below.
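A hypothetical way to stage the binary (the node names, paths, and the use of scp are placeholders; use whatever copy mechanism your cluster provides):

    for node in node1 node2 node3 node4; do
        scp ./myapp $node:/tmp/myapp
    done

The job is then started with the local path (here /tmp/myapp) as the program specification.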

The first iteration is (very) slow.

This may happen because the MPI processes in an application are not started simultaneously. Inserting an MPI_Barrier() call before the timing loop eliminates this skew.
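A minimal sketch of a timed loop guarded by a barrier; the work() function and the iteration count are placeholders:

    #include <mpi.h>
    #include <stdio.h>

    /* Placeholder for the code being benchmarked. */
    static void work(void) { /* ... */ }

    int main(int argc, char *argv[])
    {
        int i, rank;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Synchronize all processes so that none starts the timed
           loop before the slowest process has been launched. */
        MPI_Barrier(MPI_COMM_WORLD);

        t0 = MPI_Wtime();
        for (i = 0; i < 100; i++)
            work();
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("elapsed: %f s\n", t1 - t0);

        MPI_Finalize();
        return 0;
    }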
