for the purpose of determining how much memory to pin for RDMA message transfers on InfiniBand and Myrinet GM. The value determined by HP-MPI can be displayed using the -dd option. If HP-MPI determines an incorrect value for physical memory, this environment variable can be used to specify the value explicitly:

%export MPI_PHYSICAL_MEMORY=1048576

The above example specifies that the system has 1 GB of physical memory (the value is given in kilobytes).

8.9.5 MPI_PIN_PERCENTAGE

MPI_PIN_PERCENTAGE specifies the maximum percentage of physical memory (see MPI_PHYSICAL_MEMORY above) that can be pinned at any time. The default is 20%.

%export MPI_PIN_PERCENTAGE=30

The above example permits the HP-MPI library to pin (lock in memory) up to 30% of physical memory. The pinned memory is shared among the ranks on the host that were started as part of the same mpirun invocation. Running multiple MPI applications on the same host can therefore cause more than one application's MPI_PIN_PERCENTAGE worth of memory to be pinned cumulatively. Increasing MPI_PIN_PERCENTAGE can improve communication performance for communication-intensive applications in which nodes send and receive multiple large messages at a time, as is common with collective operations. Increasing MPI_PIN_PERCENTAGE allows more large messages to be progressed in parallel using RDMA transfers; however, pinning too much physical memory can degrade computation performance. MPI_PIN_PERCENTAGE and MPI_PHYSICAL_MEMORY are ignored unless InfiniBand or Myrinet GM is in use.
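
For example, with MPI_PHYSICAL_MEMORY=1048576 (1 GB, expressed in kilobytes) and MPI_PIN_PERCENTAGE=30, at most 1048576 KB x 0.30 = 314572 KB (roughly 307 MB) can be pinned at any one time, and that limit is shared by all ranks on the host started by the same mpirun.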

8.9.6 MPI_PAGE_ALIGN_MEM

MPI_PAGE_ALIGN_MEM causes the HP-MPI library to page align and page pad memory.

%export MPI_PAGE_ALIGN_MEM=1
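
Conceptually, page-aligned and page-padded allocation looks like the following C sketch. This is an illustration of the idea only, not HP-MPI's internal code; the function name is hypothetical.

#define _POSIX_C_SOURCE 200112L
#include <stdlib.h>
#include <unistd.h>

/* Conceptual sketch only: a page-aligned, page-padded allocation rounds
 * the requested size up to a whole number of pages and returns a pointer
 * that starts on a page boundary. */
void *page_aligned_alloc(size_t nbytes)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t padded = ((nbytes + page - 1) / page) * page;  /* pad to a page multiple */
    void *p = NULL;

    if (posix_memalign(&p, page, padded) != 0)             /* align to a page boundary */
        return NULL;
    return p;
}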

For more information on when this setting should be used, refer to the “Work-arounds” section of the HP-MPI V2.1 for XC4000 and XC6000 Clusters Release Notes.

8.9.7 MPI_MAX_WINDOW

MPI_MAX_WINDOW is used for one-sided applications. It specifies the maximum number of windows a rank can have open at the same time, so that HP-MPI can allocate enough table entries. The default is 5.

%export MPI_MAX_WINDOW=10
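
The following minimal C sketch (with hypothetical buffer sizes and window count) illustrates what a window is in this context: each rank creates ten windows with MPI_Win_create, so MPI_MAX_WINDOW would need to be at least 10 for this program.

#include <mpi.h>

int main(int argc, char **argv)
{
    enum { NWIN = 10, NDOUBLES = 1024 };   /* hypothetical sizes */
    MPI_Win win[NWIN];
    double *buf[NWIN];
    int i;

    MPI_Init(&argc, &argv);

    /* Each rank keeps NWIN windows open at the same time. */
    for (i = 0; i < NWIN; i++) {
        MPI_Alloc_mem(NDOUBLES * sizeof(double), MPI_INFO_NULL, &buf[i]);
        MPI_Win_create(buf[i], NDOUBLES * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win[i]);
    }

    /* ... one-sided communication (MPI_Put/MPI_Get) would go here ... */

    for (i = 0; i < NWIN; i++) {
        MPI_Win_free(&win[i]);
        MPI_Free_mem(buf[i]);
    }

    MPI_Finalize();
    return 0;
}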

8.9.8 MPI_ELANLOCK

By default, HP-MPI provides only exclusive window locks via Elan lock when the Elan system interconnect is in use. To use HP-MPI shared window locks, you must turn off Elan lock so that window locks are provided through shared memory; both exclusive and shared locks are then serviced from shared memory. To turn off Elan locks:

%export MPI_ELANLOCK=0
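
As an illustration, the fragment below (a sketch with hypothetical function and argument names, not HP-MPI-specific code) requests a shared window lock; with MPI_ELANLOCK=0, shared locks such as this one, as well as exclusive locks, are serviced from shared memory.

#include <mpi.h>

/* Read one row from a target rank's window under a shared lock, which
 * allows multiple origin ranks to access the same target concurrently. */
void get_row(MPI_Win win, int target, double *dst, int count)
{
    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);   /* shared window lock */
    MPI_Get(dst, count, MPI_DOUBLE, target, 0, count, MPI_DOUBLE, win);
    MPI_Win_unlock(target, win);
}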

8.9.9 MPI_USE_LIBELAN

By default, when Elan is in use, the HP-MPI library uses Elan's native collective operations to perform MPI_Bcast and MPI_Barrier on MPI_COMM_WORLD-sized communicators. This behavior can be changed by setting MPI_USE_LIBELAN to "false" or "0", in which case these operations are implemented using point-to-point Elan messages.
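
For reference, the sketch below shows the two affected operations; it is ordinary MPI code with hypothetical data, and only the library's internal implementation of the two calls changes when MPI_USE_LIBELAN is set to 0.

#include <mpi.h>

int main(int argc, char **argv)
{
    int data = 0, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        data = 42;   /* hypothetical payload broadcast from rank 0 */

    /* On MPI_COMM_WORLD-sized communicators these two calls use Elan's
     * native collectives by default; with MPI_USE_LIBELAN=0 they are
     * implemented with point-to-point Elan messages instead. */
    MPI_Bcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}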
