
Remote files are accessed via some network file system, typically NFS. Parallel programs usually need to have some data in files that is shared by all of the processes of an MPI job. Node programs may also use non-shared, node-specific files, such as for scratch storage of intermediate results or for a node's share of a distributed database.

There are different styles of handling file I/O of shared data in parallel programming. One approach is to have a single process, typically on the front end node or on a file server, be the only process that touches the shared files, passing data to and from the other processes via MPI messages. Alternatively, the shared data files can be accessed directly by each node program. In that case the shared files are made available through some network file support, such as NFS, and the application programmer is responsible for ensuring file consistency, either through proper use of the file locking mechanisms offered by the operating system and the programming language, such as fcntl in C, or through MPI synchronization operations.
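For example, a node program that appends results directly to a shared NFS file could serialize access with fcntl advisory locking. The following C sketch is illustrative only and is not taken from this guide; the function name and file path are placeholders, and the NFS locking services (lockd/statd) must be running for the locks to be honored across nodes.

#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Illustrative sketch: take an exclusive advisory lock on a shared file,
 * append a buffer, then release the lock. The function name and path are
 * placeholders, not part of InfiniPath MPI. */
int append_to_shared_file(const char *path, const char *buf, size_t len)
{
    struct flock lock;
    ssize_t written;
    int fd;

    fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) {
        perror("open");
        return -1;
    }

    lock.l_type = F_WRLCK;     /* exclusive write lock */
    lock.l_whence = SEEK_SET;
    lock.l_start = 0;
    lock.l_len = 0;            /* 0 means lock the whole file */

    if (fcntl(fd, F_SETLKW, &lock) < 0) {    /* wait until the lock is granted */
        perror("fcntl(F_SETLKW)");
        close(fd);
        return -1;
    }

    written = write(fd, buf, len);

    lock.l_type = F_UNLCK;                   /* release the lock */
    fcntl(fd, F_SETLK, &lock);
    close(fd);

    return (written == (ssize_t)len) ? 0 : -1;
}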

3.9.2 MPI-IO with ROMIO

MPI-IO is the part of the MPI-2 standard that supports collective and parallel file I/O. One of the advantages of using MPI-IO is that it takes care of managing file locks when file data is shared among nodes.

InfiniPath MPI includes ROMIO version 1.2.6, a high-performance, portable implementation of MPI-IO from Argonne National Laboratory. ROMIO includes everything defined in the I/O chapter of the MPI-2 standard except support for file interoperability and user-defined error handlers for files. Of the MPI-2 features, InfiniPath MPI includes only the MPI-IO features implemented in ROMIO version 1.2.6 and the generalized MPI_Alltoallw communication exchange. See the ROMIO documentation at http://www.mcs.anl.gov/romio for details.
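As an illustration of collective MPI-IO (not an example from this guide), the following sketch has each rank write its own block of a single shared file with MPI_File_write_at_all, letting ROMIO coordinate the I/O across ranks; the file name and data layout are assumptions.

#include <mpi.h>

#define COUNT 1024

/* Illustrative sketch: each rank writes its own block of a single shared
 * file with a collective MPI-IO call. The file name is a placeholder. */
int main(int argc, char **argv)
{
    int rank, i;
    int buf[COUNT];
    MPI_File fh;
    MPI_Offset offset;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < COUNT; i++)
        buf[i] = rank;                       /* node-specific data */

    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Each rank writes at its own offset; the collective call lets ROMIO
     * coordinate and optimize the I/O across all ranks. */
    offset = (MPI_Offset)rank * COUNT * sizeof(int);
    MPI_File_write_at_all(fh, offset, buf, COUNT, MPI_INT, &status);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}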

3.10 InfiniPath MPI and Hybrid MPI/OpenMP Applications

InfiniPath MPI supports hybrid MPI/OpenMP applications provided that MPI routines are called only by the master OpenMP thread. This is called the funneled thread model. Instead of MPI_Init/MPI_INIT (for C/C++ and Fortran, respectively), the program can call MPI_Init_thread/MPI_INIT_THREAD to determine the level of thread support; the value MPI_THREAD_FUNNELED will be returned.
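A minimal C sketch of the funneled model is shown below. It is not taken from this guide: the reduction is a placeholder for real work, and only the structure (OpenMP threads compute, the master thread alone calls MPI) is the point.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

/* Illustrative sketch of the funneled model: OpenMP threads do the local
 * computation, and only the master thread makes MPI calls. */
int main(int argc, char **argv)
{
    int provided, rank;
    double local = 0.0, global = 0.0;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "MPI_THREAD_FUNNELED is not available\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* All OpenMP threads contribute to the per-node partial result. */
#pragma omp parallel reduction(+:local)
    {
        local += omp_get_thread_num() + 1;   /* placeholder for real work */
    }

    /* Only the master thread (outside the parallel region) calls MPI. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global);

    MPI_Finalize();
    return 0;
}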

To use this feature, the application must be compiled with both OpenMP and MPI code enabled. To do this, use the -mp flag on the mpicc compile line.
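For example, assuming the hybrid source is in a file named hybrid.c (a hypothetical name), the compile line might look like:

mpicc -mp hybrid.c -o hybrid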

As mentioned above, MPI routines must be called only by the master OpenMP thread. The hybrid executable is run as usual with mpirun, but typically only one MPI process is started per node, and the OpenMP library creates additional threads to utilize all CPUs on that node. If there are sufficient CPUs on a node, it may also be useful to run more than one MPI process per node.
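A launch might look like the following sketch; the process count, thread count, and mpihosts file name are assumptions, and the way OMP_NUM_THREADS reaches the compute nodes depends on how the site propagates environment variables.

export OMP_NUM_THREADS=4          # OpenMP threads created per MPI process
mpirun -np 8 -m mpihosts ./hybrid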

