Chapter 1    Introduction
This manual describes Scali MPI Connect (SMC) in detail. SMC is sold as a separate product.
This manual is written for users who have a basic programming knowledge of C or Fortran, as well as an understanding of MPI.
1.1 Scali MPI Connect product context
Figure 1-1: A cluster system
Figure 1-1 shows a simplified view of the underlying architecture of clusters using Scali MPI
Connect: a number of compute nodes are connected together in an Ethernet network, through which a front-end interfaces the cluster with the corporate network. A high performance interconnect can be attached to service the communication requirements of key applications. The front-end imports services such as file systems from the corporate network to allow users to run applications and access their data.
Scali MPI Connect implements the MPI standard for a number of popular high performance interconnects, such as Gigabit Ethernet, InfiniBand, Myrinet and SCI.
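Because SMC implements the MPI standard, applications written against standard MPI calls run unchanged regardless of which interconnect is used. As an illustration only (this example is not taken from the manual), a minimal MPI program in C might look as follows; the SMC-specific commands for compiling and launching such a program are described later in this guide.

    /* Minimal MPI example (illustrative sketch, not SMC-specific code). */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime          */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process           */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes      */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down the MPI runtime      */
        return 0;
    }

Only calls defined by the MPI standard are used here, so the same source builds and runs on any of the interconnects listed above.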
While the high performance interconnect is optional, the networking infrastructure is mandatory. Without it the nodes in the cluster will have no way of sharing resources. TCP/IP functionality implemented by the Ethernet network enables the front-end to issue commands to the nodes, provide them with data and application images, and collect results from the processing the nodes perform.
The Scali Software Platform provides the necessary software components to combine a number of commodity computers running Linux into a single computer entity, henceforth called a cluster.
Scali is targeting its software at users involved in High Performance Computing, also known as supercomputing, which typically involves CPU-intensive parallel applications. Scali aims to produce software tools that help its users maximize the power and ease of use of the computing hardware they have purchased.