traffic could be separated from other TCP/IP and Fibre Channel traffic. Moreover, adding Ethernet and Fibre Channel I/O cards to the servers better distributed the I/O loads and resulted in increased performance.

A thorough evaluation of the application transactions is recommended in conjunction with an analysis of the following:

Optimum buffer credit configuration (a sizing sketch follows this list)

Number of Ethernet cards in each server

Number of Fibre Channel cards in each server

Number of I/O slots and cards in the DWDM device
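
The number of buffer credits needed on an extended Fibre Channel link grows with both distance and link speed. The following sketch applies the common rule of thumb of roughly one buffer credit per 2 km of fiber per Gbit/s of link speed for full-size frames; the function name, the safety margin, and the example distances are illustrative assumptions rather than values taken from the tested configuration.

    # Rough buffer-credit estimate for an extended Fibre Channel link.
    # Rule of thumb (assumption): roughly one buffer credit is needed per
    # 2 km of fiber per Gbit/s of link speed when full-size frames are in flight.

    def estimate_buffer_credits(distance_km, link_speed_gbps=1.0, margin=1.2):
        """Return an estimated buffer-credit count for one direction of the link.

        distance_km     -- one-way fiber distance between the sites
        link_speed_gbps -- Fibre Channel link speed (1 or 2 Gbit/s)
        margin          -- illustrative safety factor for smaller frames and jitter
        """
        base = (distance_km / 2.0) * link_speed_gbps
        return max(1, int(round(base * margin)))

    if __name__ == "__main__":
        # Example: candidate inter-site distances on a 2 Gbit/s Fibre Channel link.
        for distance in (10, 25, 50):
            credits = estimate_buffer_credits(distance, link_speed_gbps=2.0)
            print(distance, "km ->", credits, "buffer credits")

If the switch or DWDM card cannot supply enough credits for the configured distance, link utilization falls well below line rate, which is why buffer credit configuration leads the list above.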

The next steps

Performance enhancements are available at several levels within the Extended Cluster for RAC architecture, such as network buffering, workload distribution, and database partitioning. In many cases, these enhancements will be environment-specific. As these solutions are deployed, a collection of best practices will be shared.
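
As a simple illustration of the network buffering category, the sketch below shows how an application might request larger socket send and receive buffers for traffic that crosses the inter-site link; the buffer size is a placeholder assumption, and production values would come from environment-specific tuning at the operating system and interconnect levels.

    import socket

    # Illustrative only: request larger socket send and receive buffers, as might
    # be considered for long-distance traffic. The 1 MB size is a placeholder
    # assumption, not a tuned or recommended value.
    BUFFER_BYTES = 1 * 1024 * 1024

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUFFER_BYTES)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUFFER_BYTES)

    # The operating system may cap or adjust the requested sizes; read them back.
    print("send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
    print("recv buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    sock.close()

Larger buffers help keep a long-distance connection filled, since the bandwidth-delay product grows with inter-site distance.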

To provide additional business value, HP continues to invest in developing new solutions and enhancing its existing portfolio. One example of this commitment is planned support for node counts beyond those in the current specifications.

Solution flexibility

The Extended Cluster for RAC solution was tested using the components detailed earlier in the “Test architecture” section. However, the configuration accommodates a wide range of hardware and software choices. This flexibility enables customers to leverage existing IT assets and move rapidly to an operational implementation without purchasing an entirely new infrastructure.

During the testing period, a wide range of storage devices was introduced into the configuration. The majority of HP-UX enterprise server-compatible storage devices will perform suitably in the Extended Cluster for RAC environment, from the high-end HP StorageWorks XP disk array models to the lower-cost HP EVA storage solution.

In addition, HP Serviceguard Quorum Server can be introduced to provide cluster lock management, acting as a tie-breaker so that the cluster can re-form autonomously after any failure that impacts cluster integrity.
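
To illustrate the tie-breaking role, the following sketch shows the basic idea: when a failure splits the cluster into equal halves, only the partition that obtains the cluster lock continues, and the other halts. This is a simplified illustration of the concept, not Serviceguard's actual protocol; the class and function names are assumptions made for the example.

    # Simplified illustration of quorum-based tie-breaking after a cluster partition.
    # This models the concept only; it is not HP Serviceguard's actual protocol.

    class QuorumServer:
        """Grants the cluster lock to the first partition that requests it."""

        def __init__(self):
            self._lock_holder = None

        def request_lock(self, partition_id):
            if self._lock_holder is None:
                self._lock_holder = partition_id
            return self._lock_holder == partition_id

    def partition_survives(partition_nodes, total_nodes, partition_id, quorum):
        """Decide whether a partition stays up after the cluster splits."""
        if partition_nodes * 2 > total_nodes:
            return True                               # clear majority, no tie-break needed
        if partition_nodes * 2 == total_nodes:
            return quorum.request_lock(partition_id)  # even split, ask the tie-breaker
        return False                                  # minority partition halts

    if __name__ == "__main__":
        quorum = QuorumServer()
        # A four-node cluster split 2/2 across sites: only one side wins the lock.
        print("site A survives:", partition_survives(2, 4, "site-A", quorum))
        print("site B survives:", partition_survives(2, 4, "site-B", quorum))

In an extended cluster, the quorum server is typically placed at a third location so that the loss of an entire site still leaves the surviving site able to obtain the lock and continue operation.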

There are a variety of options for load balancing using access clients, including Resonate’s Central Dispatch product and a selection of hardware-based choices from Cisco.

Given the transparency of the DWDM devices, it is possible to run a wide variety of protocols across the fiber. During testing, Gigabit Ethernet was used for system-to-system connectivity. Network switches can be 100Base-T (TX or FX), 1000Base-T (TX or FX), or Fiber Distributed Data Interface (FDDI). The connections between the network switches and the DWDM boxes must currently remain fiber optic.

As demonstrated by the Oracle9i RAC implementation, HP has introduced the VSE, enhanced for specific application server and database environments. HP has released the VSE Reference Architecture for BEA WebLogic Server and Oracle databases on HP-UX, which gives customers an effective way to quickly implement virtualization within their application server and database environments and realize the benefits of an Adaptive Enterprise.
