/etc/cmcluster — the directory in which Serviceguard keeps its legacy configuration files and the node specific package runtime directories

Database client software needs to be stored locally on each node. Details can be found in the database sections below.

Parts of the content of the local group of directories must be synchronized manually across all nodes of the cluster.
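
A minimal sketch of such a manual synchronization, assuming standard remote copy tools, could look as follows; the node name node2 and the file path are placeholders:

    # copy a locally maintained file from the node where it was changed
    # to the second cluster node (node2 and the path are placeholders)
    rcp -p /home/c11adm/.profile node2:/home/c11adm/.profile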

SAP instance (startup) profile names contain either local hostnames or virtual hostnames. SGeSAP will always prefer profiles that use local hostnames to allow individual startup profiles for each host, which might be useful if the failover hardware differs in size.
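
For illustration only, assuming a hypothetical SAP System C11 with a central instance DVEBMGS00 running on node1 and using the relocatable hostname relocci, the two naming schemes would look like this:

    /sapmnt/C11/profile/START_DVEBMGS00_node1     (local hostname of node1)
    /sapmnt/C11/profile/START_DVEBMGS00_relocci   (virtual hostname of the package)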

In clustered SAP environments prior to the 7.x releases, it is required to install local executables. Local executables help to prevent several causes of package startup or shutdown hangs that result from unavailability of the centralized executable directory. Availability of the executables delivered with packaged SAP components is mandatory for proper package operation. Experience has shown that it is good practice to create local copies of all files in the central executable directory. This includes the shared libraries delivered by SAP.

To automatically synchronize the local copies of the executables, SAP components deliver the sapcpe mechanism. With every startup of the instance, sapcpe compares the executables stored centrally with those stored locally and copies newer files to the local directory.
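
In standard installations sapcpe gets triggered by entries in the instance start profile. The following excerpt is only a sketch; the exact variable names and list files differ between SAP kernel releases:

    #---------------------------------------------------------------
    # Copy SAP executables from the central to the local directory
    # (sketch of typical start profile entries)
    #---------------------------------------------------------------
    _CPARG0 = list:$(DIR_CT_RUN)/sapcpeft
    Execute_00 = immediate $(DIR_CT_RUN)/sapcpe pf=$(_PF) $(_CPARG0)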

Directories that Reside on Shared Disks

Volume groups on SAN shared storage get configured as part of the SGeSAP package.

Instance specific volume groups are required by only one SAP instance or one database instance. They usually get included with exactly the package that is set up for this instance. In this configuration option, the instance specific volume groups are included in the package.

System specific volume groups get accessed from all instances that belong to a particular SAP System. Environment specific volume groups get accessed from all instances that belong to any SAP System installed in the whole SAP scenario. System and environment specific volume groups should be set up using HA NFS to provide access capabilities to SAP instances on nodes outside of the cluster. The cross-mounting concept of option 1 is not required.
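
The following fragment sketches how such volume groups and file systems could appear in a Serviceguard modular package configuration. All volume group, logical volume and SID names (C11) are placeholders:

    # instance specific volume group of the packaged instance
    vg              vgdbC11

    # system specific volume group holding /sapmnt/<SID>,
    # exported to other nodes via the HA NFS toolkit
    vg              vgsapmntC11
    fs_name         /dev/vgsapmntC11/lvsapmnt
    fs_directory    /sapmnt/C11
    fs_type         vxfs
    fs_mount_opt    "-o rw"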

A useful naming convention for most of these shared volume groups is vg<INSTNAME><SID> or, alternatively, vg<INSTNAME><SID><INR>. Table 2-4 provides an overview of SAP shared storage for this special setup and maps it to the component and package type for which it occurs.

Table 2-4 File systems for the SGeSAP package in NFS Idle Standby Clusters

Mount Point         Access Point             Remarks     VG Name     Device minor number

/sapmnt/<SID>       shared disk and HA NFS   required

/usr/sap/<SID>      shared disk

/usr/sap/trans      shared disk and HA NFS   optional
The table can be used to document the device minor numbers in use. The device minor numbers of the logical volumes need to be identical for each distributed volume group across all cluster nodes.
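
One common way to keep the minor numbers identical is to create the volume group device file with the same minor number on every node before importing the volume group. The volume group name, map file and minor number 0x010000 below are placeholders:

    # on the node where the volume group was created
    vgexport -p -s -m /tmp/vgsapmntC11.map vgsapmntC11
    rcp /tmp/vgsapmntC11.map node2:/tmp/vgsapmntC11.map

    # on every other cluster node, re-use the same minor number
    mkdir /dev/vgsapmntC11
    mknod /dev/vgsapmntC11/group c 64 0x010000
    vgimport -s -m /tmp/vgsapmntC11.map vgsapmntC11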

If you have more than one system, place /usr/sap/put on a separate volume group created on shared drives. The directory should not be added to any package. This ensures that it is independent of any SAP WAS system and can be mounted manually on any host if needed.
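
If access is needed, the volume group can be activated and mounted manually on any node, assuming it is not active anywhere else. The volume group and logical volume names below are placeholders:

    # manual activation and mount of the /usr/sap/put volume group
    vgchange -a y vgput
    mount /dev/vgput/lvput /usr/sap/put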

Option 3: SGeSAP CFS Cluster

SGeSAP supports the use of HP Serviceguard Cluster File System for concurrent shared access. CFS is available with selected HP Serviceguard Storage Management Suite bundles. CFS replaces NFS technology for all SAP related file systems. All related instances need to run on cluster nodes to have access to the shared files.

SAP related file systems that reside on CFS are accessible from all nodes in the cluster. Concurrent reads and writes are handled by the CFS layer. Each required CFS disk group and each required CFS mount point gets configured in a dedicated Serviceguard multi-node package.
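
The CFS disk groups and mount points are typically registered with the cluster using the CFS administration commands. The following sketch assumes a hypothetical disk group dgsapC11 with a volume lvsapmnt that is mounted at /sapmnt/C11:

    # register the disk group and the mount point with the cluster
    cfsdgadm add dgsapC11 all=sw
    cfsmntadm add dgsapC11 lvsapmnt /sapmnt/C11 all=rw

    # activate the disk group and mount the file system on all nodes
    cfsdgadm activate dgsapC11
    cfsmount /sapmnt/C11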
