6 SGeSAP Cluster Administration

SGeSAP clusters follow characteristic hardware and software setups. An SAP application is no longer treated as though it runs on a dedicated host. It is wrapped up inside one or more Serviceguard packages, and these packages can be moved to any of the hosts inside the Serviceguard cluster. The Serviceguard packages provide a virtualization layer that keeps the application independent of specific server hardware. The virtualization is transparent in many respects, but in some areas special considerations apply. This affects the way the system is administered. Topics presented in this chapter include:

Change Management

Mixed Clusters

Switching SGeSAP On and Off

Change Management

Serviceguard keeps information about the cluster configuration. In particular, it needs to know the relocatable IP addresses and their subnets, your Volume Groups, the Logical Volumes, and their mountpoints. Check with your HP consultant for information about the way Serviceguard is configured to suit your SAP system. If you change this configuration, you may have to reconfigure your cluster.

System Level Changes

SGeSAP provides some flexibility for hardware change management. If you have to maintain the server on which an (A)SCS instance is running, this instance can temporarily be moved to the host that runs its Replicated Enqueue without interrupting ongoing work. Some users might experience a short delay in response time for their ongoing transactions. No downtime is required for the maintenance action.
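As a sketch, such a temporary relocation can be done with standard Serviceguard package commands. The package and node names below are hypothetical examples; the real names come from your cluster configuration, and the commands print themselves instead of executing unless DRYRUN is cleared:

```shell
# Sketch only: relocate an (A)SCS package to the node that currently
# runs its Replicated Enqueue. "ascsC11" and "node2" are made-up
# example names. DRYRUN=echo previews the commands; set DRYRUN= on a
# real cluster to actually execute them.
DRYRUN=${DRYRUN:-echo}
PKG=ascsC11
TARGET=node2

$DRYRUN cmhaltpkg "$PKG"              # stop the package where it runs
$DRYRUN cmrunpkg -n "$TARGET" "$PKG"  # start it on the replication node
$DRYRUN cmmodpkg -e "$PKG"            # re-enable automatic failover
```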

If you add new hardware that SAP software needs access to in order to work properly, make sure to allow this access from any host of the cluster by planning the hardware connectivity appropriately. For example, it is possible to increase database disk space by adding a new shared LUN from a SAN device as a physical volume to the shared volume groups on the primary host on which the database runs. The changed volume group configuration has to be redistributed to all cluster nodes afterwards via vgexport(1m) and vgimport(1m).
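One possible command sequence for this redistribution is sketched below. The volume group, device file, and host names are made-up examples, and the commands only echo themselves unless DRYRUN is cleared; consult the vgexport(1m) and vgimport(1m) manpages for the exact options appropriate to your setup:

```shell
# Sketch: extend a shared volume group by a new LUN and redistribute
# the configuration to another cluster node. "vgdatabase",
# "/dev/dsk/c5t0d1", and "node2" are hypothetical names.
# DRYRUN=echo previews the commands; set DRYRUN= to execute.
DRYRUN=${DRYRUN:-echo}
VG=vgdatabase
NEWPV=/dev/dsk/c5t0d1
OTHERNODE=node2

$DRYRUN pvcreate $NEWPV                           # prepare the new LUN
$DRYRUN vgextend /dev/$VG $NEWPV                  # add it to the shared VG
$DRYRUN vgexport -p -s -m /tmp/$VG.map /dev/$VG   # preview export, write map file
$DRYRUN rcp /tmp/$VG.map $OTHERNODE:/tmp/$VG.map  # copy the map file over
# On a node where the VG is already defined, vgexport the old
# definition there first, then re-import with the new map:
$DRYRUN remsh $OTHERNODE vgimport -s -m /tmp/$VG.map /dev/$VG
```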

It is good practice to keep a list of all directories that were identified in Chapter Two as common directories that are kept local on each node. As a rule of thumb, files that get changed in these directories need to be copied manually to all other cluster nodes afterwards. There are exceptions: for example, /home/<SID>adm does not need to be the same on all of the hosts. In clusters that do not use CFS, it is possible to locally install additional Dialog Instances on hosts of the cluster, although such an instance will not be part of any package. SAP startup scripts in the home directory are then only needed on this dedicated host; you do not need to distribute them to other hosts.
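The manual copy step can be as simple as a loop over the remaining cluster nodes. The node list and the example file below are hypothetical, and the commands only echo themselves unless DRYRUN is cleared; substitute scp if secure shell is what is configured for your cluster:

```shell
# Sketch: copy a changed file from a node-local common directory to
# all other cluster nodes. "node2", "node3", and the file name are
# made-up examples. Assumes remote copy access is configured between
# the nodes. DRYRUN=echo previews; set DRYRUN= to execute.
DRYRUN=${DRYRUN:-echo}
FILE=/etc/services
NODES="node2 node3"

for node in $NODES; do
    $DRYRUN rcp "$FILE" "$node:$FILE"   # use scp instead if ssh is set up
done
```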

If remote shell access is used, never delete the mutual .rhosts entries of the root user and <sid>adm on any of the nodes. If secure shell is specified for SGeSAP, never delete its setup.

Entries in /etc/hosts, /etc/services, /etc/passwd, or /etc/group should be kept identical across all nodes.

If you use an ORACLE database, be aware that the SQL*Net V2 listener configuration file is by default also kept as a local copy in /etc/listener.ora.

Files in the following directories and all subdirectories are typically shared:

/usr/sap/<SID>/DVEBMGS<INR>

/export/usr/sap/trans (except for stand-alone J2EE)

/export/sapmnt/<SID>

/oracle/<SID> or /export/sapdb

Chapter Two can be referenced for a full list. These directories are only available on a host if the package they belong to is running on it. They are empty on all other nodes. Serviceguard switches the directory content to a node with the package.

All directories below /export have an equivalent directory whose fully qualified path comes without this prefix. These directories are managed by the automounter. The NFS file systems get mounted automatically when they are accessed.
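An automounter map entry of the following general shape illustrates the scheme; the SAP System ID and the relocatable hostname below are hypothetical examples, and the exact map file and options depend on your configuration:

```
# Hypothetical direct automounter map entries: the path without the
# /export prefix is NFS-mounted from the relocatable address of the
# package that exports it. "C11" and "relocnfs" are examples only.
/sapmnt/C11       relocnfs:/export/sapmnt/C11
/usr/sap/trans    relocnfs:/export/usr/sap/trans
```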

HP Serviceguard Extension for SAP (SGeSAP) manual Change Management, System Level Changes