Volumes that need replication are identified dynamically within the startup procedure of the standby. No manual maintenance steps are required to trigger volume pair synchronizations and subsequent split operations. Synchronizations occur only in rare cases, for example at the first startup of a standby or after a standby has been intentionally shut down for a longer period of time. In all other cases, the liveCache logging devspaces contain enough delta information to update the standby data without hardware replication of full LUNs.
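As a rough illustration, the startup decision can be pictured as follows. This is a hedged sketch, not the actual SGeSAP implementation: the delta check is hypothetical, and the pair commands shown for the rare full-synchronization path are standard RAID Manager (Business Copy) commands with an example device group name (lcdg).

    # Hypothetical sketch of the standby startup decision (illustration only;
    # the actual logic is internal to the SGeSAP lc package):
    if standby_log_delta_sufficient          # hypothetical check of the log devspaces
    then
        :   # no hardware replication; the standby catches up from the master log
    else
        pairresync -g lcdg                   # re-synchronize the volume pair
        pairevtwait -g lcdg -s pair -t 3600  # wait until the pair reaches PAIR state
        pairsplit -g lcdg                    # split again so the standby owns its LUNs
    fi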

Neither the ongoing operation of the standby nor the master failover requires the business copy mechanisms. The standby synchronizes its data regularly by accessing the master log files, which therefore reside on CVM/CFS volumes. No liveCache content data needs to be transferred via LAN at any point in time.

The liveCache logging is continuously verified during operation. An invalid entry in the log files is detected immediately. This avoids the hazardous situation of not becoming aware of corrupted log files until they fail to restore a production liveCache instance.

A data storage corruption that occurs during operation of the master is not replicated to the standby, because the standby is only logically coupled. The standby LUNs used for devspace content do not necessarily hold the same data as the master LUNs at the physical level, but logically the standby remains consistent and stays close to the content of the master LUNs. The standby can therefore be promoted to become the new master immediately. With access to the original log of the master, it can update itself without any data loss.
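As a hedged illustration, the takeover could conceptually look like the following dbmcli sequence. SGeSAP automates the equivalent steps inside the lc package, so the exact commands and their ordering here are an assumption; the credentials are placeholders.

    # Hypothetical takeover sketch (SGeSAP performs the equivalent internally):
    dbmcli -d <LCSID> -u control,<password> db_state
    # confirm the instance is alive and running in standby mode
    dbmcli -d <LCSID> -u control,<password> db_online
    # bringing a hot standby online promotes it to master; it replays the
    # remaining master log from the CVM/CFS volumes before accepting work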

Planning the Volume Manager Setup

The following sections describe the lc package of SGeSAP. The lc package was developed according to the SAP recommendations and fulfills all SAP requirements for liveCache failover solutions.

liveCache distinguishes an instance-dependent path /sapdb/<LCSID> and two instance-independent paths, IndepData and IndepPrograms. By default, all three point to directories below /sapdb.

NOTE: <LCSID> denotes the three-letter database name of the liveCache instance in uppercase. <lcsid> is the same name in lowercase.
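To verify where a given installation actually keeps these paths, the MaxDB database manager CLI can be queried. A minimal sketch, assuming the dbm_getpath DBM command is available in the installed liveCache version and that control user credentials are at hand:

    dbmcli -d <LCSID> -u control,<password> dbm_getpath IndepDataPath
    dbmcli -d <LCSID> -u control,<password> dbm_getpath IndepProgPath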

There are different configuration options for the storage layout and the filesystems, reflecting a trade-off between simplicity and flexibility. The options are described below, ordered by increasing complexity. The cluster layout constraints that need to be fulfilled to allow the simplifications of a given option are stated in a bullet list.

The subsequent sections refer to the options by the numbers that are introduced here.

Option 1: Simple Clusters with Separated Packages

Cluster Layout Constraints:

- The liveCache package does not share a failover node with the APO Central Instance package.

- There is no MaxDB database or additional liveCache running on the cluster nodes.

- There is no intention to install additional APO Application Servers within the cluster.

- There is no hot standby liveCache system configured.

Table 4-2 File System Layout for liveCache Package running separate from APO (Option 1)

    Storage Type    Package      Mount Point
    shared          lc<LCSID>    /sapdb/data
    shared          lc<LCSID>    /sapdb/<LCSID>/data<n>
    shared          lc<LCSID>    /sapdb/<LCSID>/log<n>
    shared          lc<LCSID>    /var/spool/sql
    shared          lc<LCSID>    /sapdb/programs

In the above layout, all relevant files are shared via standard procedures. The setup causes no administrative overhead for synchronizing local files, and SAP default paths are used.
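The mount points of Option 1 translate directly into the volume and filesystem definitions of the package. The excerpt below is a hedged sketch of a legacy Serviceguard package control script; the volume group and logical volume names (vglc, lvol1, ...) are examples, not prescribed values:

    # Hypothetical excerpt of the lc<LCSID> package control script (names are examples)
    VG[0]="vglc"
    LV[0]="/dev/vglc/lvol1"; FS[0]="/sapdb/data";             FS_TYPE[0]="vxfs"
    LV[1]="/dev/vglc/lvol2"; FS[1]="/sapdb/<LCSID>/data<n>";  FS_TYPE[1]="vxfs"
    LV[2]="/dev/vglc/lvol3"; FS[2]="/sapdb/<LCSID>/log<n>";   FS_TYPE[2]="vxfs"
    LV[3]="/dev/vglc/lvol4"; FS[3]="/var/spool/sql";          FS_TYPE[3]="vxfs"
    LV[4]="/dev/vglc/lvol5"; FS[4]="/sapdb/programs";         FS_TYPE[4]="vxfs"
    FS_MOUNT_OPT[0]="-o rw"  # and similarly for the remaining entries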
