NOTE: The dataserver and the monitor server must both be installed on the same disk, because the monitor server depends on the presence of the dataserver in order to work.
2. Make sure that the 'sybase' user has the same user ID and group ID on all nodes in the cluster. Create the group and user with the following commands, and ensure that the uid/gid match on every node by editing the /etc/passwd file if necessary:
#groupadd sybase
#useradd -g sybase sybase
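The commands above can be sketched with explicit numeric IDs so the uid/gid are guaranteed to match across nodes. The value 200 is an assumption; pick any uid/gid that is free on every node. The commands are shown through a dry-run wrapper because they require root:

```shell
# Sketch: create the sybase group and user with explicit, matching ids.
# The numeric id 200 is an assumption; use any uid/gid free on every node.
SYB_UID=200
SYB_GID=200

run() { echo "$@"; }   # dry run: prints each command; remove 'echo' to execute

run groupadd -g $SYB_GID sybase
run useradd -u $SYB_UID -g sybase -m -d /home/sybase sybase
run id sybase          # repeat on every node; uid/gid must match everywhere
```

Running `id sybase` on each node after creation is a quick way to confirm the IDs agree before proceeding.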
3. Create a volume group, logical volume, and file system to hold the necessary configuration information and symbolic links to the Sybase ASE executables. This file system will be defined as SYBASE in the package configuration file and master control script. Since the volume group and file system must be uniquely named within the cluster, use the name of the database instance (ASE_SERVER) in the names:
Assuming that the name of the ASE instance is 'SYBASE0', create the following:
A volume group: /dev/vg0_SYBASE0
A logical volume: /dev/vg0_SYBASE0/lvol1
A file system: /dev/vg0_SYBASE0/lvol1 mounted at /SYBASE0
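On HP-UX, the sequence above might look like the following sketch. The disk device (c0t1d0), the group-file minor number (0x010000), and the 1024 MB size are assumptions to adapt to your hardware; the commands are shown through a dry-run wrapper because they require root and real disks:

```shell
# Sketch: create the shared VG/LV/file system for instance SYBASE0 on HP-UX.
# Device c0t1d0, minor number 0x010000, and the 1024 MB size are assumptions.
ASE_SERVER=SYBASE0
VG=vg0_${ASE_SERVER}

run() { echo "$@"; }   # dry run: prints each command; remove 'echo' to execute

run mkdir /dev/$VG
run mknod /dev/$VG/group c 64 0x010000   # minor number must be unique per VG
run pvcreate /dev/rdsk/c0t1d0
run vgcreate /dev/$VG /dev/dsk/c0t1d0
run lvcreate -L 1024 -n lvol1 /dev/$VG
run newfs -F vxfs /dev/$VG/rlvol1
run mkdir -p /${ASE_SERVER}
run mount /dev/$VG/lvol1 /${ASE_SERVER}
```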
After the volume group, logical volume, and file system have been created on one node, the volume group must be imported on the other nodes that will run this database. Create the directory /SYBASE0 on all nodes so that /dev/vg0_SYBASE0/lvol1 can be mounted on a node whenever the package is to run there.
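The export/import step can be sketched as follows; the map file path and the remote node name `othernode` are assumptions, and `vgexport -p` is used so the VG is previewed rather than removed from the creating node:

```shell
# Sketch: make the VG importable on the other cluster nodes (HP-UX).
VG=vg0_SYBASE0

run() { echo "$@"; }   # dry run: prints each command; remove 'echo' to execute

# On the node where the VG was created:
run vgchange -a n $VG
run vgexport -p -s -m /tmp/$VG.map $VG
run rcp /tmp/$VG.map othernode:/tmp/$VG.map   # 'othernode' is a placeholder

# On each other node that may run the package:
run mkdir /dev/$VG
run mknod /dev/$VG/group c 64 0x010000   # minor number unique on that node
run vgimport -s -m /tmp/$VG.map $VG
run mkdir -p /SYBASE0                    # mount point must exist on every node
```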
For more information on creating, importing, or managing the VG and file system, refer to the chapter entitled Building an HA Cluster in the Managing ServiceGuard manual available at
4. The ASE system files, data tablespaces, and data files must be located in the file system /SYBASE0 during the initial creation of the database. See the Sybase ASE documentation for information on setting up these files.
Multiple Instance Configuration
If multiple instances will be run in the same cluster, repeat the preceding steps for each instance. For example, if a second instance (SYBASE_TEST1) is to be included in the configuration, create a second volume group (for example, /dev/vg0_SYBASE_TEST1), logical volume, and file system with mount point (/SYBASE_TEST1) for the second instance. All configuration information for SYBASE_TEST1 will reside in /SYBASE_TEST1/dbs. As with SYBASE0, create symbolic links for all subdirectories (other than /SYBASE_TEST1/dbs/) so that /SYBASE_TEST1 can serve as $SYBASE for that instance.
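The symbolic-link layout for the second instance might be sketched as below. The Sybase installation directory (/opt/sybase) and the subdirectory names in the loop are assumptions for a particular ASE release; link every subdirectory of the install except dbs/, which holds the per-instance configuration:

```shell
# Sketch: per-instance $SYBASE layout for the second instance SYBASE_TEST1.
# /opt/sybase and the subdirectory names are assumptions for illustration.
SYBASE_INSTALL=/opt/sybase
INSTANCE_FS=/SYBASE_TEST1

run() { echo "$@"; }   # dry run: prints each command; remove 'echo' to execute

run mkdir -p $INSTANCE_FS/dbs            # per-instance config lives here
for d in ASE-12_5 OCS-12_5 bin locales; do
  run ln -s $SYBASE_INSTALL/$d $INSTANCE_FS/$d
done
```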
This configuration makes it possible to run several Sybase ASE instances on one node, facilitating failover/failback of Sybase ASE packages between nodes in the cluster.
Set up additional database logical volumes.
It is possible for the database to reside on the same VG/LVOL as $SYBASE/dbs, but more commonly the database will reside on several volume groups and logical volumes, all of which must be shared among the nodes that are able to run the Sybase ASE instance. Again, a naming convention that includes the instance name (${ASE_SERVER}) can be used to associate each VG with a unique instance.
For example:
Use with asynchronous disk access and file systems:
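Additional data volume groups for an instance might be created along these lines; the device names, sizes, minor numbers, and mount points are all assumptions, and the commands are again shown as a dry run:

```shell
# Sketch: extra data VGs named after the instance, each with a vxfs file
# system. Devices, sizes, minor numbers, and mount points are assumptions.
ASE_SERVER=SYBASE0

run() { echo "$@"; }   # dry run: prints each command; remove 'echo' to execute

for i in 1 2; do
  VG=vg${i}_${ASE_SERVER}
  run mkdir /dev/$VG
  run mknod /dev/$VG/group c 64 0x0${i}0000   # unique minor per VG per node
  run pvcreate /dev/rdsk/c0t${i}d0
  run vgcreate /dev/$VG /dev/dsk/c0t${i}d0
  run lvcreate -L 4096 -n lvol1 /dev/$VG
  run newfs -F vxfs /dev/$VG/rlvol1
done
```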