4.2.3. One application per zone
[ug] Another paradigm is to install exactly one application per zone. A zone has very little overhead;
essentially, only the application processes are separated from the rest of the system by being tagged
with the zone ID, which is how the zone technology implements the separation.
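This separation is directly visible on a running system. A quick check (zone names and output here
are illustrative examples):

    # Every process carries the name of the zone it belongs to
    global# ps -eo zone,pid,args | grep app1zone
    app1zone   4711 /usr/lib/ssh/sshd
    app1zone   4720 /opt/app1/bin/app1d

    # Zone IDs are listed by zoneadm
    global# zoneadm list -v
      ID NAME       STATUS   PATH             BRAND  IP
       0 global     running  /                native shared
       3 app1zone   running  /zones/app1zone  native shared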
The benefits gained by this decision are considerable:
• The administrator of the application can receive the root password for the local zone without
being in a position to jeopardize the operation of the entire computer in case of error.
• If many applications are consolidated, the number of users with root privileges is reduced.
• Automated installation of the application becomes much easier, since it is certain that no
other application has modified the system.
• Dependencies and malfunctions between applications in the file system and/or configuration
files are ruled out completely, which results in safer operation.
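A minimal sketch of setting up such a dedicated zone for one application (the zone name app1zone,
the zonepath and the network data are illustrative assumptions, not prescriptions of this guide):

    global# zonecfg -z app1zone
    zonecfg:app1zone> create
    zonecfg:app1zone> set zonepath=/zones/app1zone
    zonecfg:app1zone> add net
    zonecfg:app1zone:net> set physical=bge0
    zonecfg:app1zone:net> set address=192.168.1.21/24
    zonecfg:app1zone:net> end
    zonecfg:app1zone> commit
    zonecfg:app1zone> exit
    global# zoneadm -z app1zone install
    global# zoneadm -z app1zone boot

The application administrator then receives the root password of app1zone only; the global zone
remains out of his or her reach.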
4.2.4. Clustered containers
[tf/du] Clustered containers can be considered and configured as virtual nodes or as resources. By
means of containers as virtual nodes, virtual clusters, so-called Solaris container clusters, can be
implemented since Sun Cluster 3.2 1/09. More information is provided in the following chapter.
In a cluster, operation as a black box container or in fully formed resource topologies is p ossible. In a
black box container, the applications run are configured in the container only. The cluster does not
know them; they are controlled within the container by Solaris only. This leads to particularly simple
cluster configurations.
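How a black box container is placed under cluster control depends on the agent used; with the HA
container agent, registration roughly follows the pattern below. This is a hedged sketch: the resource,
group and zone names are assumptions, the parameter file reflects the sczbt (zone boot) component
as documented with the agent, and the usually required storage resources are omitted for brevity:

    # Resource group that will hold the zone boot resource
    global# clresourcegroup create zone-rg
    global# clresourcetype register SUNW.gds

    # Excerpt from the sczbt parameter file (e.g. sczbt_config)
    RS=zone-rs
    RG=zone-rg
    PARAMETERDIR=/zones/pfiles
    SC_NETWORK=false
    FAILOVER=true
    Zonename=app1zone
    Zonebrand=native
    Milestone=multi-user-server

    # Register the resource and bring the group online
    global# /opt/SUNWsczone/sczbt/util/sczbt_register -f ./sczbt_config
    global# clresourcegroup online -M zone-rg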
For a fully formed resource topology, each application is run under cluster control. Each application is
thus integrated into the cluster as a resource and is started and monitored by it in the correct
sequence. For fully formed resource topologies in containers, the HA container agent, or containers
as "virtual nodes", have been available since Sun Cluster 3.2. For a container as a virtual node,
applications bundled into resource groups are moved among containers. Several resource groups can
run in one container and switch over independently of one another.
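Containers as virtual nodes are configured with the clzonecluster command introduced with Sun
Cluster 3.2 1/09; a minimal sketch (cluster, host and address data are illustrative assumptions, and
the system identification details are omitted):

    global# clzonecluster configure zc1
    clzc:zc1> create
    clzc:zc1> set zonepath=/zones/zc1
    clzc:zc1> add node
    clzc:zc1:node> set physical-host=node1
    clzc:zc1:node> set hostname=zc1-node1
    clzc:zc1:node> add net
    clzc:zc1:node:net> set address=192.168.1.31
    clzc:zc1:node:net> set physical=bge0
    clzc:zc1:node:net> end
    clzc:zc1:node> end
    clzc:zc1> commit
    clzc:zc1> exit
    global# clzonecluster install zc1
    global# clzonecluster boot zc1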
Both concepts, HA containers and containers as virtual nodes, have their own strengths. Helpful
criteria for selecting the right concept for the respective use case can be found in the "Sun Cluster
Concepts Guide for Solaris OS": http://docs.sun.com/app/docs/doc/819-2969/gcbkf?a=view
In a cluster topology, applications in a container can be brought under cluster control. If containers are
run as virtual nodes, standard agents or self-written agents are used for application control. If
containers are run under the control of the HA container agent, selected standard agents, or the shell
script or SMF component of the Sun Cluster Container Agent, can be used. When using the HA
container agent, hybrids between black box and fully formed resource topologies are permitted at any
time. For virtual node containers, a fully formed resource topology is required. Clusters can be run in
an active-passive or an active-active configuration:
• Active-passive means that one node serves the configured containers or applications and the
second node waits for the first one to fail.
• Active-active means that each node serves containers or applications. If the capacity of each
node is insufficient for all containers or applications, reduced performance must be accepted
if a node fails (a configuration sketch follows below).
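An active-active layout is typically expressed through the node lists of the resource groups; a sketch
using two groups with reversed node preferences (group and node names are illustrative
assumptions):

    # rg1 prefers node1, rg2 prefers node2; each fails over to the other node
    node1# clresourcegroup create -n node1,node2 rg1
    node1# clresourcegroup create -n node2,node1 rg2
    node1# clresourcegroup online -M rg1 rg2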
If the requirement is to achieve high availability in a data center, or high availability between
two data centers, Sun Cluster is sufficient for scheduled and emergency relocation of containers or
applications among computers. The maximum distance between two cluster nodes used to be limited
to 400 km for Sun Cluster and required the use of certified DWDMs (Dense Wave Division
Multiplexers). Nowadays, evidence of a maximum latency of 15 ms (one-way) and a maximum bit
error rate (BER) of 10^-10 is considered sufficient.
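Whether a link stays within the latency limit can be estimated with a simple round-trip measurement;
15 ms one-way corresponds to roughly 30 ms round-trip (host name and output are illustrative):

    # 100 probes of 64 bytes to the remote cluster node
    node1# ping -s node2 64 100
    ...
    round-trip (ms)  min/avg/max/stddev = 14.2/14.8/16.1/0.4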
Meeting these requirements unfortunately does not yet mean that the performance requirements of
the application are also met. That is to say, if the latency of the mirroring or replication technology
involved does not meet the performance requirements, Sun Cluster Geographic Edition can be used
instead.
Sun Cluster Geographic Edition assumes clusters running in several data centers. The storage used
is replicated by means of suitable techniques from data center A to data center B. Currently, Sun
StorEdge Availability Suite (AVS) as host-based replication, Hitachi TrueCopy and EMC SRDF as
controller-based replication and, with Sun Cluster 3.2 1/09, Oracle Data Guard as application-
based replication are integrated. Sun Cluster Geographic Edition allows you to move the
containers/services from data center A to data center B. If a data center fails, Sun Cluster
Geographic Edition suggests a takeover by the remaining data center. The takeover is performed
following confirmation by an administrator.
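At the command level, moving services between the data centers happens at the protection group
level; a hedged sketch (protection group and cluster names are illustrative assumptions):

    # Planned migration of protection group app-pg to the partner cluster
    clusterA# geopg switchover -m clusterB app-pg

    # After a data center failure, take the services over on the surviving cluster
    clusterB# geopg takeover app-pg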