2.1.5. Zones and resource management
[ug] In Solaris 9, resource management was introduced on the basis of projects, tasks and resource
pools. In Solaris 10, resource management can be applied to zones as well. The following resources
can be managed:
CPU resources (processor sets, CPU capping and fair share scheduler)
Memory use (real memory, virtual memory, shared segments)
Monitoring network traffic (IPQoS = IP Quality of Service)
Zone-specific settings for shared memory, semaphores and swap (System V IPC Resource Controls)
2.1.5.1. CPU resources
[ug] Three stages of resource management can be used for zones:
Partitioning of CPUs into processor sets that can be assigned to resource pools.
Resource pools are then assigned to local zones, thus defining the usable CPU quantity.
Using the fair share scheduler (FSS) in a resource pool that is used by one or more local zones.
This allows fine-grained allocation of CPU resources to zones in a defined ratio as soon as
zones compete for CPU time, which is the case when system capacity is at 100%. Thus, the FSS
ensures the response time for zones, if configured accordingly.
Using the FSS in a local zone. This allows fine-grained allocation of CPU resources to projects
(groups of processes) in a defined ratio if projects compete for CPU. This takes place when the
CPU time available to this zone is fully utilized. Thus, the FSS ensures process
response times.
Processor sets in a resource pool
Just like a project, a local zone can have a resource pool assigned to it in which all zone processes
run (zonecfg: set pool=). CPUs can be assigned to a resource pool; zone processes
will then run only on the CPUs assigned to that resource pool. Several zones (or even projects) can
also be assigned to the same resource pool, in which case they share its CPU resources.
The most frequent case is to create a separate resource pool with its own CPUs for each zone. To simplify
matters, the number of CPUs for a zone can then be specified in the zone configuration
(zonecfg: add dedicated-cpu). When the zone is booted, a temporary resource pool
containing the configured number of CPUs is generated automatically. When the zone is shut
down, the resource pool and its CPUs are released again (since Solaris 10 8/07).
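The following sketch illustrates both variants; the zone name zone1 and the pool/processor-set names pool1/pset1 are merely assumed examples:

    # Variant 1: permanent resource pool with its own processor set
    global# pooladm -e        # enable the resource pools facility
    global# pooladm -s        # save the active configuration to /etc/pooladm.conf
    global# poolcfg -c 'create pset pset1 (uint pset.min=2; uint pset.max=2)'
    global# poolcfg -c 'create pool pool1'
    global# poolcfg -c 'associate pool pool1 (pset pset1)'
    global# pooladm -c        # activate the edited configuration
    global# zonecfg -z zone1 "set pool=pool1"

    # Variant 2: temporary pool via the zone configuration (Solaris 10 8/07)
    global# zonecfg -z zone1
    zonecfg:zone1> add dedicated-cpu
    zonecfg:zone1:dedicated-cpu> set ncpus=2
    zonecfg:zone1:dedicated-cpu> end
    zonecfg:zone1> exit

Note that a zone cannot use both variants at once; the pool property and dedicated-cpu are mutually exclusive in the zone configuration.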
Fair share scheduler in a resource pool
In the event that several zones run together in a resource pool, the fair share scheduler (FSS) allows
the allocation of CPU resources within that resource pool to be managed. To this end, each zone or
each project can have a share assigned to it. The settings for zones and projects in a resource pool
are used to manage the CPU resources in the event that the local zones or projects compete for
CPU time:
If the workload of the processor set is less than 100%, no management is done since free CPU
capacity is still available.
If the workload is at 100%, the fair share scheduler is activated and modifies the priority of the
participating processes such that the assigned CPU capacity of a zone or a project corresponds
to the defined share.
The share entitlement of an active zone/project is calculated as its share value divided by the
sum of the shares of all active zones/projects. For example, if zone A (two shares) and zone B
(one share) are both active, zone A is entitled to two thirds of the CPU capacity.
The allocation can be changed dynamically while running.
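As a sketch, assuming the pool pool1 from above is shared by two zones zone1 and zone2, and using example share values:

    # Let the pool schedule its processor set with the FSS
    global# poolcfg -c 'modify pool pool1 (string pool.scheduler="FSS")'
    global# pooladm -c

    # Two shares for zone1, one for zone2: a 2:1 ratio under full load
    # (Solaris 10 8/07 syntax; older releases set the zone.cpu-shares rctl)
    global# zonecfg -z zone1 "set cpu-shares=2"
    global# zonecfg -z zone2 "set cpu-shares=1"

    # Change the ratio dynamically on the running zone
    global# prctl -n zone.cpu-shares -v 4 -r -i zone zone1

The zonecfg settings take effect at the next zone boot; prctl changes the value of the running zone immediately.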
CPU resource management within a zone
In a local zone it is furthermore possible to define projects and resource pools and to allocate CPU
resources to the projects running in the zone via the FSS (see the previous paragraph).
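A minimal sketch from within a local zone; the project name webproj, its share value and the started command are assumptions:

    # Create a project that holds three FSS shares
    zone1# projadd -K "project.cpu-shares=(privileged,3,none)" webproj

    # Start a workload under this project (the application path is hypothetical)
    zone1# newtask -p webproj /opt/app/bin/start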
CPU capping
The maximum CPU usage of zones can be set (cpu-caps). This setting is an absolute limit with
regard to the CPU capacity used and can be adjusted to 1/100 of a CPU (starting with Solaris 10
5/08). With this configuration option, the allocation can be adjusted much more finely than with
processor sets (1/100 CPU instead of 1 CPU).
Furthermore, CPU capping offers an additional control option when several zones run within one resource pool
(with or without FSS), for example in order to limit all users to the capacity that will actually be available to them later on.
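A minimal sketch, again for the assumed zone zone1, capping it at 1.5 CPUs:

    # Persistent cap in the zone configuration (Solaris 10 5/08 or later)
    global# zonecfg -z zone1
    zonecfg:zone1> add capped-cpu
    zonecfg:zone1:capped-cpu> set ncpus=1.5
    zonecfg:zone1:capped-cpu> end
    zonecfg:zone1> exit

    # Adjust the cap on the running zone; the value is a percentage of one CPU
    global# prctl -n zone.cpu-cap -v 150 -r -i zone zone1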