Chapter 3 System Preparation
There are two loops (a and b) per adapter and two ports per loop (a1, a2, b1, b2).
The physical order of the disks is shown from the perspective of each port.
A disk is accessed through its closest port (e.g., either a1 or a2, b1 or b2).
When planning to configure striped SSA disks in HPSS, it is important to select the disks
for each striped virtual volume so that they span ports, loops, and/or adapters. First
determine where the probable bottlenecks lie, then select the individual disks for each
virtual volume to relieve contention at the port, loop, or adapter, as desired.
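For example (using purely hypothetical device names and topology), a 4-way striped
virtual volume might be built from hdisk4 (adapter ssa0, loop a), hdisk8 (adapter ssa0,
loop b), hdisk12 (adapter ssa1, loop a), and hdisk16 (adapter ssa1, loop b), so that no
two stripe members share a loop or an adapter.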
For SSA disks on an AIX SP node, use maymap to identify which loop each SSA disk is on.
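The exact invocation varies with the version of the tool; running it with no arguments
typically prints a map of each SSA loop. Consult its usage message for the options
available in your version:
% maymap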
Create volume groups for all disks to be used by HPSS.
Create all necessary raw disk logical volumes to be used by the HPSS Disk Mover(s).
To create a volume group for a physical disk, use SMIT or the following:
% mkvg -f -y<volumeGroup> -s<partitionSize> <physicalDisk>
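For example, to create a volume group named hpssvg1 (a hypothetical name) on hdisk4 with
a 64 MB physical partition size:
% mkvg -f -y hpssvg1 -s 64 hdisk4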
To create a logical volume, use SMIT or the following:
% mklv -y<logicalVolume> -traw <volumeGroup> <numPartitions>
Note that there are additional options for specifying exactly where on the physical disk
the logical volume should be placed, if that is considered important.
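Continuing the hypothetical example above, the following creates a raw logical volume of
128 partitions in hpssvg1; the corresponding character (raw) device is then available as
/dev/rhpssrawlv1:
% mklv -y hpssrawlv1 -traw hpssvg1 128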
To view all physical disks and associated volume groups:
% lspv
To view all logical volumes on a given volume group:
% lsvg -l <volumeGroup>
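To verify the hypothetical configuration created above:
% lspv hdisk4       # shows the volume group this disk belongs to
% lsvg -l hpssvg1   # lists the logical volumes in the volume group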
On each Disk Mover node, measure the raw read and write I/O performance of all HPSS
disks and verify that they are at expected levels. Create one or more tables documenting
the results. An example table can be found above. The output of these tests should be
stored in /var/hpss/stats for later analysis.
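If the deployment tools are not yet installed, a rough raw-read figure can be obtained
with standard AIX commands. This is a minimal sketch, using the hypothetical logical
volume created above, that times a sequential 1 GB read from the raw device; divide the
bytes read by the elapsed time to obtain MB/s, and save the output under /var/hpss/stats:
% time dd if=/dev/rhpssrawlv1 of=/dev/null bs=1m count=1024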
Use the iocheck.ksh script from the deployment tools package to show the performance of
one or more individual disk devices, as well as the peak aggregate performance of
concurrent I/O across multiple disks (e.g., to show the peak performance of adapters).
To measure the individual and aggregate throughput of hdisks 4, 5, 6, and 7:
% iocheck.ksh 4 5 6 7
To measure read performance on a single disk:
% iocheck -r -t 20 -b 1mb /dev/r<logicalVolume>
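To retain the results for later analysis as recommended above (assuming the script
writes its report to standard output), the output can be captured with tee, e.g.:
% iocheck.ksh 4 5 6 7 | tee /var/hpss/stats/iocheck.out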