
Tuning HADB
Performance
For best performance, all HADB processes (clu_xxx_srv) must fit in physical memory; they should not be paged or swapped out. The same applies to the shared memory segments in use.
You can configure the size of some of the shared memory segments. If these segments are too small, performance suffers and user transactions are delayed or even aborted. If the segments are too large, physical memory is wasted.
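On Solaris or Linux, you can spot-check whether the HADB processes and their shared memory segments fit in physical memory with standard operating system tools. The following commands are a minimal sketch; the clu_*_srv pattern comes from the process name above, and the exact ps and ipcs columns vary by platform:

# Resident (RSS) and virtual (VSZ) size, in KB, of the HADB server processes
ps -eo pid,rss,vsz,args | grep 'clu_.*_srv' | grep -v grep

# List shared memory segments (on Solaris, add -a to include segment sizes)
ipcs -m

# Watch the paging columns (si/so on Linux, sr on Solaris) for sustained activity
vmstat 5

Sustained paging activity while HADB is under load indicates that the processes and segments do not fit in the available physical memory.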
You can configure the following parameters:
■ “DataBufferPoolSize” on page 110
■ “LogBufferSize” on page 111
■ “InternalLogbufferSize” on page 112
■ “NumberOfLocks” on page 113
■ “Timeouts” on page 115
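You can inspect the current values of these and other configuration attributes with the hadbm command-line utility before changing anything. The following is a minimal sketch; it assumes hadbm is on your PATH and the management agents are reachable, and it omits the database name, so the default database is used (append your database name otherwise):

# List all configuration attributes and their current values
hadbm get --all

# Show a single attribute
hadbm get DataBufferPoolSize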
DataBufferPoolSize
The HADB stores data on data devices, which are allocated on disks. The data must be in main memory before it can be processed. The HADB node allocates a portion of shared memory for this purpose. If the allocated database buffer is small compared to the data being processed, disk I/O wastes significant processing capacity. In a system with
The database buffer is similar to a cache in a file system. For good performance, the cache must be used as much as possible so that the node does not have to wait for disk read operations. Performance is best when the entire contents of the database fit in the database buffer. In most cases this is not feasible, so aim to keep the “working set” of the client applications in the buffer.
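If the working set does not fit, you can enlarge the pool with hadbm set. The following is a minimal sketch: the value is in MB, 512 is only an example, and the database name is omitted so the default database is assumed. Changing this attribute typically causes the nodes to be restarted one at a time, so check the behavior documented for your release before running it on a production system:

# Increase the data buffer pool to 512 MB (example value)
hadbm set DataBufferPoolSize=512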
Also monitor disk I/O: if HADB performs many disk read operations, the database is running low on buffer space. The database buffer is partitioned into 16 KB blocks, the same block size used on disk. HADB schedules multiple blocks for reading and writing in a single I/O operation.
Use the hadbm deviceinfo command to monitor disk use. For example:

hadbm deviceinfo

NodeNo   TotalSize   FreeSize   Usage
0        512         504        1%
1        512         504        1%
The columns in the output are:
■ TotalSize: size of device in MB.