StorNext File System Tuning

The Metadata Controller System

StripeBreadth

This setting must match the RAID stripe size or be a multiple of it. Matching the RAID stripe size is usually the optimal setting. However, depending on the RAID performance characteristics and the application I/O size, it might be beneficial to use a multiple of the RAID stripe size. For example, if the RAID stripe size is 256K, the stripe group contains 4 LUNs, and the application to be optimized uses DMA I/O with an 8MB block size, a StripeBreadth setting of 2MB might be optimal. In this example the 8MB application I/O is issued as 4 concurrent 2MB I/Os to the RAID. This concurrency can provide up to a 4X performance increase. Determining the RAID characteristics typically requires some experimentation, and the lmdd utility can be very helpful. Note that this setting is not adjustable after initial file system creation.
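As a rough sketch of such an experiment, the following hypothetical lmdd invocation writes internally generated data to a test file in 2MB transfers and reports the achieved throughput. The target path and transfer count are placeholders, and the available options depend on the installed lmdd (lmbench) version.

# Write 4096 transfers of 2MB each (8GB total) and report throughput
lmdd if=internal of=/stornext/snfs1/audio/lmdd.test bs=2m count=4096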

Example:

[stripeGroup AudioFiles]
Status UP
Exclusive Yes            ##These two lines set Exclusive stripeGroup##
Affinity AudioFiles      ##for Audio Files Only##
Read Enabled
Write Enabled
StripeBreadth 1M
MultiPathMethod Rotate
Node CvfsDisk4 0
Node CvfsDisk5 1

BufferCacheSize

This setting consumes up to twice the specified number of bytes of memory (for example, a value of 64M can consume up to 128MB of RAM). Increasing this value can reduce the latency of any metadata operation by allowing hot-cache access to directory blocks, inode information, and other metadata, which is roughly 10 to 1000 times faster than performing the equivalent I/O. It is especially important to increase this setting if metadata I/O latency is high (for example, more than 2ms average latency). We recommend sizing this according to how much memory is available; more is better.

Example:

BufferCacheSize 64M        # default 32MB
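Because the recommendation is to size this setting according to the memory available on the metadata controller, it can be worth checking free memory before raising it. A minimal check on a Linux MDC (not specific to StorNext) might be:

# Report total, used, and free memory in megabytes
free -m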
