
StorNext File System Tuning

The Underlying Storage System

operations involve a very high rate of small writes to the metadata disk, so disk latency is the critical performance factor. Write-back caching can be an effective approach to minimizing I/O latency and optimizing metadata operations throughput. This is easily observed in the hourly File System Manager (FSM) statistics reports in the cvlog file. For example, here is a message line from the cvlog file:

PIO HiPriWr SUMMARY SnmsMetaDisk0 sysavg/350 sysmin/333 sysmax/367

This statistics message reports average, minimum, and maximum write latency (in microseconds) for the reporting period. If the observed average latency exceeds 500 microseconds, peak metadata operation throughput will be degraded. For example, create operations may be around 2000 per second when metadata disk latency is below 500 microseconds. However, if metadata disk latency is around 5 milliseconds, create operations per second may be degraded to 200 or worse.
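
This guideline is easy to check automatically. The following is a minimal sketch, not a StorNext utility: it assumes Python 3, that the cvlog field layout matches the example above exactly, and that the 500-microsecond threshold and the rough 1,000,000/latency serial-operations ceiling quoted in this section apply.

import re

# Sketch: parse a cvlog "PIO HiPriWr SUMMARY" line and flag high write latency.
# The regular expression assumes the field layout of the example shown above.
SUMMARY_RE = re.compile(
    r"PIO HiPriWr SUMMARY\s+(?P<disk>\S+)\s+"
    r"sysavg/(?P<avg>\d+)\s+sysmin/(?P<min>\d+)\s+sysmax/(?P<max>\d+)"
)

def check_write_latency(line, threshold_us=500):
    m = SUMMARY_RE.search(line)
    if m is None:
        return None
    avg_us = int(m.group("avg"))
    # A purely serial stream of small writes is limited to roughly
    # 1,000,000 / latency operations per second (2000/s at 500 us, 200/s at 5 ms).
    ceiling = 1_000_000 // avg_us
    status = "DEGRADED" if avg_us > threshold_us else "ok"
    print(f"{m.group('disk')}: avg {avg_us} us, min {m.group('min')} us, "
          f"max {m.group('max')} us -> {status} (~{ceiling} serial ops/sec)")
    return avg_us

check_write_latency(
    "PIO HiPriWr SUMMARY SnmsMetaDisk0 sysavg/350 sysmin/333 sysmax/367"
)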

 

Another typical write caching approach is a “write-through.” This approach involves synchronous writes to the physical disk before returning a successful reply for the I/O operation. The write-through approach exhibits much worse latency than write-back caching; therefore, small I/O performance (such as metadata operations) is severely impacted. It is important to determine which write caching approach is employed, because the performance observed will differ greatly for small write I/O operations.
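
One practical way to determine which approach is in effect is to time small synchronous writes to the metadata LUN. The sketch below is a generic check rather than a StorNext tool; it assumes a Linux host, Python 3, and a scratch file on the metadata stripe group (/stornext/meta/latency_probe is a hypothetical placeholder). With O_SYNC, each write must be acknowledged by the storage before returning, so sub-millisecond averages point to write-back caching, while several milliseconds suggest write-through or disabled caching.

import os
import time

# Sketch: measure small synchronous write latency on the metadata LUN.
# PROBE_PATH is a hypothetical placeholder; point it at a scratch file on the
# metadata stripe group. O_SYNC makes each write wait for the storage to
# acknowledge it, so the average reflects the controller's caching behaviour.
PROBE_PATH = "/stornext/meta/latency_probe"

def probe_small_write_latency(path=PROBE_PATH, iterations=1000, size=4096):
    data = os.urandom(size)
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_SYNC, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            os.pwrite(fd, data, 0)   # rewrite the same 4 KB block synchronously
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
    avg_us = elapsed / iterations * 1_000_000
    print(f"average synchronous {size}-byte write latency: {avg_us:.0f} us")
    return avg_us

Interpreting this measurement alongside the FSM statistics above gives a fairly direct view of whether write-back caching is actually in effect.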

 

In some cases, large write I/O operations can also benefit from caching. However, some SNFS customers observe maximum large I/O throughput by disabling caching. While this may be beneficial for special large I/O scenarios, it severely degrades small I/O performance; therefore, it is suboptimal for general-purpose file system performance.
RAID Read-Ahead Caching

RAID read-ahead caching is a very effective way to improve sequential read performance for both small (buffered) and large (DMA) I/O operations. When this setting is utilized, the RAID controller pre-fetches disk blocks for sequential read operations. Therefore, subsequent application read operations benefit from cache speed throughput, which is faster than the physical disk throughput.

This is particularly important for concurrent file streams and mixed I/O streams, because read-ahead significantly reduces disk head movement that otherwise severely impacts performance.
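
The effect can be confirmed with a before-and-after measurement: read a large file sequentially with the controller's read-ahead setting disabled and then enabled, and compare throughput. The sketch below is a generic Python 3 check, not a StorNext utility; /stornext/data/testfile is a hypothetical placeholder, and the file should be larger than host RAM (or the host page cache dropped between runs) so that the host cache does not mask the RAID cache.

import time

# Sketch: measure sequential read throughput of a large file.
# TEST_FILE is a hypothetical placeholder path on the SNFS volume.
TEST_FILE = "/stornext/data/testfile"

def sequential_read_throughput(path=TEST_FILE, block_size=1 << 20):
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:       # unbuffered 1 MB reads
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    print(f"{total / 1e6:.0f} MB in {elapsed:.1f} s -> "
          f"{total / elapsed / 1e6:.0f} MB/s")
    return total / elapsed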
