
StorNext File System Tuning

File Size Mix and Application I/O Characteristics


It is always valuable to understand the file size mix of the target dataset as well as the application I/O characteristics. This includes the number of concurrent streams, proportion of read versus write streams, I/O size, sequential versus random, Network File System (NFS) or Common Internet File System (CIFS) access, and so on.

For example, if the dataset is dominated by small or large files, various settings can be optimized for the target size range.
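As a quick illustration, a one-off pass with standard Unix tools can show whether small or large files dominate a dataset (a sketch; the path and the 1MB threshold are illustrative, and -printf requires GNU find):

    find /stornext/snfs1/dataset -type f -printf '%s\n' | \
      awk '{ if ($1 < 1048576) small++; else large++ }
           END { printf "%d files < 1MB, %d files >= 1MB\n", small, large }'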

Similarly, it might be beneficial to optimize for particular application I/O characteristics. For example, to optimize for sequential 1MB I/Os, configure a stripe group with four 4+1 RAID5 LUNs and a 256K stripe size.
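The arithmetic behind this example is worth spelling out (a sketch, assuming the 256K figure refers to the per-drive segment size):

    4 data segments per LUN x 256K per segment = 1MB full RAID5 stripe per LUN

A well-formed 1MB write thus fills exactly one full RAID5 stripe, so the controller can compute parity from the new data alone and avoid the read/modify/write penalty discussed under Buffer Cache below.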

However, optimizing for random I/O performance can incur a performance trade-off with sequential I/O.

Furthermore, NFS and CIFS access have special requirements to consider as described in the Direct Memory Access (DMA) I/O Transfer section.

Direct Memory Access (DMA) I/O Transfer

To achieve the highest possible large sequential I/O transfer throughput, SNFS provides DMA-based I/O. To utilize DMA I/O, the application must issue its reads and writes of sufficient size and alignment. This is called well-formed I/O. See the mount command settings auto_dma_read_length and auto_dma_write_length, described in the Mount Command Options section.
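For example, on Linux these thresholds can be passed as mount options (a sketch; the 1MB values, file system name, and mount point are illustrative, and defaults vary by release):

    mount -t cvfs -o auto_dma_read_length=1048576,auto_dma_write_length=1048576 \
      snfs1 /stornext/snfs1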

 

Buffer Cache

Reads and writes that aren't well-formed utilize the SNFS buffer cache. This also includes NFS- and CIFS-based traffic, because the NFS and CIFS daemons defeat well-formed I/Os issued by the application.

 

There are several configuration parameters that affect buffer cache performance. The most critical is the RAID cache configuration, because buffered I/O is usually smaller than the RAID stripe size and therefore incurs a read/modify/write penalty. It might also be possible to match the RAID stripe size to the buffer cache I/O size. However, kernel memory fragmentation can defeat attempts to increase the SNFS buffer cache I/O size (see the cachebufsize setting described in the Mount Command Options section).
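For example, a larger buffer cache I/O size could be requested at mount time (a sketch; the 256K value, file system name, and mount point are illustrative, and the setting may not take full effect if kernel memory is fragmented):

    mount -t cvfs -o cachebufsize=262144 snfs1 /stornext/snfs1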
