
StorNext File System Tuning

File Size Mix and Application I/O Characteristics

The size thresholds at which well-formed reads and writes use DMA rather than the buffer cache are set by the auto_dma_read_length and auto_dma_write_length mount options, described in the Mount Command Options on page 19.

Buffer Cache

Reads and writes that aren't well-formed utilize the SNFS buffer cache. This also includes NFS or CIFS-based traffic because the NFS and CIFS daemons defeat well-formed I/Os issued by the application.

There are several configuration parameters that affect buffer cache performance. The most critical is the RAID cache configuration because buffered I/O is usually smaller than the RAID stripe size, and therefore incurs a read/modify/write penalty. It might also be possible to match the RAID stripe size to the buffer cache I/O size. However, it is typically most important to optimize the RAID cache configuration settings described earlier in this document.
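To illustrate the penalty with hypothetical numbers: a 64K buffered write that lands in a 512K RAID5 stripe cannot simply be written in place; the controller must first read the old data and parity (or the remainder of the stripe), recompute parity, and then write both back, turning one small front-end write into several back-end I/Os.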

It is usually best to configure the RAID stripe size no greater than 256K for optimal small file buffer cache performance.

For more buffer cache configuration settings, see Mount Command Options on page 19.
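As an illustration only, a Linux SNFS mount might set the DMA thresholds referenced above as follows; the file system name, mount point, and values shown here are placeholders, and the authoritative option list and syntax are given in the Mount Command Options section:

    mount -t cvfs snfs1 /stornext/snfs1 -o auto_dma_read_length=1048576,auto_dma_write_length=1048576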

NFS / CIFS

It is best to isolate NFS and/or CIFS traffic off of the metadata network to eliminate contention that will impact performance. For optimal performance it is necessary to use 1000BaseT instead of 100BaseT. On NFS clients, use the vers=3, rsize=262144, and wsize=262144 mount options, and use TCP mounts instead of UDP. When possible, it is also best to utilize TCP Offload capabilities as well as jumbo frames.
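For example, a Linux NFS client mount using the options recommended above might look like the following, where the server name, export path, and mount point are placeholders:

    mount -t nfs -o vers=3,tcp,rsize=262144,wsize=262144 nfsserver:/stornext/snfs1 /mnt/snfs1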

It is best practice to have clients directly attached to the same network switch as the NFS or CIFS server. Any routing required for NFS or CIFS traffic incurs additional latency that impacts performance.

It is critical to make sure the speed/duplex settings are correct, because a mismatch severely impacts performance. Most of the time, auto-detect is the correct setting. Some managed switches allow the speed/duplex to be set explicitly (for example, 1000Mb/full), which disables auto-detect and requires the host to be set exactly the same. If the settings do not match between switch and host, performance suffers severely. For example, if the switch is set to auto-detect but the host is forced to 1000Mb/full, you will observe a high error rate along with extremely poor performance. On Linux, the ethtool utility can be very useful for investigating and adjusting speed/duplex settings.
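For example, on Linux the following ethtool commands display and then force the speed/duplex settings; the interface name eth0 is a placeholder, and forcing settings is only appropriate when the switch port is configured to match:

    ethtool eth0
    ethtool -s eth0 speed 1000 duplex full autoneg off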
