StorNext File System Tuning
File Size Mix and Application I/O Characteristics
Buffer Cache
Whether an I/O is considered well-formed is determined by the auto_dma_read_length and auto_dma_write_length settings, described in the Mount Command Options on page 19. Reads and writes that aren't well-formed are buffered through the StorNext buffer cache.
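As an illustration, the thresholds can be supplied at mount time. This is a sketch only: the option names come from this guide, but the file system name, mount point, and the 1 MB threshold values are placeholders, not recommendations.

```shell
# Hypothetical example: treat reads and writes of 1 MB or larger as
# well-formed (eligible for direct I/O); smaller I/Os go through the
# StorNext buffer cache. See Mount Command Options on page 19 for the
# authoritative option syntax.
mount -t cvfs -o auto_dma_read_length=1048576,auto_dma_write_length=1048576 \
    snfs1 /stornext/snfs1
```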
There are several configuration parameters that affect buffer cache performance. The most critical is the RAID cache configuration because buffered I/O is usually smaller than the RAID stripe size, and therefore incurs a read/modify/write penalty. It might also be possible to match the RAID stripe size to the buffer cache I/O size. However, it is typically most important to optimize the RAID cache configuration settings described earlier in this document.
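The read/modify/write penalty follows from simple arithmetic, sketched below with illustrative sizes (512K stripe, 64K buffered write; these numbers are assumptions for the example, not tuning advice):

```shell
# A 64K buffered write into a 512K RAID stripe modifies only part of the
# stripe, so the controller must read the untouched portion of the stripe,
# merge in the new data, and write the full stripe back: a
# read/modify/write cycle.
stripe_kb=512
write_kb=64
if [ "$write_kb" -lt "$stripe_kb" ]; then
    echo "partial-stripe write: read/modify/write penalty"
else
    echo "full-stripe write: no penalty"
fi
```

This is why a smaller stripe size (see below) or a RAID cache that absorbs partial-stripe writes helps buffered I/O.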
It is usually best to configure the RAID stripe size no greater than 256K for optimal small file buffer cache performance.
For more buffer cache configuration settings, see Mount Command Options on page 19.
NFS / CIFS
It is best to keep NFS and CIFS traffic off of the metadata network, to eliminate contention that would otherwise impact performance. For optimal performance, use 1000BaseT instead of 100BaseT. On NFS clients, use the vers=3, rsize=262144, and wsize=262144 mount options, and use TCP mounts instead of UDP. When possible, it is also best to utilize TCP Offload capabilities and jumbo frames.
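For example, the client-side options above might be applied as follows (the server name, export path, and mount point are placeholders):

```shell
# Hypothetical NFS client mount using the options recommended above:
# NFSv3, 256K read/write transfer sizes, and TCP rather than UDP.
mount -t nfs -o vers=3,rsize=262144,wsize=262144,proto=tcp \
    nfsserver:/stornext/snfs1 /mnt/snfs1
```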
It is best practice to have clients directly attached to the same network switch as the NFS or CIFS server. Any routing required for NFS or CIFS traffic incurs additional latency that impacts performance.
It is critical to make sure the speed/duplex settings are correct, because a mismatch severely impacts performance. Most of the time these settings are negotiated automatically, but they should be verified on both the client and the switch.
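On Linux clients, one common way to verify the negotiated settings is with ethtool (the interface name is a placeholder; other platforms have equivalent tools):

```shell
# Check the negotiated speed and duplex on interface eth0.
# On a healthy gigabit link, expect "Speed: 1000Mb/s" and "Duplex: Full".
ethtool eth0 | grep -E 'Speed|Duplex'
```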