


StorNext File System Tuning

The Metadata Controller System

Differences in latency over time for the same system can indicate new hardware problems, such as a network interface going bad.

If a latency test has been run for a particular client, the cvadmin who long command includes the test results in its output, along with information about when the test was last run.
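For example, the test could be started and its results reviewed from within cvadmin. This is only a sketch; the file system name snfs1 is illustrative, and the exact argument syntax should be confirmed against the cvadmin man page:

    cvadmin
    select snfs1
    latency-test <index-number> <seconds>
    who long

Here index-number identifies the client being tested and seconds sets the test duration.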

Mount Command Options

The following SNFS mount command settings are explained in greater detail in the mount_cvfs man page.

The default size of the buffer cache varies by platform and main memory size, and ranges between 32MB and 256MB. By default, each buffer is 64K, so the cache contains between 512 and 4096 buffers. In general, increasing the size of the buffer cache will not improve performance for streaming reads and writes. However, a large cache helps greatly in cases of multiple concurrent streams, and where files are written and subsequently read. Buffer cache size is adjusted with the buffercachecap setting.
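For example, the cache could be enlarged at mount time. The file system name, mount point, and value below are illustrative only, and the expected units for buffercachecap should be confirmed in the mount_cvfs man page:

    mount -t cvfs -o buffercachecap=256 snfs1 /stornext/snfs1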

The buffer cache I/O size is adjusted using the cachebufsize setting. The default setting is usually optimal; however, sometimes performance can be improved by increasing this setting to match the RAID5 stripe size.

Using a large cachebufsize setting decreases random I/O performance when the amount of data being read is smaller than the cache buffer size.
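As an illustration, on an array with a 512K RAID5 stripe size the cache buffer size could be raised to match. The names and the value format (k suffix versus bytes) are assumptions; see the mount_cvfs man page for the exact syntax:

    mount -t cvfs -o cachebufsize=512k snfs1 /stornext/snfs1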

Buffer cache read-ahead can be adjusted with the buffercache_readahead setting. When the system detects that a file is being read in its entirety, several buffer cache I/O daemons pre-fetch data from the file in the background for improved performance. The default setting is optimal in most scenarios.
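These buffer cache options can also be combined in a single /etc/fstab entry. The entry below is only a sketch; the buffercache_readahead value is a placeholder, and the field layout should be verified against your platform's documentation:

    snfs1   /stornext/snfs1   cvfs   rw,buffercachecap=256,cachebufsize=512k,buffercache_readahead=<buffers>   0 0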

The auto_dma_read_length and auto_dma_write_length settings determine the minimum transfer size where direct DMA I/O is performed instead of using the buffer cache for well-formed I/O. These settings can be useful when performance degradation is observed for small DMA I/O sizes compared to buffer cache.

For example, if buffer cache I/O throughput is 200 MB/sec but 512K DMA I/O achieves only 100 MB/sec, it would be useful to determine which DMA I/O size matches the buffer cache performance and adjust auto_dma_read_length and auto_dma_write_length accordingly. The lmdd utility is handy here.
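A rough sketch of that measurement follows, using lmdd from the lmbench suite. The paths, transfer sizes, and the byte-value format passed to the auto_dma options are assumptions to be checked against the mount_cvfs man page:

    lmdd if=/dev/zero of=/stornext/snfs1/testfile bs=512k count=4096
    lmdd if=/dev/zero of=/stornext/snfs1/testfile bs=2m count=1024

    mount -t cvfs -o auto_dma_read_length=2097152,auto_dma_write_length=2097152 snfs1 /stornext/snfs1

If, say, 2MB transfers reach buffer cache throughput while 512K transfers do not, raising both thresholds to 2MB (2097152 bytes) sends the smaller transfers through the buffer cache and reserves DMA for the larger ones.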
