Quantum 6-01376-05 manual


StorNext File System Tuning

The Metadata Controller System

The latency-test command has the following syntax:

latency-test index-number [seconds]
latency-test all [seconds]

If an index-number is specified, the test is run between the currently-selected FSM and the specified client. (Client index numbers are displayed by the cvadmin who command.) If all is specified, the test is run against each client in turn.

The test is run for 2 seconds, unless a value for seconds is specified. Here is a sample run:

snadmin (lsi) > latency-test
Test started on client 1 (bigsky-node2)... latency 55us
Test started on client 2 (k4)... latency 163us
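The per-client latency values in output like the sample above can be extracted programmatically, for example to log them over time. A minimal sketch in Python (the regular expression and function name are illustrative, not part of StorNext):

```python
import re

def parse_latencies(output):
    """Extract (client_index, hostname, latency_us) tuples from
    latency-test output lines such as:
    'Test started on client 1 (bigsky-node2)... latency 55us'"""
    pattern = re.compile(
        r"Test started on client (\d+) \(([^)]+)\)\.\.\. latency (\d+)us"
    )
    return [(int(m.group(1)), m.group(2), int(m.group(3)))
            for m in pattern.finditer(output)]

sample = (
    "Test started on client 1 (bigsky-node2)... latency 55us\n"
    "Test started on client 2 (k4)... latency 163us\n"
)
print(parse_latencies(sample))  # prints [(1, 'bigsky-node2', 55), (2, 'k4', 163)]
```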

 

There is no rule of thumb for “good” or “bad” latency values. Latency can be affected by CPU load or SNFS load on either system, by unrelated Ethernet traffic, or by other factors. However, for otherwise idle systems, differences in latency between different systems can indicate differences in hardware performance. (In the example above, the difference is a Gigabit Ethernet and faster CPU versus a 100BaseT Ethernet and a slower CPU.) Differences in latency over time for the same system can indicate new hardware problems, such as a network interface going bad.

If a latency test has been run for a particular client, the cvadmin who long command includes the test results in its output, along with information about when the test was last run.
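Since latency drift over time on one system can signal failing hardware, one might record successive latency-test readings and flag outliers. A minimal sketch, where the function name and the 2x threshold are illustrative assumptions, not StorNext features:

```python
def latency_drift_alert(history_us, factor=2.0):
    """Return True if the latest latency reading exceeds the average
    of the earlier readings by more than `factor` times.
    `history_us` is a list of latency samples in microseconds,
    oldest first (e.g. collected from repeated latency-test runs)."""
    if len(history_us) < 2:
        return False
    baseline = sum(history_us[:-1]) / (len(history_us) - 1)
    return history_us[-1] > factor * baseline

# A client that was steady around 55us but now reports 163us:
print(latency_drift_alert([54, 56, 55, 163]))  # prints True
```

A real monitoring setup would also account for load on both systems, since the text above notes that CPU and SNFS load affect the measurement.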

 

MountCommandOptions

The following SNFS mount command settings are explained in greater detail in the mount_cvfs man page.

 

By default, the size of the buffer cache is 32MB and each buffer is 64K, so there is a total of 512 buffers. In general, increasing the size of the buffer cache will not improve performance for streaming reads and writes. However, a large cache helps greatly in cases of multiple concurrent streams, and where files are being written and subsequently read. Buffer cache size is adjusted with the buffercachemin and buffercachemax settings.
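The buffer count follows directly from the defaults; a quick arithmetic check using the values stated above:

```python
# Default buffer cache parameters from the text above.
cache_size = 32 * 1024 * 1024   # 32 MB buffer cache
buffer_size = 64 * 1024         # 64 K per buffer

num_buffers = cache_size // buffer_size
print(num_buffers)  # prints 512
```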

StorNext File System Tuning Guide

