
StorNext File System Tuning

The Metadata Controller System

Identify the read/modify/write condition.

If buffered VFS writes are causing device reads, it might be beneficial to match the I/O request size to a multiple of the "cachebufsize" mount option (default 64KB; see the mount_cvfs man page). Another way to avoid the read/modify/write cycle is to truncate the file before writing.
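For example, a minimal shell sketch of both approaches, assuming the default 64KB cachebufsize and a hypothetical file on a StorNext mount (the path and sizes are illustrative only):

# Truncate first so existing blocks need not be read back in.
: > /stornext/snfs1/output.dat
# Write in 64KB requests, a whole multiple of cachebufsize, so each
# write fills complete cache buffers and triggers no device reads.
dd if=/dev/zero of=/stornext/snfs1/output.dat bs=64k count=1024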

The cvadmin command includes a latency-test utility for measuring the latency between an FSM and one or more SNFS clients. This utility causes small messages to be exchanged between the FSM and clients as quickly as possible for a brief period of time, and reports the average time it took for each message to receive a response.

The latency-test command has the following syntax:

latency-test index-number [seconds]

latency-test all [seconds]

If an index-number is specified, the test is run between the currently-selected FSM and the specified client. (Client index numbers are displayed by the cvadmin who command.) If all is specified, the test is run against each client in turn.

The test is run for 2 seconds, unless a value for seconds is specified. Here is a sample run:

snadmin (lsi) > latency-test

Test started on client 1 (bigsky-node2)... latency 55us
Test started on client 2 (k4)... latency 163us

There is no rule of thumb for "good" or "bad" latency values. Latency can be affected by CPU load or SNFS load on either system, by unrelated Ethernet traffic, or by other factors. However, for otherwise idle systems, differences in latency between systems can indicate differences in hardware performance. (In the example above, the difference reflects a Gigabit Ethernet interface and a faster CPU versus a 100BaseT Ethernet interface and a slower CPU.) Differences in latency over time on the same system can indicate new hardware problems, such as a network interface going bad.

If a latency test has been run for a particular client, the cvadmin who long command includes the test results in its output, along with information about when the test was last run.
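As a hypothetical monitoring sketch, latency could be sampled periodically and logged so that changes over time stand out. This assumes cvadmin's -F (select a file system) and -e (execute a single command) options, and a file system named snfs1; the names and interval are examples only:

#!/bin/sh
# Append an hourly latency-test sample for every client to a log file.
while true; do
    date >> /var/log/snfs-latency.log
    cvadmin -F snfs1 -e "latency-test all" >> /var/log/snfs-latency.log
    sleep 3600
done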

