Quantum StorNext 3.1.4.1 Manual


StorNext File System Tuning

The Metadata Controller System

Identify disk performance issues.

If Device throughput is inconsistent or less than expected, it might indicate a slow disk in a stripe group, or that RAID tuning is necessary.
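One illustrative way to spot a slow disk in a stripe group is to compare each disk's measured throughput against the group's median and flag outliers. This is a sketch, not a StorNext utility; the helper name, disk names, and 80% threshold are our own assumptions.

```python
# Illustrative sketch (not a StorNext tool): flag a suspiciously slow
# disk in a stripe group by comparing each disk's measured throughput
# against the group median. Disk names and threshold are hypothetical.
from statistics import median

def slow_disks(throughput_mb_s, threshold=0.8):
    """Return disks whose throughput falls below threshold * group median."""
    med = median(throughput_mb_s.values())
    return sorted(d for d, t in throughput_mb_s.items() if t < threshold * med)

# Example: disk "sg1d3" lags well behind its peers.
print(slow_disks({"sg1d1": 95.0, "sg1d2": 92.0, "sg1d3": 41.0, "sg1d4": 94.0}))
# ['sg1d3']
```

A disk flagged this way would be a candidate for closer inspection or RAID tuning, as described above.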

Identify file fragmentation.

If the extent count “exts” is high, it might indicate a fragmentation problem. This causes device I/Os to be broken into smaller chunks, which can significantly impact throughput.
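To illustrate why a high extent count hurts, here is a small sketch (not a StorNext API; the function name is ours): a single logical read must be issued as one device I/O per extent it touches, so more extents mean more, smaller I/Os.

```python
# Illustrative sketch (not a StorNext API): a logical read that crosses
# extent boundaries is split into one device I/O per extent touched, so
# a high extent count ("exts") forces more, smaller device I/Os.
def device_io_count(offset, length, extent_starts):
    """extent_starts: sorted file offsets at which each extent begins."""
    end = offset + length
    interior = [b for b in extent_starts if offset < b < end]
    return 1 + len(interior)

# A 1 MB read of a contiguous file is one device I/O; the same read of a
# file fragmented into 64 KB extents becomes 16 device I/Os.
one_mb = 1024 * 1024
print(device_io_count(0, one_mb, [0]))                                # 1
print(device_io_count(0, one_mb, list(range(0, one_mb, 64 * 1024))))  # 16
```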

Identify read/modify/write condition.

If buffered VFS writes are causing device reads, it might be beneficial to match the I/O request size to a multiple of the “cachebufsize” (default 64 KB; see the mount_cvfs man page). Another way to avoid this is to truncate the file before writing.
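A minimal sketch of that sizing rule, assuming the default 64 KB cache buffer size (the helper name is ours, not part of StorNext): round the application's preferred write size up to the next multiple of “cachebufsize” so buffered writes fill whole cache buffers and avoid the read/modify/write penalty.

```python
# Sketch: round an application's preferred I/O size up to a multiple of
# the SNFS client cache buffer size ("cachebufsize"). 64 KB is the
# documented default; confirm the actual value in the mount_cvfs man page.
CACHEBUFSIZE = 64 * 1024

def aligned_io_size(preferred_size, bufsize=CACHEBUFSIZE):
    """Smallest multiple of bufsize that is >= preferred_size."""
    if preferred_size <= 0:
        return bufsize
    return ((preferred_size + bufsize - 1) // bufsize) * bufsize

print(aligned_io_size(100_000))  # 131072 (2 x 64 KB)
print(aligned_io_size(65_536))   # 65536 (already aligned)
```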

The cvadmin command includes a latency-test utility for measuring the latency between an FSM and one or more SNFS clients. This utility causes small messages to be exchanged between the FSM and clients as quickly as possible for a brief period of time, and reports the average time it took for each message to receive a response.

The latency-test command has the following syntax:

latency-test index-number [seconds]

latency-test all [seconds]

If an index-number is specified, the test is run between the currently selected FSM and the specified client. (Client index numbers are displayed by the cvadmin who command.) If all is specified, the test is run against each client in turn.

The test is run for 2 seconds, unless a value for seconds is specified. Here is a sample run:

snadmin (lsi) > latency-test
Test started on client 1 (bigsky-node2)... latency 55us
Test started on client 2 (k4)... latency 163us

There is no rule of thumb for “good” or “bad” latency values. Latency can be affected by CPU load or SNFS load on either system, by unrelated Ethernet traffic, or by other factors. However, for otherwise idle systems, differences in latency between different systems can indicate differences in hardware performance. (In the example above, the difference is a Gigabit Ethernet and faster CPU versus a 100BaseT Ethernet and a slower CPU.)


StorNext File System Tuning Guide
