Quantum 3.1.4.1 manual FSM Configuration File Settings, Stripe Groups


StorNext File System Tuning

The Metadata Controller System

FSM Configuration File Settings

Several FSM configuration file settings can be used to realize performance gains from increased memory: BufferCacheSize, InodeCacheSize, and ThreadPoolSize.

However, it is critical that the MDC system have enough physical memory available to ensure that the FSM process doesn’t get swapped out. Otherwise, severe performance degradation and system instability can result.
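As an illustrative sketch only, these three memory-related settings might appear in an FSM configuration file as shown below. The values are hypothetical, not recommendations; consult the cvfs_config man page for valid ranges and defaults.

BufferCacheSize 64M
InodeCacheSize 32K
ThreadPoolSize 32

Larger values for each setting consume more physical memory on the MDC, so any increase should be weighed against the swapping caveat described above.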

The operating system on the metadata controller must always be run in U.S. English.

The following FSM configuration file settings are explained in greater detail in the cvfs_config man page. For a sample FSM configuration file, see Sample FSM Configuration File on page 27.

The examples in the following sections are excerpted from that sample configuration file.

Stripe Groups

Splitting data, metadata, and journal into separate stripe groups is usually the most important performance tactic. The create, remove, and allocate (e.g., write) operations are very sensitive to the I/O latency of the journal stripe group, so configuring a separate stripe group for the journal greatly speeds these operations by minimizing disk seek latency.

However, if create, remove, and allocate performance is not critical, it is acceptable to share one stripe group for both metadata and journal; in that case, be sure to set the exclusive property on the stripe group so that it does not get allocated for data as well.

It is recommended that you assign only a single LUN to each journal or metadata stripe group. Multiple metadata stripe groups can be used to increase metadata I/O throughput through concurrency. RAID1 mirroring is optimal for metadata and journal storage, and utilizing the write-back caching feature of the RAID system (as described previously) is critical to optimizing performance of the journal and metadata stripe groups.
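For illustration, a stripe group reserved exclusively for metadata and journal might be configured as in the sketch below. The stripe group name is hypothetical, and the syntax follows the conventions of the sample configuration file; verify the exact keywords against the cvfs_config man page for your release.

[StripeGroup MetaJournal]
Status UP
MetaData Yes
Journal Yes
Exclusive Yes##Exclusive stripeGroup for metadata and journal##
Read Enabled
Write Enabled

Setting Exclusive Yes here prevents regular file data from being allocated on the stripe group, preserving its low-latency behavior for journal and metadata I/O.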

Example:

[StripeGroup RegularFiles]
Status UP
Exclusive No##Non-Exclusive stripeGroup for all Files##
Read Enabled
Write Enabled
