StorNext File System Tuning

The Metadata Controller System

Some metadata operations, such as file creation, can be CPU intensive and benefit from increased CPU power. The MDC platform is important in these scenarios because lower clock-speed CPUs such as SPARC and MIPS can degrade performance.

Other operations, such as directory traversal, can benefit greatly from increased memory. SNFS provides three configuration file settings that can be used to realize performance gains from increased memory: BufferCacheSize, InodeCacheSize, and ThreadPoolSize.
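As a minimal sketch, these three settings appear in the global section of the FSM configuration file, as in the excerpt below. The values shown are illustrative placeholders, not tuning recommendations; consult the cvfs_config man page and the sample configuration file for actual guidance.

    # Global settings in an FSM configuration file (illustrative values)
    # BufferCacheSize: memory reserved for caching metadata buffers
    BufferCacheSize    32M
    # InodeCacheSize: number of inode entries to cache in memory
    InodeCacheSize     16K
    # ThreadPoolSize: number of FSM worker threads servicing clients
    ThreadPoolSize     32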

However, it is critical that the MDC system have enough physical memory available to ensure that the FSM process doesn’t get swapped out. Otherwise, severe performance degradation and system instability can result.

FSM Configuration File Settings

The following FSM configuration file settings are explained in greater detail in the cvfs_config man page. For a sample FSM configuration file, see Sample FSM Configuration File on page 25.

The examples in the following sections are excerpted from the sample configuration file in Sample FSM Configuration File on page 25.

Stripe Groups

Splitting data, metadata, and journal into separate stripe groups is usually the most important performance tactic. The create, remove, and allocate (e.g., write) operations are very sensitive to the I/O latency of the journal stripe group, so configuring a separate stripe group for the journal greatly benefits the speed of these operations by minimizing disk seek latency. If create, remove, and allocate performance isn't critical, it is acceptable to share one stripe group for both metadata and journal, but be sure to set the Exclusive property on that stripe group so it is not also allocated for data.

It is recommended that you assign only a single LUN to each journal or metadata stripe group. Multiple metadata stripe groups can be used to increase metadata I/O throughput through concurrency. RAID1 mirroring is optimal for metadata and journal storage, and using the write-back caching feature of the RAID system (as described previously) is critical to optimizing performance of the journal and metadata stripe groups.
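For illustration, the sketch below shows stripe group stanzas that keep metadata, journal, and user data separate, with Exclusive set on the metadata and journal groups. The stripe group names, Node disk labels, and StripeBreadth values are hypothetical; adapt them from the Sample FSM Configuration File referenced above.

    # Metadata on its own single-LUN, RAID1-backed stripe group
    [StripeGroup MetaFiles]
    Status         UP
    MetaData       Yes
    Journal        No
    Exclusive      Yes
    StripeBreadth  256K
    Node           CvfsDisk0 0

    # Journal on its own single-LUN stripe group to minimize seek latency
    [StripeGroup JournFiles]
    Status         UP
    MetaData       No
    Journal        Yes
    Exclusive      Yes
    StripeBreadth  256K
    Node           CvfsDisk1 0

    # User data on a separate stripe group spanning multiple LUNs
    [StripeGroup RegularFiles]
    Status         UP
    MetaData       No
    Journal        No
    Exclusive      No
    StripeBreadth  512K
    Node           CvfsDisk2 0
    Node           CvfsDisk3 1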
