
StorNext File System Tuning

Windows Memory Requirements

Symptoms of non-paged pool exhaustion include applications mysteriously dying, repeated FSM reconnect attempts, and messages being sent to the application log and cvlog.txt about socket failures with status code 10055, which is ENOBUFS.

The solution is to adjust a few parameters on the Cache Parameters tab in the SNFS control panel (cvntclnt). These parameters control how much memory is consumed by the directory cache, the buffer cache, and the local file cache.

As always, an understanding of the customer's workload aids in determining the correct values. Tuning is not an exact science and requires some trial and error (and, unfortunately, some reboots) to arrive at values that work best in the customer's environment.

The first parameter is the Directory Cache Size; the default is 10 MB. If you do not have large directories or do not perform many directory scans, this value can be reduced to 1 or 2 MB. The impact is slightly slower directory lookups in directories that are frequently accessed.

Also, in the Mount Option panel, you should set the Paged DirCache option.

The next parameter is the Buffer Cache NonPaged Pool Usage. This value, expressed as a percentage, is the share of the available non-paged pool that the buffer cache will consume. The default is 75%; it should be set to 25, or at most 50. The minimum value is 10 and the maximum value is 90.
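
To get a feel for what these percentages mean in practice, the rough sketch below estimates the buffer cache budget for a hypothetical amount of available non-paged pool. The 256 MB pool size is an illustrative assumption only, not a figure from this guide.

# Illustrative arithmetic only: estimates how much non-paged pool the SNFS
# buffer cache may consume for a given "Buffer Cache NonPaged Pool Usage"
# percentage. The 256 MB pool size below is a hypothetical example value.

def buffer_cache_budget_mb(nonpaged_pool_mb, usage_percent):
    """Return the approximate buffer cache budget in MB."""
    if not 10 <= usage_percent <= 90:
        raise ValueError("usage percent must be between 10 (min) and 90 (max)")
    return nonpaged_pool_mb * usage_percent / 100.0

# Example: a system with 256 MB of available non-paged pool (hypothetical).
for pct in (75, 50, 25):          # the default, and the two suggested settings
    print(f"{pct}% -> {buffer_cache_budget_mb(256, pct):.0f} MB")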

The next parameters control how many file structures are cached on the client. These are controlled by the Meta-data Cache Low water mark, Meta-data Cache High water mark, and Meta-data Cache Max water mark. Each file structure is represented internally by a data structure called the cvnode, which holds all the state for a file or directory. The more cvnodes that are cached on the client, the fewer trips the client has to make over the wire to contact the FSM.

Each cvnode is approximately 1462 bytes in size and is allocated from the non-paged pool. The cvnode cache is periodically purged so that unused entries are freed. The decision to purge the cache is made based on the Low, High, and Max water mark values. The 'Low' default is 1024, the 'High' default is 3072, and the 'Max' default is 4096.
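
The arithmetic below is a rough sketch, based on the approximate 1462-byte cvnode size and the default water marks quoted above, of the order of magnitude of non-paged pool the cvnode cache can occupy. These are estimates, not measurements.

# Rough estimate of non-paged pool consumed by the cvnode cache at the
# default water marks, using the ~1462 bytes-per-cvnode figure quoted above.

CVNODE_BYTES = 1462

def cvnode_cache_bytes(entries):
    return entries * CVNODE_BYTES

for name, entries in (("Low (default 1024)", 1024),
                      ("High (default 3072)", 3072),
                      ("Max (default 4096)", 4096)):
    print(f"{name}: ~{cvnode_cache_bytes(entries) / 1024:.0f} KB")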

These values should be adjusted so that the cache does not bloat and consume more memory than it should. They are highly dependent on the customer's workload and access patterns. A High water mark of 512, for example, will cause the cvnode cache to be purged when more than 512 entries are present; the cache is purged until the Low water mark is reached.
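
The sketch below is a simplified illustrative model, not the actual SNFS client code, of how the High and Low water marks interact during a purge pass as described above. The Low value of 128 in the example is hypothetical.

# Simplified model of the purge behavior described above: once the cache
# grows past the High water mark, unused entries are dropped until only the
# Low water mark's worth of entries remain. This is an illustration, not the
# real client implementation.

def purge_if_needed(cached_entries, low, high):
    """Return the cvnode count after a purge pass, given Low/High water marks."""
    if len(cached_entries) > high:
        # Assumes the list is ordered most-recently-used first, so the
        # entries dropped here are the least-recently-used ones.
        del cached_entries[low:]
    return len(cached_entries)

# Example with a High water mark of 512, as in the text, and a hypothetical
# Low water mark of 128.
cache = list(range(600))
print(purge_if_needed(cache, low=128, high=512))   # -> 128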
