Quantum StorNext File System Tuning Guide (document 6-01376-07): Metadata Network, Metadata Controller System


StorNext File System Tuning

The Metadata Network


As with any client/server protocol, SNFS performance is subject to the limitations of the underlying network. Therefore, it is recommended that you use a dedicated Metadata Network to avoid contention with other network traffic. Either 100BaseT or 1000BaseT is required; for a dedicated Metadata Network there is usually no benefit to using 1000BaseT over 100BaseT. Neither TCP offload nor jumbo frames are required.

It is best practice to have all SNFS clients directly attached to the same network switch as the MDC systems. Any routing required for metadata traffic will incur additional latency that impacts performance.

It is critical to ensure that speed/duplex settings are correct, as a mismatch will severely impact performance. In most cases, auto-detect is the correct setting. Some managed switches allow forcing speed/duplex, such as 100Mb/full, which disables auto-detect and requires the host to be set exactly the same way. Performance is severely impacted if the settings do not match between switch and host. For example, if the switch is set to auto-detect but the host is forced to 100Mb/full, you will observe a high error rate and extremely poor performance. On Linux, the mii-diag tool can be very useful for investigating and adjusting speed/duplex settings.
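The speed/duplex verification step can be sketched as a small shell helper. This is not a StorNext tool: `check_link` is a hypothetical name, and it assumes the `Speed:`/`Duplex:` report format produced by ethtool-style tools (on most current Linux systems, ethtool has replaced mii-diag for this purpose).

```shell
# check_link: minimal sketch that compares the negotiated speed/duplex
# reported in ethtool-style output (read from stdin) against the values
# the switch port is configured for. Assumes lines like:
#   Speed: 100Mb/s
#   Duplex: Full
check_link() {
  expected_speed="$1"
  expected_duplex="$2"
  out=$(cat)   # capture the full report so we can extract two fields from it
  speed=$(printf '%s\n' "$out" | awk -F': *' '/Speed:/ {print $2}')
  duplex=$(printf '%s\n' "$out" | awk -F': *' '/Duplex:/ {print $2}')
  if [ "$speed" = "$expected_speed" ] && [ "$duplex" = "$expected_duplex" ]; then
    echo "OK: $speed $duplex duplex"
  else
    echo "MISMATCH: link is $speed/$duplex, expected $expected_speed/$expected_duplex"
    return 1
  fi
}

# On a live system (interface name is illustrative):
#   ethtool eth0 | check_link "100Mb/s" "Full"
```

A nonzero exit status on mismatch makes the helper easy to drop into a monitoring script, so the misconfiguration described above is caught before it surfaces as poor metadata performance.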

It can be useful to run a tool like netperf to verify the performance characteristics of the Metadata Network. For example, if netperf -t TCP_RR reports less than 15,000 transactions per second, metadata operations may incur a performance penalty.
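The netperf guideline above can be turned into a quick pass/fail check. The sketch below uses a hypothetical helper name, `check_tps`; the commented netperf invocation assumes netserver is running on the MDC host and that the transaction rate is the last column of the TCP_RR summary line, which can vary between netperf versions.

```shell
# Guideline from the tuning discussion: below ~15,000 TCP_RR transactions
# per second, expect a metadata performance penalty.
MIN_TPS=15000

# check_tps: compare a measured TCP_RR transaction rate against the guideline.
# awk is used for the comparison so fractional rates are handled correctly.
check_tps() {
  if awk -v tps="$1" -v min="$MIN_TPS" 'BEGIN { exit !(tps >= min) }'; then
    echo "OK: $1 trans/sec meets the $MIN_TPS guideline"
  else
    echo "WARNING: $1 trans/sec is below $MIN_TPS; expect a metadata performance penalty"
    return 1
  fi
}

# On a live system (host name illustrative; column position depends on
# your netperf version -- verify against its actual output):
#   tps=$(netperf -H mdc-host -t TCP_RR | awk '/^[0-9]/ && NF == 6 { print $6 }')
#   check_tps "$tps"
```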

The Metadata Controller System

The CPU power and memory capacity of the MDC System are important performance factors, as is the number of file systems hosted per system. To ensure fast response times, use dedicated systems, limit the number of file systems hosted per system to a maximum of eight, and provide adequate CPU and memory.

