The Metadata Network
As with any client/server protocol, SNFS performance is subject to the
limitations of the underlying network. Therefore, it is recommended that
you use a dedicated Metadata Network to avoid contention with other
network traffic. Either 100BaseT or 1000BaseT is required, but for a
dedicated Metadata Network there is usually no benefit from using
1000BaseT over 100BaseT. Neither TCP offload nor jumbo frames are
required.
It is best practice to have all SNFS clients directly attached to the same
network switch as the MDC systems. Any routing required for metadata
traffic will incur additional latency that impacts performance.
It is critical to ensure that speed/duplex settings are correct, as a
mismatch severely impacts performance. In most cases auto-detect is the
correct setting. Some managed switches allow speed/duplex to be set
explicitly, such as 100Mb/full, which disables auto-detect and requires
the host to be configured identically. If the settings do not match
between switch and host, performance suffers severely. For example, if
the switch is set to auto-detect but the host is forced to 100Mb/full,
you will observe a high error rate and extremely poor performance. On
Linux the mii-diag tool can be useful for investigating and adjusting
speed/duplex settings.
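As an illustration, the following commands show one way to inspect the
negotiated speed/duplex on a Linux host. The interface name eth0 is an
assumption; substitute the interface connected to the Metadata Network.
ethtool, where available, reports similar information.

    # Report link status, speed, and duplex as seen by the NIC's MII registers
    mii-diag eth0
    # ethtool also reports the current Speed and Duplex settings
    ethtool eth0

Compare the reported speed and duplex against the switch port
configuration and correct either side so that they match.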
A tool such as netperf can be useful for verifying Metadata Network
performance characteristics. For example, if netperf -t TCP_RR reports
fewer than 15,000 transactions per second, a metadata performance
penalty may be incurred.
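As a sketch of such a check, the following runs the netperf TCP
request/response test from an SNFS client against an MDC. The host name
mdc1 is an assumption, and netserver must be running on the MDC.

    # On the MDC: start the netperf server daemon (listens on port 12865)
    netserver
    # On an SNFS client: run the TCP request/response test against the MDC
    netperf -H mdc1 -t TCP_RR
    # The final column of the output is the transaction rate per second;
    # values well below 15,000 suggest excessive metadata network latency.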
The Metadata Controller System
The CPU and memory power of the MDC system are important performance
factors, as is the number of file systems hosted per system. To ensure
fast response time, use dedicated systems, limit the number of file
systems hosted per system (maximum of eight), and provide adequate CPU
and memory.