Performance and scaling with CFS

Another aspect to consider when implementing an application on a CFS cluster is I/O performance. If an application uses the CFS from only one node, to take advantage of possible improved failover times and ease of administration, I/O performance is expected to be equivalent to that of a local VxFS file system.

With Serviceguard Storage Management Suite, the expectations are different when CFS data is accessed concurrently by multiple nodes. CFS is a multi-reader/multi-writer file system: all nodes can read and write user data directly to the file system. For each CFS, one cluster node is the primary and the remaining nodes are secondaries. The node that mounts a CFS first is elected primary for that file system. If the primary for a CFS fails, the role automatically moves to another cluster node; you can also move the primary role manually with a command. With Veritas Storage Foundation 4.1, file system metadata (for example, file size, name, and time stamp) can be read by all nodes but written only by the primary node of that CFS. With Veritas Storage Foundation 5.0, file system metadata can be read and written by all CFS nodes. With this in mind, the I/O performance and scaling of multi-instance applications depend on their I/O characteristics:

Applications whose I/O consists mostly of reads are expected to perform and scale very well, with no noticeable degradation compared to the same I/O on a non-CFS file system.

The performance of applications that also perform many writes depends on the I/O patterns of the nodes across the cluster. CFS implements range locking, which allows multiple processes to write to different regions of the same file at the same time (see the example below).

With Veritas Storage Foundation 4.1, applications that perform many file system metadata updates are not expected to perform as well on a CFS as on a non-CFS, because operations such as file creation, file deletion, file size expansion, and file size truncation are performed exclusively by the CFS primary. With Veritas Storage Foundation 5.0, performance is improved because file system metadata updates can be performed by all CFS nodes, not only the CFS primary.
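
To illustrate what range locking allows, the hedged sketch below writes to non-overlapping regions of the same file in a shared CFS mount from two different nodes. The mount point, file name, and offsets are illustrative assumptions, not taken from this document.

# On node 1: write 64 MB at the beginning of the shared file
dd if=/dev/zero of=/cfs_mnt/shared.dat bs=1048576 count=64 seek=0 conv=notrunc

# On node 2: write 64 MB starting at an offset of 64 MB in the same file
dd if=/dev/zero of=/cfs_mnt/shared.dat bs=1048576 count=64 seek=64 conv=notrunc

Because the two writes address disjoint byte ranges, range locking lets them proceed concurrently instead of serializing on a whole-file lock.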

A special mount option (noatime) directs the file system not to update file access times. This option reduces file system metadata updates and should be used with applications that do not require file access time information.
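
As an illustration, the hedged example below cluster-mounts a shared VxFS volume with noatime. The disk group, volume, and mount point names are assumptions for illustration only; in a Serviceguard CFS cluster, the mount is normally managed through the CFS mount packages, where the same option can be specified.

# Cluster-mount a shared VxFS volume without access-time updates
# (disk group "sharedg", volume "vol1", and mount point are assumed names)
mount -F vxfs -o cluster,noatime /dev/vx/dsk/sharedg/vol1 /cfs_mnt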

Each mounted file system has its own CFS primary, and by default the node that mounts the file system first becomes its primary. One way to mitigate the performance impact is to distribute the CFS primaries evenly across the cluster nodes. For instance, if a four-node cluster shares eight file systems, each node could be made primary for two of them. The fsclustadm command with the setprimary option can be used to change the primary node of a CFS.
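
For illustration, the hedged commands below check and reassign the primary for a mounted CFS; the mount point name is an assumption.

# Display which node is currently the CFS primary for this mount point
fsclustadm -v showprimary /cfs_mnt

# Run on the node that should take over the role: make the local node the primary
fsclustadm setprimary /cfs_mnt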

If an application performs many file system metadata updates only on temporary files that are not needed by instances of the application running on other cluster nodes, those files can be placed on node-specific storage to avoid the performance impact.
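
As a sketch of this approach, each node could mount a small local, non-clustered VxFS volume at the application's temporary directory; the disk group, volume, and path below are assumed names.

# On each node: mount a local (non-cluster) VxFS volume for temporary files
mount -F vxfs /dev/vx/dsk/localdg/tmpvol /app/tmp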

Note

With Serviceguard Storage Management Suite 2.0 on HP-UX 11i v3, a performance enhancement provides a symmetrical architecture that enables all nodes in the cluster to process metadata operations simultaneously, so CFS can handle significantly higher metadata loads.
