General tuning techniques
Using direct I/O, unbuffered, or zero copy
Traditional I/O paths include the page cache, a DRAM cache of data stored on the disk. The IO Accelerator is fast enough that this and other traditional optimizations, such as I/O merging and reordering, are actually detrimental to performance. I/O merging and reordering are eliminated naturally by the IO Accelerator, but the page cache must be bypassed at the application level.
Direct I/O bypasses the page cache. This allows the memory regions written by the application to be transferred directly to the device, without an intermediate copy into kernel memory.
Bypassing the page cache provides the following benefits:
• Less complex write path
• Lower overall CPU utilization
• Less memory bandwidth usage
In most cases, direct I/O is beneficial for IO Accelerator performance.
Many applications can enable direct I/O through a configuration setting or command-line option, without any source changes.
For other applications, it is necessary for the application provider to enable direct I/O or to modify the source to enable direct I/O, and then recompile.
For a more
dd support
More recent versions of dd support the oflag=direct and iflag=direct parameters. These enable direct I/O for the file being written to or the file being read from, respectively. Use the oflag=direct parameter when writing to an IO Accelerator and the iflag=direct parameter when reading from an IO Accelerator.
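For example (the file name is illustrative; on a real system, use a path on the IO Accelerator filesystem):

```shell
# Write 16 MiB with O_DIRECT on the output file (oflag=direct).
dd if=/dev/zero of=testfile bs=1M count=16 oflag=direct
# Read it back with O_DIRECT on the input file (iflag=direct).
dd if=testfile of=/dev/null bs=1M iflag=direct
rm testfile
```

With direct I/O, the block size (bs=) must be a multiple of the device's sector size, which 1M satisfies.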
IOzone Benchmark
IOzone supports the -I parameter, which opens files with O_DIRECT so that all file operations bypass the page cache.
fio Benchmark
fio uses the direct=1 setting in the job file, or the --direct=1 command-line option, to enable direct I/O.
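As a sketch, a job file exercising direct I/O might look like the following (the device path, block size, and queue depth are assumptions for illustration, not values from this guide):

```ini
; Minimal fio job file sketch. Point filename at the IO Accelerator
; device node or at a file on its filesystem.
[iodrive-randread]
filename=/dev/fioa
ioengine=libaio
; direct=1 opens the file with O_DIRECT, bypassing the page cache
direct=1
rw=randread
bs=4k
; keep 32 requests outstanding at once
iodepth=32
runtime=60
time_based=1
```

Run it with `fio jobfile.fio`. The iodepth setting also matters for the reasons described in the next section.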
Multiple outstanding IOs
The IO Accelerator is more like a storage controller than a single disk. Like other storage controllers, it performs best when multiple requests are outstanding. Unlike storage solutions built on legacy disk interfaces, it can service many requests in parallel, so performance improves as the application keeps more I/Os in flight (for example, by using asynchronous I/O or multiple threads).