HP c-Class Performance Tuning manual Linux filesystem tuning, Ext2-3-4 tuning


Linux filesystem tuning

ext2-3-4 tuning

XFS is currently the recommended filesystem. It can achieve up to three times the performance of a tuned ext2/ext3 solution. At this time, there is no known additional tuning for running XFS in a single- or multi-IO Accelerator configuration.

Setting stride size and stripe width for ext2/3 (extN) when using RAID

The extN filesystem family has the create-time options stride and stripe width. Stride helps the filesystem spread critical metadata structures evenly across the disks in the RAID, keeping any one disk from becoming a hotspot. Stripe width lets the filesystem align allocations to full RAID stripes, which improves write efficiency.

To calculate the correct value for stride, take the chunk size of the RAID array and divide it by the block size of the filesystem:

stride = (chunk size / filesystem block size)
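As a quick illustration of the formula, the following sketch computes the stride for assumed values of a 256KB RAID chunk and a 4KB filesystem block size (both hypothetical; substitute your array's actual numbers):

```shell
# Hypothetical values: 256KB RAID chunk size, 4KB filesystem block size
chunk_kb=256
block_kb=4

# stride = chunk size / filesystem block size
stride=$((chunk_kb / block_kb))
echo "stride=$stride"
# prints: stride=64
```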

The stripe width calculation requires the stride and the number of data-bearing disks. The following table shows how to determine the number of data-bearing disks for each RAID type, where:

dbd = number of data-bearing disks
total_disks = number of active disks
mirrored_sets = number of RAID 1 mirrored sets used to form the higher-level group

RAID level        Data-bearing disks (dbd)
0 (Striping)      total_disks
1 (Mirroring)     1
5                 total_disks - 1
6                 total_disks - 2
10                mirrored_sets
50                mirrored_sets - 1

To calculate the stripe width, multiply the number of data-bearing disks by the stride:

stripe_width = dbd * stride

Sample Configuration

NOTE: The example below is not a recommended configuration, but it is a good demonstration of how to use the above equations.

Create a RAID 50 with ten IO Accelerators combined into five mirrored sets (two mirrored IO Accelerators per set) and a 256KB chunk size.

Create an ext3 filesystem with an 8K block size:
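Plugging the sample configuration into the equations gives stride = 256KB / 8KB = 32, dbd = mirrored_sets - 1 = 4, and stripe_width = 4 * 32 = 128. A `mkfs.ext3` invocation along these lines would apply those values (the device name `/dev/fioa` is a placeholder; substitute the block device of your RAID array):

```shell
# Placeholder device name for the RAID 50 array; substitute your own
DEV=/dev/fioa

# stride       = 256KB chunk / 8KB block        = 32
# dbd (RAID50) = mirrored_sets - 1 = 5 - 1      = 4
# stripe_width = dbd * stride      = 4 * 32     = 128
mkfs.ext3 -b 8192 -E stride=32,stripe-width=128 "$DEV"
```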
