stage requests. The site policy could limit the maximum number of non-Authorized Caller requests allowed at once by either delaying or denying particular requests. To delay a request, the site policy may return a special retry status along with the number of seconds the Client API should wait before retrying the request. Delaying requests limits the number of create, open, and/or stage requests performed at a particular point in time, thus decreasing the load on the system. However, care must be taken to determine the retry wait scheme that best meets each site's requirements and to configure the correct number of Gatekeepers if the load on one Gatekeeper is heavy. (Note: The maximum number of Gatekeepers per storage subsystem is one.) Sites also need to write their Site Interfaces so that they return in a timely manner.
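
For illustration only, the following C sketch shows the general shape of such a delay scheme. The function, macro, and status names are hypothetical and are not part of the HPSS Site Interface; consult the HPSS Programmer's Reference for the actual interface.

    /*
     * Hypothetical sketch of a delay policy; not HPSS Site Interface code.
     * SITE_STATUS_RETRY stands in for the special retry status described
     * above (HPSS_ERETRY), and the limits are arbitrary site choices.
     */
    #define SITE_STATUS_OK        0
    #define SITE_STATUS_RETRY     1
    #define SITE_MAX_OUTSTANDING  64   /* non-Authorized Caller requests at once */
    #define SITE_RETRY_SECONDS    5    /* wait before the Client API retries     */

    static unsigned int outstanding_requests = 0;  /* create/open/stage in flight */

    int
    site_admit_request(int *retry_wait_seconds)
    {
        if (outstanding_requests >= SITE_MAX_OUTSTANDING) {
            /* Too many requests in flight: delay rather than admit. */
            *retry_wait_seconds = SITE_RETRY_SECONDS;
            return SITE_STATUS_RETRY;
        }
        outstanding_requests++;   /* decremented when the request completes */
        *retry_wait_seconds = 0;
        return SITE_STATUS_OK;
    }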

Two special error status codes (HPSS_ETHRESHOLD_DENY and HPSS_EUSER_DENY) may be used to refine how a site denies create, open, or stage requests. If the Core Server receives either of these errors, it returns the error directly to the Client API rather than performing a retry. Errors other than these two codes and the special HPSS_ERETRY status will be retried several times by the Core Server. See either volume of the HPSS Programmer's Reference for more information.
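
As an illustration of the distinction between the two codes, the sketch below uses a hypothetical function and placeholder values; the real HPSS_ETHRESHOLD_DENY and HPSS_EUSER_DENY values are defined in the HPSS headers.

    /*
     * Hypothetical deny policy; the placeholder values stand in for the
     * real HPSS_ETHRESHOLD_DENY and HPSS_EUSER_DENY codes.  Either value
     * is passed straight back to the Client API without retries.
     */
    #define EXAMPLE_ETHRESHOLD_DENY  (-1001)  /* placeholder, not the real value */
    #define EXAMPLE_EUSER_DENY       (-1002)  /* placeholder, not the real value */

    #define SITE_GLOBAL_LIMIT    512   /* hypothetical site-wide cap */
    #define SITE_PER_USER_LIMIT   32   /* hypothetical per-user cap  */

    int
    site_check_limits(unsigned int total_in_flight, unsigned int user_in_flight)
    {
        if (total_in_flight >= SITE_GLOBAL_LIMIT)
            return EXAMPLE_ETHRESHOLD_DENY;   /* site threshold reached    */
        if (user_in_flight >= SITE_PER_USER_LIMIT)
            return EXAMPLE_EUSER_DENY;        /* this user is over its cap */
        return 0;                             /* admit the request         */
    }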

Create, open, and stage requests from Authorized Callers (MPS and XFS) can NOT be delayed or denied because of the timing sensitivity of the special requests these servers make to the Core Server. For example, migration of a file by MPS is an Authorized Caller Open request. The site policy could keep track of Authorized Caller requests to further limit non-Authorized Caller requests, as sketched below.
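
One way such bookkeeping might look, in an illustrative sketch with hypothetical names, is to count Authorized Caller requests separately and subtract them from the budget available to ordinary requests:

    /*
     * Hypothetical bookkeeping sketch; not HPSS Site Interface code.
     * Authorized Caller (MPS/XFS) requests are always admitted but are
     * counted so that ordinary requests receive a smaller share of the
     * overall budget.
     */
    #define SITE_TOTAL_BUDGET  256   /* hypothetical combined limit */

    static unsigned int authorized_in_flight = 0;
    static unsigned int ordinary_in_flight   = 0;

    int
    site_admit(int is_authorized_caller)
    {
        if (is_authorized_caller) {
            authorized_in_flight++;   /* never delayed or denied */
            return 1;
        }
        if (ordinary_in_flight + authorized_in_flight >= SITE_TOTAL_BUDGET)
            return 0;                 /* caller maps this to a retry or deny */
        ordinary_in_flight++;
        return 1;
    }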

If a Gatekeeper is being used for Gatekeeping Services and that Gatekeeper is down, then the Core Server for each storage subsystem configured to use it will return errors for the create, open, and/or stage requests that the Gatekeeper monitors. For example, if storage subsystem #2 is configured to use Gatekeeper #2, and Gatekeeper #2 is monitoring open requests and is DOWN, then each open by the Core Server in storage subsystem #2 will eventually fail after retrying several times.

3.11.11. XFS

XFS is one of the better performing filesystems available for Linux – particularly when manipulating large files. The XFS filesystem configuration required for use with HPSS must have the XDSM (DMAPI) kernel extension enabled. This adds some additional overhead. However, timing tests indicate that the amount of additional filesystem processing time introduced by DMAPI is minimal.

The HPSS HDM handles the namespace, data, and administrative events generated by the XFS filesystem. The HDM was designed to introduce very little additional processing when handling namespace events (file creations, renames, or deletions). Data events (reads and writes) require communication with the DMAP Gateway, which means one or more RPCs will be executed. Additionally, attempting to read or modify a file that has been purged from the XFS filesystem will cause the process to block until the file has been staged back from HPSS.
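
The event handling described above is built on the XDSM (DMAPI) interface. The following sketch is not HDM code; it is a minimal DMAPI event loop, assuming the standard XDSM calls (dm_init_service, dm_create_session, dm_get_events, dm_respond_event) and a <dmapi.h> header whose location varies by platform. Event registration and the actual staging of data from HPSS are omitted.

    /*
     * Minimal XDSM (DMAPI) event loop, for illustration only.  The real
     * HDM registers for specific namespace and data events and stages
     * file data from HPSS before letting a blocked read/write continue.
     */
    #include <stdio.h>
    #include <dmapi.h>                /* assumed header location */

    int
    main(void)
    {
        char        *version;
        dm_sessid_t  sid;
        char         buf[4096];
        size_t       rlen;

        if (dm_init_service(&version) != 0 ||
            dm_create_session(DM_NO_SESSION, "example-hdm", &sid) != 0) {
            perror("DMAPI session setup");
            return 1;
        }

        for (;;) {
            /* Block until at least one event is queued for this session. */
            if (dm_get_events(sid, 1, DM_EV_WAIT, sizeof(buf), buf, &rlen) != 0) {
                perror("dm_get_events");
                break;
            }
            dm_eventmsg_t *msg = (dm_eventmsg_t *)buf;

            /* A real HDM dispatches on msg->ev_type: namespace events are
             * recorded, data events trigger a stage from HPSS.  Responding
             * DM_RESP_CONTINUE unblocks the process that caused the event. */
            dm_respond_event(sid, msg->ev_token, DM_RESP_CONTINUE, 0, 0, NULL);
        }
        return 0;
    }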

The HDM keeps an internal record of migration and purge candidates and is capable of quickly completing migration and purge runs that would otherwise take a good deal of time. This makes a 'migrate early, migrate often' strategy a feasible way to keep XFS disks clear of inactive data. It is also possible to configure a minimum file size for migration candidates. This can be used to keep files below a certain size on the XFS filesystem, improving small-file access times.
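
As a purely illustrative sketch (these are not the HDM's actual data structures), a candidate record and the minimum-size filter described above might look like this:

    /*
     * Illustration only; not HDM data structures.  Files smaller than the
     * configured minimum stay on XFS disk and are never queued for
     * migration to HPSS.
     */
    #include <sys/types.h>

    struct migration_candidate {
        ino_t  inode;        /* identifies the XFS file              */
        off_t  size;         /* current file size in bytes           */
        time_t last_update;  /* used to migrate inactive files first */
    };

    /* Hypothetical site setting: 64 KB minimum migration size. */
    static const off_t min_migration_size = 64 * 1024;

    static int
    is_migration_candidate(const struct migration_candidate *c)
    {
        return c->size >= min_migration_size;
    }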

If the HDM appears to be introducing a performance bottleneck, it is possible to configure multiple HDMs on multiple machines to distribute the load.

3.11.12. HPSS VFS Interface

Please refer to Section 1.4: HPSS VFS Interface Configuration of the HPSS Management Guide.
