2.3.1 Server-based design
The DS6800 benefits from a fully assembled, leading-edge processor and memory system. Using the PowerPC architecture as the primary processing engine sets the DS6800 apart from other disk storage systems on the market.
The design decision to use processor memory as I/O cache is a key element of the IBM storage architecture. Although a separate I/O cache could provide fast access, it cannot match the access speed of main memory. The decision to use main memory as the cache proved itself in three generations of the IBM Enterprise Storage Server (ESS 2105): performance roughly doubled with each generation. This improvement can be traced to advances in processor speed, L1/L2 cache size and speed, memory bandwidth and response time, and PCI bus performance.
With the DS6800, the cache access has been accelerated further by making the
2.3.2 Cache management
Most, if not all, high-end disk systems have an internal cache integrated into the system design.
The DS6800 and DS8000 use the Sequential prefetching in Adaptive Replacement Cache (SARC) algorithm, developed in partnership with IBM Research. It is a self-tuning cache management algorithm designed for workloads with a varying mix of sequential and random I/O streams. SARC is inspired by the Adaptive Replacement Cache (ARC) algorithm and inherits many features from it. For a detailed description of ARC, see N. Megiddo and D. S. Modha, “Outperforming LRU with an adaptive replacement cache algorithm,” IEEE Computer, vol. 37, no. 4, pp. 58-65, 2004.
SARC attempts to determine four things (a simplified sketch of the underlying ARC idea follows this list):
When data is copied into the cache.
Which data is copied into the cache.
Which data is evicted when the cache becomes full.
How the algorithm dynamically adapts to different workloads.
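To make the adaptation idea concrete, the following is a minimal Python sketch of the ARC policy cited above; it is not the SARC code that runs in the DS6000/DS8000, and the names SimplifiedARC and read_from_disk are illustrative assumptions only. ARC keeps a recency list (T1) and a frequency list (T2) plus two "ghost" lists that remember the IDs of recently evicted pages; a hit in a ghost list shifts the target size p of T1, which is how the cache adapts between recency and frequency as the workload changes.

from collections import OrderedDict

class SimplifiedARC:
    """Illustrative sketch of the ARC policy; not the SARC implementation."""

    def __init__(self, capacity):
        self.c = capacity
        self.p = 0                   # adaptive target size of T1
        self.t1 = OrderedDict()      # recency list: page -> data
        self.t2 = OrderedDict()      # frequency list: page -> data
        self.b1 = OrderedDict()      # ghost of T1: page IDs only
        self.b2 = OrderedDict()      # ghost of T2: page IDs only

    def _replace(self, page):
        # Evict from T1 or T2 depending on the adaptive target p,
        # remembering the victim's ID in the matching ghost list.
        if self.t1 and (len(self.t1) > self.p or
                        (page in self.b2 and len(self.t1) == self.p)):
            victim, _ = self.t1.popitem(last=False)
            self.b1[victim] = None
        else:
            victim, _ = self.t2.popitem(last=False)
            self.b2[victim] = None

    def access(self, page, read_from_disk):
        # Hit: promote the page to the most recently used end of T2.
        if page in self.t1:
            self.t2[page] = self.t1.pop(page)
            return self.t2[page]
        if page in self.t2:
            self.t2.move_to_end(page)
            return self.t2[page]

        # Ghost hit in B1: recency is paying off, so grow T1's target.
        if page in self.b1:
            self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
            self._replace(page)
            del self.b1[page]
            self.t2[page] = read_from_disk(page)   # stage the block on the miss
            return self.t2[page]

        # Ghost hit in B2: frequency is paying off, so shrink T1's target.
        if page in self.b2:
            self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
            self._replace(page)
            del self.b2[page]
            self.t2[page] = read_from_disk(page)
            return self.t2[page]

        # Complete miss: make room if needed, then insert at the MRU end of T1.
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(page)
            else:
                self.t1.popitem(last=False)
        elif len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= self.c:
            if len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= 2 * self.c:
                self.b2.popitem(last=False)
            self._replace(page)
        self.t1[page] = read_from_disk(page)
        return self.t1[page]

In this sketch, read_from_disk stands in for staging a block from the disk arrays; for example, cache = SimplifiedARC(capacity=4096) followed by cache.access(lba, stage_block) returns cached data on a hit and stages the block on a miss.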
The decision to copy some amount of data into the DS6000/DS8000 cache can be triggered by two policies: demand paging and prefetching. Demand paging means that disk blocks are brought in only on a cache miss. Demand paging is always active for all volumes and ensures that I/O patterns with some locality find at least some recently used data in the cache.
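As a minimal illustration of demand paging (the cache here is a plain Python dictionary and read_block_from_disk is a hypothetical helper, not a DS6000 interface):

def demand_read(cache, lba, read_block_from_disk):
    # Demand paging: the block at logical block address lba is staged
    # into the cache only when a read request misses; later requests
    # with some locality then find it already in the cache.
    if lba not in cache:                          # cache miss
        cache[lba] = read_block_from_disk(lba)    # bring the block in on demand
    return cache[lba]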
Prefetching means that data is copied into the cache speculatively even before it is requested. To prefetch, a prediction of likely future data accesses is needed. Because effective, sophisticated prediction schemes need extensive history of page accesses (which
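The general idea of sequential prefetching, not the SARC implementation, can be sketched as follows: watch for consecutive block addresses and, once a stream looks sequential, stage a few blocks ahead of the read position. The SequentialPrefetcher class, the trigger and depth values, and the read_block_from_disk helper are illustrative assumptions, not DS6000 interfaces.

class SequentialPrefetcher:
    # Illustration of prefetching on top of demand paging: once the
    # request stream looks sequential, the next few blocks are staged
    # speculatively, before they are requested.

    def __init__(self, read_block_from_disk, trigger=2, depth=4):
        self.read = read_block_from_disk
        self.trigger = trigger    # consecutive sequential hits before prefetching
        self.depth = depth        # how many blocks to stage ahead
        self.cache = {}           # lba -> data (plain dict for brevity)
        self.last_lba = None
        self.run = 0

    def access(self, lba):
        # Detect a sequential stream: each request follows the previous one.
        if self.last_lba is not None and lba == self.last_lba + 1:
            self.run += 1
        else:
            self.run = 0
        self.last_lba = lba

        # Demand paging: stage the requested block if it missed.
        if lba not in self.cache:
            self.cache[lba] = self.read(lba)

        # Prefetching: speculatively stage the next blocks of the stream.
        if self.run >= self.trigger:
            for next_lba in range(lba + 1, lba + 1 + self.depth):
                if next_lba not in self.cache:
                    self.cache[next_lba] = self.read(next_lba)

        return self.cache[lba]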