
The 9960 Provides 32GB of Fully Addressable Cache

The 9960 supports up to 32GB of data cache, all of it directly addressable. Separate cache modules (up to 1.5GB) are used for control storage. Competing systems store both data and control information in a single cache, which limits the amount of cache available for user data.
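As a rough illustration of why dedicated control storage matters, the sketch below compares usable data cache when control tables consume the data cache versus when they live in separate modules. The numbers are the maximums quoted above; the function and its name are illustrative, not part of any Hitachi interface.

```python
# Illustrative comparison; only the 32GB and 1.5GB figures come from the text.
DATA_CACHE_GB = 32.0        # maximum 9960 data cache
CONTROL_STORAGE_GB = 1.5    # maximum 9960 control storage (separate modules)

def usable_data_cache(data_cache_gb: float, control_gb: float, shared: bool) -> float:
    """Return the cache left for user data.

    shared=True models a design whose control tables consume data cache;
    shared=False models the 9960's separate control-storage modules.
    """
    return data_cache_gb - control_gb if shared else data_cache_gb

print(usable_data_cache(DATA_CACHE_GB, CONTROL_STORAGE_GB, shared=False))  # 32.0
print(usable_data_cache(DATA_CACHE_GB, CONTROL_STORAGE_GB, shared=True))   # 30.5
```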

Advanced Cache Algorithms of the Lightning 9900 Series

The Lightning 9900 Series offers a variety of advanced cache algorithms and software solutions that enable exceptional performance.

Hitachi FlashAccess Allows Datasets to be Permanently Placed in Cache

Hitachi FlashAccess allows users to dynamically “lock” and “unlock” data in cache in real time. Read and write functions are then performed at cache speed, with no disk-latency delay. With Hitachi FlashAccess, a portion of cache memory can be allocated to specific data. Administrators can add, delete, or change FlashAccess-managed data at any time, quickly and easily.

In S/390® environments, cached data defined by a Logical Volume Image (LVI) can be as small as a single track or as large as a full 3390 volume. For increased configuration flexibility, Hitachi FlashAccess offers multiple modes of operation. It can be used in conjunction with Hitachi RapidXchange to increase the speed of data transfer and thereby improve the performance of mainframe-to-open-systems data exchange. RapidXchange supports both open-to-S/390 and open-to-open high-speed data transfers.
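The behavior described above can be pictured as a cache whose entries can be pinned. The sketch below is not Hitachi’s implementation or management interface; it is a minimal, hypothetically named illustration of how “locked” data bypasses normal eviction so it is always served at cache speed.

```python
from collections import OrderedDict

class PinnableCache:
    """Minimal LRU cache with pin/unpin, illustrating the FlashAccess idea:
    pinned entries stay resident and are never evicted."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()   # key -> data, kept in LRU order
        self.pinned = set()            # keys locked into cache

    def pin(self, key):
        self.pinned.add(key)           # administrator "locks" this data into cache

    def unpin(self, key):
        self.pinned.discard(key)       # data becomes evictable again

    def put(self, key, data):
        self.entries[key] = data
        self.entries.move_to_end(key)
        # Evict the least-recently-used *unpinned* entries if over capacity.
        while len(self.entries) > self.capacity:
            victim = next((k for k in self.entries if k not in self.pinned), None)
            if victim is None:         # everything resident is pinned
                break
            del self.entries[victim]

    def get(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)
            return self.entries[key]   # cache hit: no disk latency
        return None                    # cache miss: caller must read from disk
```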

Read-ahead for High-performance Sequential Reads

Read clustering in the Lightning 9900 Series is enabled by built-in heuristics that are applied to every I/O. The heuristics determine whether the data is being accessed sequentially; if so, the Lightning 9900 Series reads ahead the pages that follow that data. Read-ahead helps ensure that when a client read request is received, the requested data is already in the data cache, so the request can be satisfied immediately.
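Hitachi does not publish the heuristics themselves; the sketch below shows one common way such sequential detection and prefetch can work, with hypothetical names and thresholds, not the 9900 Series’ actual algorithm.

```python
class ReadAheadDetector:
    """Toy sequential-read detector: if recent requests hit consecutive
    pages, prefetch the next window of pages into cache ahead of the host."""

    def __init__(self, sequential_threshold: int = 3, window: int = 8):
        self.threshold = sequential_threshold  # consecutive hits before prefetching
        self.window = window                   # number of pages to read ahead
        self.last_page = None
        self.run_length = 0

    def on_read(self, page: int) -> list[int]:
        """Record a read of `page`; return page numbers to prefetch (may be empty)."""
        if self.last_page is not None and page == self.last_page + 1:
            self.run_length += 1               # access pattern is still sequential
        else:
            self.run_length = 1                # pattern broken, start a new run
        self.last_page = page
        if self.run_length >= self.threshold:
            return list(range(page + 1, page + 1 + self.window))
        return []

detector = ReadAheadDetector()
for p in [10, 11, 12, 13]:
    prefetch = detector.on_read(p)
print(prefetch)   # pages 14..21 would be staged into the data cache
```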

Control Memory Hierarchical Star Network®

The second component of the Hi-Star Architecture is the Control Memory Hierarchical Star Network (CM-HSN). This is a point-to-point network that handles the exchange of control information between the processors and control memory. The control memory contains information about the status, location, and configuration of the cache, the data in the cache, and the configuration of the Lightning 9900 Series system (as well as other information related to the operational state of the system). The two control memory areas are mirrored images of each other, as illustrated in Figure 16. Control data is “data about data,” also called “metadata.” Essentially, control information is handled “out of band” from the data paths, through both a separate memory area and a separate network.
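The mirrored control-memory areas can be pictured as two copies that receive every metadata update in step. The sketch below is a hypothetical model, not Hitachi’s internal format: it only illustrates that control information is written to both mirrors, out of band from the data cache path.

```python
class MirroredControlMemory:
    """Toy model of mirrored control memory: every metadata update is applied
    to two independent copies, so losing one area does not lose the system's
    "data about data"."""

    def __init__(self):
        self.copy_a = {}   # first control-memory area
        self.copy_b = {}   # mirrored second area

    def update(self, key: str, value):
        # Out-of-band metadata write: applied to both mirrors,
        # never carried over the data-cache paths.
        self.copy_a[key] = value
        self.copy_b[key] = value

    def read(self, key: str):
        # Either mirror can satisfy the read; fall back to the other copy.
        return self.copy_a.get(key, self.copy_b.get(key))

cm = MirroredControlMemory()
cm.update("slot_0042", {"state": "dirty", "volume": 7, "track": 1183})  # hypothetical record
print(cm.read("slot_0042"))
```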

The CM-HSN is a much simpler network design in that every connection is point to point; only the Cache-HSN (the data paths) uses a switched-fabric topology for its interconnect. The CM-HSN also uses narrower paths, and more of them. Figure 16 shows a close-up view of the CM-HSN’s networking topology. Referring back to the diagram in Figure 2, there are two CM-HSN paths connecting the processors to the control memory, whereas the diagram in Figure 15 shows four paths per processor module. In total there are 64 4-bit paths connecting the processors to the control memory; the diagram in Figure 2 shows these 4-bit paths combined into their full 8-bit (plus a parity bit) paths. The zoomed-in view in Figure 16 shows all of the ports to the control memory.
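The path counts quoted above reconcile with a little arithmetic. The sketch below only restates the figures given in the text (64 physical 4-bit paths, with pairs combined into 8-bit-plus-parity logical paths); the derived totals are arithmetic, not additional specifications.

```python
# Reconciling the CM-HSN path counts quoted in the text.
physical_paths = 64           # 4-bit physical paths between processors and control memory
physical_width_bits = 4
physical_per_logical = 2      # Figure 15 shows four physical paths per processor module,
                              # Figure 2 shows two combined (logical) paths per module

logical_paths = physical_paths // physical_per_logical             # 32 combined paths
logical_width_bits = physical_width_bits * physical_per_logical    # 8 data bits (+ 1 parity)

print(logical_paths, logical_width_bits)  # 32 logical 8-bit (plus parity) paths
```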

