site. XRC externalizes a timestamp of the recovered system so that manual recovery from a specified point in time is possible. The time lag between the primary and the secondary sites can be minimized by performance tuning actions.


1. Write data to cache and NVS on primary
2. 3990 sidefile entry created
3. Device End - write complete
4. SDM reads sidefile using a utility address
5. SDM forms Consistency Group
   - SDM optimizes secondary update process
6. SDM writes Consistency Group to journal
7. SDM updates Consistency Group on secondary devices
8. State data sets updated

Figure 31. XRC Data Flow
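The grouping performed by the System Data Mover in steps 4 through 7 can be sketched as follows. This is an illustrative model only, not the actual SDM logic: the SidefileEntry fields, the function name, and the fixed grouping interval are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class SidefileEntry:
    timestamp: float   # write timestamp recorded in the 3990 sidefile
    device: str        # primary device the write targeted
    data: bytes        # updated record

def form_consistency_groups(entries, interval):
    """Group time-ordered sidefile entries into consistency groups.

    All writes whose timestamps fall within one interval are applied
    to the secondary devices together, so the secondary copy is always
    consistent up to a group boundary (hypothetical interval policy).
    """
    groups = []
    current, boundary = [], None
    for e in sorted(entries, key=lambda e: e.timestamp):
        if boundary is None:
            boundary = e.timestamp + interval
        if e.timestamp >= boundary:       # close the group at the boundary
            groups.append(current)
            current, boundary = [], e.timestamp + interval
        current.append(e)
    if current:
        groups.append(current)
    return groups
```

Because entries are sorted by timestamp before grouping, no update in a later group can precede an update in an earlier group, which is what keeps the secondary site recoverable to a consistent point in time.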

9.5.5 Compression

Host compression techniques are commonly used to reduce the amount of auxiliary storage required. Because the data occupies less space, compression saves not only storage space but also disk I/O: fewer operations are required to access and transfer the data over channels and networks. The cost is the extra CPU cycles needed at the host to compress the data before destaging it to the storage servers and to decompress it after it has been retrieved.
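The space-versus-CPU trade-off can be measured directly. The sketch below uses zlib purely as a stand-in for host compression (DB2 actually uses its own dictionary-based Ziv-Lempel scheme, often hardware-assisted); the function name and returned fields are invented for the example.

```python
import time
import zlib

def compress_tradeoff(data: bytes, level: int = 6):
    """Report the space saved and CPU time spent for one
    compress/decompress cycle (illustrative only)."""
    t0 = time.perf_counter()
    packed = zlib.compress(data, level)
    t1 = time.perf_counter()
    restored = zlib.decompress(packed)
    t2 = time.perf_counter()
    assert restored == data            # the round trip is lossless
    return {
        "ratio": len(data) / len(packed),   # >1 means space was saved
        "compress_s": t1 - t0,
        "decompress_s": t2 - t1,
    }

# Repetitive data, like many database rows, compresses very well.
stats = compress_tradeoff(b"customer record " * 1000)
```

Running this on a sample of your own rows gives a feel for how the achievable ratio and the per-access CPU cost vary with the data.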

DB2 uses host compression and keeps the data compressed in the buffer pools as well, effectively increasing their capacity; it decompresses only the rows needed by the application programs. DB2 provides utilities that estimate the compression ratios for your data and can therefore help you evaluate the trade-off between DASD savings and CPU overhead.

Some disk storage servers, like the RVA, store the user data in compressed form. In such cases compression and decompression are independent of the host. This raises the question of whether both levels of compression can be used together. Are they compatible?

The answer is yes: both can be used. Obviously, when you use both, the compression ratio between the host data and the stored data will be considerably lower than the general value of 3.6:1 seen for traditional data, probably in the range of 1.5:1 to 2.5:1, but still greater than 1. The RVA also implements compaction, replacing the traditional device control information (such as gaps and headers) with more efficient techniques. In general, when planning capacity for large storage occupancy and the real amount of compressed data is not well defined, consider some preliminary analysis of your RVA solution. Tools are available to IBM storage specialists to determine the specific compression ratio by sampling the data of a given environment.
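Conceptually, such sampling tools estimate the overall ratio from a random subset of the data. The sketch below illustrates the idea only: zlib stands in for the RVA's internal compression, and the record layout, function name, and parameters are assumptions for the example, not the IBM tooling.

```python
import random
import zlib

def estimate_ratio(records, sample_size=100, seed=0):
    """Estimate a data set's compression ratio by compressing a
    random sample of its records (illustrative sampling approach)."""
    rng = random.Random(seed)                  # fixed seed: repeatable estimate
    sample = rng.sample(records, min(sample_size, len(records)))
    raw = sum(len(r) for r in sample)
    packed = sum(len(zlib.compress(r)) for r in sample)
    return raw / packed

# Synthetic, repetitive records; real ratios depend entirely on your data.
records = [(b"CUST=%05d;NAME=XXXXXXXX;BAL=0000000;" % i) * 5
           for i in range(1000)]
ratio = estimate_ratio(records)
```

The larger and more representative the sample, the closer the estimate tracks the ratio you would see across the full environment.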

Please refer to IBM RAMAC Virtual Array, SG24-4951, and to DB2 for OS/390 and Data Compression, SG24-5261, for details on RVA and DB2 compression.

100 Storage Management with DB2 for OS/390

