determined by either the Management Class for SMS managed data sets, or set by the ADDVOL parameter for HSM managed volumes. It can also be controlled in combination with volume thresholds set by the storage administrator. Data sets may be migrated to ML1 (normally disk)4 after a period of inactivity, and then to ML2 (tape) following a further period of non-usage. It is feasible, and may be more appropriate in certain cases, to migrate directly to ML2. Additionally, there is an interval migration process which can be used to free space on volumes during times of high activity.
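
As an illustration only, the DFSMShsm commands below sketch how a storage administrator might set this up for a non-SMS managed primary volume; the volume serial, data set name, threshold and day values are all hypothetical, and for SMS managed volumes the equivalent controls are the Management Class migration attributes and the Storage Group thresholds:

   ADDVOL PRIM01 UNIT(3390) PRIMARY(AUTOMIGRATION AUTORECALL) MIGRATE(15) THRESHOLD(95 80)
   SETSYS INTERVALMIGRATION
   HMIGRATE 'PROD.ARCHIVE.HISTORY' MIGRATIONLEVEL2

The ADDVOL command makes the volume eligible for automatic space management, with data sets unreferenced for 15 days as migration candidates and high/low occupancy thresholds of 95% and 80%; SETSYS INTERVALMIGRATION enables the interval migration process described above; and the TSO HMIGRATE command requests direct migration of a single data set to ML2.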

Expiration processing

This is based upon the inactivity of data sets. For HSM managed volumes, all data sets on a given volume are treated in the same way. For SMS managed volumes, the expiration of a data set is determined by the Management Class attributes for that data set.
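
For SMS managed data sets, the controlling Management Class attributes take the following form (the values are illustrative only, shown in ISMF panel style); for non-SMS managed volumes, the nearest equivalent is the DELETEBYAGE or DELETEIFBACKEDUP parameter of the ADDVOL command, which applies to every data set on the volume:

   EXPIRE AFTER DAYS NON-USAGE  . . : 180
   EXPIRE AFTER DATE/DAYS . . . . . : 366

   ADDVOL PRIM02 UNIT(3390) PRIMARY(AUTOMIGRATION) DELETEIFBACKEDUP(180)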

Release of unused space

HSM can release the over-allocated space of both SMS managed and non-SMS managed data sets.
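
For SMS managed data sets, this behavior is governed by the PARTIAL RELEASE attribute of the Management Class; the line below is an illustrative ISMF-style setting, not a recommendation:

   PARTIAL RELEASE  . . . . . . . . : YES

With this value, allocated but unused space beyond the end of the data is released during automatic space management.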

Recall

There are two types of recall:

Automatic recall, when a user or task attempts to access a migrated data set

Command recall, when the HRECALL command is issued

All recalls are filtered; if the data set is SMS managed, SMS controls the volume selection, while if it is non-SMS managed, HSM directs the volume allocation. However, it is possible for the storage administrator to control a recall and place the data set on an appropriate volume or Storage Group if required.
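
A minimal illustration, using a hypothetical data set name: no command is needed for the automatic case, because any attempt to allocate or open the migrated data set drives the recall, while the explicit form is simply:

   HRECALL 'PROD.SALES.HISTORY' WAIT

The optional WAIT parameter makes the command synchronous, so control returns only once the data set is back on a level 0 volume.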

4.3.3.2 Availability Management

The purpose of availability management is to provide backup copies of data sets for recovery scenarios. HSM can then restore the volumes and recover the data sets when they are needed.
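
At the data set level, the user-level TSO commands below sketch a command backup and a later recovery of a hypothetical data set; REPLACE tells HSM to overwrite the existing data set with the recovered version:

   HBACKDS 'PROD.CUSTOMER.MASTER'
   HRECOVER 'PROD.CUSTOMER.MASTER' REPLACE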

Incremental backups

This is the process of taking a copy of a data set only if it has changed since its last backup (the data set must have been opened for output; browsing it is not sufficient). For SMS managed volumes, HSM performs the backup according to the Management Class attributes of the individual data set. For non-SMS managed volumes, HSM performs the backup according to the ADDVOL definition.
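
The Management Class backup attributes take the following form (illustrative ISMF-style values); for non-SMS managed volumes, a hedged equivalent is AUTOBACKUP on the ADDVOL definition together with the SETSYS FREQUENCY and VERSIONS settings, again with hypothetical values:

   BACKUP FREQUENCY . . . . . . . . . . . : 0
   NUMBER OF BACKUP VERS(DATA SET EXISTS) : 2
   ADMIN OR USER COMMAND BACKUP . . . . . : BOTH
   AUTO BACKUP  . . . . . . . . . . . . . : Y

   ADDVOL PRIM03 UNIT(3390) PRIMARY(AUTOBACKUP)
   SETSYS FREQUENCY(1) VERSIONS(2)

With BACKUP FREQUENCY set to 0, a changed data set is eligible for backup on every automatic backup cycle, and two backup versions are kept.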

Full volume dumps

A full volume dump backs up all data sets on a given volume by invoking DSS. HSM Dump Classes describe how often the process is activated, for example, daily, weekly, monthly, quarterly, or annually.
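
The following is a sketch of how a Dump Class might be defined and then used for a manual dump of a single volume; the class name, volume serial, and periods are hypothetical:

   DEFINE DUMPCLASS(WEEKLY FREQUENCY(7) RETENTIONPERIOD(28))
   BACKVOL VOLUMES(DB2V01) DUMP(DUMPCLASS(WEEKLY))

For automatic dumps of SMS managed volumes, the Dump Class names and the AUTO DUMP attribute are specified in the Storage Group definition.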

Aggregate backup

ABARS (aggregate backup and recovery support) is the process of backing up user defined groups of business critical data sets so that they can be recovered together should the need arise.
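
Once an aggregate group has been defined through ISMF, the backup itself is driven by commands along the following lines; the aggregate group name is hypothetical, and the VERIFY form performs a test run without writing any output:

   ABACKUP PAYROLL VERIFY
   ABACKUP PAYROLL EXECUTE

The matching ARECOVER command is used to recover the aggregate, typically at another site.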

4 Very large data sets, in excess of 64,000 tracks, cannot be migrated to disk: they must be migrated to migration level 2 tape.


