The Workload Manager (WLM) component of z/OS or OS/390 provides sysplex-wide workload management capabilities based on installation-specified performance goals and the business importance of the workloads. The Workload Manager tries to attain the performance goals through dynamic resource distribution. WLM provides the Parallel Sysplex cluster with the intelligence to determine where work needs to be processed and in what priority. The priority is based on the customer's business goals and is managed by sysplex technology.
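
For illustration, this is roughly how a response-time goal for a service class might appear in a WLM service definition report; the service class name and goal values are hypothetical, and actual definitions are entered through the WLM ISPF application rather than edited as text:

   Service Class ONLINE - Online transaction work   (hypothetical)
     Base goal:
     #  Duration   Imp  Goal description
     -  ---------  ---  -------------------------------
     1             1    90% complete within 00:00:00.500

Work assigned to such a service class would be managed so that 90% of its transactions complete within half a second, at the highest business importance.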

- Sysplex Failure Manager

The Sysplex Failure Management component of z/OS or OS/390 allows the installation to specify failure detection intervals and the recovery actions to be initiated if an image in the sysplex fails.
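
As a sketch of how such a policy might be defined, an SFM policy is created with the XCF administrative data utility, IXCMIAPU, run as a batch job step; the policy name and the choice of ISOLATETIME(0) below are illustrative only:

   //SFMPOL   EXEC PGM=IXCMIAPU
   //SYSPRINT DD   SYSOUT=*
   //SYSIN    DD   *
     DATA TYPE(SFM) REPORT(YES)
     DEFINE POLICY NAME(SFMPOL01) CONNFAIL(YES) REPLACE(YES)
       SYSTEM NAME(*)
         ISOLATETIME(0)
   /*

With a policy like this active, a system that is detected as failed is isolated without delay, so the surviving images can release the resources it held.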

- Automatic Restart Manager

The Automatic Restart Manager (ARM), a component of z/OS or OS/390, enables fast recovery of subsystems that might hold critical resources at the time of a failure. If other instances of the subsystem in the Parallel Sysplex need any of those critical resources, fast recovery makes the resources available sooner. Although automation packages are also used to restart a failed subsystem and resolve such deadlocks, ARM is activated closer to the time of failure.
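
ARM restart behavior is driven by an ARM policy, which can be defined with the same IXCMIAPU utility used for other couple data set policies. The sketch below shows only the policy statements; the policy, restart group, element, and system names are hypothetical:

     DATA TYPE(ARM)
     DEFINE POLICY NAME(ARMPOL01) REPLACE(YES)
       RESTART_GROUP(DB2GRP)
         TARGET_SYSTEM(SYS1,SYS2)
         ELEMENT(DB2AELEM)
           RESTART_ATTEMPTS(3)
           TERMTYPE(ALLTERM)

A registered element in restart group DB2GRP would be restarted up to three times, on SYS1 or SYS2, whether the element itself terminates or the whole system fails.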

- Cloning/symbolics

Cloning refers to replicating the hardware and software configurations across the different physical servers in the Parallel Sysplex. That is, an application that is going to take advantage of parallel processing might have identical instances running on all images in the Parallel Sysplex. The hardware and software supporting these applications could also be configured identically on all images in the Parallel Sysplex to reduce the amount of work required to define and support the environment.
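
System symbols are the mechanism that makes such cloning practical: each image substitutes its own values into shared parmlib members, JCL, and started task definitions. A minimal sketch of an IEASYMxx parmlib member follows, with hypothetical hardware, LPAR, system, and subsystem names:

   SYSDEF   HWNAME(CPC1)   LPARNAME(LPAR1)
            SYSNAME(SYS1)  SYSCLONE(01)
            SYMDEF(&DB2SSID='DB2A')
   SYSDEF   HWNAME(CPC1)   LPARNAME(LPAR2)
            SYSNAME(SYS2)  SYSCLONE(02)
            SYMDEF(&DB2SSID='DB2B')

A single set of definitions that refers to &SYSNAME., &SYSCLONE., or &DB2SSID. then resolves correctly on every image in the sysplex.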

Resource sharing

A number of base z/OS or OS/390 components exploit Coupling Facility shared storage, which provides an excellent medium for sharing component information for multi-image resource management. This exploitation, called IBM zSeries Resource Sharing, enables sharing of resources such as files, tape drives, consoles, and catalogs, with significant improvements in cost and performance, and simplified systems management. zSeries Resource Sharing delivers immediate value, even for customers who are not using data sharing, because the exploitation is delivered with the base z/OS or OS/390 software stack.
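
One representative exploiter is GRS star mode, which moves global resource serialization requests into a Coupling Facility lock structure. A hedged sketch of the two pieces involved is shown below; the structure size and CF names are placeholders chosen for illustration:

   GRS=STAR                            in the IEASYSxx parmlib member

   STRUCTURE NAME(ISGLOCK)             in the CFRM policy (defined with IXCMIAPU)
     SIZE(8192)
     PREFLIST(CF1,CF2)

The ISGLOCK lock structure lets every image resolve ENQ contention through the Coupling Facility instead of passing a ring message around the sysplex.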

Single system image

Even though there could be multiple servers and z/OS or OS/390 images in the Parallel Sysplex cluster, it is essential that the collection of images in the Parallel Sysplex appear as a single entity to the operator, the end user, the database administrator, and so on. A single system image ensures reduced complexity from both operational and definition perspectives.

Regardless of the number of images and the underlying hardware, the Parallel Sysplex cluster appears as a single system image from several perspectives:

- Data access, allowing dynamic workload balancing and improved availability

- Dynamic transaction routing, providing dynamic workload balancing and improved availability

- End-user interface, allowing access to an application as opposed to a specific image

- Operational interfaces that allow systems management across the sysplex from a single point of control (see the command example after this list)
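
A small illustration of that single point of control: from a console on any image, an operator can route a standard display command to every system in the sysplex with the MVS ROUTE command (the display target shown is just one example):

   RO *ALL,D XCF,SYSPLEX,ALL

Each system returns its own response, but the operator issues and reviews the command from one place.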


