Partial Memory Restart

In the rare event of a memory card failure, Partial Memory Restart enables the system to be restarted with only part of the original memory. In a one-book system, the failing card will be deactivated, after which the system can be restarted with the memory on the remaining memory card.

In a system with more than one book, all physical memory in the book containing the failing memory card is taken offline, allowing you to bring up the system with the remaining physical memory in the other books. In this way, processing can be resumed until a replacement memory card is installed.
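
The two behaviors can be summarized in a short sketch. The Python below is purely illustrative: the data structure (a list of per-book card sizes) and the function name are invented for this example and do not correspond to any actual firmware interface.

   def restartable_memory(books, failing_card):
       # Memory that a Partial Memory Restart can bring back online.
       # One-book system: only the failing card is deactivated.
       # Multi-book system: the whole book holding the failing card goes offline.
       book_of_fail, card_of_fail = failing_card
       if len(books) == 1:
           return sum(c for i, c in enumerate(books[0]) if i != card_of_fail)
       return sum(sum(cards) for b, cards in enumerate(books) if b != book_of_fail)

   # One book with two 16 GB cards, card 0 fails -> restart with 16 GB
   print(restartable_memory([[16, 16]], (0, 0)))
   # Two books, failing card in book 1 -> restart with book 0's 32 GB only
   print(restartable_memory([[16, 16], [16, 16]], (1, 1)))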

Memory error checking and correction (ECC) detects and corrects single-bit errors, as well as 2-bit errors that result from a chipkill failure. In addition, because of the memory structure design, errors caused by the failure of a single memory chip are corrected.
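
To illustrate the single-bit correction principle only, the following minimal Hamming(7,4) sketch shows how a syndrome pinpoints and repairs one flipped bit. The real z990 memory ECC is considerably stronger (it also handles the multi-bit errors of a chipkill failure) and is not implemented this way; the code is a teaching example, not the machine's algorithm.

   def encode(d):                          # d = [d1, d2, d3, d4] data bits
       d1, d2, d3, d4 = d
       p1 = d1 ^ d2 ^ d4                   # parity over positions 1,3,5,7
       p2 = d1 ^ d3 ^ d4                   # parity over positions 2,3,6,7
       p3 = d2 ^ d3 ^ d4                   # parity over positions 4,5,6,7
       return [p1, p2, d1, p3, d2, d3, d4] # codeword positions 1..7

   def correct(c):                         # c = 7-bit codeword, possibly corrupted
       s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
       s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
       s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
       syndrome = s1 + 2 * s2 + 4 * s3     # non-zero syndrome = position of the bad bit
       if syndrome:
           c[syndrome - 1] ^= 1            # flip the faulty bit back
       return [c[2], c[4], c[5], c[6]]     # recovered data bits

   word = encode([1, 0, 1, 1])
   word[5] ^= 1                            # inject a single-bit error
   assert correct(word) == [1, 0, 1, 1]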

Memory background scrubbing provides continuous monitoring of storage for the correction of detected faults before the storage is used.
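
Conceptually, scrubbing is a background sweep that reads every word through the ECC logic and rewrites anything that was silently corrupted, as in the simplified sketch below (the correction hook is a stand-in for the hardware, not a real interface):

   def scrub_pass(memory, correct_word):
       # One background sweep: read each word, apply ECC correction,
       # and write back any word that held a latent (correctable) fault.
       corrected = 0
       for addr, word in enumerate(memory):
           fixed = correct_word(word)
           if fixed != word:
               memory[addr] = fixed
               corrected += 1
       return corrected

   # Toy usage: the lambda clears the low bit as a stand-in for real ECC correction.
   memory = [0b1010, 0b0111, 0b1100]
   print(scrub_pass(memory, lambda w: w & ~1))   # -> 1 word rewritten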

The memory cards use the latest fast 256 Mb and 512 Mb synchronous DRAMs. Memory access is interleaved between the memory cards to equalize memory activity across the cards.
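
The effect of interleaving can be pictured as a simple address-routing function: consecutive granules of the address space alternate between the cards of a book, so that sequential accesses load both cards evenly. The granule size and two-card assumption below are illustrative, not the machine's actual values.

   INTERLEAVE = 256                        # bytes per interleave granule (illustrative)

   def route(addr, n_cards=2):
       # Map a physical address to (card, offset) so consecutive granules
       # alternate between the memory cards.
       granule, within = divmod(addr, INTERLEAVE)
       return granule % n_cards, (granule // n_cards) * INTERLEAVE + within

   print(route(0))      # (0, 0)   -> card 0
   print(route(256))    # (1, 0)   -> card 1
   print(route(512))    # (0, 256) -> card 0 again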

Memory cards have 8 GB, 16 GB, or 32 GB of capacity. All memory cards installed in one book must have the same capacity. Books may have different memory sizes, but the card size of the two cards per book must always be the same.
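
These configuration rules are easy to express as a validity check, sketched below for illustration (the function is hypothetical, not an actual configuration tool):

   VALID_CARD_GB = {8, 16, 32}

   def check_book(cards_gb):
       # One book holds two memory cards of a supported size, and both cards
       # must have the same capacity; different books may use different sizes.
       if len(cards_gb) != 2:
           raise ValueError("a book holds two memory cards")
       if any(c not in VALID_CARD_GB for c in cards_gb):
           raise ValueError("card capacity must be 8, 16, or 32 GB")
       if cards_gb[0] != cards_gb[1]:
           raise ValueError("both cards in a book must have the same capacity")

   check_book([16, 16])          # valid
   check_book([32, 32])          # valid (a different book may use larger cards)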

The total installed capacity may provide more physical memory than is required for a configuration; Licensed Internal Code Configuration Control (LIC-CC) determines how much memory is used from each card. The sum of the LIC-CC enabled memory from each card is the amount available for use in the system.
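
A small worked example makes the distinction between installed and usable memory concrete; the GB figures are invented for illustration and do not come from the guide's configuration tables.

   # Physical capacity installed on each card vs. the amount LIC-CC enables on it.
   cards = [
       {"book": 0, "physical": 16, "licensed": 12},
       {"book": 0, "physical": 16, "licensed": 12},
       {"book": 1, "physical": 32, "licensed": 20},
       {"book": 1, "physical": 32, "licensed": 20},
   ]
   installed = sum(c["physical"] for c in cards)
   usable = sum(c["licensed"] for c in cards)     # memory available for use
   print(f"{installed} GB installed, {usable} GB enabled by LIC-CC")   # 96 GB installed, 64 GB enabled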

Memory allocation

Memory assignment, or allocation, is done at Power-on Reset (POR) when the system is initialized. PR/SM is responsible for the memory assignments, because it controls the resource allocation of the CPC. Table 2-1 on page 28 shows the distribution of physical memory across books when a system is initially installed with the amounts of memory shown in the first column. The table does not, however, indicate where the initial memory is allocated; memory allocation is done as evenly as possible across all installed books.
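
A simplified view of "as evenly as possible" is sketched below. The real PR/SM algorithm is not published in this guide, so the code only illustrates the balancing idea under the constraint that no book can hold more memory than it physically contains.

   def allocate(purchased_gb, book_capacity_gb):
       # Spread purchased memory evenly across the installed books,
       # never exceeding the physical memory present in any one book.
       alloc = [0] * len(book_capacity_gb)
       for _ in range(purchased_gb):
           candidates = [i for i, a in enumerate(alloc) if a < book_capacity_gb[i]]
           if not candidates:
               raise ValueError("purchased memory exceeds installed physical memory")
           alloc[min(candidates, key=lambda i: alloc[i])] += 1   # least-loaded book first
       return alloc

   # 48 GB purchased on a two-book system with 32 GB physical per book
   print(allocate(48, [32, 32]))   # -> [24, 24]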

PR/SM has knowledge of the amount of purchased memory and how it relates to the available physical memory in each of the installed books. PR/SM has control over all physical memory and therefore is able to make physical memory available to the configuration when a book is non-disruptively added. PR/SM also controls the reassignment of the content of a specific physical memory array in one book to a memory array in another book. This is known as the Memory Copy/Reassign function.
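
The Memory Copy/Reassign idea can be shown with a toy model: the content of one physical array is copied to a spare array in another book and the logical mapping is repointed. The names and structures below are invented for illustration; the real function is implemented in Licensed Internal Code under PR/SM control.

   def copy_reassign(arrays, mapping, src, dst):
       # Copy the data, repoint every logical reference, then retire the source.
       arrays[dst][:] = arrays[src]
       for logical, physical in mapping.items():
           if physical == src:
               mapping[logical] = dst
       arrays[src] = [0] * len(arrays[src])      # source array can now be taken offline

   arrays = {"book0-array2": [1, 2, 3], "book1-spare": [0, 0, 0]}
   mapping = {"LPAR-A": "book0-array2"}
   copy_reassign(arrays, mapping, "book0-array2", "book1-spare")
   print(mapping)   # {'LPAR-A': 'book1-spare'}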

Due to the memory allocation algorithm, systems that undergo a number of MES upgrades for memory can have a variety of memory card mixes across the books of the system. In the unlikely event that memory fails, it is technically feasible to Power-on Reset the system with the remaining memory resources (see “Partial Memory Restart” on page 54). After the Power-on Reset, both the distribution of memory across the books and the total amount of memory are different.

Capacity Upgrade on Demand (CUoD) for memory can be used to order more memory than is needed on the initial model but that will be required on the target model; see “Memory upgrades”.
