IBM HS21 specifications

Models: HS21

Workhorse 2-socket dual- and quad-core Intel Xeon blade server

with the blade servers, using the same blade slots. Up to four chassis can be installed in an industry-standard 42U rack, for a total of up to 56 30mm blade servers per rack.
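
The density figure follows from simple arithmetic. Below is a minimal sketch of that calculation in Python; the 9U chassis height is an assumption not stated on this page, while the 14-blade-per-chassis figure comes from the blower-module description later in this section.

    # Back-of-the-envelope rack density for BladeCenter H.
    RACK_HEIGHT_U = 42          # industry-standard rack
    CHASSIS_HEIGHT_U = 9        # assumed chassis height; not stated on this page
    BLADES_PER_CHASSIS = 14     # blade bays per chassis, per the text below

    chassis_per_rack = RACK_HEIGHT_U // CHASSIS_HEIGHT_U      # 4
    blades_per_rack = chassis_per_rack * BLADES_PER_CHASSIS   # 56
    print(chassis_per_rack, blades_per_rack)                   # 4 56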

Up to ten module slots for communication and I/O switches or bridges — The modules interface with all of the blade servers in the chassis and eliminate the need for external switches and expensive, cumbersome cabling. All connections are made internally via the midplane. Two module slots are reserved for hot-swap/redundant Gigabit Ethernet switch modules. Two slots support either high-speed bridge modules or legacy Gigabit Ethernet, Myrinet, Fibre Channel, InfiniBand and other switch modules. Two slots are dedicated to bridge modules. Four additional slots are dedicated to high-speed bridge modules. All modules, when installed in pairs, offer load balancing and failover support.
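
For quick reference, the slot allocation just described tallies to the ten module slots. This is only a sketch; the dictionary keys are shorthand for the categories named in the text above.

    # Tally of the ten module slots described above.
    slot_allocation = {
        "Gigabit Ethernet switch (hot-swap/redundant)": 2,
        "high-speed bridge or legacy switch": 2,
        "bridge module": 2,
        "high-speed bridge module": 4,
    }
    assert sum(slot_allocation.values()) == 10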

Integrated switch and bridge modules mean that no additional rack “U” space is required.

Two module bays for Advanced Management Modules — The new management module provides advanced systems management and KVM capabilities not only for the chassis itself, but also for all of the blades and other modules installed in the chassis. The Advanced Management Module provides capabilities similar to the IBM Remote Supervisor Adapter II used in stand-alone xSeries rack and tower servers. New features include concurrent KVM (cKVM) and media tray access, an external Serial over LAN connector, more memory, a more powerful onboard processor, industry-standard management interfaces (SMASH/CLP/CIM/HPI), USB virtualization, network failover and backward compatibility with the original Management Module, among others. The features of the module can be accessed either locally or remotely across a network. One module comes standard. A second module can be added for hot-swap/redundancy and failover. The module uses a USB connection, rather than the PS/2 connection of the original Management Module.

Two module bays for Blower Modules — Two hot-swap/redundant blower modules come standard with the chassis. They are capable of providing efficient cooling for up to 14 blades. These modules eliminate the need for each blade and switch to contain its own fans. The blowers are more energy efficient than dozens or hundreds of smaller fans would be, and they offer far fewer potential points of failure. BladeCenter H also includes up to twelve additional hot-swap/redundant fans to cool the power supplies and high-speed switch modules.

Four module bays for Power Modules — BladeCenter H ships with two 2900W high-efficiency hot-swap/redundant power modules (upgradeable to four), capable of handling the power needs of the entire chassis, including future higher-wattage processors. Each power module includes a customer-replaceable hot-swap/redundant fan pack (3 fans) for additional cooling capability.

A hot-swappable Media Tray containing a DVD-ROM drive, two USB 2.0 ports, and a light path diagnostic panel — The media tray is shared by all the blades in the server. This reduces unnecessary parts (and reduces the number of parts that can fail). In the event of a Media Tray failure, the tray can be swapped for another. While the tray is offline, the servers in the chassis can remotely access the Media Tray in another chassis. The light path diagnostic panel contains LEDs that identify which internal components are in need of service.

A serial breakout port with optional cable — This provides a direct serial connection to each blade server installed in the chassis, as an alternative to Serial over LAN. (Note: This applies only to newer blades that include this capability.)

It is extremely important to include all infrastructure costs when comparing a BladeCenter H solution to a competitor’s offering, not just the cost of the chassis and the blades. The high density and level of integration of the BladeCenter H chassis can greatly reduce the cost of the overall solution. For example, because up to four chassis fit in a rack, up to 56 blade servers can be installed per rack. Also, because up to ten Ethernet, Myrinet, Fibre Channel, InfiniBand or other bridges and switches can be installed per chassis, up to 40 switches and bridges can be installed per rack without reserving any “U” space for switches, unlike the competition. (And the integrated switches may be less expensive than external, self-powered switches.) The number of power distribution units (PDUs) needed per rack may also be reduced, because there are fewer discrete devices to plug in. In addition, because all the blades are connected to all the switches inside the chassis, no external Ethernet or other communication cables are needed to connect the blades, bridges and switches. (Only the few cables needed to connect the switches to the outside world are required.) This saves not only the cost of numerous cables per rack, but also the clutter and effort of routing them. An added bonus is potentially much freer airflow behind the rack, due to fewer cables.
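
To make the savings concrete, here is a rough per-rack tally based on the figures above. It is only a sketch: the assumption of two cabled network ports per server in the non-integrated alternative is an illustrative figure, not one taken from this document.

    # Illustrative per-rack infrastructure tally for BladeCenter H.
    CHASSIS_PER_RACK = 4
    BLADES_PER_RACK = 56
    SWITCH_BAYS_PER_CHASSIS = 10
    CABLED_PORTS_PER_SERVER = 2   # assumption for the non-integrated design

    switches_per_rack = CHASSIS_PER_RACK * SWITCH_BAYS_PER_CHASSIS   # up to 40, zero rack "U" consumed
    cables_avoided = BLADES_PER_RACK * CABLED_PORTS_PER_SERVER       # ~112 blade-to-switch cables
    print(switches_per_rack, cables_avoided)                          # 40 112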

BladeCenter T is a carrier-grade, rugged 8U (20-inch deep) chassis designed for challenging central office and networking environments. It provides:

NEBS 3/ETSI compliance — Designed for the Network Equipment Provider (NEP)/Service Provider (SP) environment. Also ideal for government/military, aerospace, industrial automation/robotics, medical imaging and finance.

Designed for Carrier-Grade Linux — Several distributions are supported, including SUSE and Red Hat.

Reduced single points of failure — Many major components (either standard or optional) are hot-swappable and/or redundant. Servers and modules can be configured for automatic failover.

Please see the Legal Information section for important notices and information.
