Workhorse 2-socket single- or dual-core Intel Xeon blade server modules.

Four module bays for Power Modules — BladeCenter H ships with two 2900W high-efficiency hot-swap/redundant power modules (upgradeable to four), capable of handling the power needs of the entire chassis, including future higher-wattage processors. Each power module includes a customer-replaceable hot-swap/redundant fan pack (3 fans) for additional cooling capability.

A hot-swappable Media Tray containing a DVD-ROM drive, two USB 2.0 ports, and a light path diagnostic panel — The Media Tray is shared by all the blades in the chassis. This reduces unnecessary parts (and the number of parts that can fail). If the Media Tray fails, it can be swapped for another; while the tray is offline, the servers in the chassis can remotely access the Media Tray in another chassis. The light path diagnostic panel contains LEDs that identify which internal components need service.

A serial breakout port with optional cable — This provides a direct serial connection to each blade server installed in the chassis, as an alternative to Serial over LAN. (Note: This applies only to newer blades that include this capability.)

It is extremely important to include all infrastructure costs when comparing a BladeCenter H solution to a competitor’s offering, not just the cost of the chassis and the blades. The high density and level of integration of the BladeCenter H chassis can greatly reduce the cost of the overall solution. For example, because up to four chassis fit in a rack, up to 56 blade servers can be installed per rack. Also, because up to ten Ethernet, Myrinet, Fibre Channel, InfiniBand or other bridges and switches can be installed per chassis, up to 40 switches and bridges can be installed per rack without reserving any “U” space for switches, unlike the competition. (And the integrated switches may be less expensive than external, self-powered switches.) The number of power distribution units (PDUs) needed per rack may also be reduced, because there are fewer discrete devices to plug in. In addition, because all the blades are connected to all the switches inside the chassis, no external Ethernet or other communication cables are needed to connect the blades, bridges and switches; only the few cables that connect the switches to the outside world are required. This saves not only the cost of numerous cables per rack, but also the clutter and effort of routing them. An added bonus is potentially much freer airflow behind the rack, due to fewer cables.
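The rack-level arithmetic behind these figures is easy to check. The short Python sketch below is illustrative only: the blade, chassis and switch-bay counts come from the text above, while the per-blade cable estimate is an assumption used purely for comparison, not an IBM figure.

```python
# Illustrative rack-level arithmetic for a fully populated BladeCenter H rack.
# Blade, chassis and switch-bay counts are taken from the text above; the
# per-blade cable estimate is an assumption used only for comparison.

BLADES_PER_CHASSIS = 14        # 30mm blade bays per BladeCenter H chassis
CHASSIS_PER_RACK = 4           # chassis that fit in a standard 42U rack
SWITCH_BAYS_PER_CHASSIS = 10   # bays for Ethernet/Fibre Channel/InfiniBand switches and bridges

blades_per_rack = BLADES_PER_CHASSIS * CHASSIS_PER_RACK          # 56
switches_per_rack = SWITCH_BAYS_PER_CHASSIS * CHASSIS_PER_RACK   # 40

# Assumption: a comparable rack of discrete 1U servers would need roughly one
# cable per server per fabric (say, two Ethernet plus one Fibre Channel).
ASSUMED_FABRICS_PER_BLADE = 3
cables_kept_internal = blades_per_rack * ASSUMED_FABRICS_PER_BLADE   # 168

print(f"Blade servers per rack:       {blades_per_rack}")
print(f"Integrated switches per rack: {switches_per_rack}")
print(f"Data cables kept internal:    ~{cables_kept_internal}")
```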

BladeCenter T is a carrier-grade, rugged 8U (20-inch deep) chassis designed for challenging central office and networking environments. It provides:

NEBS 3/ETSI compliance — Designed for the Network Equipment Provider (NEP)/Service Provider (SP) environment. Also ideal for government/military, aerospace, industrial automation/robotics, medical imaging and finance.

Designed for Carrier-Grade Linux — Several distributions are supported, including SUSE and Red Hat.

Reduced single points of failure — Many major components (either standard or optional) are hot-swappable and/or redundant. Servers and modules can be configured for automatic failover to backups. It also offers an extended product lifecycle (3 years in production from the date of General Availability, plus another 5 years of support).

Backward compatibility — Every blade, switch and passthru module released by IBM for the original BladeCenter chassis since 2002 is supported in the BladeCenter T chassis.

Eight 30mm blade slots — These hot-swap slots can support any combination of 8 Low Voltage HS20 (Xeon) blade servers, or 7 regular-voltage HS20, LS20 (Opteron), and JS20/JS21 (PowerPC 970FX/MP) blade servers, or 4 double-wide (60mm) HS40 or Cell BE processor-based blade servers, or a mixture of 30mm and 60mm blades. The same slots also accept optional 30mm SCSI Storage Expansion Units and/or PCI I/O Expansion Unit IIs in combination with the blade servers. Up to five chassis can be installed in an industry-standard 42U rack (or a telco rack), for a total of up to 40 30mm blade servers per rack (the slot arithmetic is sketched after this list).

Four module bays for communication and I/O switches — The modules interface with all of the blade servers in the chassis and eliminate the need for external switches or expensive, cumbersome cabling. All connections are done internally via the midplane. Two bays are reserved for hot-swap/redundant Gigabit Ethernet switch modules. The other two bays support additional Gigabit Ethernet modules, or Fibre Channel, InfiniBand and other switch modules. All modules, when installed in pairs, offer load balancing and failover support. Integrated switch modules mean that no extra rack “U space” is required.

Two module bays for Management Modules — The Management Module provides advanced systems management and KVM capabilities not only for the chassis itself, but also for all of the blades and other modules installed in the chassis. The Management Module provides capabilities similar to the IBM Remote Supervisor Adapter II used in stand-alone xSeries servers.
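To make the BladeCenter T slot arithmetic referenced in the blade-slot item above concrete, here is a minimal Python sketch. The slot and chassis counts come from the text; the example blade mix is hypothetical.

```python
# Minimal slot-packing sketch for BladeCenter T. Slot and chassis counts come
# from the text above; the example mix of blades is hypothetical.

SLOTS_PER_CHASSIS = 8   # 30mm hot-swap blade slots per BladeCenter T chassis
CHASSIS_PER_RACK = 5    # chassis per industry-standard 42U rack

def slots_used(single_wide: int, double_wide: int) -> int:
    """30mm blades occupy one slot each; 60mm (double-wide) blades occupy two."""
    return single_wide + 2 * double_wide

# Hypothetical mix: four 30mm blades plus two double-wide blades fill a chassis.
assert slots_used(single_wide=4, double_wide=2) == SLOTS_PER_CHASSIS

# Maximum 30mm blade density per rack, matching the figure in the text.
max_30mm_blades_per_rack = SLOTS_PER_CHASSIS * CHASSIS_PER_RACK
print(f"Maximum 30mm blades per rack: {max_30mm_blades_per_rack}")   # 40
```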
