
SMP Configuration Options

BladeSymphony 1000 supports two-socket (four-core) Intel Itanium Server Blades that can be combined into up to two 16-core SMP servers in a single chassis, eight four-core servers, or a mixture of SMP and single-module systems, reducing footprint and power consumption while increasing utilization and flexibility. SMP provides higher performance for applications that can take advantage of large memory and multiple processors, such as large databases or visualization applications.

The maximum SMP configuration supported by BladeSymphony 1000 is:

Four dual-core Intel Itanium Server Blades, for a total of 16 CPU cores

256 GB memory (64 GB per server blade x 4)

Eight gigabit NICs (2 on-board per server blade) connected to two internal gigabit Ethernet switches

Eight PCI-X slots (or 16 PCI-X slots with chassis B)
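
As a rough check on how these maximums add up, the sketch below sums the per-blade resources from the list above across a four-blade SMP partition. It is illustrative Python only: the ItaniumBlade class and aggregate() helper are hypothetical names rather than part of any BladeSymphony tooling, and attributing two PCI-X slots to each blade is an assumption obtained by dividing the chassis total by four.

# Illustrative sketch: aggregate per-blade resources for a four-blade SMP partition.
# Per-blade figures are taken from the list above; class and function names are
# hypothetical and not part of any BladeSymphony management interface.
from dataclasses import dataclass

@dataclass
class ItaniumBlade:
    sockets: int = 2           # two-socket blade
    cores_per_socket: int = 2  # dual-core Intel Itanium
    memory_gb: int = 64        # 64 GB memory per blade
    onboard_nics: int = 2      # two on-board gigabit NICs
    pci_x_slots: int = 2       # assumption: chassis total of eight slots split across four blades

def aggregate(blades):
    """Sum the resources of the blades joined into one SMP partition."""
    return {
        "cpu_cores": sum(b.sockets * b.cores_per_socket for b in blades),
        "memory_gb": sum(b.memory_gb for b in blades),
        "gigabit_nics": sum(b.onboard_nics for b in blades),
        "pci_x_slots": sum(b.pci_x_slots for b in blades),
    }

# Maximum supported SMP configuration: four blades joined into one partition.
print(aggregate([ItaniumBlade() for _ in range(4)]))
# -> {'cpu_cores': 16, 'memory_gb': 256, 'gigabit_nics': 8, 'pci_x_slots': 8}

With four blades the totals come out to 16 cores, 256 GB of memory, eight gigabit NICs, and eight PCI-X slots, matching the maximum configuration listed above.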

With its unique interconnect technology, BladeSymphony 1000 delivers a new level of flexibility in adding computing resources to adapt to changing business needs. BladeSymphony 1000 can address scalability requirements by scaling-out (horizontally), or by scaling-up (vertically). Scaling out is ideally suited to online and other front-end applications that can divide processing requirements across multiple servers. Scaling out can also provide load-balancing capabilities and higher availability through redundancy.

 

Figure 7. Scale-up capabilities: the SMP Interconnect and backplane join server blades into 4-way, 8-way, or 16-way servers, each running a single OS instance.

Scaling up is accomplished through SMP, shown in Figure 7. This approach is better suited to enterprise-class applications requiring 64-bit processing, high computational performance, and large memory addressability beyond that provided in a typical x86 environment. BladeSymphony 1000 SMP Interconnect technology and blade form factor allow IT staff to manage scale-up operations on their own, without a service call. The interconnect allows up to four server blades to be joined into a single server environment composed of the total resources (CPU, memory, and I/O) resident in each module.

NUMA Architecture

The Intel Itanium Server Blade supports two memory interleave modes: full interleave and non-interleave.

In full interleave mode, the additional latency of accessing memory on other server blades is averaged across all memory, including local memory, to provide a consistent access time. In non-interleave mode, a server blade has faster access to its local memory than to memory on other server blades. Both options are illustrated in Figure 8.
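
To make the trade-off concrete, the short sketch below compares the effective average memory latency of the two modes for a four-blade partition. The nanosecond figures and the 90% / 25% locality ratios are placeholder assumptions chosen for illustration, not measured BladeSymphony 1000 values.

# Illustrative latency model for full interleave vs. non-interleave (NUMA) mode.
# The latency values and the four-blade layout are placeholder assumptions.
LOCAL_NS = 100    # hypothetical latency to a blade's own memory
REMOTE_NS = 250   # hypothetical latency to memory on another blade
BLADES = 4        # memory spread evenly across a four-blade SMP partition

# Full interleave: accesses are striped across all blades, so every access sees
# the average of local and remote latency -- consistent, but higher than local.
full_interleave = (LOCAL_NS + (BLADES - 1) * REMOTE_NS) / BLADES

# Non-interleave: latency depends on where the data lives. If a fraction
# `local_hits` of accesses is satisfied from local memory, the average drops.
def non_interleave_avg(local_hits):
    return local_hits * LOCAL_NS + (1 - local_hits) * REMOTE_NS

print(f"full interleave:           {full_interleave:.1f} ns for every access")
print(f"non-interleave, 90% local: {non_interleave_avg(0.9):.1f} ns on average")
print(f"non-interleave, 25% local: {non_interleave_avg(0.25):.1f} ns on average")

Under these assumptions, full interleave gives every access the same 212.5 ns, while non-interleave mode rewards operating systems and applications that keep data close to the processor using it: with 90% local accesses the average falls to 115 ns, but with poor locality it approaches the fully interleaved figure.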
