
The host intervention code is tuned for the latest Itanium hardware features, minimizing the performance impact on guests.

Virtage offers two modes in which processor resources can be distributed among the different logical partitions: dedicated mode and shared mode, as illustrated in Figure 31.

[Figure 31: two panels, Dedicated Mode and Shared Mode, showing CPU, Memory, NIC, and PCI resources assigned to Partition 1, Partition 2, and Partition 3.]

Figure 31. Share or isolate CPU and I/O resources to any partition in the same environment

Dedicated Mode

Individual processor cores can be assigned to a specific logical partition. Dedicating a core to an LPAR helps ensure that no other partition can take CPU resources away from the assigned partition. This method is highly recommended for environments that require exclusive CPU processing, such as databases or real-time applications.
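
To make the idea concrete, the following is a minimal sketch assuming a simple core-ownership model; the partition names and interface are hypothetical and do not represent the Virtage configuration interface. Once a core is dedicated to one LPAR, no other partition can be given that core.

```python
# Hypothetical model of dedicated-mode core assignment (not the Virtage API):
# each physical core is owned by at most one logical partition, so the owning
# partition never competes for that core's cycles.

class DedicatedCorePool:
    def __init__(self, core_ids):
        self.owner = {core: None for core in core_ids}   # core -> partition

    def assign(self, core, partition):
        if self.owner.get(core) is not None:
            raise ValueError(f"core {core} is already dedicated to {self.owner[core]}")
        self.owner[core] = partition

pool = DedicatedCorePool(core_ids=[0, 1, 2, 3])
pool.assign(0, "LPAR1")   # e.g. a database partition gets core 0 exclusively
pool.assign(1, "LPAR1")
pool.assign(2, "LPAR2")
# pool.assign(2, "LPAR3") would raise: core 2 is already dedicated to LPAR2
```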

Shared Mode

A single processor core or a group of cores can be assigned to multiple logical partitions, which then share the assigned processing resources. This allows multiple partitions to share one or more CPU cores to increase utilization. Virtage can also divide a single processor core among logical partitions for workloads that are smaller than one core.

Each partition is assigned a service ratio of the processor. Another advantage is the ability to dynamically change the service ratio for any given partition. The system monitors partition activity; if one partition is idle while another is using 100 percent of its share, the system temporarily increases the busy partition's service ratio until the idle partition requires CPU resources again.
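
As an illustration only (the internals of the Virtage scheduler are not described in this paper), the sketch below shows one way a proportional-share scheduler can redistribute an idle partition's share to busy partitions and then restore the configured ratios once the idle partition has work again. The partition names and weights are hypothetical.

```python
# Illustrative proportional-share scheduling of a processor among partitions.
# Configured weights are honored, except that the share of an idle partition
# is temporarily redistributed to the partitions that have work.

def effective_shares(configured, runnable):
    """configured: {partition: weight}, runnable: partitions that have work."""
    active_total = sum(w for p, w in configured.items() if p in runnable)
    if active_total == 0:
        return {p: 0.0 for p in configured}
    return {p: (w / active_total if p in runnable else 0.0)
            for p, w in configured.items()}

configured = {"LPAR1": 50, "LPAR2": 30, "LPAR3": 20}

# LPAR3 is idle, so its 20% share is redistributed to the busy partitions:
shares = effective_shares(configured, runnable={"LPAR1", "LPAR2"})
print({p: round(s, 3) for p, s in shares.items()})
# -> {'LPAR1': 0.625, 'LPAR2': 0.375, 'LPAR3': 0.0}

# Once LPAR3 is runnable again, the configured 50/30/20 ratios apply:
shares = effective_shares(configured, runnable={"LPAR1", "LPAR2", "LPAR3"})
print({p: round(s, 3) for p, s in shares.items()})
# -> {'LPAR1': 0.5, 'LPAR2': 0.3, 'LPAR3': 0.2}
```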

High I/O Performance

When deployed on Itanium processor-based server blades, Virtage employs direct execution, as used in the mainframe world, leveraging Virtage technology embedded in the Hitachi Node Controller. The Virtage I/O hardware assist feature passes guest I/O requests through with minimal modification and therefore does not add an extra layer to guest I/O accesses. Users can run standard I/O device drivers as they are and take advantage of the latest device functionality with less overhead. The hardware assist feature simply modifies the memory addresses in the I/O requests.
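
Conceptually, the address rewrite can be pictured as in the sketch below. This is a hypothetical illustration that assumes each partition's guest memory maps to one contiguous host region; it is not a description of the Node Controller's actual logic. Only the DMA address in the guest I/O request is translated; everything else passes through unchanged, which is why unmodified standard device drivers work.

```python
# Conceptual sketch of address-only remapping of a guest I/O request.
# Simplifying assumption: each partition's memory is one contiguous host
# region, so translation is a fixed offset plus a bounds check.

PARTITION_MEM_BASE = {"LPAR1": 0x1_0000_0000, "LPAR2": 0x3_0000_0000}
PARTITION_MEM_SIZE = {"LPAR1": 0x2_0000_0000, "LPAR2": 0x2_0000_0000}

def remap_io_request(partition, request):
    """Rewrite only the DMA address of a guest I/O request; the command,
    length, and device fields pass through unmodified."""
    guest_addr = request["dma_addr"]
    if guest_addr + request["length"] > PARTITION_MEM_SIZE[partition]:
        raise ValueError("DMA target lies outside the partition's memory")
    host_addr = PARTITION_MEM_BASE[partition] + guest_addr
    return {**request, "dma_addr": host_addr}

req = {"opcode": "READ", "dma_addr": 0x0010_0000, "length": 4096, "device": "fc0"}
remapped = remap_io_request("LPAR2", req)
print(hex(remapped["dma_addr"]))   # -> 0x300100000; opcode, length, device unchanged
```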

Also, because BladeSymphony 1000 can be configured with physical PCI slots, I/O can be assigned by slot to any given partition. Any partition can therefore be assigned any number of slots, and each partition can be fitted with any standard PCI interface cards. Because the PCI slots are assigned to the partition, each environment can support its own PCI interface cards.
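
For illustration, a slot-level assignment might be modeled as below; the slot names, partition names, and card types are hypothetical examples rather than an actual BladeSymphony configuration.

```python
# Hypothetical slot-to-partition assignment: because I/O is assigned per
# physical PCI slot, each partition can carry a different number and mix of
# standard PCI cards.
pci_slots = {
    "slot0": {"partition": "LPAR1", "card": "Fibre Channel HBA"},
    "slot1": {"partition": "LPAR1", "card": "Gigabit Ethernet NIC"},
    "slot2": {"partition": "LPAR2", "card": "Gigabit Ethernet NIC"},
    "slot3": {"partition": "LPAR3", "card": "Fibre Channel HBA"},
}

def slots_for(partition):
    return [slot for slot, cfg in pci_slots.items() if cfg["partition"] == partition]

print(slots_for("LPAR1"))   # -> ['slot0', 'slot1']
```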
