
L3 Cache Copy Tag

The data residing in the caches and main memory of the Intel Itanium Server Blades is kept consistent by a snooping cache coherency protocol. When one of the Intel Itanium processors needs to access memory, the Hitachi Node Controller broadcasts the requested address, and the other Node Controllers in that partition (SMP) listen for (snoop) those broadcasts. Each Node Controller tracks the memory addresses currently held in its local processors’ on-chip caches by keeping a tag for every cache entry. If one of the processors holds the requested data in its cache, it initiates a cache-to-cache transfer. This reduces latency by avoiding a trip to main memory and maintains consistency by sending the requesting processor the most current copy of the data. To save bandwidth on the processors’ front side bus, the Node Controller uses the L3 Cache Copy Tags to determine which address broadcasts its two local processors actually need to see: if a requested address is not present in either local processor’s cache, the Node Controller filters the broadcast and does not forward it to the local processors. This process is illustrated in Figure 10.

 

Figure 10. L3 cache copy tag process

[Diagram: Node 0 and Node 1, each with two Itanium 2 processors, an L3 Cache Copy Tag, a Node Controller, memory controllers, and main memory. Legend: (1) cache consistency control within a local node; (2) memory address broadcasting; (3) parallel memory access processing; (3)’ cache consistency control over remote nodes; (4) memory data transfer or cache data transfer.]
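The filtering role of the copy tags can be sketched in code. The following is a minimal, hypothetical model, not Hitachi's implementation: the class and method names are illustrative, and the 128-byte cache line size is an assumption chosen to match the Itanium 2 L3 line size. The node controller forwards a snooped address to a local processor only when that processor's copy tags show the line may be cached locally.

```python
CACHE_LINE_SIZE = 128  # bytes; illustrative (Itanium 2 L3 line size)

class CopyTagNodeController:
    """Hypothetical model of a node controller's copy-tag snoop filter."""

    def __init__(self, num_cpus):
        # One copy-tag set per local processor: the cache-line addresses
        # currently tracked as resident in that CPU's caches.
        self.copy_tags = [set() for _ in range(num_cpus)]

    @staticmethod
    def line_of(addr):
        # Tags track whole cache lines, not individual byte addresses.
        return addr // CACHE_LINE_SIZE

    def record_fill(self, cpu, addr):
        """Update the copy tag when a local CPU caches a line."""
        self.copy_tags[cpu].add(self.line_of(addr))

    def record_evict(self, cpu, addr):
        """Clear the copy tag when a local CPU evicts a line."""
        self.copy_tags[cpu].discard(self.line_of(addr))

    def snoop(self, addr):
        """Handle an address broadcast from a remote node controller.

        Returns the local CPUs that must see the snoop; an empty list
        means the request is filtered and the local front side bus is
        never disturbed.
        """
        line = self.line_of(addr)
        return [cpu for cpu, tags in enumerate(self.copy_tags)
                if line in tags]
```

For example, after CPU 0 caches a line at address 0x1000, a remote snoop of 0x1000 (or any address in the same 128-byte line) is forwarded to CPU 0, which can then supply the data by a cache-to-cache transfer, while a snoop of an uncached address is filtered and generates no front side bus traffic.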

Intel Itanium I/O Expansion Module

Some applications require more PCI slots than the two available per server blade. The Intel Itanium I/O Expansion Module provides additional slots without the expense of additional server blades: pairing it with an Intel Itanium Server Blade increases the number of PCI expansion-card slots that can be connected to the blade. The Itanium I/O Expansion Module cannot be used with the Intel Xeon Server Blade.

The Intel Itanium I/O Expansion Module increases the number of PCI I/O slots to four or eight, depending on the chassis type. The type A chassis enables connection to four PCI I/O slots (Figure 11), and the type B chassis enables up to eight PCI I/O slots (Figure 12).

18 BladeSymphony 1000 Architecture White Paper

www.hitachi.com
