[Figure: four server blades, each with a Node Controller (NDC) holding an L3 cache with copy tags, joined by low-latency point-to-point links. Labeled bandwidths: processor bus 6.4 GB/s (FSB 400 MHz) or 10.6 GB/s (FSB 667 MHz); node interconnect 4.8 GB/s (FSB 400 MHz) or 5.3 GB/s (FSB 667 MHz); DDR2 memory bus, via each memory controller (MC), 4.8 GB/s (FSB 400 MHz) or 5.3 GB/s (FSB 667 MHz); PCI bus 2 GB/s x3 to PCI bridges and PCI slots.]
Figure 6. Hitachi Node Controller connects multiple server blades
By dividing the SMP system across several server blades, the memory bus contention problem is solved by virtue of the distributed design. A processor's access to its local memory remains fast and direct, while access to memory on another blade must pass through the Node Controller interconnect.
While there is a penalty for accessing remote memory, a number of operating systems have been enhanced to improve performance on NUMA system designs. These operating systems take into account where data is located when scheduling tasks to run on CPUs, using the closest CPU where possible. Some operating systems can also relocate data in memory to move it closer to the processors where it is needed. For operating systems that are not NUMA-aware, the BladeSymphony 1000 offers a number of memory interleaving options that can improve performance.
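To illustrate the kind of placement decision a NUMA-aware system makes, the following sketch uses the Linux libnuma API to pin a thread to one node and allocate its working set from that node's local memory. This is a generic, hypothetical example: the node number, buffer size, and use of libnuma are assumptions for illustration, not part of the BladeSymphony firmware or of any particular operating system's scheduler.

/*
 * Minimal sketch (not BladeSymphony-specific): keep data close to the
 * CPU that uses it, using the Linux libnuma API. Link with -lnuma.
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not supported on this system\n");
        return 1;
    }

    int node = 0;                      /* target node; assumption: node 0 exists */
    size_t size = 64 * 1024 * 1024;    /* 64 MB working set */

    /* Restrict the current thread to CPUs on the chosen node ... */
    if (numa_run_on_node(node) != 0) {
        perror("numa_run_on_node");
        return 1;
    }

    /* ... and allocate memory from that node's local DDR2, so accesses
       stay off the node interconnect. */
    char *buf = numa_alloc_onnode(size, node);
    if (buf == NULL) {
        fprintf(stderr, "allocation on node %d failed\n", node);
        return 1;
    }

    memset(buf, 0, size);              /* touch pages so they are placed locally */
    printf("64 MB allocated and touched on node %d\n", node);

    numa_free(buf, size);
    return 0;
}

Because both the thread and its buffer are bound to the same node, every memory reference is satisfied from local DDR2 rather than crossing the Node Controller links, which is the behavior a NUMA-aware scheduler tries to approximate automatically.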
The Node Controllers can connect to up to three other Node Controllers, providing a low-latency, point-to-point interconnect among up to four nodes.
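As a rough, back-of-the-envelope sketch, and assuming the node bandwidth quoted in Figure 6 is per point-to-point link (the figure does not state this explicitly), the aggregate interconnect bandwidth available to each Node Controller across its three links would be:

\[
B_{\text{node}} \approx 3 \times 4.8\ \text{GB/s} = 14.4\ \text{GB/s} \quad (\text{FSB } 400\ \text{MHz}),
\qquad
B_{\text{node}} \approx 3 \times 5.3\ \text{GB/s} = 15.9\ \text{GB/s} \quad (\text{FSB } 667\ \text{MHz}).
\]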