Dedicated Cache: This SMP design supports a dedicated L2 cache for each processor, which yields a higher cache hit rate than a shared L2 cache. Adding a second processor with a dedicated L2 cache can improve performance by as much as 80%. With current technology, adding more processors can further increase performance in an almost linear fashion, up to the point where adding processors yields no further gain and can actually decrease performance due to excessive overhead.
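The diminishing-returns behavior described above can be sketched with a simple model in which each processor adds useful work but also adds coordination overhead (bus contention, cache coherency traffic) that grows with the number of processor pairs. The overhead fraction below is an assumed illustrative parameter, not a measured value for this server.

```python
def speedup(n_cpus, overhead_per_pair=0.05):
    """Illustrative speedup model: useful work grows with n, while
    overhead grows roughly with the number of processor pairs,
    n * (n - 1). All constants are assumptions for illustration."""
    return n_cpus / (1 + overhead_per_pair * n_cpus * (n_cpus - 1))

# With this assumed overhead, two processors give roughly the 80%
# improvement cited above, and the curve flattens and then falls:
for n in (1, 2, 4, 6):
    print(n, round(speedup(n), 2))
```

Under these assumptions the model peaks around four to five processors and declines beyond that, matching the qualitative behavior described in the text.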

The IBM PC Server 720 implements SMP with dedicated caches.

Figure 2 shows SMP with dedicated secondary cache.

[Figure: two Pentium processors, each with a dedicated 512KB secondary (level 2) cache, connected to main memory]

Figure 2. SMP with Dedicated Secondary Cache

Dedicated caches are also more complicated to manage. Care must be taken to ensure that a processor needing data always gets the latest copy of that data. If the data happens to reside in another processor's cache, then the two caches must be brought into sync with one another.

The cache controllers maintain this coherency by communicating with one another using a special protocol called MESI, which stands for Modified, Exclusive, Shared, or Invalid. These refer to tags that are maintained for each line of cache, and indicate the state of each line.
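The state transitions the MESI tags encode can be sketched for a single cache line shared by two processors' dedicated caches. This is a minimal illustrative model of the protocol's read and write rules, not the PC Server 720's actual controller logic; all names here are hypothetical.

```python
# MESI states tracked per cache line.
MODIFIED, EXCLUSIVE, SHARED, INVALID = "M", "E", "S", "I"

class CacheLine:
    def __init__(self):
        self.state = INVALID  # lines start out invalid

def read(local, remote):
    """Local processor reads the line; the other cache snoops the bus."""
    if local.state == INVALID:
        if remote.state != INVALID:
            # Another cache holds the line: both copies become Shared.
            # (A Modified line would be written back to memory first.)
            remote.state = SHARED
            local.state = SHARED
        else:
            # No other cache has it: load Exclusive from memory.
            local.state = EXCLUSIVE
    # Reads of M, E, or S lines hit locally with no state change.

def write(local, remote):
    """Local processor writes the line; stale copies are invalidated."""
    if remote.state != INVALID:
        remote.state = INVALID  # snoop hit: invalidate the other copy
    local.state = MODIFIED

# Example: CPU0 reads (Exclusive), CPU1 reads (both Shared),
# then CPU1 writes (Modified locally, Invalid in CPU0's cache).
cpu0, cpu1 = CacheLine(), CacheLine()
read(cpu0, cpu1)
read(cpu1, cpu0)
write(cpu1, cpu0)
print(cpu0.state, cpu1.state)  # → I M
```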

The implementation of MESI in the IBM PC Server 720 supports two sets of tags for each cache line, which allows faster cache operation than when only one set of tags is provided.

1.3.2 Memory Interleaving

Another technique used to reduce effective memory access time is interleaving. This technique greatly increases memory bandwidth when memory access is sequential, as it is during program instruction fetches.
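The idea behind interleaving can be shown with a short sketch: consecutive memory lines are assigned to memory banks in round-robin fashion, so sequential fetches can overlap while a busy bank recovers. The bank count and line size below are assumed for illustration, not the server's actual memory layout.

```python
LINE_SIZE = 32   # bytes per memory fetch (assumed)
BANKS = 4        # 4-way interleave (assumed)

def bank_for(address):
    """Map an address to a bank: consecutive lines land in
    consecutive banks, cycling round-robin through all banks."""
    return (address // LINE_SIZE) % BANKS

# Eight sequential fetches visit each bank once per cycle of four,
# so the next access can start while the previous bank is still busy.
fetches = [bank_for(a) for a in range(0, 8 * LINE_SIZE, LINE_SIZE)]
print(fetches)  # → [0, 1, 2, 3, 0, 1, 2, 3]
```

A purely random access pattern would gain far less, since successive accesses may repeatedly hit the same bank.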

NetWare Integration Guide
