The HP Virtual Array accepts new disks while the array is up and running and accepting I/Os, as with some other arrays. However, the HP Virtual Array takes it one step further. Once the disk is inserted, the array automatically includes that disk in the existing disk space and stripes all LUNs across that disk. This means that even without the creation of any additional LUNs, array performance will improve because of the additional available spindle. Only the HP Virtual Array automatically adds new disks to existing LUNs. Further, any newly created LUNs are also automatically spread across all the disks in the array, including the additional disk.
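To make the striping behavior concrete, here is a minimal sketch of how round-robin striping naturally pulls a newly added disk into every existing LUN. The names and the stripe depth are hypothetical assumptions for illustration; the array's actual mapping tables are internal.

# Minimal sketch of striping logic (hypothetical; the array's real
# mapping is internal).  Logical blocks rotate round-robin across every
# disk in the group, so adding a disk widens the stripe for all LUNs.

STRIPE_SIZE_BLOCKS = 128  # assumed stripe depth per disk, in blocks

def disk_for_block(lba: int, num_disks: int) -> int:
    """Return the disk index that holds logical block 'lba'."""
    stripe = lba // STRIPE_SIZE_BLOCKS      # which stripe the block falls in
    return stripe % num_disks               # round-robin across the group

# Before the hot-add: a 5-disk group.  After: the same LUN's blocks
# are redistributed across 6 spindles, including the new one.
for disks in (5, 6):
    placement = {disk_for_block(lba, disks) for lba in range(0, 128 * 12, 128)}
    print(f"{disks} disks -> stripes land on disks {sorted(placement)}")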
time to implementation: formatting the array

As mentioned earlier, after new disks are added to a traditional array, it takes several hours to complete the formatting of the RAID group. During this format phase, no data can be written to the new LUNs. With some implementations, the array is offline until all the LUNs have been formatted. In other implementations, I/Os can be written to already formatted LUNs even while other LUNs are going through the format process, although performance is very slow. Because executing the disk format command uses up so much of the array's internal bandwidth, array performance is greatly reduced until all of the disk formatting has been completed.

With HP's Virtual Array Technology, the array is immediately available as soon as the LUNs have been configured. The disk formatting is done as the writes are done. In other words, as writes are sent to disk, the formatting is accomplished only for those blocks being written to. This means that while there is a small hit to performance for that individual write, there is very little impact on overall array performance.
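A toy model of this format-on-first-write approach follows. The class, block count, and bitmap are illustrative assumptions, not the array's actual data structures.

# Sketch of format-on-first-write (illustrative only).  Instead of
# formatting every block up front, the array tracks which blocks are
# initialized and formats each one as it is written.

class Lun:
    def __init__(self, num_blocks: int):
        self.formatted = [False] * num_blocks   # per-block "initialized" bitmap
        self.data = {}

    def write(self, block: int, payload: bytes) -> None:
        if not self.formatted[block]:
            # Small, one-time cost paid by this write only; the rest of
            # the array keeps servicing I/O at full speed.
            self.formatted[block] = True        # stand-in for the on-disk format
        self.data[block] = payload

lun = Lun(num_blocks=1_000_000)
lun.write(42, b"payload")          # block 42 is formatted on demand
print(sum(lun.formatted))          # 1 -- nothing else was ever formatted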
automating the cache parameters

Configuring a traditional array typically requires setting cache parameters such as the percentage of read and write cache, the size of the cache pages, and, in some cases, the allocation of cache to specific LUNs. In making these determinations, there is ample opportunity for error.

With HP’s Virtual Arrays, all of this is preset and automatic. And this means that all the parameters within the array are tuned to work in unison with the stripe size and the array hardware. First, the cache is set at 80% read and 20% write, is shared between controllers, and is treated as a “pool.” Second, the cache page size is set at 64K and is set to automatically destage to disk every 4 seconds whether the page is full or not. The 64K size minimizes the number of I/Os to the disks and provides a carefully calculated balance within the array between the number of cache pages and the speed of the destage to disk.
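The preset destage policy can be summarized in a few lines. The constants come from the text above; the class and timing check are a hypothetical sketch, not firmware code.

# Toy model of the preset cache policy described above.  Dirty 64K
# pages are destaged to disk when full, or after 4 seconds regardless.

import time

PAGE_SIZE = 64 * 1024        # 64K cache pages
DESTAGE_INTERVAL = 4.0       # seconds

class WriteCachePage:
    def __init__(self):
        self.used = 0
        self.created = time.monotonic()

    def add(self, nbytes: int) -> None:
        self.used = min(PAGE_SIZE, self.used + nbytes)

    def should_destage(self, now: float) -> bool:
        return self.used >= PAGE_SIZE or (now - self.created) >= DESTAGE_INTERVAL

page = WriteCachePage()
page.add(16 * 1024)
print(page.should_destage(time.monotonic()))                    # False: partly full, young
print(page.should_destage(time.monotonic() + DESTAGE_INTERVAL)) # True: timer expired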
performance

Traditional arrays are susceptible to “hot spots” and to changes in the environment that make the initial configuration obsolete. The HP Virtual Array virtually eliminates these critical performance issues.

First, the HP Virtual Array is far less likely to experience a hot spot; it will almost never experience a condition where a few disk drives become a performance bottleneck in the array. Here’s why: the virtual array always (and automatically) stripes all of the LUNs across all of the disks in the RAID group. For example, assume a virtual array loaded with a total of 60 disks had 30 disks in each of its two RAID/redundancy groups. Every LUN in that group would be spread across all 30 disks.
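Worked out, the arithmetic behind that example looks like this. The traditional four-disk layout and the IOPS figure are hypothetical, chosen only to show the contrast.

# Back-of-the-envelope view of the 60-disk example above.  Because every
# LUN is striped across all 30 disks of its redundancy group, a burst of
# I/O to one "hot" LUN is shared by 30 spindles instead of a handful.

GROUP_DISKS = 30            # disks per RAID/redundancy group (from the text)
HOT_LUN_IOPS = 9_000        # hypothetical burst aimed at a single LUN

# Traditional layout: the LUN might live on, say, 4 dedicated disks.
per_disk_traditional = HOT_LUN_IOPS / 4
# Virtual array: the same burst spreads across the whole group.
per_disk_virtual = HOT_LUN_IOPS / GROUP_DISKS

print(f"traditional: {per_disk_traditional:.0f} IOPS per disk")  # 2250
print(f"virtual:     {per_disk_virtual:.0f} IOPS per disk")      # 300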