target volumes in the new disk storage server. It can restart immediately and connect to the new disk storage server after the XRC secondary volumes have been relabeled, which requires only a single XRC command, XRECOVER, per XRC session.

In addition to transparent data replication, the advantage of XRC is its extreme scalability. Figure 13-3 shows that XRC can run either in existing system images or in a dedicated LPAR. Each image can host up to five System Data Movers (SDMs). An SDM is an address space (ANTAS00x) that is started by its respective XSTART command. A single SDM can reasonably manage in the range of 1,500 to 2,000 volume pairs. With up to five SDMs within a system image, this totals approximately 10,000 volume pairs. This requires adequate bandwidth for the connectivity between the disk storage servers and the system image that hosts the SDMs. Because XRC in migration mode stores the data straight through to the secondary volumes, it mainly requires channel bandwidth, and an SDM tends to monopolize its channels. Dedicating channel resources to the SDMs is therefore preferable to a shared channel configuration and has almost no impact on the application I/Os. In a medium-sized configuration, one or two SDMs are most likely sufficient.
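To make this sizing arithmetic explicit, the following Python sketch turns the planning figures quoted above (roughly 1,500 to 2,000 volume pairs per SDM and at most five SDMs per system image) into a rough SDM count and a minimum channel-bandwidth figure. The volume-pair count and write rate in the example call are illustrative assumptions only, not measured values.

# Back-of-the-envelope SDM sizing, based on the planning figures quoted in
# this section (roughly 1,500-2,000 volume pairs per SDM, at most five
# SDMs per system image). The example inputs below are assumptions.
import math

PAIRS_PER_SDM = 1_750          # midpoint of the 1,500-2,000 range above
MAX_SDMS_PER_IMAGE = 5         # up to five SDM address spaces (ANTAS00x)

def sdm_plan(volume_pairs: int, write_rate_mb_s: float) -> None:
    """Print a rough SDM count and channel-bandwidth estimate."""
    sdms = math.ceil(volume_pairs / PAIRS_PER_SDM)
    images = math.ceil(sdms / MAX_SDMS_PER_IMAGE)
    print(f"{volume_pairs} pairs -> {sdms} SDM(s) in {images} system image(s)")
    # The SDM reads every application write from the primary and writes it
    # to the secondary, so its dedicated channels must sustain at least the
    # aggregate write rate, plus headroom for the initial copy.
    print(f"channel bandwidth needed: >= {write_rate_mb_s:.0f} MB/s sustained writes")

# Example call with assumed values for a medium-sized configuration.
sdm_plan(volume_pairs=3_000, write_rate_mb_s=40.0)

With 3,000 volume pairs the sketch arrives at two SDMs in one system image, which matches the one-or-two-SDM estimate for a medium-sized configuration.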

Assume the migration consolidates two medium-sized ESS F20s with a total of about 5 TB onto a DS6800 disk server. This suggests connecting the SDM LPAR to each ESS F20 with two dedicated ESCON channels and configuring one SDM per F20 within the SDM LPAR. The DS6800 FICON channels can be shared between the SDMs, because they are not a potential bottleneck when the data arrives over ESCON. Replicating all data from the two F20s to the DS6800 would then take about one day, provided there is not too much application write I/O during the initial full copy. Otherwise it takes just a few more hours, depending on the amount of application write I/O during the first full volume copy. In either case the entire migration can be completed over a weekend.
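The one-day figure can be sanity-checked with a quick calculation. The sketch below assumes an effective throughput of roughly 15 MB/s per ESCON channel; this rate is an assumption for illustration, and real sustained rates depend on the workload and channel utilization.

# Rough initial-copy time estimate for the scenario above: about 5 TB on
# two ESS F20s, each connected to the SDM LPAR with two dedicated ESCON
# channels. The 15 MB/s per-channel rate is an assumed effective value.
TOTAL_TB = 5.0
ESCON_CHANNELS = 2 * 2                   # two dedicated channels per F20
MB_PER_S_PER_CHANNEL = 15.0              # assumed effective ESCON throughput

total_mb = TOTAL_TB * 1024 * 1024        # 5 TB expressed in MB
aggregate_mb_s = ESCON_CHANNELS * MB_PER_S_PER_CHANNEL
hours = total_mb / aggregate_mb_s / 3600

print(f"Initial full copy: about {hours:.0f} hours "
      f"({ESCON_CHANNELS} ESCON channels at {MB_PER_S_PER_CHANNEL:.0f} MB/s each)")

Four dedicated ESCON channels at that rate move about 5 TB in roughly 24 hours, which is consistent with the one-day estimate, before re-copying the application writes that arrive during the first pass.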

XRC requires disk storage subsystems whose microcode supports XRC primary volumes. Currently only IBM- and HDS-based controllers support XRC as a primary or source disk subsystem. The exception is the IBM RVA storage controller, which does not support XRC as a primary XRC device. EMC does not provide XRC support at the XRC primary site.

13.2.3 Hardware- and microcode-based migration

Hardware- and microcode-based migration through remote copy is usually possible only between like hardware; remote copy through microcode does not work with disks from vendor A at the source site and disks from vendor B at the target site. Therefore, we discuss only what is possible for IBM disk storage servers using remote copy, or Peer-to-Peer Remote Copy (PPRC), and its variations.

Remote copy approaches with Global Mirror, Metro Mirror, Metro/Global Copy, and Global Copy allow the primary and secondary sites to be any combination of ESS 750s, ESS 800s, and DS6000s or DS8000s.

Bridge from ESCON to FICON with Metro/Global Copy

The ESS Model E20 and Model F20 do not support PPRC over Fibre Channel links; they support PPRC only over ESCON links. In contrast, the newly announced disk storage server supports only PPRC over Fibre Channel links and does not support PPRC ESCON links. The ESS Model 800 supports both PPRC link technologies.
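The bridge idea follows directly from these link-support statements. The short sketch below encodes them in a table and checks which pairs of boxes share a PPRC link technology; the DS6800 stands in for the newly announced disk storage server, and the model list is limited to the boxes discussed in this chapter.

# PPRC link technologies per model, as stated in this section. A PPRC
# relationship needs a link technology that both boxes support.
PPRC_LINKS = {
    "ESS E20": {"ESCON"},
    "ESS F20": {"ESCON"},
    "ESS 800": {"ESCON", "Fibre Channel"},
    "DS6800":  {"Fibre Channel"},
}

def common_link(a: str, b: str) -> set:
    """Return the PPRC link technologies that both models support."""
    return PPRC_LINKS[a] & PPRC_LINKS[b]

print(common_link("ESS F20", "DS6800"))   # set() -> no direct PPRC possible
print(common_link("ESS F20", "ESS 800"))  # {'ESCON'}
print(common_link("ESS 800", "DS6800"))   # {'Fibre Channel'}

Because the ESS F20 and the DS6800 share no link technology, a direct PPRC relationship between them is not possible, whereas an ESS Model 800 placed between them can use ESCON links to the F20 and Fibre Channel links to the DS6800, which is the role it plays in the Metro/Global Copy bridge.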

