VMware ESX Server

Columns: No. / Host System / Operating System / Host Bus / Host Bus Adapter / Topology / Storage Array / External Boot / Comments. Bracketed numbers refer to the footnotes following the table.
513. NEC Express 5800 140Rf-4
     Operating System: VMware ESX Server 3.0.2 [3, 4, 5, 6, 7, 8, 9, 10], 3.0.3 [3, 4, 6, 7, 8, 9, 10]
     Host Bus: PCI Express
     Host Bus Adapter: NEC N8190-120 (LP1050) [2, 11, 61, 85], N8190-127 (LPe1150) [2, 11, 45, 62]
     Topology: FC-SW
     Storage Array: EMC CLARiiON AX150
     External Boot: Y [13, 14]
514. Sun: Blade Server Module X6450, Fire X4140, Fire X4240, Fire X4440
     Operating System: VMware ESX Server 3.0.2 [3, 4, 5, 6, 7, 8, 9, 10], 3.0.3 [3, 4, 6, 7, 8, 9, 10]
     Host Bus: PCI Express
     Host Bus Adapter: QLogic QLA2460-E-SP [12, 32, 49]
     Topology: FC-SW
     Storage Array: EMC CLARiiON AX150
     External Boot: N
515. Sun: Blade Server Module X6450, Fire X4140, Fire X4240, Fire X4440
     Operating System: VMware ESX Server 3.0.2 [3, 4, 5, 6, 7, 8, 9, 10], 3.0.3 [3, 4, 6, 7, 8, 9, 10]
     Host Bus: PCI Express
     Host Bus Adapter: Sun SG-XPCI1FC-EM2 (LP10000) [2, 11], SG-XPCI1FC-EM4-Z (LP11000) [2, 11, 12, 45],
         SG-XPCI1FC-QF4 (QLA2460) [43], SG-XPCI1FC-QL2 (QLA2340) [12, 43],
         SG-XPCI2FC-EM2 (LP10000DC) [2, 11], SG-XPCI2FC-EM4 (LP11002) [2, 11, 45],
         SG-XPCI2FC-EM4-Z (LP11002) [2, 11, 12, 45], SG-XPCI2FC-QF4 (QLA2462) [43]
     Topology: FC-SW
     Storage Array: EMC CLARiiON AX150
     External Boot: Y [13, 14]
516. Fujitsu Siemens Primergy BX620 S4
     Operating System: VMware ESX Server 3.0.2 [3, 4, 5, 6, 7, 8, 9, 10], 3.0.3 [3, 4, 6, 7, 8, 9, 10]
     Host Bus: PCI Express, PCI-X
     Host Bus Adapter: Emulex LPe11002-E [1, 2, 11, 12]
     Topology: FC-SW
     Storage Array: EMC CLARiiON AX150
     External Boot: Y [13, 14]
517. Dell PowerEdge: 1950 III, 2900 III, 2950 III, 2970; Fujitsu Siemens Primergy: BX620 S4, RX200 S4, RX600 S4, TX300 S4
     Operating System: VMware ESX Server 3.0.2 [3, 4, 5, 6, 7, 8, 9, 10], 3.0.3 [3, 4, 6, 7, 8, 9, 10]
     Host Bus: PCI Express, PCI-X
     Host Bus Adapter: Emulex LPe1150-E [1, 2, 11, 12]
     Topology: FC-SW
     Storage Array: EMC CLARiiON AX150
     External Boot: Y [13, 14]
518. HPQ ProLiant: DL385 (G2), DL585 (G2), DL585 (G5)
     Operating System: VMware ESX Server 3.0.2 [3, 4, 5, 6, 7, 8, 9, 10], 3.0.3 [3, 4, 6, 7, 8, 9, 10]
     Host Bus: PCI Express, PCI-X
     Host Bus Adapter: Emulex LP11000-E [1, 2, 12, 31], LP11002-E [1, 2, 12, 31], LP1150-E [1, 2, 11, 12, 31]
     Topology: FC-SW
     Storage Array: EMC CLARiiON AX150
     External Boot: Y [13, 14]
519. NEC Express 5800: 120Lj, 120Rh-1, 120Rj-2, 140Hf, 140Re-4
     Operating System: VMware ESX Server 3.0.2 [3, 4, 5, 6, 7, 8, 9, 10], 3.0.3 [3, 4, 6, 7, 8, 9, 10]
     Host Bus: PCI Express, PCI-X
     Host Bus Adapter: NEC N8190-120 (LP1050) [2, 11, 61, 85], N8190-127 (LPe1150) [2, 11, 45, 62]
     Topology: FC-SW
     Storage Array: EMC CLARiiON AX150
     External Boot: Y [13, 14]
520. Fujitsu Siemens Primergy TX200 S4
     Operating System: VMware ESX Server 3.0.2 [3, 4, 5, 6, 7, 8, 9, 10], 3.0.3 [3, 4, 6, 7, 8, 9, 10]
     Host Bus: PCI, PCI Express, PCI-X
     Host Bus Adapter: Emulex LPe1150-E [1, 2, 11, 12]
     Topology: FC-SW
     Storage Array: EMC CLARiiON AX150
     External Boot: Y [13, 14]
521. HPQ ProLiant DL580: (G4), (G5)
     Operating System: VMware ESX Server 3.0.2 [3, 4, 5, 6, 7, 8, 9, 10], 3.0.3 [3, 4, 6, 7, 8, 9, 10]
     Host Bus: PCI, PCI Express, PCI-X
     Host Bus Adapter: Emulex LP11000-E [1, 2, 12, 31], LP11002-E [1, 2, 12, 31], LP1150-E [1, 2, 12, 31];
         HPQ AB429A/FC1143 (QLA2460) [12, 31, 32], AE369A/FC1243 (QLA2462) [12, 31, 32];
         QLogic QLA2460-E-SP [12, 31, 32], QLA2462-E-SP [12, 31, 32]
     Topology: FC-SW
     Storage Array: EMC CLARiiON AX150
     External Boot: Y [13, 14]
522. IBM BladeCenter: LS22 (Model 7901), LS42 (Model 7902)
     Operating System: VMware ESX Server 3.5 [6, 7, 8, 9, 10, 28], 3i [6, 7, 9, 10, 28]
     Host Bus: PCI Express
     Host Bus Adapter: IBM: Emulex 4Gb SFF Fibre Channel Exp Card (39Y9186) [1, 12, 26, 77],
         HS20 4Gb SFF FC Exp Card (26R0890/26R0893) [12, 29, 76],
         QLogic 4Gb FC Exp Card (CFFv, 41Y8527) [12, 29],
         QLogic Ethernet and 4Gb FC Exp Card (CFFh, 39Y9306) [12, 29, 87]
     Topology: FC-SW
     Storage Array: EMC CLARiiON AX150
     External Boot: Y [13, 14]
523. IBM BladeCenter HS12 (Model 8014, 8028) [103]
     Operating System: VMware ESX Server 3.5 [6, 7, 8, 9, 10, 28], 3i [6, 7, 9, 10, 28]
     Host Bus: PCI Express, PCI-X
     Host Bus Adapter: IBM Emulex 4Gb FC Exp Card (CFFv, 43W6859) [1, 12, 26]
     Topology: FC-SW
     Storage Array: EMC CLARiiON AX150
     External Boot: Y [13, 14]
524. IBM BladeCenter: HS21 XM (Model 7995), LS21 (Model 7971), LS41 (Model 7972)
     Operating System: VMware ESX Server 3.5 [6, 7, 8, 9, 10, 28], 3i [6, 7, 9, 10, 28]
     Host Bus: PCI-X
     Host Bus Adapter: IBM Emulex 4Gb FC Exp Card (CFFv, 43W6859) [1, 12, 26]
     Topology: FC-SW
     Storage Array: EMC CLARiiON AX150
     External Boot: Y [13, 14]
1. Firmware version 2.80a4. Available in the EMC-approved section of the Emulex website: http://www.emulex.com
2. BIOS 2.02a1. Available in the EMC-approved section of the Emulex website: http://www.emulex.com
3. Windows 2003 x86 and x64 guest VMs with Microsoft iSCSI Initiator 2.06 are supported on ESX 3.0.2 and ESX 3.5. Supported arrays: CX3-10c, CX3-20c, CX3-40c, and AX4-5i.
EMC PowerPath 5.1: single initiator only.
4. Beginning with ESX Server v2.5.0, Navisphere components are supported as follows:
Navisphere Agent and Server Utility at the console level; as of Navisphere 6.27, Navisphere Host Agent and Server Utility are also supported on Windows virtual machines on ESX 3.5 and ESXi 3.5 systems.
Array Initialization Utility and CLI at both the console and VM levels.
Browser interface to Navisphere Express or Navisphere Manager at the VM level only.
5. Replication of both VMFS and Raw Device Mapping (RDM) volumes is supported using Symmetrix replication software (SRDF and TimeFinder). Refer to the solution guide "VMware ESX Server Using EMC Symmetrix Storage Systems" for details.
Replicas of application data volumes of physical (native) servers created using Symmetrix replication software can be presented to a VMware ESX Server. Note:
1) Replication of operating system volumes and application volumes from a virtual to a physical environment using Symmetrix replication software is not supported at this time.
2) Replication of operating system volumes, application volumes, and application data volumes from a virtual to a virtual environment using Symmetrix replication software is supported.
6. PowerPath is not supported on VMware ESX. The VMware ESX native failover functionality is supported.
7. Supported with VirtualCenter VMotion.
8. For details on Microsoft Cluster Services (MSCS) support with Windows guest operating systems, refer to the Storage/SAN Compatibility Guide For ESX Server 3.x on
www.vmware.com.
9. For a list of supported guest operating systems, refer to the "Guest Operating System Installation Guide" at http://www.vmware.com/pdf/GuestOS_guide.pdf
10. Supported with VMFS−3.
11. Driver version 7.3.2. This driver is included in the VMware ESX kernel.
12. SNIA HBA API supported.
13. Virtual machines running on VMware ESX are supported booting from the array.
14. VMware ESX Server itself is supported booting from an external array.
15. When using an AX100-series storage system managed by Navisphere Express with VMware ESX Server, snapshots cannot be assigned to the ESX server.
16. For details on Microsoft Cluster Services (MSCS) support with Windows guest operating systems, refer to the "SAN Compatibility Guide For ESX Server 2.x" at
http://www.vmware.com/pdf/esx_SAN_guide.pdf
17. Supported with VMFS and VMFS-2.
18. Replication of both VMFS and Raw Device Mapping (RDM) volumes is supported using CLARiiON replication software (SnapView, MirrorView, and SAN Copy) starting with Release 19. The OS/application as well as application data can be replicated using CLARiiON replication software.
Note:
1) The virtual disks in a VMFS volume must be in persistent mode during the replication process.
2) The CLARiiON VSS provider is not supported on VMFS volumes.
3) If replicating an entire VMFS volume that contains a number of virtual disks on a single CLARiiON LUN, the granularity of replication is the entire LUN with all its virtual disks.
4) Use the consistency technology available in SnapView/MirrorView when making copies of VMFS volumes that span multiple CLARiiON LUNs.
5) The VMFS replica must be assigned to a different ESX server. The target ESX server must not already have access to the original source VMFS volume.
6) For source LUNs configured as RDM volumes, the replica can be presented to the same ESX server but to a different virtual machine, unless the guest OS is qualified to allow same-host access.
7) RDM volumes cannot be created when the ESX server is booting from SAN; VMFS volumes can be created when the ESX server is booting from SAN. If the ESX server is not booting from SAN, virtual machines can be configured using either VMFS or RDM volumes.
8) Ensure the snapshot is activated, or the clone fractured, before presenting it to an ESX server.
9) When replicating OS images, you will have only a crash-consistent copy if the virtual machines are not shut down during the replication process.
10) Note that replicating a suspended virtual machine will also result in a crash-consistent copy.
11) The admsnap and admhost utilities must be installed only on virtual machines. With admsnap or admhost installed on a virtual machine, most commands issued on VMFS volumes will fail, because VMFS volumes do not support SCSI pass-through commands to communicate with the CLARiiON; use Navisphere Manager or Navisphere CLI instead. The only commands that work with VMFS volumes are admsnap flush and admhost flush. Admsnap is not supported under NetWare virtual machines with either RDM or VMFS volumes.
19. Replicas of application data volumes of physical (native) servers created using SnapView, MirrorView, and SAN Copy can be presented to a VMware ESX Server. These replicas configured on the VMware ESX server must be Raw Device Mapping only (VMFS is not supported). Replicas of application data volumes configured as Raw Device Mapping volumes on a VMware ESX server can also be presented to a physical (native) server.
CLARiiON AX150
03/04/2009