3. More than one Network Interface Card (NIC) set aside for iSCSI traffic
4. No Distributed Virtual Switch (DVS) for iSCSI traffic
Not every environment requires all of the steps detailed in this whitepaper.
Users who wish only to enable Jumbo Frame support for the iSCSI connection need to follow Steps 1 and 2 with the following changes:
Step 1: Configure vSwitch and Enable Jumbo Frames – No changes to the instructions
Step 2: Add iSCSI VMkernel Ports – Instead of assigning multiple VMkernel Ports, administrators will assign only a single VMkernel Port
Once these two steps are complete, the rest of the configuration can be accomplished in the vCenter GUI by attaching NICs, assigning storage, and then connecting to the storage.
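For reference, a jumbo-frame-only configuration of this kind can be sketched from the ESX service console as shown below. The vSwitch name, port group name, physical NIC, and IP address are examples only and should be replaced with values appropriate to the environment.

esxcfg-vswitch -a vSwitch2                 # create a vSwitch dedicated to iSCSI traffic
esxcfg-vswitch -m 9000 vSwitch2            # enable jumbo frames on the vSwitch (MTU 9000)
esxcfg-vswitch -A iSCSI1 vSwitch2          # add a port group for the VMkernel port
esxcfg-vswitch -L vmnic1 vSwitch2          # attach a physical NIC to the vSwitch as its uplink
esxcfg-vmknic -a -i 172.16.10.101 -n 255.255.255.0 -m 9000 iSCSI1   # single jumbo-frame VMkernel port (example address)
esxcfg-vswitch -l                          # verify the vSwitch, port group, and MTU settings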
The rest of this document assumes the environment will be using multiple NICs and attaching to a Dell PowerVault SAN utilizing Native Multipathing (NMP) from VMware.
Establishing Sessions to a SAN
Before continuing with the examples, we must first discuss how VMware ESX4.1 establishes its connection to the SAN using the new vSphere4 iSCSI Software Adapter. VMware uses VMkernel ports as the session initiators, so we must configure each port that we want to use as a path to the storage. This is independent of the number of network interfaces, but in most configurations it will be a 1:1 ratio of VMkernel ports to physical NICs.
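As an illustration, each VMkernel port is bound to the software iSCSI adapter from the service console. The adapter name (vmhba33) and VMkernel interface names (vmk1, vmk2) below are examples and will differ from host to host.

esxcfg-scsidevs -a                          # identify the software iSCSI adapter (for example vmhba33)
esxcli swiscsi nic add -n vmk1 -d vmhba33   # bind the first iSCSI VMkernel port to the adapter
esxcli swiscsi nic add -n vmk2 -d vmhba33   # bind the second iSCSI VMkernel port to the adapter
esxcli swiscsi nic list -d vmhba33          # confirm that both VMkernel ports are bound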
Each volume on the PowerVault array can be used by ESX4.1 as either a Datastore or a Raw Device Mapping (RDM). To communicate with a volume, the iSCSI software adapter uses the VMkernel ports that were created and establishes a session to the SAN and to that volume. With previous versions of ESX, this session was established over a single NIC path, and any additional NICs were there for failover only. With the improvements to vSphere4 and MPIO, administrators can now take advantage of multiple paths to the SAN for greater bandwidth and performance. This does require some additional configuration, which is discussed in detail in this whitepaper.
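As one example of using those multiple paths, the Round Robin path selection policy can be assigned to a volume with the ESX4.1 NMP commands sketched below. The device identifier is a placeholder, and the policy chosen for a given environment should follow the recommendations later in this document.

esxcli nmp device list                                                    # list devices and their current path selection policy
esxcli nmp device setpolicy --device naa.<device id> --psp VMW_PSP_RR     # set Round Robin on the volume
esxcli nmp path list --device naa.<device id>                             # verify the paths available to the volume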
Each VMkernel port is bound to a physical adapter. Depending on the environment, this can create a single session to a volume or up to 8 sessions (the ESX4.1 maximum number of connections per volume). For a normal deployment, it is acceptable to use a