[Figure 5 diagram: VM 1, VM 2, through VM n, each with its own Virtual NIC, connect through VMware with NetQueue to a NIC with VMDq attached to the LAN.]
Figure 5. Network data flow for virtualization with VMDq and NetQueue.
[Figure 6 chart: Throughput (Gbps) comparing Without VMDq and With VMDq for standard and jumbo frames; With VMDq reaches 9.2 Gbps (standard) and 9.5 Gbps (Jumbo Frames). Callouts: 2x throughput; Near Native 10 GbE.]
Figure 6. Tests measure wire-speed Receive (Rx) side performance with VMDq on Intel® 82598 10 Gigabit Ethernet Controllers.
VMM overhead:
• Switching load
• Interrupt bottleneck
We can optimize the network I/O solution to solve both of the issues above.
In Figure 5, we show the effect of using the new Intel® VMDq hardware in our latest NICs along with the new VMware NetQueue software in ESX 3.5. In this case, the network flows destined for each of the VMs are switched in hardware on the NIC itself and placed into separate hardware queues. This greatly simplifies the work that the virtualization software layer must do to forward packets to the destination VMs and delivers improved CPU headroom for application VMs. Each of these queues is equipped with a dedicated interrupt signal that can be routed directly to the destination VM for handling. This allows us to spread the load of a 10 GbE pipe across the processor cores running those VMs. In this way, we can break through the interrupt bottleneck noted above.
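To make the receive path concrete, here is a minimal sketch of the classification step described above. This is not Intel's or VMware's actual code; the names vmdq_classify(), drain_queue(), struct rx_queue, and the vm_mac filter table are all illustrative. It models how a VMDq-capable NIC matches each incoming frame's destination MAC against its Layer 2 filter table, enqueues the frame on the matching VM's hardware queue, and lets each queue be drained independently (in hardware, by that queue's own interrupt).

```c
/* Illustrative model of VMDq receive-side classification; all names here
 * are hypothetical, not a real driver API. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define NUM_VMS   4   /* one receive queue per VM */
#define QUEUE_LEN 8

struct frame {
    uint8_t dst_mac[6];
    char    payload[64];
};

struct rx_queue {
    struct frame ring[QUEUE_LEN];
    int head, tail;
};

/* One MAC per virtual NIC: a stand-in for the NIC's L2 filter table. */
static const uint8_t vm_mac[NUM_VMS][6] = {
    {0x00,0x50,0x56,0x00,0x00,0x01},
    {0x00,0x50,0x56,0x00,0x00,0x02},
    {0x00,0x50,0x56,0x00,0x00,0x03},
    {0x00,0x50,0x56,0x00,0x00,0x04},
};

static struct rx_queue queues[NUM_VMS];

/* "Hardware" step: match the destination MAC and enqueue on the matching
 * VM's queue. Returns the queue index, or -1 if no filter matched. */
static int vmdq_classify(const struct frame *f)
{
    for (int q = 0; q < NUM_VMS; q++) {
        if (memcmp(f->dst_mac, vm_mac[q], 6) == 0) {
            struct rx_queue *rq = &queues[q];
            rq->ring[rq->tail++ % QUEUE_LEN] = *f;
            return q;   /* on real hardware, queue q raises its own interrupt */
        }
    }
    return -1;          /* no match: default queue or drop; dropped here */
}

/* "Software" step: each queue is drained independently, so the handler can
 * run on the core hosting that VM with no shared software switch in between. */
static void drain_queue(int q)
{
    struct rx_queue *rq = &queues[q];
    while (rq->head < rq->tail) {
        struct frame *f = &rq->ring[rq->head++ % QUEUE_LEN];
        printf("VM %d received: %s\n", q, f->payload);
    }
}

int main(void)
{
    struct frame f = { {0x00,0x50,0x56,0x00,0x00,0x02}, "frame for VM 2" };
    int q = vmdq_classify(&f);
    if (q >= 0)
        drain_queue(q);   /* would run in VM 2's interrupt context */
    return 0;
}
```

The key design point the sketch captures is that the per-VM demultiplexing happens before the hypervisor's software switch ever sees the frame, which is why the CPU headroom improves.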
In Figure 6, we can see that the receive performance with VMDq + NetQueue is 9.2 Gbps with standard frames and 9.5 Gbps with jumbo frames: roughly twice the throughput measured without VMDq, and near native 10 GbE performance.
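As a quick check on the "near native" claim, assuming the 82598's nominal 10 Gbps line rate:

\[
\frac{9.2\ \mathrm{Gbps}}{10\ \mathrm{Gbps}} = 92\%, \qquad \frac{9.5\ \mathrm{Gbps}}{10\ \mathrm{Gbps}} = 95\%
\]

which is what the "Near Native 10 GbE" callout in Figure 6 refers to.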
VMDq and NetQueue:
• Optimize switching
• Break through the interrupt bottleneck