Figure 3. Network data flow for virtualization without the use of VMDq and NetQueue technologies. (Diagram: VM 1 through VM n, each with a virtual NIC, share a single physical NIC through the hypervisor on the way to the LAN.)

Figure 4. Impact of virtualization on a 10 GbE NIC without the use of VMDq and NetQueue. (Chart: of the 10 Gb/s line rate, only about 4 Gb/s of throughput is achieved without VMDq; the remainder is unused I/O capacity. Result: NIC performance can be up to ~60% underutilized.)
of virtualization in the Intel lab using common network micro-benchmarks before attempting to virtualize the gaming server environment. This allowed us to quantify the latency added by virtualization and determine whether it would be significant. Once we confirmed that the added latency should not be a concern, we proceeded to test the virtualized gaming server, first with private testing in the ESL lab and ultimately with public testing on the Internet with real ESL members.
Server hardware
The PoC targeted the Intel Xeon processor 7300 platform with four processor sockets with the enhanced
Network I/O
But virtualization is not just about CPU and memory resources. It’s important to have I/O tuned for virtualization, too.
In a typical virtualization scenario (Figure 3), the network I/O for all the VMs is delivered to the hypervisor. The hypervisor then performs the necessary Ethernet switching functions in software to forward each network flow to the destination VM. This software function, called a virtual switch, is much slower than a typical
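The software switching step described above can be sketched as a forwarding table keyed by destination MAC address. This is a minimal, hypothetical simplification (the class and names are illustrative, not from the paper or any real hypervisor); production virtual switches also handle VLAN tagging, broadcast/multicast flooding, and MAC learning, all of which cost CPU cycles per frame.

```python
# Minimal sketch of the Ethernet-switching step a hypervisor's virtual
# switch performs in software (hypothetical simplification).

class VirtualSwitch:
    def __init__(self):
        self.mac_table = {}  # destination MAC -> VM identifier

    def attach(self, mac, vm_id):
        """Register a VM's virtual NIC so frames can be forwarded to it."""
        self.mac_table[mac] = vm_id

    def forward(self, frame):
        """Return the VM that should receive this frame, or None if unknown."""
        return self.mac_table.get(frame["dst_mac"])

vswitch = VirtualSwitch()
vswitch.attach("00:1b:21:aa:00:01", "vm1")
vswitch.attach("00:1b:21:aa:00:02", "vm2")

frame = {"dst_mac": "00:1b:21:aa:00:02", "payload": b"game state update"}
print(vswitch.forward(frame))  # vm2
```

The point of the sketch is that every received frame requires a table lookup and forwarding decision on the host CPU, which is why software switching is slower than dedicated hardware.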
As shown in Figure 4, the Intel® 10 GbE NIC runs into this single-core interrupt-processing bottleneck. In this case, the 10 GbE NIC can receive only about 4 Gb/s of traffic because the single CPU core handling all of the receive interrupts saturates well below the 10 Gb/s line rate.
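The "~60% underutilized" figure follows directly from the throughput numbers above; a quick back-of-the-envelope check (the variable names are illustrative):

```python
# If one saturated CPU core caps receive processing at ~4 Gb/s on a
# 10 Gb/s link, the unused share of the NIC's capacity is:
line_rate_gbps = 10.0   # 10 GbE link
achieved_gbps = 4.0     # throughput at single-core interrupt saturation
unused_fraction = 1 - achieved_gbps / line_rate_gbps
print(f"{unused_fraction:.0%} of NIC capacity unused")  # 60% of NIC capacity unused
```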