
CAUTION: ESX and ESXi require 512B sector sizes. New IO Accelerator devices come pre-formatted with 512B sector sizes from the factory. If yours is a new device, there is no need to format it.

However, if your IO Accelerator device was previously used in a system that allowed larger sector sizes (such as Linux with 4KB sectors), then you must reformat the device using the fio-format utility. To prevent data loss, follow the formatting instructions carefully, including disabling and re-enabling auto attach.
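As an illustration only, the following command-line sketch shows one way that sequence might look. It assumes a single device named /dev/fct0 and assumes the fio-format -b option for selecting the sector size; confirm the exact utility options in the utilities reference in this guide before running them.

Disable auto attach (the module parameter takes effect the next time the driver loads):
# esxcfg-module -s autoattach=0 iomemory-vsl

Detach the device and reformat it with 512B sectors:
# fio-detach /dev/fct0
# fio-format -b 512B /dev/fct0

Reattach the device and re-enable auto attach:
# fio-attach /dev/fct0
# esxcfg-module -s autoattach=1 iomemory-vsl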

Within the vSphere Client, select the Configuration tab. Under Hardware, click Storage, and then click Add Storage in the top-right corner. The Add Storage wizard appears. Use this wizard to configure the device.

For more information, and an explanation of options, including setting the VM file system block size, consult your vSphere documentation.

The preferred type of virtual disk is eagerzeroedthick. HP does not recommend thin provisioning because it degrades performance significantly.
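If you provision virtual disks from the command line rather than through the vSphere Client, the standard vmkfstools disk-format option can create an eagerzeroedthick disk. The datastore name, directory, and size below are illustrative values, not taken from this guide:

# vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/ioaccel_ds/vm1/vm1.vmdk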

You can now store VMs on IO Accelerator devices.

Modifying a VMware resource pool to reserve memory

Under certain circumstances, the ESX or ESXi operating system might temporarily require all or most of the RAM available on the system, leaving no memory for the VSL. For example, a host running VMware View might need to rapidly provision multiple VDI images. This demand can arise so quickly that host memory is temporarily exhausted.

If the VMs starve the VSL of RAM, the IO Accelerator devices might go offline or stop processing requests. To address this issue, follow the procedure and guidelines below for limiting the memory consumed by the VMs.

HP recommends limiting the RAM available to the VMs to the total host RAM minus an amount equivalent to 0.5% of the total IO Accelerator device capacity. For more information on this calculation, see the example scenario that follows the procedure. The easiest way to set this limit is by modifying the user resource pool.

The exact limit is workload dependent and requires tuning for specific use cases. To modify the user resource pool, perform the following steps using the vSphere Client:

1. Click the Summary tab in the vSphere Client to view the current memory usage and capacity. The total IO Accelerator device datastore capacity is also visible. Record the capacity.

2. Navigate to the user Resource Allocation window:

a. Select the host > Configuration tab > Software pane > System Resource Allocation link > Advanced link. The System Resource Pools appear.

b. Select the user node under the host tree. The details for the user appear.

c. Click the Edit settings link. The user Resource Allocation window appears.

3. Limit the memory allocated to the VMs:

a. Under Memory Resources, clear the Unlimited checkbox so you can set the limit for memory resource allocation.

b. Set the limit on VM memory consumption.

Example scenario:
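As an illustration with assumed figures (a host with 96 GB of RAM and 1.2 TB of total IO Accelerator device capacity; take your own values from the Summary tab in step 1):

0.5% of 1.2 TB (1,200 GB) of device capacity = 6 GB to reserve for the VSL
VM memory limit = 96 GB total host RAM - 6 GB = 90 GB (92160 MB)

In step 3, you would set the memory limit for the user resource pool to approximately 90 GB, leaving the remaining RAM available to the VSL and the hypervisor.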
