unused space on the underlying CoW instance disk is reclaimed when a snapshot or clone occurs. The difference
between the two behaviors can be characterized in the following way:
For LVM-based VHDs, the difference disk nodes within the chain consume only as much space as the data that
has been written to them, but the leaf nodes (VDI clones) remain fully inflated to the virtual size of the disk.
Snapshot leaf nodes (VDI snapshots) remain deflated when not in use and can be attached Read-only to preserve
the deflated allocation. Snapshot nodes that are attached Read-Write are fully inflated on attach and deflated
on detach.
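As a rough illustration of this behavior, the virtual size and physical utilization of an individual VDI can be
compared with the xe CLI; the UUID below is a placeholder for one of your own clone or snapshot VDIs, and the
exact figures reported depend on the SR type and the state of the node:

    xe vdi-list uuid=<vdi-uuid> params=name-label,virtual-size,physical-utilisation

On an LVM-based SR, a writeable clone leaf typically reports a physical-utilisation close to its virtual-size
(fully inflated), whereas a deflated snapshot node reports only the space that has actually been written.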
For file-based VHDs, all nodes consume only as much space as the data that has been written to them, and the
leaf node files grow to accommodate data as it is actively written. If a 100GB VDI is allocated for a new VM and
an OS is installed, the VDI file will physically be only the size of the OS data written to the disk, plus some minor
metadata overhead.
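For example, on a file-based SR the VHD files live under the SR mount point on the host, and their on-disk size
can be inspected directly. The path below is the usual mount location on a XenServer host but should be treated
as an assumption; substitute your own SR and VDI UUIDs:

    du -h /var/run/sr-mount/<sr-uuid>/<vdi-uuid>.vhd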
When cloning VMs based on a single VHD template, each child VM forms a chain where new changes are written
to the new VM and old blocks are read directly from the parent template. If the new VM is converted into a
further template and more VMs are cloned from it, the lengthening chain degrades performance. XenServer
supports a maximum chain length of 30, but it is generally not recommended that you approach this limit without
good reason. If in doubt, you can always "copy" the VM using XenServer or the vm-copy command, which resets
the chain length back to 0.
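For example, a full copy that collapses the chain can be made with the vm-copy command while the VM is shut
down; the VM name, new name label and destination SR UUID below are placeholders:

    xe vm-copy vm=<vm-name> new-name-label=<copy-name> sr-uuid=<destination-sr-uuid>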

VHD Chain Coalescing

VHD images support chaining, which is the process whereby information shared between two or more VDIs is
not duplicated. This leads to a situation where trees of chained VDIs are created over time as VMs and their
associated VDIs are cloned. When one of the VDIs in a chain is deleted, XenServer rationalizes the other VDIs in
the chain to remove unnecessary VDIs.
This coalescing process runs asynchronously. The amount of disk space reclaimed and the time taken to perform
the process depends on the size of the VDI and the amount of shared data. Only one coalescing process will ever
be active for an SR. This process thread runs on the SR master host.
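If you need to check whether a coalesce is currently running, the storage manager log on the SR master host
normally records garbage-collection and coalesce activity. The path below is the standard log location on a
XenServer host, although the exact messages vary between releases:

    tail -f /var/log/SMlog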
If you have critical VMs running on the master server of the pool and experience occasional slow IO due to this
process, you can take steps to mitigate it:
Migrate the VM to a host other than the SR master (see the example following this list)
Set the disk IO priority to a higher level and adjust the scheduler. See the section called “Virtual Disk QoS
Settings” for more information.
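A minimal example of the first mitigation, migrating a running VM to another host with the xe CLI (the VM and
host names below are placeholders):

    xe vm-migrate vm=<vm-name> host=<destination-host> live=true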

Space Utilization

Space utilization is always reported based on the current physical allocation within the SR and may not reflect
the total virtual disk space allocated. The reporting of space for LVM-based SRs versus File-based SRs will also
differ, given that File-based VHD supports full thin provisioning, while the underlying volume of an LVM-based
VHD is fully inflated to support potential growth for writeable leaf nodes. Space utilization reported for the SR
will depend on the number of snapshots and the amount of difference data written to a disk between each snapshot.
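The physical and virtual allocation figures for an SR can be compared directly; for example (the SR UUID is a
placeholder):

    xe sr-list uuid=<sr-uuid> params=name-label,physical-size,physical-utilisation,virtual-allocation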
LVM-based space utilization differs depending on whether an LVM SR is upgraded or created as a new SR in
XenServer. Upgraded LVM SRs will retain a base node that is fully inflated to the size of the virtual disk, and any
subsequent snapshot or clone operations will provision at least one additional node that is fully inflated. For new
SRs, in contrast, the base node will be deflated to only the data allocated in the VHD overlay.
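On the host itself, the underlying logical volumes of an LVM-based SR can also be inspected to see which nodes
are inflated; the volume group naming shown below follows the usual VG_XenStorage-<sr-uuid> convention but
should be treated as an assumption for your installation:

    lvs VG_XenStorage-<sr-uuid>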
When VHD-based VDIs are deleted, the space is marked for deletion on disk. Actual removal of allocated data
may take some time to occur as it is handled by the coalesce process that runs asynchronously and independently
for each VHD-based SR.
LUN-based VDIs
Mapping a raw LUN as a Virtual Disk image is typically the highest-performance storage method. For
administrators who want to leverage existing storage SAN infrastructure such as NetApp, EqualLogic or
StorageLink accessible arrays, the array snapshot, clone and thin provisioning capabilities can be exploited directly
using one of the array-specific adapter SR types (NetApp, EqualLogic or StorageLink). The virtual machine storage