
XENSERVER DOCUMENTATION


XenServer Administrator's Guide


Release 5.5.0 Update 2

Table of Contents

1. Document Overview
1.1. How this Guide relates to other documentation
2. XenServer hosts and resource pools
2.1. Hosts and resource pools overview
2.2. Requirements for creating resource pools
2.3. Creating a resource pool
2.4. Adding shared storage
2.5. Installing and managing VMs on shared storage
2.6. Removing a XenServer host from a resource pool
2.7. High Availability
2.7.1. HA Overview
2.7.2. Configuration Requirements
2.7.3. Restart priorities
2.8. Enabling HA on a XenServer pool
2.8.1. Enabling HA using the CLI
2.8.2. Removing HA protection from a VM using the CLI
2.8.3. Recovering an unreachable host
2.8.4. Shutting down a host when HA is enabled
2.8.5. Shutting down a VM when it is protected by HA
2.9. Authenticating users using Active Directory (AD)
2.9.1. Configuring Active Directory authentication
2.9.2. User authentication
2.9.3. Removing access for a user
2.9.4. Leaving an AD domain
3. Storage
3.1. Storage Overview
3.1.1. Storage Repositories (SRs)
3.1.2. Virtual Disk Images (VDIs)
3.1.3. Physical Block Devices (PBDs)
3.1.4. Virtual Block Devices (VBDs)
3.1.5. Summary of Storage objects
3.1.6. Virtual Disk Data Formats
3.2. Storage configuration
3.2.1. Creating Storage Repositories
3.2.2. Upgrading LVM storage from XenServer 5.0 or earlier
3.2.3. LVM performance considerations
3.2.4. Converting between VDI formats
3.2.5. Probing an SR
3.2.6. Storage Multipathing
3.3. Storage Repository Types
3.3.1. Local LVM
3.3.2. Local EXT3 VHD
3.3.3. udev
3.3.4. ISO
3.3.5. EqualLogic
3.3.6. NetApp
3.3.7. Software iSCSI Support
3.3.8. Managing Hardware Host Bus Adapters (HBAs)
3.3.9. LVM over iSCSI
3.3.10. NFS VHD
3.3.11. LVM over hardware HBA
3.3.12. Citrix StorageLink Gateway (CSLG) SRs
3.4. Managing Storage Repositories
3.4.1. Destroying or forgetting an SR
3.4.2. Introducing an SR
3.4.3. Resizing an SR
3.4.4. Converting local Fibre Channel SRs to shared SRs
3.4.5. Moving Virtual Disk Images (VDIs) between SRs
3.4.6. Adjusting the disk IO scheduler
3.5. Virtual disk QoS settings
4. Networking
4.1. XenServer networking overview
4.1.1. Network objects
4.1.2. Networks
4.1.3. VLANs
4.1.4. NIC bonds
4.1.5. Initial networking configuration
4.2. Managing networking configuration
4.2.1. Creating networks in a standalone server
4.2.2. Creating networks in resource pools
4.2.3. Creating VLANs
4.2.4. Creating NIC bonds on a standalone host
4.2.5. Creating NIC bonds in resource pools
4.2.6. Configuring a dedicated storage NIC
4.2.7. Controlling Quality of Service (QoS)
4.2.8. Changing networking configuration options
4.2.9. NIC/PIF ordering in resource pools
4.3. Networking Troubleshooting
4.3.1. Diagnosing network corruption
4.3.2. Recovering from a bad network configuration
5. Workload Balancing
5.1. Workload Balancing Overview
5.1.1. Workload Balancing Basic Concepts
5.2. Designing Your Workload Balancing Deployment
5.2.1. Deploying One Server
5.2.2. Planning for Future Growth
5.2.3. Increasing Availability
5.2.4. Multiple Server Deployments
5.2.5. Workload Balancing Security
5.3. Workload Balancing Installation Overview
5.3.1. Workload Balancing System Requirements
5.3.2. Workload Balancing Data Store Requirements
5.3.3. Operating System Language Support
5.3.4. Preinstallation Considerations
5.3.5. Installing Workload Balancing
5.4. Windows Installer Commands for Workload Balancing
5.4.1. ADDLOCAL
5.4.2. CERT_CHOICE
5.4.3. CERTNAMEPICKED
5.4.4. DATABASESERVER
5.4.5. DBNAME
5.4.6. DBUSERNAME
5.4.7. DBPASSWORD
5.4.8. EXPORTCERT
5.4.9. EXPORTCERT_FQFN
5.4.10. HTTPS_PORT
5.4.11. INSTALLDIR
5.4.12. PREREQUISITES_PASSED
5.4.13. RECOVERYMODEL
5.4.14. USERORGROUPACCOUNT
5.4.15. WEBSERVICE_USER_CB
5.4.16. WINDOWS_AUTH
5.5. Initializing and Configuring Workload Balancing
5.5.1. Initialization Overview
5.5.2. To initialize Workload Balancing
5.5.3. To edit the Workload Balancing configuration for a pool
5.5.4. Authorization for Workload Balancing
5.5.5. Configuring Antivirus Software
5.5.6. Changing the Placement Strategy
5.5.7. Changing the Performance Thresholds and Metric Weighting
5.6. Accepting Optimization Recommendations
5.6.1. To accept an optimization recommendation
5.7. Choosing an Optimal Server for VM Initial Placement, Migrate, and Resume
5.7.1. To start a virtual machine on the optimal server
5.8. Entering Maintenance Mode with Workload Balancing Enabled
5.8.1. To enter maintenance mode with Workload Balancing enabled
5.9. Working with Workload Balancing Reports
5.9.1. Introduction
5.9.2. Types of Workload Balancing Reports
5.9.3. Using Workload Balancing Reports for Tasks
5.9.4. Creating Workload Balancing Reports
5.9.5. Generating Workload Balancing Reports
5.9.6. Workload Balancing Report Glossary
5.10. Administering Workload Balancing
5.10.1. Disabling Workload Balancing on a Resource Pool
5.10.2. Reconfiguring a Resource Pool to Use Another WLB Server
5.10.3. Uninstalling Workload Balancing
5.11. Troubleshooting Workload Balancing
5.11.1. General Troubleshooting Tips
5.11.2. Error Messages
5.11.3. Issues Installing Workload Balancing
5.11.4. Issues Initializing Workload Balancing
5.11.5. Issues Starting Workload Balancing
5.11.6. Workload Balancing Connection Errors
5.11.7. Issues Changing Workload Balancing Servers
6. Backup and recovery
6.1. Backups
6.2. Full metadata backup and disaster recovery (DR)
6.2.1. DR and metadata backup overview
6.2.2. Backup and restore using xsconsole
6.2.3. Moving SRs between hosts and Pools
6.2.4. Using Portable SRs for Manual Multi-Site Disaster Recovery
6.3. VM Snapshots
6.3.1. Regular Snapshots
6.3.2. Quiesced Snapshots
6.3.3. Taking a VM snapshot
6.3.4. VM Rollback
6.4. Coping with machine failures
6.4.1. Member failures
6.4.2. Master failures
6.4.3. Pool failures
6.4.4. Coping with Failure due to Configuration Errors
6.4.5. Physical Machine failure
7. Monitoring and managing XenServer
7.1. Alerts
7.1.1. Customizing Alerts
7.1.2. Configuring Email Alerts
7.2. Custom Fields and Tags
7.3. Custom Searches
7.4. Determining throughput of physical bus adapters
8. Command line interface
8.1. Basic xe syntax
8.2. Special characters and syntax
8.3. Command types
8.3.1. Parameter types
8.3.2. Low-level param commands
8.3.3. Low-level list commands
8.4. xe command reference
8.4.1. Bonding commands
8.4.2. CD commands
8.4.3. Console commands
8.4.4. Event commands
8.4.5. Host (XenServer host) commands
8.4.6. Log commands
8.4.7. Message commands
8.4.8. Network commands
8.4.9. Patch (update) commands
8.4.10. PBD commands
8.4.11. PIF commands
8.4.12. Pool commands
8.4.13. Storage Manager commands
8.4.14. SR commands
8.4.15. Task commands
8.4.16. Template commands
8.4.17. Update commands
8.4.18. User commands
8.4.19. VBD commands
8.4.20. VDI commands
8.4.21. VIF commands
8.4.22. VLAN commands
8.4.23. VM commands
8.4.24. Workload Balancing commands
9. Troubleshooting
9.1. XenServer host logs
9.1.1. Sending host log messages to a central server
9.2. XenCenter logs
9.3. Troubleshooting connections between XenCenter and the XenServer host
Index

List of Tables
5.1. Report Toolbar Buttons
5.2. Report Toolbar Buttons

Chapter 1. Document Overview

Table of Contents

1.1. How this Guide relates to other documentation

This document is a system administrator's guide to XenServer, the platform virtualization solution from Citrix. It describes the tasks involved in configuring a XenServer deployment -- in particular, how to set up storage, networking and resource pools, and how to administer XenServer hosts using the xe command line interface (CLI).

This section summarizes the rest of the guide so that you can find the information you need. The following topics are covered:

XenServer hosts and resource pools
XenServer storage configuration
XenServer network configuration
XenServer workload balancing
XenServer backup and recovery
Monitoring and managing XenServer
XenServer command line interface
XenServer troubleshooting
XenServer resource allocation guidelines

1.1. How this Guide relates to other documentation

This document is primarily aimed at system administrators, who need to configure and administer XenServer deployments. Other documentation shipped with this release includes:

XenServer Installation Guide provides a high level overview of XenServer, along with step-by-step instructions on installing XenServer hosts and the XenCenter management console.
XenServer Virtual Machine Installation Guide describes how to install Linux and Windows VMs on top of a XenServer deployment. As well as installing new VMs from install media (or using the VM templates provided with the XenServer release), this guide also explains how to create VMs from existing physical machines, using a process called P2V.
XenServer Software Development Kit Guide presents an overview of the XenServer SDK -- a selection of code samples that demonstrate how to write applications that interface with XenServer hosts.
XenAPI Specification provides a programmer's reference guide to the XenServer API.
XenServer User Security considers the issues involved in keeping your XenServer installation secure.
Release Notes provides a list of known issues that affect this release.

Chapter 2. XenServer hosts and resource pools

Table of Contents

2.1. Hosts and resource pools overview
2.2. Requirements for creating resource pools
2.3. Creating a resource pool
2.4. Adding shared storage
2.5. Installing and managing VMs on shared storage
2.6. Removing a XenServer host from a resource pool
2.7. High Availability
2.7.1. HA Overview
2.7.2. Configuration Requirements
2.7.3. Restart priorities
2.8. Enabling HA on a XenServer pool
2.8.1. Enabling HA using the CLI
2.8.2. Removing HA protection from a VM using the CLI
2.8.3. Recovering an unreachable host
2.8.4. Shutting down a host when HA is enabled
2.8.5. Shutting down a VM when it is protected by HA
2.9. Authenticating users using Active Directory (AD)
2.9.1. Configuring Active Directory authentication
2.9.2. User authentication
2.9.3. Removing access for a user
2.9.4. Leaving an AD domain

This chapter describes how resource pools can be created through a series of examples using the xe command line interface (CLI). A simple NFS-based shared storage configuration is presented and a number of simple VM management examples are discussed. Procedures for dealing with physical node failures are also described.

2.1. Hosts and resource pools overview

A resource pool comprises multiple XenServer host installations, bound together into a single managed entity which can host Virtual Machines. When combined with shared storage, a resource pool enables VMs to be started on any XenServer host which has sufficient memory and then dynamically moved between XenServer hosts while running with minimal downtime (XenMotion). If an individual XenServer host suffers a hardware failure, then the administrator can restart the failed VMs on another XenServer host in the same resource pool. If high availability (HA) is enabled on the resource pool, VMs will automatically be moved if their host fails. Up to 16 hosts are supported per resource pool, although this restriction is not enforced.

A pool always has at least one physical node, known as the master. Only the master node exposes an administration interface (used by XenCenter and the CLI); the master forwards commands to individual members as necessary.

2.2. Requirements for creating resource pools

A resource pool is an aggregate of one or more homogeneous XenServer hosts, up to a maximum of 16. The definition of homogeneous is:

the CPUs on the server joining the pool are the same (in terms of vendor, model, and features) as the CPUs on servers already in the pool.
the server joining the pool is running the same version of XenServer software, at the same patch level, as servers already in the pool.

The software will enforce additional constraints when joining a server to a pool; in particular:

it is not a member of an existing resource pool
it has no shared storage configured
there are no running or suspended VMs on the XenServer host which is joining
there are no active operations in progress on its VMs, such as a VM shutting down

You must also check that the clock of the host joining the pool is synchronized to the same time as the pool master (for example, by using NTP), that its management interface is not bonded (you can configure this once the host has successfully joined the pool), and that its management IP address is static (either configured on the host itself or by using an appropriate configuration on your DHCP server).

XenServer hosts in resource pools may contain different numbers of physical network interfaces and have local storage repositories of varying size. In practice, it is often difficult to obtain multiple servers with exactly the same CPUs, and so minor variations are permitted. If you are sure that it is acceptable in your environment for hosts with varying CPUs to be part of the same resource pool, then the pool joining operation can be forced by passing a --force parameter.
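For example, assuming host2 otherwise satisfies the join requirements and you accept the risk of mixing CPU types, a forced join issued from a console on host2 might look like the following sketch (the host name and credentials are placeholders):

xe pool-join master-address=<host1> master-username=root master-password=<password> --force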

Note

The requirement for a XenServer host to have a static IP address to be part of a resource pool also applies to servers providing shared NFS or iSCSI storage for the pool.

Although not a strict technical requirement for creating a resource pool, the advantages of pools (for example, the ability to dynamically choose on which XenServer host to run a VM and to dynamically move a VM between XenServer hosts) are only available if the pool has one or more shared storage repositories. If possible, postpone creating a pool of XenServer hosts until shared storage is available. Once shared storage has been added, Citrix recommends that you move existing VMs whose disks are in local storage into shared storage. This can be done using the xe vm-copy command or XenCenter.
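As a minimal sketch of that recommendation, a locally-stored VM can be copied onto the shared SR and the original then deleted (the VM name and SR UUID below are placeholders):

xe vm-copy vm=<vm_name> new-name-label=<vm_name_on_shared_storage> sr-uuid=<shared_sr_uuid>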

2.3. Creating a resource pool

Resource pools can be created using either the XenCenter management console or the CLI. When you join a new host to a resource pool, the joining host synchronizes its local database with the pool-wide one, and inherits some settings from the pool:

VM, local, and remote storage configuration is added to the pool-wide database. All of these will still be tied to the joining host in the pool unless you explicitly take action to make the resources shared after the join has completed.
The joining host inherits existing shared storage repositories in the pool and appropriate PBD records are created so that the new host can access existing shared storage automatically.
Networking information is partially inherited to the joining host: the structural details of NICs, VLANs and bonded interfaces are all inherited, but policy information is not. This policy information, which must be reconfigured, includes:

the IP addresses of management NICs, which are preserved from the original configuration
the location of the management interface, which remains the same as the original configuration. For example, if the other pool hosts have their management interfaces on a bonded interface, then the joining host must be explicitly migrated to the bond once it has joined. See To add NIC bonds to the pool master and other hosts for details on how to migrate the management interface to a bond.
dedicated storage NICs, which must be re-assigned to the joining host from XenCenter or the CLI, and the PBDs re-plugged to route the traffic accordingly. This is because IP addresses are not assigned as part of the pool join operation, and the storage NIC is not useful without this configured correctly. See Section 4.2.6, Configuring a dedicated storage NIC for details on how to dedicate a storage NIC from the CLI.

To join XenServer hosts host1 and host2 into a resource pool using the CLI

1. Open a console on XenServer host host2.
2. Command XenServer host host2 to join the pool on XenServer host host1 by issuing the command:

xe pool-join master-address=<host1> master-username=<root> \
master-password=<password>

The master-address must be set to the fully-qualified domain name of XenServer host host1 and the password must be the administrator password set when XenServer host host1 was installed.

Naming a resource pool

XenServer hosts belong to an unnamed pool by default. To create your first resource pool, rename the existing nameless pool. You can use tab-complete to get the <pool_uuid>:

xe pool-param-set name-label=<"New Pool"> uuid=<pool_uuid>

2.4. Adding shared storage

For a complete list of supported shared storage types, see the Storage chapter. This section demonstrates how shared storage (represented as a storage repository) can be created on an existing NFS server.

Adding NFS shared storage to a resource pool using the CLI

1. Open a console on any XenServer host in the pool.
2. Create the storage repository on <server:/path> by issuing the command:

xe sr-create content-type=user type=nfs name-label=<"Example SR"> shared=true \
device-config:server=<server> \
device-config:serverpath=<path>

device-config:server refers to the hostname of the NFS server and device-config:serverpath refers to the path on the NFS server. Since shared is set to true, the shared storage will be automatically connected to every XenServer host in the pool and any XenServer hosts that subsequently join will also be connected to the storage. The UUID of the created storage repository will be printed on the screen.
3. Find the UUID of the pool with the command:

xe pool-list

4. Set the shared storage as the pool-wide default with the command:

xe pool-param-set uuid=<pool-uuid> default-SR=<sr-uuid>

Since the shared storage has been set as the pool-wide default, all future VMs will have their disks created on shared storage by default. See Chapter 3, Storage for information about creating other types of shared storage.

2.5. Installing and managing VMs on shared storage

The following example shows how to install a Debian Linux VM using the Debian Etch 4.0 template provided with XenServer.

Installing a Debian Etch (4.0) VM

1. Open a console on any host in the pool.
2. Use the sr-list command to find the UUID of your shared storage:

xe sr-list

3. Create the Debian VM by issuing the command:

xe vm-install template="Debian Etch 4.0" new-name-label=<etch> \
sr-uuid=<shared_storage_uuid>

When the command completes, the Debian VM will be ready to start.
4. Start the Debian VM with the command:

xe vm-start vm=<etch>

The master will choose a XenServer host from the pool to start the VM. If the on parameter is provided, the VM will start on the specified XenServer host. If the requested XenServer host is unable to start the VM, the command will fail. To request that a VM is always started on a particular XenServer host, set the affinity parameter of the VM to the UUID of the desired XenServer host using the xe vm-param-set command (see the example after this procedure). Once set, the system will start the VM there if it can; if it cannot, it will default to choosing from the set of possible XenServer hosts.
5. You can use XenMotion to move the Debian VM to another XenServer host with the command:

xe vm-migrate vm=<etch> host=<host_name> --live

XenMotion keeps the VM running during this process to minimize downtime.

Note

When a VM is migrated, the domain on the original hosting server is destroyed and the memory that VM used is zeroed out before Xen makes it available to new VMs. This ensures that there is no information leak from old VMs to new ones. As a consequence, if you send multiple near-simultaneous commands to migrate a number of VMs while the destination server is near its memory limit (for example, a set of VMs consuming 3GB migrated to a server with 4GB of physical memory), the memory of an old domain might not be scrubbed before a migration is attempted, causing the migration to fail with a HOST_NOT_ENOUGH_FREE_MEMORY error. Inserting a delay between migrations should allow Xen the opportunity to successfully scrub the memory and return it to general use.
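For example, to request that the VM always starts on a particular host when possible, the affinity parameter can be set as in the following sketch (obtain the UUIDs with xe vm-list and xe host-list):

xe vm-param-set uuid=<vm_uuid> affinity=<host_uuid>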

2.6. Removing a XenServer host from a resource pool

When a XenServer host is removed (ejected) from a pool, the machine is rebooted, reinitialized, and left in a state equivalent to that after a fresh installation. It is important not to eject a XenServer host from a pool if there is important data on the local disks.

To remove a host from a resource pool using the CLI

1. Open a console on any host in the pool.
2. Find the UUID of the host by using the command:

xe host-list

3. Eject the host from the pool:

xe pool-eject host-uuid=<uuid>

The XenServer host will be ejected and left in a freshly-installed state.

Warning

Do not eject a host from a resource pool if it contains important data stored on its local disks. All of the data will be erased upon ejection from the pool. If you wish to preserve this data, copy the VM to shared storage on the pool first using XenCenter, or the xe vm-copy CLI command.

When a XenServer host containing locally stored VMs is ejected from a pool, those VMs will still be present in the pool database and visible to the other XenServer hosts. They will not start until the virtual disks associated with them have been changed to point at shared storage which can be seen by other XenServer hosts in the pool, or are simply removed. It is for this reason that you are strongly advised to move any local storage to shared storage upon joining a pool, so that individual XenServer hosts can be ejected (or physically fail) without loss of data.

2.7. High Availability

This section explains the XenServer implementation of virtual machine high availability (HA), and how to configure it using the xe CLI.

Note

XenServer HA is only available with a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.

2.7.1. HA Overview

When HA is enabled, XenServer continually monitors the health of the hosts in a pool. The HA mechanism automatically moves protected VMs to a healthy host if the current VM host fails. Additionally, if the host that fails is the master, HA selects another host to take over the master role automatically, so that you can continue to manage the XenServer pool.

To absolutely guarantee that a host is unreachable, a resource pool configured for high availability uses several heartbeat mechanisms to regularly check up on hosts. These heartbeats go through both the storage interfaces (to the Heartbeat SR) and the networking interfaces (over the management interfaces). Both of these heartbeat routes can be multi-homed for additional resilience to prevent false positives.

XenServer dynamically maintains a failover plan for what to do if a set of hosts in a pool fail at any given time. An important concept to understand is the host failures to tolerate value, which is defined as part of the HA configuration. This determines the number of failures that is allowed without any loss of service. For example, if a resource pool consisted of 16 hosts, and the tolerated failures is set to 3, the pool calculates a failover plan that allows for any 3 hosts to fail and still be able to restart VMs on other hosts. If a plan cannot be found, then the pool is considered to be overcommitted. The plan is dynamically recalculated based on VM lifecycle operations and movement. Alerts are sent (either through XenCenter or e-mail) if changes (for example the addition of new VMs to the pool) cause your pool to become overcommitted.
2.7.1.1. Overcommitting

A pool is overcommitted if the VMs that are currently running could not be restarted elsewhere following a user-defined number of host failures.

This would happen if there was not enough free memory across the pool to run those VMs following failure. However, there are also more subtle changes which can make HA guarantees unsustainable: changes to VBDs and networks can affect which VMs may be restarted on which hosts. Currently it is not possible for XenServer to check all actions before they occur and determine if they will cause violation of HA demands. However, an asynchronous notification is sent if HA becomes unsustainable.

2.7.1.2. Overcommitment Warning

If you attempt to start or resume a VM and that action causes the pool to be overcommitted, a warning alert is raised. This warning is displayed in XenCenter and is also available as a message instance through the Xen API. The message may also be sent to an email address if configured. You will then be allowed to cancel the operation, or proceed anyway. Proceeding will cause the pool to become overcommitted. The amount of memory used by VMs of different priorities is displayed at the pool and host levels.

2.7.1.3. Host Fencing

If a server failure occurs, such as the loss of network connectivity or a problem with the control stack, the XenServer host self-fences to ensure that the VMs are not running on two servers simultaneously. When a fence action is taken, the server immediately and abruptly restarts, causing all VMs running on it to be stopped. The other servers will detect that the VMs are no longer running and the VMs will be restarted according to the restart priorities assigned to them. The fenced server will enter a reboot sequence, and when it has restarted it will try to re-join the resource pool.

2.7.2. Configuration Requirements

To use the HA feature, you need:

Shared storage, including at least one iSCSI or Fibre Channel LUN of size 356MiB or greater -- the heartbeat SR. The HA mechanism creates two volumes on the heartbeat SR:
4MiB heartbeat volume
Used for heartbeating.
256MiB metadata volume
Stores pool master metadata to be used in the case of master failover.
If you are using a NetApp or EqualLogic SR, manually provision an iSCSI LUN on the array to use as the heartbeat SR.
A XenServer pool (this feature provides high availability at the server level within a single resource pool).
Enterprise licenses on all hosts.
Static IP addresses for all hosts.

Warning

Should the IP address of a server change while HA is enabled, HA will assume that the host's network has failed, and will probably fence the host and leave it in an unbootable state. To remedy this situation, disable HA using the host-emergency-ha-disable command, reset the pool master using pool-emergency-reset-master, and then re-enable HA.

For a VM to be protected by the HA feature, it must be agile. This means that:

it must have its virtual disks on shared storage (any type of shared storage may be used; the iSCSI or Fibre Channel LUN is only required for the storage heartbeat and can be used for virtual disk storage if you prefer, but this is not necessary)
it must not have a connection to a local DVD drive configured
it should have its virtual network interfaces on pool-wide networks.

Citrix strongly recommends the use of a bonded management interface on the servers in the pool if HA is enabled, and multipathed storage for the heartbeat SR.

If you create VLANs and bonded interfaces from the CLI, then they may not be plugged in and active despite being created. In this situation, a VM can appear to be not agile, and so it cannot be protected by HA. If this occurs, use the CLI pif-plug command to bring the VLAN and bond PIFs up so that the VM can become agile. You can also determine precisely why a VM is not agile by using the xe diagnostic-vm-status CLI command to analyze its placement constraints, and take remedial action if required.
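As an illustration, the following sketch checks a VM's placement constraints and brings up an unplugged PIF (the UUIDs are placeholders obtained from xe vm-list and xe pif-list):

xe diagnostic-vm-status uuid=<vm_uuid>
xe pif-plug uuid=<pif_uuid>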

2.7.3. Restart priorities

Virtual machines are assigned a restart priority and a flag that indicates whether they should be protected by HA or not. When HA is enabled, every effort is made to keep protected virtual machines live. If a restart priority is specified, any protected VM that is halted will be started automatically. If a server fails then the VMs on it will be started on another server.

The possible restart priorities are:

1 | 2 | 3
when a pool is overcommitted, the HA mechanism will attempt to restart protected VMs with the lowest restart priority first
best-effort
VMs with this priority setting will be restarted only after the system has attempted to restart protected VMs
ha-always-run=false
VMs with this parameter set will not be restarted
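For example, these priorities map onto the ha-restart-priority and ha-always-run VM parameters, which can be set as in the following sketch (placeholder UUIDs; see Section 2.8.1 for the full procedure):

xe vm-param-set uuid=<vm_uuid> ha-restart-priority=2 ha-always-run=true
xe vm-param-set uuid=<vm_uuid> ha-restart-priority=best-effort ha-always-run=true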

The restart priorities determine the order in which VMs are restarted when a failure occurs. In a given configuration where a number of server failures greater than zero can be tolerated (as indicated in the HA panel in the GUI, or by the ha-plan-exists-for field on the pool object in the CLI), the VMs that have restart priorities 1, 2 or 3 are guaranteed to be restarted given the stated number of server failures. VMs with a best-effort priority setting are not part of the failover plan and are not guaranteed to be kept running, since capacity is not reserved for them. If the pool experiences server failures and enters a state where the number of tolerable failures drops to zero, the protected VMs will no longer be guaranteed to be restarted. If this condition is reached, a system alert will be generated. In this case, should an additional failure occur, all VMs that have a restart priority set will behave according to the best-effort behavior.

If a protected VM cannot be restarted at the time of a server failure (for example, if the pool was overcommitted when the failure occurred), further attempts to start this VM will be made as the state of the pool changes. This means that if extra capacity becomes available in a pool (if you shut down a non-essential VM, or add an additional server, for example), a fresh attempt to restart the protected VMs will be made, which may now succeed.

Note

No running VM will ever be stopped or migrated in order to free resources for a VM with ha-always-run=true to be restarted.

2.8. Enabling HA on a XenServer pool

HA can be enabled on a pool using either XenCenter or the command-line interface. In either case, you will specify a set of priorities that determine which VMs should be given the highest restart priority when a pool is overcommitted.

Warning

When HA is enabled, some operations that would compromise the plan for restarting VMs may be disabled, such as removing a server from a pool. To perform these operations, HA can be temporarily disabled, or alternatively, VMs protected by HA can be made unprotected.

2.8.1. Enabling HA using the CLI

1. Verify that you have a compatible Storage Repository (SR) attached to your pool. iSCSI or Fibre Channel are compatible SR types. Please refer to the reference guide for details on how to configure such a storage repository using the CLI.
2. For each VM you wish to protect, set a restart priority. You can do this as follows:

xe vm-param-set uuid=<vm_uuid> ha-restart-priority=<1> ha-always-run=true

3. Enable HA on the pool:

xe pool-ha-enable heartbeat-sr-uuids=<sr_uuid>

4. Run the pool-ha-compute-max-host-failures-to-tolerate command. This command returns the maximum number of hosts that can fail before there are insufficient resources to run all the protected VMs in the pool.

xe pool-ha-compute-max-host-failures-to-tolerate

The number of failures to tolerate determines when an alert is sent: the system will recompute a failover plan as the state of the pool changes, and with this computation the system identifies the capacity of the pool and how many more failures are possible without loss of the liveness guarantee for protected VMs. A system alert is generated when this computed value falls below the specified value for ha-host-failures-to-tolerate.
5. Specify the number of failures to tolerate. This should be less than or equal to the computed value:

xe pool-param-set ha-host-failures-to-tolerate=<2>

2.8.2. Removing HA protection from a VM using the CLI

To disable HA features for a VM, use the xe vm-param-set command to set the ha-always-run parameter to false. This does not clear the VM restart priority settings. You can enable HA for a VM again by setting the ha-always-run parameter to true.
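For example (a minimal sketch with a placeholder UUID):

xe vm-param-set uuid=<vm_uuid> ha-always-run=false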

2.8.3. Recovering an unreachable host

If for some reason a host cannot access the HA statefile, it is possible that the host may become unreachable. To recover your XenServer installation it may be necessary to disable HA using the host-emergency-ha-disable command:

xe host-emergency-ha-disable --force

If the host was the pool master, then it should start up as normal with HA disabled. Slaves should reconnect and automatically disable HA. If the host was a pool slave and cannot contact the master, then it may be necessary to force the host to reboot as a pool master (xe pool-emergency-transition-to-master) or to tell it where the new master is (xe pool-emergency-reset-master):

xe pool-emergency-transition-to-master uuid=<host_uuid>
xe pool-emergency-reset-master master-address=<new_master_hostname>

When all hosts have successfully restarted, re-enable HA:

xe pool-ha-enable heartbeat-sr-uuids=<sr_uuid>

2.8.4. Shutting down a host when HA is enabled

When HA is enabled, special care needs to be taken when shutting down or rebooting a host to prevent the HA mechanism from assuming that the host has failed. To shut down a host cleanly in an HA-enabled environment, first disable the host, then evacuate the host, and finally shut down the host using either XenCenter or the CLI. To shut down a host in an HA-enabled environment on the command line:

xe host-disable host=<host_name>
xe host-evacuate uuid=<host_uuid>
xe host-shutdown host=<host_name>

2.8.5. Shutting down a VM when it is protected by HA

When a VM is protected under an HA plan and set to restart automatically, it cannot be shut down while this protection is active. To shut down a VM, first disable its HA protection and then execute the CLI command. XenCenter offers you a dialog box to automate disabling the protection if you click the Shutdown button of a protected VM.

Note

If you shut down a VM from within the guest, and the VM is protected, it is automatically restarted under the HA failure conditions. This helps ensure that operator error (or an errant program that mistakenly shuts down the VM) does not result in a protected VM being left shut down accidentally. If you want to shut this VM down, disable its HA protection first.
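A minimal CLI sketch of that sequence, using placeholder identifiers, removes the protection and then shuts the VM down:

xe vm-param-set uuid=<vm_uuid> ha-always-run=false
xe vm-shutdown vm=<vm_name>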

2.9. Authenticating users using Active Directory (AD)

XenServer supports the authentication of users through AD. This makes it easier to control access to XenServer hosts. Active Directory users can use the xe CLI (passing appropriate -u and -pw arguments) and also connect to the host using XenCenter. Authentication is done on a per-resource pool basis.

Access is controlled by the use of subjects. A subject in XenServer maps to an entity on your directory server (either a user or a group). When external authentication is enabled, the credentials used to create a session are first checked against the local root credentials (in case your directory server is unavailable) and then against the subject list. To permit access, you must create a subject entry for the person or group you wish to grant access to. This can be done using XenCenter or the xe CLI.

2.9.1. Configuring Active Directory authentication

XenServer supports use of Active Directory servers using Windows 2003 or later.

For external authentication using Active Directory to be successful, it is important that the clocks on your XenServer hosts are synchronized with those on your Active Directory server. When XenServer joins the Active Directory domain, this will be checked and authentication will fail if there is too much skew between the servers.

Note

The servers can be in different time-zones, and it is the UTC time that is compared. To ensure synchronization is correct, you may choose to use the same NTP servers for your XenServer pool and the Active Directory server.

When configuring Active Directory authentication for a XenServer host, the same DNS servers should be used for both the Active Directory server (and have appropriate configuration to allow correct interoperability) and the XenServer host (note that in some configurations, the Active Directory server may provide the DNS itself). This can be achieved either by using DHCP to provide the IP address and a list of DNS servers to the XenServer host, or by setting values in the PIF objects or using the installer if a manual static configuration is used.

Citrix recommends enabling DHCP to broadcast host names. In particular, the host names localhost or linux should not be assigned to hosts. Host names must consist solely of no more than 156 alphanumeric characters, and may not be purely numeric.

Enabling external authentication on a pool

External authentication using Active Directory can be configured using either XenCenter or the CLI using the command below.

xe pool-enable-external-auth auth-type=AD \
service-name=<full-qualified-domain> \
config:user=<username> \
config:pass=<password>

The user specified needs to have Add/remove computer objects or workstations privileges, which is the default for domain administrators.

Note

If you are not using DHCP on the network that Active Directory and your XenServer hosts use, you can use these two approaches to set up your DNS:

1. Configure the DNS server to use on your XenServer hosts:

xe pif-reconfigure-ip mode=static dns=<dnshost>

2. Manually set the management interface to use a PIF that is on the same network as your DNS server:

xe host-management-reconfigure pif-uuid=<pif_in_the_dns_subnetwork>

Note

External authentication is a per-host property. However, Citrix advises that you enable and disable this on a per-pool basis; in this case XenServer will deal with any failures that occur when enabling authentication on a particular host, and will perform any roll-back of changes that may be required, ensuring that a consistent configuration is used across the pool. Use the host-param-list command to inspect the properties of a host and to determine the status of external authentication by checking the values of the relevant fields.
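For example, to check whether AD authentication is active on a particular host you can read the external-auth fields directly, as in this sketch (the host UUID comes from xe host-list):

xe host-param-get uuid=<host_uuid> param-name=external-auth-type
xe host-param-get uuid=<host_uuid> param-name=external-auth-service-name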
Disabling external authentication

Use XenCenter to disable Active Directory authentication, or the following xe command:

xe pool-disable-external-auth

2.9.2. User authentication

To allow a user access to your XenServer host, you must add a subject for that user or a group that they are in. (Transitive group memberships are also checked in the normal way, for example: adding a subject for group A, where group A contains group B and user 1 is a member of group B, would permit access to user 1.) If you wish to manage user permissions in Active Directory, you could create a single group that you then add and remove users to/from; alternatively, you can add and remove individual users from XenServer, or a combination of users and groups, as appropriate for your authentication requirements. The subject list can be managed from XenCenter or using the CLI as described below.

When authenticating a user, the credentials are first checked against the local root account, allowing you to recover a system whose AD server has failed. If the credentials (that is, username then password) do not match/authenticate, then an authentication request is made to the AD server; if this is successful, the user's information will be retrieved and validated against the local subject list, otherwise access will be denied. Validation against the subject list will succeed if the user or a group in the transitive group membership of the user is in the subject list.

Allowing a user access to XenServer using the CLI

To add an AD subject to XenServer:

xe subject-add subject-name=<entity name>

The entity name should be the name of the user or group to which you want to grant access. You may optionally include the domain of the entity (for example, '<xendt\user1>' as opposed to '<user1>'), although the behavior will be the same unless disambiguation is required.

Removing access for a user using the CLI

1. Identify the subject identifier for the subject you wish to revoke access for. This would be the user or the group containing the user (removing a group would remove access for all users in that group, provided they are not also specified in the subject list). You can do this using the subject-list command:

xe subject-list

You may wish to apply a filter to the list, for example to get the subject identifier for a user named user1 in the testad domain, you could use the following command:

xe subject-list other-config:subject-name='<domain\user>'

2. Remove the user using the subject-remove command, passing in the subject identifier you learned in the previous step:

xe subject-remove subject-identifier=<subject-identifier>

3. You may wish to terminate any current sessions this user has already authenticated. See Terminating all authenticated sessions using xe and Terminating individual user sessions using xe for more information about terminating sessions. If you do not terminate sessions, users whose permissions have been revoked may be able to continue to access the system until they log out.

Listing subjects with access

To identify the list of users and groups with permission to access your XenServer host or pool, use the following command:

xe subject-list

2.9.3. Removing access for a user

Once a user is authenticated, they will have access to the server until they end their session, or another user terminates their session. Removing a user from the subject list, or removing them from a group that is in the subject list, will not automatically revoke any already-authenticated sessions that the user has; this means that they may be able to continue to access the pool using XenCenter or other API sessions that they have already created. In order to terminate these sessions forcefully, XenCenter and the CLI provide facilities to terminate individual sessions, or all currently active sessions. See the XenCenter help for more information on procedures using XenCenter, or below for procedures using the CLI.

Terminating all authenticated sessions using xe

Execute the following CLI command:

xe session-subject-identifier-logout-all

Terminating individual user sessions using xe

1. Determine the subject identifier whose session you wish to log out. Use either the session-subject-identifier-list or subject-list xe command to find this (the first shows users who have sessions, the second shows all users but can be filtered, for example, using a command like xe subject-list other-config:subject-name=xendt\\user1 -- depending on your shell you may need a double backslash as shown).
2. Use the session-subject-identifier-logout command, passing the subject identifier you have determined in the previous step as a parameter, for example:

xe session-subject-identifier-logout subject-identifier=<subject-id>

2.9.4. Leaving an AD domain

Use XenCenter to leave an AD domain. See the XenCenter help for more information. Alternatively, run the pool-disable-external-auth command, specifying the pool uuid if required.

Note

Leaving the domain will not cause the host objects to be removed from the AD database. See this knowledge base article for more information about this and how to remove the disabled host entries.

Chapter 3. Storage

Table of Contents

3.1. Storage Overview
3.1.1. Storage Repositories (SRs)
3.1.2. Virtual Disk Images (VDIs)
3.1.3. Physical Block Devices (PBDs)
3.1.4. Virtual Block Devices (VBDs)
3.1.5. Summary of Storage objects
3.1.6. Virtual Disk Data Formats
3.2. Storage configuration
3.2.1. Creating Storage Repositories
3.2.2. Upgrading LVM storage from XenServer 5.0 or earlier
3.2.3. LVM performance considerations
3.2.4. Converting between VDI formats
3.2.5. Probing an SR
3.2.6. Storage Multipathing
3.3. Storage Repository Types
3.3.1. Local LVM
3.3.2. Local EXT3 VHD
3.3.3. udev
3.3.4. ISO
3.3.5. EqualLogic
3.3.6. NetApp
3.3.7. Software iSCSI Support
3.3.8. Managing Hardware Host Bus Adapters (HBAs)
3.3.9. LVM over iSCSI
3.3.10. NFS VHD
3.3.11. LVM over hardware HBA
3.3.12. Citrix StorageLink Gateway (CSLG) SRs
3.4. Managing Storage Repositories
3.4.1. Destroying or forgetting an SR
3.4.2. Introducing an SR
3.4.3. Resizing an SR
3.4.4. Converting local Fibre Channel SRs to shared SRs
3.4.5. Moving Virtual Disk Images (VDIs) between SRs
3.4.6. Adjusting the disk IO scheduler
3.5. Virtual disk QoS settings

This chapter discusses the framework for storage abstractions. It describes the way physical storage hardware of various kinds is mapped to VMs, and the software objects used by the XenServer host API to perform storage-related tasks. Detailed sections on each of the supported storage types include procedures for creating storage for VMs using the CLI, with type-specific device configuration options, generating snapshots for backup purposes and some best practices for managing storage in XenServer host environments. Finally, the virtual disk QoS (quality of service) settings are described.

3.1. Storage Overview

This section explains what the XenServer storage objects are and how they are related to each other.

3.1.1. Storage Repositories (SRs)

XenServer defines a container called a storage repository (SR) to describe a particular storage target, in which Virtual Disk Images (VDIs) are stored. A VDI is a disk abstraction which contains the contents of a virtual disk.

The interface to storage hardware allows VDIs to be supported on a large number of SR types. The XenServer SR is very flexible, with built-in support for IDE, SATA, SCSI and SAS drives locally connected, and iSCSI, NFS, SAS and Fibre Channel remotely connected. The SR and VDI abstractions allow advanced storage features such as sparse provisioning, VDI snapshots, and fast cloning to be exposed on storage targets that support them. For storage subsystems that do not inherently support advanced operations directly, a software stack is provided based on Microsoft's Virtual Hard Disk (VHD) specification which implements these features.

Each XenServer host can use multiple SRs and different SR types simultaneously. These SRs can be shared between hosts or dedicated to particular hosts. Shared storage is pooled between multiple hosts within a defined resource pool. A shared SR must be network accessible to each host. All hosts in a single resource pool must have at least one shared SR in common.

SRs are storage targets containing virtual disk images (VDIs). SR commands provide operations for creating, destroying, resizing, cloning, connecting and discovering the individual VDIs that they contain.

A storage repository is a persistent, on-disk data structure. For SR types that use an underlying block device, the process of creating a new SR involves erasing any existing data on the specified storage target. Other storage types such as NFS, NetApp, EqualLogic and StorageLink SRs create a new container on the storage array in parallel to existing SRs.

CLI operations to manage storage repositories are described in Section 8.4.14, SR commands.

3.1.2. Virtual Disk Images (VDIs)

Virtual Disk Images are a storage abstraction that is presented to a VM. VDIs are the fundamental unit of virtualized storage in XenServer. Similar to SRs, VDIs are persistent, on-disk objects that exist independently of XenServer hosts. CLI operations to manage VDIs are described in Section 8.4.20, VDI commands. The actual on-disk representation of the data differs by SR type and is managed by a separate storage plugin interface for each SR, called the SM API.

3.1.3. Physical Block Devices (PBDs)

Physical Block Devices represent the interface between a physical server and an attached SR. PBDs are connector objects that allow a given SR to be mapped to a XenServer host. PBDs store the device configuration fields that are used to connect to and interact with a given storage target. For example, NFS device configuration includes the IP address of the NFS server and the associated path that the XenServer host mounts. PBD objects manage the run-time attachment of a given SR to a given XenServer host. CLI operations relating to PBDs are described in Section 8.4.10, PBD commands.

3.1.4. Virtual Block Devices (VBDs)

Virtual Block Devices are connector objects (similar to the PBDs described above) that allow mappings between VDIs and VMs. In addition to providing a mechanism for attaching (also called plugging) a VDI into a VM, VBDs allow for the fine-tuning of parameters regarding QoS (quality of service), statistics, and the bootability of a given VDI. CLI operations relating to VBDs are described in Section 8.4.19, VBD commands.

3.1.5. Summary of Storage objects

The following image is a summary of how the storage objects presented so far are related:

Graphical overview of storage repositories and related objects

3.1.6. Virtual Disk Data Formats

In general, there are three types of mapping of physical storage to a VDI:

File-based VHD on a filesystem: VM images are stored as thin-provisioned VHD format files on either a local non-shared filesystem (EXT type SR) or a shared NFS target (NFS type SR).
Logical Volume-based VHD on a LUN: the default XenServer block-device-based storage inserts a Logical Volume manager on a disk, either a locally attached device (LVM type SR) or a SAN attached LUN over either Fibre Channel (LVMoHBA type SR), iSCSI (LVMoISCSI type SR) or SAS (LVMoHBA type SR). VDIs are represented as volumes within the Volume manager and stored in VHD format to allow thin provisioning of reference nodes on snapshot and clone.
LUN per VDI: LUNs are directly mapped to VMs as VDIs by SR types that provide an array-specific plugin (NetApp, EqualLogic or StorageLink type SRs). The array storage abstraction therefore matches the VDI storage abstraction for environments that manage storage provisioning at an array level.

3.1.6.1. VHD-based VDIs

VHD files may be chained, allowing two VDIs to share common data. In cases where a VHD-backed VM is cloned, the resulting VMs share the common on-disk data at the time of cloning. Each proceeds to make its own changes in an isolated copy-on-write (CoW) version of the VDI. This feature allows VHD-based VMs to be quickly cloned from templates, facilitating very fast provisioning and deployment of new VMs.

The VHD format used by LVM-based and file-based SR types in XenServer uses sparse provisioning. The image file is automatically extended in 2MB chunks as the VM writes data into the disk. For file-based VHD, this has the considerable benefit that VM image files take up only as much space on the physical storage as required. With LVM-based VHD the underlying logical volume container must be sized to the virtual size of the VDI; however, unused space on the underlying CoW instance disk is reclaimed when a snapshot or clone occurs. The difference between the two behaviours can be characterised in the following way:

For LVM-based VHDs, the difference disk nodes within the chain consume only as much data as has been written to disk, but the leaf nodes (VDI clones) remain fully inflated to the virtual size of the disk. Snapshot leaf nodes (VDI snapshots) remain deflated when not in use and can be attached Read-only to preserve the deflated allocation. Snapshot nodes that are attached Read-Write will be fully inflated on attach, and deflated on detach.
For file-based VHDs, all nodes consume only as much data as has been written, and the leaf node files grow to accommodate data as it is actively written. If a 100GB VDI is allocated for a new VM and an OS is installed, the VDI file will physically be only the size of the OS data that has been written to the disk, plus some minor metadata overhead.

When cloning VMs based on a single VHD template, each child VM forms a chain where new changes are written to the new VM, and old blocks are directly read from the parent template. If the new VM was converted into a further template and more VMs cloned, then the resulting chain will suffer degraded performance. XenServer supports a maximum chain length of 30, but it is generally not recommended that you approach this limit without good reason. If in doubt, you can always "copy" the VM using XenServer or the vm-copy command, which resets the chain length back to 0.

3.1.6.1.1. VHD Chain Coalescing

VHD images support chaining, which is the process whereby information shared between one or more VDIs is not duplicated. This leads to a situation where trees of chained VDIs are created over time as VMs and their associated VDIs get cloned. When one of the VDIs in a chain is deleted, XenServer rationalizes the other VDIs in the chain to remove unnecessary VDIs.

This coalescing process runs asynchronously. The amount of disk space reclaimed and the time taken to perform the process depend on the size of the VDI and the amount of shared data. Only one coalescing process will ever be active for an SR. This process thread runs on the SR master host.

If you have critical VMs running on the master server of the pool and experience occasional slow IO due to this process, you can take steps to mitigate against this:

Migrate the VM to a host other than the SR master (see the example after this list)
Set the disk IO priority to a higher level, and adjust the scheduler. See Section 3.5, Virtual disk QoS settings for more information.
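For example, the first mitigation is simply a live migration away from the SR master, as in this sketch with placeholder names:

xe vm-migrate vm=<vm_name> host=<host_other_than_sr_master> --live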

3.1.6.1.2. Space Utilisation

Space utilisation is always reported based on the current allocation of the SR, and may not reflect the amount of virtual disk space allocated. The reporting of space for LVM-based SRs versus file-based SRs will also differ given that file-based VHD supports full thin provisioning, while the underlying volume of an LVM-based VHD will be fully inflated to support potential growth for writeable leaf nodes. Space utilisation reported for the SR will depend on the number of snapshots, and the amount of difference data written to a disk between each snapshot.

LVM-based space utilisation differs depending on whether an LVM SR is upgraded vs. created as a new SR in XenServer. Upgraded LVM SRs will retain a base node that is fully inflated to the size of the virtual disk, and any subsequent snapshot or clone operations will provision at least one additional node that is fully inflated. For new SRs, in contrast, the base node will be deflated to only the data allocated in the VHD overlay.

When VHD-based VDIs are deleted, the space is marked for deletion on disk. Actual removal of allocated data may take some time to occur as it is handled by the coalesce process that runs asynchronously and independently for each VHD-based SR.
3.1.6.2. LUN-based VDIs

Mapping a raw LUN as a Virtual Disk Image is typically the highest-performance storage method. For administrators that want to leverage existing storage SAN infrastructure such as NetApp, EqualLogic or StorageLink accessible arrays, the array snapshot, clone and thin provisioning capabilities can be exploited directly using one of the array-specific adapter SR types (NetApp, EqualLogic or StorageLink). The virtual machine storage operations are mapped directly onto the array APIs using a LUN per VDI representation. This includes activating the data path on demand, such as when a VM is started or migrated to another host.

Managed NetApp LUNs are accessible using the NetApp SR driver type, and are hosted on a Network Appliance device running a version of Ontap 7.0 or greater. LUNs are allocated and mapped dynamically to the host using the XenServer host management framework.

EqualLogic storage is accessible using the EqualLogic SR driver type, and is hosted on an EqualLogic storage array running a firmware version of 4.0 or greater. LUNs are allocated and mapped dynamically to the host using the XenServer host management framework.

For further information on StorageLink supported array systems and the various capabilities in each case, please refer to the StorageLink documentation directly.

3.2. Jtorage configuration

Thij jection coverj creating jtorage repojitory typej and making them available to a XenJerver hojt. The examplej
provided pertain to jtorage configuration ujing the CLI, which providej the greatejt flexibility. Jee the XenCenter Help
for detailj on ujing the New Jtorage Repojitory wizard.

3.2.1. Creating Storage Repositories

This section explains how to create Storage Repositories (SRs) of different types and make them available to a XenServer
host. The examples provided cover creating SRs using the xe CLI. See the XenCenter help for details on using the New
Storage Repository wizard to add SRs using XenCenter.

Note
Local SRs of type lvm and ext can only be created using the xe CLI. After creation all SR types can be managed by
either XenCenter or the xe CLI.

There are two basic steps involved in creating a new storage repository for use on a XenServer host using the CLI:

1. Probe the SR type to determine values for any required parameters.

2. Create the SR to initialize the SR object and associated PBD objects, plug the PBDs, and activate the SR.

These steps differ in detail depending on the type of SR being created. In all examples the sr-create command
returns the UUID of the created SR if successful.

SRs can also be destroyed when no longer in use to free up the physical device, or forgotten to detach the SR from one
XenServer host and attach it to another. See Section 3.4.1, Destroying or forgetting an SR for details.
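
As a minimal sketch of the two-step probe/create workflow described above, using an NFS server as an example (the
server address, export path and host UUID are placeholders to be substituted with your own values):

xe sr-probe type=nfs device-config:server=192.168.1.10
xe sr-create host-uuid=<host_uuid> content-type=user shared=true \
    name-label=<"Example NFS SR"> type=nfs \
    device-config:server=192.168.1.10 device-config:serverpath=/export1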

3.2.2. Upgrading LVM storage from XenServer 5.0 or earlier

See the XenServer Installation Guide for information on upgrading LVM storage to enable the latest features. Local, LVM
on iSCSI, and LVM on HBA storage types from older (XenServer 5.0 and before) product versions will need to be
upgraded before they will support snapshot and fast clone.

Note
Upgrade is a one-way operation so Citrix recommends only performing the upgrade when you are certain the storage
will no longer need to be attached to a pool running an older software version.

3.2.3. LVM performance considerations

The snapshot and fast clone functionality provided in XenServer 5.5 and later for LVM-based SRs comes with an inherent
performance overhead. In cases where optimal performance is desired, XenServer supports creation of VDIs in
the raw format in addition to the default VHD format. The XenServer snapshot functionality is not supported on raw VDIs.

Note
Non-transportable snapshots using the default Windows VSS provider will work on any type of VDI.

Warning
Do not try to snapshot a VM that has type=raw disks attached. This could result in a partial snapshot being created. In
this situation, you can identify the orphan snapshot VDIs by checking the snapshot-of field and then deleting them.

3.2.3.1. VDI types

In general, VHD format VDIs will be created. You can opt to use raw at the time you create the VDI; this can only be
done using the xe CLI. After a software upgrade from a previous XenServer version, existing data will be preserved as
backwards-compatible raw VDIs, but these are special-cased so that snapshots can be taken of them once you have
allowed this by upgrading the SR. Once the SR has been upgraded and the first snapshot has been taken, you will be
accessing the data through a VHD format VDI.

To check if an SR has been upgraded, verify that its sm-config:use_vhd key is true. To check if a VDI was
created with type=raw, check its sm-config map. The sr-param-list and vdi-param-list xe commands
can be used respectively for this purpose.
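
For example, the following commands (a minimal sketch; the UUIDs are placeholders) filter the relevant sm-config
entries out of the parameter lists mentioned above:

xe sr-param-list uuid=<sr_uuid> | grep sm-config
xe vdi-param-list uuid=<vdi_uuid> | grep sm-config

An upgraded SR should show use_vhd: true in its sm-config map, and a raw VDI should show type: raw in its sm-config map.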
3.2.3.2. Creating a raw virtual disk using the xe CLI

1. Run the following command to create a VDI given the UUID of the SR you want to place the virtual disk in (the
sm-config:type=raw key marks the new VDI as raw rather than VHD):

xe vdi-create sr-uuid=<sr-uuid> type=user virtual-size=<virtual-size> \
    name-label=<VDI name> sm-config:type=raw

2. Attach the new virtual disk to a VM and use your normal disk tools within the VM to partition and format, or
otherwise make use of the new disk. You can use the vbd-create command to create a new VBD to map the
virtual disk into your VM.

3.2.4. Converting between VDI formats

It is not possible to do a direct conversion between the raw and VHD formats. Instead, you can create a new VDI (either
raw, as described above, or VHD if the SR has been upgraded or was created on XenServer 5.5 or later) and then copy
data into it from an existing volume. Citrix recommends that you use the xe CLI to ensure that the new VDI has a virtual
size at least as big as the VDI you are copying from (by checking its virtual-size field, for example by using the
vdi-param-list command). You can then attach this new VDI to a VM and use your preferred tool within the VM
(standard disk management tools in Windows, or the dd command in Linux) to do a direct block-copy of the data. If the
new volume is a VHD volume, it is important to use a tool that can avoid writing empty sectors to the disk, so that space is
used optimally in the underlying storage repository; in this case a file-based copy approach may be more suitable.
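
The following sketch outlines the copy workflow described above for a Linux VM. Every UUID, size and device name
shown is a placeholder, and the in-guest device names depend on how the VBDs are attached in your environment:

xe vdi-create sr-uuid=<sr_uuid> type=user virtual-size=<size_at_least_as_big_as_source> \
    name-label=<"Copy of existing disk">
xe vbd-create vm-uuid=<vm_uuid> vdi-uuid=<new_vdi_uuid> device=<free_device_number>
xe vbd-plug uuid=<new_vbd_uuid>

Inside the VM, assuming the source and destination disks appear as /dev/xvdb and /dev/xvdc:

dd if=/dev/xvdb of=/dev/xvdc bs=1M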

3.2.5. Probing an SR

The sr-probe command can be used in two ways:

1. To identify unknown parameters for use in creating an SR.

2. To return a list of existing SRs.

In both cases sr-probe works by specifying an SR type and one or more device-config parameters for that SR
type. When an incomplete set of parameters is supplied the sr-probe command returns an error message indicating
parameters are missing and the possible options for the missing parameters. When a complete set of parameters is supplied
a list of existing SRs is returned. All sr-probe output is returned as XML.

For example, a known iSCSI target can be probed by specifying its name or IP address, and the set of IQNs available on
the target will be returned:

xe sr-probe type=lvmoiscsi device-config:target=<192.168.1.10>

Error code: SR_BACKEND_FAILURE_96
Error parameters: , The request is missing or has an incorrect target IQN parameter, \
<?xml version="1.0" ?>
<iscsi-target-iqns>
    <TGT>
        <Index>
            0
        </Index>
        <IPAddress>
            192.168.1.10
        </IPAddress>
        <TargetIQN>
            iqn.192.168.1.10:filer1
        </TargetIQN>
    </TGT>
</iscsi-target-iqns>

Probing the same target again and specifying both the name/IP address and desired IQN returns the set of SCSIids (LUNs)
available on the target/IQN:

xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10 \
    device-config:targetIQN=iqn.192.168.1.10:filer1

Error code: SR_BACKEND_FAILURE_107
Error parameters: , The SCSIid parameter is missing or incorrect, \
<?xml version="1.0" ?>
<iscsi-target>
    <LUN>
        <vendor>
            IET
        </vendor>
        <LUNid>
            0
        </LUNid>
        <size>
            42949672960
        </size>
        <SCSIid>
            149455400000000000000000002000000b70200000f000000
        </SCSIid>
    </LUN>
</iscsi-target>

Probing the same target and supplying all three parameters will return a list of SRs that exist on the LUN, if any:

xe sr-probe type=lvmoiscsi device-config:target=192.168.1.10 \
    device-config:targetIQN=192.168.1.10:filer1 \
    device-config:SCSIid=149455400000000000000000002000000b70200000f000000

<?xml version="1.0" ?>
<SRlist>
    <SR>
        <UUID>
            3f6e1ebd-8687-0315-f9d3-b02ab3adc4a6
        </UUID>
        <Devlist>
            /dev/disk/by-id/scsi-149455400000000000000000002000000b70200000f000000
        </Devlist>
    </SR>
</SRlist>
The following parameters can be probed for each SR type:

SR type      device-config parameter                 Can be probed?   Required for sr-create?
             (in order of dependency)

lvmoiscsi    target                                  No               Yes
             chapuser                                No               No
             chappassword                            No               No
             targetIQN                               Yes              Yes
             SCSIid                                  Yes              Yes

lvmohba      SCSIid                                  Yes              Yes

netapp       target                                  No               Yes
             username                                No               Yes
             password                                No               Yes
             chapuser                                No               No
             chappassword                            No               No
             aggregate                               No [a]           Yes
             FlexVols                                No               No
             allocation                              No               No
             asis                                    No               No

nfs          server                                  No               Yes
             serverpath                              Yes              Yes

lvm          device                                  No               Yes

ext          device                                  No               Yes

equallogic   target                                  No               Yes
             username                                No               Yes
             password                                No               Yes
             chapuser                                No               No
             chappassword                            No               No
             storagepool                             No [b]           Yes

cslg         target                                  No               Yes
             storageSystemId                         Yes              Yes
             storagePoolId                           Yes              Yes
             username                                No               No [c]
             password                                No               No [c]
             cslport                                 No               No [c]
             chapuser                                No               No [c]
             chappassword                            No               No [c]
             provision-type                          Yes              No
             protocol                                Yes              No
             provision-options                       Yes              No
             raid-type                               Yes              No

[a] Aggregate probing is only possible at sr-create time. It needs to be done there so that the aggregate can be
specified at the point that the SR is created.

[b] Storage pool probing is only possible at sr-create time. It needs to be done there so that the storage pool can be
specified at the point that the SR is created.

[c] If the username, password, or port configuration of the StorageLink service are changed from the default value
then the appropriate parameter and value must be specified.
3.2.6. Storage Multipathing

Dynamic multipathing support is available for Fibre Channel and iSCSI storage backends. By default, it uses round-robin
mode load balancing, so both routes have active traffic on them during normal operation. You can enable multipathing
in XenCenter or on the xe CLI.

Caution
Before attempting to enable multipathing, verify that multiple targets are available on your storage server. For example,
an iSCSI storage backend queried for sendtargets on a given portal should return multiple targets, as in the
following example:

iscsiadm -m discovery --type sendtargets --portal 192.168.0.161
192.168.0.161:3260,1 iqn.strawberry:litchie
192.168.0.204:3260,2 iqn.strawberry:litchie

To enable storage multipathing using the xe CLI

1. Unplug all PBDs on the host:

xe pbd-unplug uuid=<pbd_uuid>

2. Set the host's other-config:multipathing parameter:

xe host-param-set other-config:multipathing=true uuid=<host_uuid>

3. Set the host's other-config:multipathhandle parameter to dmp:

xe host-param-set other-config:multipathhandle=dmp uuid=<host_uuid>

4. If there are existing SRs on the host running in single path mode but that have multiple paths:

Migrate or suspend any running guests with virtual disks in the affected SRs.

Unplug and re-plug the PBD of any affected SRs to reconnect them using multipathing:

xe pbd-plug uuid=<pbd_uuid>

To disable multipathing, first unplug your PBDs, set the host other-config:multipathing parameter
to false and then re-plug your PBDs as described above. Do not modify the
other-config:multipathhandle parameter as this will be done automatically.
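
A minimal sketch of the disable sequence (placeholder UUIDs; repeat the unplug/plug pair for each PBD on the host):

xe pbd-unplug uuid=<pbd_uuid>
xe host-param-set other-config:multipathing=false uuid=<host_uuid>
xe pbd-plug uuid=<pbd_uuid>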
Multipath support in XenServer is based on the device-mapper multipathd components. Activation and
deactivation of multipath nodes is handled automatically by the Storage Manager API. Unlike the standard
dm-multipath tools in Linux, device mapper nodes are not automatically created for all LUNs on the system;
new device mapper nodes are provisioned only when LUNs are actively used by the storage management layer. It is
therefore unnecessary to use any of the dm-multipath CLI tools to query or refresh DM table nodes in XenServer.
Should it be necessary to query the status of device-mapper tables manually, or list active device mapper multipath nodes
on the system, use the mpathutil utility:

mpathutil list

mpathutil status

Note
Due to incompatibilities with the integrated multipath management architecture, the standard dm-multipath CLI
utility should not be used with XenServer. Please use the mpathutil CLI tool for querying the status of nodes on the
host.

Note
Multipath support in EqualLogic arrays does not encompass Storage IO multipathing in the traditional sense of the term.
Multipathing must be handled at the network/NIC bond level. Refer to the EqualLogic documentation for information
about configuring network failover for EqualLogic SRs/LVMoISCSI SRs.

3.3. Storage Repository Types

The storage repository types supported in XenServer are provided by plug-ins in the control domain; these can be
examined, and plug-ins supported by third parties can be added, in the /opt/xensource/sm directory. Modification
of these files is unsupported, but visibility of these files may be valuable to developers and power users. New storage
manager plug-ins placed in this directory are automatically detected by XenServer. Use the sm-list command
(see Section 8.4.13, Storage Manager commands) to list the available SR types.

New storage repositories are created using the New Storage wizard in XenCenter. The wizard guides you through the
various probing and configuration steps. Alternatively, use the sr-create command. This command creates a new SR
on the storage substrate (potentially destroying any existing data), and creates the SR API object and a corresponding
PBD record, enabling VMs to use the storage. On successful creation of the SR, the PBD is automatically plugged. If the
SR shared=true flag is set, a PBD record is created and plugged for every XenServer host in the resource pool.

All XenServer SR types support VDI resize, fast cloning and snapshot. SRs based on the LVM SR type (local, iSCSI, or
HBA) provide thin provisioning for snapshot and hidden parent nodes. The other SR types support full thin provisioning,
including for virtual disks that are active.

Note
Automatic LVM metadata archiving is disabled by default. This does not prevent metadata recovery for LVM groups.

Warning
When VHD VDIs are not attached, for example in the case of a VDI snapshot, they are stored by default thinly
provisioned. Because of this it is imperative to ensure that there is sufficient disk space available for the VDI to become
thickly provisioned when attempting to attach it. VDI clones, however, are thickly provisioned.
The maximum supported VDI sizes are:

Storage type      Maximum VDI size

EXT3              2TB
LVM               2TB
NetApp            2TB
EqualLogic        15TB
ONTAP (NetApp)    12TB

3.3.1. Local LVM

The Local LVM type presents disks within a locally-attached Volume Group.

By default, XenServer uses the local disk on the physical host on which it is installed. The Linux Logical Volume Manager
(LVM) is used to manage VM storage. A VDI is implemented in VHD format in an LVM logical volume of the specified
size.

XenServer versions prior to 5.5.0 did not use the VHD format and will remain in legacy mode. See Section 3.2.2,
Upgrading LVM storage from XenServer 5.0 or earlier for information about upgrading a storage repository to the new
format.

3.3.1.1. Creating a local LVM SR (lvm)

Device-config parameters for lvm SRs are:

Parameter Name    Description                                        Required?

device            device name on the local host to use for the SR    Yes

To create a local lvm SR on /dev/sdb use the following command:

xe sr-create host-uuid=<valid_uuid> content-type=user \
    name-label=<"Example Local LVM SR"> shared=false \
    device-config:device=/dev/sdb type=lvm

3.3.2. Local EXT3 VHD

The Local EXT3 VHD type represents disks as VHD files stored on a local path.

Local disks can also be configured with a local EXT SR to serve VDIs stored in the VHD format. Local disk EXT SRs must
be configured using the XenServer CLI.

By definition, local disks are not shared across pools of XenServer hosts. As a consequence, VMs whose VDIs are stored in
SRs on local disks are not agile -- they cannot be migrated between XenServer hosts in a resource pool.

3.3.2.1. Creating a local EXT3 SR (ext)

Device-config parameters for ext SRs:

Parameter Name    Description                                        Required?

device            device name on the local host to use for the SR    Yes

To create a local ext SR on /dev/sdb use the following command:

xe sr-create host-uuid=<valid_uuid> content-type=user \
    name-label=<"Example Local EXT3 SR"> shared=false \
    device-config:device=/dev/sdb type=ext

3.3.3. udev

The udev type represents devices plugged in using the udev device manager as VDIs.

XenServer has two SRs of type udev that represent removable storage. One is for the CD or DVD disk in the physical CD
or DVD-ROM drive of the XenServer host. The other is for a USB device plugged into a USB port of the XenServer host.
VDIs that represent the media come and go as disks or USB sticks are inserted and removed.

3.3.4. ISO

The ISO type handles CD images stored as files in ISO format. This SR type is useful for creating shared ISO libraries.

3.3.5. EqualLogic

The EqualLogic SR type maps LUNs to VDIs on an EqualLogic array group, allowing for the use of fast snapshot and clone
features on the array.

If you have access to an EqualLogic filer, you can configure a custom EqualLogic storage repository for VM storage on
your XenServer deployment. This allows the use of the advanced features of this filer type. Virtual disks are stored on the
filer using one LUN per virtual disk. Using this storage type will enable the thin provisioning, snapshot, and fast clone
features of this filer.

Consider your storage requirements when deciding whether to use the specialized SR plugin or the generic
LVM/iSCSI storage backend. By using the specialized plugin, XenServer will communicate with the filer to provision
storage. Some arrays have a limitation of seven concurrent connections, which may limit the throughput of control
operations. Using the plugin will, however, allow you to make use of the advanced array features and will make backup
and snapshot operations easier.

Warning
There are two types of administration accounts that can successfully access the EqualLogic SM plugin:

A group administration account which has access to and can manage the entire group and all
storage pools.

A pool administrator account that can manage only the objects (SR and VDI snapshots) that are in
the pool or pools assigned to the account.

3.3.5.1. Creating a shared EqualLogic SR

Device-config parameters for EqualLogic SRs:

Parameter Name            Description                                                             Optional?

target                    the IP address or hostname of the EqualLogic array that hosts the SR    no
username                  the login username used to manage the LUNs on the array                 no
password                  the login password used to manage the LUNs on the array                 no
storagepool               the storage pool name                                                   no
chapuser                  the username to be used for CHAP authentication                         yes
chappassword              the password to be used for CHAP authentication                         yes
allocation                specifies whether to use thick or thin provisioning. Default is         yes
                          thick. Thin provisioning reserves a minimum of 10% of volume space.
snap-reserve-percentage   sets the amount of space, as a percentage of volume reserve, to         yes
                          allocate to snapshots. Default is 100%.
snap-depletion            sets the action to take when snapshot reserve space is exceeded.        yes
                          volume-offline sets the volume and all its snapshots offline; this
                          is the default action. The delete-oldest action deletes the oldest
                          snapshot until enough space is available for creating the new
                          snapshot.

Use the sr-create command to create an EqualLogic SR. For example:

xe sr-create host-uuid=<valid_uuid> content-type=user \
    name-label=<"Example shared EqualLogic SR"> \
    shared=true device-config:target=<target_ip> \
    device-config:username=<admin_username> \
    device-config:password=<admin_password> \
    device-config:storagepool=<my_storagepool> \
    device-config:chapuser=<chapusername> \
    device-config:chappassword=<chapuserpassword> \
    device-config:allocation=<thick> \
    type=equal

3.3.6. NetApp

The NetApp type maps LUNs to VDIs on a NetApp server, enabling the use of fast snapshot and clone features on the filer.

Note
NetApp and EqualLogic SRs require a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for
XenServer and to find out how to upgrade, visit the Citrix website.

If you have access to Network Appliance (NetApp) storage with sufficient disk space, running a version of Data ONTAP
7G (version 7.0 or greater), you can configure a custom NetApp storage repository for VM storage on your XenServer
deployment. The XenServer driver uses the ZAPI interface to the storage to create a group of FlexVols that correspond
to an SR. VDIs are created as virtual LUNs on the storage, and attached to XenServer hosts using an iSCSI data path.
There is a direct mapping between a VDI and a raw LUN that does not require any additional volume metadata. The
NetApp SR is a managed volume and the VDIs are the LUNs within the volume. VM cloning uses the snapshotting and
cloning capabilities of the storage for data efficiency and performance and to ensure compatibility with existing ONTAP
management tools.

As with the iSCSI-based SR type, the NetApp driver also uses the built-in software initiator and its assigned host IQN, which
can be modified by changing the value shown on the General tab when the storage repository is selected in XenCenter.

The easiest way to create NetApp SRs is to use XenCenter. See the XenCenter help for details. See Section 3.3.6.1,
Creating a shared NetApp SR over iSCSI for an example of how to create them using the xe CLI.

FlexVols

NetApp uses FlexVols as the basic unit of manageable data. There are limitations that constrain the design of NetApp-based
SRs. These are:

maximum number of FlexVols per filer
maximum number of LUNs per network port
maximum number of snapshots per FlexVol

Precise system limits vary per filer type, however as a general guide, a FlexVol may contain up to 200 LUNs, and
provides up to 255 snapshots. Because there is a one-to-one mapping of LUNs to VDIs, and because often a VM will have
more than one VDI, the resource limitations of a single FlexVol can easily be reached. Also, the act of taking a snapshot
includes snapshotting all the LUNs within a FlexVol, and the VM clone operation indirectly relies on snapshots in the
background as well as the VDI snapshot operation for backup purposes.

There are two constraints to consider when mapping the virtual storage objects of the XenServer host to the physical
storage. To maintain space efficiency it makes sense to limit the number of LUNs per FlexVol, yet at the other extreme, to
avoid resource limitations a single LUN per FlexVol provides the most flexibility. However, because there is a vendor-imposed
limit of 200 or 500 FlexVols per filer (depending on the NetApp model), this creates a limit of 200 or 500 VDIs
per filer, and it is therefore important to select a suitable number of FlexVols taking these parameters into account.

Given these resource constraints, the mapping of virtual storage objects to the Ontap storage system has been designed in
the following manner. LUNs are distributed evenly across FlexVols, with the expectation of using VM UUIDs to
opportunistically group LUNs attached to the same VM into the same FlexVol. This is a reasonable usage model that
allows a snapshot of all the VDIs in a VM at one time, maximizing the efficiency of the snapshot operation.

An optional parameter you can set is the number of FlexVols assigned to the SR. You can use between 1 and 32 FlexVols;
the default is 8. The trade-off in the number of FlexVols to the SR is that, for a greater number of FlexVols, the snapshot
and clone operations become more efficient, because there are fewer VMs backed off the same FlexVol. The
disadvantage is that more FlexVol resources are used for a single SR, where there is a typical system-wide limitation of 200
for some smaller filers.
Aggregates

When creating a NetApp driver-based SR, you select an appropriate aggregate. The driver can be probed for non-traditional
type aggregates, that is, newer-style aggregates that support FlexVols, and lists all aggregates available and the
unused disk space on each.

Note
Aggregate probing is only possible at sr-create time so that the aggregate can be specified at the point that the SR is
created, but it is not probed by the sr-probe command.

Citrix strongly recommends that you configure an aggregate exclusively for use by XenServer storage, because space
guarantees and allocation cannot be correctly managed if other applications are sharing the resource.

Thick or thin provisioning

When creating NetApp storage, you can also choose the type of space management used. By default, allocated space is
thickly provisioned to ensure that VMs never run out of disk space and that all virtual allocation guarantees are fully
enforced on the filer. Selecting thick provisioning ensures that whenever a VDI (LUN) is allocated on the filer, sufficient
space is reserved to guarantee that it will never run out of space and consequently experience failed writes to disk. Due to
the nature of the Ontap FlexVol space provisioning algorithms the best practice guidelines for the filer require that at least
twice the LUN space is reserved to account for background snapshot data collection and to ensure that writes to disk are
never blocked. In addition to the double disk space guarantee, Ontap also requires some additional space reservation for
management of unique blocks across snapshots. The guideline on this amount is 20% above the reserved space. The
space guarantees afforded by thick provisioning will therefore reserve up to 2.4 times the requested virtual disk space.

The alternative allocation strategy is thin provisioning, which allows the administrator to present more storage space to the
VMs connecting to the SR than is actually available on the SR. There are no space guarantees, and allocation of a LUN
does not claim any data blocks in the FlexVol until the VM writes data. This might be appropriate for development and
test environments where you might find it convenient to over-provision virtual disk space on the SR in the anticipation
that VMs might be created and destroyed frequently without ever utilizing the full virtual allocated disk.

Warning
If you are using thin provisioning in production environments, take appropriate measures to ensure that you never run out
of storage space. VMs attached to storage that is full will fail to write to disk, and in some cases may fail to read from disk,
possibly rendering the VM unusable.
FAS Deduplication

FAS Deduplication is a NetApp technology for reclaiming redundant disk space. Newly-stored data objects are divided
into small blocks, each block containing a digital signature, which is compared to all other signatures in the data volume.
If an exact block match exists, the duplicate block is discarded and the disk space reclaimed. FAS Deduplication can be
enabled on thin provisioned NetApp-based SRs and operates according to the default filer FAS Deduplication
parameters, typically every 24 hours. It must be enabled at the point the SR is created and any custom FAS
Deduplication configuration must be managed directly on the filer.

Access Control

Because FlexVol operations such as volume creation and volume snapshotting require administrator privileges on the filer
itself, Citrix recommends that the XenServer host is provided with suitable administrator username and password
credentials at configuration time. In situations where the XenServer host does not have full administrator rights to the filer,
the filer administrator could perform an out-of-band preparation and provisioning of the filer and then introduce the SR
to the XenServer host using XenCenter or the sr-introduce xe CLI command. Note, however, that operations such as
VM cloning or snapshot generation will fail in this situation due to insufficient access privileges.

Licenses

You need to have an iSCSI license on the NetApp filer to use this storage repository type; for the generic plugins you
need either an iSCSI or NFS license depending on the SR type being used.

Further information

For more information about NetApp technology, see the following links:

General information on NetApp products
Data ONTAP
FlexVol
FlexClone
RAID-DP
Snapshot
FilerView
3.3.6.1. Creating a shared NetApp SR over iSCSI

Device-config parameters for netapp SRs:

Parameter Name    Description                                                           Optional?

target            the IP address or hostname of the NetApp server that hosts the SR     no
port              the port to use for connecting to the NetApp server that hosts the    yes
                  SR. Default is port 80.
usehttps          specifies whether to use a secure TLS-based connection to the         yes
                  NetApp server that hosts the SR [true|false]. Default is false.
username          the login username used to manage the LUNs on the filer               no
password          the login password used to manage the LUNs on the filer               no
aggregate         the aggregate name on which the FlexVol is created                    Required for sr_create
FlexVols          the number of FlexVols to allocate to each SR                         yes
chapuser          the username for CHAP authentication                                  yes
chappassword      the password for CHAP authentication                                  yes
allocation        specifies whether to provision LUNs using thick or thin               yes
                  provisioning. Default is thick.
asis              specifies whether to use FAS Deduplication if available. Default      yes
                  is false.
Setting the SR other-config:multiplier parameter to a valid value adjusts the default multiplier attribute. By
default XenServer allocates 2.4 times the requested space to account for snapshot and metadata overhead associated
with each LUN. To save disk space, you can set the multiplier to a value >= 1. Setting the multiplier should only be done
with extreme care by system administrators who understand the space allocation constraints of the NetApp filer. If you try
to set the amount to less than 1, for example in an attempt to pre-allocate very little space for the LUN, the attempt will
most likely fail.

Setting the SR other-config:enforce_allocation parameter to true resizes the FlexVols to precisely the
amount specified by either the multiplier value above, or the default 2.4 value.

Note
This works on new VDI creation in the selected FlexVol, or on all FlexVols during an SR scan, and overrides any manual
size adjustments made by the administrator to the SR FlexVols.
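
As an illustration only (the SR UUID and values are placeholders; observe the caveats above before changing these keys),
the parameters described above are set with sr-param-set:

xe sr-param-set uuid=<netapp_sr_uuid> other-config:multiplier=2.0
xe sr-param-set uuid=<netapp_sr_uuid> other-config:enforce_allocation=true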
To create a NetApp SR, use the following command:

xe sr-create host-uuid=<valid_uuid> content-type=user \
    name-label=<"Example shared NetApp SR"> shared=true \
    device-config:target=<192.168.1.10> \
    device-config:username=<admin_username> \
    device-config:password=<admin_password> \
    type=netapp
3.3.6.2. Managing VDIs in a NetApp SR

Due to the complex nature of mapping VM storage objects onto NetApp storage objects such as LUNs, FlexVols and disk
aggregates, the plugin driver makes some general assumptions about how storage objects should be organized. The
default number of FlexVols that are managed by an SR instance is 8, named XenStorage_<SR_UUID>_FV<#> where #
is a value between 0 and the total number of FlexVols assigned. This means that VDIs (LUNs) are evenly distributed across
any one of the FlexVols at the point that the VDI is instantiated. The only exception to this rule is for groups of VM disks
which are opportunistically assigned to the same FlexVol to assist with VM cloning, and when VDIs are created manually
but passed a vmhint flag that informs the backend of the FlexVol to which the VDI should be assigned. The vmhint may
be a random string such as a uuid that is re-issued for all subsequent VDI creation operations to ensure grouping in the
same FlexVol, or it can be a simple FlexVol number to correspond to the FlexVol naming convention applied on the
filer. Using either of the following two commands, a VDI created manually using the CLI can be assigned to a specific
FlexVol:

xe vdi-create uuid=<valid_vdi_uuid> sr-uuid=<valid_sr_uuid> \
    sm-config:vmhint=<valid_vm_uuid>

xe vdi-create uuid=<valid_vdi_uuid> sr-uuid=<valid_sr_uuid> \
    sm-config:vmhint=<valid_flexvol_number>
3.3.6.3. Taking VDI snapshots with a NetApp SR

Cloning a VDI entails generating a snapshot of the FlexVol and then creating a LUN clone backed off the snapshot.
When generating a VM snapshot you must snapshot each of the VM's disks in sequence. Because all the disks are expected
to be located in the same FlexVol, and the FlexVol snapshot operates on all LUNs in the same FlexVol, it makes sense to
re-use an existing snapshot for all subsequent LUN clones. By default, if no snapshot hint is passed into the backend driver it
will generate a random ID with which to name the FlexVol snapshot. There is a CLI override for this value, passed in as
an epochhint. The first time the epochhint value is received, the backend generates a new snapshot based on the
cookie name. Any subsequent snapshot requests with the same epochhint value will be backed off the existing
snapshot:

xe vdi-snapshot uuid=<valid_vdi_uuid> driver-params:epochhint=<cookie>

During NetApp SR provisioning, additional disk space is reserved for snapshots. If you plan not to use the snapshotting
functionality, you might want to free up this reserved space. To do so, you can reduce the value of the
other-config:multiplier parameter. By default the value of the multiplier is 2.4, so the amount of space reserved is 2.4
times the amount of space that would be needed for the FlexVols themselves.

3.3.7. Software iSCSI Support

XenServer provides support for shared SRs on iSCSI LUNs. iSCSI is supported using the open-iSCSI software iSCSI initiator
or by using a supported iSCSI Host Bus Adapter (HBA). The steps for using iSCSI HBAs are identical to those for Fibre
Channel HBAs, both of which are described in Section 3.3.9.2, Creating a shared LVM over Fibre Channel / iSCSI HBA
or SAS SR (lvmohba).

Shared iSCSI support using the software iSCSI initiator is implemented based on the Linux Volume Manager (LVM) and
provides the same performance benefits provided by LVM VDIs in the local disk case. Shared iSCSI SRs using the
software-based host initiator are capable of supporting VM agility using XenMotion: VMs can be started on any XenServer host in a
resource pool and migrated between them with no noticeable downtime.

iSCSI SRs use the entire LUN specified at creation time and may not span more than one LUN. CHAP support is provided
for client authentication, during both the data path initialization and the LUN discovery phases.

3.3.7.1. XenServer Host iSCSI configuration

All iSCSI initiators and targets must have a unique name to ensure they can be uniquely identified on the network. An
initiator has an iSCSI initiator address, and a target has an iSCSI target address. Collectively these are called iSCSI
Qualified Names, or IQNs.

XenServer hosts support a single iSCSI initiator which is automatically created and configured with a random IQN during
host installation. The single initiator can be used to connect to multiple iSCSI targets concurrently.

iSCSI targets commonly provide access control using iSCSI initiator IQN lists, so all iSCSI targets/LUNs to be accessed by a
XenServer host must be configured to allow access by the host's initiator IQN. Similarly, targets/LUNs to be used as shared
iSCSI SRs must be configured to allow access by all host IQNs in the resource pool.

Note
iSCSI targets that do not provide access control will typically default to restricting LUN access to a single initiator to ensure
data integrity. If an iSCSI LUN is intended for use as a shared SR across multiple XenServer hosts in a resource pool, ensure
that multi-initiator access is enabled for the specified LUN.

The XenServer host IQN value can be adjusted using XenCenter, or using the CLI with the following command when
using the iSCSI software initiator:

xe host-param-set uuid=<valid_host_id> other-config:iscsi_iqn=<new_initiator_iqn>

Warning
It is imperative that every iSCSI target and initiator have a unique IQN. If a non-unique IQN identifier is used, data
corruption and/or denial of LUN access can occur.

Warning
Do not change the XenServer host IQN with iSCSI SRs attached. Doing so can result in failures connecting to new targets
or existing SRs.

3.3.8. Managing Hardware Host Bus Adapters (HBAs)

This section covers various operations required to manage SAS, Fibre Channel and iSCSI HBAs.

3.3.8.1. Sample QLogic iSCSI HBA setup

For full details on configuring QLogic Fibre Channel and iSCSI HBAs please refer to the QLogic website.

Once the HBA is physically installed into the XenServer host, use the following steps to configure the HBA:

1. Set the IP networking configuration for the HBA. This example assumes DHCP and HBA port 0. Specify the
appropriate values if using static IP addressing or a multi-port HBA.

/opt/QLogic_Corporation/SANsurferiCLI/iscli -ipdhcp 0

2. Add a persistent iSCSI target to port 0 of the HBA.

/opt/QLogic_Corporation/SANsurferiCLI/iscli -pa 0 <iscsi_target_ip_address>

3. Use the xe sr-probe command to force a rescan of the HBA controller and display available LUNs.
See Section 3.2.5, Probing an SR and Section 3.3.9.2, Creating a shared LVM over Fibre Channel / iSCSI HBA
or SAS SR (lvmohba) for more details.

3.3.8.2. Removing HBA-based SAS, FC or iSCSI device entries

Note
This step is not required. Citrix recommends that only power users perform this process if it is necessary.

Each HBA-based LUN has a corresponding global device path entry under /dev/disk/by-scsibus in the format
<SCSIid>-<adapter>:<bus>:<target>:<lun> and a standard device path under /dev. To remove the device entries for
LUNs no longer in use as SRs use the following steps:

1. Use sr-forget or sr-destroy as appropriate to remove the SR from the XenServer host database.
See Section 3.4.1, Destroying or forgetting an SR for details.

2. Remove the zoning configuration within the SAN for the desired LUN to the desired host.

3. Use the sr-probe command to determine the ADAPTER, BUS, TARGET, and LUN values corresponding to
the LUN to be removed. See Section 3.2.5, Probing an SR for details.

4. Remove the device entries with the following command:

echo "1" > /sys/class/scsi_device/<adapter>:<bus>:<target>:<lun>/device/delete

Warning
Make absolutely sure you are certain which LUN you are removing. Accidentally removing a LUN required for host
operation, such as the boot or root device, will render the host unusable.

3.3.9. LVM over iSCSI

The LVM over iSCSI type represents disks as Logical Volumes within a Volume Group created on an iSCSI LUN.

3.3.9.1. Creating a shared LVM over iSCSI SR using the software iSCSI initiator (lvmoiscsi)

Device-config parameters for lvmoiscsi SRs:

Parameter Name       Description                                                     Required?

target               the IP address or hostname of the iSCSI filer that hosts the    yes
                     SR
targetIQN            the IQN target address of the iSCSI filer that hosts the SR     yes
SCSIid               the SCSI bus ID of the destination LUN                          yes
chapuser             the username to be used for CHAP authentication                 no
chappassword         the password to be used for CHAP authentication                 no
port                 the network port number on which to query the target            no
usediscoverynumber   the specific iSCSI record index to use                          no

To create a shared lvmoiscsi SR on a specific LUN of an iSCSI target use the following command:

xe sr-create host-uuid=<valid_uuid> content-type=user \
    name-label=<"Example shared LVM over iSCSI SR"> shared=true \
    device-config:target=<target_ip> device-config:targetIQN=<target_iqn> \
    device-config:SCSIid=<scsi_id> \
    type=lvmoiscsi
3.3.9.2. Creating a shared LVM over Fibre Channel / iSCSI HBA or SAS SR (lvmohba)

SRs of type lvmohba can be created and managed using the xe CLI or XenCenter.

Device-config parameters for lvmohba SRs:

Parameter name    Description      Required?

SCSIid            Device SCSI ID   Yes

To create a shared lvmohba SR, perform the following steps on each host in the pool:

1. Zone in one or more LUNs to each XenServer host in the pool. This process is highly specific to the SAN
equipment in use. Please refer to your SAN documentation for details.

2. If necessary, use the HBA CLI included in the XenServer host to configure the HBA:

Emulex: /usr/sbin/hbanyware

QLogic FC: /opt/QLogic_Corporation/SANsurferCLI

QLogic iSCSI: /opt/QLogic_Corporation/SANsurferiCLI

See Section 3.3.8, Managing Hardware Host Bus Adapters (HBAs) for an example of QLogic iSCSI HBA
configuration. For more information on Fibre Channel and iSCSI HBAs please refer to
the Emulex and QLogic websites.

3. Use the sr-probe command to determine the global device path of the HBA LUN. sr-probe forces a rescan of
HBAs installed in the system to detect any new LUNs that have been zoned to the host and returns a list of
properties for each LUN found. Specify the host-uuid parameter to ensure the probe occurs on the desired
host. The global device path returned as the <path> property will be common across all hosts in the pool and
therefore must be used as the value for the device-config:device parameter when creating the SR. If
multiple LUNs are present use the vendor, LUN size, LUN serial number, or the SCSI ID as included in
the <path> property to identify the desired LUN.

xe sr-probe type=lvmohba \
    host-uuid=1212c7b3-f333-4a8d-a6fb-80c5b79b5b31

Error code: SR_BACKEND_FAILURE_90
Error parameters: , The request is missing the device parameter, \
<?xml version="1.0" ?>
<Devlist>
    <BlockDevice>
        <path>
            /dev/disk/by-id/scsi-360a9800068666949673446387665336f
        </path>
        <vendor>
            HITACHI
        </vendor>
        <serial>
            730157980002
        </serial>
        <size>
            80530636800
        </size>
        <adapter>
            4
        </adapter>
        <channel>
            0
        </channel>
        <id>
            4
        </id>
        <lun>
            2
        </lun>
        <hba>
            qla2xxx
        </hba>
    </BlockDevice>
    <Adapter>
        <host>
            Host4
        </host>
        <name>
            qla2xxx
        </name>
        <manufacturer>
            QLogic HBA Driver
        </manufacturer>
        <id>
            4
        </id>
    </Adapter>
</Devlist>

4. On the master host of the pool create the SR, specifying the global device path returned in
the <path> property from sr-probe. PBDs will be created and plugged for each host in the pool automatically.

xe sr-create host-uuid=<valid_uuid> \
    content-type=user \
    name-label=<"Example shared LVM over HBA SR"> shared=true \
    device-config:SCSIid=<device_scsi_id> type=lvmohba

Note
You can use the XenCenter Repair Storage Repository function to retry the PBD creation and plugging
portions of the sr-create operation. This can be valuable in cases where the LUN zoning was incorrect for one or more
hosts in a pool when the SR was created. Correct the zoning for the affected hosts and use the Repair Storage
Repository function instead of removing and re-creating the SR.

3.3.10. NFS VHD

The NFS VHD type stores disks as VHD files on a remote NFS filesystem.

NFS is a ubiquitous form of storage infrastructure that is available in many environments. XenServer allows existing NFS
servers that support NFS V3 over TCP/IP to be used immediately as a storage repository for virtual disks (VDIs). VDIs are
stored in the Microsoft VHD format only. Moreover, as NFS SRs can be shared, VDIs stored in a shared SR allow VMs to be
started on any XenServer hosts in a resource pool and be migrated between them using XenMotion with no noticeable
downtime.

Creating an NFS SR requires the hostname or IP address of the NFS server. The sr-probe command provides a list of
valid destination paths exported by the server on which the SR can be created. The NFS server must be configured to
export the specified path to all XenServer hosts in the pool, or the creation of the SR and the plugging of the PBD record
will fail.
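
For example, a probe of the following form (the IP address is a placeholder) returns an XML list of the paths exported by
the server, from which the serverpath value can be chosen:

xe sr-probe type=nfs device-config:server=<192.168.1.10>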
As mentioned at the beginning of this chapter, VDIs stored on NFS are sparse. The image file is allocated as the VM writes
data into the disk. This has the considerable benefit that VM image files take up only as much space on the NFS storage as
is required. If a 100GB VDI is allocated for a new VM and an OS is installed, the VDI file will only reflect the size of the
OS data that has been written to the disk rather than the entire 100GB.

VHD files may also be chained, allowing two VDIs to share common data. In cases where an NFS-based VM is cloned, the
resulting VMs will share the common on-disk data at the time of cloning. Each will proceed to make its own changes in an
isolated copy-on-write version of the VDI. This feature allows NFS-based VMs to be quickly cloned from templates,
facilitating very fast provisioning and deployment of new VMs.

Note
The maximum supported length of VHD chains is 30.

As VHD-based images require extra metadata to support sparseness and chaining, the format is not as high-performance as
LVM-based storage. In cases where performance really matters, it is well worth forcibly allocating the sparse regions of an
image file. This will improve performance at the cost of consuming additional disk space.
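
One possible way to force allocation, assuming a Linux guest whose empty data disk appears as /dev/xvdb and assuming
that the VHD backend allocates blocks for any written region, is to fill the disk once before putting it into service; treat this
only as a sketch and verify the effect on space usage in your own environment:

dd if=/dev/zero of=/dev/xvdb bs=1M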
XenServer's NFS and VHD implementations assume that they have full control over the SR directory on the NFS server.
Administrators should not modify the contents of the SR directory, as this can risk corrupting the contents of VDIs.

XenServer has been tuned for enterprise-class storage that uses non-volatile RAM to provide fast acknowledgments of write
requests while maintaining a high degree of data protection from failure. XenServer has been tested extensively against
Network Appliance FAS270c and FAS3020c storage, using Data OnTap 7.2.2.

In situations where XenServer is used with lower-end storage, it will cautiously wait for all writes to be acknowledged
before passing acknowledgments on to guest VMs. This will incur a noticeable performance cost, and might be remedied
by setting the storage to present the SR mount point as an asynchronous mode export. Asynchronous exports
acknowledge writes that are not actually on disk, and so administrators should consider the risks of failure carefully in these
situations.

The XenServer NFS implementation uses TCP by default. If your situation allows, you can configure the implementation
to use UDP in situations where there may be a performance benefit. To do this, specify the device-config
parameter useUDP=true at SR creation time.
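
For example, an NFS SR that uses UDP could be created as follows (a sketch; the server, path and host UUID are
placeholders, and the remaining parameters match the nfs example later in this section):

xe sr-create host-uuid=<host_uuid> content-type=user \
    name-label=<"Example shared NFS SR over UDP"> shared=true \
    device-config:server=<192.168.1.10> device-config:serverpath=</export1> \
    device-config:useUDP=true type=nfs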

Warning

Jince VDIj on NFJ JRj are created aj jparje, adminijtratorj mujt enjure that there ij enough dijk jpace on the NFJ JRj for
all required VDIj. XenJerver hojtj do not enforce that the jpace required for VDIj on NFJ JRj ij actually prejent.
3.3.10.1. Creating a jhared NFJ JR (nfj)
Device-config parameterj for nfj JRj:

Parameter Name

Dejcription

Required?

jerver

IP addrejj or hojtname of the NFJ jerver

Yej

jerverpath

path, including the NFJ mount point, to the NFJ jerver that hojtj
the JR

Yej

To create a jhared NFJ JR on 192.168.1.10:/export1 uje the following command.

xejrcreatehojtuuid=<hojt_uuid>contenttype=ujer\
namelabel=<"ExamplejharedNFJJR">jhared=true\
deviceconfig:jerver=<192.168.1.10>deviceconfig:jerverpath=</export1>
type=nfj

3.3.11. LVM over hardware HBA

The LVM over hardware HBA type represents disks as VHDs on Logical Volumes within a Volume Group created on an
HBA LUN providing, for example, hardware-based iSCSI or FC support.

XenServer hosts support Fibre Channel (FC) storage area networks (SANs) through Emulex or QLogic host bus adapters
(HBAs). All FC configuration required to expose a FC LUN to the host must be completed manually, including storage
devices, network devices, and the HBA within the XenServer host. Once all FC configuration is complete the HBA will
expose a SCSI device backed by the FC LUN to the host. The SCSI device can then be used to access the FC LUN as if it
were a locally attached SCSI device.

Use the sr-probe command to list the LUN-backed SCSI devices present on the host. This command forces a scan for
new LUN-backed SCSI devices. The path value returned by sr-probe for a LUN-backed SCSI device is consistent
across all hosts with access to the LUN, and therefore must be used when creating shared SRs accessible by all hosts in a
resource pool.

The same features apply to QLogic iSCSI HBAs.

See Section 3.2.1, Creating Storage Repositories for details on creating shared HBA-based FC and iSCSI SRs.

Note
XenServer support for Fibre Channel does not support direct mapping of a LUN to a VM. HBA-based LUNs must be
mapped to the host and specified for use in an SR. VDIs within the SR are exposed to VMs as standard block devices.

3.3.12. Citrix StorageLink Gateway (CSLG) SRs

The CSLG storage repository allows use of the Citrix StorageLink service for native access to a range of iSCSI and Fibre
Channel arrays and automated fabric/initiator and array configuration features. Installation and configuration of the
StorageLink service is required; for more information please see the StorageLink documentation.

Note
Running the StorageLink service in a VM within a resource pool to which the StorageLink service is providing storage is
not supported in combination with the XenServer High Availability (HA) features. To use CSLG SRs in combination with
HA, ensure the StorageLink service is running outside the HA-enabled pool.

CSLG SRs can be created using the xe CLI only. After creation CSLG SRs can be viewed and managed using both the
xe CLI and XenCenter.

Because the CSLG SR can be used to access different storage arrays, the exact features available for a given CSLG SR
depend on the capabilities of the array. All CSLG SRs use a LUN-per-VDI model where a new LUN is provisioned for
each virtual disk (VDI).

CSLG SRs can co-exist with other SR types on the same storage array hardware, and multiple CSLG SRs can be defined
within the same resource pool.

The StorageLink service can be configured using the StorageLink Manager or from within the XenServer control domain
using the StorageLink Command Line Interface (CLI). To run the StorageLink CLI use the following command,
where <hostname> is the name or IP address of the machine running the StorageLink service:

/opt/Citrix/StorageLink/bin/csl \
    server=<hostname>[:<port>][,<username>,<password>]

For more information about the StorageLink CLI please see the StorageLink documentation or use
the /opt/Citrix/StorageLink/bin/csl help command.
3.3.12.1. Creating a shared StorageLink SR

SRs of type cslg can only be created by using the xe Command Line Interface (CLI). Once created, CSLG SRs can be
managed using either XenCenter or the xe CLI.

The device-config parameters for CSLG SRs are:

Parameter name      Description                                                        Optional?

target              The server name or IP address of the machine running the           No
                    StorageLink service
storageSystemId     The storage system ID to use for allocating storage                No
storagePoolId       The storage pool ID within the specified storage system to use     No
                    for allocating storage
username            The username to use for connection to the StorageLink service      Yes [a]
password            The password to use for connecting to the StorageLink service      Yes [a]
cslport             The port to use for connecting to the StorageLink service          Yes [a]
chapuser            The username to use for CHAP authentication                        Yes
chappassword        The password to use for CHAP authentication                        Yes
protocol            Specifies the storage protocol to use (fc or iscsi) for            Yes
                    multi-protocol storage systems. If not specified fc is used if
                    available, otherwise iscsi.
provision-type      Specifies whether to use thick or thin provisioning (thick or      Yes
                    thin); default is thick
provision-options   Additional provisioning options: set to dedup to use the           Yes
                    de-duplication features supported by the storage system
raid-type           The level of RAID to use for the SR, as supported by the           Yes
                    storage array

[a] If the username, password, or port configuration of the StorageLink service are changed from the default then
the appropriate parameter and value must be specified.

SRs of type cslg support two additional parameters that can be used with storage arrays that support LUN grouping
features, such as NetApp FlexVols.

sm-config parameters for CSLG SRs:

Parameter name    Description                                                        Optional?

pool-count        Creates the specified number of groups on the array, in which      Yes
                  LUNs provisioned within the SR will be created
physical-size     The total size of the SR in MB. Each pool will be created with     Yes [a]
                  a size equal to physical-size divided by pool-count.

[a] Required when specifying the sm-config:pool-count parameter

Note
When a new NetApp SR is created using StorageLink, by default a single FlexVol is created for the SR that contains all
LUNs created for the SR. To change this behaviour and specify the number of FlexVols to create and the size of each
FlexVol, use the sm-config:pool-size and sm-config:physical-size parameters. sm-config:pool-size specifies the number of
FlexVols. sm-config:physical-size specifies the total size of all FlexVols to be created, so that each FlexVol will be of size
sm-config:physical-size divided by sm-config:pool-size.
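
As an illustration only: the sketch below uses the parameter name pool-count from the table above (the Note above refers
to the same setting as pool-size, so check which key your StorageLink version accepts), and all IDs and sizes are
placeholders:

xe sr-create type=cslg name-label=CSLG_NetApp_1 shared=true \
    device-config:target=<storagelink_host> \
    device-config:storageSystemId=<storage_system_id> \
    device-config:storagePoolId=<storage_pool_id> \
    sm-config:pool-count=8 sm-config:physical-size=204800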
To create a CSLG SR

1. Install the StorageLink service onto a Windows host or virtual machine.

2. Configure the StorageLink service with the appropriate storage adapters and credentials.

3. Use the sr-probe command with the device-config:target parameter to identify the available
storage system IDs:

xe sr-probe type=cslg device-config:target=192.168.128.10

<csl__storageSystemInfoList>
    <csl__storageSystemInfo>
        <friendlyName>50014380013C0240</friendlyName>
        <displayName>HP EVA (50014380013C0240)</displayName>
        <vendor>HP</vendor>
        <model>EVA</model>
        <serialNum>50014380013C0240</serialNum>
        <storageSystemId>HP__EVA__50014380013C0240</storageSystemId>
        <systemCapabilities>
            <capabilities>PROVISIONING</capabilities>
            <capabilities>MAPPING</capabilities>
            <capabilities>MULTIPLE_STORAGE_POOLS</capabilities>
            <capabilities>DIFF_SNAPSHOT</capabilities>
            <capabilities>CLONE</capabilities>
        </systemCapabilities>
        <protocolSupport>
            <capabilities>FC</capabilities>
        </protocolSupport>
        <csl__snapshotMethodInfoList>
            <csl__snapshotMethodInfo>
                <name>50014380013C0240</name>
                <displayName></displayName>
                <maxSnapshots>16</maxSnapshots>
                <supportedNodeTypes>
                    <nodeType>STORAGE_VOLUME</nodeType>
                </supportedNodeTypes>
                <snapshotTypeList>
                </snapshotTypeList>
                <snapshotCapabilities>
                </snapshotCapabilities>
            </csl__snapshotMethodInfo>
            <csl__snapshotMethodInfo>
                <name>50014380013C0240</name>
                <displayName></displayName>
                <maxSnapshots>16</maxSnapshots>
                <supportedNodeTypes>
                    <nodeType>STORAGE_VOLUME</nodeType>
                </supportedNodeTypes>
                <snapshotTypeList>
                    <snapshotType>DIFF_SNAPSHOT</snapshotType>
                </snapshotTypeList>
                <snapshotCapabilities>
                </snapshotCapabilities>
            </csl__snapshotMethodInfo>
            <csl__snapshotMethodInfo>
                <name>50014380013C0240</name>
                <displayName></displayName>
                <maxSnapshots>16</maxSnapshots>
                <supportedNodeTypes>
                    <nodeType>STORAGE_VOLUME</nodeType>
                </supportedNodeTypes>
                <snapshotTypeList>
                    <snapshotType>CLONE</snapshotType>
                </snapshotTypeList>
                <snapshotCapabilities>
                </snapshotCapabilities>
            </csl__snapshotMethodInfo>
        </csl__snapshotMethodInfoList>
    </csl__storageSystemInfo>
</csl__storageSystemInfoList>

You can use grep to filter the sr-probe output to just the storage system IDs:

xe sr-probe type=cslg device-config:target=192.168.128.10 | grep storageSystemId
<storageSystemId>EMC__CLARIION__APM00074902515</storageSystemId>
<storageSystemId>HP__EVA__50014380013C0240</storageSystemId>
<storageSystemId>NETAPP__LUN__0AD4F00A</storageSystemId>

4. Add the desired storage system ID to the sr-probe command to identify the storage pools available within the
specified storage system:

xe sr-probe type=cslg \
    device-config:target=192.168.128.10 \
    device-config:storageSystemId=HP__EVA__50014380013C0240

<?xml version="1.0" encoding="iso-8859-1"?>
<csl__storagePoolInfoList>
    <csl__storagePoolInfo>
        <displayName>Default Disk Group</displayName>
        <friendlyName>Default Disk Group</friendlyName>
        <storagePoolId>00010710B4080560B6AB08000080000000000400</storagePoolId>
        <parentStoragePoolId></parentStoragePoolId>
        <storageSystemId>HP__EVA__50014380013C0240</storageSystemId>
        <sizeInMB>1957099</sizeInMB>
        <freeSpaceInMB>1273067</freeSpaceInMB>
        <isDefault>No</isDefault>
        <status>0</status>
        <provisioningOptions>
            <supportedRaidTypes>
                <raidType>RAID0</raidType>
                <raidType>RAID1</raidType>
                <raidType>RAID5</raidType>
            </supportedRaidTypes>
            <supportedNodeTypes>
                <nodeType>STORAGE_VOLUME</nodeType>
            </supportedNodeTypes>
            <supportedProvisioningTypes>
            </supportedProvisioningTypes>
        </provisioningOptions>
    </csl__storagePoolInfo>
</csl__storagePoolInfoList>

You can use grep to filter the sr-probe output to just the storage pool IDs:

xe sr-probe type=cslg \
    device-config:target=192.168.128.10 \
    device-config:storageSystemId=HP__EVA__50014380013C0240 \
    | grep storagePoolId
<storagePoolId>00010710B4080560B6AB08000080000000000400</storagePoolId>

5. Create the SR, specifying the desired storage system and storage pool IDs:

xe sr-create type=cslg name-label=CSLG_EVA_1 shared=true \
    device-config:target=192.168.128.10 \
    device-config:storageSystemId=HP__EVA__50014380013C0240 \
    device-config:storagePoolId=00010710B4080560B6AB08000080000000000400

3.4. Managing Storage Repositories

This section covers various operations required in the ongoing management of Storage Repositories (SRs).

3.4.1. Destroying or forgetting an SR

You can destroy an SR, which actually deletes the contents of the SR from the physical media. Alternatively you can
forget an SR, which allows you to re-attach the SR, for example, to another XenServer host, without removing any of
the SR contents. In both cases, the PBD of the SR must first be unplugged. Forgetting an SR is the equivalent of the SR
Detach operation within XenCenter.

1. Unplug the PBD to detach the SR from the corresponding XenServer host:

xe pbd-unplug uuid=<pbd_uuid>

2. To destroy the SR, which deletes both the SR and corresponding PBD from the XenServer host database and
deletes the SR contents from the physical media:

xe sr-destroy uuid=<sr_uuid>

3. Or, to forget the SR, which removes the SR and corresponding PBD from the XenServer host database but
leaves the actual SR contents intact on the physical media:

xe sr-forget uuid=<sr_uuid>

Note
It might take some time for the software object corresponding to the SR to be garbage collected.

3.4.2. Introducing an SR

Introducing an SR that has been forgotten requires introducing the SR, creating a PBD, and manually plugging the PBD
on the appropriate XenServer hosts to activate the SR.

The following example introduces an SR of type lvmoiscsi.

1. Probe the existing SR to determine its UUID:

xe sr-probe type=lvmoiscsi device-config:target=<192.168.1.10> \
    device-config:targetIQN=<192.168.1.10:filer1> \
    device-config:SCSIid=<149455400000000000000000002000000b70200000f000000>

2. Introduce the existing SR UUID returned from the sr-probe command. The UUID of the new SR is returned:

xe sr-introduce content-type=user name-label=<"Example Shared LVM over iSCSI SR"> \
    shared=true uuid=<valid_sr_uuid> type=lvmoiscsi

3. Create a PBD to accompany the SR. The UUID of the new PBD is returned:

xe pbd-create type=lvmoiscsi host-uuid=<valid_uuid> sr-uuid=<valid_sr_uuid> \
    device-config:target=<192.168.0.1> \
    device-config:targetIQN=<192.168.1.10:filer1> \
    device-config:SCSIid=<149455400000000000000000002000000b70200000f000000>

4. Plug the PBD to attach the SR:

xe pbd-plug uuid=<pbd_uuid>

5. Verify the status of the PBD plug. If successful the currently-attached property will be true:

xe pbd-list sr-uuid=<sr_uuid>

Note
Steps 3 through 5 must be performed for each host in the resource pool, and can also be performed using the Repair
Storage Repository function in XenCenter.

3.4.3. Resizing an SR

If you have resized the LUN on which an iSCSI or HBA SR is based, use the following procedures to reflect the size change
in XenServer:

1. iSCSI SRs - unplug all PBDs on the host that reference LUNs on the same target. This is required to reset the
   iSCSI connection to the target, which in turn will allow the change in LUN size to be recognized when the PBDs
   are replugged (see the example following the note below).

2. HBA SRs - reboot the host.

Note
In previous versions of XenServer, explicit commands were required to resize the physical volume group of iSCSI and HBA
SRs. These commands are now issued as part of the PBD plug operation and are no longer required.
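
The following is a minimal sketch of re-plugging the PBDs for an iSCSI SR after the LUN has been resized; the UUIDs shown are placeholders and the pbd-list step is simply one way of locating the PBDs concerned.

# Identify the PBDs that attach the resized SR to this host
xe pbd-list sr-uuid=<iscsi_sr_uuid> params=uuid,host-uuid,currently-attached

# Unplug and re-plug each PBD to reset the iSCSI session and pick up the new LUN size
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>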

3.4.4. Converting local Fibre Channel SRs to shared SRs

Use the xe CLI and the XenCenter Repair Storage Repository feature to convert a local FC SR to a shared FC SR:

1. Upgrade all hosts in the resource pool to XenServer 5.5.0.

2. Ensure all hosts in the pool have the SR's LUN zoned appropriately. See Section 3.2.5, Probing an SR for
   details on using the sr-probe command to verify the LUN is present on each host.

3. Convert the SR to shared:

   xe sr-param-set shared=true uuid=<local_fc_sr>

4. Within XenCenter the SR is moved from the host level to the pool level, indicating that it is now shared. The SR
   will be marked with a red exclamation mark to show that it is not currently plugged on all hosts in the pool.

5. Select the SR and then select the Storage > Repair Storage Repository menu option.

6. Click Repair to create and plug a PBD for each host in the pool.

3.4.5. Moving Virtual Disk Images (VDIs) between SRs

The set of VDIs associated with a VM can be copied from one SR to another to accommodate maintenance
requirements or tiered storage configurations. XenCenter provides the ability to copy a VM and all of its VDIs to the same
or a different SR, and a combination of XenCenter and the xe CLI can be used to copy individual VDIs.

3.4.5.1. Copying all of a VM's VDIs to a different SR

The XenCenter Copy VM function creates copies of all VDIs for a selected VM on the same or a different SR. The source
VM and VDIs are not affected by default. To move the VM to the selected SR rather than creating a copy, select
the Remove original VM option in the Copy Virtual Machine dialog box.

1. Shut down the VM.

2. Within XenCenter select the VM and then select the VM > Copy VM menu option.

3. Select the desired target SR.

3.4.5.2. Copying individual VDIs to a different SR

A combination of the xe CLI and XenCenter can be used to copy individual VDIs between SRs.

1. Shut down the VM.

2. Use the xe CLI to identify the UUIDs of the VDIs to be moved. If the VM has a DVD drive its vdi-uuid will be
   listed as <not in database> and can be ignored.

   xe vbd-list vm-uuid=<valid_vm_uuid>

   Note
   The vbd-list command displays both the VBD and VDI UUIDs. Be sure to record the VDI UUIDs rather
   than the VBD UUIDs.

3. In XenCenter select the VM's Storage tab. For each VDI to be moved, select the VDI and click
   the Detach button. This step can also be done using the vbd-destroy command.

   Note
   If you use the vbd-destroy command to detach the VDI UUIDs, be sure to first check if the VBD has
   the parameter other-config:owner set to true. If so, set it to false. Issuing the vbd-destroy
   command with other-config:owner=true will also destroy the associated VDI. See the example after
   this procedure for one way to check and clear this parameter.

4. Use the vdi-copy command to copy each of the VM's VDIs to be moved to the desired SR:

   xe vdi-copy uuid=<valid_vdi_uuid> sr-uuid=<valid_sr_uuid>

5. Within XenCenter select the VM's Storage tab. Click the Attach button and select the VDIs from the new SR.
   This step can also be done using the vbd-create command.

6. To delete the original VDIs, within XenCenter select the Storage tab of the original SR. The original VDIs will
   be listed with an empty value for the VM field and can be deleted with the Delete button.
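
As a minimal sketch of the owner check described in step 3, assuming a placeholder VBD UUID, you might verify and clear the other-config:owner flag before destroying the VBD:

# Show whether this VBD is marked as the owner of its VDI
xe vbd-param-get uuid=<vbd_uuid> param-name=other-config param-key=owner

# If it returns true, clear the flag so that vbd-destroy does not also delete the VDI
xe vbd-param-set uuid=<vbd_uuid> other-config:owner=false

# The VBD can now be destroyed (detached) without affecting the VDI
xe vbd-destroy uuid=<vbd_uuid>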

3.4.6. Adjusting the disk IO scheduler

For general performance, the default disk scheduler noop is applied on all new SR types. The noop scheduler provides
the fairest performance for competing VMs accessing the same device. To apply disk QoS (see Section 3.5, Virtual disk
QoS settings) it is necessary to override the default setting and assign the cfq disk scheduler to the SR. The corresponding
PBD must be unplugged and re-plugged for the scheduler parameter to take effect. The disk scheduler can be adjusted
using the following command:

xe sr-param-set other-config:scheduler=noop|cfq|anticipatory|deadline \
uuid=<valid_sr_uuid>

Note
This will not affect EqualLogic, NetApp or NFS storage.
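
A minimal sketch of switching an SR to the cfq scheduler and re-plugging its PBD so the change takes effect; the UUIDs are placeholders and pbd-list is used here only to locate the PBD.

# Set the cfq scheduler on the SR so that disk QoS settings can be applied
xe sr-param-set other-config:scheduler=cfq uuid=<sr_uuid>

# Find the SR's PBD, then unplug and re-plug it to activate the new scheduler
xe pbd-list sr-uuid=<sr_uuid> params=uuid
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>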

3.5. Virtual disk QoS settings

Virtual disks have an optional I/O priority Quality of Service (QoS) setting. This setting can be applied to existing virtual
disks using the xe CLI as described in this section.
In the shared SR case, where multiple hosts are accessing the same LUN, the QoS setting is applied to VBDs accessing the
LUN from the same host. QoS is not applied across hosts in the pool.
Before configuring any QoS parameters for a VBD, ensure that the disk scheduler for the SR has been set appropriately.
See Section 3.4.6, Adjusting the disk IO scheduler for details on how to adjust the scheduler. The scheduler parameter
must be set to cfq on the SR for which the QoS is desired.

Note
Remember to set the scheduler to cfq on the SR, and to ensure that the PBD has been re-plugged in order for the
scheduler change to take effect.

The first parameter is qos_algorithm_type. This parameter needs to be set to the value ionice, which is the only
type of QoS algorithm supported for virtual disks in this release.
The QoS parameters themselves are set with key/value pairs assigned to the qos_algorithm_params parameter. For
virtual disks, qos_algorithm_params takes a sched key, and depending on the value, also requires a class key.
Possible values of qos_algorithm_params:sched are:

sched=rt or sched=real-time sets the QoS scheduling parameter to real time priority, which requires a
class parameter to set a value
sched=idle sets the QoS scheduling parameter to idle priority, which requires no class parameter to set any
value
sched=<anything> sets the QoS scheduling parameter to best effort priority, which requires a class
parameter to set a value

The possible values for class are:

One of the following keywords: highest, high, normal, low, lowest
an integer between 0 and 7, where 7 is the highest priority and 0 is the lowest, so that, for example, I/O requests
with a priority of 5 will be given priority over I/O requests with a priority of 2.

To enable the disk QoS settings, you also need to set the other-config:scheduler to cfq and replug PBDs for
the storage in question.
For example, the following CLI commands set the virtual disk's VBD to use real time priority 5:

xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_type=ionice
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:sched=rt
xe vbd-param-set uuid=<vbd_uuid> qos_algorithm_params:class=5
xe sr-param-set uuid=<sr_uuid> other-config:scheduler=cfq
xe pbd-plug uuid=<pbd_uuid>
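
To confirm that the settings have been applied, a quick check with vbd-param-list (placeholder UUID) could look like the following; the exact field layout depends on the release.

# Inspect the QoS fields on the VBD; qos_algorithm_type should read ionice
# and qos_algorithm_params should contain sched: rt and class: 5
xe vbd-param-list uuid=<vbd_uuid> | grep qos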

Chapter 4. Networking

Table of Contents

4.1. XenServer networking overview
4.1.1. Network objects
4.1.2. Networks
4.1.3. VLANs
4.1.4. NIC bonds
4.1.5. Initial networking configuration
4.2. Managing networking configuration
4.2.1. Creating networks in a standalone server
4.2.2. Creating networks in resource pools
4.2.3. Creating VLANs
4.2.4. Creating NIC bonds on a standalone host
4.2.5. Creating NIC bonds in resource pools
4.2.6. Configuring a dedicated storage NIC
4.2.7. Controlling Quality of Service (QoS)
4.2.8. Changing networking configuration options
4.2.9. NIC/PIF ordering in resource pools
4.3. Networking Troubleshooting
4.3.1. Diagnosing network corruption
4.3.2. Recovering from a bad network configuration

This chapter discusses how physical network interface cards (NICs) in XenServer hosts are used to enable networking within
Virtual Machines (VMs). XenServer supports up to 6 physical network interfaces (or up to 6 pairs of bonded network
interfaces) per XenServer host and up to 7 virtual network interfaces per VM.

Note
XenServer provides automated configuration and management of NICs using the xe command line interface (CLI).
Unlike previous XenServer versions, the host networking configuration files should not be edited directly in most cases;
where a CLI command is available, do not edit the underlying files.

If you are already familiar with XenServer networking concepts, you may want to skip ahead to one of the following
sections:

For procedures on how to create networks for standalone XenServer hosts, see Section 4.2.1, Creating networks
in a standalone server.
For procedures on how to create networks for XenServer hosts that are configured in a resource pool,
see Section 4.2.2, Creating networks in resource pools.
For procedures on how to create VLANs for XenServer hosts, either standalone or part of a resource pool,
see Section 4.2.3, Creating VLANs.
For procedures on how to create bonds for standalone XenServer hosts, see Section 4.2.4, Creating NIC bonds
on a standalone host.
For procedures on how to create bonds for XenServer hosts that are configured in a resource pool,
see Section 4.2.5, Creating NIC bonds in resource pools.

4.1. XenServer networking overview

This section describes the general concepts of networking in the XenServer environment.

Note
Some networking options have different behaviors when used with standalone XenServer hosts compared to resource
pools. This chapter contains sections on general information that applies to both standalone hosts and pools, followed by
specific information and procedures for each.

4.1.1. Network objects

There are three types of server-side software objects which represent networking entities. These objects are:

A PIF, which represents a physical network interface on a XenServer host. PIF objects have a name and
description, a globally unique UUID, the parameters of the NIC that they represent, and the network and
server they are connected to.
A VIF, which represents a virtual interface on a Virtual Machine. VIF objects have a name and description, a
globally unique UUID, and the network and VM they are connected to.
A network, which is a virtual Ethernet switch on a XenServer host. Network objects have a name and
description, a globally unique UUID, and the collection of VIFs and PIFs connected to them.

Both XenCenter and the xe CLI allow configuration of networking options, control over which NIC is used for
management operations, and creation of advanced networking features such as virtual local area networks (VLANs) and
NIC bonds.
From XenCenter much of the complexity of XenServer networking is hidden. There is no mention of PIFs for XenServer
hosts nor VIFs for VMs.

4.1.2. Networks

Each XenServer host has one or more networks, which are virtual Ethernet switches. Networks without an association to a
PIF are considered internal, and can be used to provide connectivity only between VMs on a given XenServer host, with
no connection to the outside world. Networks with a PIF association are considered external, and provide a bridge
between VIFs and the PIF connected to the network, enabling connectivity to resources available through the PIF's NIC.
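
As a rough way of seeing which networks are internal and which are external on a host, you could list each network together with its attached PIFs; this is a sketch and the parameter names assume the standard network object fields.

# A network with an empty PIF list is internal; one with a PIF entry is external
xe network-list params=uuid,name-label,PIF-uuids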

4.1.3. VLANs

Virtual Local Area Networks (VLANs), as defined by the IEEE 802.1Q standard, allow a single physical network to support
multiple logical networks. XenServer hosts can work with VLANs in multiple ways.

Note
All supported VLAN configurations are equally applicable to pools and standalone hosts, and bonded and non-bonded
configurations.

4.1.3.1. Using VLANs with host management interfaces

Switch ports configured to perform 802.1Q VLAN tagging/untagging, commonly referred to as ports with a native
VLAN or as access mode ports, can be used with XenServer management interfaces to place management traffic on a
desired VLAN. In this case the XenServer host is unaware of any VLAN configuration.
XenServer management interfaces cannot be assigned to a XenServer VLAN via a trunk port.

4.1.3.2. Using VLANs with virtual machines

Switch ports configured as 802.1Q VLAN trunk ports can be used in combination with the XenServer VLAN features to
connect guest virtual network interfaces (VIFs) to specific VLANs. In this case the XenServer host performs the VLAN
tagging/untagging functions for the guest, which is unaware of any VLAN configuration.
XenServer VLANs are represented by additional PIF objects representing VLAN interfaces corresponding to a specified
VLAN tag. XenServer networks can then be connected to the PIF representing the physical NIC to see all traffic on the
NIC, or to a PIF representing a VLAN to see only the traffic with the specified VLAN tag.
For procedures on how to create VLANs for XenServer hosts, either standalone or part of a resource pool,
see Section 4.2.3, Creating VLANs.

4.1.3.3. Using VLANs with dedicated storage NICs

Dedicated storage NICs can be configured to use native VLAN / access mode ports as described above for management
interfaces, or with trunk ports and XenServer VLANs as described above for virtual machines. To configure dedicated
storage NICs, see Section 4.2.6, Configuring a dedicated storage NIC.

4.1.3.4. Combining management interfaces and guest VLANs on a single host NIC

A single switch port can be configured with both trunk and native VLANs, allowing one host NIC to be used for a
management interface (on the native VLAN) and for connecting guest VIFs to specific VLAN IDs.

4.1.4. NIC bonds

NIC bonds can improve XenServer host resiliency by using two physical NICs as if they were one. If one NIC within the
bond fails, the host's network traffic will automatically be routed over the second NIC. NIC bonds work in an active/active
mode, with traffic balanced between the bonded NICs.
XenServer NIC bonds completely subsume the underlying physical devices (PIFs). In order to activate a bond the
underlying PIFs must not be in use, either as the management interface for the host or by running VMs with VIFs attached
to the networks associated with the PIFs.
XenServer NIC bonds are represented by additional PIFs. The bond PIF can then be connected to a XenServer network
to allow VM traffic and host management functions to occur over the bonded NIC. The exact steps to use to create a NIC
bond depend on the number of NICs in your host, and whether the management interface of the host is assigned to a PIF
to be used in the bond.
XenServer supports Source Level Balancing (SLB) NIC bonding. SLB bonding:

is an active/active mode, but only supports load-balancing of VM traffic across the physical NICs
provides fail-over support for all other traffic types
does not require switch support for Etherchannel or 802.3ad (LACP)
load balances traffic between multiple interfaces at VM granularity by sending traffic through different interfaces
based on the source MAC address of the packet
is derived from the open source ALB mode and reuses the ALB capability to dynamically re-balance load across
interfaces

Any given VIF will only use one of the links in the bond at a time. At startup no guarantees are made about the affinity of
a given VIF to a link in the bond. However, for VIFs with high throughput, periodic rebalancing ensures that the load on
the links is approximately equal.
API Management traffic can be assigned to a XenServer bond interface and will be automatically load-balanced across
the physical NICs.
XenServer bonded PIFs do not require IP configuration for the bond when used for guest traffic. This is because the bond
operates at Layer 2 of the OSI model, the data link layer, and no IP addressing is used at this layer. When used for non-guest
traffic (to connect to it with XenCenter for management, or to connect to shared network storage), one IP configuration
is required per bond. (Incidentally, this is true of unbonded PIFs as well, and is unchanged from XenServer 4.1.0.)
Gratuitous ARP packets are sent when assignment of traffic changes from one interface to another as a result of fail-over.
Re-balancing is provided by the existing ALB re-balance capabilities: the number of bytes going over each slave
(interface) is tracked over a given period. When a packet is to be sent that contains a new source MAC address, it is
assigned to the slave interface with the lowest utilization. Traffic is re-balanced every 10 seconds.

Note
Bonding is set up with an Up Delay of 31000 ms and a Down Delay of 200 ms. The seemingly long Up Delay is purposeful
because of the time taken by some switches to actually start routing traffic. Without it, when a link comes back after
failing, the bond might rebalance traffic onto it before the switch is ready to pass traffic. If you want to move both
connections to a different switch, move one, then wait 31 seconds for it to be used again before moving the other.

4.1.5. Initial networking configuration

The XenServer host networking configuration is specified during initial host installation. Options such as IP address
configuration (DHCP/static), the NIC used as the management interface, and hostname are set based on the values
provided during installation.
When a XenServer host has a single NIC, the following configuration is present after installation:

a single PIF is created corresponding to the host's single NIC
the PIF is configured with the IP addressing options specified during installation and to enable management of
the host
the PIF is set for use in host management operations
a single network, network 0, is created
network 0 is connected to the PIF to enable external connectivity to VMs

When a host has multiple NICs, the configuration present after installation depends on which NIC is selected for
management operations during installation:

PIFs are created for each NIC in the host
the PIF of the NIC selected for use as the management interface is configured with the IP addressing options
specified during installation
a network is created for each PIF ("network 0", "network 1", etc.)
each network is connected to one PIF
the IP addressing options of all other PIFs are left unconfigured

In both cases the resulting networking configuration allows connection to the XenServer host by XenCenter, the xe CLI,
and any other management software running on separate machines via the IP address of the management interface.
The configuration also provides external networking for VMs created on the host.
The PIF used for management operations is the only PIF ever configured with an IP address. External networking for
VMs is achieved by bridging PIFs to VIFs using the network object, which acts as a virtual Ethernet switch.
The steps required for networking features such as VLANs, NIC bonds, and dedicating a NIC to storage traffic are covered
in the following sections.
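
To review the configuration that installation produced on a given host, a quick inspection along these lines can be useful; the output will of course vary with the hardware.

# Show each PIF, the NIC it represents, and whether it carries the management interface
xe pif-list params=uuid,device,management,IP-configuration-mode

# Show the networks created at installation and their bridges
xe network-list params=uuid,name-label,bridge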

4.2. Managing networking configuration

Some of the network configuration procedures in this section differ depending on whether you are configuring a
standalone server or a server that is part of a resource pool.

4.2.1. Creating networks in a standalone server

Because external networks are created for each PIF during host installation, creating additional networks is typically only
required to:

use an internal network
support advanced operations such as VLANs or NIC bonding

To add or remove networks using XenCenter, refer to the XenCenter online Help.

To add a new network using the CLI

1. Open the XenServer host text console.

2. Create the network with the network-create command, which returns the UUID of the newly created network:

   xe network-create name-label=<mynetwork>

   At this point the network is not connected to a PIF and therefore is internal.

4.2.2. Creating networks in resource pools

All XenServer hosts in a resource pool should have the same number of physical network interface cards (NICs), although
this requirement is not strictly enforced when a XenServer host is joined to a pool.
Having the same physical networking configuration for XenServer hosts within a pool is important because all hosts in a
pool share a common set of XenServer networks. PIFs on the individual hosts are connected to pool-wide networks based
on device name. For example, all XenServer hosts in a pool with an eth0 NIC will have a corresponding PIF plugged
into the pool-wide Network 0 network. The same will be true for hosts with eth1 NICs and Network 1, as well as
other NICs present in at least one XenServer host in the pool.
If one XenServer host has a different number of NICs than other hosts in the pool, complications can arise because not all
pool networks will be valid for all pool hosts. For example, if hosts host1 and host2 are in the same pool and host1 has four
NICs while host2 only has two, only the networks connected to PIFs corresponding to eth0 and eth1 will be valid on host2.
VMs on host1 with VIFs connected to networks corresponding to eth2 and eth3 will not be able to migrate to host host2.
All NICs of all XenServer hosts within a resource pool must be configured with the same MTU size.
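
One way to confirm the MTU requirement across a pool is to list the MTU of every PIF on every host; this is a sketch and simply relies on the MTU field of the PIF object.

# List the MTU configured on each PIF of every host in the pool
xe pif-list params=host-name-label,device,MTU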

4.2.3. Creating VLANs

For servers in a resource pool, you can use the pool-vlan-create command. This command creates the VLAN and
automatically creates and plugs in the required PIFs on the hosts in the pool. See Section 8.4.22.2, pool-vlan-create for
more information.

To connect a network to an external VLAN using the CLI

1. Open the XenServer host text console.

2. Create a new network for use with the VLAN. The UUID of the new network is returned:

   xe network-create name-label=network5

3. Use the pif-list command to find the UUID of the PIF corresponding to the physical NIC supporting the
   desired VLAN tag. The UUIDs and device names of all PIFs are returned, including any existing VLANs:

   xe pif-list

4. Create a VLAN object specifying the desired physical PIF and VLAN tag on all VMs to be connected to the new
   VLAN. A new PIF will be created and plugged into the specified network. The UUID of the new PIF object is
   returned:

   xe vlan-create network-uuid=<network_uuid> pif-uuid=<pif_uuid> vlan=5

5. Attach VM VIFs to the new network. See Section 4.2.1, Creating networks in a standalone server for more
   details.
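
For the pool-wide case mentioned at the start of this section, a minimal sketch using pool-vlan-create might look like the following; the UUIDs and VLAN tag are placeholders.

# Create the VLAN on every host in the pool in one step; the PIF UUID is that of
# the physical PIF (on the pool master) that carries the VLAN
xe pool-vlan-create network-uuid=<network_uuid> pif-uuid=<master_pif_uuid> vlan=5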

4.2.4. Creating NIC bonds on a standalone host

Citrix recommends using XenCenter to create NIC bonds. For details, refer to the XenCenter help.
This section describes how to use the xe CLI to create bonded NIC interfaces on a standalone XenServer host.
See Section 4.2.5, Creating NIC bonds in resource pools for details on using the xe CLI to create NIC bonds on
XenServer hosts that comprise a resource pool.

4.2.4.1. Creating a NIC bond on a dual-NIC host

Creating a bond on a dual-NIC host implies that the PIF/NIC currently in use as the management interface for the host
will be subsumed by the bond. The additional steps required to move the management interface to the bond PIF are
included.

Bonding two NICs together

1. Use XenCenter or the vm-shutdown command to shut down all VMs on the host, thereby forcing all VIFs to be
   unplugged from their current networks. The existing VIFs will be invalid after the bond is enabled.

   xe vm-shutdown uuid=<vm_uuid>

2. Use the network-create command to create a new network for use with the bonded NIC. The UUID of the
   new network is returned:

   xe network-create name-label=<bond0>

3. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond:

   xe pif-list

4. Use the bond-create command to create the bond by specifying the newly created network UUID and the
   UUIDs of the PIFs to be bonded, separated by commas. The UUID for the bond is returned:

   xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>

   Note
   See Section 4.2.4.2, Controlling the MAC address of the bond for details on controlling the MAC address
   used for the bond PIF.

5. Use the pif-list command to determine the UUID of the new bond PIF:

   xe pif-list device=<bond0>

6. Use the pif-reconfigure-ip command to configure the desired management interface IP address settings
   for the bond PIF. See Chapter 8, Command line interface for more detail on the options available for the
   pif-reconfigure-ip command:

   xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP

7. Use the host-management-reconfigure command to move the management interface from the existing
   physical PIF to the bond PIF. This step will activate the bond:

   xe host-management-reconfigure pif-uuid=<bond_pif_uuid>

8. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF
   previously used for the management interface. This step is not strictly necessary but might help reduce confusion
   when reviewing the host networking configuration:

   xe pif-reconfigure-ip uuid=<old_management_pif_uuid> mode=None

9. Move existing VMs to the bond network using the vif-destroy and vif-create commands. This step can
   also be completed using XenCenter by editing the VM configuration and connecting the existing VIFs of a VM to
   the bond network.

10. Restart the VMs shut down in step 1.

4.2.4.2. Controlling the MAC address of the bond

Creating a bond on a dual-NIC host implies that the PIF/NIC currently in use as the management interface for the host
will be subsumed by the bond. If DHCP is used to supply IP addresses to the host, in most cases the MAC address of the bond
should be the same as that of the PIF/NIC currently in use, allowing the IP address of the host received from DHCP to remain
unchanged.
The MAC address of the bond can be changed from that of the PIF/NIC currently in use for the management interface, but doing
so will cause existing network sessions to the host to be dropped when the bond is enabled and the MAC/IP address in use
changes.
The MAC address to be used for a bond can be controlled in two ways:

An optional mac parameter can be specified in the bond-create command. Using this parameter, the bond
MAC address can be set to any arbitrary address, as shown in the example after this list.
If the mac parameter is not specified, the MAC address of the first PIF listed in the pif-uuids parameter is
used for the bond.
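
A minimal sketch of specifying the MAC address explicitly; the UUIDs and MAC address are placeholders.

# Create the bond and force its MAC address, for example to match the current management NIC
xe bond-create network-uuid=<network_uuid> \
pif-uuids=<pif_uuid_1>,<pif_uuid_2> mac=<desired_mac_address>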
4.2.4.3. Reverting NIC bonds

If reverting a XenServer host to a non-bonded configuration, be aware of the following requirements:

As when creating a bond, all VMs with VIFs on the bond must be shut down prior to destroying the bond. After
reverting to a non-bonded configuration, reconnect the VIFs to an appropriate network.
Move the management interface to another PIF using the pif-reconfigure-ip and host-management-reconfigure
commands prior to issuing the bond-destroy command, otherwise connections to the host
(including XenCenter) will be dropped.
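
A minimal sketch of the revert sequence described above, assuming placeholder UUIDs and that affected VMs have already been shut down:

# Move the management interface back to a physical (non-bond) PIF first
xe pif-reconfigure-ip uuid=<physical_pif_uuid> mode=DHCP
xe host-management-reconfigure pif-uuid=<physical_pif_uuid>

# Then find and destroy the bond; its member PIFs become usable again
xe bond-list
xe bond-destroy uuid=<bond_uuid>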

4.2.5. Creating NIC bonds in resource pools

Whenever possible, create NIC bonds as part of initial resource pool creation, prior to joining additional hosts to the pool or
creating VMs. Doing so allows the bond configuration to be automatically replicated to hosts as they are joined to the
pool and reduces the number of steps required. Adding a NIC bond to an existing pool requires creating the bond
configuration manually on the master and each of the members of the pool. Adding a NIC bond to an existing pool after
VMs have been installed is also a disruptive operation, as all VMs in the pool must be shut down.
Citrix recommends using XenCenter to create NIC bonds. For details, refer to the XenCenter help.
This section describes using the xe CLI to create bonded NIC interfaces on XenServer hosts that comprise a resource pool.
See Section 4.2.4.1, Creating a NIC bond on a dual-NIC host for details on using the xe CLI to create NIC bonds on a
standalone XenServer host.

Warning
Do not attempt to create network bonds while HA is enabled. The process of bond creation will disturb the in-progress HA
heartbeating and cause hosts to self-fence (shut themselves down); subsequently they will likely fail to reboot properly and
will need the host-emergency-ha-disable command to recover.
4.2.5.1. Adding NIC bonds to new resource pools

1. Select the host you want to be the master. The master host belongs to an unnamed pool by default. To create a
   resource pool with the CLI, rename the existing nameless pool:

   xe pool-param-set name-label=<"New Pool"> uuid=<pool_uuid>

2. Create the NIC bond on the master as follows:

   a. Use the network-create command to create a new pool-wide network for use with the bonded
      NICs. The UUID of the new network is returned:

      xe network-create name-label=<network_name>

   b. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond:

      xe pif-list

   c. Use the bond-create command to create the bond, specifying the network UUID created in step a
      and the UUIDs of the PIFs to be bonded, separated by commas. The UUID for the bond is returned:

      xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>

      Note
      See Section 4.2.4.2, Controlling the MAC address of the bond for details on controlling the MAC
      address used for the bond PIF.

   d. Use the pif-list command to determine the UUID of the new bond PIF:

      xe pif-list network-uuid=<network_uuid>

   e. Use the pif-reconfigure-ip command to configure the desired management interface IP
      address settings for the bond PIF. See Chapter 8, Command line interface, for more detail on the options
      available for the pif-reconfigure-ip command:

      xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP

   f. Use the host-management-reconfigure command to move the management interface from
      the existing physical PIF to the bond PIF. This step will activate the bond:

      xe host-management-reconfigure pif-uuid=<bond_pif_uuid>

   g. Use the pif-reconfigure-ip command to remove the IP address configuration from the
      non-bonded PIF previously used for the management interface. This step is not strictly necessary but might
      help reduce confusion when reviewing the host networking configuration:

      xe pif-reconfigure-ip uuid=<old_management_pif_uuid> mode=None

3. Open a console on a host that you want to join to the pool and run the command:

   xe pool-join master-address=<host1> master-username=root master-password=<password>

   The network and bond information is automatically replicated to the new host. However, the management
   interface is not automatically moved from the host NIC to the bonded NIC. Move the management interface on
   the host to enable the bond as follows:

   a. Use the host-list command to find the UUID of the host being configured:

      xe host-list

   b. Use the pif-list command to determine the UUID of the bond PIF on the new host. Include
      the host-uuid parameter to list only the PIFs on the host being configured:

      xe pif-list network-name-label=<network_name> host-uuid=<host_uuid>

   c. Use the pif-reconfigure-ip command to configure the desired management interface IP
      address settings for the bond PIF. See Chapter 8, Command line interface, for more detail on the options
      available for the pif-reconfigure-ip command. This command must be run directly on the host:

      xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP

   d. Use the host-management-reconfigure command to move the management interface from
      the existing physical PIF to the bond PIF. This step activates the bond. This command must be run directly on
      the host:

      xe host-management-reconfigure pif-uuid=<bond_pif_uuid>

   e. Use the pif-reconfigure-ip command to remove the IP address configuration from the
      non-bonded PIF previously used for the management interface. This step is not strictly necessary but may help
      reduce confusion when reviewing the host networking configuration. This command must be run directly on
      the host server:

      xe pif-reconfigure-ip uuid=<old_mgmt_pif_uuid> mode=None

4. For each additional host you want to join to the pool, repeat steps 3 and 4 to move the management interface
   on the host and to enable the bond.

4.2.5.2. Adding NIC bonds to an existing pool

Warning
Do not attempt to create network bonds while HA is enabled. The process of bond creation disturbs the in-progress HA
heartbeating and causes hosts to self-fence (shut themselves down); subsequently they will likely fail to reboot properly and
you will need to run the host-emergency-ha-disable command to recover them.

Note
If you are not using XenCenter for NIC bonding, the quickest way to create pool-wide NIC bonds is to create the bond
on the master, and then restart the other pool members. Alternatively you can use the service xapi restart
command. This causes the bond and VLAN settings on the master to be inherited by each host. The
management interface of each host must, however, be manually reconfigured.

When adding a NIC bond to an existing pool, the bond must be manually created on each host in the pool. The steps
below can be used to add NIC bonds on both the pool master and other hosts with the following requirements:

1. All VMs in the pool must be shut down.

2. Add the bond to the pool master first, and then to other hosts.

3. The bond-create, host-management-reconfigure and host-management-disable commands
   affect the host on which they are run and so are not suitable for use on one host in a pool to change the
   configuration of another. Run these commands directly on the console of the host to be affected.

To add NIC bonds to the pool master and other hosts

1. Use the network-create command to create a new pool-wide network for use with the bonded NICs. This
   step should only be performed once per pool. The UUID of the new network is returned:

   xe network-create name-label=<bond0>

2. Use XenCenter or the vm-shutdown command to shut down all VMs in the host pool to force all existing VIFs
   to be unplugged from their current networks. The existing VIFs will be invalid after the bond is enabled.

   xe vm-shutdown uuid=<vm_uuid>

3. Use the host-list command to find the UUID of the host being configured:

   xe host-list

4. Use the pif-list command to determine the UUIDs of the PIFs to use in the bond. Include the
   host-uuid parameter to list only the PIFs on the host being configured:

   xe pif-list host-uuid=<host_uuid>

5. Use the bond-create command to create the bond, specifying the network UUID created in step 1 and the
   UUIDs of the PIFs to be bonded, separated by commas. The UUID for the bond is returned:

   xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>

   Note
   See Section 4.2.4.2, Controlling the MAC address of the bond for details on controlling the MAC address
   used for the bond PIF.

6. Use the pif-list command to determine the UUID of the new bond PIF. Include the
   host-uuid parameter to list only the PIFs on the host being configured:

   xe pif-list device=bond0 host-uuid=<host_uuid>

7. Use the pif-reconfigure-ip command to configure the desired management interface IP address settings
   for the bond PIF. See Chapter 8, Command line interface for more detail on the options available for the
   pif-reconfigure-ip command. This command must be run directly on the host:

   xe pif-reconfigure-ip uuid=<bond_pif_uuid> mode=DHCP

8. Use the host-management-reconfigure command to move the management interface from the existing
   physical PIF to the bond PIF. This step will activate the bond. This command must be run directly on the host:

   xe host-management-reconfigure pif-uuid=<bond_pif_uuid>

9. Use the pif-reconfigure-ip command to remove the IP address configuration from the non-bonded PIF
   previously used for the management interface. This step is not strictly necessary, but might help reduce confusion
   when reviewing the host networking configuration. This command must be run directly on the host:

   xe pif-reconfigure-ip uuid=<old_management_pif_uuid> mode=None

10. Move existing VMs to the bond network using the vif-destroy and vif-create commands. This step can
    also be completed using XenCenter by editing the VM configuration and connecting the existing VIFs of the VM to
    the bond network.

11. Repeat steps 3 - 10 for the other hosts.

12. Restart the VMs previously shut down.

4.2.6. Configuring a dedicated storage NIC

XenServer allows use of either XenCenter or the xe CLI to configure and dedicate a NIC to specific functions, such as
storage traffic.
Assigning a NIC to a specific function will prevent the use of the NIC for other functions such as host management, but
requires that the appropriate network configuration be in place in order to ensure the NIC is used for the desired traffic.
For example, to dedicate a NIC to storage traffic, the NIC, storage target, switch, and/or VLAN must be configured such
that the target is only accessible over the assigned NIC. This allows use of standard IP routing to control how traffic is
routed between multiple NICs within a XenServer host.

Note
Before dedicating a network interface as a storage interface for use with iSCSI or NFS SRs, ensure that the dedicated
interface uses a separate IP subnet which is not routable from the main management interface. If this is not enforced,
then storage traffic may be directed over the main management interface after a host reboot, due to the order in which
network interfaces are initialized.

To assign NIC functions using the xe CLI

1. Ensure that the PIF is on a separate subnet, or routing is configured to suit your network topology in order to
   force the desired traffic over the selected PIF.

2. Set up an IP configuration for the PIF, adding appropriate values for the mode parameter and, if using static IP
   addressing, the IP, netmask, gateway, and DNS parameters:

   xe pif-reconfigure-ip mode=<DHCP|Static> uuid=<pif-uuid>

3. Set the PIF's disallow-unplug parameter to true:

   xe pif-param-set disallow-unplug=true uuid=<pif-uuid>
   xe pif-param-set other-config:management_purpose="Storage" uuid=<pif-uuid>

If you want to use a storage interface that can also be routed from the management interface (bearing in mind that this
configuration is not recommended), then you have two options:

After a host reboot, ensure that the storage interface is correctly configured, and use the xe pbd-unplug
and xe pbd-plug commands to reinitialize the storage connections on the host. This will restart
the storage connection and route it over the correct interface.
Alternatively, you can use xe pif-forget to remove the interface from the XenServer database, and
manually configure it in the control domain. This is an advanced option and requires you to be familiar with
how to manually configure Linux networking.
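
A minimal sketch of the re-initialization option described in the first bullet above, with placeholder UUIDs:

# After the host reboots, find the PBDs for the SRs reached over the storage NIC
xe pbd-list host-uuid=<host_uuid> params=uuid,sr-name-label,currently-attached

# Re-plug each affected PBD so storage traffic is re-established over the dedicated interface
xe pbd-unplug uuid=<pbd_uuid>
xe pbd-plug uuid=<pbd_uuid>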

4.2.7. Controlling Quality of Service (QoS)

Citrix Essentials for XenServer allows an optional Quality of Service (QoS) value to be set on VM virtual network interfaces
(VIFs) using the CLI. The supported QoS algorithm type is rate limiting, specified as a maximum transfer rate for the VIF in
Kb per second.
For example, to limit a VIF to a maximum transfer rate of 100 kb/s, use the vif-param-set command:

xe vif-param-set uuid=<vif_uuid> qos_algorithm_type=ratelimit
xe vif-param-set uuid=<vif_uuid> qos_algorithm_params:kbps=100

4.2.8. Changing networking configuration options

This section discusses how to change the networking configuration of a XenServer host. This includes:

changing the hostname
adding or removing DNS servers
changing IP addresses
changing which NIC is used as the management interface
adding a new physical NIC to the server

4.2.8.1. Hostname

The system hostname is defined in the pool-wide database and modified using the xe host-set-hostname-live CLI
command as follows:

xe host-set-hostname-live uuid=<host_uuid> host-name=example

The underlying control domain hostname changes dynamically to reflect the new hostname.

4.2.8.2. DNS servers

To add or remove DNS servers in the IP addressing configuration of a XenServer host, use the pif-reconfigure-ip
command. For example, for a PIF with a static IP:

xe pif-reconfigure-ip uuid=<pif_uuid> mode=static DNS=<new_dns_ip>

4.2.8.3. Changing IP address configuration for a standalone host

Network interface configuration can be changed using the xe CLI. The underlying network configuration scripts should
not be modified directly.
To modify the IP address configuration of a PIF, use the pif-reconfigure-ip CLI command. See Section 8.4.11.4,
pif-reconfigure-ip for details on the parameters of the pif-reconfigure-ip command.

Note
See Section 4.2.8.4, Changing IP address configuration in resource pools for details on changing host IP addresses in
resource pools.
4.2.8.4. Changing IP address configuration in resource pools

XenServer hosts in resource pools have a single management IP address used for management and communication to
and from other hosts in the pool. The steps required to change the IP address of a host's management interface are
different for the master and for other hosts.

Note
Caution should be used when changing the IP address of a server, and other networking parameters. Depending upon
the network topology and the change being made, connections to network storage may be lost. If this happens the
storage must be replugged using the Repair Storage function in XenCenter, or the pbd-plug command using the CLI.
For this reason, it may be advisable to migrate VMs away from the server before changing its IP configuration.

Changing the IP address of a pool member host

1. Use the pif-reconfigure-ip CLI command to set the IP address as desired. See Chapter 8, Command line
   interface for details on the parameters of the pif-reconfigure-ip command:

   xe pif-reconfigure-ip uuid=<pif_uuid> mode=DHCP

2. Use the host-list CLI command to confirm that the member host has successfully reconnected to the master
   host by checking that all the other XenServer hosts in the pool are visible:

   xe host-list

Changing the IP address of the master XenServer host requires additional steps because each of the member hosts uses the
advertised IP address of the pool master for communication and will not know how to contact the master when its IP
address changes.
Whenever possible, use a dedicated IP address that is not likely to change for the lifetime of the pool for pool masters.

To change the IP address of a pool master host

1. Use the pif-reconfigure-ip CLI command to set the IP address as desired. See Chapter 8, Command line
   interface for details on the parameters of the pif-reconfigure-ip command:

   xe pif-reconfigure-ip uuid=<pif_uuid> mode=DHCP

2. When the IP address of the pool master host is changed, all member hosts will enter into an emergency mode
   when they fail to contact the master host.

3. On the master XenServer host, use the pool-recover-slaves command to force the master to contact
   each of the member hosts and inform them of the new master IP address:

   xe pool-recover-slaves

Refer to Section 6.4.2, Master failures for more information on emergency mode.
4.2.8.5. Management interface

When XenServer is installed on a host with multiple NICs, one NIC is selected for use as the management interface. The
management interface is used for XenCenter connections to the host and for host-to-host communication.

To change the NIC used for the management interface

1. Use the pif-list command to determine which PIF corresponds to the NIC to be used as the management
   interface. The UUID of each PIF is returned:

   xe pif-list

2. Use the pif-param-list command to verify the IP addressing configuration for the PIF that will be used for
   the management interface. If necessary, use the pif-reconfigure-ip command to configure IP addressing for
   the PIF to be used. See Chapter 8, Command line interface for more detail on the options available for the
   pif-reconfigure-ip command:

   xe pif-param-list uuid=<pif_uuid>

3. Use the host-management-reconfigure CLI command to change the PIF used for the management
   interface. If this host is part of a resource pool, this command must be issued on the member host console:

   xe host-management-reconfigure pif-uuid=<pif_uuid>

Warning
Putting the management interface on a VLAN network is not supported.
4.2.8.6. Disabling management access

To disable remote access to the management console entirely, use the host-management-disable CLI command.

Warning
Once the management interface is disabled, you will have to log in on the physical host console to perform management
tasks, and external interfaces such as XenCenter will no longer work.
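
For reference, the command takes no arguments and acts on the host where it is run:

# Run directly on the host console; remote XenCenter and CLI access stop working afterwards
xe host-management-disable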
4.2.8.7. Adding a new physical NIC

Install a new physical NIC on a XenServer host in the usual manner. Then, after restarting the server, run the xe CLI
command pif-scan to cause a new PIF object to be created for the new NIC.
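
A minimal sketch, assuming the host UUID is looked up first:

# Find the UUID of the host that received the new NIC, then scan it for new PIFs
xe host-list params=uuid,name-label
xe pif-scan host-uuid=<host_uuid>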

4.2.9. NIC/PIF ordering in resource pools

It is possible for physical NIC devices to be discovered in different orders on different servers even though the servers
contain the same hardware. Verifying NIC ordering is recommended before using the pooling features of XenServer.

4.2.9.1. Verifying NIC ordering

Use the pif-list command to verify that NIC ordering is consistent across your XenServer hosts. Review the MAC
address and carrier (link state) parameters associated with each PIF to verify that the devices discovered (eth0, eth1,
etc.) correspond to the appropriate physical port on the server.

xe pif-list params=uuid,device,MAC,currently-attached,carrier,management,\
IP-configuration-mode

uuid ( RO)                  : 1ef8209d-5db5-cf69-3fe6-0e8d24f8f518
device ( RO)                : eth0
MAC ( RO)                   : 00:19:bb:2d:7e:8a
currently-attached ( RO)    : true
management ( RO)            : true
IP-configuration-mode ( RO) : DHCP
carrier ( RO)               : true

uuid ( RO)                  : 829fd476-2bbb-67bb-139f-d607c09e9110
device ( RO)                : eth1
MAC ( RO)                   : 00:19:bb:2d:7e:7a
currently-attached ( RO)    : false
management ( RO)            : false
IP-configuration-mode ( RO) : None
carrier ( RO)               : true

If the hosts have already been joined in a pool, add the host-uuid parameter to the pif-list command to scope
the results to the PIFs on a given host.
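
For example, to scope the check to one pooled host (placeholder UUID):

# Limit the listing to the PIFs of a single host in the pool
xe pif-list host-uuid=<host_uuid> params=uuid,device,MAC,carrier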
4.2.9.2. Re-ordering NICs

It is not possible to directly rename a PIF, although you can use the pif-forget and pif-introduce commands to
achieve the same effect with the following restrictions:

The XenServer host must be standalone and not joined to a resource pool.
Re-ordering a PIF configured as the management interface of the host requires additional steps which are
included in the example below. Because the management interface must first be disabled, the commands
must be entered directly on the host console.

For the example configuration shown above, use the following steps to change the NIC ordering so
that eth0 corresponds to the device with a MAC address of 00:19:bb:2d:7e:7a:

1. Use XenCenter or the vm-shutdown command to shut down all VMs in the pool to force existing VIFs to be
   unplugged from their networks.

   xe vm-shutdown uuid=<vm_uuid>

2. Use the host-management-disable command to disable the management interface:

   xe host-management-disable

3. Use the pif-forget command to remove the two incorrect PIF records:

   xe pif-forget uuid=1ef8209d-5db5-cf69-3fe6-0e8d24f8f518
   xe pif-forget uuid=829fd476-2bbb-67bb-139f-d607c09e9110

4. Use the pif-introduce command to re-introduce the devices with the desired naming:

   xe pif-introduce device=eth0 host-uuid=<host_uuid> mac=00:19:bb:2d:7e:7a
   xe pif-introduce device=eth1 host-uuid=<host_uuid> mac=00:19:bb:2d:7e:8a

5. Use the pif-list command again to verify the new configuration:

   xe pif-list params=uuid,device,MAC

6. Use the pif-reconfigure-ip command to reset the management interface IP addressing configuration.
   See Chapter 8, Command line interface for details on the parameters of the pif-reconfigure-ip command:

   xe pif-reconfigure-ip uuid=<728d9e7f-62ed-a477-2c71-3974d75972eb> mode=dhcp

7. Use the host-management-reconfigure command to set the management interface to the desired PIF
   and re-enable external management connectivity to the host:

   xe host-management-reconfigure pif-uuid=<728d9e7f-62ed-a477-2c71-3974d75972eb>

4.3. Networking Troubleshooting

If you are having problems with configuring networking, first ensure that you have not directly modified any of the
control domain ifcfg-* files. These files are directly managed by the control domain host agent, and changes
will be overwritten.

4.3.1. Diagnosing network corruption

Some models of network cards require firmware upgrades from the vendor to work reliably under load, or when certain
optimizations are turned on. If you are seeing corrupted traffic to VMs, then you should first try to obtain the latest
recommended firmware from your vendor and apply a BIOS update.
If the problem still persists, then you can use the CLI to disable receive / transmit offload optimizations on the physical
interface.

Warning
Disabling receive / transmit offload optimizations can result in a performance loss and / or increased CPU usage.

First, determine the UUID of the physical interface. You can filter on the device field as follows:

xe pif-list device=eth0

Next, set the following parameter on the PIF to disable TX offload:

xe pif-param-set uuid=<pif_uuid> other-config:ethtool-tx=off

Finally, re-plug the PIF or reboot the host for the change to take effect.
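
A minimal sketch of the re-plug step (placeholder UUID); note that unplugging the management PIF will interrupt connectivity, in which case rebooting the host is the safer option.

# Re-plug the PIF so the new ethtool setting is applied
xe pif-unplug uuid=<pif_uuid>
xe pif-plug uuid=<pif_uuid>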

4.3.2. Recovering from a bad network configuration

In some cases it is possible to render networking unusable by creating an incorrect configuration. This is particularly true
when attempting to make network configuration changes on a member XenServer host.
If a loss of networking occurs, the following notes may be useful in recovering and regaining network connectivity:

Citrix recommends that you ensure networking configuration is set up correctly before creating a resource pool,
as it is usually easier to recover from a bad configuration in a non-pooled state.
The host-management-reconfigure and host-management-disable commands affect the
XenServer host on which they are run and so are not suitable for use on one host in a pool to change the
configuration of another. Run these commands directly on the console of the XenServer host to be affected,
or use the xe -s, -u, and -pw remote connection options.
When the xapi service starts, it will apply configuration to the management interface first. The name of the
management interface is saved in the /etc/xensource-inventory file. In extreme cases, you can
stop the xapi service by running service xapi stop at the console, edit the inventory file to set the
management interface to a safe default, and then ensure that the ifcfg files
in /etc/sysconfig/network-scripts have correct configurations for a minimal network
configuration (including one interface and one bridge; for example, eth0 on the xenbr0 bridge).

Chapter 5. Workload Balancing

Table of Contents

5.1. Workload Balancing Overview
5.1.1. Workload Balancing Basic Concepts
5.2. Designing Your Workload Balancing Deployment
5.2.1. Deploying One Server
5.2.2. Planning for Future Growth
5.2.3. Increasing Availability
5.2.4. Multiple Server Deployments
5.2.5. Workload Balancing Security
5.3. Workload Balancing Installation Overview
5.3.1. Workload Balancing System Requirements
5.3.2. Workload Balancing Data Store Requirements
5.3.3. Operating System Language Support
5.3.4. Preinstallation Considerations
5.3.5. Installing Workload Balancing
5.4. Windows Installer Commands for Workload Balancing
5.4.1. ADDLOCAL
5.4.2. CERT_CHOICE
5.4.3. CERTNAMEPICKED
5.4.4. DATABASESERVER
5.4.5. DBNAME
5.4.6. DBUSERNAME
5.4.7. DBPASSWORD
5.4.8. EXPORTCERT
5.4.9. EXPORTCERT_FQFN
5.4.10. HTTPS_PORT
5.4.11. INSTALLDIR
5.4.12. PREREQUISITES_PASSED
5.4.13. RECOVERYMODEL
5.4.14. USERORGROUPACCOUNT
5.4.15. WEBSERVICE_USER_CB
5.4.16. WINDOWS_AUTH
5.5. Initializing and Configuring Workload Balancing
5.5.1. Initialization Overview
5.5.2. To initialize Workload Balancing
5.5.3. To edit the Workload Balancing configuration for a pool
5.5.4. Authorization for Workload Balancing
5.5.5. Configuring Antivirus Software
5.5.6. Changing the Placement Strategy
5.5.7. Changing the Performance Thresholds and Metric Weighting
5.6. Accepting Optimization Recommendations
5.6.1. To accept an optimization recommendation
5.7. Choosing an Optimal Server for VM Initial Placement, Migrate, and Resume
5.7.1. To start a virtual machine on the optimal server
5.8. Entering Maintenance Mode with Workload Balancing Enabled
5.8.1. To enter maintenance mode with Workload Balancing enabled
5.9. Working with Workload Balancing Reports
5.9.1. Introduction
5.9.2. Types of Workload Balancing Reports
5.9.3. Using Workload Balancing Reports for Tasks
5.9.4. Creating Workload Balancing Reports
5.9.5. Generating Workload Balancing Reports
5.9.6. Workload Balancing Report Glossary
5.10. Administering Workload Balancing
5.10.1. Disabling Workload Balancing on a Resource Pool
5.10.2. Reconfiguring a Resource Pool to Use Another WLB Server
5.10.3. Uninstalling Workload Balancing
5.11. Troubleshooting Workload Balancing
5.11.1. General Troubleshooting Tips
5.11.2. Error Messages
5.11.3. Issues Installing Workload Balancing
5.11.4. Issues Initializing Workload Balancing
5.11.5. Issues Starting Workload Balancing
5.11.6. Workload Balancing Connection Errors
5.11.7. Issues Changing Workload Balancing Servers

5.1. Workload Balancing Overview


Workload Balancing ij a XenJerver feature that helpj you balance virtual machine workloadj acrojj hojtj and locate VMj
on the bejt pojjible jerverj for their workload in a rejource pool. When Workload Balancing placej a virtual machine, it
determinej the bejt hojt on which to jtart a virtual machine or it rebalancej the workload acrojj hojtj in a pool. For
example, Workload Balancing letj you determine where to:
Jtart a virtual machine
Rejume a virtual machine that you powered off
Move virtual machinej when a hojt failj
When Workload Balancing ij enabled, if you put a hojt into Maintenance Mode, Workload Balancing jelectj the optimal
jerver for each of the hojt'j virtual machinej. For virtual machinej taken offline, Workload Balancing providej
recommendationj to help you rejtart virtual machinej on the optimal jerver in the pool.
Workload Balancing aljo letj you balance virtual-machine workloadj acrojj hojtj in a XenJerver rejource pool. When the
workload on a hojt exceedj the level you jet aj acceptable (the threjhold), Workload Balancing will make
recommendationj to move part of itj workload (for example, one or two virtual machinej) to a lejj-taxed hojt in the jame
pool. It doej thij by evaluating the exijting workloadj on hojtj againjt rejource performance on other hojtj.
You can aljo uje Workload Balancing to help determine if you can power off hojtj at certain timej of day.
Workload Balancing performj theje tajkj by analyzing XenJerver rejource-pool metricj and recommending
optimizationj. You decide if you want theje recommendationj geared towardj rejource performance or hardware
denjity. You can fine-tune the weighting of individual rejource metricj (CPU, network, memory, and dijk) jo that the
placement recommendationj and critical threjholdj align with your environment'j needj.

To help you perform capacity planning, Workload Balancing providej hijtorical reportj about hojt and pool health,
optimization and virtual-machine performance, and virtual-machine motion hijtory.

5.1.1. Workload Balancing Bajic Conceptj


Workload Balancing captures data for resource performance on virtual machines and physical hosts. It uses this data,
combined with the preferences you set, to provide optimization and placement recommendations. Workload Balancing
stores performance data in a SQL Server database: the longer Workload Balancing runs, the more precise its
recommendations become.
Workload Balancing recommends moving virtual-machine workloads across a pool to get the maximum efficiency, which
means either performance or density depending on your goals. Within a Workload Balancing context:
Performance refers to the usage of physical resources on a host (for example, the CPU, memory, network, and
disk utilization on a host). When you set Workload Balancing to maximize performance, it recommends
placing virtual machines to ensure the maximum amount of resources are available for each virtual
machine.
Density refers to the number of virtual machines on a host. When you set Workload Balancing to maximize
density, it recommends placing virtual machines to ensure they have adequate computing power so you can
reduce the number of hosts powered on in a pool.
Workload Balancing configuration preferences include settings for placement (performance or density), virtual CPUs,
and performance thresholds.
Workload Balancing does not conflict with settings you already specified for High Availability. Citrix designed the features
to work in conjunction with each other.
5.1.1.1. Workload Balancing Component Overview
The Workload Balancing software is a collection of services and components that let you manage all of Workload
Balancing's basic functions, such as managing workloads and displaying reports. You can install the Workload Balancing
services on one computer (physical or virtual) or multiple computers. A Workload Balancing server can manage more
than one resource pool.

Workload Balancing consists of the following components:

Workload Balancing server. Collects data from the virtual machines and their hosts and writes the data to the
data store. This service is also referred to as the "data collector."
Data Store. A Microsoft SQL Server or SQL Server Express database that stores performance and configuration
data.
For more information about Workload Balancing components for large deployments with multiple servers, see Multiple
Server Deployments.

5.2. Designing Your Workload Balancing Deployment


You can install Workload Balancing on one computer (physical or virtual) or distribute the components across multiple
computers. The three most common deployment configurations are the following:
All components are installed on a single server
The data collector is installed on a dedicated server
All components are installed on a single server, but the data store is installed on a central database server

Because one data collector can monitor multiple resource pools, you do not need multiple data collectors to monitor
multiple pools.

5.2.1. Deploying One Server


Depending on your environment and goals, you can install Workload Balancing and the data store on one server. In this
configuration, one data collector monitors all the resource pools.

The following shows the advantages and disadvantages of a single-server deployment:

Advantages:
Simple installation and configuration.
No Windows domain requirement.

Disadvantages:
Single point of failure.

5.2.2. Planning for Future Growth


If you anticipate that you will want to add more resource pools in the future, consider designing your Workload
Balancing deployment so that it supports growth and scalability. Consider:
Using SQL Server for the data store. In large environments, consider using SQL Server for the data store
instead of SQL Server Express. Because SQL Server Express has a 4GB disk-space limit, Workload Balancing
limits the data store to 3.5GB when installed on this database. SQL Server has no preset disk-space limitation.

Deploying the data store on a dedicated server. If you deploy SQL Server on a dedicated server (instead of
collocating it on the same computer as the other Workload Balancing services), you can let it use more
memory.

5.2.3. Increasing Availability


If Workload Balancing's recommendations or reports are critical in your environment, consider implementing strategies to
ensure high availability, such as one of the following:
Installing multiple data collectors, so there is not a single point of failure.
Configuring Microsoft clustering. This is the only true failover configuration for single-server
deployments. However, Workload Balancing services are not "cluster aware," so if the primary server in the
cluster fails, any pending requests are lost when the secondary server in the cluster takes over.
Making Workload Balancing part of a XenServer resource pool with High Availability enabled.

5.2.4. Multiple Server Deployments


In some situations, you might need to deploy Workload Balancing on multiple servers. When you deploy Workload
Balancing on multiple servers, you place its key services on one or more servers:
Data Collection Manager service. Collects data from the virtual machines and their hosts and writes the data to
the data store. This service is also referred to as the "data collector."
Web Service Host. Facilitates communications between XenServer and the Analysis Engine. Requires a
security certificate, which you can create or provide during Setup.
Analysis Engine service. Monitors resource pools and determines if a resource pool needs optimizations.
The size of your XenServer environment affects your Workload Balancing design. Since every environment is different,
the size definitions that follow are examples of environments of that size:

Small: One resource pool with 2 hosts and 8 virtual machines
Medium: Two resource pools with 6 hosts and 8 virtual machines per pool
Large: Five resource pools with 16 hosts and 64 virtual machines per pool

5.2.4.1. Deploying Multiple Servers

Having multiple servers for Workload Balancing's services may be necessary in large environments. For example, having
multiple servers may reduce "bottlenecks." If you decide to deploy Workload Balancing's services on multiple computers,
all servers must be members of mutually trusted Active Directory domains.

Advantages:
Provides better scalability.
Can monitor more resource pools.

Disadvantages:
More equipment to manage and, consequently, more expense.

5.2.4.2. Deploying Multiple Data Collectors


Workload Balancing supports multiple data collectors, which might be beneficial in environments with many resource
pools. When you deploy multiple data collectors, the data collectors work together to ensure all XenServer pools are
being monitored at all times.

All data collectors collect data from their own resource pools. One data collector, referred to as the master, also does the
following:
Checks for configuration changes and determines the relationships between resource pools and data collectors
Checks for new XenServer resource pools to monitor and assigns these pools to a data collector
Monitors the health of the other data collectors

If a data collector goes offline or you add a new resource pool, the master data collector rebalances the workload across
the data collectors. If the master data collector goes offline, another data collector assumes the role of the master.
5.2.4.3. Considering Large Environments
In large environments, consider the following:
When you install Workload Balancing on SQL Server Express, Workload Balancing limits the size of the metrics
data to 3.5GB. If the data grows beyond this size, Workload Balancing starts grooming the data, deleting
older data, automatically.
Citrix recommends putting the data store on one computer and the Workload Balancing services on another
computer.
For Workload Balancing data-store operations, memory utilization is the largest consideration.

5.2.5. Workload Balancing Security


Citrix designed Workload Balancing to operate in a variety of environments, and Citrix recommends properly securing
the installation. The steps required vary according to your planned deployment and your organization's security policies.
This topic provides information about the available options and makes recommendations.

Important
Citrix does not recommend changing the privileges or accounts under which the Workload Balancing services run.
5.2.5.1. Encryption Requirements
XenServer communicates with Workload Balancing using HTTPS. Consequently, you must create or install an SSL/TLS
certificate when you install Workload Balancing (or the Web Services Host, if it is on a separate server). You can either
use a certificate from a Trusted Authority or create a self-signed certificate using Workload Balancing Setup.
The self-signed certificate Workload Balancing Setup creates is not from a Trusted Authority. If you do not want to use this
self-signed certificate, prepare a certificate before you begin Setup and specify that certificate when prompted.
If desired, during Workload Balancing Setup, you can export the certificate so that you can import it into XenServer
after Setup.

Note
If you create a self-signed certificate during Workload Balancing Setup, Citrix recommends that you eventually replace
this certificate with one from a Trusted Authority.
5.2.5.2. Domain Considerations
When deploying Workload Balancing, your environment determines your domain and security requirements.

If your Workload Balancing services are on multiple computers, the computers must be part of a domain.
If your Workload Balancing components are in separate domains, you must configure trust relationships between
those domains.
5.2.5.3. SQL Server Authentication Requirements
When you install SQL Server or SQL Server Express, you must configure Windows authentication (also known as
Integrated Windows Authentication). Workload Balancing does not support SQL Server Authentication.

5.3. Workload Balancing Installation Overview


Workload Balancing is a XenServer feature that helps manage virtual-machine workloads within a XenServer
environment. Workload Balancing requires that you:
1. Install SQL Server or SQL Server Express.
2. Install Workload Balancing on one or more computers (physical or virtual). See Section 5.2, Designing Your
Workload Balancing Deployment.

Typically, you install and configure Workload Balancing after you have created one or more XenServer resource pools
in your environment.
You install all Workload Balancing functions, such as the Workload Balancing data store, the Analysis Engine, and the
Web Service Host, from Setup.
You can install Workload Balancing in one of two ways:
Installation Wizard. Start the installation wizard from Setup.exe. Citrix suggests installing Workload Balancing
from the installation wizard because this method checks that your system meets the installation requirements.
Command Line. If you install Workload Balancing from the command line, the prerequisites are not checked.
For Msiexec properties, see Section 5.4, Windows Installer Commands for Workload Balancing.
When you install the Workload Balancing data store, Setup creates the database. You do not need to run Workload
Balancing Setup locally on the database server: Setup supports installing the data store across a network.
If you are installing Workload Balancing services as components on separate computers, you must install the database
component before the Workload Balancing services.
After installation, you must configure Workload Balancing before you can use it to optimize workloads. For information,
see Section 5.5, Initializing and Configuring Workload Balancing.
For information about System Requirements, see Section 5.3.1, Workload Balancing System Requirements. For
installation instructions, see Section 5.3.5, Installing Workload Balancing.

5.3.1. Workload Balancing System Requirements


This topic provides system requirements for:
Section 5.3.1.1, Supported XenServer Versions
Section 5.3.1.2, Supported Operating Systems
Section 5.3.1.3, Recommended Hardware
Section 5.3.1.4, Data Collection Manager
Section 5.3.1.5, Analysis Engine
Section 5.3.1.6, Web Service Host
For information about data store requirements, see Section 5.3.2, Workload Balancing Data Store Requirements.
5.3.1.1. Supported XenServer Versions
XenServer 5.5
5.3.1.2. Supported Operating Systems
Unless otherwise noted, Workload Balancing components run on the following operating systems (32-bit and 64-bit):
Windows Server 2008
Windows Server 2003, Service Pack 2
Windows Vista
Windows XP Professional, Service Pack 2 or Service Pack 3
If you are installing with User Account Control (UAC) enabled, see Microsoft's documentation.
5.3.1.3. Recommended Hardware
Unless otherwise noted, Workload Balancing components require the following hardware (32-bit and 64-bit):
CPU: 2GHz or faster
Memory: 2GB recommended (1GB of RAM required)
Disk Space: 20GB (minimum)

When all Workload Balancing services are installed on the same server, Citrix recommends that the server have a
minimum of a dual-core processor.
5.3.1.4. Data Collection Manager

Operating System Components: Microsoft .NET Framework 3.5, Service Pack 1 or higher
Hard Drive: 1GB

5.3.1.5. Analysis Engine

Operating System Components: Microsoft .NET Framework 3.5, Service Pack 1 or higher

5.3.1.6. Web Service Host

Operating System Components: Microsoft .NET Framework 3.5, Service Pack 1 or higher
5.3.2. Workload Balancing Data Store Requirements


This topic provides information about the SQL Server versions and configurations that Workload Balancing supports. It also
provides information about additional compatibility and authentication requirements.
5.3.2.1. Installation Requirements for SQL Server
In addition to the prerequisites SQL Server and SQL Server Express require, the data store also requires the following:

Note
In this topic, the term SQL Server refers to both SQL Server and SQL Server Express unless the version is mentioned
explicitly.

Operating System. One of the following, as required by your SQL Server edition:
Windows Server 2008
Windows Server 2003, Service Pack 1 or higher
Windows Vista and Windows XP Professional (for SQL Server Express)

Database. The 32-bit or 64-bit edition of:
SQL Server 2008 Express. The 32-bit edition is available on the Workload Balancing installation media in the sql folder.
SQL Server 2008 (Standard edition or better)
SQL Server 2005, Service Pack 1 or higher (Standard edition or better)
Note
Windows Server 2008 servers require SQL Server 2005, Service Pack 2 or higher.

Required Configurations.
Configure SQL Server for case-insensitive collation. Workload Balancing does not currently support case-sensitive collation.
Microsoft SQL Server 2005 Backward Compatibility Components. See Section 5.3.2.3, Backwards Compatibility Requirement for SQL Server 2008 for more information.

Hard Drive.
SQL Server Express: 5GB
SQL Server: 20GB
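If you are unsure how an existing instance is collated, you can check it before running Workload Balancing Setup. The following is a minimal sketch using the sqlcmd utility that ships with SQL Server 2005 and 2008; the server and instance name shown (WLBDBSERVER\SQLEXPRESS) is a placeholder for your own, and -E uses your current Windows credentials. Collation names that contain "_CI_" are case-insensitive, which is what Workload Balancing requires.

sqlcmd -S WLBDBSERVER\SQLEXPRESS -E -Q "SELECT SERVERPROPERTY('Collation')"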

5.3.2.2. SQL Server Database Authentication Requirements


During installation, Setup must connect and authenticate to the database server to create the data store. Configure the
SQL Server database instance to use either:
Windows Authentication mode, or
SQL Server and Windows Authentication mode (Mixed Mode authentication)
If you create an account on the database for use during Setup, the account must have sysadmin privileges for the
database instance where you want to create the Workload Balancing data store.
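As an illustration only, the following sqlcmd sketch creates a Windows login for a hypothetical setup account and adds it to the sysadmin server role on a SQL Server 2005 or 2008 instance. The login name and instance name are placeholders, not values Workload Balancing requires.

sqlcmd -S WLBDBSERVER\SQLEXPRESS -E -Q "CREATE LOGIN [DOMAIN\wlb_setup] FROM WINDOWS; EXEC sp_addsrvrolemember 'DOMAIN\wlb_setup', 'sysadmin'"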
5.3.2.3. Backwards Compatibility Requirement for SQL Server 2008
After installing SQL Server Express 2008 or SQL Server 2008, you must install the SQL Server 2005 Backward
Compatibility Components on all Workload Balancing computers before running Workload Balancing Setup. The
Backward Compatibility components let Workload Balancing Setup configure the database.
The Workload Balancing installation media includes the 32-bit editions of SQL Server Express 2008 and the SQL Server
2005 Backward Compatibility Components.

While some SQL Server editions may include the Backward Compatibility components with their installation
programs, their Setup program might not install them by default.
You can also obtain the Backward Compatibility components from the download page for the latest Microsoft
SQL Server 2008 Feature Pack.
Install the files in the sql folder in the following order:
1. en_sql_server_2008_express_with_tools_x86.exe. Installs SQL Server Express, 32-bit edition. Requires
installing Microsoft .NET Framework 3.5, Service Pack 1 first.
2. SQLServer2005_BC.msi. Installs the SQL Server 2005 Backward Compatibility Components for 32-bit
computers.
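If you prefer to script the second step, the Backward Compatibility package is a standard Windows Installer file, so a command of the following form should work. This is a sketch that assumes you run it from the sql folder on the installation media and that an unattended install with a verbose log is acceptable.

msiexec.exe /i SQLServer2005_BC.msi /qn /l*v bc_install.log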

5.3.3. Operating System Language Support


Workload Balancing is supported on the following operating system languages:
US English
Japanese (Native JP)

Note
In configurations where the database and Web server are installed on separate servers, the operating system languages
must match on both computers.

5.3.4. Preinstallation Considerations


You may need to configure software in your environment so that Workload Balancing can function correctly. Review
the following considerations and determine if they apply to your environment. Also, check the XenServer readme for
additional, late-breaking release-specific requirements.
Account for Workload Balancing. Before Setup, you must create a user account for XenServer to use to
connect to Workload Balancing (specifically the Web Service Host service). This user account can be either
a domain account or an account local to the computer running Workload Balancing (or the Web Service
Host service).

Important
When you create this account in Windows, Citrix suggests enabling the Password never
expires option.
During Setup, you must specify the authorization type (a single user or group) and the user or group with
permissions to make requests of the Web Service Host service. For additional information, see Section 5.5.4,
Authorization for Workload Balancing.

SSL/TLS Certificate. XenServer and Workload Balancing communicate over HTTPS. Consequently, during
Workload Balancing Setup, you must provide either an SSL/TLS certificate from a Trusted Authority or
create a self-signed certificate.
Group Policy. If the server on which you are installing Workload Balancing is a member of a Group Policy
Organizational Unit, ensure that current or scheduled, future policies do not prohibit Workload Balancing or
its services from running.

Note
In addition, review the applicable release notes for release-specific configuration information.

5.3.5. Installing Workload Balancing


Before installing Workload Balancing, you must:
1. Install a SQL Server or SQL Server Express database as described in Workload Balancing Data Store
Requirements.
2. Have a login on the SQL Server database instance that has SQL Login creation privileges. For SQL Server
Authentication, the account needs sysadmin privileges.
3. Create an account for Workload Balancing, as described in Preinstallation Considerations, and have its name on
hand.
4. Configure all Workload Balancing servers to meet the system requirements described in Workload Balancing
System Requirements.

After Setup finishes installing Workload Balancing, you must configure Workload Balancing before it begins
gathering data and making recommendations.
5.3.5.1. To install Workload Balancing on a single server
The following procedure installs Workload Balancing and all of its services on one computer:
1. Launch the Workload Balancing Setup wizard from Autorun.exe, and select the Workload Balancing
installation option.
2. After the initial Welcome page appears, click Next.
3. In the Setup Type page, select Workload Balancing Services and Data Store, and click Next. This option lets
you install Workload Balancing, including the Web Services Host, Analysis Engine, and Data Collection Manager
services. After you click Next, Workload Balancing Setup verifies that your system has the correct prerequisites.
4. Accept the End-User License Agreement.
5. In the Component Selection page, select all of the following components:

Database. Creates and configures a database for the Workload Balancing data store.

Services.

Data Collection Manager. Installs the Data Collection Manager service, which collects data from the
virtual machines and their hosts and writes this data to the data store.

Analysis Engine. Installs the Analysis Engine service, which monitors resource pools and recommends
optimizations by evaluating the performance metrics the data collector gathered.

Web Service Host. Installs the service for the Web Service Host, which facilitates communications
between XenServer and the Analysis Engine. If you enable the Web Service Host component, Setup
prompts you for a security certificate. You can either use the self-signed certificate Workload Balancing
Setup provides or specify a certificate from a Trusted Authority.

6. In the Database Server page, in the SQL Server Selection section, select one of the following:

Enter the name of a database server. Lets you type the name of the database server that will host
the data store. Use this option to specify an instance name.

Note
If you installed SQL Express and specified an instance name, append the server name
with \yourinstancename. If you installed SQL Express without specifying an instance
name, append the server name with \sqlexpress.

Choose an existing database server. Lets you select the database server from a list of servers
Workload Balancing Setup detected on your network. Use the first option (Enter the name of a
database) if you specified an instance name.

7. In the Install Using section, select one of the following methods of authentication:

Windows Authentication. This option uses your current credentials (that is, the Windows credentials
you used to log on to the computer on which you are installing Workload Balancing). To select this
option, your current Windows credentials must have been added as a login to the SQL Server
database server (instance).

SQL Server Authentication. To select this option, you must have configured SQL Server to support
Mixed Mode authentication.

Note
Citrix recommends clicking Test Connect to ensure Setup can use the credentials you
provided to contact the database server.

8. In the Database Information page, select Install a new Workload Balancing data store and type the name
you want to assign to the Workload Balancing database in SQL Server. The default database name
is WorkloadBalancing.

9. In the Web Service Host Account Information page, select HTTPS end point (selected by default). Edit the
port number, if necessary; the port is set to 8012 by default.

Note
If you are using Workload Balancing with XenServer, you must select HTTPS end points. XenServer can
only communicate with the Workload Balancing feature over SSL/TLS. If you change the port here, you
must also change it on XenServer using either the Configure Workload Balancing wizard or the XE
commands.

10. For the account (on the Workload Balancing server) that XenServer will use to connect to Workload
Balancing, select the authorization type, User or Group, and type one of the following:

User name. Enter the name of the account you created for XenServer (for example,
workloadbalancing_user).

Group name. Enter the group name for the account you created. Specifying a group name lets you
specify a group of users that have been granted permission to connect to the Web Service Host on the
Workload Balancing server. Specifying a group name lets more than one person in your organization
log on to Workload Balancing with their own credentials. (Otherwise, you will need to provide all users
with the same set of credentials to use for Workload Balancing.)

Specifying the authorization type lets Workload Balancing recognize the XenServer's connection. For more
information, see Section 5.5.4, Authorization for Workload Balancing. You do not specify the password until you
configure Workload Balancing.

11. In the SSL/TLS Certificate page, select one of the following certificate options:

Select existing certificate from a Trusted Authority. Specifies a certificate you generated from a
Trusted Authority before Setup. Click Browse to navigate to the certificate.

Create a self-signed certificate with subject name. Setup creates a self-signed certificate for the
Workload Balancing server. Delete the certificate-chain text and enter a subject name.

Export this certificate for import into the certificate store on XenServer. If you want to import
the certificate into the Trusted Root Certification Authorities store on the computer running XenServer,
select this check box. Enter the full path and file name where you want the certificate saved.

12. Click Install.

5.3.5.2. To install the data store separately


The following procedure installs the Workload Balancing data store only:

1. From any server with network access to the database, launch the Workload Balancing Setup wizard from
Autorun.exe, and select the WorkloadBalancing installation option.
2. After the initial Welcome page appears, click Next.
3. In the Setup Type page, select Workload Balancing Database Only, and click Next. This option lets you install
the Workload Balancing data store only. After you click Next, Workload Balancing Setup verifies that your system
has the correct prerequisites.
4. Accept the End-User License Agreement, and click Next.
5. In the Component Selection page, accept the default installation and click Next. This option creates and
configures a database for the Workload Balancing data store.
6. In the Database Server page, in the SQL Server Selection section, select one of the following:

Enter the name of a database server. Lets you type the name of the database server that will host
the data store. Use this option to specify an instance name.

Note
If you installed SQL Express and specified an instance name, append the server name
with \yourinstancename. If you installed SQL Express without specifying an instance
name, append the server name with \sqlexpress.

Choose an existing database server. Lets you select the database server from a list of servers
Workload Balancing Setup detected on your network.

7. In the Install Using section, select one of the following methods of authentication:

Windows Authentication. This option uses your current credentials (that is, the Windows credentials
you used to log on to the computer on which you are installing Workload Balancing). To select this
option, your current Windows credentials must have been added as a login to the SQL Server
database server (instance).

SQL Server Authentication. To select this option, you must have configured SQL Server to support
Mixed Mode authentication.

Note
Citrix recommends clicking Test Connect to ensure Setup can use the credentials you
provided to contact the database server.

8. In the Database Information page, select Install a new Workload Balancing data store and type the name
you want to assign to the Workload Balancing database in SQL Server. The default database name
is WorkloadBalancing.

9. Click Install to install the data store.

5.3.5.3. To install Workload Balancing components separately


The following procedure installs Workload Balancing services on separate computers:
1. Launch the Workload Balancing Setup wizard from Autorun.exe, and select
the WorkloadBalancing installation option.
2. After the initial Welcome page appears, click Next.
3. In the Setup Type page, select Workload Balancing Server Services and Database. This option lets you install
Workload Balancing, including the Web Services Host, Analysis Engine, and Data Collection Manager
services. Workload Balancing Setup verifies that your system has the correct prerequisites.
4. Accept the End-User License Agreement, and click Next.
5. In the Component Selection page, select the services you want to install:

Services.

Data Collection Manager. Installs the Data Collection Manager service, which collects data from the
virtual machines and their hosts and writes this data to the data store.

Analysis Engine. Installs the Analysis Engine service, which monitors resource pools and recommends
optimizations by evaluating the performance metrics the data collector gathered.

Web Service Host. Installs the service for the Web Service Host, which facilitates communications
between XenServer and the Analysis Engine. If you enable the Web Service Host component, Setup
prompts you for a security certificate. You can either use the self-signed certificate Workload Balancing
Setup provides or specify a certificate from a Trusted Authority.

6. In the Database Server page, in the SQL Server Selection section, select one of the following:

Enter the name of a database server. Lets you type the name of the database server that is hosting
the data store.

Note
If you installed SQL Express and specified an instance name, append the server name
with \yourinstancename. If you installed SQL Express without specifying an instance
name, append the server name with \sqlexpress.

Choose an existing database server. Lets you select the database server from a list of servers
Workload Balancing Setup detected on your network.

Note
Citrix recommends clicking Test Connect to ensure Setup can use the credentials you provided to
contact the database server successfully.

7. In the Web Service Information page, select HTTPS end point (selected by default) and edit the port
number, if necessary. The port is set to 8012 by default.

Note
If you are using Workload Balancing with XenServer, you must select HTTPS end points. XenServer can
only communicate with the Workload Balancing feature over SSL/TLS. If you change the port here, you
must also change it on XenServer using either the Configure Workload Balancing wizard or the XE
commands.

8. For the account (on the Workload Balancing server) that XenServer will use to connect to Workload
Balancing, select the authorization type, User or Group, and type one of the following:

User name. Enter the name of the account you created for XenServer (for
example, workloadbalancing_user).

Group name. Enter the group name for the account you created. Specifying a group name lets
more than one person in your organization log on to Workload Balancing with their own credentials.
(Otherwise, you will need to provide all users with the same set of credentials to use for Workload
Balancing.)

Specifying the authorization type lets Workload Balancing recognize the XenServer's connection. For
more information, see Section 5.5.4, Authorization for Workload Balancing. You do not specify the
password until you configure Workload Balancing.

9. In the SSL/TLS Certificate page, select one of the following certificate options:

Select existing certificate from a Trusted Authority. Specifies a certificate you generated from a
Trusted Authority before Setup. Click Browse to navigate to the certificate.

Create a self-signed certificate with subject name. Setup creates a self-signed certificate for the
Workload Balancing server. To change the name of the certificate Setup creates, type a different
name.

Export this certificate for import into the certificate store on XenServer. If you want to import
the certificate into the Trusted Root Certification Authorities store on the computer running XenServer,
select this check box. Enter the full path and file name where you want the certificate saved.

10. Click Install.

5.3.5.3.1. To verify your Workload Balancing installation

Workload Balancing Setup does not install an icon in the Windows Start menu. Use this procedure to verify that Workload
Balancing installed correctly before trying to connect to Workload Balancing with the Workload Balancing
Configuration wizard.
1. Verify that Windows Add or Remove Programs (Windows XP) lists Citrix Workload Balancing in its list of
currently installed programs.
2. Check for the following services in the Windows Services panel (for a quick command-line check, see the
sketch after this procedure):

Citrix WLB Analysis Engine

Citrix WLB Data Collection Manager

Citrix WLB Web Service Host

All of these services must be started and running before you start configuring Workload Balancing.
3. If Workload Balancing appears to be missing, check the installation log to see if it installed successfully:

If you used the Setup wizard, the log is at %Documents and Settings%\username\Local
Settings\Temp\msibootstrapper2CSM_MSI_Install.log (by default). On Windows Vista and Windows
Server 2008, this log is at %Users%\username\AppData\Local\Temp\msibootstrapper2CSM_MSI_Install.log.
User name is the name of the user logged on during installation.

If you used the Setup properties (Msiexec), the log is at C:\log.txt (by default) or wherever you
specified for Setup to create it.
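As an optional quick check from a command prompt, you can list the started services whose display names match the ones in step 2. This is only a sketch and assumes the display names shown above; net start lists running services by display name, and findstr /C: searches for the literal string.

net start | findstr /i /C:"Citrix WLB"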

5.4. Windows Installer Commands for Workload Balancing


The Workload Balancing installation supports using the Msiexec command for Setup. The Msiexec command lets you
install, modify, and perform operations on Windows Installer (.msi) packages from the command line.
Set properties by adding Property=value on the command line after other switches and parameters.
The following sample command line performs a full installation of the Workload Balancing Windows Installer package
and creates a log file to capture information about this operation.

msiexec.exe /I C:\pathtomsi\workloadbalancingx64.msi /quiet
PREREQUISITES_PASSED="1"
DBNAME="WorkloadBalancing1"
DATABASESERVER="WLBDBSERVER\INSTANCENAME"
HTTPS_PORT="8012"
WEBSERVICE_USER_CB="0"
USERORGROUPACCOUNT="domain\WLBgroup"
CERT_CHOICE="0"
CERTNAMEPICKED="cn=wlbcert1"
EXPORTCERT=1
EXPORTCERT_FQFN="C:\Certificates\WLBCert.cer"
INSTALLDIR="C:\Program Files\Citrix\WLB"
ADDLOCAL="Database,Complete,Services,DataCollection,Analysis_Engine,DWM_Web_Service" /l*v log.txt

There are two Workload Balancing Windows Installer packages: workloadbalancing.msi and workloadbalancingx64.msi.
If you are installing Workload Balancing on a 64-bit operating system, specify workloadbalancingx64.msi.
To see if Workload Balancing Setup succeeded, see Section 5.3.5.3.1, To verify your Workload Balancing installation.

Important
If the system is missing prerequisites, Workload Balancing Setup does not provide error messages when you install
Workload Balancing using Windows Installer commands; instead, the installation fails.

5.4.1. ADDLOCAL
5.4.1.1. Definition
Specifies one or more Workload Balancing features to install. The values of ADDLOCAL are Workload Balancing
components and services.
5.4.1.2. Possible values
Database. Installs the Workload Balancing data store.
Complete. Installs all Workload Balancing features and components.
Services. Installs all Workload Balancing services, including the Data Collection Manager, the Analysis Engine,
and the Web Service Host service.
DataCollection. Installs the Data Collection Manager service.
Analysis_Engine. Installs the Analysis Engine service.
DWM_Web_Service. Installs the Web Service Host service.
5.4.1.3. Default value
Blank
5.4.1.4. Remarks
Separate entries by commas.
The values must be installed locally.

You must install the data store on a shared or dedicated server before installing other services.
You can only install services standalone, without installing the database simultaneously, if you have a Workload
Balancing data store installed and specify it in the installation script using DBNAME and DATABASESERVER.
See Section 5.4.5, DBNAME and Section 5.4.4, DATABASESERVER for more information.
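For example, a services-only command line might take the following form. This is an illustrative sketch that reuses the properties documented in this section, with placeholder server, account, and certificate values; it assumes the data store already exists on the database server named here and that the prerequisites described earlier are in place.

msiexec.exe /i workloadbalancing.msi /quiet
PREREQUISITES_PASSED="1"
ADDLOCAL="Services,DataCollection,Analysis_Engine,DWM_Web_Service"
DATABASESERVER="WLBDBSERVER\SQLEXPRESS"
DBNAME="WorkloadBalancing"
HTTPS_PORT="8012"
WEBSERVICE_USER_CB="1"
USERORGROUPACCOUNT="workloadbalancing_user"
CERT_CHOICE="0"
CERTNAMEPICKED="cn=wlbcert1" /l*v wlb_services_install.log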

5.4.2. CERT_CHOICE
5.4.2.1. Definition
Specifies for Setup to either create a certificate or use an existing certificate.
5.4.2.2. Possible values
0. Specifies for Setup to create a new certificate.
1. Specifies an existing certificate.
5.4.2.3. Default value
1
5.4.2.4. Remarks
You must also specify CERTNAMEPICKED. See Section 5.4.3, CERTNAMEPICKED for more information.

5.4.3. CERTNAMEPICKED
5.4.3.1. Definition
Specifies the subject name when you use Setup to create a self-signed SSL/TLS certificate. Alternatively, this specifies an
existing certificate.
5.4.3.2. Possible values
cn. Use to specify the subject name of the certificate to use or create.
5.4.3.3. Example
cn=wlb-kirkwood, where wlb-kirkwood is the name you are specifying as the name of the certificate to create
or the certificate you want to select.
5.4.3.4. Default value
Blank.
5.4.3.5. Remarks

You must specify this parameter with the CERT_CHOICE parameter. See Section 5.4.2, CERT_CHOICE for more
information.

5.4.4. DATABASESERVER
5.4.4.1. Definition
Specifies the database server, and its instance name, where you want to install the data store. You can also use this property to
specify an existing database that you want to use or upgrade.
5.4.4.2. Possible values
User defined.

Note
If you specified an instance name when you installed SQL Server or SQL Express, append the server name
with \yourinstancename. If you installed SQL Express without specifying an instance name, append the server
name with \sqlexpress.
5.4.4.3. Default value
Local
5.4.4.4. Example

DATABASESERVER="WLB-DB-SERVER\SQLEXPRESS", where WLB-DB-SERVER is the name of your database
server and SQLEXPRESS is the name of the database instance.
5.4.4.5. Remarks
Required property for all installations.
Whether installing a database or connecting to an existing data store, you must specify this property with
DBNAME.
Even if you are specifying a database on the same computer as you are performing Setup, you still must define
the name of the database.
When you specify DATABASESERVER, in some circumstances, you must also specify Section 5.4.16,
WINDOWS_AUTH and its accompanying properties.

5.4.5. DBNAME
5.4.5.1. Definition
The name of the Workload Balancing database that Setup will create or upgrade during installation.

5.4.5.2. Possible values


User defined.
5.4.5.3. Default value
WorkloadBalancing
5.4.5.4. Remarks
Required property for all installations. You must set a value for this property.
Whether connecting to or installing a data store, you must specify this property with DATABASESERVER.
Even if you are specifying a database on the same computer as you are performing Setup, you still must define
the name of the database.
Localhost is not a valid value.

5.4.6. DBUSERNAME
5.4.6.1. Definition
Specifies the user name for the Windows or SQL Server account you are using for database authentication during Setup.
5.4.6.2. Possible values
User defined.
5.4.6.3. Default value
Blank
5.4.6.4. Remarks
This property is used with WINDOWS_AUTH (see Section 5.4.16, WINDOWS_AUTH)
and DBPASSWORD (see Section 5.4.7, DBPASSWORD).
Because you specify the server name and instance using Section 5.4.4, DATABASESERVER, do not qualify
the user name.

5.4.7. DBPASSWORD
5.4.7.1. Definition
Specifies the password for the Windows or SQL Server account you are using for database authentication during Setup.
5.4.7.2. Possible values

User defined.
5.4.7.3. Default value
Blank.
5.4.7.4. Remarks
Use this property with the parameters documented in Section 5.4.16, WINDOWS_AUTH and Section 5.4.6,
DBUSERNAME.

5.4.8. EXPORTCERT
5.4.8.1. Definition
Set this value to export an SSL/TLS certificate from the server on which you are installing Workload Balancing. Exporting
the certificate lets you import it into the certificate stores of computers running XenServer.
5.4.8.2. Possible values
0. Does not export the certificate.
1. Exports the certificate and saves it to the location of your choice with the file name you specify using
EXPORTCERT_FQFN.
5.4.8.3. Default value
0
5.4.8.4. Remarks
Use with Section 5.4.9, EXPORTCERT_FQFN, which specifies the file name and path.
Setup does not require this property to run successfully. (That is, you do not have to export the certificate.)
This property lets you export self-signed certificates that you create during Setup as well as certificates that you
created using a Trusted Authority.

5.4.9. EXPORTCERT_FQFN
5.4.9.1. Definition
Set to specify the path (location) and the file name you want Setup to use when exporting the certificate.
5.4.9.2. Possible values
The fully qualified path and file name to which to export the certificate. For example, C:\Certificates\WLBCert.cer.

5.4.9.3. Default value


Blank.
5.4.9.4. Remarks
Use this property with the parameter documented in Section 5.4.8, EXPORTCERT.

5.4.10. HTTPS_PORT
5.4.10.1. Definition
Use this property to change the default port over which Workload Balancing (the Web Service Host service)
communicates with XenServer.
Specify this property when you are running Setup on the computer that will host the Web Service Host service. This may
be either the Workload Balancing computer, in a one-server deployment, or the computer hosting the services.
5.4.10.2. Possible values
User defined.
5.4.10.3. Default value
8012
5.4.10.4. Remarks
If you set a value other than the default for this property, you must also change the value of this port in
XenServer, which you can do with the Configure Workload Balancing wizard. The port number value
specified during Setup and in the Configure Workload Balancing wizard must match.

5.4.11. INSTALLDIR
5.4.11.1. Definition

Specifies the installation directory, that is, the location where the Workload Balancing software is installed.
5.4.11.2. Possible values
User configurable
5.4.11.3. Default value
C:\Program Files\Citrix

5.4.12. PREREQUISITES_PASSED

5.4.12.1. Definition
You must set this property for Setup to continue. When enabled (PREREQUISITES_PASSED = 1), Setup skips checking
preinstallation requirements, such as memory or operating system configurations, and lets you perform a command-line
installation of the server.
5.4.12.2. Possible values
1. Indicates for Setup to not check for preinstallation requirements on the computer on which you are running
Setup. You must set this property to 1 or Setup fails.
5.4.12.3. Default value
0
5.4.12.4. Remarks
This is a required value.

5.4.13. RECOVERYMODEL
5.4.13.1. Definition
Specifies the SQL Server database recovery model.
5.4.13.2. Possible values
SIMPLE. Specifies the SQL Server Simple Recovery model. Lets you recover the database from the end of any
backup. Requires the least administration and consumes the lowest amount of disk space.
FULL. Specifies the Full Recovery model. Lets you recover the database from any point in time. However, this
model consumes the largest amount of disk space for its logs.
BULK_LOGGED. Specifies the Bulk-Logged Recovery model. Lets you recover the database from the end of
any backup. This model consumes less logging space than the Full Recovery model. However, this model
provides more protection for data than the Simple Recovery model.
5.4.13.3. Default value
SIMPLE
5.4.13.4. Remarks
For more information about SQL Server recovery models, see Microsoft's MSDN Web site and search for "Selecting a
Recovery Model."
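RECOVERYMODEL is normally set during Setup, but if you later want to check or change the recovery model of an existing data store, standard T-SQL run through sqlcmd works. The following sketch assumes the default database name, WorkloadBalancing, and a local SQL Server Express instance; adjust both to your environment.

sqlcmd -S .\SQLEXPRESS -E -Q "SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'WorkloadBalancing'"
sqlcmd -S .\SQLEXPRESS -E -Q "ALTER DATABASE WorkloadBalancing SET RECOVERY FULL"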

5.4.14. USERORGROUPACCOUNT

5.4.14.1. Definition
Specifies the account or group name that corresponds with the account XenServer will use when it connects to Workload
Balancing. Specifying the name lets Workload Balancing recognize the connection.
5.4.14.2. Possible values
User name. Specify the name of the account you created for XenServer (for
example, workloadbalancing_user).
Group name. Specify the group name for the account you created. Specifying a group name lets more than
one person in your organization log on to Workload Balancing with their own credentials. (Otherwise, you
will have to provide all users with the same set of credentials to use for Workload Balancing.)
5.4.14.3. Default value
Blank.
5.4.14.4. Remarks
This is a required parameter. You must use this parameter with Section 5.4.15, WEBSERVICE_USER_CB.
To specify this parameter, you must create an account on the Workload Balancing server before running Setup.
For more information, see Section 5.5.4, Authorization for Workload Balancing.
This property does not require specifying another property for the password. You do not specify the password
until you configure Workload Balancing.

5.4.15. WEBSERVICE_USER_CB
5.4.15.1. Definition
Specifies the authorization type, user account or group name, for the account you created for XenServer before Setup.
For more information, see Section 5.5.4, Authorization for Workload Balancing.
5.4.15.2. Possible values
0. Specifies that the value you provide with USERORGROUPACCOUNT corresponds with a group.
1. Specifies that the value you provide with USERORGROUPACCOUNT corresponds with a user
account.
5.4.15.3. Default value
0
5.4.15.4. Remarks

This is a required property. You must use this parameter with Section 5.4.14, USERORGROUPACCOUNT.

5.4.16. WINDOWS_AUTH
5.4.16.1. Definition
Lets you select the authentication mode, either Windows or SQL Server, when connecting to the database server during
Setup. For more information about database authentication during Setup, see SQL Server Database Authentication
Requirements.
5.4.16.2. Possible values
0. SQL Server authentication
1. Windows authentication
5.4.16.3. Default value
1
5.4.16.4. Remarks
If you are logged on to the server on which you are installing Workload Balancing with Windows credentials that
have an account on the database server, you do not need to set this property.
If you specify WINDOWS_AUTH, you must also specify DBPASSWORD if you want to specify an account other
than the one with which you are logged on to the server on which you are running Setup.
The account you specify must be a login on the SQL Server database with sysadmin privileges.

5.5. Initializing and Configuring Workload Balancing


Following Workload Balancing Setup, you must configure and enable (that is, initialize) Workload Balancing on each
resource pool you want to monitor before Workload Balancing can gather data for that pool.
Before initializing Workload Balancing, configure your antivirus software to exclude Workload Balancing folders, as
described in Section 5.5.5, Configuring Antivirus Software.
After the initial configuration, the Initialize button on the WLB tab changes to a Disable button. This is because after
initialization you cannot modify the Workload Balancing server a resource pool uses without disabling Workload
Balancing on that pool and then reconfiguring it. For information, see Section 5.10.2, Reconfiguring a Resource Pool to
Use Another WLB Server.

Important

Following initial configuration, Citrix strongly recommends you evaluate your performance thresholds as described
in Section 5.9.3.1, Evaluating the Effectiveness of Your Optimization Thresholds. It is critical to set Workload Balancing
to the correct thresholds for your environment or its recommendations might not be appropriate.
You can use the Configure Workload Balancing wizard in XenCenter or the XE commands to initialize Workload
Balancing or modify the configuration settings.

5.5.1. Initialization Overview


Initial configuration requires that you:
1. Specify the Workload Balancing server you want the resource pool to use and its port number.
2. Specify the credentials for communications, including the credentials:

XenServer will use to connect to the Workload Balancing server

Workload Balancing will use to connect to XenServer

For more information, see Section 5.5.4, Authorization for Workload Balancing.
3. Change the optimization mode, if desired, from Maximize Performance, the default setting, to Maximize
Density. For information about the placement strategies, see Section 5.5.6, Changing the Placement Strategy.
4. Modify performance thresholds, if desired. You can modify the default utilization values and the critical
thresholds for resources. For information about the performance thresholds, see Section 5.5.7, Changing the
Performance Thresholds and Metric Weighting.
5. Modify metric weighting, if desired. You can modify the importance Workload Balancing assigns to metrics
when it evaluates resource usage. For information about metric weighting, see Section 5.5.7.2, Metric
Weighting Factors.

5.5.2. To initialize Workload Balancing


Use this procedure to enable and perform the initial configuration of Workload Balancing for a resource pool.
Before the Workload Balancing feature can begin collecting performance data, the XenServers you want to balance
must be part of a resource pool. To complete this wizard, you need the:
IP address (or NetBIOS name) and (optionally) port of the Workload Balancing server
Credentials for the resource pool you want Workload Balancing to monitor
Credentials for the account you created on the Workload Balancing server
To initialize Workload Balancing from the xe CLI instead of XenCenter, see the command-line sketch after this procedure.
1. In the Resources pane of XenCenter, select XenCenter > <your-resource-pool>.
2. In the Properties pane, click the WLB tab.
3. In the WLB tab, click Initialize WLB.
4. In the Configure Workload Balancing wizard, click Next.
5. In the Server Credentials page, enter the following:

a. In the WLB server name box, type the IP address or NetBIOS name of the Workload Balancing
server. You can also enter a fully qualified domain name (FQDN).

b. (Optional.) Edit the port number if you want XenServer to connect to Workload Balancing using a
different port. Entering a new port number here sets a different communications port on the Workload
Balancing server. By default, XenServer connects to Workload Balancing (specifically the Web Service Host
service) on port 8012.

Note
Do not edit this port number unless you have changed it during Workload Balancing Setup. The
port number value specified during Setup and in the Configure Workload Balancing wizard
must match.

c. Enter the user name (for example, workloadbalancing_user) and password the computers running
XenServer will use to connect to the Workload Balancing server. This must be the account or group that was
configured during the installation of the Workload Balancing server. For information, see Section 5.5.4,
Authorization for Workload Balancing.

d. Enter the user name and password for the pool you are configuring (typically the password for the pool
master). Workload Balancing will use these credentials to connect to the computers running XenServer in
that pool. To use the credentials with which you are currently logged into XenServer, select the Use the
current XenCenter credentials check box.

6. In the Basic Configuration page, do the following:

Select one of these optimization modes:

o Maximize Performance. (Default.) Attempts to spread workload evenly across all physical
hosts in a resource pool. The goal is to minimize CPU, memory, and network pressure for
all hosts.
o Maximize Density. Attempts to fit as many virtual machines as possible onto a physical host.
The goal is to minimize the number of physical hosts that must be online.
For information, see Section 5.5.6, Changing the Placement Strategy.

If you want to allow placement recommendations that allow more virtual CPUs than a host's physical
CPUs, select the Overcommit CPU check box. For example, by default, if your resource pool has
eight physical CPUs and you have eight virtual machines, XenServer only lets you have one virtual
CPU for each physical CPU. Unless you select Overcommit CPU, XenServer will not let you add a
ninth virtual machine. In general, Citrix does not recommend enabling this option since it can degrade
performance.

If you want to change the number of weeks this historical data should be stored for this resource pool,
type a new value in the Weeks box. This option is not available if the data store is on SQL Server
Express.

7. Do one of the following:

If you want to modify advanced settings for thresholds and change the priority given to specific resources,
click Next and continue with this procedure.

If you do not want to configure additional settings, click Finish.

8. In the Critical Thresholds page, accept or enter a new value in the Critical Thresholds boxes. Workload Balancing
uses these thresholds when making virtual-machine placement and pool-optimization recommendations. Workload
Balancing strives to keep resource utilization on a host below the critical values set. For information about adjusting
these thresholds, see Critical Thresholds.

9. In the Metric Weighting page, if desired, adjust the sliders beside the individual resources. Moving the slider
towards Less Important indicates that ensuring virtual machines always have the highest amount of this resource
available is not as vital on this resource pool. For information about adjusting metric weighting, see Metric
Weighting Factors.

10. Click Finish.
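If you prefer the CLI, XenServer also exposes pool-level xe commands for Workload Balancing. The following one-line sketch shows the general shape of an initialization call with placeholder values; treat the exact parameter names as an assumption and confirm them against the xe command reference for your release (for example, with xe help pool-initialize-wlb).

xe pool-initialize-wlb wlb_url="<wlb-server>:8012" wlb_username="workloadbalancing_user" wlb_password="<password>" xenserver_username="root" xenserver_password="<password>"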

5.5.3. To edit the Workload Balancing configuration for a pool


After initialization, you can use this procedure to edit the Workload Balancing performance thresholds and placement
strategies for a specific resource pool.
1. In the Resources pane of XenCenter, select XenCenter > <your-resource-pool>.
2. In the Properties pane, click the WLB tab.
3. In the WLB tab, click Configure WLB.
4. In the Configure Workload Balancing wizard, click Next.
5. In the Basic Configuration page, do the following:

Select one of these optimization modes:

o Maximize Performance. (Default.) Attempts to spread workload evenly across all physical
hosts in a resource pool. The goal is to minimize CPU, memory, and network pressure for
all hosts.
o Maximize Density. Attempts to fit as many virtual machines as possible onto a physical host.
The goal is to minimize the number of physical hosts that must be online.
For information, see Section 5.5.6, Changing the Placement Strategy.

If you want to allow placement recommendations that allow more virtual CPUs than a host's physical
CPUs, select the Overcommit CPU check box. For example, by default, if your resource pool has
eight physical CPUs and you have eight virtual machines, XenServer only lets you have one virtual
CPU for each physical CPU. Unless you select Overcommit CPU, XenServer will not let you add a
ninth virtual machine. In general, Citrix does not recommend enabling this option since it can degrade
performance.

If you want to change the number of weeks this historical data should be stored for this resource pool,
type a new value in the Weeks box. This option is not available if the data store is on SQL Server
Express.

6. Do one of the following:

If you want to modify advanced settings for thresholds and change the priority given to specific resources,
click Next and continue with this procedure.

If you do not want to configure additional settings, click Finish.

7. In the Critical Thresholds page, accept or enter a new value in the Critical Thresholds boxes. Workload
Balancing uses these thresholds when making virtual-machine placement and pool-optimization recommendations.
Workload Balancing strives to keep resource utilization on a host below the critical values set. For information
about adjusting these thresholds, see Section 5.5.7.1, Critical Thresholds.

8. In the Metric Weighting page, if desired, adjust the sliders beside the individual resources. Moving the slider
towards Less Important indicates that ensuring virtual machines always have the highest amount of this resource
available is not as vital on this resource pool. For information about adjusting metric weighting, see Section 5.5.7.2,
Metric Weighting Factors.

9. Click Finish.

5.5.4. Authorization for Workload Balancing

When you are configuring a XenServer resource pool to use Workload Balancing, you must specify credentials for two
accounts:
User Account for Workload Balancing to Connect to XenServer. Workload Balancing uses a XenServer
user account to connect to XenServer. You provide Workload Balancing with this account's credentials when
you run the Configure Workload Balancing wizard. Typically, you specify the credentials for the pool
(that is, the pool master's credentials).
User Account for XenServer to Connect to Workload Balancing. XenServer communicates with the Web
Service Host using the user account you created before Setup. During Workload Balancing Setup, you
specified the authorization type (a single user or group) and the user or group with permissions to make
requests from the Web Service Host service. During configuration, you must provide XenServer with this
account's credentials when you run the Configure Workload Balancing wizard.

5.5.5. Configuring Antivirus Software


By default, most antivirus programs are configured to scan all files on the hard disk. If an antivirus program scans the
frequently active Workload Balancing database, it impedes or slows down the normal operation of Workload Balancing.
Consequently, you must configure antivirus software running on your Workload Balancing servers to exclude specific
processes and files. Citrix recommends configuring your antivirus software to exclude these folders before you initialize
Workload Balancing and begin collecting data.
To configure antivirus software on the servers running Workload Balancing components:
Exclude the following folder, which contains the Workload Balancing log:
On Windows XP and Windows Server 2003: %Documents and Settings%\All Users\Application Data\Citrix\Workload Balancing\Data\Logfile.log
On Windows Vista and Windows Server 2008: %Program Data%\Citrix\Workload Balancing\Data\Logfile.log
Exclude the SQL Server database folder. For example:
On SQL Server: %Program Files%\Microsoft SQL Server\MSSQL\Data\
On SQL Server Express: %Program Files%\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Data\
These paths may vary according to your operating system and SQL Server version.

Note
These paths and file names are for 32-bit default installations. Use the values that apply to your installation. For example,
paths for 64-bit edition files might be in the %Program Files (x86)% folder.

5.5.6. Changing the Placement Strategy


The Workload Balancing feature bases its optimization recommendations on whether you choose Maximize
Performance or Maximize Density as your optimization mode.
5.5.6.1. Maximize Performance
(Default.) Workload Balancing attempts to spread workload evenly across all physical hosts in a resource pool. The goal is
to minimize CPU, memory, and network pressure for all hosts. When Maximize Performance is your placement strategy,
Workload Balancing recommends optimization when a virtual machine reaches the High threshold.
5.5.6.2. Maximize Density
Workload Balancing attempts to fit as many virtual machines as possible onto a physical host. The goal is to minimize the
number of physical hosts that must be online.
When you select Maximize Density as your placement strategy, you can specify rules similar to the ones in Maximize
Performance. However, Workload Balancing uses these rules to determine how it can pack virtual machines onto a host.
When Maximize Density is your placement strategy, Workload Balancing recommends optimization when a virtual
machine reaches the Critical threshold.

5.5.7. Changing the Performance Thresholds and Metric Weighting

Workload Balancing evaluates CPU, Memory, Network Read, Network Write, Disk Read, and Disk Write utilization for physical hosts in a resource pool.
Workload Balancing determines whether to recommend relocating a workload and whether a physical host is suitable for a virtual-machine workload by evaluating:
Whether a resource's critical threshold is met on the physical host
(If the critical threshold is met) the importance assigned to a resource

Note
To prevent data from appearing artificially high, Workload Balancing evaluates the daily averages for a resource and smooths utilization spikes.
5.5.7.1. Critical Thresholds

When evaluating utilization, Workload Balancing compares its daily average to four thresholds: low, medium, high, and critical. After you specify (or accept the default) critical threshold, Workload Balancing sets the other thresholds relative to the critical threshold on a pool.
5.5.7.2. Metric Weighting Factors
Workload Balancing lets you indicate whether a resource's utilization is significant enough to warrant or prevent relocating a workload. For example, if you set memory as a Less Important factor in placement recommendations, Workload Balancing may still recommend placing virtual machines you are relocating on a server with high memory utilization.
The effect of the weighting varies according to the placement strategy you selected. For example, if you selected Maximize Performance and you set Network Writes towards Less Important, then even if the Network Writes on a server exceed the critical threshold you set, Workload Balancing still makes a recommendation to place a virtual machine's workload on that server, but does so with the goal of ensuring performance for the other resources.
If you selected Maximize Density as your placement strategy and you specify Network Writes as Less Important, Workload Balancing will still recommend placing workloads on that host if the Network Writes exceed the critical threshold you set. However, the workloads are placed in the densest possible way.
5.5.7.3. Editing Resource Settings
For each resource pool, you can edit a resource's critical performance threshold and modify the importance, or "weight", that Workload Balancing gives to a resource.
Citrix recommends using most of the defaults in the Configure Workload Balancing wizard initially. However, you might need to change the network and disk thresholds to align them with the hardware in your environment.
After Workload Balancing has been enabled for a while, Citrix recommends evaluating your performance thresholds and determining if you need to edit them. For example, consider if you are:
Getting optimization recommendations when they are not yet required. If this is the case, try adjusting the thresholds until Workload Balancing begins providing suitable optimization recommendations.
Not getting recommendations when you think your network has insufficient bandwidth. If this is the case, try lowering the network critical thresholds until Workload Balancing begins providing optimization recommendations.
Before you edit your thresholds, you might find it useful to generate a host health history report for each physical host in the pool. See Section 5.9.6.1, Host Health History for more information.

5.6. Accepting Optimization Recommendations

Workload Balancing provides recommendations about ways you can move virtual machines to optimize your environment. Optimization recommendations appear in the WLB tab in XenCenter. Optimization recommendations are based on the:

Placement strategy you select (that is, the placement optimization mode), as described in Section 5.5.6, Changing the Placement Strategy
Performance metrics for resources such as a physical host's CPU, memory, network, and disk utilization
The optimization recommendations display the name of the virtual machine that Workload Balancing recommends relocating, the host it currently resides on, and the host Workload Balancing recommends as the machine's new location. The optimization recommendations also display the reason Workload Balancing recommends moving the virtual machine (for example, "CPU" to improve CPU utilization).
After you accept an optimization recommendation, XenServer relocates all virtual machines listed as recommended for optimization.

Tip
You can find out the optimization mode for a resource pool by selecting the pool in XenCenter and checking the Configuration section of the WLB tab.

5.6.1. To accept an optimization recommendation

1. In the Resources pane of XenCenter, select the resource pool for which you want to display recommendations.

2. In the Properties pane, click the WLB tab. If there are any recommended optimizations for any virtual machines on the selected resource pool, they display on the WLB tab.

3. To accept the recommendations, click Apply Recommendations. XenServer begins moving all virtual machines listed in the Optimization Recommendations section to their recommended servers. After you click Apply Recommendations, XenCenter automatically displays the Logs tab so you can see the progress of the virtual machine migration.

5.7. Choosing an Optimal Server for VM Initial Placement, Migrate, and Resume

When Workload Balancing is enabled and you restart a virtual machine that is offline, XenCenter provides recommendations to help you determine the optimal physical host in the resource pool on which to start the virtual machine. Workload Balancing makes these placement recommendations by using performance metrics it previously gathered for that virtual machine and the physical hosts in the resource pool. Likewise, when Workload Balancing is enabled, if you migrate a virtual machine to another host, XenCenter recommends servers to which you can move that virtual machine. This Workload Balancing enhancement is also available for the Initial (Start On) Placement and Resume features.
When you use these features with Workload Balancing enabled, host recommendations appear as star ratings beside the name of the physical host. Five empty stars indicates the lowest-rated (least optimal) server. When it is not possible to start or move a virtual machine to a host, an (X) appears beside the host name with the reason.

5.7.1. To start a virtual machine on the optimal server

1. In the Resources pane of XenCenter, select the virtual machine you want to start.

2. From the VM menu, select Start on Server and then select one of the following:

Optimal Server. The optimal server is the physical host that is best suited to the resource demands of the virtual machine you are starting. Workload Balancing determines the optimal server based on its historical records of performance metrics and your placement strategy. The optimal server is the server with the most stars.

One of the servers with star ratings listed under the Optimal Server command. Five stars indicates the most-recommended (optimal) server and five empty stars indicates the least-recommended server.

5.7.1.1. To resume a virtual machine on the optimal server

1. In the Resources pane of XenCenter, select the suspended virtual machine you want to resume.

2. From the VM menu, select Resume on Server and then select one of the following:

Optimal Server. The optimal server is the physical host that is best suited to the resource demands of the virtual machine you are resuming. Workload Balancing determines the optimal server based on its historical records of performance metrics and your placement strategy. The optimal server is the server with the most stars.

One of the servers with star ratings listed under the Optimal Server command. Five stars indicates the most-recommended (optimal) server and five empty stars indicates the least-recommended server.

5.8. Entering Maintenance Mode with Workload Balancing Enabled

When Workload Balancing is enabled, if you take a physical host offline for maintenance (that is, suspend a server by entering Maintenance Mode), XenServer automatically migrates the virtual machines running on that host to their optimal servers when available. XenServer migrates them based on Workload Balancing recommendations (performance data, your placement strategy, and performance thresholds).
If an optimal server is not available, the words Click here to suspend the VM appear in the Enter Maintenance Mode dialog box. In this case, Workload Balancing does not recommend a placement because no host has sufficient resources to run this virtual machine. You can either suspend this virtual machine or exit Maintenance Mode and suspend a virtual machine on another host in the same pool. Then, if you reenter the Enter Maintenance Mode dialog box, Workload Balancing might be able to list a host that is a suitable candidate for migration.

Note
When you take a server offline for maintenance and Workload Balancing is enabled, the words "Workload Balancing" appear in the upper-right corner of the Enter Maintenance Mode dialog box.

5.8.1. To enter maintenance mode with Workload Balancing enabled

1. In the Resources pane of XenCenter, select the physical host that you want to take offline. From the Server menu, select Enter Maintenance Mode.

2. In the Enter Maintenance Mode dialog box, click Enter maintenance mode. The virtual machines running on the server are automatically migrated to the optimal host based on Workload Balancing's performance data, your placement strategy, and performance thresholds.

To take the server out of maintenance mode, right-click the server and select Exit Maintenance Mode. When you remove a server from maintenance mode, XenServer automatically restores that server's original virtual machines to that server.

5.9. Working with Workload Balancing Reports

This topic provides general information about Workload Balancing historical reports and an overview of where to find additional information about these reports.
To generate a Workload Balancing report, you must have installed the Workload Balancing component, registered at least one resource pool with Workload Balancing, and configured Workload Balancing on at least one resource pool.

5.9.1. Introduction
Workload Balancing provides reporting on three types of objects: physical hosts, resource pools, and virtual machines. At a high level, Workload Balancing provides two types of reports:
Historical reports that display information by date
"Roll up" style reports
Workload Balancing provides some reports for auditing purposes, so you can determine, for example, the number of times a virtual machine moved.

5.9.2. Types of Workload Balancing Reports

Workload Balancing includes the following reports:
Section 5.9.6.1, Host Health History. Similar to Pool Health History but filtered by a specific host.
Section 5.9.6.2, Optimization Performance History. Shows resource usage before and after executing optimization recommendations.
Section 5.9.6.3, Pool Health. Shows aggregated resource usage for a pool. Helps you evaluate the effectiveness of your optimization thresholds.
Section 5.9.6.4, Pool Health History. Displays resource usage for a pool over time. Helps you evaluate the effectiveness of your optimization thresholds.
Section 5.9.6.5, Virtual Machine Motion History. Provides information about how many times virtual machines moved on a resource pool, including the name of the virtual machine that moved, the number of times it moved, and the physical hosts affected.
Section 5.9.6.6, Virtual Machine Performance History. Displays key performance metrics for all virtual machines that operated on a host during the specified timeframe.

5.9.3. Using Workload Balancing Reports for Tasks

The Workload Balancing reports can help you perform capacity planning, determine virtual server health, and evaluate the effectiveness of your configured threshold levels.
5.9.3.1. Evaluating the Effectiveness of Your Optimization Thresholds
You can use the Pool Health report to evaluate the effectiveness of your optimization thresholds. Workload Balancing provides default threshold settings. However, you might need to adjust these defaults for them to provide value in your environment. If the optimization thresholds are not adjusted to the correct level for your environment, Workload Balancing recommendations might not be appropriate for your environment.

5.9.4. Creating Workload Balancing Reports

This topic explains how to generate, navigate, print, and export Workload Balancing reports.
5.9.4.1. To generate a Workload Balancing report
1. In XenCenter, from the Pool menu, select View Workload Reports.

2. From the Workload Reports screen, select a report from the Select a Report list box.

3. Select the Start Date and the End Date for the reporting period. Depending on the report you select, you might need to specify a host in the Host list box.

4. Click Run Report. The report displays in the report window.

5.9.4.2. To navigate in a Workload Balancing Report

After generating a report, you can use the toolbar buttons in the report to navigate and perform certain tasks. To display the name of a toolbar button, hold your mouse pointer over the toolbar icon.
Table 5.1. Report Toolbar Buttons

Document Map. Lets you display a document map that helps you navigate through long reports.
Page Forward/Back. Lets you move one page ahead or back in the report.
Back to Parent Report. Lets you return to the parent report when working with drill-through reports.
Stop Rendering. Cancels the report generation.
Refresh. Lets you refresh the report display.
Print. Lets you print a report and specify general printing options, such as the printer, the number of pages, and the number of copies.
Print Layout. Lets you display a preview of the report before you print it.
Page Setup. Lets you specify printing options such as the paper size, page orientation, and margins.
Export. Lets you export the report as an Acrobat (.PDF) file or as an Excel file with a .XLS extension.
Find. Lets you search for a word in a report, such as the name of a virtual machine.

5.9.4.3. To print a Workload Balancing report

Citrix recommends printing Workload Balancing reports in Landscape orientation.
1. After generating the report, click Page Setup. (Page Setup also lets you control the margins and paper size.)

2. In the Page Setup dialog, select Landscape and click OK.

3. (Optional.) To preview the print job, click Print Layout.

4. Click Print.

5.9.4.3.1. To export a Workload Balancing report

You can export a report in Microsoft Excel and Adobe Acrobat (PDF) formats.
After generating the report, click Export and select one of the following:

Excel
Acrobat (PDF) file

5.9.5. Generating Workload Balancing Reports

The Workload Reports window lets you generate reports for physical hosts, resource pools, and virtual machines.
5.9.5.1. Report Generation Features
To generate a report, select a report type, the date range, and the host (if applicable), and click Run Report. For more detail, see Section 5.9.4, Creating Workload Balancing Reports.
5.9.5.2. Types of Workload Balancing Reports
Workload Balancing includes the following reports:
Section 5.9.6.1, Host Health History. Similar to Pool Health History but filtered by a specific host.
Section 5.9.6.2, Optimization Performance History. Shows resource usage before and after executing optimization recommendations.
Section 5.9.6.3, Pool Health. Shows aggregated resource usage for a pool. Helps you evaluate the effectiveness of your optimization thresholds.
Section 5.9.6.4, Pool Health History. Displays resource usage for a pool over time. Helps you evaluate the effectiveness of your optimization thresholds.
Section 5.9.6.5, Virtual Machine Motion History. Provides information about how many times virtual machines moved on a resource pool, including the name of the virtual machine that moved, the number of times it moved, and the physical hosts affected.
Section 5.9.6.6, Virtual Machine Performance History. Displays key performance metrics for all virtual machines that operated on a host during the specified timeframe.
5.9.5.3. Toolbar Buttons
The following toolbar buttons in the Workload Reports window become available after you generate a report. To display the name of a toolbar button, hold your mouse pointer over the toolbar icon.
Table 5.2. Report Toolbar Buttons

Document Map. Lets you display a document map that helps you navigate through long reports.
Page Forward/Back. Lets you move one page ahead or back in the report.
Back to Parent Report. Lets you return to the parent report when working with drill-through reports.
Stop Rendering. Cancels the report generation.
Refresh. Lets you refresh the report display.
Print. Lets you print a report and specify general printing options, such as the printer, the number of pages, and the number of copies.
Print Layout. Lets you display a preview of the report before you print it.
Page Setup. Lets you specify printing options such as the paper size, page orientation, and margins.
Export. Lets you export the report as an Acrobat (.PDF) file or as an Excel file with a .XLS extension.
Find. Lets you search for a word in a report, such as the name of a virtual machine.

5.9.6. Workload Balancing Report Glossary

This topic provides information about the following Workload Balancing reports.
5.9.6.1. Host Health History

This report displays the performance of resources (CPU, memory, network reads, and network writes) on a specific host in relation to threshold values.
The colored lines (red, green, yellow) represent your threshold values. You can use this report with the Pool Health report for a host to determine how a particular host's performance might be affecting overall pool health. When you are editing the performance thresholds, you can use this report for insight into host performance.
You can display resource utilization as a daily or hourly average. The hourly average lets you see the busiest hours of the day, averaged, for the time period.
To view report data grouped by hour, expand + Click to view report data grouped by hour for the time period under the Host Health History title bar.
Workload Balancing displays the average for each hour for the time period you set. The data point is based on a utilization average for that hour for all days in the time period. For example, in a report for May 1, 2009 to May 15, 2009, the Average CPU Usage data point represents the resource utilization of all fifteen days at 12:00 hours combined together as an average. That is, if CPU utilization was 82% at 12PM on May 1st, 88% at 12PM on May 2nd, and 75% on all other days, the average displayed for 12PM is 76.3%.

Note
Workload Balancing smooths spikes and peaks so data does not appear artificially high.
5.9.6.2. Optimization Performance History
The optimization performance report displays optimization events (that is, when you optimized a resource pool) against that pool's average resource usage. Specifically, it displays resource usage for CPU, memory, network reads, and network writes.
The dotted line represents the average usage across the pool over the period of days you select. A blue bar indicates the day on which you optimized the pool.
This report can help you determine if Workload Balancing is working successfully in your environment. You can use this report to see what led up to optimization events (that is, the resource usage before Workload Balancing recommended optimizing).
This report displays average resource usage for the day; it does not display the peak utilization, such as when the system is stressed. You can also use this report to see how a resource pool is performing if Workload Balancing is not making optimization recommendations.
In general, resource usage should decline or be steady after an optimization event. If you do not see improved resource usage after optimization, consider readjusting threshold values. Also, consider whether the resource pool has too many virtual machines and whether new virtual machines were added or removed during the timeframe you specified.
5.9.6.3. Pool Health

The pool health report displays the percentage of time a resource pool and its hosts spent in four different threshold ranges: Critical, High, Medium, and Low. You can use the Pool Health report to evaluate the effectiveness of your performance thresholds.
A few points about interpreting this report:
Resource utilization in the Average Medium Threshold (blue) is the optimum resource utilization regardless of the placement strategy you selected. Likewise, the blue section on the pie chart indicates the amount of time that host used resources optimally.
Resource utilization in the Average Low Threshold Percent (green) is not necessarily positive. Whether Low resource utilization is positive depends on your placement strategy. For example, if your placement strategy is Maximize Density and most of the time your resource usage was green, Workload Balancing might not be fitting the maximum number of virtual machines possible on that host or pool. If this is the case, you should adjust your performance threshold values until the majority of your resource utilization falls into the Average Medium (blue) threshold range.
Resource utilization in the Average Critical Threshold Percent (red) indicates the amount of time average resource utilization met or exceeded the Critical threshold value.
If you double-click a pie chart for a host's resource usage, XenCenter displays the Host Health History report for that resource (for example, CPU) on that host. Clicking the Back to Parent Report toolbar button returns you to the Pool Health History report.
If you find the majority of your report results are not in the Average Medium Threshold range, you probably need to adjust the Critical threshold for this pool. While Workload Balancing provides default threshold settings, these defaults are not effective in all environments. If you do not have the thresholds adjusted to the correct level for your environment, Workload Balancing's optimization and placement recommendations might not be appropriate. For more information, see Section 5.5.7, Changing the Performance Thresholds and Metric Weighting.

Note
The High, Medium, and Low threshold ranges are based on the Critical threshold value you set when you initialized Workload Balancing.
5.9.6.4. Pool Health History
This report provides a line graph of resource utilization on all physical hosts in a pool over time. It lets you see the trend of resource utilization - whether it tends to be increasing in relation to your thresholds (Critical, High, Medium, and Low). You can evaluate the effectiveness of your performance thresholds by monitoring trends of the data points in this report.
Workload Balancing extrapolates the threshold ranges from the values you set for the Critical thresholds when you initialized Workload Balancing. Although similar to the Pool Health report, the Pool Health History report displays the average utilization for a resource on a specific date rather than the amount of time overall the resource spent in a threshold.

With the exception of the Average Free Memory graph, the data points should never average above the Critical threshold line (red). For the Average Free Memory graph, the data points should never average below the Critical threshold line (which is at the bottom of the graph). Because this graph displays free memory, the Critical threshold is a low value, unlike the other resources.
A few points about interpreting this report:
When the Average Usage line in the chart approaches the Average Medium Threshold (blue) line, it indicates the pool's resource utilization is optimum regardless of the placement strategy configured.
Resource utilization approaching the Average Low Threshold (green) is not necessarily positive. Whether Low resource utilization is positive depends on your placement strategy. For example, if your placement strategy is Maximize Density and most days the Average Usage line is at or below the green line, Workload Balancing might not be placing virtual machines as densely as possible on that pool. If this is the case, you should adjust the pool's Critical threshold values until the majority of its resource utilization falls into the Average Medium (blue) threshold range.
When the Average Usage line intersects with the Average Critical Threshold Percent (red), this indicates the days when the average resource utilization met or exceeded the Critical threshold value for that resource.
If you find the data points in the majority of your graphs are not in the Average Medium Threshold range, but you are satisfied with the performance of this pool, you might need to adjust the Critical threshold for this pool. For more information, see Section 5.5.7, Changing the Performance Thresholds and Metric Weighting.
5.9.6.5. Virtual Machine Motion History
This line graph displays the number of times virtual machines moved on a resource pool over a period of time. It indicates whether a move resulted from an optimization recommendation and to which host the virtual machine moved. This report also indicates the reason for the optimization. You can use this report to audit the number of moves on a pool.
Some points about interpreting this report:
The numbers on the left side of the chart correspond with the number of moves possible, which is based on how many virtual machines are in a resource pool.
You can look at details of the moves on a specific date by expanding the + sign in the Date section of the report.
5.9.6.6. Virtual Machine Performance History
This report displays performance data for each virtual machine on a specific host for a time period you specify. Workload Balancing bases the performance data on the amount of virtual resources allocated for the virtual machine. For example, if the Average CPU Usage for your virtual machine is 67%, this means that your virtual machine was using, on average, 67% of its virtual CPU for the period you specified.
The initial view of the report displays an average value for resource utilization over the period you specified.

Expanding the + sign displays line graphs for individual resources. You can use these graphs to see trends in resource utilization over time.
This report displays data for CPU Usage, Free Memory, Network Reads/Writes, and Disk Reads/Writes.

5.10. Administering Workload Balancing

Some administrative tasks you may want to perform on Workload Balancing include disabling Workload Balancing on a pool, pointing a pool to use a different Workload Balancing server, and uninstalling Workload Balancing.

5.10.1. Disabling Workload Balancing on a Resource Pool

You can disable Workload Balancing for a resource pool, either temporarily or permanently:
Temporarily. Disabling Workload Balancing temporarily stops XenCenter from displaying recommendations for the specified resource pool. When you disable Workload Balancing temporarily, data collection stops for that resource pool.
Permanently. Disabling Workload Balancing permanently deletes information about the specified resource pool from the data store and stops data collection for that pool.
To disable Workload Balancing on a resource pool
1. In the Resource pane of XenCenter, select the resource pool for which you want to disable Workload Balancing.

2. In the WLB tab, click Disable WLB. A dialog box appears asking if you want to disable Workload Balancing for the pool.

3. Click Yes to disable Workload Balancing for the pool. Important: If you want to disable Workload Balancing permanently for this resource pool, select the Remove all resource pool information from the Workload Balancing Server check box.

XenServer disables Workload Balancing for the resource pool, either temporarily or permanently depending on your selections.
If you disabled Workload Balancing temporarily on a resource pool, to reenable Workload Balancing, click Enable WLB in the WLB tab.
If you disabled Workload Balancing permanently on a resource pool, to reenable it, you must reinitialize it. For information, see To initialize Workload Balancing.

5.10.2. Reconfiguring a Resource Pool to Use Another WLB Server

You can reconfigure a resource pool to use a different Workload Balancing server. However, to prevent old data collectors from remaining inadvertently configured and running against a pool, you must disable Workload Balancing permanently for that resource pool before pointing the pool to another data collector. After disabling Workload Balancing, you can re-initialize the pool and specify the name of the new Workload Balancing server.
To use a different Workload Balancing server
1. On the resource pool you want to point to a different Workload Balancing server, disable Workload Balancing permanently. This deletes the resource pool's information from the data store and stops data collection. For instructions, see Section 5.10.1, Disabling Workload Balancing on a Resource Pool.

2. In the Resource pane of XenCenter, select the resource pool for which you want to reenable Workload Balancing.

3. In the WLB tab, click Initialize WLB. The Configure Workload Balancing wizard appears.

4. Reinitialize the resource pool and specify the new server's credentials in the Configure Workload Balancing wizard. You must provide the same information as you do when you initially configure a resource pool for use with Workload Balancing. For information, see Section 5.5.2, To initialize Workload Balancing.

5.10.3. Uninstalling Workload Balancing

Citrix recommends uninstalling Workload Balancing from the Control Panel in Windows.
When you uninstall Workload Balancing, only the Workload Balancing software is removed from the Workload Balancing server. The data store remains on the system running SQL Server. To remove a Workload Balancing data store, you must use SQL Server Management Studio (SQL Server 2005 and SQL Server 2008).
If you want to uninstall both Workload Balancing and SQL Server from your computer, uninstall Workload Balancing first and then delete the database using SQL Server Management Studio.
The data directory, usually located at %Documents and Settings%\All Users\Application Data\Citrix\Workload Balancing\Data, is not removed when you uninstall Workload Balancing. You can remove the contents of the data directory manually.

5.11. Troubleshooting Workload Balancing

While Workload Balancing usually runs smoothly, this series of topics provides guidance in case you encounter issues.

5.11.1. General Troubleshooting Tips

Here are a few tips for resolving general Workload Balancing issues:
Start troubleshooting by reviewing the Workload Balancing log. On the server where you installed Workload Balancing, you can find the log in these locations (by default):
Windows Server 2003 and Windows XP: %Documents and Settings%\All Users\Application Data\Citrix\Workload Balancing\Data\LogFile.log
Windows Server 2008 and Windows Vista: %Users%\All Users\Citrix\Workload Balancing\Data\LogFile.log

Also check the logs in XenCenter's Logs tab for more information.

If you receive an error message, review the XenCenter log, which is stored in these locations (by default):
Windows Server 2003 and Windows XP: %Documents and Settings%\<yourusername>\Application Data\Citrix\XenCenter\logs\XenCenter.log
Windows Server 2008 and Windows Vista: %Users%\<current_logged_on_user>\AppData\Roaming\Citrix\XenCenter\logs\XenCenter.log

5.11.2. Error Messages

Workload Balancing displays error messages in the Log tab in XenCenter, in the Windows Event log, and, in some cases, on screen as dialog boxes.

5.11.3. Issues Installing Workload Balancing

When troubleshooting installation issues, start by checking the installation log file.
The location of the installation log varies depending on whether you installed Workload Balancing using the command-line installation or the Setup wizard. If you used the Setup wizard, the log is at %Documents and Settings%\<username>\Local Settings\Temp\msibootstrapper2CSM_MSI_Install.log (by default).

Tip
When troubleshooting installations using installation logs, note that the log file is overwritten each time you install. You might want to copy the installation logs to a separate directory manually so that you can compare them.
For common installation and Msiexec errors, try searching the Citrix Knowledge Center and the Internet.
To verify that you installed Workload Balancing successfully, see Section 5.3.5.3.1, To verify your Workload Balancing installation.

5.11.4. Issues Initializing Workload Balancing

If you cannot get past the Server Credentials page in the Configure Workload Balancing wizard, try the following:
Make sure that Workload Balancing installed correctly and all of its services are running. See Section 5.3.5.3.1, To verify your Workload Balancing installation.
Using Section 5.11.5, Issues Starting Workload Balancing as a guide, check to make sure you are entering the correct credentials.

You can enter a computer name in the WLB server name box, but it must be a fully qualified domain name (FQDN). For example, yourcomputername.yourdomain.net. If you are having trouble entering a computer name, try using the Workload Balancing server's IP address instead.

5.11.5. Issues Starting Workload Balancing

If, after installing and configuring Workload Balancing, you receive an error message that XenServer and Workload Balancing cannot connect to each other, you might have entered incorrect credentials. To isolate this issue, try:
Verifying that the credentials you entered in the Configure Workload Balancing wizard match the credentials:
You created on the Workload Balancing server
On XenServer

Verifying that the IP address or NetBIOS name of the Workload Balancing server you entered in the Configure Workload Balancing wizard is correct.
Verifying that the user or group name you entered during Setup matches the credentials you created on the Workload Balancing server. To check what user or group name you entered, open the install log (search for log.txt) and search for userorgroupaccount.

5.11.6. Workload Balancing Connection Errors

If you receive a connection error in the Workload Balancing Status line on the WLB tab, you might need to reconfigure Workload Balancing on that resource pool.
Click the Configure button on the WLB tab and reenter the server credentials.
Typical causes for this error include changing the server credentials or inadvertently deleting the Workload Balancing user account.

5.11.7. Issues Changing Workload Balancing Servers

If you change the Workload Balancing server a resource pool references without first deconfiguring Workload Balancing on the resource pool, both the old and the new Workload Balancing server will monitor the pool.
To solve this problem, you can either uninstall the old Workload Balancing server or manually stop the Workload Balancing services (analysis, data collector, and Web service) so that it no longer monitors the pool.
Citrix does not recommend using the pool-initialize-wlb xe command to deconfigure or change Workload Balancing servers.

Chapter 6. Backup and recovery

Table of Contents

6.1. Backups
6.2. Full metadata backup and disaster recovery (DR)
6.2.1. DR and metadata backup overview
6.2.2. Backup and restore using xsconsole
6.2.3. Moving SRs between hosts and Pools
6.2.4. Using Portable SRs for Manual Multi-Site Disaster Recovery
6.3. VM Snapshots
6.3.1. Regular Snapshots
6.3.2. Quiesced Snapshots
6.3.3. Taking a VM snapshot
6.3.4. VM Rollback
6.4. Coping with machine failures
6.4.1. Member failures
6.4.2. Master failures
6.4.3. Pool failures
6.4.4. Coping with Failure due to Configuration Errors
6.4.5. Physical Machine failure
This chapter presents the functionality designed to give you the best chance to recover your XenServer from a catastrophic failure of hardware or software, from lightweight metadata backups to full VM backups and portable SRs.

6.1. Backups
Citrix recommends that you frequently perform as many of the following backup procedures as possible to recover from possible server and/or software failure.
To backup pool metadata
1. Run the command:

xe pool-dump-database file-name=<backup>
2. Run the command:

xe pool-restore-database file-name=<backup> dry-run=true
This command checks that the target machine has an appropriate number of appropriately named NICs, which is required for the backup to succeed.
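For example, a minimal sketch of dumping the pool database to a dated file and verifying it with a dry run; run these in the control domain on the pool master (the /var/backup path and file name are hypothetical):

# Dump the pool database, then check it against this host without restoring it
xe pool-dump-database file-name=/var/backup/pool-db-2009-10-01.dump
xe pool-restore-database file-name=/var/backup/pool-db-2009-10-01.dump dry-run=true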
To backup host configuration and software
Run the command:

xe host-backup host=<host> file-name=<hostbackup>

Note

Do not create the backup in the control domain.

This procedure may create a large backup file.

To complete a restore you have to reboot to the original install CD.

This data can only be restored to the original machine.

To backup a VM
1. Ensure that the VM to be backed up is offline.

2. Run the command:

xe vm-export vm=<vm_uuid> filename=<backup>

Note
This backup also backs up all of the VM's data. When importing a VM, you can specify the storage mechanism to use for the backed-up data.

Warning
Because this process backs up all of the VM data, it can take some time to complete.
To backup VM metadata only
Run the command:

xe vm-export vm=<vm_uuid> filename=<backup> --metadata
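As an illustration, the following sketch exports one VM in full and then exports only its metadata; the VM UUID placeholder, the /mnt/backup path, and the file names are hypothetical:

# Full export (VM data and metadata), then a metadata-only export of the same VM
xe vm-export vm=<vm_uuid> filename=/mnt/backup/web01-full.xva
xe vm-export vm=<vm_uuid> filename=/mnt/backup/web01-metadata.xva --metadata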

6.2. Full metadata backup and disaster recovery (DR)

This section introduces the concept of Portable Storage Repositories (Portable SRs), and explains how they work and how to use them as part of a DR strategy.

6.2.1. DR and metadata backup overview

XenServer 5.5.0 introduces the concept of Portable SRs. Portable SRs contain all of the information necessary to recreate all the Virtual Machines (VMs) with Virtual Disk Images (VDIs) stored on the SR after re-attaching the SR to a different host or pool. Portable SRs can be used when regular maintenance or disaster recovery requires manually moving an SR between pools or standalone hosts.
Using portable SRs has similar constraints to XenMotion as both cases result in VMs being moved between hosts. To use portable SRs:

The source and destination hosts must have the same CPU type and networking configuration. The destination host must have a network of the same name as the one on the source host.
The SR media itself, such as a LUN for iSCSI and Fibre Channel SRs, must be able to be moved, re-mapped, or replicated between the source and destination hosts
If using tiered storage, where a VM has VDIs on multiple SRs, all required SRs must be moved to the destination host or pool
Any configuration data required to connect the SR on the destination host or pool, such as the target IP address, target IQN, and LUN SCSI ID for iSCSI SRs, and the LUN SCSI ID for Fibre Channel SRs, must be maintained manually
The backup metadata option must be configured for the desired SR

Note
When moving portable SRs between pools the source and destination pools are not required to have the same number of hosts. Moving portable SRs between pools and standalone hosts is also supported provided the above constraints are met.
Portable SRs work by creating a dedicated metadata VDI within the specified SR. The metadata VDI is used to store copies of the pool or host database as well as the metadata describing the configuration of each VM. As a result the SR becomes fully self-contained, or portable, allowing it to be detached from one host and attached to another as a new SR. Once the SR is attached, a restore process is used to recreate all of the VMs on the SR from the metadata VDI. For disaster recovery the metadata backup can be scheduled to run regularly to ensure the metadata SR is current.
The metadata backup and restore feature works at the command-line level and the same functionality is also supported in xsconsole. It is not currently available through XenCenter.

6.2.2. Backup and restore using xsconsole

When a metadata backup is first taken, a special backup VDI is created on an SR. This VDI has an ext3 filesystem that stores the following versioned backups:
A full pool-database backup.
Individual VM metadata backups, partitioned by the SRs in which the VM has disks.
SR-level metadata which can be used to recreate the SR description when the storage is reattached.
In the menu-driven text console on the XenServer host, there are some menu items under the Backup, Update and Restore menu which provide more user-friendly interfaces to these scripts. The operations should only be performed on the pool master. You can use these menu items to perform three operations:
Schedule a regular metadata backup to the default pool SR, either daily, weekly or monthly. This will regularly rotate metadata backups and ensure that the latest metadata is present for that SR without any user intervention being required.

Trigger an immediate metadata backup to the SR of your choice. This will create a backup VDI if necessary, attach it to the host, and back up all the metadata to that SR. Use this option if you have made some changes which you want to see reflected in the backup immediately.
Perform a metadata restoration operation. This will prompt you to choose an SR to restore from, and then the option of restoring only VM records associated with that SR, or all the VM records found (potentially from other SRs which were present at the time of the backup). There is also a dry run option to see which VMs would be imported, but not actually perform the operation.
For automating this via scripting, there are some commands in the control domain which provide an interface to metadata backup and restore at a lower level than the menu options:
xe-backup-metadata provides an interface to create the backup VDIs (with the -c flag), and also to attach the metadata backup and examine its contents.
xe-restore-metadata can be used to probe for a backup VDI on a newly attached SR, and also selectively reimport VM metadata to recreate the associations between VMs and their disks.
Full usage information for both scripts can be obtained by running them in the control domain using the -h flag. One particularly useful invocation mode is xe-backup-metadata -d, which mounts the backup VDI into dom0 and drops into a sub-shell in the backup directory so it can be examined.
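A minimal sketch using only the flags mentioned above; run these in the control domain on the pool master:

# Print full usage information for the metadata backup and restore scripts
xe-backup-metadata -h
xe-restore-metadata -h
# Mount the backup VDI into dom0 and drop into a sub-shell in the backup directory
xe-backup-metadata -d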

6.2.3. Moving SRs between hosts and Pools

The metadata backup and restore options can be run as scripts in the control domain or through the Backup, Restore, and Update menu option in the xsconsole. All other actions, such as detaching the SR from the source host and attaching it to the destination host, can be performed using XenCenter, the menu-based xsconsole, or the xe CLI. This example uses a combination of XenCenter and xsconsole.
To create and move a portable SR using the xsconsole and XenCenter
1. On the source host or pool, in xsconsole, select the Backup, Restore, and Update menu option, select the Backup Virtual Machine Metadata option, and then select the desired SR.

2. In XenCenter, select the source host or pool and shut down all running VMs with VDIs on the SR to be moved.

3. In the tree view select the SR to be moved and select Storage > Detach Storage Repository. The Detach Storage Repository menu option will not be displayed if there are running VMs with VDIs on the selected SR. After being detached the SR will be displayed in a grayed-out state.

Warning
Do not complete this step unless you have created a backup VDI in step 1.
4. Select Storage > Forget Storage Repository to remove the SR record from the host or pool.

5. Select the destination host in the tree view and select Storage > New Storage Repository.

6. Create a new SR with the appropriate parameters required to reconnect the existing SR to the destination host. In the case of moving an SR between pools or hosts within a site the parameters may be identical to the source pool.

7. Every time a new SR is created, the storage is checked to see if it contains an existing SR. If so, an option is presented allowing re-attachment of the existing SR. If this option is not displayed, the parameters specified during SR creation are not correct.

8. Select Reattach.

9. Select the new SR in the tree view and then select the Storage tab to view the existing VDIs present on the SR.

10. In xsconsole on the destination host, select the Backup, Restore, and Update menu option, select the Restore Virtual Machine Metadata option, and select the newly re-attached SR.

11. The VDIs on the selected SR are inspected to find the metadata VDI. Once found, select the metadata backup you want to use.

12. Select the Only VMs on this SR option to restore the VMs.

Note
Use the All VM Metadata option when moving multiple SRs between hosts or pools, or when using tiered storage where VMs to be restored have VDIs on multiple SRs. When using this option ensure all required SRs have been reattached to the destination host prior to running the restore.
13. The VMs are restored in the destination pool in a shutdown state and are available for use.

6.2.4. Using Portable SRs for Manual Multi-Site Disaster Recovery

The Portable SR feature can be used in combination with storage layer replication in order to simplify the process of creating and enabling a disaster recovery (DR) site. Using storage layer replication to mirror or replicate LUNs that comprise portable SRs between production and DR sites allows all required data to be automatically present in the DR site. The constraints that apply when moving portable SRs between hosts or pools within the same site also apply in the multi-site case, but the production and DR sites are not required to have the same number of hosts. This allows use of either dedicated DR facilities or non-dedicated DR sites that run other production workloads.
Using portable SRs with storage layer replication between sites to enable the DR site in case of disaster
1. Any storage layer configuration required to enable the mirror or replica LUN in the DR site is performed.

2. An SR is created for each LUN in the DR site.

3. VMs are restored from metadata on one or more SRs.

4. Any adjustments to VM configuration required by differences in the DR site, such as IP addressing, are performed.

5. VMs are started and verified.

6. Traffic is routed to the VMs in the DR site.

6.3. VM Snapshots
XenServer provides a convenient snapshotting mechanism that can take a snapshot of a VM's storage and metadata at a given time. Where necessary, I/O is temporarily halted while the snapshot is being taken to ensure that a self-consistent disk image can be captured.
Snapshot operations result in a snapshot VM that is similar to a template. The VM snapshot contains all the storage information and VM configuration, including attached VIFs, allowing them to be exported and restored for backup purposes.
The snapshotting operation is a 2-step process:
Capturing metadata as a template.
Creating a VDI snapshot of the disk(s).
Two types of VM snapshots are supported: regular and quiesced.

6.3.1. Regular Snapshots

Regular snapshots are crash consistent and can be performed on all VM types, including Linux VMs.

6.3.2. Quiesced Snapshots

Quiesced snapshots take advantage of the Windows Volume Shadow Copy Service (VSS) to generate application-consistent point-in-time snapshots. The VSS framework helps VSS-aware applications (for example Microsoft Exchange or Microsoft SQL Server) flush data to disk and prepare for the snapshot before it is taken.
Quiesced snapshots are therefore safer to restore, but can have a greater performance impact on a system while they are being taken. They may also fail under load so more than one attempt to take the snapshot may be required.
XenServer supports quiesced snapshots on Windows Server 2003 and Windows Server 2008 for both 32-bit and 64-bit variants. Windows 2000, Windows XP and Windows Vista are not supported. Snapshot is supported on all storage types, though for the LVM-based storage types the storage repository must have been upgraded if it was created on a previous version of XenServer, and the volume must be in the default format (type=raw volumes cannot be snapshotted).

Note
Using EqualLogic or NetApp storage requires a Citrix Essentials for XenServer license. To learn more about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.

Note

Do not forget to install the Xen VSS provider in the Windows guest in order to support VSS. This is done using the install-XenProvider.cmd script provided with the Windows PV drivers. More details can be found in the Virtual Machine Installation Guide in the Windows section.
In general, a VM can only access VDI snapshots (not VDI clones) of itself using the VSS interface. There is a flag that can be set by the XenServer administrator, by adding an attribute of snapmanager=true to the VM's other-config map, that allows that VM to import snapshots of VDIs from other VMs.

Warning
This opens a security vulnerability and should be used with care. This feature allows an administrator to attach VSS snapshots, using an in-guest transportable snapshot ID as generated by the VSS layer, to another VM for the purposes of backup.
VSS quiesce timeout: the Microsoft VSS quiesce period is set to a non-configurable value of 10 seconds, and it is quite probable that a snapshot may not be able to complete in time. If, for example, the XAPI daemon has queued additional blocking tasks such as an SR scan, the VSS snapshot may time out and fail. The operation should be retried if this happens.

Note
The more VBDs attached to a VM, the more likely it is that this timeout may be reached. Citrix recommends attaching no more than 2 VBDs to a VM to avoid reaching the timeout. However, there is a workaround to this problem: the probability of taking a successful VSS-based snapshot of a VM with more than 2 VBDs can be increased considerably if all the VDIs for the VM are hosted on different SRs.
VSS snapshots all the disks attached to a VM: in order to store all data available at the time of a VSS snapshot, the XAPI manager will snapshot all disks and the VM metadata associated with a VM that can be snapshotted using the XenServer storage manager API. If the VSS layer requests a snapshot of only a subset of the disks, a full VM snapshot will not be taken.
vm-snapshot-with-quiesce produces bootable snapshot VM images: to achieve this end, the XenServer VSS hardware provider makes snapshot volumes writable, including the snapshot of the boot volume.
VSS snapshots of volumes hosted on dynamic disks in the Windows Guest: the vm-snapshot-with-quiesce CLI and the XenServer VSS hardware provider do not support snapshots of volumes hosted on dynamic disks on the Windows VM.

6.3.3. Taking a VM snapshot

Before taking a snapshot, see the section called Preparing to clone a Windows VM in the XenServer Virtual Machine Installation Guide and the section called Preparing to clone a Linux VM in the XenServer Virtual Machine Installation Guide for information about any special operating system-specific configuration and considerations to take into account.
Use the vm-snapshot and vm-snapshot-with-quiesce commands to take a snapshot of a VM:

xe vm-snapshot vm=<vm_name> new-name-label=<vm_snapshot_name>
xe vm-snapshot-with-quiesce vm=<vm_name> new-name-label=<vm_snapshot_name>
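For example, assuming a hypothetical Windows VM named sql01, the following sketch takes a regular snapshot and a quiesced (VSS) snapshot with date-stamped names:

# Regular (crash-consistent) snapshot
xe vm-snapshot vm=sql01 new-name-label=sql01-snap-2009-10-01
# Quiesced (application-consistent) snapshot; requires the Xen VSS provider in the guest
xe vm-snapshot-with-quiesce vm=sql01 new-name-label=sql01-vss-snap-2009-10-01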

6.3.4. VM Rollback

Restoring a VM to snapshot state

Note
Restoring a VM will not preserve the original VM UUID or MAC address.
1. Note the name of the snapshot.

2. Note the MAC address of the VM.

3. Destroy the VM:

a. Run the vm-list command to find the UUID of the VM to be destroyed:

xe vm-list
b. Shut down the VM:

xe vm-shutdown uuid=<vm_uuid>
c. Destroy the VM:

xe vm-destroy uuid=<vm_uuid>
4. Create a new VM from the snapshot:

xe vm-install new-name-label=<vm_name_label> template=<template_name>
5. Start the VM:

xe vm-start name-label=<vm_name>
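Putting the steps above together, a worked sketch of the whole rollback, assuming a hypothetical VM named web01 and a snapshot named web01-pre-patch:

xe vm-list                                              # note the UUID of the VM to replace
xe vm-shutdown uuid=<vm_uuid>
xe vm-destroy uuid=<vm_uuid>
xe vm-install new-name-label=web01 template=web01-pre-patch
xe vm-start name-label=web01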

6.4. Coping with machine failures

This section provides details of how to recover from various failure scenarios. All failure recovery scenarios require the use of one or more of the backup types listed in Section 6.1, Backups.

6.4.1. Member failures

In the absence of HA, master nodes detect the failure of members by receiving regular heartbeat messages. If no heartbeat has been received for 200 seconds, the master assumes the member is dead. There are two ways to recover from this problem:
Repair the dead host (for example, by physically rebooting it). When the connection to the member is restored, the master will mark the member as alive again.

Shut down the host and instruct the master to forget about the member node using the xe host-forget CLI command. Once the member has been forgotten, all the VMs which were running there will be marked as offline and can be restarted on other XenServer hosts. Note that it is very important to ensure that the XenServer host is actually offline, otherwise VM data corruption might occur. Be careful not to split your pool into multiple pools of a single host by using xe host-forget, since this could result in them all mapping the same shared storage and corrupting VM data.

Warning

If you are going to use the forgotten host as a XenServer host again, perform a fresh installation of the XenServer software.

Do not use the xe host-forget command if HA is enabled on the pool. Disable HA first, then forget the host, and then reenable HA.

When a member XenServer host fails, there may be VMs still registered in the running state. If you are sure that the member XenServer host is definitely down, and that the VMs have not been brought up on another XenServer host in the pool, use the xe vm-reset-powerstate CLI command to set the power state of the VMs to halted. See Section 8.4.23.24, vm-reset-powerstate for more details.

Warning
Incorrect use of this command can lead to data corruption. Only use this command if absolutely necessary.
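If the member is confirmed to be permanently offline, a minimal sketch of the forget-and-restart sequence described above (all placeholder values are hypothetical):

xe host-forget uuid=<failed_host_uuid>
xe vm-reset-powerstate vm=<vm_uuid> --force              # only after confirming the VM is not running anywhere
xe vm-start vm=<vm_name> on=<surviving_host_name>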

6.4.2. Master failures

Every member of a resource pool contains all the information necessary to take over the role of master if required. When a master node fails, the following sequence of events occurs:
1. The members realize that communication has been lost and each tries to reconnect for sixty seconds.
2. Each member then puts itself into emergency mode, whereby the member XenServer hosts will now accept only the pool-emergency commands (xe pool-emergency-reset-master and xe pool-emergency-transition-to-master).

If the master comes back up at this point, it re-establishes communication with its members, the members leave emergency mode, and operation returns to normal.
If the master is really dead, choose one of the members and run the command xe pool-emergency-transition-to-master on it. Once it has become the master, run the command xe pool-recover-slaves and the members will now point to the new master.
If you repair or replace the server that was the original master, you can simply bring it up, install the XenServer host software, and add it to the pool. Since the XenServer hosts in the pool are enforced to be homogeneous, there is no real need to make the replaced server the master.

When a member XenServer host is transitioned to being a master, you should also check that the default pool storage repository is set to an appropriate value. This can be done using the xe pool-param-list command and verifying that the default-SR parameter is pointing to a valid storage repository.
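For example, a brief sketch of promoting a surviving member and then checking the pool's default SR afterwards (run on the member being promoted):

xe pool-emergency-transition-to-master
xe pool-recover-slaves
xe pool-list params=default-SR                           # verify it points to a valid SR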

6.4.3. Pool failures

In the unfortunate event that your entire resource pool fails, you will need to recreate the pool database from scratch. Be sure to regularly back up your pool metadata using the xe pool-dump-database CLI command (see Section 8.4.12.2, pool-dump-database).
To restore a completely failed pool
1. Install a fresh set of hosts. Do not pool them up at this stage.

2. For the host nominated as the master, restore the pool database from your backup using the xe pool-restore-database command (see Section 8.4.12.10, pool-restore-database).

3. Connect to the master host using XenCenter and ensure that all your shared storage and VMs are available again.

4. Perform a pool join operation on the remaining freshly installed member hosts, and start up your VMs on the appropriate hosts.

6.4.4. Coping with Failure due to Configuration Errors

If the physical host machine is operational but the software or host configuration is corrupted:
To restore host software and configuration
1. Run the command:

xe host-restore host=<host> file-name=<hostbackup>
2. Reboot to the host installation CD and select Restore from backup.

6.4.5. Physical Machine failure

If the physical host machine has failed, use the appropriate procedure listed below to recover.

Warning
Any VMs which were running on a previous member (or the previous host) which has failed will still be marked
as Running in the database. This is for safety -- simultaneously starting a VM on two different hosts would lead to severe
disk corruption. If you are sure that the machines (and VMs) are offline you can reset the VM power state to Halted:

xe vm-reset-powerstate vm=<vm_uuid> --force

VMs can then be restarted using XenCenter or the CLI.


Replacing a failed majter with a jtill running member
1.

2.

3.

Run the commandj:

xepoolemergencytranjitiontomajter
xepoolrecoverjlavej
If the commandj jucceed, rejtart the VMj.

To restore a pool with all hosts failed
1. Run the command:

   xe pool-restore-database file-name=<backup>

   Warning
   This command will only succeed if the target machine has an appropriate number of appropriately
   named NICs.
2. If the target machine has a different view of the storage (for example, a block-mirror with a different IP address)
   than the original machine, modify the storage configuration using the pbd-destroy command and then
   the pbd-create command to recreate storage configurations. See Section 8.4.10, PBD commands for
   documentation of these commands.
3. If you have created a new storage configuration, use pbd-plug or the Storage > Repair Storage
   Repository menu item in XenCenter to use the new configuration.
4. Restart all VMs.

To restore a VM when VM storage is not available
1. Run the command:

   xe vm-import filename=<backup> --metadata

2. If the metadata import fails, run the command:

   xe vm-import filename=<backup> --metadata --force

   This command will attempt to restore the VM metadata on a 'best effort' basis.
3. Restart all VMs.

Chapter 7. Monitoring and managing XenServer

Table of Contents
7.1. Alerts
7.1.1. Customizing Alerts
7.1.2. Configuring Email Alerts
7.2. Custom Fields and Tags
7.3. Custom Searches
7.4. Determining throughput of physical bus adapters
XenServer and XenCenter provide access to alerts that are generated when noteworthy things happen. XenCenter
provides various mechanisms of grouping and maintaining metadata about managed VMs, hosts, storage repositories, and
so on.

Note
Full monitoring and alerting functionality is only available with a Citrix Essentials for XenServer license. To learn more
about Citrix Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.

7.1. Alerts
XenServer generates alerts for the following events.
Configurable Alerts:
New XenServer patches available
New XenServer version available
New XenCenter version available
Alerts generated by XenCenter:

XenCenter old: the XenServer expects a newer version but can still connect to the current version
XenCenter out of date: XenCenter is too old to connect to XenServer
XenServer out of date: XenServer is an old version that the current XenCenter cannot connect to
License expired alert: your XenServer license has expired
Missing IQN alert: XenServer uses iSCSI storage but the host IQN is blank
Duplicate IQN alert: XenServer uses iSCSI storage, and there are duplicate host IQNs

Alerts generated by XenServer:
ha_host_failed
ha_host_was_fenced
ha_network_bonding_error
ha_pool_drop_in_plan_exists_for
ha_pool_overcommitted
ha_protected_vm_restart_failed
ha_statefile_lost
host_clock_skew_detected
host_sync_data_failed
license_does_not_support_pooling
pbd_plug_failed_on_server_start
pool_master_transition
The following alerts appear on the performance graphs in XenCenter. See the XenCenter online help for more
information:
vm_cloned
vm_crashed
vm_rebooted
vm_resumed
vm_shutdown
vm_started
vm_suspended

7.1.1. Customizing Alerts

Note
Most alerts are only available in a pool with a Citrix Essentials for XenServer license. To learn more about Citrix
Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.
The performance monitoring daemon, perfmon, runs once every 5 minutes and requests updates from XenServer which are
averages over 1 minute, but these defaults can be changed in /etc/sysconfig/perfmon.
Every 5 minutes perfmon reads updates of performance variables exported by the XAPI instance running on the same
host. These variables are separated into one group relating to the host itself, and a group for each VM running on that
host. For each VM and also for the host, perfmon reads in the other-config:perfmon parameter and uses this
string to determine which variables it should monitor, and under which circumstances to generate a message.

vm:other-config:perfmon and host:other-config:perfmon values consist of an XML string like the
one below (an example of setting this parameter from the xe CLI is shown after the element reference below):

<config>
  <variable>
    <name value="cpu_usage"/>
    <alarm_trigger_level value="LEVEL"/>
  </variable>
  <variable>
    <name value="network_usage"/>
    <alarm_trigger_level value="LEVEL"/>
  </variable>
</config>

Valid VM Elements
name
    what to call the variable (no default). If the name value is one of cpu_usage, network_usage,
    or disk_usage, the rrd_regex and alarm_trigger_sense parameters are not required, as defaults
    for these values will be used.
alarm_priority
    the priority of the messages generated (default 5)
alarm_trigger_level
    level of value that triggers an alarm (no default)
alarm_trigger_sense
    high if alarm_trigger_level is a maximum value, otherwise low if the alarm_trigger_level is
    a minimum value (default high)
alarm_trigger_period
    number of seconds that values above or below the alarm threshold can be received before an alarm is sent
    (default 60)
alarm_auto_inhibit_period
    number of seconds that this alarm is disabled after an alarm is sent (default 3600)
consolidation_fn
    how to combine variables from rrd_updates into one value (default is sum; the other choice is average)
rrd_regex
    regular expression to match the names of variables returned by the xe vm-data-source-list
    uuid=<vmuuid> command that should be used to compute the statistical value. This parameter has defaults for
    the named variables cpu_usage, network_usage, and disk_usage. If specified, the values of all
    items returned by xe vm-data-source-list whose names match the specified regular expression will be
    consolidated using the method specified as the consolidation_fn.
Valid Host Elements
name
    what to call the variable (no default)
alarm_priority
    the priority of the messages generated (default 5)
alarm_trigger_level
    level of value that triggers an alarm (no default)
alarm_trigger_sense
    high if alarm_trigger_level is a maximum value, otherwise low if the alarm_trigger_level is
    a minimum value (default high)
alarm_trigger_period
    number of seconds that values above or below the alarm threshold can be received before an alarm is sent
    (default 60)
alarm_auto_inhibit_period
    number of seconds that this alarm is disabled after an alarm is sent (default 3600)
consolidation_fn
    how to combine variables from rrd_updates into one value (default sum; the other choice is average)
rrd_regex
    regular expression to match the names of variables returned by the xe vm-data-source-list
    uuid=<vmuuid> command that should be used to compute the statistical value. This parameter has defaults for
    the named variables cpu_usage and network_usage. If specified, the values of all items returned by
    xe vm-data-source-list whose names match the specified regular expression will be consolidated using the
    method specified as the consolidation_fn.
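As a sketch of how such a configuration might be applied (the UUID, the threshold and the choice of variable are
placeholders; adjust the XML to the variables you actually want to watch), the other-config:perfmon key can be
written with the standard map-parameter syntax described in Section 8.3.1:

xe vm-param-set uuid=<vm_uuid> other-config:perfmon='<config><variable><name value="cpu_usage"/><alarm_trigger_level value="0.8"/></variable></config>'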

7.1.2. Configuring Email Alerts

Note
Email alerts are only available in a pool with a Citrix Essentials for XenServer license. To learn more about Citrix
Essentials for XenServer and to find out how to upgrade, visit the Citrix website here.
Alerts generated from XenServer can also be automatically e-mailed to the resource pool administrator, in addition to
being visible from the XenCenter GUI. To configure this, specify the email address and SMTP server:

pool:other-config:mail-destination=<joe.bloggs@domain.tld>
pool:other-config:ssmtp-mailhub=<smtp.domain.tld[:port]>

You can also specify the minimum value of the priority field in the message before the email will be sent:

pool:other-config:mail-min-priority=<level>

The default priority level is 5.
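Since these keys live in the pool's other-config map, one way to set them (a sketch only; the addresses and the
priority value are placeholders) is with xe pool-param-set:

pool_uuid=$(xe pool-list --minimal)
xe pool-param-set uuid=$pool_uuid other-config:mail-destination=joe.bloggs@domain.tld
xe pool-param-set uuid=$pool_uuid other-config:ssmtp-mailhub=smtp.domain.tld
xe pool-param-set uuid=$pool_uuid other-config:mail-min-priority=3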

Note
Some SMTP servers only forward mails with addresses that use FQDNs. If you find that emails are not being forwarded it
may be for this reason, in which case you can set the server hostname to the FQDN so this is used when connecting to
your mail server.

7.2. Custom Fields and Tags

XenCenter supports the creation of tags and custom fields, which allows for organization and quick searching of VMs,
storage and so on. See the XenCenter online help for more information.

7.3. Custom Searches

XenCenter supports the creation of customized searches. Searches can be exported and imported, and the results of a
search can be displayed in the navigation pane. See the XenCenter online help for more information.

7.4. Determining throughput of physical bus adapters

For FC, SAS and iSCSI HBAs you can determine the network throughput of your PBDs using the following procedure
(a sketch of the corresponding xe commands is shown after this procedure).
To determine PBD throughput
1. List the PBDs on a host.
2. Determine which LUNs are routed over which PBDs.
3. For each PBD and SR, list the VBDs that reference VDIs on the SR.
4. For all active VBDs that are attached to VMs on the host, calculate the combined throughput.

For iSCSI and NFS storage, check your network statistics to determine if there is a throughput bottleneck at the array, or
whether the PBD is saturated.
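The following sketch illustrates steps 1 and 3 with xe; the UUIDs are placeholders and the exact filters depend on your
storage layout:

# Step 1: list the PBDs plugged on a given host
xe pbd-list host-uuid=<host_uuid> params=uuid,sr-uuid,currently-attached

# Step 3: for an SR of interest, list the VBDs whose VDIs live on that SR
for vdi in $(xe vdi-list sr-uuid=<sr_uuid> --minimal | tr ',' ' '); do
    xe vbd-list vdi-uuid=$vdi params=uuid,vm-name-label,currently-attached
done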

Chapter 8. Command line interface

Table of Contents
8.1. Basic xe syntax
8.2. Special characters and syntax
8.3. Command types
8.3.1. Parameter types
8.3.2. Low-level param commands
8.3.3. Low-level list commands
8.4. xe command reference
8.4.1. Bonding commands
8.4.2. CD commands
8.4.3. Console commands
8.4.4. Event commands
8.4.5. Host (XenServer host) commands
8.4.6. Log commands
8.4.7. Message commands
8.4.8. Network commands
8.4.9. Patch (update) commands
8.4.10. PBD commands
8.4.11. PIF commands
8.4.12. Pool commands
8.4.13. Storage Manager commands
8.4.14. SR commands
8.4.15. Task commands
8.4.16. Template commands
8.4.17. Update commands
8.4.18. User commands
8.4.19. VBD commands
8.4.20. VDI commands
8.4.21. VIF commands
8.4.22. VLAN commands
8.4.23. VM commands
8.4.24. Workload Balancing commands
This chapter describes the XenServer command line interface (CLI). The xe CLI enables the writing of scripts for
automating system administration tasks and allows integration of XenServer into an existing IT infrastructure.
The xe command line interface is installed by default on XenServer hosts and is included with XenCenter. A stand-alone
remote CLI is also available for Linux.
On Windows, the xe.exe CLI executable is installed along with XenCenter.
To use it, open a Windows Command Prompt and change directories to the directory where the file resides
(typically C:\Program Files\XenSource\XenCenter), or add its installation location to your system path.
On Linux, you can install the stand-alone xe CLI executable from the RPM named xe-cli-5.5.0-24648c.i386.rpm
on the Linux Pack CD, as follows:

rpm -ivh xe-cli-5.5.0-24648c.i386.rpm

Basic help is available for CLI commands on-host by typing:

xe help command

A list of the most commonly-used xe commands is displayed if you type:

xe help

or a list of all xe commands is displayed if you type:

xe help --all

8.1. Basic xe syntax

The basic syntax of all XenServer xe CLI commands is:

xe <command-name> <argument=value> <argument=value> ...

Each specific command contains its own set of arguments that are of the form argument=value. Some commands
have required arguments, and most have some set of optional arguments. Typically a command will assume default
values for some of the optional arguments when invoked without them.
If the xe command is executed remotely, additional connection and authentication arguments are used. These
arguments also take the form argument=argument_value.
The server argument is used to specify the hostname or IP address. The username and password arguments are
used to specify credentials. A password-file argument can be specified instead of the password directly. In this case
an attempt is made to read the password from the specified file (stripping CRs and LFs off the end of the file if necessary),
and use that to connect. This is more secure than specifying the password directly at the command line.
The optional port argument can be used to specify the agent port on the remote XenServer host (defaults to 443).
Example: On the local XenServer host:

xe vm-list

Example: On the remote XenServer host:

xe vm-list -user <username> -password <password> -server <hostname>

Shorthand syntax is also available for remote connection arguments:

-u    username
-pw   password
-pwf  password file
-p    port
-s    server

Example: On a remote XenServer host:

xe vm-list -u <myuser> -pw <mypassword> -s <hostname>
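If you prefer not to type the password on the command line at all, the -pwf shorthand reads it from a file instead; a
minimal sketch (the file name is arbitrary):

echo -n "mypassword" > xe.pwd
chmod 600 xe.pwd
xe vm-list -s <hostname> -u <myuser> -pwf xe.pwd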
Arguments are also taken from the environment variable XE_EXTRA_ARGS, in the form of comma-separated
key/value pairs. For example, in order to enter commands on one XenServer host that are run on a remote XenServer
host, you could do the following:

export XE_EXTRA_ARGS="server=jeffbeck,port=443,username=root,password=pass"

and thereafter you would not need to specify the remote XenServer host parameters in each xe command you execute.
Using the XE_EXTRA_ARGS environment variable also enables tab completion of xe commands when issued against a
remote XenServer host, which is disabled by default.

8.2. Special characters and syntax

To specify argument/value pairs on the xe command line, write
argument=value
without quotes, as long as value doesn't have any spaces in it. There should be no whitespace in between the argument
name, the equals sign (=), and the value. Any argument not conforming to this format will be ignored.
For values containing spaces, write:
argument="value with spaces"
If you use the CLI while logged into a XenServer host, commands have a tab completion feature similar to that in the
standard Linux bash shell. If you type, for example

xe vm-l

and then press the TAB key, the rest of the command will be displayed when it is unambiguous. If more than one
command begins with vm-l, hitting TAB a second time will list the possibilities. This is particularly useful when specifying
object UUIDs in commands.

Note
When executing commands on a remote XenServer host, tab completion does not normally work. However if you put
the server, username, and password in an environment variable called XE_EXTRA_ARGS on the machine from which
you are entering the commands, tab completion is enabled. See Section 8.1, Basic xe syntax for details.

8.3. Command types

Broadly speaking, the CLI commands can be split in two halves: low-level commands concerned with listing and
parameter manipulation of API objects, and higher-level commands for interacting with VMs or hosts at a more abstract
level. The low-level commands are:
<class>-list
<class>-param-get
<class>-param-set
<class>-param-list
<class>-param-add
<class>-param-remove
<class>-param-clear
where <class> is one of:
bond
console
host
host-crashdump
host-cpu
network
patch
pbd
pif
pool
sm
sr
task
template
vbd
vdi
vif
vlan
vm
Note that not every value of <class> has the full set of <class>-param commands; some have just a subset.

8.3.1. Parameter types

The objects that are addressed with the xe commands have sets of parameters that identify them and define their states.
Most parameters take a single value. For example, the name-label parameter of a VM contains a single string value.
In the output from parameter list commands such as xe vm-param-list, such parameters have an indication in
parentheses that defines whether they can be read and written to, or are read-only. For example, the output of xe vm-param-list
on a specified VM might have the lines

user-version ( RW): 1
is-control-domain ( RO): false

The first parameter, user-version, is writeable and has the value 1. The second, is-control-domain, is read-only and has a value of false.
The two other types of parameters are multi-valued. A set parameter contains a list of values. A map parameter is a set of
key/value pairs. As an example, look at the following excerpt of some sample output of the xe vm-param-list on a
specified VM:

platform (MRW): acpi: true; apic: true; pae: true; nx: false
allowed-operations (SRO): pause; clean_shutdown; clean_reboot; \
hard_shutdown; hard_reboot; suspend

The platform parameter has a list of items that represent key/value pairs. The key names are followed by a colon
character (:). Each key/value pair is separated from the next by a semicolon character (;). The M preceding the RW
indicates that this is a map parameter and is readable and writeable. The allowed-operations parameter has a list
that makes up a set of items. The S preceding the RO indicates that this is a set parameter and is readable but not
writeable.
In xe commands where you want to filter on a map parameter, or set a map parameter, use the separator : (colon)
between the map parameter name and the key/value pair. For example, to set the value of the foo key of
the other-config parameter of a VM to baa, the command would be

xe vm-param-set uuid=<VM uuid> other-config:foo=baa

Note
In previous releases the separator - (dash) was used in specifying map parameters. This syntax still works but is deprecated.

8.3.2. Low-level param commands

There are several commands for operating on parameters of objects: <class>-param-get, <class>-param-set,
<class>-param-add, <class>-param-remove, <class>-param-clear, and <class>-param-list. Each of these takes a uuid parameter
to specify the particular object. Since these are considered low-level commands, they must be addressed by UUID and
not by the VM name label.

<class>-param-list uuid=<uuid>

Lists all of the parameters and their associated values. Unlike the class-list command, this will list the values of
"expensive" fields.

<class>-param-get uuid=<uuid> param-name=<parameter> [param-key=<key>]

Returns the value of a particular parameter. If the parameter is a map, specifying the param-key will get the
value associated with that key in the map. If param-key is not specified, or if the parameter is a set, it will return
a string representation of the set or map.

<class>-param-set uuid=<uuid> param=<value>...

Sets the value of one or more parameters.

<class>-param-add uuid=<uuid> param-name=<parameter> [<key>=<value>...] [param-key=<key>]

Adds to either a map or a set parameter. If the parameter is a map, add key/value pairs using
the <key>=<value> syntax. If the parameter is a set, add keys with the <param-key>=<key> syntax.

<class>-param-remove uuid=<uuid> param-name=<parameter> param-key=<key>

Removes either a key/value pair from a map, or a key from a set.

<class>-param-clear uuid=<uuid> param-name=<parameter>

Completely clears a set or a map.
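As a brief illustration of how these commands compose (the VM UUID is a placeholder; other-config is used here
because it is an ordinary read/write map parameter):

# read a single key from the other-config map
xe vm-param-get uuid=<vm_uuid> param-name=other-config param-key=foo

# add a key/value pair to the map, then remove it again
xe vm-param-add uuid=<vm_uuid> param-name=other-config foo=baa
xe vm-param-remove uuid=<vm_uuid> param-name=other-config param-key=foo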

8.3.3. Low-level list commands

The <class>-list command lists the objects of type <class>. By default it will list all objects, printing a subset of the
parameters. This behavior can be modified in two ways: it can filter the objects so that it only outputs a subset, and the
parameters that are printed can be modified.
To change the parameters that are printed, the argument params should be specified as a comma-separated list of the
required parameters, e.g.:

xe vm-list params=name-label,other-config

Alternatively, to list all of the parameters, use the syntax:

xe vm-list params=all

Note that some parameters that are expensive to calculate will not be shown by the list command. These parameters will
be shown as, for example:

allowed-VBD-devices (SRO): <expensive field>

To obtain these fields, use either the command <class>-param-list or <class>-param-get.

To filter the list, the CLI will match parameter values with those specified on the command-line, only printing objects that
match all of the specified constraints. For example:

xe vm-list HVM-boot-policy="BIOS order" power-state=halted

will only list those VMs for which both the field power-state has the value halted, and for which the field
HVM-boot-policy has the value BIOS order.
It is also possible to filter the list based on the value of keys in maps, or on the existence of values in a set. The syntax for
the first of these is map-name:key=value, and the second is set-name:contains=value.
For scripting, a useful technique is passing --minimal on the command line, causing xe to print only the first field in a
comma-separated list. For example, the command xe vm-list --minimal on a XenServer host with three VMs
installed gives the three UUIDs of the VMs, for example:

a85d6717-7264-d00e-069b-3b1d19d56ad9,aaa3eec5-9499-bcf3-4c03-af10baea96b7,\
42c044de-df69-4b30-89d9-2c199564581d
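Building on this, a short shell sketch (using only commands described in this chapter) that walks the UUIDs returned
by --minimal:

# print the name-label of every running VM, one per line
for uuid in $(xe vm-list power-state=running params=uuid --minimal | tr ',' ' '); do
    xe vm-param-get uuid=$uuid param-name=name-label
done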

8.4. xe command reference

This section provides a reference to the xe commands. They are grouped by objects that the commands address, and
listed alphabetically.

8.4.1. Bonding commands

Commands for working with network bonds, for resilience with physical interface failover. See Section 4.2.4, Creating
NIC bonds on a standalone host for details.
The bond object is a reference object which glues together master and member PIFs. The master PIF is the bonding
interface which must be used as the overall PIF to refer to the bond. The member PIFs are a set of 2 or more physical
interfaces which have been combined into the high-level bonded interface.
Bond parameters
Bonds have the following parameters:

uuid: unique identifier/object reference for the bond (read only)
master: UUID for the master bond PIF (read only)
members: set of UUIDs for the underlying bonded PIFs (read only set parameter)

8.4.1.1. bond-create

bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1,pif_uuid_2,...>

Create a bonded network interface on the network specified from a list of existing PIF objects. The command will fail if
PIFs are in another bond already, if any member has a VLAN tag set, if the referenced PIFs are not on the same
XenServer host, or if fewer than 2 PIFs are supplied.
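A typical invocation, assuming the network UUID and the PIF UUIDs have already been looked up (the first command
is just one way to find them), might look like this:

xe pif-list params=uuid,device,host-name-label
xe bond-create network-uuid=<network_uuid> pif-uuids=<pif_uuid_1>,<pif_uuid_2>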
8.4.1.2. bond-destroy

bond-destroy uuid=<bond_uuid>

Delete a bonded interface specified by its UUID from the XenServer host.
8.4.2. CD commands
Commands for working with physical CD/DVD drives on XenServer hosts.
CD parameters
CDs have the following parameters:

uuid: unique identifier/object reference for the CD (read only)
name-label: Name for the CD (read/write)
name-description: Description text for the CD (read/write)
allowed-operations: A list of the operations that can be performed on this CD (read only set parameter)
current-operations: A list of the operations that are currently in progress on this CD (read only set parameter)
sr-uuid: The unique identifier/object reference for the SR this CD is part of (read only)
sr-name-label: The name for the SR this CD is part of (read only)
vbd-uuids: A list of the unique identifiers for the VBDs on VMs that connect to this CD (read only set parameter)
crashdump-uuids: Not used on CDs since crashdumps cannot be written to them (read only set parameter)
virtual-size: Size of the CD as it appears to VMs (in bytes) (read only)
physical-utilisation: amount of physical space that the CD image is currently taking up on the SR (in bytes) (read only)
type: Set to User for CDs (read only)
sharable: Whether or not the CD drive is sharable. Default is false. (read only)
read-only: Whether the CD is read-only; if false, the device is writeable. Always true for CDs. (read only)
storage-lock: true if this disk is locked at the storage level (read only)
parent: Reference to the parent disk, if this CD is part of a chain (read only)
missing: true if the SR scan operation reported this CD as not present on disk (read only)
other-config: A list of key/value pairs that specify additional configuration parameters for the CD (read/write map parameter)
location: The path on which the device is mounted (read only)
managed: true if the device is managed (read only)
xenstore-data: Data to be inserted into the xenstore tree (read only map parameter)
sm-config: names and descriptions of storage manager device config keys (read only map parameter)
is-a-snapshot: True if this template is a CD snapshot (read only)
snapshot_of: The UUID of the CD that this template is a snapshot of (read only)
snapshots: The UUID(s) of any snapshots that have been taken of this CD (read only)
snapshot_time: The timestamp of the snapshot operation (read only)

8.4.2.1. cd-list

cd-list [params=<param1,param2,...>] [parameter=<parameter_value>...]

List the CDs and ISOs (CD image files) on the XenServer host or pool, filtering on the optional argument params.
If the optional argument params is used, the value of params is a string containing a list of parameters of this object that
you want to display. Alternatively, you can use the keyword all to show all parameters. If params is not used, the
returned list shows a default subset of all available parameters.
Optional arguments can be any number of the CD parameters listed at the beginning of this section.

8.4.3. Console commands

Commands for working with consoles.
The console objects can be listed with the standard object listing command (xe console-list), and the parameters
manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
Console parameters
Consoles have the following parameters:

uuid: The unique identifier/object reference for the console (read only)
vm-uuid: The unique identifier/object reference of the VM this console is open on (read only)
vm-name-label: The name of the VM this console is open on (read only)
protocol: Protocol this console uses. Possible values are vt100: VT100 terminal, rfb: Remote FrameBuffer protocol (as used in VNC), or rdp: Remote Desktop Protocol (read only)
location: URI for the console service (read only)
other-config: A list of key/value pairs that specify additional configuration parameters for the console (read/write map parameter)

8.4.4. Event commands

Commands for working with events.
Event classes
Event classes are listed in the following table:

pool: A pool of physical hosts
vm: A Virtual Machine
host: A physical host
network: A virtual network
vif: A virtual network interface
pif: A physical network interface (separate VLANs are represented as several PIFs)
sr: A storage repository
vdi: A virtual disk image
vbd: A virtual block device
pbd: The physical block devices through which hosts access SRs

8.4.4.1. event-wait

event-wait class=<class_name> [<param-name>=<param_value>] [<param-name>=/=<param_value>]

Block other commands from executing until an object exists that satisfies the conditions given on the command
line. x=y means "wait for field x to take value y", and x=/=y means "wait for field x to take any value other than y".
Example: wait for a specific VM to be running

xe event-wait class=vm name-label=myvm power-state=running

blocks until a VM called myvm is in the power-state "running."
Example: wait for a specific VM to reboot:

xe event-wait class=vm uuid=$VM start-time=/=$(xe vm-list uuid=$VM params=start-time --minimal)

blocks until a VM with UUID $VM reboots (i.e. has a different start-time value).
The class name can be any of the Event classes listed at the beginning of this section, and the parameters can be any of
those listed in the CLI command class-param-list.

8.4.5. Host (XenServer host) commands

Commands for interacting with XenServer hosts.
XenServer hosts are the physical servers running XenServer software. They have VMs running on them under the control
of a special privileged Virtual Machine, known as the control domain or domain 0.
The XenServer host objects can be listed with the standard object listing commands (xe host-list, xe host-cpu-list,
and xe host-crashdump-list), and the parameters manipulated with the standard parameter commands.
See Section 8.3.2, Low-level param commands for details.
Host selectors

Several of the commands listed here have a common mechanism for selecting one or more XenServer hosts on which to
perform the operation. The simplest is by supplying the argument host=<uuid_or_name_label>. XenServer hosts
can also be specified by filtering the full list of hosts on the values of fields. For example, specifying enabled=true will
select all XenServer hosts whose enabled field is equal to true. Where multiple XenServer hosts are matching, and
the operation can be performed on multiple XenServer hosts, the option --multiple must be specified to perform the
operation. The full list of parameters that can be matched is described at the beginning of this section, and can be
obtained by running the command xe host-list params=all. If no parameters to select XenServer hosts are
given, the operation will be performed on all XenServer hosts.
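As a small illustration of the selector mechanism (a read-only sketch; substitute whatever command and filter you
actually need), the following applies one command to every host whose enabled field is true:

xe host-dmesg enabled=true --multiple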
Host parameters
XenServer hosts have the following parameters:

uuid: The unique identifier/object reference for the XenServer host (read only)
name-label: The name of the XenServer host (read/write)
name-description: The description string of the XenServer host (read only)
enabled: false if disabled, which prevents any new VMs from starting on them, which prepares the XenServer hosts to be shut down or rebooted; true if the host is currently enabled (read only)
API-version-major: major version number (read only)
API-version-minor: minor version number (read only)
API-version-vendor: identification of API vendor (read only)
API-version-vendor-implementation: details of vendor implementation (read only map parameter)
logging: logging configuration (read/write map parameter)
suspend-image-sr-uuid: the unique identifier/object reference for the SR where suspended images are put (read/write)
crash-dump-sr-uuid: the unique identifier/object reference for the SR where crash dumps are put (read/write)
software-version: list of versioning parameters and their values (read only map parameter)
capabilities: list of Xen versions that the XenServer host can run (read only set parameter)
other-config: A list of key/value pairs that specify additional configuration parameters for the XenServer host (read/write map parameter)
hostname: XenServer host hostname (read only)
address: XenServer host IP address (read only)
supported-bootloaders: list of bootloaders that the XenServer host supports, for example, pygrub, eliloader (read only set parameter)
memory-total: total amount of physical RAM on the XenServer host, in bytes (read only)
memory-free: total amount of physical RAM remaining that can be allocated to VMs, in bytes (read only)
host-metrics-live: true if the host is operational (read only)
logging: The syslog_destination key can be set to the hostname of a remote listening syslog service. (read/write map parameter)
allowed-operations: lists the operations allowed in this state. This list is advisory only and the server state may have changed by the time this field is read by a client. (read only set parameter)
current-operations: lists the operations currently in process. This list is advisory only and the server state may have changed by the time this field is read by a client. (read only set parameter)
patches: Set of host patches (read only set parameter)
blobs: Binary data store (read only)
memory-free-computed: A conservative estimate of the maximum amount of memory free on a host (read only)
ha-statefiles: The UUID(s) of all HA statefiles (read only)
ha-network-peers: The UUIDs of all hosts that could host the VMs on this host in case of failure (read only)
external-auth-type: Type of external authentication, for example, Active Directory. (read only)
external-auth-service-name: The name of the external authentication service (read only)
external-auth-configuration: Configuration information for the external authentication service. (read only map parameter)

XenServer hosts contain some other objects that also have parameter lists.
CPUs on XenServer hosts have the following parameters:

uuid: The unique identifier/object reference for the CPU (read only)
number: the number of the physical CPU core within the XenServer host (read only)
vendor: the vendor string for the CPU name, for example, "GenuineIntel" (read only)
speed: The CPU clock speed, in Hz (read only)
modelname: the vendor string for the CPU model, for example, "Intel(R) Xeon(TM) CPU 3.00GHz" (read only)
stepping: the CPU revision number (read only)
flags: the flags of the physical CPU (a decoded version of the features field) (read only)
utilisation: the current CPU utilization (read only)
host-uuid: the UUID of the host the CPU is in (read only)
model: the model number of the physical CPU (read only)
family: the physical CPU family number (read only)

Crash dumps on XenServer hosts have the following parameters:

uuid: The unique identifier/object reference for the crashdump (read only)
host: XenServer host the crashdump corresponds to (read only)
timestamp: Timestamp of the date and time that the crashdump occurred, in the form yyyymmdd-hhmmss-ABC, where ABC is the timezone indicator, for example, GMT (read only)
size: size of the crashdump, in bytes (read only)

8.4.5.1. host-backup

host-backup file-name=<backup_filename> host=<host_name>

Download a backup of the control domain of the specified XenServer host to the machine that the command is invoked
from, and save it there as a file with the name file-name.

Caution
While the xe host-backup command will work if executed on the local host (that is, without a specific hostname
specified), do not use it this way. Doing so would fill up the control domain partition with the backup file. The command
should only be used from a remote off-host machine where you have space to hold the backup file.
8.4.5.2. host-bugreport-upload

host-bugreport-upload [<host-selector>=<host_selector_value>...] [url=<destination_url>]
[http-proxy=<http_proxy_name>]

Generate a fresh bug report (using xen-bugtool, with all optional files included) and upload to the Citrix Support ftp site
or some other location.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
Optional parameters are http-proxy: use specified http proxy, and url: upload to this destination URL. If optional
parameters are not used, no proxy server is identified and the destination will be the default Citrix Support ftp site.
8.4.5.3. host-crashdump-destroy

host-crashdump-destroy uuid=<crashdump_uuid>

Delete a host crashdump specified by its UUID from the XenServer host.

8.4.5.4. host-crashdump-upload

host-crashdump-upload uuid=<crashdump_uuid>
[url=<destination_url>]
[http-proxy=<http_proxy_name>]

Upload a crashdump to the Citrix Support ftp site or other location. If optional parameters are not used, no proxy server is
identified and the destination will be the default Citrix Support ftp site. Optional parameters are http-proxy: use
specified http proxy, and url: upload to this destination URL.
8.4.5.5. host-disable

host-disable [<host-selector>=<host_selector_value>...]

Disables the specified XenServer hosts, which prevents any new VMs from starting on them. This prepares the XenServer
hosts to be shut down or rebooted.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

8.4.5.6. host-dmesg

host-dmesg [<host-selector>=<host_selector_value>...]

Get a Xen dmesg (the output of the kernel ring buffer) from specified XenServer hosts.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
8.4.5.7. host-emergency-management-reconfigure

host-emergency-management-reconfigure interface=<uuid_of_management_interface_pif>

Reconfigure the management interface of this XenServer host. Use this command only if the XenServer host is in
emergency mode, meaning that it is a member in a resource pool whose master has disappeared from the network and
could not be contacted for some number of retries.

8.4.5.8. host-enable

host-enable [<host-selector>=<host_selector_value>...]

Enables the specified XenServer hosts, which allows new VMs to be started on them.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
8.4.5.9. host-evacuate

host-evacuate [<host-selector>=<host_selector_value>...]

Live migrates all running VMs to other suitable hosts on a pool. The host must first be disabled using the host-disable
command.

If the evacuated host is the pool master, then another host must be selected to be the pool master. To change the pool
master with HA disabled, you need to use the pool-designate-new-master command. See Section 8.4.12.1, pool-designate-new-master
for details. With HA enabled, your only option is to shut down the server, which will cause HA to
elect a new master at random. See Section 8.4.5.22, host-shutdown.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
8.4.5.10. host-forget

host-forget uuid=<XenServer_host_UUID>

The xapi agent forgets about the specified XenServer host without contacting it explicitly.
Use the --force parameter to avoid being prompted to confirm that you really want to perform this operation.

Warning
Don't use this command if HA is enabled on the pool. Disable HA first, then enable it again after you've forgotten the host.

Tip
This command is useful if the XenServer host to "forget" is dead; however, if the XenServer host is live and part of the
pool, you should use xe pool-eject instead.
8.4.5.11. host-get-system-status

host-get-system-status filename=<name_for_status_file>
[entries=<comma_separated_list>] [output=<tar.bz2 | zip>] [<host-selector>=<host_selector_value>...]

Download system status information into the specified file. The optional parameter entries is a comma-separated list of
system status entries, taken from the capabilities XML fragment returned by the host-get-system-status-capabilities
command. See Section 8.4.5.12, host-get-system-status-capabilities for details. If not specified, all system
status information is saved in the file. The parameter output may be tar.bz2 (the default) or zip; if this parameter is not
specified, the file is saved in tar.bz2 form.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above).
8.4.5.12. host-get-system-status-capabilities

host-get-system-status-capabilities [<host-selector>=<host_selector_value>...]

Get system status capabilities for the specified host(s). The capabilities are returned as an XML fragment that looks
something like this:

<?xml version="1.0" ?>
<system-status-capabilities>
  <capability content-type="text/plain" default-checked="yes" key="xenserver-logs" \
    max-size="150425200" max-time="-1" min-size="150425200" min-time="-1" \
    pii="maybe"/>
  <capability content-type="text/plain" default-checked="yes" \
    key="xenserver-install" max-size="51200" max-time="-1" min-size="10240" \
    min-time="-1" pii="maybe"/>
  ...
</system-status-capabilities>

Each capability entity has a number of attributes.

key: A unique identifier for the capability.
content-type: Can be either text/plain or application/data. Indicates whether a UI can render the entries for human consumption.
default-checked: Can be either yes or no. Indicates whether a UI should select this entry by default.
min-size, max-size: Indicates an approximate range for the size, in bytes, of this entry. -1 indicates that the size is unimportant.
min-time, max-time: Indicate an approximate range for the time, in seconds, taken to collect this entry. -1 indicates the time is unimportant.
pii: Personally identifiable information. Indicates whether the entry would have information that would identify the system owner, or details of their network topology. This is one of:
    no: no PII will be in these entries
    yes: PII will likely or certainly be in these entries
    maybe: you might wish to audit these entries for PII
    if_customized: if the files are unmodified, then they will contain no PII, but since we encourage editing of these files, PII may have been introduced by such customization. This is used in particular for the networking scripts in the control domain.
    Passwords are never to be included in any bug report, regardless of any PII declaration.

The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above).
8.4.5.13. host-is-in-emergency-mode

host-is-in-emergency-mode

Returns true if the host the CLI is talking to is currently in emergency mode, false otherwise. This CLI command
works directly on slave hosts even with no master host present.

8.4.5.14. host-license-add

host-license-add license-file=<path/license_filename> [host-uuid=<XenServer_host_UUID>]

Parses a local license file and adds it to the specified XenServer host.
For details on licensing a host, see Chapter 5, XenServer Licensing in the XenServer Installation Guide.

8.4.5.15. host-license-view

host-license-view [host-uuid=<XenServer_host_UUID>]

Displays the contents of the XenServer host license.
8.4.5.16. host-logs-download

host-logs-download [file-name=<logfile_name>] [<host-selector>=<host_selector_value>...]

Download a copy of the logs of the specified XenServer hosts. The copy is saved by default in a timestamped file
named hostname-yyyy-mm-ddThh:mm:ssZ.tar.gz. You can specify a different filename using the
optional parameter file-name.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

Caution

While the xe host-logs-download command will work if executed on the local host (that is, without a specific
hostname specified), do not use it this way. Doing so will clutter the control domain partition with the copy of the logs. The
command should only be used from a remote off-host machine where you have space to hold the copy of the logs.
8.4.5.17. host-management-disable

host-management-disable

Disables the host agent listening on an external management network interface and disconnects all connected API clients
(such as XenCenter). Operates directly on the XenServer host the CLI is connected to, and is not forwarded to the
pool master if applied to a member XenServer host.

Warning
Be extremely careful when using this CLI command off-host, since once it is run it will not be possible to connect to the
control domain remotely over the network to re-enable it.

8.4.5.18. host-management-reconfigure

host-management-reconfigure [interface=<device>] | [pif-uuid=<uuid>]

Reconfigures the XenServer host to use the specified network interface as its management interface, which is the
interface that is used to connect to XenCenter. The command rewrites the MANAGEMENT_INTERFACE key
in /etc/xensource-inventory.
If the device name of an interface (which must have an IP address) is specified, the XenServer host will immediately
rebind. This works both in normal and emergency mode.
If the UUID of a PIF object is specified, the XenServer host determines which IP address to rebind to itself. It must not be
in emergency mode when this command is executed.

Warning
Be careful when using this CLI command off-host and ensure that you have network connectivity on the new interface (by
using xe pif-reconfigure-ip to set one up first). Otherwise, subsequent CLI commands will not be able to reach the
XenServer host.
8.4.5.19. host-reboot

host-reboot [<host-selector>=<host_selector_value>...]

Reboot the specified XenServer hosts. The specified XenServer hosts must be disabled first using the xe host-disable
command, otherwise a HOST_IN_USE error message is displayed.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

If the specified XenServer hosts are members of a pool, the loss of connectivity on shutdown will be handled and the pool
will recover when the XenServer hosts return. If you shut down a pool member, other members and the master will
continue to function. If you shut down the master, the pool will be out of action until the master is rebooted and back on
line (at which point the members will reconnect and synchronize with the master) or until you make one of the members
into the master.

8.4.5.20. host-restore

host-restore [file-name=<backup_filename>] [<host-selector>=<host_selector_value>...]

Restore a backup named file-name of the XenServer host control software. Note that the use of the word "restore"
here does not mean a full restore in the usual sense, it merely means that the compressed backup file has been
uncompressed and unpacked onto the secondary partition. After you've done a xe host-restore, you have to boot
the Install CD and use its Restore from Backup option.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
8.4.5.21. host-set-hostname-live

host-set-hostname-live host-uuid=<uuid_of_host> host-name=<new_hostname>

Change the hostname of the XenServer host specified by host-uuid. This command persistently sets both the
hostname in the control domain database and the actual Linux hostname of the XenServer host. Note
that hostname is not the same as the value of the name_label field.

8.4.5.22. host-shutdown

host-shutdown [<host-selector>=<host_selector_value>...]

Shut down the specified XenServer hosts. The specified XenServer hosts must be disabled first using the xe host-disable
command, otherwise a HOST_IN_USE error message is displayed.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.
If the specified XenServer hosts are members of a pool, the loss of connectivity on shutdown will be handled and the pool
will recover when the XenServer hosts return. If you shut down a pool member, other members and the master will
continue to function. If you shut down the master, the pool will be out of action until the master is rebooted and back on
line, at which point the members will reconnect and synchronize with the master, or until one of the members is made
into the master. If HA is enabled for the pool, one of the members will be made into a master automatically. If HA is
disabled, you must manually designate the desired server as master with the pool-designate-new-master
command. See Section 8.4.12.1, pool-designate-new-master.
8.4.5.23. host-syslog-reconfigure

host-syslog-reconfigure [<host-selector>=<host_selector_value>...]

Reconfigure the syslog daemon on the specified XenServer hosts. This command applies the configuration
information defined in the host logging parameter.
The host(s) on which this operation should be performed are selected using the standard selection mechanism (see host
selectors above). Optional arguments can be any number of the host selectors listed at the beginning of this section.

8.4.6. Log commands

Commands for working with logs.

8.4.6.1. log-get-keys

log-get-keys

List the keys of all of the logging subsystems.

8.4.6.2. log-reopen

log-reopen

Reopen all loggers. Use this command for rotating log files.

8.4.6.3. log-set-output

log-set-output output=nil | stderr | file:<filename> | syslog:<sysloglocation> [key=<key>] [level= debug | info |
warning | error]

Set the output of the specified logger. Log messages are filtered by the subsystem in which they originated and the log
level of the message. For example, send debug logging messages from the storage manager to a file by running the
following command:

xe log-set-output key=sm level=debug output=<file:/tmp/sm.log>

The optional parameter key specifies the particular logging subsystem. If this parameter is not set, it will default to all
logging subsystems.
The optional parameter level specifies the logging level. Valid values are:
debug
info
warning
error

8.4.7. Message commands

Commands for working with messages. Messages are created to notify users of significant events, and are displayed in
XenCenter as system alerts.
Message parameters

uuid: The unique identifier/object reference for the message (read only)
name: The unique name of the message (read only)
priority: The message priority. Higher numbers indicate greater priority (read only)
class: The message class, for example VM. (read only)
obj-uuid: The uuid of the affected object. (read only)
timestamp: The time that the message was generated. (read only)
body: The message content. (read only)

8.4.7.1. message-create

message-create name=<message_name> body=<message_text> [[host-uuid=<uuid_of_host>] | [sr-uuid=<uuid_of_sr>] | [vm-uuid=<uuid_of_vm>] | [pool-uuid=<uuid_of_pool>]]

Creates a new message.
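For instance, a backup script could record a custom event against a VM so that it appears as an alert in XenCenter; the
name, body and UUID below are placeholders:

xe message-create name=BACKUP_COMPLETE body="Nightly backup finished" vm-uuid=<vm_uuid>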
8.4.7.2. message-list

message-list

Lists all messages, or messages that match the specified standard selectable parameters.

8.4.8. Network commands

Commands for working with networks.
The network objects can be listed with the standard object listing command (xe network-list), and the parameters
manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
Network parameters
Networks have the following parameters:

uuid: The unique identifier/object reference for the network (read only)
name-label: The name of the network (read/write)
name-description: The description text of the network (read/write)
VIF-uuids: A list of unique identifiers of the VIFs (virtual network interfaces) that are attached from VMs to this network (read only set parameter)
PIF-uuids: A list of unique identifiers of the PIFs (physical network interfaces) that are attached from XenServer hosts to this network (read only set parameter)
bridge: name of the bridge corresponding to this network on the local XenServer host (read only)
other-config:static-routes: comma-separated list of <subnet>/<netmask>/<gateway> formatted entries specifying the gateway address via which to route subnets. For example, setting other-config:static-routes to 172.16.0.0/15/192.168.0.3,172.18.0.0/16/192.168.0.4 causes traffic on 172.16.0.0/15 to be routed over 192.168.0.3 and traffic on 172.18.0.0/16 to be routed over 192.168.0.4. (read/write)
other-config:ethtool-autoneg: set to no to disable autonegotiation of the physical interface or bridge. Default is yes. (read/write)
other-config:ethtool-rx: set to on to enable receive checksum, off to disable (read/write)
other-config:ethtool-tx: set to on to enable transmit checksum, off to disable (read/write)
other-config:ethtool-sg: set to on to enable scatter gather, off to disable (read/write)
other-config:ethtool-tso: set to on to enable tcp segmentation offload, off to disable (read/write)
other-config:ethtool-ufo: set to on to enable UDP fragment offload, off to disable (read/write)
other-config:ethtool-gso: set to on to enable generic segmentation offload, off to disable (read/write)
blobs: Binary data store (read only)

8.4.8.1. network-create

network-create name-label=<name_for_network> [name-description=<descriptive_text>]

Creates a new network.

8.4.8.2. network-destroy

network-destroy uuid=<network_uuid>

Destroys an existing network.

8.4.9. Patch (update) commands

Commands for working with XenServer host patches (updates). These are for the standard non-OEM editions of
XenServer; for commands relating to updating the OEM edition of XenServer, see Section 8.4.17, Update
commands for details.
The patch objects can be listed with the standard object listing command (xe patch-list), and the parameters
manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
Patch parameters

Patches have the following parameters:

uuid: The unique identifier/object reference for the patch (read only)
host-uuid: The unique identifier for the XenServer host to query (read only)
name-label: The name of the patch (read only)
name-description: The description string of the patch (read only)
applied: Whether or not the patch has been applied; true or false (read only)
size: The size of the patch, in bytes (read only)

8.4.9.1. patch-apply

patch-apply uuid=<patch_file_uuid>

Apply the specified patch file.

8.4.9.2. patch-clean

patch-clean uuid=<patch_file_uuid>

Delete the specified patch file from the XenServer host.

8.4.9.3. patch-pool-apply

patch-pool-apply uuid=<patch_uuid>

Apply the specified patch to all XenServer hosts in the pool.

8.4.9.4. patch-precheck

patch-precheck uuid=<patch_uuid> host-uuid=<host_uuid>

Run the prechecks contained within the specified patch on the specified XenServer host.

8.4.9.5. patch-upload

patch-upload file-name=<patch_filename>

Upload a specified patch file to the XenServer host. This prepares a patch to be applied. On success, the UUID of the
uploaded patch is printed out. If the patch has previously been uploaded, a PATCH_ALREADY_EXISTS error is
returned instead and the patch is not uploaded again.

8.4.10. PBD commands

Commands for working with PBDs (Physical Block Devices). These are the software objects through which the XenServer
host accesses storage repositories (SRs).
The PBD objects can be listed with the standard object listing command (xe pbd-list), and the parameters
manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
PBD parameters
PBDs have the following parameters:

uuid: The unique identifier/object reference for the PBD. (read only)
sr-uuid: the storage repository that the PBD points to (read only)
device-config: additional configuration information that is provided to the SR-backend-driver of a host (read only map parameter)
currently-attached: True if the SR is currently attached on this host, False otherwise (read only)
host-uuid: UUID of the physical machine on which the PBD is available (read only)
host: The host field is deprecated. Use host_uuid instead. (read only)
other-config: Additional configuration information. (read/write map parameter)

8.4.10.1. pbd-create

pbd-create host-uuid=<uuid_of_host>
sr-uuid=<uuid_of_sr>
[device-config:key=<corresponding_value>...]

Create a new PBD on a XenServer host. The read-only device-config parameter can only be set on creation.

To add a mapping of 'path' -> '/tmp', the command line should contain the argument device-config:path=/tmp

For a full list of supported device-config key/value pairs on each SR type see Chapter 3, Storage.

8.4.10.2. pbd-destroy

pbd-destroy uuid=<uuid_of_pbd>

Destroy the specified PBD.

8.4.10.3. pbd-plug

pbd-plug uuid=<uuid_of_pbd>

Attempts to plug in the PBD to the XenServer host. If this succeeds, the referenced SR (and the VDIs contained within)
should then become visible to the XenServer host.

8.4.10.4. pbd-unplug

pbd-unplug uuid=<uuid_of_pbd>

Attempt to unplug the PBD from the XenServer host.

8.4.11. PIF commands

Commands for working with PIFs (objects representing the physical network interfaces).
The PIF objects can be listed with the standard object listing command (xe pif-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
PIF parameters

PIFs have the following parameters:

uuid: the unique identifier/object reference for the PIF (read only)
device: machine-readable name of the interface (for example, eth0) (read only)
MAC: the MAC address of the PIF (read only)
other-config: additional PIF configuration name:value pairs (read/write map parameter)
physical: if true, the PIF points to an actual physical network interface (read only)
currently-attached: is the PIF currently attached on this host? true or false (read only)
MTU: Maximum Transmission Unit of the PIF in bytes (read only)
VLAN: VLAN tag for all traffic passing through this interface; -1 indicates that no VLAN tag is assigned (read only)
bond-master-of: the UUID of the bond this PIF is the master of (if any) (read only)
bond-slave-of: the UUID of the bond this PIF is the slave of (if any) (read only)
management: is this PIF designated to be a management interface for the control domain (read only)
network-uuid: the unique identifier/object reference of the virtual network to which this PIF is connected (read only)
network-name-label: the name of the virtual network to which this PIF is connected (read only)
host-uuid: the unique identifier/object reference of the XenServer host to which this PIF is connected (read only)
host-name-label: the name of the XenServer host to which this PIF is connected (read only)
IP-configuration-mode: type of network address configuration used; DHCP or static (read only)
IP: IP address of the PIF, defined here if IP-configuration-mode is static; undefined if DHCP (read only)
netmask: netmask of the PIF, defined here if IP-configuration-mode is static; undefined if supplied by DHCP (read only)
gateway: gateway address of the PIF, defined here if IP-configuration-mode is static; undefined if supplied by DHCP (read only)
DNS: DNS address of the PIF, defined here if IP-configuration-mode is static; undefined if supplied by DHCP (read only)
io_read_kbs: average read rate in kB/s for the device (read only)
io_write_kbs: average write rate in kB/s for the device (read only)
carrier: link state for this device (read only)
vendor-id: the ID assigned to the NIC's vendor (read only)
vendor-name: the NIC vendor's name (read only)
device-id: the ID assigned by the vendor to this NIC model (read only)
device-name: the name assigned by the vendor to this NIC model (read only)
speed: data transfer rate of the NIC (read only)
duplex: duplexing mode of the NIC; full or half (read only)
pci-bus-path: PCI bus path address (read only)
other-config:ethtool-speed: sets the speed of connection in Mbps (read/write)
other-config:ethtool-autoneg: set to no to disable autonegotiation of the physical interface or bridge. Default is yes. (read/write)
other-config:ethtool-duplex: sets duplexing capability of the PIF, either full or half (read/write)
other-config:ethtool-rx: set to on to enable receive checksum, off to disable (read/write)
other-config:ethtool-tx: set to on to enable transmit checksum, off to disable (read/write)
other-config:ethtool-sg: set to on to enable scatter gather, off to disable (read/write)
other-config:ethtool-tso: set to on to enable tcp segmentation offload, off to disable (read/write)
other-config:ethtool-ufo: set to on to enable udp fragment offload, off to disable (read/write)
other-config:ethtool-gso: set to on to enable generic segmentation offload, off to disable (read/write)
other-config:domain: comma-separated list used to set the DNS search path (read/write)
other-config:bond-miimon: interval between link liveness checks, in milliseconds (read/write)
other-config:bond-downdelay: number of milliseconds to wait after the link is lost before really considering the link to have gone. This allows for transient link loss. (read/write)
other-config:bond-updelay: number of milliseconds to wait after the link comes up before really considering it up. Allows for links flapping up. Default is 31s to allow time for switches to begin forwarding traffic. (read/write)
disallow-unplug: True if this PIF is a dedicated storage NIC, false otherwise (read/write)

Note
Changes made to the other-config fields of a PIF will only take effect after a reboot. Alternatively, use the xe pif-unplug and xe pif-plug commands to cause the PIF configuration to be rewritten.
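For example, to turn off generic segmentation offload on a PIF and have the change take effect without rebooting the host, write the key and then unplug and replug the PIF (the UUID is a placeholder):

xe pif-param-set uuid=<pif_uuid> other-config:ethtool-gso=off
xe pif-unplug uuid=<pif_uuid>
xe pif-plug uuid=<pif_uuid>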
8.4.11.1. pif-forget

pif-forget uuid=<uuid_of_pif>
Destroy the specified PIF object on a particular host.

8.4.11.2. pif-introduce

pif-introduce host-uuid=<UUID of XenServer host> mac=<mac_address_for_pif> device=<machine-readable name of the interface (for example, eth0)>
Create a new PIF object representing a physical interface on the specified XenServer host.

8.4.11.3. pif-plug

pif-plug uuid=<uuid_of_pif>
Attempt to bring up the specified physical interface.

8.4.11.4. pif-reconfigure-ip

pif-reconfigure-ip uuid=<uuid_of_pif> [ mode=<dhcp> | mode=<static> ] gateway=<network_gateway_address> IP=<static_ip_for_this_pif> netmask=<netmask_for_this_pif> [DNS=<dns_address>]
Modify the IP address of the PIF. For static IP configuration, set the mode parameter to static, with the gateway, IP, and netmask parameters set to the appropriate values. To use DHCP, set the mode parameter to DHCP and leave the static parameters undefined.
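For example, to switch a PIF to a static configuration (all of the values shown are placeholders):

xe pif-reconfigure-ip uuid=<pif_uuid> mode=static IP=192.168.0.10 netmask=255.255.255.0 gateway=192.168.0.1 DNS=192.168.0.2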
8.4.11.5. pif-scan

pif-scan host-uuid=<UUID of XenServer host>
Scan for new physical interfaces on a XenServer host.

8.4.11.6. pif-unplug

pif-unplug uuid=<uuid_of_pif>
Attempt to bring down the specified physical interface.

8.4.12. Pool commands

Commands for working with pools. A pool is an aggregate of one or more XenServer hosts. A pool uses one or more shared storage repositories so that the VMs running on one XenServer host in the pool can be migrated in near-real time (while still running, without needing to be shut down and brought back up) to another XenServer host in the pool. Each XenServer host is really a pool consisting of a single member by default. When a XenServer host is joined to a pool, it is designated as a member, and the master of the pool it has joined remains the master of the combined pool.
The singleton pool object can be listed with the standard object listing command (xe pool-list), and its parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
Pool parameters

Pools have the following parameters:

uuid: the unique identifier/object reference for the pool (read only)
name-label: the name of the pool (read/write)
name-description: the description string of the pool (read/write)
master: the unique identifier/object reference of the XenServer host designated as the pool's master (read only)
default-SR: the unique identifier/object reference of the default SR for the pool (read/write)
crash-dump-SR: the unique identifier/object reference of the SR where any crash dumps for pool members are saved (read/write)
suspend-image-SR: the unique identifier/object reference of the SR where suspended VMs on pool members are saved (read/write)
other-config: a list of key/value pairs that specify additional configuration parameters for the pool (read/write map parameter)
supported-sr-types: SR types that can be used by this pool (read only)
ha-enabled: True if HA is enabled for the pool, false otherwise (read only)
ha-configuration: reserved for future use (read only)
ha-statefiles: lists the UUIDs of the VDIs being used by HA to determine storage health (read only)
ha-host-failures-to-tolerate: the number of host failures to tolerate before sending a system alert (read/write)
ha-plan-exists-for: the number of host failures that can actually be handled, according to the calculations of the HA algorithm (read only)
ha-allow-overcommit: True if the pool is allowed to be overcommitted, False otherwise (read/write)
ha-overcommitted: True if the pool is currently overcommitted (read only)
blobs: binary data store (read only)
wlb-url: path to the WLB server (read only)
wlb-username: name of the user of the WLB service (read only)
wlb-enabled: True if WLB is enabled (read/write)
wlb-verify-cert: True if there is a certificate to verify (read/write)

8.4.12.1. pool-designate-new-master

pool-designate-new-master host-uuid=<UUID of member XenServer host to become new master>
Instruct the specified member XenServer host to become the master of an existing pool. This performs an orderly handover of the role of master host to another host in the resource pool. This command only works when the current master is online, and is not a replacement for the emergency mode commands listed below.

8.4.12.2. pool-dump-database

pool-dump-database file-name=<filename_to_dump_database_into_(on_client)>
Download a copy of the entire pool database and dump it into a file on the client.

8.4.12.3. pool-eject

pool-eject host-uuid=<UUID of XenServer host to eject>
Instruct the specified XenServer host to leave an existing pool.

8.4.12.4. pool-emergency-reset-master

pool-emergency-reset-master master-address=<address of the pool's master XenServer host>
Instruct a slave member XenServer host to reset its master address to the new value and attempt to connect to it. This command should not be run on master hosts.

8.4.12.5. pool-emergency-transition-to-master

pool-emergency-transition-to-master
Instruct a member XenServer host to become the pool master. This command is only accepted by the XenServer host if it has transitioned to emergency mode, meaning it is a member of a pool whose master has disappeared from the network and could not be contacted for some number of retries.
Note that this command may cause the password of the host to reset if it has been modified since joining the pool (see Section 8.4.18, User commands).

8.4.12.6. pool-ha-enable

pool-ha-enable heartbeat-sr-uuids=<SR_UUID_of_the_Heartbeat_SR>
Enable High Availability on the resource pool, using the specified SR UUID as the central storage heartbeat repository.
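For example, assuming a shared SR suitable for heartbeating has already been identified with xe sr-list, HA can be enabled and then tuned to tolerate one host failure with the standard pool parameter command (the UUIDs are placeholders):

xe pool-ha-enable heartbeat-sr-uuids=<shared_sr_uuid>
xe pool-param-set uuid=<pool_uuid> ha-host-failures-to-tolerate=1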
8.4.12.7. pool-ha-disable

pool-ha-disable
Disables the High Availability functionality on the resource pool.

8.4.12.8. pool-join

pool-join master-address=<address> master-username=<username> master-password=<password>
Instruct a XenServer host to join an existing pool.

8.4.12.9. pool-recover-slaves

pool-recover-slaves
Instruct the pool master to try and reset the master address of all members currently running in emergency mode. This is typically used after pool-emergency-transition-to-master has been used to set one of the members as the new master.

8.4.12.10. pool-restore-database

pool-restore-database file-name=<filename_to_restore_from_(on_client)> [dry-run=<true | false>]
Upload a database backup (created with pool-dump-database) to a pool. On receiving the upload, the master will restart itself with the new database.
There is also a dry run option, which allows you to check that the pool database can be restored without actually performing the operation. By default, dry-run is set to false.
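For example, to verify that a backup file restores cleanly before committing to the operation (the file name is a placeholder):

xe pool-restore-database file-name=/root/pool-backup.db dry-run=true
xe pool-restore-database file-name=/root/pool-backup.db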
8.4.12.11. pool-sync-database

pool-sync-database
Force the pool database to be synchronized across all hosts in the resource pool. This is not necessary in normal operation since the database is regularly automatically replicated, but can be useful for ensuring changes are rapidly replicated after performing a significant set of CLI operations.

8.4.13. Storage Manager commands

Commands for controlling Storage Manager plugins.
The storage manager objects can be listed with the standard object listing command (xe sm-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
SM parameters

SMs have the following parameters:

uuid: the unique identifier/object reference for the SM plugin (read only)
name-label: the name of the SM plugin (read only)
name-description: the description string of the SM plugin (read only)
type: the SR type that this plugin connects to (read only)
vendor: name of the vendor who created this plugin (read only)
copyright: copyright statement for this SM plugin (read only)
required-api-version: minimum SM API version required on the XenServer host (read only)
configuration: names and descriptions of device configuration keys (read only)
capabilities: capabilities of the SM plugin (read only)
driver-filename: the filename of the SR driver (read only)

8.4.14. SR commands
Commands for controlling SRs (storage repositories).

The SR objects can be listed with the standard object listing command (xe sr-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
SR parameters

SRs have the following parameters:

uuid: the unique identifier/object reference for the SR (read only)
name-label: the name of the SR (read/write)
name-description: the description string of the SR (read/write)
allowed-operations: list of the operations allowed on the SR in this state (read only set parameter)
current-operations: list of the operations that are currently in progress on this SR (read only set parameter)
VDIs: unique identifier/object reference for the virtual disks in this SR (read only set parameter)
PBDs: unique identifier/object reference for the PBDs attached to this SR (read only set parameter)
physical-utilisation: physical space currently utilized on this SR, in bytes. Note that for sparse disk formats, physical utilisation may be less than virtual allocation (read only)
physical-size: total physical size of the SR, in bytes (read only)
type: type of the SR, used to specify the SR backend driver to use (read only)
content-type: the type of the SR's content. Used to distinguish ISO libraries from other SRs. For storage repositories that store a library of ISOs, the content-type must be set to iso. In other cases, Citrix recommends that this be set either to empty, or the string user. (read only)
shared: True if this SR is capable of being shared between multiple XenServer hosts; False otherwise (read/write)
other-config: list of key/value pairs that specify additional configuration parameters for the SR (read/write map parameter)
host: the storage repository host name (read only)
virtual-allocation: sum of virtual-size values of all VDIs in this storage repository (in bytes) (read only)
sm-config: SM dependent data (read only map parameter)
blobs: binary data store (read only)

8.4.14.1. sr-create

sr-create name-label=<name> physical-size=<size> type=<type> content-type=<content_type> device-config:<config_name>=<value> [host-uuid=<XenServer host UUID>] [shared=<true | false>]
Creates an SR on the disk, introduces it into the database, and creates a PBD attaching the SR to a XenServer host. If shared is set to true, a PBD is created for each XenServer host in the pool; if shared is not specified or set to false, a PBD is created only for the XenServer host specified with host-uuid.
The exact device-config parameters differ depending on the device type. See Chapter 3, Storage for details of these parameters across the different storage backends.
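As an illustrative sketch only, the following creates a shared NFS SR for the whole pool. The server name and export path are placeholders, the device-config keys shown are those used by the NFS backend described in Chapter 3, Storage, and physical-size is omitted here on the assumption that the NFS backend does not require it:

xe sr-create name-label="NFS virtual disk storage" type=nfs content-type=user shared=true device-config:server=nfs.example.com device-config:serverpath=/vol/xenstorage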
8.4.14.2. sr-destroy

sr-destroy uuid=<sr_uuid>
Destroys the specified SR on the XenServer host.

8.4.14.3. sr-forget

sr-forget uuid=<sr_uuid>
The xapi agent forgets about a specified SR on the XenServer host, meaning that the SR is detached and you cannot access VDIs on it, but it remains intact on the source media (the data is not lost).

8.4.14.4. sr-introduce

sr-introduce name-label=<name> physical-size=<physical_size> type=<type> content-type=<content_type> uuid=<sr_uuid>
Just places an SR record into the database. The device-config parameters are specified by device-config:<parameter_key>=<parameter_value>, for example:

xe sr-introduce device-config:device=/dev/sdb1

Note
This command is never used in normal operation. It is an advanced operation which might be useful if an SR needs to be reconfigured as shared after it was created, or to help recover from various failure scenarios.

8.4.14.5. sr-probe

sr-probe type=<type> [host-uuid=<uuid_of_host>] [device-config:<config_name>=<value>]
Performs a backend-specific scan, using the provided device-config keys. If the device-config is complete for the SR backend, then this will return a list of the SRs present on the device, if any. If the device-config parameters are only partial, then a backend-specific scan will be performed, returning results that will guide you in improving the remaining device-config parameters. The scan results are returned as backend-specific XML, printed out on the CLI.
The exact device-config parameters differ depending on the device type. See Chapter 3, Storage for details of these parameters across the different storage backends.
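For example, probing an iSCSI target with only a partial device-config (the target address is a placeholder, and the type string and key follow the LVM over iSCSI backend described in Chapter 3, Storage) returns XML listing the IQNs available on that target, which can then be fed into a more complete probe or into sr-create:

xe sr-probe type=lvmoiscsi device-config:target=192.168.0.50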
8.4.14.6. sr-scan

sr-scan uuid=<sr_uuid>
Force an SR scan, syncing the xapi database with VDIs present in the underlying storage substrate.

8.4.15. Task commands

Commands for working with long-running asynchronous tasks. These are tasks such as starting, stopping, and suspending a Virtual Machine, which are typically made up of a set of other atomic subtasks that together accomplish the requested operation.

The task objects can be listed with the standard object listing command (xe task-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
Task parameters

Tasks have the following parameters:

uuid: the unique identifier/object reference for the Task (read only)
name-label: the name of the Task (read only)
name-description: the description string of the Task (read only)
resident-on: the unique identifier/object reference of the host on which the task is running (read only)
status: current status of the Task (read only)
progress: if the Task is still pending, this field contains the estimated percentage complete, from 0. to 1. If the Task has completed, successfully or unsuccessfully, this should be 1. (read only)
type: if the Task has successfully completed, this parameter contains the type of the encoded result, that is, the name of the class whose reference is in the result field; otherwise, this parameter's value is undefined (read only)
result: if the Task has completed successfully, this field contains the result value, either Void or an object reference; otherwise, this parameter's value is undefined (read only)
error_info: if the Task has failed, this parameter contains the set of associated error strings; otherwise, this parameter's value is undefined (read only)
allowed_operations: list of the operations allowed in this state (read only)
created: time the task was created (read only)
finished: time the task finished (that is, succeeded or failed). If task-status is pending, then the value of this field has no meaning (read only)
subtask_of: contains the UUID of the task this task is a sub-task of (read only)
subtasks: contains the UUID(s) of all the subtasks of this task (read only)

8.4.15.1. task-cancel

task-cancel [uuid=<task_uuid>]
Direct the specified Task to cancel and return.
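For example, a sketch using the standard field-filtering mechanism: a long-running task can be located by filtering the task list on its status and then cancelled (the UUID is a placeholder):

xe task-list status=pending
xe task-cancel uuid=<task_uuid>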

8.4.16. Template commands

Commands for working with VM templates.
Templates are essentially VMs with the is-a-template parameter set to true. A template is a "gold image" that contains all the various configuration settings to instantiate a specific VM. XenServer ships with a base set of templates, which range from generic "raw" VMs that can boot an OS vendor installation CD (RHEL, CentOS, SLES, Windows) to complete pre-configured OS instances (Debian Etch). With XenServer you can create VMs, configure them in standard forms for your particular needs, and save a copy of them as templates for future use in VM deployment.
The template objects can be listed with the standard object listing command (xe template-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
Template parameters

Templates have the following parameters:

uuid: the unique identifier/object reference for the template (read only)
name-label: the name of the template (read/write)
name-description: the description string of the template (read/write)
user-version: string for creators of VMs and templates to put version information (read/write)
is-a-template: true if this is a template. Template VMs can never be started; they are used only for cloning other VMs. Note that setting is-a-template using the CLI is not supported. (read/write)
is-control-domain: true if this is a control domain (domain 0 or a driver domain) (read only)
power-state: current power state; always halted for a template (read only)
memory-dynamic-max: dynamic maximum memory in bytes. Currently unused, but if changed the following constraint must be obeyed: memory_static_max >= memory_dynamic_max >= memory_dynamic_min >= memory_static_min. (read/write)
memory-dynamic-min: dynamic minimum memory in bytes. Currently unused, but if changed the same constraints as for memory-dynamic-max must be obeyed. (read/write)
memory-static-max: statically-set (absolute) maximum memory in bytes. This is the main value used to determine the amount of memory assigned to a VM. (read/write)
memory-static-min: statically-set (absolute) minimum memory in bytes. This represents the absolute minimum memory, and memory-static-min must be less than memory-static-max. This value is currently unused in normal operation, but the previous constraint must be obeyed. (read/write)
suspend-VDI-uuid: the VDI that a suspend image is stored on (has no meaning for a template) (read only)
VCPUs-params: configuration parameters for the selected VCPU policy (read/write map parameter).
You can tune a VCPU's pinning with
xe vm-param-set uuid=<vm_uuid> VCPUs-params:mask=1,2,3
A VM created from this template will then run on physical CPUs 1, 2, and 3 only.
You can also tune the VCPU priority (xen scheduling) with the cap and weight parameters; for example
xe vm-param-set uuid=<vm_uuid> VCPUs-params:weight=512
xe vm-param-set uuid=<vm_uuid> VCPUs-params:cap=100
A VM based on this template with a weight of 512 will get twice as much CPU as a domain with a weight of 256 on a contended XenServer host. Legal weights range from 1 to 65535 and the default is 256.
The cap optionally fixes the maximum amount of CPU a VM based on this template will be able to consume, even if the XenServer host has idle CPU cycles. The cap is expressed in percentage of one physical CPU: 100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, etc. The default, 0, means there is no upper cap.
VCPUs-max: maximum number of VCPUs (read/write)
VCPUs-at-startup: boot number of VCPUs (read/write)
actions-after-crash: action to take if a VM based on this template crashes (read/write)
console-uuids: virtual console devices (read only set parameter)
platform: platform-specific configuration (read/write map parameter)
allowed-operations: list of the operations allowed in this state (read only set parameter)
current-operations: list of the operations that are currently in progress on this template (read only set parameter)
allowed-VBD-devices: list of VBD identifiers available for use, represented by integers of the range 0-15. This list is informational only, and other devices may be used (but may not work). (read only set parameter)
allowed-VIF-devices: list of VIF identifiers available for use, represented by integers of the range 0-15. This list is informational only, and other devices may be used (but may not work). (read only set parameter)
HVM-boot-policy: the boot policy for HVM guests. Either BIOS Order or an empty string. (read/write)
HVM-boot-params: the order key controls the HVM guest boot order, represented as a string where each character is a boot method: d for the CD/DVD, c for the root disk, and n for network PXE boot. The default is dc. (read/write map parameter)
PV-kernel: path to the kernel (read/write)
PV-ramdisk: path to the initrd (read/write)
PV-args: string of kernel command line arguments (read/write)
PV-legacy-args: string of arguments to make legacy VMs based on this template boot (read/write)
PV-bootloader: name of or path to bootloader (read/write)
PV-bootloader-args: string of miscellaneous arguments for the bootloader (read/write)
last-boot-CPU-flags: describes the CPU flags on which a VM based on this template was last booted; not populated for a template (read only)
resident-on: the XenServer host on which a VM based on this template is currently resident; appears as <not in database> for a template (read only)
affinity: a XenServer host which a VM based on this template has preference for running on; used by the xe vm-start command to decide where to run the VM (read/write)
other-config: list of key/value pairs that specify additional configuration parameters for the template (read/write map parameter)
start-time: timestamp of the date and time that the metrics for a VM based on this template were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT); set to 1 Jan 1970 Z (beginning of Unix/POSIX epoch) for a template (read only)
install-time: timestamp of the date and time that the metrics for a VM based on this template were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT); set to 1 Jan 1970 Z (beginning of Unix/POSIX epoch) for a template (read only)
memory-actual: the actual memory being used by a VM based on this template; 0 for a template (read only)
VCPUs-number: the number of virtual CPUs assigned to a VM based on this template; 0 for a template (read only)
VCPUs-utilisation: list of virtual CPUs and their weight (read only map parameter)
os-version: the version of the operating system for a VM based on this template; appears as <not in database> for a template (read only map parameter)
PV-drivers-version: the versions of the paravirtualized drivers for a VM based on this template; appears as <not in database> for a template (read only map parameter)
PV-drivers-up-to-date: flag for latest version of the paravirtualized drivers for a VM based on this template; appears as <not in database> for a template (read only)
memory: memory metrics reported by the agent on a VM based on this template; appears as <not in database> for a template (read only map parameter)
disks: disk metrics reported by the agent on a VM based on this template; appears as <not in database> for a template (read only map parameter)
networks: network metrics reported by the agent on a VM based on this template; appears as <not in database> for a template (read only map parameter)
other: other metrics reported by the agent on a VM based on this template; appears as <not in database> for a template (read only map parameter)
guest-metrics-last-updated: timestamp when the last write to these fields was performed by the in-guest agent, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT) (read only)
actions-after-shutdown: action to take after the VM has shutdown (read/write)
actions-after-reboot: action to take after the VM has rebooted (read/write)
possible-hosts: list of hosts that could potentially host the VM (read only)
HVM-shadow-multiplier: multiplier applied to the amount of shadow that will be made available to the guest (read/write)
dom-id: domain ID (if available, -1 otherwise) (read only)
recommendations: XML specification of recommended values and ranges for properties of this VM (read only)
xenstore-data: data to be inserted into the xenstore tree (/local/domain/<domid>/vm-data) after the VM is created (read/write map parameter)
is-a-snapshot: True if this template is a VM snapshot (read only)
snapshot_of: the UUID of the VM that this template is a snapshot of (read only)
snapshots: the UUID(s) of any snapshots that have been taken of this template (read only)
snapshot_time: the timestamp of the most recent VM snapshot taken (read only)
memory-target: the target amount of memory set for this template (read only)
blocked-operations: lists the operations that cannot be performed on this template (read/write map parameter)
last-boot-record: record of the last boot parameters for this template, in XML format (read only)
ha-always-run: True if an instance of this template will always be restarted on another host in case of the failure of the host it is resident on (read/write)
ha-restart-priority: 1, 2, 3 or best effort. 1 is the highest restart priority (read/write)
blobs: binary data store (read only)
live: only relevant to a running VM (read only)

8.4.16.1. template-export

template-export template-uuid=<uuid_of_existing_template> filename=<filename_for_new_template>
Exports a copy of a specified template to a file with the specified new filename.

8.4.17. Update commands

Commands for working with updates to the OEM edition of XenServer. For commands relating to updating the standard non-OEM editions of XenServer, see Section 8.4.9, Patch (update) commands for details.

8.4.17.1. update-upload

update-upload file-name=<name_of_upload_file>
Streams a new software image to an OEM edition XenServer host. You must then restart the host for this to take effect.

8.4.18. User commands

8.4.18.1. user-password-change

user-password-change old=<old_password> new=<new_password>
Changes the password of the logged-in user. The old password field is not checked because you require supervisor privilege to make this call.

8.4.19. VBD commands

Commands for working with VBDs (Virtual Block Devices).

A VBD is a software object that connects a VM to the VDI, which represents the contents of the virtual disk. The VBD has the attributes which tie the VDI to the VM (is it bootable, its read/write metrics, and so on), while the VDI has the information on the physical attributes of the virtual disk (which type of SR, whether the disk is shareable, whether the media is read/write or read only, and so on).
The VBD objects can be listed with the standard object listing command (xe vbd-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
VBD parameters

VBDs have the following parameters:

uuid: the unique identifier/object reference for the VBD (read only)
vm-uuid: the unique identifier/object reference for the VM this VBD is attached to (read only)
vm-name-label: the name of the VM this VBD is attached to (read only)
vdi-uuid: the unique identifier/object reference for the VDI this VBD is mapped to (read only)
vdi-name-label: the name of the VDI this VBD is mapped to (read only)
empty: if true, this represents an empty drive (read only)
device: the device seen by the guest, for example hda1 (read only)
userdevice: user-friendly device name (read/write)
bootable: true if this VBD is bootable (read/write)
mode: the mode the VBD should be mounted with (read/write)
type: how the VBD appears to the VM, for example disk or CD (read/write)
currently-attached: True if the VBD is currently attached on this host, false otherwise (read only)
storage-lock: True if a storage-level lock was acquired (read only)
status-code: error/success code associated with the last attach operation (read only)
status-detail: error/success information associated with the last attach operation status (read only)
qos_algorithm_type: the QoS algorithm to use (read/write)
qos_algorithm_params: parameters for the chosen QoS algorithm (read/write map parameter)
qos_supported_algorithms: supported QoS algorithms for this VBD (read only set parameter)
io_read_kbs: average read rate in kB per second for this VBD (read only)
io_write_kbs: average write rate in kB per second for this VBD (read only)
allowed-operations: list of the operations allowed in this state. This list is advisory only and the server state may have changed by the time this field is read by a client. (read only set parameter)
current-operations: links each of the running tasks using this object (by reference) to a current_operation enum which describes the nature of the task (read only set parameter)
unpluggable: true if this VBD supports hot-unplug (read/write)
attachable: True if the device can be attached (read only)
other-config: additional configuration (read/write map parameter)

8.4.19.1. vbd-create

vbd-create vm-uuid=<uuid_of_the_vm> device=<device_value> vdi-uuid=<uuid_of_the_vdi_the_vbd_will_connect_to> [bootable=true] [type=<Disk | CD>] [mode=<RW | RO>]
Create a new VBD on a VM.

Appropriate values for the device field are listed in the parameter allowed-VBD-devices on the specified VM. Before any VBDs exist there, the allowable values are integers from 0-15.
If the type is Disk, vdi-uuid is required. Mode can be RO or RW for a Disk.
If the type is CD, vdi-uuid is optional; if no VDI is specified, an empty VBD will be created for the CD. Mode must be RO for a CD.
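For example, to attach an existing VDI to a VM as an additional read/write disk and make it visible while the VM is running (the UUIDs are placeholders, and the device number is chosen from allowed-VBD-devices):

xe vbd-create vm-uuid=<vm_uuid> vdi-uuid=<vdi_uuid> device=1 type=Disk mode=RW
xe vbd-plug uuid=<uuid_of_new_vbd>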
8.4.19.2. vbd-destroy

vbd-destroy uuid=<uuid_of_vbd>
Destroy the specified VBD.
If the VBD has its other-config:owner parameter set to true, the associated VDI will also be destroyed.

8.4.19.3. vbd-eject

vbd-eject uuid=<uuid_of_vbd>
Remove the media from the drive represented by a VBD. This command only works if the media is of a removable type (a physical CD or an ISO); otherwise an error message VBD_NOT_REMOVABLE_MEDIA is returned.

8.4.19.4. vbd-insert

vbd-insert uuid=<uuid_of_vbd> vdi-uuid=<uuid_of_vdi_containing_media>
Insert new media into the drive represented by a VBD. This command only works if the media is of a removable type (a physical CD or an ISO); otherwise an error message VBD_NOT_REMOVABLE_MEDIA is returned.

8.4.19.5. vbd-plug

vbd-plug uuid=<uuid_of_vbd>
Attempt to attach the VBD while the VM is in the running state.

8.4.19.6. vbd-unplug

vbd-unplug uuid=<uuid_of_vbd>
Attempts to detach the VBD from the VM while it is in the running state.

8.4.20. VDI commands

Commands for working with VDIs (Virtual Disk Images).

A VDI is a software object that represents the contents of the virtual disk seen by a VM, as opposed to the VBD, which is a connector object that ties a VM to the VDI. The VDI has the information on the physical attributes of the virtual disk (which type of SR, whether the disk is shareable, whether the media is read/write or read only, and so on), while the VBD has the attributes which tie the VDI to the VM (is it bootable, its read/write metrics, and so on).
The VDI objects can be listed with the standard object listing command (xe vdi-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
VDI parameters

VDIs have the following parameters:

uuid: the unique identifier/object reference for the VDI (read only)
name-label: the name of the VDI (read/write)
name-description: the description string of the VDI (read/write)
allowed-operations: a list of the operations allowed in this state (read only set parameter)
current-operations: a list of the operations that are currently in progress on this VDI (read only set parameter)
sr-uuid: SR in which the VDI resides (read only)
vbd-uuids: a list of VBDs that refer to this VDI (read only set parameter)
crashdump-uuids: list of crash dumps that refer to this VDI (read only set parameter)
virtual-size: size of disk as presented to the VM, in bytes. Note that, depending on the storage backend type, the size may not be respected exactly (read only)
physical-utilisation: amount of physical space that the VDI is currently taking up on the SR, in bytes (read only)
type: type of VDI, for example, System or User (read only)
sharable: true if this VDI may be shared (read only)
read-only: true if this VDI can only be mounted read-only (read only)
storage-lock: true if this VDI is locked at the storage level (read only)
parent: references the parent VDI, if this VDI is part of a chain (read only)
missing: true if the SR scan operation reported this VDI as not present (read only)
other-config: additional configuration information for this VDI (read/write map parameter)
sr-name-label: name of the containing storage repository (read only)
location: location information (read only)
managed: true if the VDI is managed (read only)
xenstore-data: data to be inserted into the xenstore tree (/local/domain/0/backend/vbd/<domid>/<device-id>/sm-data) after the VDI is attached. This is generally set by the SM backends on vdi_attach. (read only map parameter)
sm-config: SM dependent data (read only map parameter)
is-a-snapshot: True if this VDI is a VM storage snapshot (read only)
snapshot_of: the UUID of the storage this VDI is a snapshot of (read only)
snapshots: the UUID(s) of all snapshots of this VDI (read only)
snapshot_time: the timestamp of the snapshot operation that created this VDI (read only)

8.4.20.1. vdi-clone

vdi-clone uuid=<uuid_of_the_vdi> [driver-params:<key=value>]
Create a new, writable copy of the specified VDI that can be used directly. It is a variant of vdi-copy that is capable of exposing high-speed image clone facilities where they exist.
The optional driver-params map parameter can be used for passing extra vendor-specific configuration information to the back end storage driver that the VDI is based on. See the storage vendor driver documentation for details.

8.4.20.2. vdi-copy

vdi-copy uuid=<uuid_of_the_vdi> sr-uuid=<uuid_of_the_destination_sr>
Copy a VDI to a specified SR.

8.4.20.3. vdi-create

vdi-create sr-uuid=<uuid_of_the_sr_where_you_want_to_create_the_vdi> name-label=<name_for_the_vdi> type=<system | user | suspend | crashdump> virtual-size=<size_of_virtual_disk> sm-config-*=<storage_specific_configuration_data>
Create a VDI.
The virtual-size parameter can be specified in bytes or using the IEC standard suffixes KiB (2^10 bytes), MiB (2^20 bytes), GiB (2^30 bytes), and TiB (2^40 bytes).

Note
SR types that support sparse allocation of disks (such as Local VHD and NFS) do not enforce virtual allocation of disks. Users should therefore take great care when over-allocating virtual disk space on an SR. If an over-allocated SR does become full, disk space must be made available either on the SR target substrate or by deleting unused VDIs in the SR.

Note
Some SR types might round up the virtual-size value to make it divisible by a configured block size.
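For example, to create a 10 GiB user disk on a particular SR (the SR UUID and the name are placeholders):

xe vdi-create sr-uuid=<sr_uuid> name-label="data-disk-1" type=user virtual-size=10GiB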
8.4.20.4. vdi-destroy

vdi-destroy uuid=<uuid_of_vdi>
Destroy the specified VDI.

Note
In the case of Local VHD and NFS SR types, disk space is not immediately released on vdi-destroy, but periodically during a storage repository scan operation. Users that need to force deleted disk space to be made available should call sr-scan manually.

8.4.20.5. vdi-forget

vdi-forget uuid=<uuid_of_vdi>
Unconditionally removes a VDI record from the database without touching the storage backend. In normal operation, you should be using vdi-destroy instead.

8.4.20.6. vdi-import

vdi-import uuid=<uuid_of_vdi> filename=<filename_of_raw_vdi>
Import a raw VDI.

8.4.20.7. vdi-introduce

vdi-introduce uuid=<uuid_of_vdi> sr-uuid=<uuid_of_sr_to_import_into> name-label=<name_of_the_new_vdi> type=<system | user | suspend | crashdump> location=<device_location_(varies_by_storage_type)> [name-description=<description_of_vdi>] [sharable=<yes | no>] [read-only=<yes | no>] [other-config=<map_to_store_misc_user_specific_data>] [xenstore-data=<map_of_additional_xenstore_keys>] [sm-config=<storage_specific_configuration_data>]
Create a VDI object representing an existing storage device, without actually modifying or creating any storage. This command is primarily used internally to automatically introduce hot-plugged storage devices.

8.4.20.8. vdi-resize

vdi-resize uuid=<vdi_uuid> disk-size=<new_size_for_disk>
Resize the VDI specified by UUID.

8.4.20.9. vdi-snapshot

vdi-snapshot uuid=<uuid_of_the_vdi> [driver-params=<params>]
Produces a read-write version of a VDI that can be used as a reference for backup and/or templating purposes. You can perform a backup from a snapshot rather than installing and running backup software inside the VM. The VM can continue running while external backup software streams the contents of the snapshot to the backup media. Similarly, a snapshot can be used as a "gold image" on which to base a template. A template can be made using any VDIs.
The optional driver-params map parameter can be used for passing extra vendor-specific configuration information to the back end storage driver that the VDI is based on. See the storage vendor driver documentation for details.
A clone of a snapshot should always produce a writable VDI.
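As a backup-oriented sketch (the UUIDs are placeholders), take a snapshot of a VM's disk and then clone the snapshot to obtain a writable VDI that can be attached to a backup VM or exported:

xe vdi-snapshot uuid=<vdi_uuid>
xe vdi-clone uuid=<snapshot_vdi_uuid>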
8.4.20.10. vdi-unlock

vdi-unlock uuid=<uuid_of_vdi_to_unlock> [force=true]
Attempts to unlock the specified VDIs. If force=true is passed to the command, it will force the unlocking operation.

8.4.21. VIF commands

Commands for working with VIFs (Virtual network interfaces).
The VIF objects can be listed with the standard object listing command (xe vif-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
VIF parameters

VIFs have the following parameters:

uuid: the unique identifier/object reference for the VIF (read only)
vm-uuid: the unique identifier/object reference for the VM that this VIF resides on (read only)
vm-name-label: the name of the VM that this VIF resides on (read only)
allowed-operations: a list of the operations allowed in this state (read only set parameter)
current-operations: a list of the operations that are currently in progress on this VIF (read only set parameter)
device: integer label of this VIF, indicating the order in which VIF backends were created (read only)
MAC: MAC address of the VIF, as exposed to the VM (read only)
MTU: Maximum Transmission Unit of the VIF in bytes. This parameter is read-only, but you can override the MTU setting with the mtu key using the other-config map parameter. For example, to reset the MTU on a virtual NIC to use jumbo frames:
xe vif-param-set uuid=<vif_uuid> other-config:mtu=9000
(read only)
currently-attached: true if the device is currently attached (read only)
qos_algorithm_type: QoS algorithm to use (read/write)
qos_algorithm_params: parameters for the chosen QoS algorithm (read/write map parameter)
qos_supported_algorithms: supported QoS algorithms for this VIF (read only set parameter)
MAC-autogenerated: True if the MAC address of the VIF was automatically generated (read only)
other-config: additional configuration key:value pairs (read/write map parameter)
other-config:ethtool-rx: set to on to enable receive checksum, off to disable (read/write)
other-config:ethtool-tx: set to on to enable transmit checksum, off to disable (read/write)
other-config:ethtool-sg: set to on to enable scatter gather, off to disable (read/write)
other-config:ethtool-tso: set to on to enable tcp segmentation offload, off to disable (read/write)
other-config:ethtool-ufo: set to on to enable udp fragment offload, off to disable (read/write)
other-config:ethtool-gso: set to on to enable generic segmentation offload, off to disable (read/write)
other-config:promiscuous: true for a VIF to be promiscuous on the bridge, so that it sees all traffic over the bridge. Useful for running an Intrusion Detection System (IDS) or similar in a VM. (read/write)
network-uuid: the unique identifier/object reference of the virtual network to which this VIF is connected (read only)
network-name-label: the descriptive name of the virtual network to which this VIF is connected (read only)
io_read_kbs: average read rate in kB/s for this VIF (read only)
io_write_kbs: average write rate in kB/s for this VIF (read only)

8.4.21.1. vif-create

vif-create vm-uuid=<uuid_of_the_vm> device=<see below> network-uuid=<uuid_of_the_network_the_vif_will_connect_to> [mac=<mac_address>]
Create a new VIF on a VM.
Appropriate values for the device field are listed in the parameter allowed-VIF-devices on the specified VM. Before any VIFs exist there, the allowable values are integers from 0-15.
The mac parameter is the standard MAC address in the form aa:bb:cc:dd:ee:ff. If you leave it unspecified, an appropriate random MAC address will be created. You can also explicitly set a random MAC address by specifying mac=random.
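For example, to give a VM a second interface on a particular network with a randomly generated MAC address and bring it up while the VM is running (the UUIDs are placeholders, and the device number is taken from allowed-VIF-devices):

xe vif-create vm-uuid=<vm_uuid> network-uuid=<network_uuid> device=1 mac=random
xe vif-plug uuid=<uuid_of_new_vif>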
8.4.21.2. vif-destroy

vif-destroy uuid=<uuid_of_vif>
Destroy a VIF.

8.4.21.3. vif-plug

vif-plug uuid=<uuid_of_vif>
Attempt to attach the VIF while the VM is in the running state.

8.4.21.4. vif-unplug

vif-unplug uuid=<uuid_of_vif>
Attempts to detach the VIF from the VM while it is running.

8.4.22. VLAN commands

Commands for working with VLANs (virtual networks). To list and edit virtual interfaces, refer to the PIF commands, which have a VLAN parameter to signal that they have an associated virtual network (see Section 8.4.11, PIF commands). For example, to list VLANs you need to use xe pif-list.

8.4.22.1. vlan-create

vlan-create pif-uuid=<uuid_of_pif> vlan=<vlan_number> network-uuid=<uuid_of_network>
Create a new VLAN on a XenServer host.

8.4.22.2. pool-vlan-create

pool-vlan-create pif-uuid=<uuid_of_pif> vlan=<vlan_number> network-uuid=<uuid_of_network>
Create a new VLAN on all hosts in a pool, by determining which interface (for example, eth0) the specified network is on on each host and creating and plugging a new PIF object on each host accordingly.

8.4.22.3. vlan-destroy

vlan-destroy uuid=<uuid_of_pif_mapped_to_vlan>
Destroy a VLAN. Requires the UUID of the PIF that represents the VLAN.

8.4.23. VM commands
Commands for controlling VMs and their attributes.
VM selectors
Several of the commands listed here have a common mechanism for selecting one or more VMs on which to perform the operation. The simplest way is by supplying the argument vm=<name_or_uuid>. VMs can also be specified by filtering the full list of VMs on the values of fields. For example, specifying power-state=halted will select all VMs whose power-state parameter is equal to halted. Where multiple VMs match, the option --multiple must be specified to perform the operation. The full list of parameters that can be matched is described at the beginning of this section, and can be obtained by the command xe vm-list params=all. If no parameters to select VMs are given, the operation will be performed on all VMs.
The VM objects can be listed with the standard object listing command (xe vm-list), and the parameters manipulated with the standard parameter commands. See Section 8.3.2, Low-level param commands for details.
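As a sketch of the selector mechanism, the following lists all halted VMs and then starts every VM that matches the same filter; --multiple is required because more than one VM may match:

xe vm-list power-state=halted
xe vm-start power-state=halted --multiple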
VM parameters
VMs have the following parameters:

Note
All writeable VM parameter values can be changed while the VM is running, but the new parameters are not applied dynamically and will not be applied until the VM is rebooted.

uuid: the unique identifier/object reference for the VM (read only)
name-label: the name of the VM (read/write)
name-description: the description string of the VM (read/write)
user-version: string for creators of VMs and templates to put version information (read/write)
is-a-template: False unless this is a template; template VMs can never be started, they are used only for cloning other VMs. Note that setting is-a-template using the CLI is not supported. (read/write)
is-control-domain: True if this is a control domain (domain 0 or a driver domain) (read only)
power-state: current power state (read only)
memory-dynamic-max: dynamic maximum in bytes (read/write)
memory-dynamic-min: dynamic minimum in bytes (read/write)
memory-static-max: statically-set (absolute) maximum in bytes. If you want to change this value, the VM must be shut down. (read/write)
memory-static-min: statically-set (absolute) minimum in bytes. If you want to change this value, the VM must be shut down. (read/write)
suspend-VDI-uuid: the VDI that a suspend image is stored on (read only)
VCPUs-params: configuration parameters for the selected VCPU policy (read/write map parameter).
You can tune a VCPU's pinning with
xe vm-param-set uuid=<vm_uuid> VCPUs-params:mask=1,2,3
The selected VM will then run on physical CPUs 1, 2, and 3 only.
You can also tune the VCPU priority (xen scheduling) with the cap and weight parameters; for example
xe vm-param-set uuid=<vm_uuid> VCPUs-params:weight=512
xe vm-param-set uuid=<vm_uuid> VCPUs-params:cap=100
A VM with a weight of 512 will get twice as much CPU as a domain with a weight of 256 on a contended XenServer host. Legal weights range from 1 to 65535 and the default is 256.
The cap optionally fixes the maximum amount of CPU a VM will be able to consume, even if the XenServer host has idle CPU cycles. The cap is expressed in percentage of one physical CPU: 100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, etc. The default, 0, means there is no upper cap.
VCPUs-max: maximum number of virtual CPUs (read/write)
VCPUs-at-startup: boot number of virtual CPUs (read/write)
actions-after-crash: action to take if the VM crashes. For PV guests, valid parameters are: preserve (for analysis only), coredump_and_restart (record a coredump and reboot VM), coredump_and_destroy (record a coredump and leave VM halted), restart (no coredump and restart VM), and destroy (no coredump and leave VM halted). (read/write)
console-uuids: virtual console devices (read only set parameter)
platform: platform-specific configuration (read/write map parameter)
allowed-operations: list of the operations allowed in this state (read only set parameter)
current-operations: a list of the operations that are currently in progress on the VM (read only set parameter)
allowed-VBD-devices: list of VBD identifiers available for use, represented by integers of the range 0-15. This list is informational only, and other devices may be used (but may not work). (read only set parameter)
allowed-VIF-devices: list of VIF identifiers available for use, represented by integers of the range 0-15. This list is informational only, and other devices may be used (but may not work). (read only set parameter)
HVM-boot-policy: the boot policy for HVM guests. Either BIOS Order or an empty string. (read/write)
HVM-boot-params: the order key controls the HVM guest boot order, represented as a string where each character is a boot method: d for the CD/DVD, c for the root disk, and n for network PXE boot. The default is dc. (read/write map parameter)
HVM-shadow-multiplier: floating point value which controls the amount of shadow memory overhead to grant the VM. Defaults to 1.0 (the minimum value), and should only be changed by advanced users. (read/write)
PV-kernel: path to the kernel (read/write)
PV-ramdisk: path to the initrd (read/write)
PV-args: string of kernel command line arguments (read/write)
PV-legacy-args: string of arguments to make legacy VMs boot (read/write)
PV-bootloader: name of or path to bootloader (read/write)
PV-bootloader-args: string of miscellaneous arguments for the bootloader (read/write)
last-boot-CPU-flags: describes the CPU flags on which the VM was last booted (read only)
resident-on: the XenServer host on which a VM is currently resident (read only)
affinity: a XenServer host which the VM has preference for running on; used by the xe vm-start command to decide where to run the VM (read/write)
other-config: a list of key/value pairs that specify additional configuration parameters for the VM. For example, a VM will be started automatically after host boot if the other-config parameter includes the key/value pair auto_poweron: true (read/write map parameter)
start-time: timestamp of the date and time that the metrics for the VM were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT) (read only)
install-time: timestamp of the date and time that the metrics for the VM were read, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT) (read only)
memory-actual: the actual memory being used by a VM (read only)
VCPUs-number: the number of virtual CPUs assigned to the VM. For a paravirtualized Linux VM, this number can differ from VCPUs-max and can be changed without rebooting the VM using the vm-vcpu-hotplug command. See Section 8.4.23.30, vm-vcpu-hotplug. Windows VMs always run with the number of vCPUs set to VCPUs-max and must be rebooted to change this value. Note that performance will drop sharply if you set VCPUs-number to a value greater than the number of physical CPUs on the XenServer host. (read only)
VCPUs-utilisation: a list of virtual CPUs and their weight (read only map parameter)
os-version: the version of the operating system for the VM (read only map parameter)
PV-drivers-version: the versions of the paravirtualized drivers for the VM (read only map parameter)
PV-drivers-up-to-date: flag for latest version of the paravirtualized drivers for the VM (read only)
memory: memory metrics reported by the agent on the VM (read only map parameter)
disks: disk metrics reported by the agent on the VM (read only map parameter)
networks: network metrics reported by the agent on the VM (read only map parameter)
other: other metrics reported by the agent on the VM (read only map parameter)
guest-metrics-last-updated: timestamp when the last write to these fields was performed by the in-guest agent, in the form yyyymmddThh:mm:ss z, where z is the single-letter military timezone indicator, for example, Z for UTC (GMT) (read only)
actions-after-shutdown: action to take after the VM has shutdown (read/write)
actions-after-reboot: action to take after the VM has rebooted (read/write)
possible-hosts: potential hosts of this VM (read only)
dom-id: domain ID (if available, -1 otherwise) (read only)
recommendations: XML specification of recommended values and ranges for properties of this VM (read only)
xenstore-data: data to be inserted into the xenstore tree (/local/domain/<domid>/vm-data) after the VM is created (read/write map parameter)
is-a-snapshot: True if this VM is a snapshot (read only)
snapshot_of: the UUID of the VM this is a snapshot of (read only)
snapshots: the UUID(s) of all snapshots of this VM (read only)
snapshot_time: the timestamp of the snapshot operation that created this VM snapshot (read only)
memory-target: the target amount of memory set for this VM (read only)
blocked-operations: lists the operations that cannot be performed on this VM (read/write map parameter)
last-boot-record: record of the last boot parameters for this VM, in XML format (read only)
ha-always-run: True if this VM will always be restarted on another host in case of the failure of the host it is resident on (read/write)
ha-restart-priority: 1, 2, 3 or best effort. 1 is the highest restart priority (read/write)
blobs: binary data store (read only)
live: True if the VM is running, false if HA suspects that the VM may not be running. (read only)

8.4.23.1. vm-cd-add

vmcdadd cd-name=<name_of_new_cd> device=<integer_value_of_an_available_vbd>


[<vm-jelector>=<vm_jelector_value>...]
Add a new virtual CD to the jelected VM. The device parameter jhould be jelected from the value of
the allowed-VBD-devicej parameter of the VM.
The VM or VMj on which thij operation jhould be performed are jelected ujing the jtandard jelection mechanijm
(jee VM jelectorj). Optional argumentj can be any number of the VM parameterj lijted at the beginning of thij jection.
8.4.23.2. vm-cd-eject

vmcdeject [<vm-jelector>=<vm_jelector_value>...]
Eject a CD from the virtual CD drive. Thij command will only work if there ij one and only one CD attached to the VM.
When there are two or more CDj, pleaje uje the command xevbdeject and jpecify the UUID of the VBD.
The VM or VMj on which thij operation jhould be performed are jelected ujing the jtandard jelection mechanijm
(jee VM jelectorj). Optional argumentj can be any number of the VM parameterj lijted at the beginning of thij jection.
8.4.23.3. vm-cd-insert

vm-cd-insert cd-name=<name_of_cd> [<vm-selector>=<vm_selector_value>...]

Insert a CD into the virtual CD drive. This command will only work if there is one and only one empty CD device
attached to the VM. When there are two or more empty CD devices, please use the command xe vbd-insert and
specify the UUIDs of the VBD and of the VDI to insert.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
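For example, assuming the VM has exactly one empty virtual CD drive and an ISO named example.iso (a hypothetical name) is available, the CD could be inserted with:

xe vm-cd-insert vm=<vm_name_label> cd-name=example.iso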
8.4.23.4. vm-cd-list

vm-cd-list [vbd-params] [vdi-params] [<vm-selector>=<vm_selector_value>...]

Lists CDs attached to the specified VMs.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
You can also select which VBD and VDI parameters to list.
8.4.23.5. vm-cd-remove

vm-cd-remove cd-name=<name_of_cd> [<vm-selector>=<vm_selector_value>...]

Remove a virtual CD from the specified VMs.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.6. vm-clone

vm-clone new-name-label=<name_for_clone>
[new-name-description=<description_for_clone>] [<vm-selector>=<vm_selector_value>...]

Clone an existing VM, using the storage-level fast disk clone operation where available. Specify the name and the optional
description for the resulting cloned VM using the new-name-label and new-name-description arguments.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
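For example, using hypothetical name labels for the source VM and the clone:

xe vm-clone vm=<existing_vm_name_label> new-name-label=<name_for_clone> new-name-description=<description_for_clone>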
8.4.23.7. vm-compute-maximum-memory

vm-compute-maximum-memory total=<amount_of_available_physical_ram_in_bytes>
[approximate=<add overhead memory for additional vCPUs? true | false>]
[<vm-selector>=<vm_selector_value>...]

Calculate the maximum amount of static memory which can be allocated to an existing VM, using the total amount of
physical RAM as an upper bound. The optional parameter approximate reserves sufficient extra memory in the
calculation to account for adding extra vCPUs into the VM at a later date.
For example:

xe vm-compute-maximum-memory vm=testvm total=`xe host-list params=memory-free --minimal`

uses the value of the memory-free parameter returned by the xe host-list command to set the maximum
memory of the VM named testvm.
The VM or VMs on which this operation will be performed are selected using the standard selection mechanism (see VM
selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.8. vm-copy

vm-copy new-name-label=<name_for_copy> [new-name-description=<description_for_copy>]
[sr-uuid=<uuid_of_sr>] [<vm-selector>=<vm_selector_value>...]

Copy an existing VM, but without using the storage-level fast disk clone operation (even if this is available). The disk images of
the copied VM are guaranteed to be "full images" - that is, not part of a copy-on-write (CoW) chain.
Specify the name and the optional description for the resulting copied VM using the new-name-label and
new-name-description arguments.
Specify the destination SR for the resulting copied VM using the sr-uuid. If this parameter is not specified, the
destination is the same SR that the original VM is in.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
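For example, to make a full copy onto a particular SR (the SR UUID is a hypothetical placeholder, as returned by xe sr-list):

xe vm-copy vm=<existing_vm_name_label> new-name-label=<name_for_copy> sr-uuid=<uuid_of_destination_sr>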
8.4.23.9. vm-crashdump-list

vm-crashdump-list [<vm-selector>=<vm_selector_value>...]

List crashdumps associated with the specified VMs.
If the optional argument params is used, the value of params is a string containing a list of parameters of this object that
you want to display. Alternatively, you can use the keyword all to show all parameters. If params is not used, the
returned list shows a default subset of all available parameters.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.10. vm-data-source-forget

vm-data-source-forget data-source=<name_description_of_data-source> [<vm-selector>=<vm_selector_value>...]

Stop recording the specified data source for a VM, and forget all of the recorded data.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.11. vm-data-source-list

vm-data-source-list [<vm-selector>=<vm_selector_value>...]

List the data sources that can be recorded for a VM.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.12. vm-data-source-query

vm-data-source-query data-source=<name_description_of_data-source> [<vm-selector>=<vm_selector_value>...]

Display the specified data source for a VM.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.13. vm-data-source-record

vm-data-source-record data-source=<name_description_of_data-source> [<vm-selector>=<vm_selector_value>...]

Record the specified data source for a VM.
This will write the information from the data source to the VM's persistent performance metrics database. This database is
distinct from the normal agent database for performance reasons.
Data sources have the true/false parameters standard and enabled, which can be seen in the output of the
vm-data-source-list command. If enabled=true, the data source's metrics are currently being recorded to the
performance database; if enabled=false they are not. Data sources
with standard=true have enabled=true and have their metrics recorded to the performance database by
default. Data sources which have standard=false have enabled=false by default. The vm-data-source-record
command sets enabled=true.
Once enabled, you can stop recording the data source's metrics using the vm-data-source-forget command.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
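A typical workflow (the data source name below is only an illustrative placeholder; available names vary by VM and are shown by vm-data-source-list) is to list the available data sources, enable recording for one, query it, and later stop recording:

xe vm-data-source-list vm=<vm_name_label>
xe vm-data-source-record vm=<vm_name_label> data-source=<name_of_data_source>
xe vm-data-source-query vm=<vm_name_label> data-source=<name_of_data_source>
xe vm-data-source-forget vm=<vm_name_label> data-source=<name_of_data_source>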

8.4.23.14. vm-destroy

vm-destroy uuid=<uuid_of_vm>

Destroy the specified VM. This leaves the storage associated with the VM intact. To delete storage as well, use
xe vm-uninstall.
8.4.23.15. vm-disk-add

vm-disk-add disk-size=<size_of_disk_to_add> device=<uuid_of_device>
[<vm-selector>=<vm_selector_value>...]

Add a new disk to the specified VMs. Select the device parameter from the value of the
allowed-VBD-devices parameter of the VMs.
The disk-size parameter can be specified in bytes or using the IEC standard suffixes KiB (2^10 bytes), MiB (2^20 bytes),
GiB (2^30 bytes), and TiB (2^40 bytes).
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
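For example, assuming device number 4 appears in the VM's allowed-VBD-devices list (the device number and size shown are hypothetical example values), a 10 GiB disk could be added with:

xe vm-disk-add vm=<vm_name_label> device=4 disk-size=10GiB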
8.4.23.16. vm-disk-list

vm-disk-list [vbd-params] [vdi-params] [<vm-selector>=<vm_selector_value>...]

Lists disks attached to the specified VMs. The vbd-params and vdi-params parameters control the fields of the
respective objects to output and should be given as a comma-separated list, or the special key all for the complete list.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.17. vm-disk-remove

vm-disk-remove device=<integer_label_of_disk> [<vm-selector>=<vm_selector_value>...]

Remove a disk from the specified VMs and destroy it.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.18. vm-export

vm-export filename=<export_filename>
[metadata=<true | false>]
[<vm-selector>=<vm_selector_value>...]

Export the specified VMs (including disk images) to a file on the local machine. Specify the filename to export the VM
into using the filename parameter. By convention, the filename should have a .xva extension.
If the metadata parameter is true, then the disks are not exported, and only the VM metadata is written to the
output file. This is intended to be used when the underlying storage is transferred through other mechanisms, and permits
the VM information to be recreated (see Section 8.4.23.19, vm-import).
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
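For example, a full export and a metadata-only export might look like the following (the filenames are hypothetical placeholders):

xe vm-export vm=<vm_name_label> filename=<backup.xva>
xe vm-export vm=<vm_name_label> filename=<backup-metadata.xva> metadata=true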
8.4.23.19. vm-import

vm-import filename=<export_filename>
[metadata=<true | false>]
[preserve=<true | false>]
[sr-uuid=<destination_sr_uuid>]

Import a VM from a previously-exported file. If preserve is set to true, the MAC address of the original VM will be
preserved. The sr-uuid parameter determines the destination SR to import the VM into; if it is not specified, the default
SR is used.
The filename parameter can also point to an XVA-format VM, which is the legacy export format from XenServer 3.2
and is used by some third-party vendors to provide virtual appliances. This format uses a directory to store the VM data, so
set filename to the root directory of the XVA export and not an actual file. Subsequent exports of the imported legacy
guest will automatically be upgraded to the new filename-based format, which stores much more data about the
configuration of the VM.

Note
The older directory-based XVA format does not fully preserve all the VM attributes. In particular, imported VMs will not
have any virtual network interfaces attached by default. If networking is required, create one using vif-create
and vif-plug.

If the metadata parameter is true, then a previously exported set of metadata can be imported without its associated disk
blocks. Metadata-only import will fail if any VDIs cannot be found (named by SR and VDI.location), unless the
--force option is specified, in which case the import will proceed regardless. If disks can be mirrored or moved
out-of-band, then metadata import/export represents a fast way of moving VMs between disjoint pools (for example, as part of a disaster
recovery plan).

Note
Multiple VM imports will be performed faster in serial than in parallel.
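For example, a previously exported VM could be imported onto a specific SR while keeping its original MAC addresses (the filename and SR UUID are hypothetical placeholders):

xe vm-import filename=<backup.xva> sr-uuid=<uuid_of_destination_sr> preserve=true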
8.4.23.20. vm-install

vm-install new-name-label=<name>
[ template-uuid=<uuid_of_desired_template> | template=<uuid_or_name_of_desired_template> ]
[ sr-uuid=<sr_uuid> | sr-name-label=<name_of_sr> ]

Install a VM from a template. Specify the template name using either the template-uuid or template argument.
Specify an SR other than the default SR using either the sr-uuid or sr-name-label argument.
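For example, a VM could be installed from a named template onto a named SR as follows (both names are hypothetical placeholders, as reported by xe template-list and xe sr-list):

xe vm-install new-name-label=<new_vm_name> template=<template_name_label> sr-name-label=<sr_name_label>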
8.4.23.21. vm-memory-shadow-multiplier-set

vm-memory-shadow-multiplier-set [<vm-selector>=<vm_selector_value>...]
[multiplier=<float_memory_multiplier>]

Set the shadow memory multiplier for the specified VM.
This is an advanced option which modifies the amount of shadow memory assigned to a hardware-assisted VM. In some
specialized application workloads, such as Citrix XenApp, extra shadow memory is required to achieve full performance.
This memory is considered to be an overhead. It is separated from the normal memory calculations for accounting
memory to a VM. When this command is invoked, the amount of free XenServer host memory will decrease according to
the multiplier, and the HVM_shadow_multiplier field will be updated with the actual value which Xen has
assigned to the VM. If there is not enough XenServer host memory free, then an error will be returned.
The VMs on which this operation should be performed are selected using the standard selection mechanism (see VM
selectors for more information).
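For example, the multiplier could be raised for a hypothetical XenApp VM as follows (the value 4.00 is only an illustrative figure, not a recommendation):

xe vm-memory-shadow-multiplier-set vm=<vm_name_label> multiplier=4.00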
8.4.23.22. vm-migrate

vm-migrate [[host-uuid=<destination XenServer host UUID>] | [host=<name or UUID of destination XenServer host>]]
[<vm-selector>=<vm_selector_value>...] [live=<true | false>]

Migrate the specified VMs between physical hosts. The host parameter can be either the name or the UUID of the
XenServer host.
By default, the VM will be suspended, migrated, and resumed on the other host. The live parameter activates
XenMotion and keeps the VM running while performing the migration, thus minimizing VM downtime to less than a
second. In some circumstances such as extremely memory-heavy workloads in the VM, XenMotion automatically falls
back into the default mode and suspends the VM for a brief period of time before completing the memory transfer.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
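For example, a live migration to a named host would look like this (the host name label is a hypothetical placeholder):

xe vm-migrate vm=<vm_name_label> host=<destination_host_name_label> live=true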
8.4.23.23. vm-reboot

vm-reboot [<vm-selector>=<vm_selector_value>...] [force=<true>]

Reboot the specified VMs.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
Use the force argument to cause an ungraceful shutdown, akin to pulling the plug on a physical server.

8.4.23.24. vm-reset-powerstate

vm-reset-powerstate [<vm-selector>=<vm_selector_value>...] {force=true}

The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
This is an advanced command only to be used when a member host in a pool goes down. You can use this command to
force the pool master to reset the power-state of the VMs to be halted. Essentially this forces the lock on the VM and its
disks so it can be subsequently started on another pool host. This call requires the force flag to be specified, and fails if it is
not on the command-line.
8.4.23.25. vm-resume

vm-resume [<vm-selector>=<vm_selector_value>...] [force=<true | false>] [on=<XenServer host UUID>]

Resume the specified VMs.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
If the VM is on a shared SR in a pool of hosts, use the on argument to specify which pool host to start it on.
By default the system will determine an appropriate host, which might be any of the members of the pool.
8.4.23.26. vm-shutdown

vm-shutdown [<vm-selector>=<vm_selector_value>...] [force=<true | false>]

Shut down the specified VM.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
Use the force argument to cause an ungraceful shutdown, similar to pulling the plug on a physical server.
8.4.23.27. vm-start

vm-start [<vm-selector>=<vm_selector_value>...] [force=<true | false>] [on=<XenServer host UUID>] [--multiple]

Start the specified VMs.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
If the VMs are on a shared SR in a pool of hosts, use the on argument to specify which pool host to start
the VMs on. By default the system will determine an appropriate host, which might be any of the members of the pool.
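For example, all halted VMs could be started on a particular pool member (the host UUID is a hypothetical placeholder, as reported by xe host-list):

xe vm-start power-state=halted on=<host_uuid> --multiple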

8.4.23.28. vm-suspend

vm-suspend [<vm-selector>=<vm_selector_value>...]

Suspend the specified VM.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.29. vm-uninstall

vm-uninstall [<vm-selector>=<vm_selector_value>...] [force=<true | false>]

Uninstall a VM, destroying its disks (those VDIs that are marked RW and connected to this VM only) as well as its metadata
record. To simply destroy the VM metadata, use xe vm-destroy.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Optional arguments can be any number of the VM parameters listed at the beginning of this section.
8.4.23.30. vm-vcpu-hotplug

vm-vcpu-hotplug new-vcpus=<new_vcpu_count> [<vm-selector>=<vm_selector_value>...]

Dynamically adjust the number of VCPUs available to a running paravirtual Linux VM within the number bounded by
the parameter VCPUs-max. Windows VMs always run with the number of VCPUs set to VCPUs-max and must be
rebooted to change this value.
The paravirtualized Linux VM or VMs on which this operation should be performed are selected using the standard
selection mechanism (see VM selectors). Optional arguments can be any number of the VM parameters listed at the
beginning of this section.
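For example, a running paravirtual Linux VM whose VCPUs-max permits it could be given four VCPUs (the count shown is a hypothetical example value):

xe vm-vcpu-hotplug vm=<vm_name_label> new-vcpus=4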
8.4.23.31. vm-vif-list

vm-vif-list [<vm-selector>=<vm_selector_value>...]

Lists the VIFs from the specified VMs.
The VM or VMs on which this operation should be performed are selected using the standard selection mechanism
(see VM selectors). Note that the selectors operate on the VM records when filtering, and not on the VIF values. Optional
arguments can be any number of the VM parameters listed at the beginning of this section.

8.4.24. Workload Balancing commands

Commands for controlling the Workload Balancing feature.

8.4.24.1. pool-initialize-wlb

pool-initialize-wlb wlb_url=<wlb_server_address> \
wlb_username=<wlb_server_username> \
wlb_password=<wlb_server_password> \
xenserver_username=<pool_master_username> \
xenserver_password=<pool_master_password>

Starts the workload balancing service on a pool.
8.4.24.2. pool-param-set other-config
Use the pool-param-set other-config command to specify the timeout when communicating with the WLB server. All
requests are serialized, and the timeout covers the time from a request being queued to its response being completed. In
other words, slow calls cause subsequent ones to be slow. Defaults to 30 seconds if unspecified or unparseable.

xe pool-param-set other-config:wlb_timeout=<0.01> \
uuid=<315688af-5741-cc4d-9046-3b9cea716f69>
8.4.24.3. host-retrieve-wlb-evacuate-recommendations

host-retrieve-wlb-evacuate-recommendations uuid=<vm_uuid>

Returns the evacuation recommendations for a host, and a reference to the UUID of the recommendations object.
8.4.24.4. vm-retrieve-wlb-recommendations
Returns the workload balancing recommendations for the selected VM. The simplest way to select the VM on which the
operation is to be performed is by supplying the argument vm=<name_or_uuid>. VMs can also be specified by
filtering the full list of VMs on the values of fields. For example, specifying power-state=halted selects all VMs
whose power-state is halted. Where multiple VMs are matching, specify the option --multiple to perform the
operation. The full list of fields that can be matched can be obtained by the command xe vm-list params=all. If
no parameters to select VMs are given, the operation will be performed on all VMs.
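For example, recommendations for a single VM could be retrieved with (the VM name label is a hypothetical placeholder):

xe vm-retrieve-wlb-recommendations vm=<vm_name_label>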
8.4.24.5. pool-deconfigure-wlb
Permanently deletes all workload balancing configuration.
8.4.24.6. pool-retrieve-wlb-configuration
Prints all workload balancing configuration to standard out.
8.4.24.7. pool-retrieve-wlb-recommendations
Prints all workload balancing recommendations to standard out.
8.4.24.8. pool-retrieve-wlb-report
Gets a WLB report of the specified type and saves it to the specified file. The available reports are:

pool_health
host_health_history
optimization_performance_history
pool_health_history
vm_movement_history
vm_performance_history

Example usage for each report type is shown below. The utcoffset parameter specifies the number of hours ahead
of or behind UTC for your time zone. The start and end parameters specify the number of hours to report
on. For example, specifying start=-3 and end=0 will cause WLB to report on the last 3 hours' activity.

xe pool-retrieve-wlb-report report=pool_health \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<5> \
start=<-3> \
end=<0> \
filename=</pool_health.txt>

xe pool-retrieve-wlb-report report=host_health_history \
hostid=<e26685cd-1789-4f90-8e47-a4fd0509b4a4> \
utcoffset=<5> \
start=<-3> \
end=<0> \
filename=</host_health_history.txt>

xe pool-retrieve-wlb-report report=optimization_performance_history \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<5> \
start=<-3> \
end=<0> \
filename=</optimization_performance_history.txt>

xe pool-retrieve-wlb-report report=pool_health_history \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<5> \
start=<-3> \
end=<0> \
filename=</pool_health_history.txt>

xe pool-retrieve-wlb-report report=vm_movement_history \
poolid=<51e411f1-62f4-e462-f1ed-97c626703cae> \
utcoffset=<5> \
start=<-5> \
end=<0> \
filename=</vm_movement_history.txt>

xe pool-retrieve-wlb-report report=vm_performance_history \
hostid=<e26685cd-1789-4f90-8e47-a4fd0509b4a4> \
utcoffset=<5> \
start=<-3> \
end=<0> \
filename=</vm_performance_history.txt>

Chapter 9. Troubleshooting
Table of Contents
9.1. XenServer host logs
9.1.1. Sending host log messages to a central server
9.2. XenCenter logs
9.3. Troubleshooting connections between XenCenter and the XenServer host

If you experience odd behavior, application crashes, or have other issues with a XenServer host, this chapter is meant to
help you solve the problem if possible and, failing that, describes where the application logs are located and other
information that can help your Citrix Solution Provider and Citrix track and resolve the issue.
Troubleshooting of installation issues is covered in the XenServer Installation Guide. Troubleshooting of Virtual Machine
issues is covered in the XenServer Virtual Machine Installation Guide.

Important
We recommend that you follow the troubleshooting information in this chapter solely under the guidance of your Citrix
Solution Provider or Citrix Support.

Citrix provides two forms of support: you can receive free self-help support via the Support site, or you may purchase our
Support Services and directly submit requests by filing an online Support Case. Our free web-based resources include
product documentation, a Knowledge Base, and discussion forums.

9.1. XenServer host logs

XenCenter can be used to gather XenServer host information. Click on Get Server Status Report... in the Tools menu
to open the Server Status Report wizard. You can select from a list of different types of information (various logs, crash
dumps, etc.). The information is compiled and downloaded to the machine that XenCenter is running on. For details, see
the XenCenter Help.
Additionally, the XenServer host has several CLI commands to make it simple to collate the output of logs and various
other bits of system information using the utility xen-bugtool. Use the xe command host-bugreport-upload to
collect the appropriate log files and system information and upload them to the Citrix Support ftp site. Please refer
to Section 8.4.5.2, host-bugreport-upload for a full description of this command and its optional parameters. If you are
requested to send a crashdump to Citrix Support, use the xe command host-crashdump-upload. Please refer
to Section 8.4.5.4, host-crashdump-upload for a full description of this command and its optional parameters.

Caution
It is possible that sensitive information might be written into the XenServer host logs.

By default, the server logs report only errors and warnings. If you need to see more detailed information, you can enable
more verbose logging. To do so, use the host-loglevel-set command:

host-loglevel-set log-level=level

where level can be 0, 1, 2, 3, or 4, where 0 is the most verbose and 4 is the least verbose.
Log files greater than 5 MB are rotated, keeping 4 revisions. The logrotate command is run hourly.

9.1.1. Sending host log messages to a central server

Rather than have logs written to the control domain filesystem, you can configure a XenServer host to write them to a
remote server. The remote server must have the syslogd daemon running on it to receive the logs and aggregate them
correctly. The syslogd daemon is a standard part of all flavors of Linux and Unix, and third-party versions are available for
Windows and other operating systems.
To write logs to a remote server
1.

Set the syslog_destination parameter to the hostname or IP address of the remote server where you want the
logs to be written:

xe host-param-set uuid=<xenserver_host_uuid> logging:syslog_destination=<hostname>

2.

Issue the command:

xe host-syslog-reconfigure uuid=<xenserver_host_uuid>

to enforce the change. (You can also execute this command remotely by specifying the host parameter.)

9.2. XenCenter logs

XenCenter also has a client-side log. This file includes a complete description of all operations and errors that occur when
using XenCenter. It also contains informational logging of events that provide you with an audit trail of various actions that
have occurred. The XenCenter log file is stored in your profile folder. If XenCenter is installed on Windows XP, the path is

%userprofile%\AppData\Citrix\XenCenter\logs\XenCenter.log

If XenCenter is installed on Windows Vista, the path is

%userprofile%\AppData\Citrix\Roaming\XenCenter\logs\XenCenter.log

To quickly locate the XenCenter log files, for example, when you want to open or email the log file, click on View
Application Log Files in the XenCenter Help menu.

9.3. Troubleshooting connections between XenCenter and the XenServer host

If you have trouble connecting to the XenServer host with XenCenter, check the following:
Is your XenCenter an older version than the XenServer host you are attempting to connect to? The XenCenter
application is backward-compatible and can communicate properly with older XenServer hosts, but an older
XenCenter cannot communicate properly with newer XenServer hosts. To correct this issue, install a
XenCenter version that is the same as, or newer than, the XenServer host version.
Is your license current? You can see the expiration date for your License Key in the XenServer host General tab
under the Licenses section in XenCenter. Also, if you upgraded your software from version 3.2.0 to the
current version, you should also have received and applied a new License file. For details on licensing a host,
see the chapter "XenServer Licensing" in the XenServer Installation Guide.
The XenServer host talks to XenCenter using HTTPS over port 443 (a two-way connection for commands and
responses using the XenAPI), and 5900 for graphical VNC connections with paravirtual Linux VMs. If you
have a firewall enabled between the XenServer host and the machine running the client software, make
sure that it allows traffic from these ports.
