
RH184

Red Hat Enterprise Linux Virtualization


RH184-RHEL5-en-2-20070907

Table of Contents
RH184 - Red Hat Enterprise Linux Virtualization
RH184: Red Hat Enterprise Linux Virtualization
Copyright
Welcome
Participant Introductions
Red Hat Enterprise Linux
Red Hat Enterprise Linux Variants
Red Hat Network
Other Red Hat Supported Software
The Fedora Project
Classroom Network
Notes on Internationalization
Objectives of RH184
Audience and Prerequisites


Unit 1 - Introduction to Virtualization


Objectives
What is Virtualization?
System Virtualization
Virtualization Terminology
Full Virtualization
Emulation and Dynamic Translation
Paravirtualization
Use Case: Consolidation
Use Case: Compartmentalization
Use Case: Development and Testing
Use Case: Virtual Appliances
Other Types of Virtualization
End of Unit 1


Unit 2 - Basic Paravirtualized Domain Installation


Objectives
Xen
Xen Architecture
Hardware Considerations
Basic Dom0 Installation
Dom0 Installation Considerations
virt-manager
DomU Components
Basic Paravirtualized DomU Installation

Activating Domains at Boot


End of Unit 2
Lab 2: Installing Red Hat Virtualization
Sequence 1: Installing the Red Hat Virtualization environment
Sequence 2: Installing a Domain-U virtual machine
Sequence 3: Configuring automatic restart of domains at boot


Unit 3 - Virtual Machine Management


Objectives
Virtual Machine Management
Identifying Virtual Machines
gnome-applet-vm
virsh
xm
Booting Domains
Stopping and Rebooting Domains
Suspending/Resuming Active Domains
Saving/Restoring Domain State on Disk
virsh list and xm list
Monitoring Domains with xentop
End of Unit 3
Lab 3: Virtual Machine Management
Sequence 1: Working with VM Applet (gnome-applet-vm)
Sequence 2: Working with virsh, xm, and xentop


Unit 4 - Paravirtualized Domain Configuration


Objectives
Paravirtualized Domains
Review of DomU Components
CPUs and Virtual CPUs
Memory
Dynamic CPU and Memory Management
Virtual Block Devices
Logical Volumes and VBDs
Virtual Machines and Networking
DomU Graphical Consoles
DomU Virtual Serial Console
Dom0 Serial Console
virt-install
Manual Domain Installation
End of Unit 4
Lab 4: Advanced Paravirtualized Domain Management
Sequence 1: Dynamic adjustment of domain memory
Sequence 2: Configuring Virtual Serial Consoles
Sequence 3: Installing a new DomU with virt-install

Sequence 4: Rapid domain cloning from LVM-based template


Sequence 5: Manual domain installation with xm


Unit 5 - xend Configuration and Live Migration


Objectives
Xen Dom0 User-Space Components
/etc/xen/xend-config.sxp
Management Interfaces
Creating Local Private Networks
Physical Network Separation
Masquerading the Private Bridge
Virtual Machine Migration
Shared Storage
Shared Storage and iSCSI Considerations
Enabling Live Migration
Performing Domain Migration
Security Considerations of Migration
End of Unit 5
Lab 5: Live Migration
Sequence 1: Configuring an iSCSI target for shared storage
Sequence 2: Configuring an iSCSI initiator
Sequence 3: Live Migration


Unit 6 - Troubleshooting Virtualization


Objectives
Troubleshooting in a Virtual Machine
Networking Environment
Virtual Machine Information
Log Information
Recovery Runlevels and PyGRUB
Accessing Disk Images from Dom0
Collecting Crash Dumps
SELinux and Virtualization
Common Issues
End of Unit 6
Lab 6: Virtualization and Troubleshooting
Sequence 1: Using rescue runlevels with virtual machines
Sequence 2: Using Dom0 as a virtual machine rescue environment
Sequence 3: Collecting crash dumps from virtual machines


Unit 7 - Hardware-Assisted Full Virtualization


Objectives
Introduction to HVM

Processor Support for HVM Domains


Limitations of HVM Domains
Device Model and QEMU-DM
Installation of HVM Domains
HVM Domain Configuration Files
Manual Installation of HVM Domains
Microsoft Windows Installation Tips
HVM DomU Virtual Serial Console
Troubleshooting HVM Domains
End of Unit 7
Lab 7: Advanced Topics
Sequence 1: Configuration of a private network for local Xen guests
Sequence 2: Dynamically adding a VBD to a paravirtualized guest
Sequence 3: Red Hat Network registration of Xen guests


Introduction

RH184: Red Hat Enterprise Linux Virtualization


Copyright

The contents of this course and all its modules and related materials, including handouts
to audience members, are Copyright 2007 Red Hat, Inc.
No part of this publication may be stored in a retrieval system, transmitted or reproduced
in any way, including, but not limited to, photocopy, photograph, magnetic, electronic or
other record, without the prior written permission of Red Hat, Inc.
This instructional program, including all material provided herein, is supplied without any
guarantees from Red Hat, Inc. Red Hat, Inc. assumes no liability for damages or legal
action arising from the use or misuse of contents or details contained herein.
If you believe Red Hat training materials are being improperly used, copied, or
distributed, please email training@redhat.com or phone toll-free (USA) +1 866 626 2994
or +1 919 754 3700.

Welcome
Please let us know if you have any special needs while at our
training facility.

Phone and network availability


Please only make calls during breaks. Your instructor will show you which phone to use. Network access and analog
phone lines may be available; your instructor will provide information about these facilities. Please turn pagers to
silent and cell phones off during class.

Restrooms
Your instructor will notify you of the location of these facilities.

Lunch and breaks


Your instructor will notify you of the areas to which you have access for lunch and for breaks.

In case of Emergency
Please let us know if anything comes up that will prevent you from attending.

Access
Each facility has its own opening and closing times. Your instructor will provide you with this information.


Participant Introductions
Please introduce yourself to the rest of the class!


Red Hat Enterprise Linux


Enterprise-targeted operating system
Focused on mature open source technology
18-24 month release cycle
Certified with leading OEM and ISV products

Purchased with one year Red Hat Network subscription and support contract
Support available for seven years after release
Up to 24x7 coverage plans available


The Red Hat Enterprise Linux product family is designed specifically for organizations planning to use Linux in
production settings. All products in the Red Hat Enterprise Linux family are built on the same software foundation,
and maintain the highest level of ABI/API compatibility across releases and errata. Extensive support services are
available: a one year support contract and Update Module entitlement to Red Hat Network are included with purchase.
Various Service Level Agreements are available that may provide up to 24x7 coverage with guaranteed one hour
response time. Support will be available for up to seven years after a particular release.
Red Hat Enterprise Linux is released on an eighteen to twenty-four month cycle. It is based on code developed by
the open source community and adds performance enhancements, intensive testing, and certification on products
produced by top independent software and hardware vendors such as Dell, IBM, Fujitsu, BEA, and Oracle. Red Hat
Enterprise Linux provides a high degree of standardization through its support for five processor architectures (Intel
x86-compatible, AMD64/Intel 64, Intel Itanium 2, IBM POWER, and IBM mainframe on System z).


Red Hat Enterprise Linux Variants


Server:
Red Hat Enterprise Linux Advanced Platform
Red Hat Enterprise Linux

Client:
Red Hat Enterprise Linux Desktop
with Workstation option
with Multi-OS option
with Workstation and Multi-OS options


Currently, on the x86 and x86-64 architectures, the product family includes:
Red Hat Enterprise Linux Advanced Platform: the most cost-effective server solution, this product includes support
for the largest x86-compatible servers, unlimited virtualized guest operating systems, storage virtualization, high-availability application and guest fail-over clusters, and the highest levels of technical support.
Red Hat Enterprise Linux: the basic server solution, supporting servers with up to two CPU sockets and up to four
virtualized guest operating systems.
Red Hat Enterprise Linux Desktop: a general-purpose client solution, offering desktop applications such as the
OpenOffice.org office suite and Evolution mail client. Add-on options provide support for high-end technical and
development workstations and for running multiple operating systems simultaneously through virtualization.
Two standard installation media kits are used to distribute variants of the operating system. Red Hat Enterprise Linux
Advanced Platform and Red Hat Enterprise Linux are shipped on the Server media kit. Red Hat Enterprise Linux
Desktop and its add-on options are shipped on the Client media kit. Media kits may be downloaded as ISO 9660 CD-ROM file system images from Red Hat Network or may be provided in a boxed set on DVD-ROMs.
Please visit http://www.redhat.com/rhel/ for more information about the Red Hat Enterprise Linux product
family.


Red Hat Network


A comprehensive software delivery, system management, and
monitoring framework
Update Module: Provides software updates
Included with all Red Hat Enterprise Linux subscriptions
Management Module: Extended capabilities for large deployments
Provisioning Module: Bare-metal installation, configuration management, and
multi-state configuration rollback capabilities
Monitoring Module: Provides infrastructure health monitoring of networks,
systems, applications, etc.


Red Hat Network is a complete systems management platform. It is a framework of modules for easy software
updates, systems management, and monitoring, built on open standards. There are currently four modules in Red Hat
Network: the Update Module, the Management Module, the Provisioning Module, and the Monitoring Module.
The Update Module is included with all subscriptions to Red Hat Enterprise Linux. It allows for easy software updates
to all your Red Hat Enterprise Linux systems.
The Management Module is an enhanced version of the Update Module, which adds additional features tailored
for large organizations. These enhancements include system grouping and set management, multiple organizational
administrators, and package profile comparison among others. In addition, with RHN Proxy Server or Satellite Server,
local package caching and management capabilities become available.
The Provisioning Module provides mechanisms to provision and manage the configuration of Red Hat Enterprise
Linux systems throughout their entire life cycle. It supports bare metal and existing state provisioning, storage and
editing of kickstart files in RHN, configuration file management and deployment, multi-state rollback and snapshot-based recovery, and RPM-based application provisioning. If used with RHN Satellite Server, support is added for PXE
bare-metal provisioning, an integrated network tree, and configuration management profiles.


Other Red Hat Supported Software

Red Hat Application Stack


JBoss Enterprise Middleware Suite
Red Hat Directory Server
Red Hat Certificate System
Red Hat Global File System

Red Hat offers a number of additional open source application products and operating system enhancements which
may be added to the standard Red Hat Enterprise Linux operating system. As with Red Hat Enterprise Linux, Red
Hat provides a range of maintenance and support services for these add-on products. Installation media and software
updates are provided through the same Red Hat Network interface used to manage Red Hat Enterprise Linux systems.
Red Hat Application Stack: the first fully integrated open source stack, supplying everything needed to run standards-based web applications, including Red Hat Enterprise Linux, JBoss Application Server with Tomcat, JBoss Hibernate,
and a choice of open source databases: MySQL or PostgreSQL, and Apache Web Server.
JBoss Enterprise Middleware Suite: enterprise-class open source software to build, deploy, integrate, orchestrate, and
present web applications and services in a Service-Oriented Architecture.
Red Hat Directory Server: an LDAP-based server that centralizes directory storage and data distribution, such as user
and group data.
Red Hat Certificate System: a security framework for deploying and maintaining a Public Key Infrastructure (PKI) for
identity management, fully integrated with Red Hat Directory Server to enable enterprise deployment of secure Web-based authentication, S/MIME, VPNs, and SSL.
Red Hat Global File System: an open source cluster file system appropriate for enterprise deployments, allowing
servers to share a common pool of storage. Includes Red Hat Cluster Suite for providing application fail-over between
servers for critical services. Part of RHEL5 Advanced Platform, an add-on product for RHEL3 and RHEL4.
For additional information, see the following web pages:

Red Hat Application Stack: http://www.redhat.com/appstack/

JBoss Enterprise Middleware Suite: http://www.redhat.com/jboss/

Red Hat Directory Server: http://www.redhat.com/directory_server/

Red Hat Certificate System: http://www.redhat.com/certificate_system/

Red Hat Global File System: http://www.redhat.com/gfs/


The Fedora Project


Open source projects sponsored by Red Hat
Fedora distribution is focused on latest open source technology
Rapid four to six month release cycle
Available as free download from the Internet

EPEL provides add-on software for Red Hat Enterprise Linux


Open, community-supported proving grounds for technologies
which may be used in upcoming enterprise products
Red Hat does not provide formal support

The Fedora Project is a collection of community supported open source projects sponsored by Red Hat intended
to encourage rapid progress of free and open source software. The flagship project is Fedora, a rapidly evolving,
technology-driven Linux distribution with an open, highly scalable development and distribution model. It is designed
to be an incubator and test bed for new technologies that may be used in later Red Hat enterprise products. The basic
Fedora distribution is available for free download from the Internet.
The Fedora Project produces releases of Fedora on a short four to six month release cycle, to bring the latest
innovations of open source technology to the community. This may make it attractive for power users and developers
who want access to cutting-edge technology and can handle the risks of adopting rapidly changing new technology.
Red Hat does not provide formal support for the Fedora Project.
The Fedora Project also supports EPEL, Extra Packages for Enterprise Linux. EPEL is a volunteer-based community
effort to create a repository of high-quality add-on packages which can be used with Red Hat Enterprise Linux and
compatible derivatives. Red Hat does not provide commercial support or service level agreements for EPEL packages.
Visit http://fedoraproject.org/ for more information about the Fedora Project.
Visit http://fedoraproject.org/wiki/EPEL/ for more information about EPEL.


Classroom Network

                     Names                     IP Addresses
Our Network          example.com               192.168.0.0/24
Our Server           server1.example.com       192.168.0.254
Our Stations         stationX.example.com      192.168.0.X
Hostile Network      cracker.org               192.168.1.0/24
Hostile Server       server1.cracker.org       192.168.1.254
Hostile Stations     stationX.cracker.org      192.168.1.X
Trusted Station      trusted.cracker.org       192.168.1.21


Notes on Internationalization
Red Hat Enterprise Linux supports nineteen languages
Default language can be selected:
During installation
With system-config-language
System->Administration->Language

Alternate languages can be used on a per-command basis:


$ LANG=en_US.UTF8 date

Language settings are stored in /etc/sysconfig/i18n



Red Hat Enterprise Linux 5 supports nineteen languages: English, Bengali, Chinese (Simplified), Chinese
(Traditional), French, German, Gujarati, Hindi, Italian, Japanese, Korean, Malayalam, Marathi, Oriya, Portuguese
(Brazilian), Punjabi, Russian, Spanish and Tamil. Support for Assamese, Kannada, Sinhalese and Telugu is provided
as technology previews.
A system's language can be selected during installation, but the default is US English. To use other languages, you
may need to install extra packages to provide the appropriate fonts, translations and so forth. These can be selected
during system installation or with system-config-packages (Applications->Add/Remove Software).
The currently selected language is set with the LANG shell variable. Programs read this variable to determine what
language to use for output:
[student@stationX ~]$ echo $LANG
ru_RU.UTF8
A system's default language can be changed with system-config-language (System->Administration->Language),
which affects the /etc/sysconfig/i18n file.
Languages with non-ASCII characters may have problems displaying in some environments. Kanji characters, for
example, may not display as expected on a virtual console. Individual commands can be made to use another language
by setting LANG on the command-line:
[student@stationX ~]$ LANG=en_US.UTF8 date
Thu Feb 22 13:54:34 EST 2007
Subsequent commands will revert to using the system's default language for output.
SCIM (Smart Common Input Method) can be used to input text in various languages under X if the appropriate
language support packages are installed. Type Ctrl-Space to switch input methods.
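As a rough illustration only (the exact contents depend on the choices made at installation time), /etc/sysconfig/i18n on a system configured for US English might contain something like the following; editing LANG here, or re-running system-config-language, changes the default locale used at the next login:
[student@stationX ~]$ cat /etc/sysconfig/i18n
LANG="en_US.UTF-8"
SYSFONT="latarcyrheb-sun16"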


Objectives of RH184
Gain a better understanding of virtualization and its uses
Develop skills required to manage and deploy virtualization
and virtual machines on, and running, Red Hat Enterprise Linux
systems


Audience and Prerequisites


Audience: Linux system administrators who understand how
to install and configure a Red Hat Enterprise Linux system and
who wish to learn to install, configure, and manage Red Hat
Enterprise Linux 5 in a virtualized environment.
Prerequisites: RHCT certification, completion of RH131 or RH133,
or equivalent system administration knowledge of Red Hat
Enterprise Linux


Unit 1

Introduction to Virtualization



Objectives
Upon completion of this unit, you should be able to:
Describe what system virtualization is
Understand basic virtualization terminology
Understand use cases of virtualization
Compare types of virtualization


What is Virtualization?
Virtualization techniques:
Abstract the user environment from the physical hardware
Compartmentalize the resources of a single computer into multiple isolated
environments
Allow an environment to appear to have sole control of resources which are
actually shared

There are many different virtualization techniques which may be used for different purposes

What is virtualization? There are many different types of virtualization, but the basic idea is that virtualization allows
the user environment on a computer to be abstracted from the physical hardware. Typically, the goal of virtualization
is to make it easier to manage or use the hardware resources on a computer system.


System Virtualization
System virtualization divides a computer into multiple execution
environments running separate operating systems
Each operating system has its own private virtual machine
May run a different operating system on each virtual machine

Virtual peripherals share physical hardware resources


Details hidden from virtual machines
Can allow efficient resource management


Most computers today run a single operating system on the hardware at a time. If a machine needs to run code under
another operating system, the traditional model on the x86 platform is to reboot the machine into that operating system
first. This is typically referred to as a dual-boot or multiple-boot configuration. If both operating systems need to be
available simultaneously, two computers are needed.
System virtualization allows a single computer to be partitioned or divided into multiple virtual computers which may
each run its own operating system simultaneously. These virtual machines are isolated from each other. From the
perspective of each operating system, it is running on its own private hardware. They may have their own network
interfaces and IP addresses, file systems, and other peripherals. Different virtual machines need not run the same
operating system or version of the operating system.
In this course, we will mainly be interested in system virtualization.


Virtualization Terminology
Hypervisor
Runs the virtual machines and isolates them from real hardware

Domain
A virtual machine running on the hypervisor
Sometimes referred to as a guest

Privileged domain
A domain with the ability to manage other domains or the hypervisor


A hypervisor is the software that manages and supports the virtualization environment. It runs the virtual machines for
each virtualized operating system, providing access to virtual CPUs, memory, disks, networking, and other peripherals
while restricting the virtual machines from having direct access to the real hardware and each other. A hypervisor that
runs directly on the "bare metal" hardware is sometimes called a type I hypervisor. Examples of type I hypervisors
include Linux KVM, IBM's z/VM, and VMware ESX Server. By contrast, a type II hypervisor is implemented as
software that runs on a host operating system. Examples include VMware Workstation, Parallels Desktop for Mac,
and Microsoft Virtual Server. The Xen hypervisor currently used by Red Hat Virtualization is sometimes called a
hybrid hypervisor that runs directly on the bare metal like a type I hypervisor, but is heavily dependent on drivers
and support from one of its virtual machines in order to function properly. Others consider the combination of Xen
and the privileged virtual machine running Linux supporting it to be a form of type I hypervisor. An alternative term
sometimes used in place of "hypervisor" is virtual machine monitor.
"Domain" is a frequently used term for a virtual machine running on the hypervisor. This is also sometimes
called a guest, especially by users more familiar with virtualization software based on type II hypervisors, where
the operating system which is running the hypervisor is called the host. (Some people reserve "guest" to refer to the
operating system running on the virtual machine, and use "domain" to refer to the virtual machine itself.)
A privileged domain is a domain which has special privileges to manage other domains and the hypervisor. The
management utilities are typically installed as part of an operating system running in that domain. Users of the
Xen hypervisor sometimes call this domain "Domain-0" or "Dom0", as the first domain which starts up is the
privileged domain and always has that name and ID number. Other terms sometimes used are management domain
or management console. Domains normally do not have the ability to manage other domains running on the same
hardware, nor the ability to access virtual hardware belonging to another domain directly.
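As a small illustration (the guest name and the exact listing shown here are hypothetical, and the management commands themselves are covered in Unit 3), the privileged domain always appears in the domain list as Domain-0 with ID 0, alongside any running guest domains:
[root@stationX ~]# xm list
Name                         ID Mem(MiB) VCPUs State   Time(s)
Domain-0                      0      512     2 r-----    184.5
vm1                           1      256     1 -b----     23.7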


Full Virtualization
Allows software to be run in a virtual machine without
modification
Virtual machine environment acts like "bare metal"
Hypervisor controls all real devices/resources
Most code directly executes on CPU; sensitive instructions are intercepted by
hypervisor

Design of x86 ISA makes it hard to fully virtualize


Intel VT-x and AMD-V extensions help fix this
IBM mainframes have supported full virtualization for decades


Full virtualization is a form of system virtualization that allows unmodified operating systems and software to be run
on a virtual machine exactly as if it were running directly on the real hardware. The virtual machine environment looks
just like "bare metal", complete with virtual peripherals. Access to the real hardware is controlled by the hypervisor so
that virtual machines can not interfere with each other.
Full virtualization most typically is implemented as native virtualization or hardware-assisted full virtualization in
which almost all code is run directly by the CPU without any changes, for efficiency. The hypervisor only needs
to intervene when the code uses sensitive instructions that would interfere with the state of the hypervisor or its
supporting environment. These sensitive instructions need to be intercepted by the hypervisor and replaced with safe
equivalents before they execute on the CPU. In order to easily support this, all sensitive instructions in the CPU's
instruction set architecture (ISA) must be privileged; that is, when run in user mode they must trap first so control can
be passed to the hypervisor and the sensitive instruction replaced with safe instructions that emulate the operation.
Unfortunately, the traditional x86 instruction set has seventeen sensitive but unprivileged instructions which do
not trap, which complicates the implementation of full virtualization on that architecture. The development of Intel
Virtualization Technology and AMD Virtualization on the most modern 32-bit and 64-bit x86 processors will help to
correct this issue on x86. In Linux, the Xen hypervisor can use hardware-assisted full virtualization on x86 with Intel
VT-x or AMD-V, and the KVM hypervisor requires it.
The IBM mainframe architecture does not have sensitive but unprivileged instructions, and the first production
implementation of full virtualization was on the IBM System/360 in the late 1960s. This implementation evolved into
the current z/VM system that is available on modern IBM System z9 EC mainframes (and which can run instances of
Red Hat Enterprise Linux for System z in virtual machines.)


Emulation and Dynamic Translation


Allows software to be run in virtual machine without modification
Simulates the virtual machine in software
Hypervisor analyzes code, replaces instructions before executed
Can be used to run non-native code, but faster if most code can run on the
CPU directly
Much slower than hardware-assisted virtualization

Dynamic translation techniques can help


Instructions converted as discovered
Conversions cached for later reuse


Hardware emulation or full system simulation is a technique in which the hypervisor simulates the virtual machine in
software, by analyzing all instructions and converting each one appropriately before allowing it to run on the CPU.
Emulation techniques can be used to implement full virtualization on processors that do not trap sensitive instructions,
or to run virtual machines supporting binary code compiled for a different processor architecture.
Emulation-based approaches to full virtualization tend to be much slower than native full virtualization, since without
hardware support to trap sensitive instructions, every instruction must be examined by the hypervisor before it is
allowed to run to determine if it is sensitive. One technique used to improve performance is dynamic translation.
The hypervisor analyzes the binary instructions just before they run, allows safe instructions to run unmodified, but
converts sensitive instructions just before they execute. In addition, converted code sequences are cached in order to
speed future execution of the same code. Dynamic recompilation goes a step further and optimizes frequently reused
sequences on the fly. However, the more aggressive the optimizations, the more CPU time is needed by the hypervisor
to generate them, stealing time from running the virtual machines. Full virtualization by dynamic recompilation is the
basic technique traditionally used by VMware.
In addition, emulation techniques can be used to run code compiled for an alien processor architecture without
modification, simply by converting all instructions appropriately. For obvious reasons, full system simulation of non-native processor architectures tends to be very slow. QEMU and Bochs are examples of emulators available for Linux.


Paravirtualization
Requires modifications to the virtualized operating system
Kernel modified to interact with hypervisor to handle sensitive instructions
Most code runs unmodified directly on the CPU
Very fast method for less modern x86 processors
Can not use this technique with unmodified operating systems

Basic technique used by Red Hat Virtualization with Xen


Can use full virtualization as alternative on newer hardware


The basic virtualization technique used by Xen is paravirtualization, in which hypervisor-aware code must be
integrated into the kernel of the operating systems running on the virtual machines. No changes are generally required
for the rest of the software on a virtual machine. In a sense, the kernel runs on the hypervisor as if it were some special
kind of CPU. This avoids the need for hardware-assisted trapping or dynamic translation of sensitive instructions, but
unmodified operating systems do not work. In other words, paravirtualization will not allow an unmodified Windows
Vista installation to use a virtual machine, but an open source operating system such as Red Hat Enterprise Linux can
readily be modified to use this method.
Paravirtualization has two major advantages. The first is that it is generally a much faster approach than emulation or
full virtualization through dynamic translation. The performance of an operating system running in a virtual machine
can rival that of an operating system running on the "bare metal" hardware. The second is that it works with CPUs that
do not support hardware-assisted full virtualization, such as all but the latest x86 processors. For these reasons, we will
focus on paravirtualized virtual machines based on Xen for most of this course.
Since paravirtualization already requires kernel modifications, one trick to improve performance further is to have the
hypervisor stop presenting devices that look like real hardware and instead present virtual devices that require special
paravirtualized drivers to work. The cooperation between the kernel and the hypervisor can allow paravirtualized
drivers to have much lower overhead than native drivers. In addition, paravirtualization allows the hypervisor to help
guest operating systems resolve timing issues that may arise as domains alternate between running on a physical CPU
and sleeping. Therefore, even on systems which do not require paravirtualized approaches to virtualization, this sort of
cooperation between the kernel and the hypervisor can be useful. In this context, paravirtualization is sometimes
referred to as cooperative virtualization.
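As a quick, illustrative check of the "modified kernel" requirement (the version string will differ on your systems), a Red Hat Enterprise Linux 5 machine booted on the Xen hypervisor runs the kernel-xen variant of the kernel, and the privileged domain can be recognized by the control_d flag in /proc/xen/capabilities:
[root@stationX ~]# uname -r
2.6.18-8.el5xen
[root@stationX ~]# cat /proc/xen/capabilities
control_d
An unprivileged paravirtualized guest also runs a xen kernel, but its /proc/xen/capabilities file is empty.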


Use Case: Consolidation


Modern data centers straining resources
Heat dissipation, power consumption, physical space

Modern servers tend to be underutilized


Tuned to run one critical application
Security compartmentalization

Multiple logical servers on one physical server


Need fewer physical servers, use existing servers more efficiently
Keep compartmentalization advantages


One major use case for virtualization is consolidation, converting multiple existing physical servers into virtual servers
running on one or a smaller number of physical servers.
Many modern data centers are reaching limits which make their further expansion highly challenging. Two of the
biggest problems are cooling and power. It is said that Google spends much more annually on these two issues in their
data centers than they do on hardware replacement, for example. In other cases, the existing physical space used for a
data center may be full, but the costs of expanding the data center and providing sufficient power and cooling supply
may be prohibitive.
Typically one reason why underutilized servers are becoming commonplace is that modern hardware has grown very
powerful, but often sites use separate servers to compartmentalize services from each other for security reasons. Also,
services may have conflicting requirements in the versions of software, libraries, or operating systems which they need
to function, and separate systems are needed for each configuration.
One way virtualization can help solve this problem is by consolidating many underutilized physical servers onto
virtual servers on one physical server. This immediately cuts down on the power consumption and heat generated in
the server room, and frees up rack space for other purposes.


Use Case: Compartmentalization


Existing servers can have roles distributed
Separate services for security
Do not need new hardware to deploy new server

Hardware control
Users can be allowed to control regular domain
IT controls locked-down privileged domain for management access


Another use case is compartmentalization. Virtualization can be used to divide up existing physical machines into
separate virtual machines without further hardware expenditures.
For example, a server currently being used to run several critical services can be divided into multiple virtual
machines, each of which runs one critical service. An attacker may compromise one virtual machine, but as long as
that machine does not have management access to the virtualization software, the other domains are still secure. Of
course, the single physical server is still a single point of failure, but other technologies such as live migration can help
ameliorate that problem.
Another approach is to give users full administrative control of a normal, non-privileged domain for their server
applications, while IT retains tight control of a locked-down privileged domain used to manage the system. This can
be used to take control of user domains for mandatory updates or auditing purposes.


Use Case: Development and Testing


Develop software in a disposable environment
Development tools in management domain
Test code in separate virtual machines

Developers are able to be more efficient


Virtual machine can crash without disrupting main environment
Multiple test environments can be run on one workstation
Can rapidly "reinstall" from saved disk images


Virtualization has great advantages for development and testing of software and network services. On one machine a
developer might be running several different test environments. If a test environment crashes, it does not necessarily
crash the developer's main working environment.
Using archived disk images or logical volume snapshots, reinstalling a test environment to a known state can be quite
rapid, minimizing time lost due to rebuilds. Developers can easily collect crash dumps from virtual machines that have
failed from the privileged domain, and analyze them with tools which have been set up in advance.
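As a sketch of one way to do this with LVM (the volume group and logical volume names here are invented for illustration), a copy-on-write snapshot of a pristine "golden" guest image can be created in seconds, used as the test domain's disk, and simply discarded after the test run, leaving the original image untouched:
[root@stationX ~]# lvcreate -s -L 2G -n rhel5-test /dev/vg0/rhel5-golden
[root@stationX ~]# lvremove /dev/vg0/rhel5-test
Using logical volumes as virtual block devices is discussed further in Unit 4.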


Use Case: Virtual Appliances


Use virtual machines to deploy pre-configured network services
Purpose-built appliances
Configuration locked down, often control through web interface
Limited ability to configure or change

Reference implementation appliances


Pre-configured, but still a general-purpose system
More ability to change, still fairly streamlined environment


One very exciting application of virtualization is the virtual appliance. Virtual appliances are virtual machine
configurations that are pre-configured to run some useful application in an easily deployed manner. Typically these
environments are streamlined so that no extraneous components of the operating system are shipped.
Virtual appliances are very attractive from a software vendor's perspective. Instead of shipping a software package
that must be installed and configured at the customer site, a virtual appliance is shipped as a pre-installed machine
with known configuration, software versions, and a tuned environment. IT departments can also use internal virtual
appliances to ease deployment of new services.
One basic type of virtual appliance is the purpose-built appliance. These appliances are typically extremely
stripped-down and locked-down configurations; control of the machine configuration is often through some form
of web interface. This helps simplify the end-user management of the appliance, while limiting the user's ability to
misconfigure the system.
Another basic type of virtual appliance is the reference implementation appliance. A working virtual appliance is
shipped, but it is still a general-purpose operating system amenable to reconfiguration and customization. By shipping
it as a virtual appliance, it speeds installation and deployment of a baseline service.
Virtual appliances are generally designed to run on a particular virtualization platform. A number of vendors are
already making available virtual appliances that run in a Xen virtualization environment. Linux and other open source
software packages are particularly good for building virtual appliances due to the freedom and flexibility offered by
open source licensing.


Other Types of Virtualization


OS-level virtualization
A single operating system divided into multiple virtual hosts which share the
same kernel (OpenVZ, FreeBSD jails, Solaris Containers)

Application virtual machines


A cross-platform virtual machine to run applications
Compile once, run anywhere (Java JRE)

API-level virtualization
A software compatibility layer to run code compiled for one operating system
under another (Wine, Mac OS X "Classic")


Other forms of virtual machines exist which use techniques very different from system virtualization. These alternative
forms of virtualization compete with system virtualization approaches and have advantages and disadvantages of their
own.
OS-level virtualization solutions run a single kernel on the physical hardware which is shared by all virtual hosts. In
a sense, this is an extension of the standard Unix chroot mechanism (which restricts the files accessible by a process)
to further restrict processes in the jail to only see selected processes, IP addresses, and user accounts. This is "lighter-weight" than system virtualization, but it does not allow different operating systems or kernels to be run in different
virtual hosts, and the direct visibility of the same kernel code by all containers may be a liability. Examples include
OpenVZ for Linux (http://www.openvz.org/), FreeBSD jails, and Solaris Containers.
Application virtual machines allow support of cross-platform applications. Applications are written for and run on a
standard virtual machine which does not necessarily reflect any real-world hardware design. This virtual machine may
then be ported to various operating systems and processor architectures. Code is interpreted or converted to a standard
bytecode that runs on the virtual machine independent of processor architecture. In many ways, these types of virtual
machines are very similar to emulators. The Java JRE and the UCSD Pascal p-System are both examples.
API-level virtualization implements a software compatibility layer to allow code written for one operating system or
API to run under another operating system or API on the same processor architecture. In essence, the user libraries
and to some extent the operating system are emulated, not the processor architecture and hardware, and the foreign
code still runs natively on the processor. Wine (http://www.winehq.org/) is an example of this approach, allowing
Win32 applications to run in Linux by converting Win32 API calls to POSIX/X11/OpenGL calls. Another well-known
example is Apple's "Classic environment" which allowed unmodified Mac OS 9 applications to run under Mac OS
X on PowerPC Apple Macintoshes. This method can be complex to implement, especially if the API is under active
development.


End of Unit 1
Questions and Answers
Summary
What is virtualization?
Types of virtualization
Basic virtualization terminology
Use cases for virtualization



Unit 2

Basic Paravirtualized Domain Installation



Objectives
Upon completion of this unit, you should be able to:
Understand the basic architecture of Xen
Install and set up a privileged domain
Use virt-manager to install Red Hat Enterprise Linux in a
paravirtualized domain
Automatically start domains at boot

Xen
Xen is the basis for virtualization in RHEL 5
Supported architectures:
32-bit x86 with PAE support
Intel 64/AMD64

Paravirtualization with RHEL 5 and RHEL 4.5


Full virtualization for unmodified operating systems
Requires CPUs supporting Intel VT or AMD-V


Basic support for system virtualization in Red Hat Enterprise Linux 5 is provided by Xen, GPL-licensed open source
virtualization software originally developed at the University of Cambridge. The 32-bit and 64-bit x86 architectures
are fully supported by Red Hat, while Intel Itanium 2 support is being made available as a Technology Preview only
at this time. Support for full virtualization requires that the node's CPUs support Intel Virtualization Technology or
AMD Virtualization extensions, which is true only for the most recent processors. Support for paravirtualization on
32-bit x86 requires that the node's CPUs support Physical Address Extension (PAE), which should be the case on most
processors since the Pentium Pro, with the recent exception of early Pentium M processors.
Using the Xen hypervisor shipped with RHEL 5 and a RHEL 5 Dom0, Red Hat currently supports paravirtualized
guests running RHEL 5 or RHEL 4.5 and later. Earlier versions of Red Hat Enterprise Linux or unmodified operating
systems are supported as fully virtualized guests only.


Xen Architecture
Xen hypervisor runs on the hardware directly
Hypervisor boots first domain ("Dom0")
Dom0 is privileged, runs RHEL 5
xend and other supporting services run in Dom0

Dom0 is used to install and manage other domains


Other domains are called "DomU" domains
Domains are kept completely separate by the hypervisor


The Xen hypervisor runs directly on the machine's hardware in place of an operating system. After GRUB loads the
hypervisor at boot, the initial Xen domain is created and the Multiboot standard is used to boot it using a modified
Linux kernel and initrd. This modified kernel then loads the initial domain with a standard Red Hat Enterprise Linux 5
installation.
This initial domain is called Domain-0, or Dom0 for short, and unlike all other domains managed by the hypervisor,
it has privileged access to the hypervisor's control interfaces. Some user-space services such as xend are started to
support utilities which can install and control other domains and manage the hypervisor. Security of Dom0 is critical,
since a compromise of Dom0 can compromise the hypervisor and all other virtual machines on the node.
Non-privileged domains are generically referred to as "Domain-U" or DomU domains for short, and are isolated by the
hypervisor from the real hardware and each other. An intrusion in one DomU should not affect the hypervisor or other
domains.


Hardware Considerations
x86 processor with PAE support
pae on flags line in /proc/cpuinfo
vmx or svm also needed for full virtualization support

512 MB RAM per domain (recommended)


Sufficient disk space for each domain

In order to run paravirtualized guests, Red Hat Virtualization for x86/x86-64 requires CPUs that support PAE. Most
Intel and AMD processors have supported this since the release of the Pentium Pro. Notable exceptions are older
Pentium M processors. Whether or not your CPU supports PAE can be determined after Red Hat Enterprise Linux is
installed by looking at the contents of the file /proc/cpuinfo and verifying that pae appears on the flags line:
cat /proc/cpuinfo | grep pae works well for this. If fully virtualized guests will be needed, either vmx or svm will
also need to appear on the flags line to indicate Intel VT-x or AMD-V availability.
Each domain, including Dom0, needs enough physical RAM to run. Xen does not permit more physical RAM to be
assigned to domains than exists on the system. The recommended minimum per domain is 512 MB, which is the same
as the minimum supported memory for a RHEL 5 installation.
Sufficient disk space for each domain's operating system will also be required. The virtual disk for a domain can be
implemented as a file on an existing file system, an otherwise unused partition or disk, or even a logical volume or
snapshot. The official minimum requirement for RHEL 5 is 1 GB, but realistic installations will almost certainly
require more disk space.
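As a quick sanity check before planning a deployment, the flag tests described above can be combined into a short
shell snippet run on the candidate node (a minimal sketch; it only inspects /proc/cpuinfo on a RHEL 5 system):
# grep -q pae /proc/cpuinfo && echo "PAE present: paravirtualized guests possible"
# egrep -q 'vmx|svm' /proc/cpuinfo && echo "VT-x/AMD-V present: fully virtualized guests possible"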


Basic Dom0 Installation


Ensure that hardware supports virtualization
Perform a normal installation of the machine
Ensure that kernel-xen, xen, and virt-manager are installed
Select Virtualization component at install-time
Verify subscribed to RHN "RHEL Virtualization" channel, install with yum

Configure xend to start on boot


Configure kernel-xen as default kernel and reboot

After ensuring that the node's hardware supports virtualization, perform a normal installation of Red Hat Enterprise
Linux. Make sure that enough disk space is reserved for the guest operating systems, either as unpartitioned disk
space, space for image files in /var/lib/xen/images, or space available for new logical volumes in existing
volume groups.
As part of the installation and configuration process, the kernel-xen, xen, and virt-manager packages should also be
installed. The kernel-xen package provides the hypervisor and Dom0 kernel, the xen package provides mandatory
user-space services for Dom0, and virt-manager provides a GUI for easy installation and management of DomU
domains. At install time, this can be done by choosing to install the Virtualization component, which will be displayed
if an installation number has been specified which selects an installation of Red Hat Enterprise Linux Advanced
Platform, Red Hat Enterprise Linux, or Red Hat Enterprise Linux Desktop with Multi-OS Option. Alternatively, after
installation subscribe the machine to Red Hat Network and ensure that it is subscribed to the RHEL Virtualization
software channel, then install the missing packages with yum.
After installing the xen package, run chkconfig xend on to ensure that Xen management services will be available on
reboot. Also set DEFAULTKERNEL=kernel-xen in /etc/sysconfig/kernel and configure the Xen kernel
to be the default in GRUB. Reboot the system so that the Xen hypervisor and kernel and xend are all running.
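Assuming the packages are available from Red Hat Network or an installation tree, the post-installation steps above
boil down to a few commands (a sketch only; the grub.conf check assumes the kernel-xen entry was made the default):
# yum -y install kernel-xen xen virt-manager     # if the Virtualization component was not selected at install time
# chkconfig xend on                              # start the Xen management services on reboot
# grep DEFAULTKERNEL /etc/sysconfig/kernel       # should report DEFAULTKERNEL=kernel-xen
# grep ^default /boot/grub/grub.conf             # default= should select the kernel-xen entry
# reboot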


Dom0 Installation Considerations


Use a "thin Dom0" for virtualizing network servers
Minimal installation
Simple, Secure, Stable
Do not run other services in Dom0

A "thick Dom0" may be useful for development or testing


More complete operating system
Development tools, crash debuggers, etc.
Laptop or workstation environments


The intended use of a virtualized node can have a significant effect on the best installation plan for Dom0. There are
two basic approaches, a "thin Dom0" that is minimally installed with just the core requirements for managing the
virtual hosts, and a "thick Dom0" that consists of a much more complete user environment.
For a server machine which will support domains used to provide network services, a minimal Dom0 is generally
better. No network services should actually be served by Dom0, with the possible exception of services such as ssh for
management access. Dom0 should only be used for virtualization management, and DomUs should provide network
services. If Dom0 is unstable, it can affect other domains; if it is compromised, the attacker can control the other
domains.
On the other hand, a "thick Dom0" can be useful for development, testing, demonstrations, or simply the ability to
work in two operating systems simultaneously. For example, a software developer may be testing new code in a
DomU that has the ability to crash the guest. From Dom0 the test DomU can be suspended and core dumps of main
memory collected, and then debugging tools can be used to track down the problem. If the virtual machine locks up, it
can be reset or analyzed from Dom0 without interrupting Dom0's operating system.


virt-manager
GUI for virtual machine installation and management
Usage:
Applications->System Tools->Virtual Machine Manager
virt-manager

Easy wizard to help set up domain and install it


Access virtual machine's graphical or text console

Virtual Machine Manager is a powerful and easy to use graphical tool for installation and management of virtual
machines. It provides a list of running domains and their performance statistics, and can be used to shut down or
suspend domains, adjust their resource allocation, access their graphical or text consoles, and even automate the initial
setup process for a new domain.
To start the utility, from the System Tools menu item on the Applications menu, select Virtual Machine Manager, or
simply run virt-manager from the command-line.
Virtual Machine Manager is open source GPL-licensed code based on libvirt, which will allow future versions to
manage domains based on non-Xen virtualization mechanisms such as KVM or QEMU. For more information, visit
the website:
http://virt-manager.et.redhat.com/
It is important to note that while Virtual Machine Manager provides an easy and straightforward way to set up new
virtual machines, advanced configurations may require use of command-line tools.


DomU Components
Configuration file in /etc/xen/domain
Name and UUID of domain
Virtual CPUs assigned to domain
Memory assigned to domain at startup
MAC address of virtual network card

Disks are simulated with a Virtual Block Device


Simple File is a disk image in /var/lib/xen/images
Normal Disk Partition is an unused disk partition, logical volume, or other
block device


A shut-down DomU domain consists of two components: a configuration file specifying how the domain should
be created, and a virtual block device which simulates its hard disk. The configuration files for DomU domains are
located in the /etc/xen directory of Dom0. Each DomU domain normally has its own configuration file, but it is
possible to use tools such as xm to start a domain without one.
[root@stationX]# cat /etc/xen/example
# example domain-u config file
name = "example"
memory = "512"
disk = [ 'tap:aio:/var/lib/xen/images/example.img,xvda,w', ]
vif = [ 'mac=00:16:3e:66:06:aa', ]
vfb = ["type=vnc,vncunused=1"]
uuid = "d8353ff6-78cd-4931-928a-197a8f7ef3b9"
bootloader="/usr/bin/pygrub"
vcpus=1
on_reboot = 'restart'
on_crash = 'restart'
The configuration file is automatically generated when a new domain is installed using the virt-manager utility. The
file can also be written and edited by hand. The format of the configuration file is documented in the xmdomain.cfg(5)
man page.
Guest operating systems running in a DomU normally can not directly access the physical hardware underlying the
Xen environment. Thus a DomU domain may be moved from one Xen environment to another on the same processor
architecture without modification, even if the hardware otherwise differs, simply by moving the virtual block device
and configuration information.


Basic Paravirtualized DomU Installation


virt-manager can quickly install a new paravirtualized DomU
guest running Red Hat Enterprise Linux
File->New Machine...
Will need to specify domain name, location of VBD, amount of RAM, number
of VCPUs, and a URL pointing to a RHEL installation server
Install media URL must point to an NFS, FTP, or HTTP installation server
containing a RHEL 5 or RHEL 4.5 exploded tree
Can interactively install or specify a kickstart file

Run xm create domain on Dom0 to boot new DomU after it is


installed

The virt-manager installation wizard can quickly and easily install new paravirtualized DomU guests running Red
Hat Enterprise Linux. Installation can either be performed interactively through a window displaying the new domain's
graphical console, or optionally by specifying a URL pointing to a kickstart file.
To create and install a new DomU, select New Machine... from the File menu. A new window will open and walk you
through a series of questions about the new domain's configuration. Among the questions which will be asked are the
name for the new domain, the type and location of its virtual block device, the maximum and initial amount of RAM
assigned to the domain, and the number of virtual CPUs it should have. The MAC address is initially semi-randomly
generated.
virt-manager will also ask for the URL of the installation server. This must be a standard "exploded-tree" installation
server, available through FTP, HTTP, or NFS. ISO-based installation servers cannot be used by virt-manager
at present. The installation server must serve out RHEL 5 or RHEL 4.5 content or later. Earlier versions of Red Hat
Enterprise Linux are not supported as paravirtualized guests.
Other techniques for installing and reinstalling domains are available, and will be discussed in upcoming units.
At the end of the installation process, the new domain may be shut down. To restart it immediately, run xm create
domain, where domain is the name of your new DomU domain.
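For example, with the vm0 guest installed in the lab for this unit, the new domain could be booted and verified from
Dom0 (assuming virt-manager wrote its configuration file to /etc/xen/vm0):
# xm create vm0        # boot the newly installed DomU using /etc/xen/vm0
# virsh list           # vm0 should now appear as an active domain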


Activating Domains at Boot


Domains can be configured to start when Dom0 boots:
chkconfig xendomains on
ln -s /etc/xen/domain /etc/xen/auto/domain

xendomains also saves all running DomU domains when Dom0


shuts down
Similar to laptop hibernation; state is preserved
DomU domains automatically restored on next boot


The xendomains service script is used to configure DomU domains to start automatically when Dom0 boots. If
it is on, the /etc/xen/auto directory is checked for Xen domain configuration files, and any domain with
a configuration file in that directory is automatically created. Normally, a symlink is created from
/etc/xen/auto/domain to the real /etc/xen/domain file.
In addition, when the xendomains service script is stopped on Dom0 shutdown, it saves the state of all running DomU
domains to /var/lib/xen/save/domain. When Dom0 is rebooted and the xendomains service script starts, the
DomU domains are all restored to their state at shutdown. From the point of view of the DomU, this process is similar
to the hibernate-to-disk feature used by many laptops, so the DomU domains do not have to reboot just because Dom0
did.


End of Unit 2
Questions and Answers
Summary
Architecture of Xen-based virtualization
Installing and Configuring Dom0
Using virt-manager to install a PV DomU
Automatically starting domains with xendomains



Lab 2
Installing Red Hat Virtualization
Goal:

To install and configure support for system virtualization on your workstation,


and then to install your first virtual machine and configure it to start at boot.

System Setup:

A workstation installed with Red Hat Enterprise Linux 5.


Sequence 1: Installing the Red Hat Virtualization environment


Scenario:

In this sequence, you will install and configure an existing Red Hat
Enterprise Linux system with the software and tools needed to use Red Hat
Virtualization.

Instructions:
1.

Configure yum to get software packages from server1.example.com. Download a repository


configuration file from ftp://server1.example.com/pub/gls/server1.repo
and save it as /etc/yum.repos.d/server1.repo on your workstation.

2.

Verify that your CPU supports the PAE feature required by Red Hat Virtualization.

3.

Edit /etc/sysconfig/kernel to set kernel-xen to be your preferred kernel


package.

4.

Install the packages required for Domain-0 in the virtualization environment: kernel-xen, xen, and virt-manager.

5.

Verify that the kernel-xen kernel will be chosen by default at boot, and that the xend
service will start at boot.

6.

Reboot the hardware to enable virtualization. Verify that the correct kernel is running.


Sequence 2: Installing a Domain-U virtual machine


Scenario:

In this sequence, you will interactively install a new Domain-U virtual


machine with Red Hat Enterprise Linux using virt-manager.

Instructions:
1.

Using virt-manager, create a new virtual machine using the following configuration
information:

System Name: vm0

Root Password: redhat

Install Media URL: ftp://server1.example.com/pub

VM Max Memory: 500 MB

VCPUs: 1

Simple File with File Location: /var/lib/xen/images/vm0.img

Simple File Size: 3500 MB

Interactively install the machine. When prompted for an Installation Number, select
"Skip entering Installation Number". Choose default partitioning, installation options, and
packages. Set the root password to redhat.


Sequence 3: Configuring automatic restart of domains at boot


Scenario:

In this sequence, you will configure the xendomains init script to


automatically start domains at boot, resume saved domains at boot, and save
(hibernate, suspend-to-disk) running domains on shutdown. Unless specified
otherwise, all instructions in this sequence should be run on Domain-0.

Instructions:
1.

If it is not already open, run virt-manager so that you can monitor its management window.
If the installation in the previous sequence has completed, you should only see one domain
running, Domain-0. You may start work on this sequence, but do not reboot Domain-0
before the installation of vm0 completes.

2.

Set a soft link in /etc/xen/auto/vm0 pointing to /etc/xen/vm0, the configuration


file for your new domain.

3.

Configure the xendomains init script to run at boot.

4.

Reboot your Domain-0 system. When the system comes back up, log in and run virt-manager. You should see vm0 in the management window. If you double-click on the
window you should see your new virtual machine's graphical console.

5.

If you double-click on the entry for vm0 in the virt-manager window, you should see
either the initial post-installation setup screen or your new virtual machine's graphical
console. Complete post-installation if necessary, taking most defaults but not registering
your machine with Red Hat Network, then log into vm0 as root.

6.

Leave root logged in on vm0 through the virt-manager graphical console window. Reboot
Domain-0 a second time.

7.

When Domain-0 comes back up, log back in one more time, and run virt-manager.
You should see vm0 in the management window again. Double-click on vm0 to open its
graphical console window. The root user should still be logged in! The vm0 domain was
hibernated by xendomains on shutdown and restored on boot.


Sequence 1 Solutions
1.

Configure yum to get software packages from server1.example.com. Download a repository


configuration file from ftp://server1.example.com/pub/gls/server1.repo
and save it as /etc/yum.repos.d/server1.repo on your workstation.
# lftpget ftp://server1.example.com/pub/gls/server1.repo
# cp server1.repo /etc/yum.repos.d

2.

Verify that your CPU supports the PAE feature required by Red Hat Virtualization.
# grep --color pae /proc/cpuinfo
flags           : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm
constant_tsc up pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm

3.

Edit /etc/sysconfig/kernel to set kernel-xen to be your preferred kernel


package.
Open /etc/sysconfig/kernel in a text editor and change the DEFAULTKERNEL
directive to read
DEFAULTKERNEL=kernel-xen
Save the file and exit.

4.

Install the packages required for Domain-0 in the virtualization environment: kernel-xen, xen, and virt-manager.
# yum -y install kernel-xen xen virt-manager

5.

Verify that the kernel-xen kernel will be chosen by default at boot, and that the xend
service will start at boot.
a.

If the Xen kernel is the first kernel listed in /boot/grub/grub.conf, verify that
default=0 is set.

b.

# chkconfig xend on

6.

Reboot the hardware to enable virtualization. Verify that the correct kernel is running.
a.

# reboot

b.

# uname -r
2.6.18-8.el5xen


Sequence 2 Solutions
1.

Using virt-manager, create a new virtual machine using the following configuration
information:

System Name: vm0

Root Password: redhat

Install Media URL: ftp://server1.example.com/pub

VM Max Memory: 500 MB

VCPUs: 1

Simple File with File Location: /var/lib/xen/images/vm0.img

Simple File Size: 3500 MB

Interactively install the machine. When prompted for an Installation Number, select
"Skip entering Installation Number". Choose default partitioning, installation options, and
packages. Set the root password to redhat.
To create the new virtual machine, do the following:
a.

Run the Virtual Machine Manager.


[root@stationX]# virt-manager

b.

When the Open Connection dialog appears, select Local Xen host and click Connect.

c.

The Virtual Machine Manager window will open. Start the new virtual system
wizard by selecting New Machine... from the File menu. Click Forward.

d.

For System Name enter vm0 and click Forward.

e.

Select Paravirtualized and click Forward.

f.

For Install Media URL enter ftp://server1.example.com/pub. Leave the


Kickstart URL field empty. Click Forward.

g.

Select Simple File, enter /var/lib/xen/images/vm0.img as the File


Location, and set the File Size to 3500 MB. Leave Allocate entire virtual disk now?
checked. Click Forward.

h.

Set VM Max Memory and VM Startup Memory to 500 MB and the number of
VCPUs to 1. Click Forward.

i.

Review the summary screen and click Finish to boot the new virtual machine and start
the installer.


j.

A new New Keyring Password dialog will open. Enter redhat as the password.
Click OK.

k.

A window will open and the installer will run. When prompted to enter an Installation
Number select Skip entering Installation Number. Choose the default options,
partitioning, and packages. Set the root password to redhat. When the installation
finishes, select Reboot in the virtual machine's window.
NOTE: The new virtual machine will actually be shut down even though Reboot is
selected.


Sequence 3 Solutions
1.

If it is not already open, run virt-manager so that you can monitor its management window.
If the installation in the previous sequence has completed, you should only see one domain
running, Domain-0. You may start work on this sequence, but do not reboot Domain-0
before the installation of vm0 completes.
# virt-manager
Alternatively, in the GNOME desktop go to the Applications menu and select from System
Tools the item Virtual Machine Manager.

2.

Set a soft link in /etc/xen/auto/vm0 pointing to /etc/xen/vm0, the configuration


file for your new domain.
# ln -s /etc/xen/vm0 /etc/xen/auto/vm0

3.

Configure the xendomains init script to run at boot.


# chkconfig xendomains on

4.

Reboot your Domain-0 system. When the system comes back up, log in and run virt-manager. You should see vm0 in the management window. If you double-click on the
window you should see your new virtual machine's graphical console.

5.

If you double-click on the entry for vm0 in the virt-manager window, you should see
either the initial post-installation setup screen or your new virtual machine's graphical
console. Complete post-installation if necessary, taking most defaults but not registering
your machine with Red Hat Network, then log into vm0 as root.

6.

Leave root logged in on vm0 through the virt-manager graphical console window. Reboot
Domain-0 a second time.

7.

When Domain-0 comes back up, log back in one more time, and run virt-manager.
You should see vm0 in the management window again. Double-click on vm0 to open its
graphical console window. The root user should still be logged in! The vm0 domain was
hibernated by xendomains on shutdown and restored on boot.


Unit 3

Virtual Machine Management



Objectives
Upon completion of this unit, you should be able to:
Manage virtual machines
Learn basic virsh operations
Learn basic xm operations
Understand other libvirt-based and native Xen tools


Virtual Machine Management


Domains must be managed from the privileged domain (Dom0)
libvirt-based tools may support future alternative virtualization
technologies
virt-manager
gnome-applet-vm
virsh

Native Xen-based tools give low-level access


xm
xentop


In the current Xen-based implementation of Red Hat Virtualization, not all domains are equal. Management of DomU
domains and the Xen hypervisor must be done from the privileged domain, Dom0.
Red Hat has developed a stable, portable API to isolate programs from the virtualization or emulation technology in
use, called libvirt. This allows virtualization management programs to be protected from future changes in version or
virtualization technology. The C and Python programming languages and Xen environment are well supported, with
bindings for Perl and OCaml and support for KVM, QEMU, and others on the way. API documentation is included in
the libvirt-devel and libvirt-python RPMs. Libvirt is licensed under the LGPL.
A number of applications have been written using libvirt. virsh is a command-line tool for controlling domains. virt-manager is a graphical tool for controlling domains and monitoring their status. The "VM Applet" in the gnome-applet-vm package allows basic control of running domains and access to virt-manager from the GNOME Panel in
Dom0.
Native management tools developed by the Xen project are also available. These tools allow more detailed, lower-level access to domain management than the generic libvirt-based tools, but only work with Xen. The most important
of these tools is xm, a command-line tool for controlling Xen domains. Another important tool is xentop, which
provides real-time information about the status of a Xen-based virtualization system and its domains.


Identifying Virtual Machines


Management tools need labels to identify virtual machine
domains
Three ways to identify domains
domain name
domain ID number (temporary)
domain UUID (persistent)

virsh and xm can display and convert between identifier labels



Both libvirt-based management tools and native Xen management tools need some way to identify particular domains
being managed. Each domain has three labels which identify it; a domain name, a domain ID number temporarily
associated with the running domain, and a domain UUID permanently associated with the domain.
The domain name is usually the easiest identifier to use to manually identify and manage virtual machine domains.
This is normally a short string meaningful to human system administrators. For Xen, the name of the domain is read
from its configuration file each time the domain is created, except for the privileged domain which is always called
"Domain-0".
The domain ID number is a temporary identifier that is assigned to a domain once it is running. Some people think
of it like a process ID for a program; if a domain is destroyed and recreated, it will get assigned a new ID number.
The privileged domain is the first to start, and gets assigned ID 0; this is why it is often called Domain-0, or Dom0 for
short.
The domain UUID is a 128-bit universally-unique ID that is assigned to a particular domain persistently. The UUID
is used by system management tools to enable the same domain to be tracked over time even if its name is changed or
it is migrated to another node. A UUID is randomly generated and assigned to the domain when it is first installed by
virt-manager or similar tools, and is normally stored persistently in the domain's configuration file.
The name and domain ID of an active domain are clearly displayed in the graphical interface of virt-manager. The
UUID is visible if you select the machine of interest in the main window, then on the Edit menu selecting the Machine
Details... item. The virsh and xm tools also both support options to list and convert between the labels used to
identify domains. The command virsh domid name takes the name or UUID of the domain and prints its current
ID number; the equivalent command xm domid name only works with the domain's name. Likewise, the command
virsh domname id takes the ID or UUID of the domain and prints its name; the equivalent command xm domname
id only takes the domain's ID. The easiest way to display the UUID of a domain from the command-line is by using
virsh dominfo name|id.
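For example, assuming a running domain named vm0 that currently has domain ID 4, the conversions described above
might look like the following (a sketch; actual output will vary):
# virsh domid vm0          # name (or UUID) to current domain ID
# virsh domname 4          # domain ID (or UUID) to name
# virsh dominfo vm0        # full details, including the persistent UUID
# xm domid vm0
# xm domname 4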


gnome-applet-vm
GNOME Panel applet to manage virtual machines
Select active virtual machine from menu to manage
Hibernate, resume, destroy, or shutdown virtual machine
Adjust resources for virtual machine (launches virt-manager)
Access virtual machine's graphical console

Can launch main virt-manager interface in monitoring mode or


new virtual machine installation mode

The gnome-applet-vm package provides a GNOME Panel applet which simplifies the management of active virtual
machines. It can be added to the Panel by right-clicking on an unused area of a Panel bar and selecting Add to Panel...
from the pop-up menu; in the window that appears select VM Applet and close the window. (When first installed, the
applet may not appear until you log out and back in to GNOME or run killall -HUP gnome-panel.) The applet can be
slid into position by right-clicking on the applet and selecting Move, then by dragging the applet into position and
clicking the mouse to fix it in place.
The main drop-down menu of the applet displays the active virtual machines and their memory assignments. It also
has two items which can be used to launch virt-manager in domain installation or monitoring mode. By selecting an
active virtual machine, an additional menu appears which may be used to hibernate the machine, shut it down, destroy
it, change its configuration settings, or access its graphical console.


virsh
Command-line tool for virtual machine management
Usage:
virsh [command] [domID|domain-name|UUID] [options]
virsh help [command]

Designed to work with multiple virtualization methods


Part of the libvirt package

Can be used in interactive shell-like mode



virsh is a command-line tool for managing virtual machine domains based on libvirt. It can be used to create, suspend,
resume, save, and shutdown domains. It can also be used to list running domains and to get information about them.
The advantage of virsh over other tools native to particular virtualization technologies (such as xm for Xen) is that
it is designed to provide a standard interface for managing whatever virtualization mechanism may be in use on
the system. Upstream versions of libvirt and virsh support KVM and QEMU, for example. Even if interfaces and
mechanisms change, the virsh interface remains stable. Use of virsh is encouraged as it is intended to be the long term
management tool for Red Hat Virtualization.
[root@station5]# virsh
Welcome to virsh, the virtualization interactive terminal.
Type:  'help' for help with commands
       'quit' to quit

virsh # list
 Id Name                 State
----------------------------------
  0 Domain-0             running
  1 webserver            blocked

virsh # dominfo webserver
Id:             1
Name:           webserver
UUID:           70a309f0-909a-7cb0-5457-b439d04a00a9
OS Type:        linux
State:          blocked
CPU(s):         1
CPU time:       120.1s
Max memory:     262144 kB
Used memory:    261968 kB

virsh # quit

xm
Native Xen-based tool for domain management
Usage:
xm command [switches] [arguments] [variables]
xm [command] -h|--help

Includes Xen-specific functionality not yet available in virsh


Only works with Xen domains

Xen provides a standard command-line tool, xm, for management of virtual machine domains. It can do many of the
same things that virsh can do, and it also provides native access to Xen-specific features.
xm sends its commands to the Xen control daemon, xend, which manages the hypervisor. xm must be run in Dom0,
and most commands must be run as the root user. Most xm commands are executed asynchronously, which means that
the command returns before the operation has completed. Many operations, especially domain creation and shutdown
(which may involve booting or halting an operating system), can take tens of seconds to actually complete. It is very
important to verify that an operation has completed before continuing, possibly by polling the system with xm.
A list of available xm commands is available using the command xm -h, and detailed help for each of them is
available with xm command -h.
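One simple way to wait for an asynchronous operation to finish is to poll xm list from a shell loop, as suggested
above (a sketch, assuming a domain named vm0; the loop just watches for the name to disappear from the list):
# xm shutdown vm0
# while xm list | grep -qw vm0; do sleep 2; done     # returns once the domain has been destroyed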


Booting Domains
xm create configfile
Instantiates a running instance of a virtual machine
-c option opens virtual serial console

Uses same domain configuration files created by virt-manager


Can override options from the domain's configuration file


The xm create command is used to start a new Xen virtual machine domain. It takes as an argument the name of a
domain configuration file defined using the format documented in the xmdomain.cfg(5) man page. This is the same
format that is used when a new domain is initially set up by virt-manager. The command looks in /etc/xen for the
file unless an absolute path to the file is specified. The command returns as soon as the machine is started, but it will
probably still be booting its operating system at that point.
[root@station5]# xm create webserver
Using config file "/etc/xen/webserver".
Going to boot Red Hat Enterprise Linux Server (2.6.18-8.el5xen)
kernel: /boot/vmlinuz-2.6.18-8.el5xen
initrd: /boot/initrd-2.6.18-8.el5xen.img
Started domain webserver
Any value in a domain's configuration file may be overridden by specifying it in the xm create command.
[root@station5]# xm create webserver memory="512"
Note that this implies that a domain may be started without using a real configuration file if /dev/null is used as
a placeholder for the domain configuration file and all required values are specified on the command-line! We will
discuss the format of this file in more depth later in class.
Domains can also be created with virsh create config-file, but the format of its configuration file is different
from the format used by the xm version of the command. If you have an existing domain running, you can create an
XML configuration file for it suitable for use with virsh create by running the command virsh dumpxml domain
and redirecting the output to a file. Currently, there is no easy way to recreate a domain from the graphical virt-manager interface.
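For example, the libvirt XML description of an existing domain can be captured and later used to recreate it (a
sketch, assuming a running domain named webserver; the file path is only illustrative):
# virsh dumpxml webserver > /root/webserver.xml     # save the libvirt XML description
# virsh create /root/webserver.xml                  # recreate the domain from that description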


Stopping and Rebooting Domains


virsh destroy domain
Crash the domain as if its virtual power cord is yanked out

virsh shutdown domain


Shut down the domain cleanly, then destroy it

virsh reboot domain


Shut down the domain cleanly, then destroy and recreate it

xm supports the same commands to stop and reboot domains


The xm versions of shutdown and reboot support a -w flag to wait until the
domain is destroyed before returning to the prompt


Both virsh and xm provide similar commands to shutdown active domains. The main difference between the syntax
of the two tools is that xm shutdown domain and xm reboot domain support a -w option which causes xm to wait
until the domain is destroyed before returning to the prompt.
Some users new to the virtualization tools are intimidated by the term "destroy" used in the virsh destroy domain
command. The destroy command does not permanently destroy the virtual machine you have installed, it simply turns
off its virtual power switch. Generally, it is better to use virsh shutdown domain instead, which shuts down the
virtual machine gracefully before turning off the virtual power by "destroying" the domain.
Normally, virsh reboot domain shuts down and destroys the virtual domain, then recreates it again immediately,
simulating a warm reboot. However, Xen domains may be manually configured not to automatically restart on reboot.
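For example, assuming a domain named vm0:
# virsh shutdown vm0       # ask the guest to shut down cleanly, then destroy the domain
# xm shutdown -w vm0       # same idea, but -w waits until the domain is actually gone
# virsh destroy vm0        # pull the virtual power cord; use only if a clean shutdown fails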


Suspending/Resuming Active Domains


virsh suspend domain
Keeps virtual machine in memory but execution paused
Xen-native equivalent is xm pause domain

virsh resume domain


Resumes paused virtual machine
Xen-native equivalent is xm unpause domain


Sometimes it is desirable to temporarily stop a running virtual machine without shutting it down so that it can quickly
resume operation exactly where it left off. There are two different stopped states available to a system administrator
using either the graphical tools or the command-line tools.
The first stopped state is to "suspend" or "pause" the domain. In this state the virtual machine is still kept in RAM, but
it is no longer allowed to run---its current state is frozen until the machine is "resumed" or "unpaused", at which point
it continues from where it left off. From the command-line, virsh suspend domain pauses the domain and virsh
resume domain unpauses it. The Xen-specific equivalents are xm pause domain and xm unpause domain. The
Pause button in the virt-manager graphical console serves the same role, as do the Suspend and Resume items in
gnome-applet-vm's menus.


Saving/Restoring Domain State on Disk


virsh save domain state-file
Saves state of active domain to state-file on disk
Roughly equivalent to laptop hibernation
Standard file location is /var/lib/xen/save/domain-name

virsh restore state-file


Restores domain saved in state-file

xm supports the same commands to save and restore domains



A domain may also have its state "saved" to disk, which is roughly equivalent to laptop hibernation. In this state, the
virtual machine is paused and the contents of its memory and other state are dumped to a file, and then the domain is
"destroyed". Then it can be restored, recreating the domain using the state information in the save file so that it can
continue execution from the exact point at which it was stopped. The advantage of save over virsh suspend or xm
pause is that the node itself can be powered off without losing any of the virtual domain's data.
From the command line, virsh save domain save-file saves the state of domain to the file specified by save-file. The normal path used for these files is /var/lib/xen/save/dom-name, where dom-name is the name
of the domain. If SELinux is in enforcing mode, Xen may have difficulties saving the state file in other non-standard
locations. To restore the domain, use virsh restore state-file, which will recreate the domain with the state it had
when saved.
The xm command has the same syntax as virsh and can be used interchangeably with it for Xen domains. Domains
may also be saved with virt-manager from the domain's graphical console, by selecting Save from the Virtual
Machine menu.
If the xendomains init script is on, Xen will attempt to save any domains configured for automatic startup to the
standard file on shutdown, to automatically restore them on reboot.
At the time of writing, fully virtualized domains could not be saved (paravirtualized domains work fine). The current
design of Xen does not allow Dom0 to be saved or paused.
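For example, saving and restoring a domain named vm0 to the standard location might look like this (the same
operation performed in the lab for this unit):
# virsh save vm0 /var/lib/xen/save/vm0      # pause, write state to disk, and destroy the domain
# virsh restore /var/lib/xen/save/vm0       # recreate the domain from the saved state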


virsh list and xm list


Display information about domains
[root@station5]# virsh list
 Id Name                 State
----------------------------------
  0 Domain-0             running
  4 fedora7              blocked

[root@station5]# xm list
Name                ID  Mem(MiB)  VCPUs  State   Time(s)
Domain-0             0       940      1  r-----  30028.8
fedora7              4       499      1  -b----    600.2


Both virsh list and xm list display information about active domains. The virsh list command only lists each domain
ID, domain name, and its state, and always lists all active domains. The xm list command may take a domain name as an
argument to only print out information about that domain.
The following fields are displayed in the output of xm list:
Name          The name of the virtual machine.
ID            The number of the domain ID of this virtual machine.
Mem(MiB)      Memory size in megabytes.
VCPUs         The number of virtual CPUs used by each domain.
State         There are six state fields:
              r  running   the domain is running
              b  blocked   the domain is blocked, waiting on a resource or sleeping on I/O
              p  paused    the domain is suspended; it is in memory but not executing
              s  shutdown  the domain is in the process of gracefully shutting down, rebooting, or
                           saving (hibernating to disk)
              c  crashed   the domain has crashed; normally a crashed domain is destroyed and drops
                           off the list, but domains can be configured to be kept in memory on crash
                           for fault analysis
              d  dying     the domain is dying but has not completely shutdown or crashed
Time(s)       How much CPU time (in seconds) has been used by each domain.


Monitoring Domains with xentop


Displays real-time information about resource usage by Xen
domains
Detailed networking, VCPU time, VBD statistics available

Similar to UNIX top command


Use either xentop or xm top from the command-line

The xentop utility provides a continuously updated interactive display of status and performance information for the
active domains. The text-based interface displays information much like the top utility displays information about
processes. xentop may also be invoked as xm top.
The following case-insensitive commands are available when xentop is running:
D    adjust the number of seconds of delay between updates
N    toggle display of detailed networking information
B    toggle display of detailed virtual block device information
V    toggle display of virtual CPU information
R    toggle repeat of table header before each domain
S    change how domains are sorted in the interface
Q    quit


End of Unit 3
Questions and Answers
Summary
Basics of virsh, xm and other management utilities
Booting, rebooting, and shutting down virtual machines
Suspending and saving virtual machines
Monitoring active domains



Lab 3
Virtual Machine Management
Goal:

To gain experience with graphical and command-line utilities for managing


virtual machines.

System Setup:

A Red Hat Virtualization enabled computer installed with Red Hat Enterprise
Linux, configured with a working DomU named vm0 from an earlier lab, also
installed with Red Hat Enterprise Linux.


Sequence 1: Working with VM Applet (gnome-applet-vm)


Scenario:

In this sequence, you will install and learn how to use a GNOME Panel applet
for managing virtual machines.

Instructions:
1.

Close virt-manager if it is still running from earlier lab work. Install the gnome-applet-vm package using yum.

2.

Add VM Applet to the upper menu bar, and position it so that it is on the right side near the
Notification Area.

3.

Mouse over the applet's icon but do not click it. Note that it should display a tooltip reading
2 virtual domains.

4.

Use VM Applet to start the virt-manager interface.

5.

Use virt-manager to connect to the vm0 graphical console and suspend (pause) the virtual
machine.

6.

Resume the virtual machine using VM Applet.


Sequence 2: Working with virsh, xm, and xentop


Scenario:

In this sequence, you will continue working with management utilities for
virtualization. Three command-line utilities will be used, virsh, xm, and
xentop.

Instructions:
1.

Leaving the virt-manager graphical console for vm0 open, suspend the domain with virsh.

2.

Use virsh again to resume vm0.

3.

On the graphical console for vm0, log in. Now go back to a terminal on Domain-0 and
hibernate ("save") the domain to /var/lib/xen/save/vm0 with virsh.

4.

Restore vm0 and open its virt-manager graphical console to verify that your login session
is still intact.

5.

List the active domains with virsh, then use that tool to shut down vm0 and list them again.

6.

Recreate the vm0 domain using the xm command. Then use xm to list the active domains.

7.

Run xentop on Domain-0. Explore its interface and the information that is available. Both
running domains should have (very long) lines showing memory statistics, the number of
VCPUs they are assigned, networking information, and read/write activity on virtual block
devices.

8.

Optionally, if there is time remaining in the lab, you may go back over the virsh commands
above and repeat them using equivalent Xen-native xm commands.


Sequence 1 Solutions
1.

Close virt-manager if it is still running from earlier lab work. Install the gnome-applet-vm package using yum.
# yum install gnome-applet-vm

2.

Add VM Applet to the upper menu bar, and position it so that it is on the right side near the
Notification Area.
Right-click in any unused area of the upper menu bar. In the pop-up menu, select Add to
Panel... and, in the window that opens, select VM Applet. Click Add to add it to the Panel,
then click Close to close the window.
Right-click on the VM Applet icon and select Move from the pop-up menu. Move the
mouse to the right to drag the icon into position, then click to fix it in place.

3.

Mouse over the applet's icon but do not click it. Note that it should display a tooltip reading
2 virtual domains.

4.

Use VM Applet to start the virt-manager interface.


Left-click the applet's icon and examine the menu that appears. Select Run Virtual Machine
Manager.... The virt-manager program should start.

5.

Use virt-manager to connect to the vm0 graphical console and suspend (pause) the virtual
machine.
In virt-manager, connect to Local Xen host. Double-click on vm0 to open its graphical
console. Click the Pause button on the graphical console. The virtual machine should
freeze, and the word paused should appear on the console.

6.

Resume the virtual machine using VM Applet.


Open its pop-up menu and left-click on vm0. Select Resume from the menu. The word
paused should disappear from the console and the virtual machine should resume
execution. If for some reason you accidentally select Destroy or Shutdown, the domain can
be restarted from the command-line with xm create vm0.


Sequence 2 Solutions
1.

Leaving the virt-manager graphical console for vm0 open, suspend the domain with virsh.
# virsh suspend vm0
You should see the word paused appear on the console again.

2.

Use virsh again to resume vm0.


# virsh resume vm0

3.

On the graphical console for vm0, log in. Now go back to a terminal on Domain-0 and
hibernate ("save") the domain to /var/lib/xen/save/vm0 with virsh.
# virsh save vm0 /var/lib/xen/save/vm0
You should note that the graphical console closed and that vm0 is no longer displayed
as active in the management interface. It may take a short time for the save operation to
complete.
If you specify a path to a directory that does not exist or you do not have write access to
(through normal permissions or SELinux policy), then a bug in the current version of virsh
may cause it to report success and destroy the domain, but no state file will be saved! It is
also best to use absolute, not relative, paths to point to the state file for similar reasons.

4.

Restore vm0 and open its virt-manager graphical console to verify that your login session
is still intact.
# virsh restore /var/lib/xen/save/vm0

5.

List the active domains with virsh, then use that tool to shut down vm0 and list them again.
# virsh list
 Id Name                 State
----------------------------------
  0 Domain-0             running
  2 vm0                  running
# virsh shutdown vm0
# virsh list
 Id Name                 State
----------------------------------
  0 Domain-0             running

6.

Recreate the vm0 domain using the xm command. Then use xm to list the active domains.
# xm create vm0
Using config file "/etc/xen/vm0".
Going to boot Red Hat Enterprise Linux Server (2.6.18-8.el5xen)
kernel: /vmlinuz-2.6.18-8.el5xen


initrd: /initrd-2.6.18-8.el5xen.img
Started domain vm0
# xm list
Name                          ID  Mem(MiB)  VCPUs  State   Time(s)
Domain-0                       0      1504      2  r-----    330.7
vm0                            3       499      1  -b----     19.6

7.

Run xentop on Domain-0. Explore its interface and the information that is available. Both
running domains should have (very long) lines showing memory statistics, the number of
VCPUs they are assigned, networking information, and read/write activity on virtual block
devices.
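
No particular output is reproduced here; the commands for this step are simply the ones named above:
[root@stationX]# xentop
(or, equivalently, xm top)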

8.

Optionally, if there is time remaining in the lab, you may go back over the virsh commands
above and repeat them using equivalent Xen-native xm commands.
a.

# xm pause vm0

b.

# xm unpause vm0

c.

# xm save vm0 /var/lib/xen/save/vm0

d.

# xm restore /var/lib/xen/save/vm0

e.

# xm list
# xm shutdown vm0


Unit 4

Paravirtualized Domain Configuration



Objectives
Upon completion of this unit, you should be able to:
Understand format of domain configuration file
Manage DomU resources
Understand architecture of virtual networking
Manage virtual graphical and serial consoles
Install domains from the command-line and by hand


Paravirtualized Domains
Most virtual machines should use paravirtualization
Generally faster than full virtualization
Works on wider variety of existing hardware
Fewer limitations in current implementation

Paravirtualization is not ideal for all domains


Does not work with unmodified operating systems

Can run a mix of paravirtualized and fully virtualized domains



In this unit, we will look at advanced configuration and management of paravirtualized domains. In general, most
domains should be configured to use paravirtualization instead of full virtualization if there is a choice.
One reason for this is that paravirtualization is generally faster than hardware-assisted full virtualization. This is
because the operating system has been modified to work more closely with the hypervisor, which allows it to be
more efficient. In a sense, it can "cheat", using special drivers and scheduling techniques to allow it to coexist
gracefully with the hypervisor and the other domains on the system. The hypervisor must simulate hardware for a
fully virtualized guest, and this simulation usually adds overhead which is avoided by paravirtualized drivers. (In
some cases, an unmodified guest operating system running in a fully virtualized domain can be sped up if special
paravirtualized drivers have been written and installed, which allows the operating system to bypass some of this
simulation.)
Paravirtualization also works on a wider variety of hardware at present. Only the latest x86 and x86-64 processors
support the new Intel VT or AMD-V features at this time.
The current implementation of hardware-assisted full virtualization in Red Hat Enterprise Linux 5.0 also has a number
of limitations when compared with paravirtualized domains. Some of these limitations may be lifted in Red Hat
Enterprise Linux 5.1 or other upcoming updates.
However, there are situations where paravirtualization is not the correct solution. If you want to run an unmodified,
proprietary operating system like Windows Server 2003 or Windows XP, you will need to use full virtualization, for
example. Hardware-assisted full virtualization will be discussed in more detail later in the course.


Review of DomU Components


Configuration file in /etc/xen/domain
Name and UUID of domain
Virtual CPUs assigned to domain
Memory assigned to domain at startup
MAC address of virtual network card

Disks are simulated with a Virtual Block Device


Simple File is a disk image in /var/lib/xen/images
Normal Disk Partition is an unused disk partition, logical volume, or other
block device


A shut down DomU domain consists of two components: a configuration file specifying how the domain should
be created, and a virtual block device which simulates its hard disk. The configuration files for DomU domains are
located in the /etc/xen directory of Dom0. Each DomU domain normally has its own configuration file, but it is
possible to use tools such as xm to start a domain without one.
[root@stationX]# cat /etc/xen/example
# example domain-u config file
name = "example"
memory = "512"
disk = [ 'tap:aio:/var/lib/xen/images/example.img,xvda,w', ]
vif = [ 'mac=00:16:3e:66:06:aa', ]
vfb = ["type=vnc,vncunused=1"]
uuid = "d8353ff6-78cd-4931-928a-197a8f7ef3b9"
bootloader="/usr/bin/pygrub"
vcpus=1
on_reboot = 'restart'
on_crash = 'restart'
The configuration file is automatically generated when a new domain is installed using the virt-manager utility. The
file can also be written and edited by hand. The format of the configuration file is documented in the xmdomain.cfg(5)
man page.
Guest operating systems running in a DomU normally can not directly access the physical hardware underlying the
Xen environment. Thus a DomU domain may be moved from one Xen environment to another on the same processor
architecture without modification, even if the hardware otherwise differs, simply by moving the virtual block device
and configuration information.
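
As a brief sketch of that idea (not from the original text), assuming the example domain above is shut down
and a second Red Hat Virtualization host of the same architecture is reachable as host2, the two components
could be copied over with nothing more than scp:

[root@stationX]# scp /etc/xen/example root@host2:/etc/xen/
[root@stationX]# scp /var/lib/xen/images/example.img root@host2:/var/lib/xen/images/

The domain could then be started on host2 with xm create example. (The hostname host2 is only an example.)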


CPUs and Virtual CPUs


Each domain sees one or more virtual CPUs (VCPUs)
More VCPUs may be assigned than there are physical CPUs
Obvious negative performance implications

Domains may be restricted to certain physical CPUs


All domains have equal access to CPU time by default
Can dedicate real resources to critical domains


Each domain is assigned a certain number of virtual CPUs (VCPUs) when the domain is created. The VCPUs are
scheduled to run on real CPUs on the system by the hypervisor. By default, all domains have equal access to real CPU
time.
It is possible to configure a domain so that it has more VCPUs than there are actual CPUs on the system. However,
for reasons which should be obvious this will generally have a negative effect on domain performance. It is also
possible to force a domain to start on a certain real CPU and only allow it to run on certain real CPUs. The /etc/
xen/domain configuration file supports three options to configure CPU usage:
vcpus=4
The number of virtual CPUs to present to the guest operating system in the domain.
cpu=0
The number of the physical CPU to start the domain on. Note that the Xen kernel does not distinguish between
physical cores, dual cores on the same CPU socket, or hyperthreads. Normally, Xen will search for CPUs "depth-first":
first hyperthreads, then cores on the same socket, then cores on separate sockets. So on a dual-core HT CPU, CPUs 0
and 1 might be the hyperthreads on the first core, with CPUs 2 and 3 the hyperthreads on the second core.
cpus=2,4-7
The physical CPUs that this domain is permitted to be run on. This setting can be useful so that individual domains can
be dedicated to particular processors.
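
As a short illustrative sketch (not from the original text), a domain restricted to the second core (CPUs 2 and 3)
of the dual-core HT example above might carry these settings in its configuration file:

vcpus = 2
cpus = "2,3"

Placement can also be examined and adjusted for a running domain with the Xen-native tools, for example
xm vcpu-list domain to show the current mapping and xm vcpu-pin domain 0 2 to bind the domain's VCPU 0
to physical CPU 2.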


Memory
Domain needs enough memory for its guest operating system
512 MB recommended minimum

Domains take memory from Dom0 as they start up


Dom0 will not shrink below 256 MB by default
Memory not automatically returned when domain is destroyed

Hypervisor is not yet NUMA-aware


Systems supporting NUMA should set node interleaving in BIOS


A domain will need to be assigned enough memory for the needs of its guest operating system and applications. The
recommended minimum for Red Hat Virtualization is currently 512 MB, but smaller, still-functional domains are
possible. In the /etc/xen/domain configuration file, the amount of memory is specified in MB:
memory = "512"
When Dom0 boots, it takes control of all system memory. As DomU domains are created, they take their memory
from Dom0. For example, if the system has 3 GB RAM, and a DomU starts that is assigned 1 GB RAM, then Dom0
suddenly sees 1 GB of RAM vanish leaving it with 2 GB. The amount of memory available to Dom0 is not permitted
to shrink below 256 MB by default. This memory is not automatically returned to Dom0 when the DomU is destroyed,
under the theory that a new domain will be started (or restarted) shortly which will need that memory. Any unassigned
memory will be used for new domains before more is taken from Dom0. To recover the memory for Dom0, use virsh
nodeinfo to find out how many kB are available on the system, then run the command virsh setmem 0 memory-in-kB.
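
A worked example of that procedure (illustrative only; the actual figure must come from your own system): if
virsh nodeinfo reports a memory size of 2086912 kB, the memory can be handed back to Dom0 with:

[root@stationX]# virsh nodeinfo | grep -i 'memory size'
[root@stationX]# virsh setmem 0 2086912

Here 0 is the domain ID of Domain-0; virsh setmem Domain-0 2086912 is equivalent.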
The Xen hypervisor currently used by Red Hat Virtualization is not yet able to optimize itself for systems which have
a non-uniform memory architecture (such as multi-processor AMD Opteron systems). On these systems, certain parts
of memory can be accessed faster from certain CPUs than others. The virsh nodeinfo command should be able to
detect NUMA; check to see if "NUMA Cell(s)" is greater than one. For reliable and predictable performance, the
BIOS on such systems should have NUMA set to "node interleave". Future updates may address this issue.


Dynamic CPU and Memory Management


virsh setvcpus domain number-of-VCPUs
Adjusts number of VCPUs available to active domain
Xen-native equivalent is xm vcpu-set domain number-of-VCPUs

virsh setmem domain memory-in-kB


Adjusts amount of memory available to active domain
Xen-native equivalent is xm mem-set domain memory-in-MB

Initial number of VCPUs or amount of memory at domain start is the maximum

The number of VCPUs and the amount of main memory available to a given domain can be raised or lowered "on the
fly", while the domain is running. Extra care should be observed when shrinking memory for an active domain, as
setting memory too small may destabilize the operating system and crash the guest. The initial number of VCPUs and
the initial amount of memory a domain had when started controls the maximum amount which can be set dynamically
while the domain is running.
From the command-line, virsh setvcpus domain num-VCPUs adjusts the number of VCPUs available. The
Xen-native equivalent is xm vcpu-set domain num-VCPUs. Likewise, virsh setmem domain mem-kB adjusts the
amount of memory available. The Xen-native equivalent is xm mem-set domain mem-MB. Note well that virsh
expects memory to be specified in kilobytes while xm expects memory to be specified in megabytes. If a value greater
than the maximum allowed is specified, the maximum value is used.
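
To make the difference in units concrete, the following illustrative commands (not from the original text) each
shrink a running domain named vm0 to 512 MB of memory and one VCPU; the first pair is equivalent, since
524288 kB is 512 MB:

[root@stationX]# virsh setmem vm0 524288
[root@stationX]# xm mem-set vm0 512
[root@stationX]# virsh setvcpus vm0 1
[root@stationX]# xm vcpu-set vm0 1

As noted above, none of these values may exceed the memory or VCPU count the domain was started with.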
The virt-manager graphical tool can also be used to adjust these values. In the management interface, right-click
on the domain of interest and in the pop-up menu that appears, select Details. In the window that opens, click on the
Hardware tab and select either Processor or Memory to make appropriate adjustments.


Virtual Block Devices


Used to simulate hard drives and other storage devices
First virtual drive is normally /dev/xvda

Can use a simple disk image file kept on Dom0


Normally kept in /var/lib/xen/images/imagefile
Some tools create the file as part of DomU installation
Never use obsolete file: for a paravirtualized DomU!

Can use any physical block device on Dom0


Disk partition, logical volume or snapshot, whole disk, etc.


Virtual block devices simulate hard drives and other block devices in a DomU. To the virtual machine, VBDs appear
to be normal disks which can be partitioned and formatted with file systems and swap spaces. The first of these drives
typically appears as /dev/xvda to its DomU, the second would be /dev/xvdb, and so on. Disk partitions are
named normally; the first partition on the first VBD in the domain is normally /dev/xvda1. The actual backend
storage devices are either simple disk image files or block devices belonging to Dom0.
Simple disk image files are normally kept in the directory /var/lib/xen/images; SELinux may prevent files
in other locations from being used. The following directive configures /var/lib/xen/images/vm1.img as a
writable VBD which will show up as /dev/xvda:
disk = [ 'tap:aio:/var/lib/xen/images/vm1.img,xvda,w', ]
Many old examples on the Internet use the obsolete file: implementation instead of tap:aio: for simple
file images. NEVER USE file: WITH PARAVIRTUALIZED GUESTS! The old paravirtualized file:
implementation is not stable under heavy load and can eat your data if Dom0 crashes; always use tap:aio:.
When installing a new domain which will use a simple image file, the file must exist before the domain is started.
Some tools such as virt-manager and virt-install create the image file as part of the installation process; otherwise dd
can be used; for example, to create a 4 GB image file:
dd if=/dev/zero of=/var/lib/xen/images/vm1.img bs=1M count=4000
As an alternative, any existing unused disk partition, whole disk device, logical volume, or other physical block device
available to Dom0 can be used as a VBD:
disk = [ 'phy:/dev/sdb1,xvda,w', ]
Multiple disks may also be provided:
disk = [ 'phy:/dev/vg0/vm1,xvda,w',
'tap:aio:/var/lib/xen/images/vm1-data.img,xvdb,w', ]


Logical Volumes and VBDs


A logical volume makes a fantastic VBD
Easily manage Dom0 disk space used for domains
Snapshot volumes!
Instant backup of domain's VBD
Instant rebuild/reinstall by using a snapshot of a template volume as the VBD

Less portable between hosts than image files


Cluster LVM can be used with SAN-based devices


One excellent choice for a DomU's VBD is to use a logical volume on Dom0. Logical volumes have many advantages
and relatively few drawbacks as a VBD.
The first and perhaps most obvious advantage is increased management flexibility of Dom0's disk space. Space from
the volume group can be allocated and freed as domains are set up and removed from use. With care, VBDs can even
be resized non-destructively; tools such as pvresize can be used from within DomU to help.
Logical volume snapshots provide some of the most interesting advantages of this approach. For instance, a snapshot
can be taken almost instantly, backing up the domain's file systems. Also, modern snapshots in RHEL 5 are read-writable,
which can be very useful for experimenting with and reinstalling/rebuilding DomU domains. One scenario
would be to install a system using a logical volume as its VBD, configuring it as desired. Then that domain is shut
down. A snapshot is taken of that logical volume, and the domain's configuration file is edited to use the snapshot instead
of the original logical volume as its VBD. The domain is allowed to run normally, and changes can be made to the
domain's file systems normally. To return the domain to its initial state, shut down the domain, remove the snapshot,
take a new snapshot from the original logical volume, and restart the domain. This process is much quicker than
Note that multiple snapshots can be taken of the same original logical volume, and used to clone multiple domains
with the same configuration.
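
A minimal sketch of that snapshot workflow (not from the original text), assuming a template logical volume
/dev/vg0/vm1 and a domain whose configuration file points its VBD at the snapshot:

[root@stationX]# lvcreate -s -L 3.5G -n vm1-snap /dev/vg0/vm1
disk = [ 'phy:/dev/vg0/vm1-snap,xvda,w', ]

To throw away the accumulated changes and return to the template state, shut the domain down, then:

[root@stationX]# lvremove /dev/vg0/vm1-snap
[root@stationX]# lvcreate -s -L 3.5G -n vm1-snap /dev/vg0/vm1

and restart the domain. The volume and snapshot names here are only examples.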
One downside of using logical volume-based virtual block devices is that they are less easy to replicate from one
machine to another than simple image files are. This can be partially solved if the nodes both have access to the same
iSCSI or Fibre Channel SAN, in which case the cluster-aware CLVM system provided as part of Red Hat Enterprise
Linux Advanced Platform can be used to make relevant logical volumes visible to both nodes. Care must still be taken
that the same logical volume is not used by multiple nodes at the same time.
Also, it is a good practice not to allow the name of any volume group used by a DomU domain to be the same as
the name of a volume group used by Dom0. This makes it simpler to access those volume groups from Dom0 for
troubleshooting purposes.


Virtual Machines and Networking


Up to 3 virtual NICs per domain by default
A virtual network bridge in Dom0 connects all domains
Physical peth0 is uplink to LAN
Virtual vifD.N interfaces are bridge ports

Virtual bridge ports connected by virtual patch cables to domain's virtual network interfaces
vif2.0 on bridge connects to eth0 in Dom2


Up to three virtual network interface cards can be defined per domain by default. The MAC address for each network
card is persistently stored in the domain's configuration file. This is selected at random from the 00:16:3E vendor code
at domain setup time by virt-manager and virt-install.
vif = [ 'mac=00:16:3E:27:07:a0, bridge=xenbr0', ]
To support virtual network interfaces in the DomU domains, a virtual network bridge xenbr0 is set up by Dom0
automatically. This acts like a hub connecting all the domains with each other and the LAN. The brctl command can
be used to get information about the bridge configuration from Dom0.
The physical eth0 interface is renamed peth0 and becomes the uplink to the LAN for the bridge. Virtual vifD.N
interfaces are created to represent the bridge's network connectors. These are connected to virtual interfaces in the
node's domains; in the interface name, D is the domain's ID number and N is the number of the virtual interface. For
example, vif3.0 is connected by a virtual patch cable to the eth0 interface in Domain-3, and vif0.0 is connected
to the virtual eth0 in Domain-0.
This separation of peth0 and eth0 in Dom0 allows the privileged domain to set firewall rules on eth0 which do
not block other domains from communicating through the hub.
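
The resulting topology can be inspected from Dom0; the exact output depends on which domains are running,
so only the commands are sketched here (not from the original text):

[root@stationX]# brctl show xenbr0
[root@stationX]# ifconfig peth0
[root@stationX]# ifconfig vif2.0

brctl show lists peth0, vif0.0, and one vifD.N port for each running DomU attached to xenbr0.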


DomU Graphical Consoles


Each DomU has a virtual graphical console
Emulates a physical monitor, keyboard, mouse
Can access virtual terminals just like a real graphical console

Implemented using VNC by default


Can use virt-manager, or vncviewer or other VNC clients
Access restricted to Dom0 localhost by default

Can use SDL as an alternative implementation


Can only use from Dom0 physical graphical console


Xen in Red Hat Virtualization provides a virtual graphical console for each DomU. This emulates a physical monitor
and associated virtual frame buffer, keyboard, and mouse. This graphical console works just like the physical one on a
normal workstation; virtual terminals can be accessed and it can run the X Window System.
A VNC-based implementation of the graphical console is used by default. This is enabled by the following line in the
domain's configuration file:
vfb = ["type=vnc,vncunused=1"]
A DomU's graphical console can be accessed with virt-manager or any normal VNC client. In virt-manager, simply
double-click on the node in the management window. To get the port number in use by a domain for other VNC
clients, run the command virsh dumpxml domain and look for the graphics type line; subtract 5900 from the
port number to get the correct VNC display number. The xen-vncfb server for the domain is only accessible from
localhost on Dom0 by default for security reasons. The vncviewer utility's -via option, which tunnels the connection
over ssh, can be used to securely access the VNC interface
remotely. For example, vncviewer -via user@domain0 localhost:1 would securely access the DomU listening on port
5901 from a remote host. Note that any user on Dom0 can access the graphical console of any DomU as there is no
VNC password set.
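
As an illustrative sequence (not from the original text), assuming a domain named vm0 whose graphical console
is on port 5900 (display :0), the console could be reached from Dom0 as follows:

[root@stationX]# virsh dumpxml vm0 | grep "graphics type"
[root@stationX]# vncviewer localhost:0

Subtract 5900 from the port reported by virsh dumpxml to get the display number passed to vncviewer, as
described above.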
Accessing virtual terminals can be tricky because the normal Ctrl-Alt-Fn sequence is interpreted by the local X server
and not sent to the VNC client. In the virt-manager interface, send Ctrl-Ctrl-Ctrl, then send Alt-Fn to change virtual
terminals in the virtual graphical console. Using vncviewer, in the interface press F8, check Ctrl in the pop-up menu,
then press F8 again and check Alt, then press Fn. If you need to send F8, press it twice. Do not forget to uncheck Ctrl
and Alt in the pop-up menu after you change virtual terminals!
A second implementation of the graphical console is available based on SDL, the Simple DirectMedia Layer. This
solves the mouse-tracking/double-pointer issue seen in VNC, and may perform better, but can only be used in X on
the physical graphical console used by Dom0. To use this implementation instead of VNC, replace the vfb line in the
domain's configuration file as follows:
vfb = ["type=sdl"]
If the guest has been running using VNC in the past, you will probably need to destroy or shutdown (not reboot) the
domain to force the change to take effect.


DomU Virtual Serial Console


Each DomU also has a virtual "serial" console, /dev/xvc0
Can access in virt-manager
From command-line with xm console domain

Can set up /dev/xvc0 as DomU's system console


Kernel option console=xvc0
Edit /etc/inittab and /etc/securetty


Each DomU is also provided with a virtual "serial" console called /dev/xvc0. This console may be accessed
from Dom0 in several ways. Using virt-manager, from the View menu on the graphical console window for the
domain, select Serial Console. From the command-line, run xm console domain; to return to the prompt type Ctrl-].
Alternatively, the console can be opened when the domain is created with xm create -c domain.
The virtual serial console can be used as the system console in place of the graphical console. The graphical console
is still available and is still sent kernel messages, but boot messages sent by init to /dev/console are sent to the
serial console instead of the graphical console. To configure this, edit the /boot/grub/grub.conf file for the
DomU to include console=xvc0 as the last console= option on the kernel command-line:
title Red Hat Enterprise Linux Server (2.6.18-8.el5xen)
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-8.el5xen ro root=LABEL=/ rhgb quiet console=tty0 console=xvc0
initrd /boot/initrd-2.6.18-8.el5xen.img
The console=tty0 option points to the graphical console in this context. If multiple console= options are specified, all
will get kernel messages, but only the last one receives the messages written to /dev/console. The default configuration assumes
console=xvc0 console=tty0 if no console= options are specified.
To allow logins on the virtual serial console, ensure the following line is in the DomU's /etc/inittab file:
co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
In addition, to allow root to log in at that prompt, add the line xvc0 to the DomU's /etc/securetty file.


Dom0 Serial Console


Dom0 can use node's physical serial console
Device is /dev/ttySn
With a console server, useful for remote administration

Configuration is similar to DomU virtual console


Edit hypervisor and kernel options in /boot/grub/grub.conf
Edit /etc/inittab and /etc/securetty
May also be able to redirect BIOS messages


Like a normal server, Domain-0 can be set up to use the physical serial console for log messages and logins. This is
useful for a "headless" server with no graphical console but which is attached to a terminal or a device called a console
server which provides access to the serial console of one or more systems to remote users over the network.
The basic configuration is similar to /dev/xvc0, except that the appropriate serial port (/dev/ttySn) is used
instead. Also, GRUB, the hypervisor, and the kernel need to have their consoles redirected. There must not be a
splashimage directive defined if the GRUB menu is to appear on the serial console:
serial --unit=0 --speed=9600
terminal --timeout=10 serial console
default=0
timeout=5
#splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.18-8.el5xen)
root (hd0,0)
kernel /boot/xen.gz-2.6.18-8.el5 com1=9600,8n1
module /boot/vmlinuz-2.6.18-8.el5xen ro root=LABEL=/ rhgb quiet console=tty0 console=ttyS0,9600n8
module /boot/initrd-2.6.18-8.el5xen.img
On commodity PC hardware, the BIOS normally sends its output to the graphical console. Some server hardware does
support "serial console redirection", and if available this should be turned on in the BIOS as well.
The /etc/inittab file should also be edited if logins are to be allowed over the serial console, and the appropriate
ttySn line added to /etc/securetty if direct root logins will be allowed:
S0:2345:respawn:/sbin/agetty 9600 ttyS0 vt100


virt-install
Command-line DomU installation tool based on libvirt
Use options or answer questions interactively
Example:
virt-install -p -n testing -r 512 -l ftp://192.168.0.254/pub -x
ks=ftp://192.168.0.254/pub/ks.cfg -f /dev/vg0/testing --vnc
virt-install --help


The virt-install tool is a command-line utility for installing DomU domains, either non-interactively through options
or interactively if mandatory options are omitted. It is provided as part of the python-virtinst package, and is based on
libvirt.
This tool provides an easy way to set up the domain configuration file and set up the domain without user interaction
or manual creation of the configuration file. Like the virt-manager domain set up wizard, this tool will create the
configuration file as part of the set up process. Documentation for this utility is sparse, run virt-install --help for
details on what options are available. The utility is written in Python, so the scripts themselves are also available to
analyze.
The example on the slide creates a paravirtualized guest named "testing" with 512 MB of RAM, using the
/dev/vg0/testing logical volume as a VBD, and VNC for the graphical console. It will be kickstarted with the version of the
distribution at the URL ftp://192.168.0.254/pub, and provides the kernel command line with the location of
the kickstart file, ks=ftp://192.168.0.254/pub/ks.cfg.
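
The same command can also be spelled out with the long forms of those options, which some may find easier to
read; this is a sketch using the same hypothetical paths and URLs as the slide:

virt-install --paravirt --name testing --ram 512 \
    --location ftp://192.168.0.254/pub \
    --extra-args "ks=ftp://192.168.0.254/pub/ks.cfg" \
    --file /dev/vg0/testing --vnc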


Manual Domain Installation


Can install by hand if none of the standard set up tools are
available
RHEL Domain Installation Procedure
Acquire Stage 1 installer kernel/initrd files
Prepare VBDs as necessary
Create /etc/xen/domain configuration file (optional)
Use xm create to install domain


If virt-manager and virt-install are not available for some reason, it is possible to set up a domain with nothing more
than xm. The basic procedure is as follows:
1.

Get a copy of the installer kernel and initial RAM disk from install media. The files for use with PXE should
work:
[root@station5]# cp /media/cdrom/images/pxeboot/vmlinuz /var/lib/xen
[root@station5]# cp /media/cdrom/images/pxeboot/initrd.img /var/lib/xen

2.

Prepare the devices that will serve as the backend storage. The following command will create a 4 GB disk
image:
[root@station5]# dd if=/dev/zero of=/var/lib/xen/images/hd.img bs=1M count=4000

3.

The next step is to create a configuration file for the new DomU. Technically this step is optional, as all the
information in the file may instead be passed at the command-line, but this will greatly simplify restarting the
domain later.
[root@station5]# cat /etc/xen/manual
name = "manual"
memory = "512"
disk = [ 'tap:aio:/var/lib/xen/images/hd.img,xvda,w', ]
vif = [ 'mac=00:16:3e:44:40:a7, bridge=xenbr0', ]
uuid = "66f75ee1-3b0f-456e-947f-143f668c3b89"
vcpus=2
on_reboot = 'restart'
on_crash = 'restart'

4.

Finally, to bootstrap the install use the xm create command and include any options that are required to boot the
installer, such as the location of the kernel and ramdisk images.
[root@station5]# xm create -c -f /etc/xen/manual on_reboot=destroy \
> kernel=/var/lib/xen/vmlinuz \
> ramdisk=/var/lib/xen/initrd.img \
> extra="ks=http://192.168.0.254/ks.cfg vnc askmethod" \
> bootloader=""


End of Unit 4
Questions and Answers
Summary
Format of /etc/xen/domain
Managing CPUs, memory, and VBDs
Xen networking architecture
Managing virtual graphical and serial consoles
Installing with virt-install and xm



Lab 4
Advanced Paravirtualized Domain Management
Goal:

To gain experience with dynamically adjusting domain memory and managing virtual serial consoles,
and to explore advanced installation techniques.

System Setup:

A computer installed with Red Hat Enterprise Linux running Red Hat
Virtualization, with a single vm0 Domain-U installed from previous labs.


Sequence 1: Dynamic adjustment of domain memory


Scenario:

In this sequence, you will adjust the amount of memory assigned to Domain-0 and vm0 while
the domains are running. You will learn how to make Domain-0 recover memory released by
domains that have been shut down.

Instructions:
1.

Open the virt-manager management window (or use xm list) to view the current memory
usage of Domain-0 and vm0. (If vm0 is not running, start it with xm create vm0.) On
Domain-0, use free to verify the amount of memory available to the domain.

2.

Shut down vm0 gracefully. Is its memory reassigned to Domain-0?

3.

Adjust Domain-0 so that it is using as much of the system's memory as possible.

4.

Verify that the memory is visible to Domain-0.

5.

Recreate vm0 with 640 MB of memory. (You can pass arguments to xm to accomplish this
without editing the configuration file.)

6.

Log into vm0 and verify that it sees the memory.

7.

Reduce the amount of memory in vm0 to 256 MB.

8.

Verify that vm0 only sees 256 MB.


Sequence 2: Configuring Virtual Serial Consoles


Scenario:

In this sequence, you will configure vm0 so that its virtual serial console is the
system console and displays all boot messages from init. You will also ensure
that logins are available on the virtual serial console while still being available
on the graphical console as well.

Instructions:
1.

Log in to vm0 as root.

2.

Configure the appropriate kernel arguments in GRUB to send messages to both the
graphical console and to the virtual serial console. Make the virtual serial console the
system console.

3.

Edit the /etc/inittab file on vm0 to enable logins on the virtual console.

4.

Edit the /etc/securetty file on vm0 to enable root logins on the virtual console.

5.

Go back to a terminal on Domain-0. Shut down vm0. Once it has stopped, restart vm0
with a virtual serial console window open. Do you see the boot messages?


Sequence 3: Installing a new DomU with virt-install


Scenario:

In this sequence, you will use the virt-install command-line utility to install a
new virtual machine, vm1, on your workstation. You will also configure a new
logical volume in a new volume group for use as the domain's virtual block
device.

Instructions:
1.

Log into Domain-0 as root. Create a large partition on the main hard drive using all the
remaining unpartitioned space as a new physical volume, and then use pvcreate to register
it with the LVM database.

2.

Create a new volume group, vbds, using /dev/sda4.

3.

Create a new 3.5 GB logical volume named vm1 in the volume group vbds, which will be
used as the virtual disk for the new domain you are about to create.

4.

Ensure the vnc package is installed on Domain-0, for the vncviewer utility.

5.

Use virt-install to Kickstart a new Red Hat Enterprise Linux 5 paravirtualized domain from
the command-line that meets the following specifications:

System Name: vm1

Install Media URL: ftp://server1.example.com/pub

Kickstart URL: ftp://server1.example.com/pub/gls/vm1.cfg

Disk Image File: /dev/vbds/vm1

VM Memory: 256 MB

VCPUs: 1

Fixed MAC address: 00:16:3e:00:XX:01

The XX in your fixed MAC address should be your station number in normal decimal;
station1 should use 01, station10 should use 10 (not 0a), and so on.
(If the install refuses to start because of insufficient system memory, you may wish to shut
down vm0 to free up some RAM.)
6.

Monitor the installation and ensure that it completes successfully. When it is finished, select
Reboot; the domain will actually shut down. Restart vm1 and log in. The password for the
root account is redhat.


Sequence 4: Rapid domain cloning from LVM-based template


Scenario:

In this sequence, you will use your newly-installed vm1 domain as a template
to create two more domains extremely rapidly.

Instructions:
1.

First, pre-configure vm1 as you want it set up. Make sure vm1 is running and log in as root.
Verify that /etc/hosts reads as follows:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

2.

Next, on vm1 verify that HOSTNAME=localhost.localdomain is set in
/etc/sysconfig/network so that the host will get its hostname from DNS based on its
DHCP address.

3.

On vm1, remove the HWADDR line from /etc/sysconfig/network-scripts/ifcfg-eth0
so that whatever virtual Ethernet card the kernel detects first will be eth0.

4.

Finally, make a configuration change to the vm1 template system. Enable use
of the server1.example.com YUM repository by downloading the
ftp://server1.example.com/pub/gls/server1.repo file and installing it in
/etc/yum.repos.d on vm1.

5.

On Domain-0, copy /etc/xen/vm1 to /etc/xen/vm2 and /etc/xen/vm3.

6.

Edit /etc/xen/vm2 so that:

Its name is vm2

It uses /dev/vbds/vm2 as its disk

Its MAC address is 00:16:3e:00:XX:02

Its uuid is replaced with a new one generated using uuidgen

Save the file and exit the editor.


7.

Edit /etc/xen/vm3 so that:

Its name is vm3

It uses /dev/vbds/vm3 as its disk

Its MAC address is 00:16:3e:00:XX:03

Its uuid is replaced with a new one generated using uuidgen

Save the file and exit the editor.


8.

Shut down vm1. Make absolutely certain that it has completely shut down cleanly before
continuing to the next step!

9.

Verify that vm1 is not running. Make two logical volume snapshots of /dev/vbds/vm1;
vm2 which is 3500 MB in size, and vm3 which is 1000 MB in size.

10.

You have just created two clones of vm1! Start domains vm2 and vm3. You may have to
shut down vm0 on your system in order to free enough memory.

11.

At this point you can log into vm2 and vm3 and see that they have identical configurations
to vm1, but unique MAC addresses on their network cards. Use yum to install some
packages on vm2, and then use rpm to compare the package list on vm3, and you will see
their configuration is now different.

12.

Reinstall vm2 to bring it back to stock configuration. Destroy the vm2 domain and remove
/dev/vbds/vm2. Make sure vm1 is still stopped and make a new /dev/vbds/vm2
snapshot of /dev/vbds/vm1. Start vm2, log in, and check to see if your extra packages
are still installed.

13.

You may have noticed that even though we made /dev/vbds/vm3 only 1 GB in size, its
/dev/xvda still appears to be 3.5 GB in size just like vm1 and vm2. This is one of the
advantages of snapshot volumes; the snapshot used for the block device only needs to be
large enough to hold the differences between it and its parent volume. On Domain-0, run the
lvs command to see how much of the snapshot is in use (under Snap%).
If the differences accumulate to more data than the snapshot can hold, it becomes invalid
and the virtual disk is effectively destroyed, which is the downside of using a snapshot
smaller than the parent volume. But a snapshot can be resized before it becomes full to
avoid this. Use lvresize on /dev/vbds/vm3 to give it 3.5 GB of space.


Sequence 5: Manual domain installation with xm


Scenario:

For this final sequence, you will manually install one more paravirtualized
domain, vm4, with the xm command.

Instructions:
1.

On Domain-0, create a new 3.5 GB volume, vm4, in the vbds volume group.

2.

Create the directory /var/lib/xen/install. Download the vmlinuz and
initrd.img files from ftp://server1.example.com/pub/images/xen/ to
that directory.

3.

Copy /etc/xen/vm1 to /etc/xen/vm4. Edit /etc/xen/vm4 so that:

Its name is vm4

It uses /dev/vbds/vm4 as its disk

Its MAC address is 00:16:3e:00:XX:04

Its uuid is replaced with a new one generated using uuidgen

Save the file and exit the editor.


4.

Use xm to start the Kickstart of vm4. Use ftp://server1.example.com/pub/gls/vm1.cfg
as your kickstart file. Use kernel= to pass the path to the installer kernel,
ramdisk= for the path to the initrd, and extra= to pass Kickstart options. Set bootloader=
to an empty string. Set on_reboot=destroy so that the domain uses its new kernel and
not the installer kernel after installation. (You may need to shut down one of your other
domains first if your system is running low on available RAM.)

5.

After the installation finishes, you may elect to restart vm4 to verify that the installation
worked.


Sequence 1 Solutions
1.

Open the virt-manager management window (or use xm list) to view the current memory
usage of Domain-0 and vm0. (If vm0 is not running, start it with xm create vm0.) On
Domain-0, use free to verify the amount of memory available to the domain.
[root@stationX]# xm list
Name                          ID  Mem(MiB)  VCPUs  State   Time(s)
Domain-0                       0      1504      2  r-----    330.7
vm0                            2       499      1  -b----     19.6
[root@stationX]# free
             total       used       free     shared    buffers     cached
Mem:       1540096     588396     951700          0      52848     312116
-/+ buffers/cache:      223432    1316664
Swap:       522104          0     522104

2.

Shut down vm0 gracefully. Is its memory reassigned to Domain-0?


[root@stationX]# virsh shutdown vm0
Domain vm0 is being shutdown
[root@stationX]# xm list
Name                          ID  Mem(MiB)  VCPUs  State   Time(s)
Domain-0                       0      1504      2  r-----   1874.3

No, the memory formerly used by vm0 is not reassigned to Domain-0 automatically,
under the assumption that a new domain will need it shortly.
3.

Adjust Domain-0 so that it is using as much of the system's memory as possible.


Use virsh nodeinfo or xm info to determine available memory, then adjust Dom0
accordingly.
[root@stationX]# virsh nodeinfo
CPU model:           i686
CPU(s):              2
CPU frequency:       1994 MHz
CPU socket(s):       1
Core(s) per socket:  2
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         2086912 kB
[root@stationX]# virsh setmem Domain-0 2086912
The xm command or virt-manager may be used as alternative approaches. It is possible
that not all free memory available can be reassigned to Dom0 for various reasons, but that
memory is still generally available for assignment to the next virtual machine that is started.
4.

Verify that the memory is visible to Domain-0.


[root@stationX]# free
             total       used       free     shared    buffers     cached
Mem:       1931068     588396     951700          0      52848     312116
-/+ buffers/cache:      223432    1316664
Swap:       522104          0     522104

5.

Recreate vm0 with 640 MB of memory. (You can pass arguments to xm to accomplish this
without editing the configuration file.)
[root@stationX]# xm create vm0 memory=640

6.

Log into vm0 and verify that it sees the memory.


(Here is a slightly different solution than using free again:)
[root@vm0]# cat /proc/meminfo | grep MemTotal
MemTotal:       655532 kB

7.

Reduce the amount of memory in vm0 to 256 MB.


[root@stationX]# virsh setmem vm0 262144

8.

Verify that vm0 only sees 256 MB.


[root@vm0]# cat /proc/meminfo | grep MemTotal
MemTotal:       262144 kB


Sequence 2 Solutions
1.

Log in to vm0 as root.

2.

Configure the appropriate kernel arguments in GRUB to send messages to both the
graphical console and to the virtual serial console. Make the virtual serial console the
system console.
Edit /boot/grub/grub.conf to add appropriate console directives. The last one
becomes the system console, /dev/console, but all get kernel messages.
title Red Hat Enterprise Linux Server (2.6.18-8.el5xen)
root (hd0,0)
kernel /vmlinuz-2.6.18-8.el5xen ro root=/dev/VolGroup00/LogVol00 rhgb quiet console=tty0 console=xvc0
initrd /initrd-2.6.18-8.el5xen.img

3.

Edit the /etc/inittab file on vm0 to enable logins on the virtual console.
Add a line configuring agetty to listen on /dev/xvc0 immediately before the lines
starting mingetty on the virtual terminals:
co:2345:respawn:/sbin/agetty xvc0 9600 vt100-nav
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
You can run init q to test this change without rebooting. You should see a login prompt on
the virtual serial console provided by virt-manager, xm console vm0, or similar tools. You
may need to hit Return in a virtual serial console viewer to make the prompt appear. To
detach from xm console and return to the shell prompt type Ctrl-].

4.

Edit the /etc/securetty file on vm0 to enable root logins on the virtual console.
Add the following line to the end of /etc/securetty:
xvc0

5.

Go back to a terminal on Domain-0. Shut down vm0. Once it has stopped, restart vm0
with a virtual serial console window open. Do you see the boot messages?
[root@stationX]# virsh shutdown vm0
Ensure that the domain has stopped before continuing with the next step.
[root@stationX]# xm create -c vm0
You should see pyGRUB, then the boot messages from the kernel and init. If you do not,
go back and check your work. Make sure your edits were all on vm0 and not Domain-0 or
some other virtual machine.


Sequence 3 Solutions
1.

Log into Domain-0 as root. Create a large partition on the main hard drive using all the
remaining unpartitioned space as a new physical volume, and then use pvcreate to register
it with the LVM database.
The following instructions assume that the main hard disk on your system is /dev/sda,
that /dev/sda4 does not yet exist, and that significant space is still unpartitioned on the
disk. Exact cylinder numbers may vary on your system.
[root@stationX]# fdisk /dev/sda
The number of cylinders for this disk is set to 12161.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
Command action
e
extended
p
primary partition (1-4)
p
Selected partition 4
First cylinder (1226-12161, default 1226): Return
Using default value 1226
Last cylinder or +size or +sizeM or +sizeK (1226-12161, default
12161): Return
Using default value 12161
Command (m for help): t
Partition number (1-4): 4
Hex code (type L to list codes): 8e
Changed system type of partition 4 to 8e (Linux LVM)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@stationX]# partprobe /dev/sda


[root@stationX]# pvcreate /dev/sda4
  Physical volume "/dev/sda4" successfully created

2.

Create a new volume group, vbds, using /dev/sda4.
[root@stationX]# vgcreate vbds /dev/sda4

3.

Create a new 3.5 GB logical volume named vm1 in the volume group vbds, which will be
used as the virtual disk for the new domain you are about to create.
[root@stationX]# lvcreate -L 3.5G -n vm1 vbds

4.

Ensure the vnc package is installed on Domain-0, for the vncviewer utility.
[root@stationX]# yum -y install vnc

5.

Use virt-install to Kickstart a new Red Hat Enterprise Linux 5 paravirtualized domain from
the command-line that meets the following specifications:

System Name: vm1

Install Media URL: ftp://server1.example.com/pub

Kickstart URL: ftp://server1.example.com/pub/gls/vm1.cfg

Disk Image File: /dev/vbds/vm1

VM Memory: 256 MB

VCPUs: 1

Fixed MAC address: 00:16:3e:00:XX:01

The XX in your fixed MAC address should be your station number in normal decimal;
station1 should use 01, station10 should use 10 (not 0a), and so on.
(If the install refuses to start because of insufficient system memory, you may wish to shut
down vm0 to free up some RAM.)
[root@stationX]# virt-install -p -n vm1 -r 256 -f /dev/vbds/vm1 \
    -m 00:16:3e:00:XX:01 -l ftp://server1.example.com/pub \
    -x ks=ftp://server1.example.com/pub/gls/vm1.cfg --vnc
Do not forget to replace the XX in the MAC address with your station number! Be careful
not to use another student's MAC address!
If the VNC window fails to open with some errors ending with:
Domain installation still in progress. You can reconnect
to the console to complete the installation process.


the installation is probably running fine, and you may open the VNC graphical console
window manually from virt-manager.
6.

Monitor the installation and ensure that it completes successfully. When it is finished, select
Reboot; the domain will actually shut down. Restart vm1 and log in. The password for the
root account is redhat.
[root@stationX]# xm create vm1


Sequence 4 Solutions
1.

First, pre-configure vm1 as you want it set up. Make sure vm1 is running and log in as root.
Verify that /etc/hosts reads as follows:
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6

2.

Next, on vm1 verify that HOSTNAME=localhost.localdomain is set in /etc/sysconfig/network so that the host will get its hostname from DNS based on its DHCP address.

3.

On vm1, remove the HWADDR line from /etc/sysconfig/network-scripts/ifcfg-eth0 so that whatever virtual Ethernet card the kernel detects first will be eth0.
The file should read something like:
[root@vm1]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
DHCPCLASS=
ONBOOT=yes

4.

Finally, make a configuration change to the vm1 template system. Enable use of the server1.example.com YUM repository by downloading the ftp://server1.example.com/pub/gls/server1.repo file and installing it in /etc/yum.repos.d on vm1.
[root@vm1]# lftpget ftp://server1/pub/gls/server1.repo
[root@vm1]# cp server1.repo /etc/yum.repos.d

5.

On Domain-0, copy /etc/xen/vm1 to /etc/xen/vm2 and /etc/xen/vm3.


[root@stationX]# cp /etc/xen/vm1 /etc/xen/vm2
[root@stationX]# cp /etc/xen/vm1 /etc/xen/vm3

6.

Edit /etc/xen/vm2 so that:

Its name is vm2

It uses /dev/vbds/vm2 as its disk

Its MAC address is 00:16:3e:00:XX:02

Its uuid is replaced with a new one generated using uuidgen

Save the file and exit the editor.


[root@stationX]# uuidgen
52b46335-982b-46e3-e9c6-cb2adf591829
The /etc/xen/vm2 file should read something like this:
name = "vm2"
memory = "256"
disk = [ 'phy:/dev/vbds/vm2,xvda,w', ]
vif = [ 'mac=00:16:3e:00:XX:02, bridge=xenbr0', ]
vfb = ["type=vnc,vncunused=1"]
uuid = "52b46335-982b-46e3-e9c6-cb2adf591829"
bootloader="/usr/bin/pygrub"
vcpus=1
on_reboot = 'restart'
on_crash = 'restart'
It is critical that two domains do not share the same UUID, or xend may get confused about which domain is which due to stale information in xenstored! It is better to have no uuid set than to have two domains with the same UUID; the Xen software will then generate a temporary one on domain creation.
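If you prefer not to paste the new UUID in by hand, the uuid line can be rewritten in place. The following is only an illustrative sketch; it assumes the uuid line has exactly the form shown above:
[root@stationX]# NEWUUID=$(uuidgen)
[root@stationX]# sed -i "s/^uuid = .*/uuid = \"$NEWUUID\"/" /etc/xen/vm2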
7.

Edit /etc/xen/vm3 so that:

Its name is vm3

It uses /dev/vbds/vm3 as its disk

Its MAC address is 00:16:3e:00:XX:03

Its uuid is replaced with a new one generated using uuidgen

Save the file and exit the editor.


[root@stationX]# uuidgen
db176aef-ab23-4733-bc74-0ef8a7bf3b97
The /etc/xen/vm3 file should read something like this:
name = "vm3"
memory = "256"
disk = [ 'phy:/dev/vbds/vm3,xvda,w', ]
vif = [ 'mac=00:16:3e:00:XX:03, bridge=xenbr0', ]
vfb = ["type=vnc,vncunused=1"]
uuid = "db176aef-ab23-4733-bc74-0ef8a7bf3b97"
bootloader="/usr/bin/pygrub"
vcpus=1
on_reboot = 'restart'
on_crash = 'restart'


8.

Shut down vm1. Make absolutely certain that it has completely shut down cleanly before
continuing to the next step!
[root@stationX]# virsh shutdown vm1

9.

Verify that vm1 is not running. Make two logical volume snapshots of /dev/vbds/vm1;
vm2 which is 3.5 GB in size, and vm3 which is 1 GB in size.
[root@stationX]# virsh list
 Id Name                 State
----------------------------------
  0 Domain-0             running
[root@stationX]# lvcreate -s -L 3.5G -n vm2 /dev/vbds/vm1
  Logical volume "vm2" created
[root@stationX]# lvcreate -s -L 1G -n vm3 /dev/vbds/vm1
  Logical volume "vm3" created

10.

You have just created two clones of vm1! Start domains vm2 and vm3. You may have to
shut down vm0 on your system in order to free enough memory.
# virsh shutdown vm0
[root@stationX]# xm create vm2
[root@stationX]# xm create vm3

11.

At this point you can log into vm2 and vm3 and see that they have identical configurations
to vm1, but unique MAC addresses on their network cards. Use yum to install some
packages on vm2, and then use rpm to compare the package list on vm3, and you will see
their configuration is now different.
On vm2:
[root@vm2]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: sit0: <NOARP> mtu 1480 qdisc noop
link/sit 0.0.0.0 brd 0.0.0.0
3: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
link/ether 00:16:3e:00:XX:02 brd ff:ff:ff:ff:ff:ff
[root@vm2]# yum install vsftpd
[root@vm2]# rpm -q vsftpd
vsftpd-2.0.5-10.el5
On vm3:
[root@vm3]# rpm -q vsftpd
package vsftpd is not installed


12.

Reinstall vm2 to bring it back to stock configuration. Destroy the vm2 domain and remove
/dev/vbds/vm2. Make sure vm1 is still stopped and make a new /dev/vbds/vm2
snapshot of /dev/vbds/vm1. Start vm2, log in, and check to see if your extra packages
are still installed.
On Domain-0:
[root@stationX]# virsh destroy vm2
[root@stationX]# lvremove /dev/vbds/vm2
[root@stationX]# lvcreate -s -L 3.5G -n vm2 /dev/vbds/vm1
[root@stationX]# xm create vm2

On vm2:
[root@vm2]# rpm -q vsftpd
package vsftpd is not installed
13.

You may have noticed that even though we made /dev/vbds/vm3 only 1 GB in size, its
/dev/xvda still appears to be 3.5 GB in size just like vm1 and vm2. This is one of the
advantages of snapshot volumes; the snapshot used for the block device only needs to be
large enough to hold the differences between it and its parent volume. On Domain-0, run the
lvs command to see how much of the snapshot is in use (under Snap%).
If the differences accumulate to more data than the snapshot can hold, it becomes invalid
and the virtual disk is effectively destroyed, which is the downside of using a snapshot
smaller than the parent volume. But a snapshot can be resized before it becomes full to
avoid this. Use lvresize on /dev/vbds/vm3 to give it 3.5 GB of space for differences.
[root@stationX]# lvs
  LV   VG   Attr   LSize   Origin Snap%  Move Log Copy%
  vm1  vbds owi-a-   3.50G
  vm2  vbds swi-ao   3.50G vm1      0.32
  vm3  vbds swi-ao   1.00G vm1      1.34
  home vol0 -wi-ao 512.00M
  root vol0 -wi-ao   8.00G
[root@stationX]# lvresize -L 3.5G /dev/vbds/vm3
  Extending logical volume vm3 to 3.50 GB
  Logical volume vm3 successfully resized
[root@stationX]# lvs
  LV   VG   Attr   LSize   Origin Snap%  Move Log Copy%
  vm1  vbds owi-a-   3.50G
  vm2  vbds swi-ao   3.50G vm1      0.33
  vm3  vbds swi-ao   3.50G vm1      0.38
  home vol0 -wi-ao 512.00M
  root vol0 -wi-ao   8.00G

A bug in RHEL 5.0 may cause lvs not to report information on all logical volumes or to
otherwise misbehave. To fix this, try rebuilding the LVM cache file:


[root@stationX]# rm -f /etc/lvm/.cache; pvscan; vgscan; lvscan


Sequence 5 Solutions
1.

On Domain-0, create a new 3.5 GB volume, vm4, in the vbds volume group.
[root@stationX]# lvcreate -L 3.5G -n vm4 vbds

2.

Create the directory /var/lib/xen/install. Download the vmlinuz and initrd.img files
from ftp://server1.example.com/pub/images/xen/ to that directory.
[root@stationX]# mkdir /var/lib/xen/install
[root@stationX]# cd /var/lib/xen/install
[root@stationX]# lftpget ftp://server1/pub/images/xen/vmlinuz
[root@stationX]# lftpget ftp://server1/pub/images/xen/initrd.img

3.

Copy /etc/xen/vm1 to /etc/xen/vm4. Edit /etc/xen/vm4 so that:

Its name is vm4

It uses /dev/vbds/vm4 as its disk

Its MAC address is 00:16:3e:00:XX:04

Its uuid is replaced with a new one generated using uuidgen

Save the file and exit the editor.


[root@stationX]# uuidgen
467197b9-3b04-43e3-9163-326d96ad86b9
The /etc/xen/vm4 file should read something like this:
name = "vm4"
memory = "256"
disk = [ 'phy:/dev/vbds/vm4,xvda,w', ]
vif = [ 'mac=00:16:3e:00:XX:04, bridge=xenbr0', ]
vfb = ["type=vnc,vncunused=1"]
uuid = "467197b9-3b04-43e3-9163-326d96ad86
b9"
bootloader="/usr/bin/pygrub"
vcpus=1
on_reboot
= 'restart'
on_crash
= 'restart'
4.

Use xm to start the Kickstart of vm4. Use ftp://server1.example.com/pub/gls/vm1.cfg as your kickstart file. Use kernel= to pass the path to the installer kernel, ramdisk= for the path to the initrd, and extra= to pass Kickstart options. Set bootloader= to an empty string. Set on_reboot=destroy so that the domain uses its new kernel and not the installer kernel after installation. (You may need to shut down one of your other domains first if your system is running low on available RAM.)


[root@stationX]# xm create -c -f /etc/xen/vm4 \
    kernel=/var/lib/xen/install/vmlinuz \
    ramdisk=/var/lib/xen/install/initrd.img \
    extra="ks=ftp://server1.example.com/pub/gls/vm1.cfg noipv6" \
    bootloader="" on_reboot=destroy
The -c option is not necessary, but it may be useful to open the vm4 virtual serial
console for debugging the start of the install process. You may close the console without
interrupting the install by typing Ctrl-]. You may also open the graphical console in virt-manager to follow along.
The noipv6 option to extra= disables configuration of IPv6 networking and may speed your
Kickstart in this classroom.
5.

After the installation finishes, you may elect to restart vm4 to verify that the installation
worked.
[root@stationX]# xm create vm4


Unit 5

xend Configuration and Live Migration

5-1

Objectives
Upon completion of this unit, you should be able to:
Configure the xend service
Implement custom network environments
Understand and implement live migration
5-2

Xen Dom0 User-Space Components


xend
Daemon which provides management access to hypervisor

xenstored
Manages database of information about active domains

xenconsoled and xen-vncfb


Manages access to virtual serial and graphical consoles

5-3

The user-space components of Xen which run in Dom0 provide critical functionality for the virtualization system.
The xend daemon is a Python script which acts as an intermediary between user management utilities such as xm and
virsh and the hypervisor. The xenstored daemon manages the XenStore database, which contains information about
active domains on the node and their drivers. The xenconsoled and xen-vncfb services start with individual domains
and manage access to their virtual serial and graphical consoles respectively.
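A quick sanity check from Dom0 is to confirm that these components are actually present. This is only a minimal sketch; the exact set of xenconsoled and xen-vncfb processes depends on which domains are running and how their consoles are configured:
[root@stationX]# service xend status
[root@stationX]# ps -e | egrep 'xend|xenstored|xenconsoled|xen-vncfb'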


/etc/xen/xend-config.sxp
Configuration file for xend
Local and remote management interfaces
Networking environment setup
Live migration configuration

5-4

The xend daemon is configured through the file /etc/xen/xend-config.sxp. The format of the file is
documented in xend-config.sxp(5) and in the sample file shipped with Red Hat Enterprise Linux 5.
Parameters in the file are specified in S-expression format. This is a format for representing structured data in text
form, much like XML in concept, developed by Ron Rivest in 1997.
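As a purely illustrative sketch of the syntax, each directive is a parenthesized name/value pair, and comments start with a hash mark (the values shown here are examples, not recommendations):
# sample directives from /etc/xen/xend-config.sxp
(xend-unix-server yes)
(network-script network-bridge)
(dom0-min-mem 256)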


Management Interfaces
Unix socket interface is mandatory for tools to work
(xend-unix-server yes)

HTTP remote management interface available


Insecure and unencrypted in Xen 3.0.3
Upcoming version of libvirt supports its own secure interoperable mechanisms

5-5

Local Dom0 utilities communicate with the hypervisor through xend. For this purpose, xend listens on a Unix domain socket, /var/lib/xend/xend-socket by default. This is enabled by the xend-unix-server directive in the xend-config.sxp file, and should not be disabled.
Xen also provides an HTTP-based remote management interface, which can be enabled with the xend-http-server directive. However, in Xen 3.0.3, this mechanism is utterly insecure; there is no authentication of the remote
client at all, and no protection or encryption of the commands sent. Upcoming versions of the libvirt library will
support a number of secure remote management methods which are preferable to the current HTTP-based Xen
mechanism.
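For example, the relevant directives in /etc/xen/xend-config.sxp might read as follows (a sketch only; leave the HTTP interface disabled unless its risks are acceptable):
# local management socket used by xm, virsh, and virt-manager; leave enabled
(xend-unix-server yes)
# insecure HTTP remote management interface; disabled here
(xend-http-server no)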


Creating Local Private Networks


A local bridge restricted to domains on the node
Create custom network script
Start normal bridge and extra private bridge

Edit /etc/xen/xend-config.sxp to use it


(network-script network-custom)

Configure DomU virtual NICs to use the private bridge


5-6

The Red Hat Virtualization environment may be customized by adjusting the scripts used to set up the networking
configuration. Custom network scripts are stored in the /etc/xen/scripts directory; the standard script
supported is /etc/xen/scripts/network-bridge.
One simple example is to set up a second network bridge which is not attached to peth0, but is a private network
used only by local domains on the hardware node. First, a script must be written to bring up the default networking
environment and then add an extra virtual bridge to Dom0. Create the following script as /etc/xen/scripts/network-custom and make it executable:
#!/bin/bash
# network-custom
# custom network script to create a private virtual network
#
# first we must activate the default network bridge, xenbr0
/etc/xen/scripts/network-bridge netdev=eth0 start
# next we create and activate a private bridge, private0
brctl addbr private0
ifconfig private0 up
To use this script, edit /etc/xen/xend-config.sxp and change the directive (network-script
network-bridge) to read (network-script network-custom).
Finally, for each DomU that will use the private bridge, edit its /etc/xen/domain configuration file to add a virtual network interface bound to that bridge:
vif = [ 'mac=00:16:3e:aa:aa:a0, bridge=xenbr0',
'mac=00:16:3e:aa:aa:a1, bridge=private0', ]
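After restarting xend so that the custom script runs, the new bridge can be verified from Dom0. This is only a minimal check; private0 should be listed by brctl alongside xenbr0:
[root@stationX]# service xend restart
[root@stationX]# brctl show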


Physical Network Separation


Normally, only one bridge is set up, associated with peth0
A custom network script can be used to set up separate bridges
Easy for up to four physical NICs/bridges

Basic procedure is the same as for a private bridge


5-7

By default, one xenbr0 bridge is set up to connect the DomU and Dom0 virtual network interfaces to the physical
eth0 interface on the hardware node. If a node has multiple network cards, it is not difficult to modify the network
setup process to configure multiple bridges, one for each physical NIC. Create a /etc/xen/scripts/network-custom script that reads something like the following:
#!/bin/bash
# network-custom
# custom network script to create a bridge for each physical NIC
#
# first define a list of physical ports
PORTS="eth0 eth1 eth2 eth3"
# activate a network bridge for each port
for P in $PORTS ; do
/etc/xen/scripts/network-bridge netdev=$P start
done
This approach will create one bridge for each physical network port listed in the script. The bridges will be named xenbrN, where N matches the number in the physical Ethernet device name (xenbr0 for eth0, xenbr1 for eth1, and so on).


Masquerading the Private Bridge


Uses Dom0 as NAT router for private bridge
Allows IP on Dom0's eth0 to change
Dom0 firewall rules block traffic to domains on xenbr0

Create dummy0 dummy NIC on Dom0


Edit xend-config.sxp to bind dummy0 to xenbr0
Configure IP forwarding and masquerading on Dom0
Configure DomU default route to point to IP of dummy0
5-8


This advanced example modifies the private bridge configuration to allow domains on the bridge to use Dom0 as a
NAT router. One advantage of this mode is that DomU domains on the node are not part of the physical network that
eth0 is on. This is useful on a laptop; the IP address of the eth0 interface can change in Dom0 without requiring changes to the private virtual network used by the DomUs. Firewall rules on Dom0 masquerade traffic for the DomUs and can protect them in this configuration.
First, in Dom0 set up a dummy network interface, dummy0. To create a dummy device, edit /etc/modprobe.conf and add the following lines:
alias dummy0 dummy
options dummy numdummies=1
Next, create a normal /etc/sysconfig/network-scripts/ifcfg-dummy0 configuration file for dummy0,
setting its IP address to something on an appropriate private RFC1918 network.
Edit /etc/xen/xend-config.sxp to bind dummy0 to the xenbr0 bridge instead of the physical eth0
interface. When xend starts, the Dom0 eth0 interface is now the external physical interface, and the Dom0 dummy0
interface is plugged into the xenbr0 hub through the virtual vif0.0 port.
(network-script 'network-bridge bridge=xenbr0 netdev=dummy0')
Now turn Dom0 into a router by setting the sysctl net.ipv4.ip_forward=1 in /etc/sysctl.conf. Set a
firewall rule on Dom0 to masquerade traffic from the private bridge to the Internet:
/sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
service iptables save
Finally, ensure that the IP addresses used by your DomU domains on the xenbr0 bridge are on the same private
RFC1918 network as the one you used for dummy0, and that they are using the IP address of dummy0 for their default
router.
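A minimal /etc/sysconfig/network-scripts/ifcfg-dummy0 might look like the following sketch; the 192.168.100.0/24 addresses are arbitrary RFC1918 examples and should be adapted to your environment:
DEVICE=dummy0
BOOTPROTO=none
IPADDR=192.168.100.1
NETMASK=255.255.255.0
ONBOOT=yes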


Virtual Machine Migration


Migration
A domain may be moved from one computer to another

Live migration
A domain may be migrated while still running

Requirements for domain migration include:


Same shared storage: Domain's VBDs must be visible to both nodes
Same network: Both nodes must be on the same link and subnet
Same architecture: Both nodes must have the same CPU architecture

5-9

Migration is the ability to move a running domain from one hardware node to another. Normal migration suspends the
domain, copies its memory state to the other node, then resumes the domain on the new node. Live migration allows
the domain to continue running throughout the migration process, typically only suspending the machine at the end
of the process to complete the memory transfer and to resume the domain on the new node. When live migration is used, remote clients should not even notice that the domain has changed computers.
Red Hat Virtualization currently requires that several conditions must be met in order to use migration to relocate a
domain:

Migration only transfers memory state, not file system data. Both hardware nodes must have access to the VBDs
through some shared storage device.

The MAC address and IP addresses of the virtual network interfaces travel with the relocated domain. Since the
guest operating system does not know the domain has migrated, the migration must be performed between two
hardware nodes on the same network link, with access to the same broadcast domain, so that network traffic can
get properly redirected.

The processor architecture of the two hardware nodes must be the same. Migration from a 64-bit x86 node to a
32-bit x86 node will not work. In some cases, if the processor supports certain features on one machine which
the other does not even though the basic architecture is the same, there may be issues as well. If the guest can see
physical hardware, using advanced methods such as PCI pass-through, migration will not work because some state
is in the hardware. Currently, fully virtualized guests can not be migrated due to related issues, but future updates
to RHEL 5 may address that particular problem.


Shared Storage
Shared storage devices allow multiple machines to see the
same hard disk, RAID array, or logical volume
iSCSI over Gigabit Ethernet
Cheaper than competing SAN solutions such as Fibre Channel
Software for host initiator included with RHEL 5
Software for target as Technology Preview in RHEL 5.1 beta

5-10

Xen-based migration only copies the current state of the domain's main memory from the old computer to the new
computer. It does not copy the contents of the file systems on the domain's VBDs to the new computer. Therefore, both hosts must have a shared block device that lets them see the partition, disk, logical volume, or file on a cluster-aware file system being used for the domain's virtual block device, so that it is available after migration. (Storing a domain's disk image on an NFS-mounted file system is generally not a good idea.)
One approach is to use an external, direct-attached SCSI array supporting multiple-bus, single-initiator operation. A
more common approach is to use Storage Area Network technologies such as Fibre Channel or iSCSI to make disks in
a storage array available to multiple machines at the same time.
The shared storage technology that we will use in the live migration lab for this course is iSCSI. The iSCSI protocol,
specified by RFC 3720, transports SCSI commands over TCP/IP. It is usually paired with Gigabit Ethernet or 10
Gigabit Ethernet for maximum performance. Red Hat Enterprise Linux 5 ships with a software-based iSCSI initiator
which can access disks advertised by a network-attached iSCSI target on TCP port 3260. Commercial hardware-based
iSCSI targets are available; as this course goes to press, a package providing support for a Linux server to act as a
software-based iSCSI target is shipping as a Technology Preview in the beta of Red Hat Enterprise Linux Advanced
Platform 5.1.
iSCSI-based hard drives generally appear just like SCSI-based hard drives from the point of view of the operating
system, using names of the form /dev/sdX.


Shared Storage and iSCSI Considerations


Migration requires consistent file or device names
Using a logical volume on shared storage solves this
Otherwise work with udev, or use a file on a cluster-aware file system

Make sure VBD is only used by one domain at a time


Otherwise data corruption of the virtual disk is likely

iSCSI is a cleartext protocol


Put iSCSI traffic on an isolated network
Consider using mutual authentication and IPsec if available

5-11

There are a number of considerations that should be kept in mind when setting up iSCSI-based shared storage for use
with domain migration. Some of these considerations are true no matter what shared storage technology is used, and
some are iSCSI-specific.
The name of the file or block device used for the VBD on the new hardware node must be the same as the name on the old node. This may already be true by default; if not, steps must be taken to ensure that it is. One approach is to configure udev to rename the block device on both nodes appropriately. Another approach is to use a logical volume on the shared block device as the VBD, since the volume's name will be the same in both places. A third
approach is to format the shared block device with a cluster-aware file system such as GFS, mount it on all Dom0
domains, and to use a file on that file system as the VBD. Most of these solutions involve configurations beyond the
scope of this class. Red Hat offers an advanced course on these topics, RH436: Red Hat Enterprise Clustering and
Storage Management. Interested students can also investigate the free manuals Configuring and Managing a Red
Hat Cluster and the LVM Administrator's Guide, both available from the http://www.redhat.com/docs/
manuals/enterprise/ website.
It is also critical to ensure that only one domain at a time is using a particular virtual block device. If more than one
running domain shares a VBD, or if the same domain is somehow running on multiple hardware nodes at the same
time, file system corruption is almost certain to occur as both domains try to change it at the same time.
iSCSI is a cleartext protocol. Data passed between the initiator and target is not encrypted, and often is not
authenticated by default. It is a good practice to dedicate an isolated network to iSCSI traffic. In such a setup, the initiator has two NICs: one for normal network communication and one for iSCSI only. The only hosts on the iSCSI network should
be those which need access to shared targets. If available, consider enabling mutual initiator/target authentication, and
possibly IPsec, to improve security at the cost of some performance, especially if iSCSI traffic can not be isolated to a
secure network.


Enabling Live Migration


Edit /etc/xen/xend-config.sxp on Dom0
Set (xend-relocation-server yes)
Set xend-relocation-address to IP of local interface receiving migrations
Set xend-relocation-hosts-allow to regexps matching IPs or
hostnames of senders

Restart the xend service


5-12

To enable migration, on Dom0 of the machine receiving the migration, open /etc/xen/xend-config.sxp in
a text editor. Enable the xend-relocation-server directive. By default, xend will listen on port 8002/tcp on
all Dom0 network interfaces. To restrict this to one particular IP address, set xend-relocation-address. To
change the port used for relocation, set xend-relocation-port appropriately.
(xend-relocation-server yes)
(xend-relocation-address 10.0.0.2)
(xend-relocation-port 8002)
To add some semblance of security, xend-relocation-hosts-allow should be set to a space-separated list
of Python (Perl-style) regular expressions matching hostnames and IP addresses permitted to relocate their domains
to this host. If it is not set, which is the default, all hosts which can communicate with the relocation address and port
may migrate their domains to this host!
(xend-relocation-hosts-allow ^avon\.example\.com$ ^10\.0\.0\.\d{1,3}$)

After making all changes, restart the xend service.


Performing Domain Migration


xm migrate domain hostname
Migrate domain to hostname
Suspend domain, transfer to new host, resume on new host

xm migrate domain hostname -l


Live migrate domain to hostname
Minimizes down time, transfer may take longer

-r Mb option used to rate limit data transfer


5-13

The xm migrate command is used to migrate a domain. With no options, it suspends the domain, then transfers the
memory state to the destination host, then resumes domain execution on the destination.
With the -l option, a live migration is performed. The memory transfer begins while the system is live, then regions
of memory that have changed since the transfer started are re-transferred, and this is repeated a few times until finally
the domain is suspended, remaining state is transferred, and the domain resumes on the target system. The advantage
of live migration is that the domain is generally only down for a few hundred milliseconds as the transfer completes;
the disadvantage is that if large regions of main memory are changing, large parts of the migration may need to be
repeated several times, delaying completion of the migration process.
The -r Mb option allows a maximum transfer rate to be specified to control the network bandwidth consumed by the
migration process.
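For example, a normal migration and a rate-limited live migration of a domain named vm1 might look like the following. These are illustrative commands only; node2.example.com is a placeholder for the receiving hardware node:
[root@stationX]# xm migrate vm1 node2.example.com
[root@stationX]# xm migrate -l -r 100 vm1 node2.example.com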


Security Considerations of Migration


Current implementation is insecure
Domain's memory is transmitted unencrypted
Relocation server does not authenticate client

Do not enable migration if not needed


Use isolated network for relocation if possible
5-14

The implementation of the migration mechanism in Xen 3.0.3 is not secure. Therefore, if the migration feature will be
used, it should be carefully configured; if there is no plan to use it, it should remain disabled. Migration is disabled by
default.
Migration network traffic is not encrypted in any way. An attacker with appropriate tools can capture or tamper with
the contents of main memory of the migrating domain as it is transferred.
Likewise, the relocation server accepting the domain does not authenticate the client sending the domain in any way.
This means that any client that can transmit to the relocation address and port can start a domain with an arbitrary
configuration on the relocation server. The xend-relocation-hosts-allow directive can help to control this
to some extent; remember that it takes a space-separated list of Perl-style regular expressions matching host names
and IP addresses.
The relocation server can be bound to listen to only one IP address on the system, and can be configured to use any
arbitrary TCP port. One useful approach to make migrations more secure is to have two networks; a public network
or VLAN, and an isolated migration network or VLAN. Dom0 on the hardware nodes has an IP address on both
networks; the DomUs only have virtual interfaces attached to xenbr0, which is bridged with the interface on the
public network. The relocation address on the hardware nodes should be bound to the IP of the interface on the
isolated migration network, so all migration traffic stays off the public network.
Note the similarity to an iSCSI+GbE storage network configuration. Normally, Dom0 is the only domain that needs
to have interfaces on the isolated migration network and the isolated storage network, since the iSCSI devices seen by
Dom0 can be turned into VBDs using the same mechanism as normal physical block devices. If a DomU uses iSCSI
devices directly instead, it will also need a virtual interface on the storage network. DomUs generally should not have
an interface on the migration network.
Migration should only be performed over secure networks.


End of Unit 5
Questions and Answers
Summary
Xen user-space components
Format of /etc/xen/xend-config.sxp
Management interfaces
Custom networking environment
Live migration

5-15

Lab 5
Live Migration
Goal:

To configure a virtual host installed on a shared storage device for live migration between hardware nodes.

System Setup:

In this lab you will need to work with a partner who will configure the other
hardware node involved in the live migration process.


Sequence 1: Configuring an iSCSI target for shared storage


Scenario:

In order to support live migration, a virtual machine must have its virtual
disks available to any hardware node it might run on. This generally means
some form of "shared storage" device is required. There are a variety of SAN
technologies to implement shared storage, including Fibre Channel, iSCSI,
GNBD, and so on.
In this sequence, you will configure a software-based iSCSI target using
software from http://iscsitarget.sourceforge.net. This
software does not ship with Red Hat Enterprise Linux, although a successor to
it is being investigated as a Technology Preview in upcoming versions of Red
Hat Enterprise Linux 5. The goal of this exercise is not to learn this particular implementation of the iSCSI target service, but simply to set up a shared storage device for the live migration lab.
An iSCSI target uses the TCP/IP network connection as a replacement for the
SCSI bus, to serve access to a hard drive to one or more clients configured
with iSCSI initiators. This will allow you to serve the same block device to
two systems at the same time, and we can use that block device to act as the
virtual disk for a domain that can be migrated from host to host.
You and your partner will work together to configure the iSCSI target (shared
storage) on one of your machines. After configuring the target, you will each
return to your own machine to configure the iSCSI initiators.

Deliverable:

A working iSCSI software target you and your partner can use for the live
migration exercise.

Lab Setup:

Note: In this lab stationX.example.com is always the machine running the iSCSI target, and stationY.example.com is always the machine not running the iSCSI target. Anywhere an X or Y appears in the instructions refers to the station number of the appropriate machine.

System Setup:

This lab assumes a virtual machine has already been created on stationY named
vm1, and that the volume group vbds still has at least 1 GB of remaining
space.

Instructions:
1.

On the machine designated as your target machine (stationX), download and install the
iSCSI target RPMs provided by the instructor's YUM repository.

2.

On stationX, use depmod to probe all modules for dependencies.

3.

On stationX, create a 1 GB snapshot of /dev/vbds/vm1 named /dev/vbds/iscsi.


4.

Delete the entire contents of /etc/ietd.conf and replace with the following:
Target iqn.2007-07.com.example:disk1
Lun 01 Path=/dev/vbds/iscsi,Type=fileio
Make certain that the second line is indented with a Tab! This will configure our /dev/vbds/iscsi volume to be exported by our target to one or more initiators.

5.

We can limit which hosts have access to our iSCSI target by using a combination of two
files. Add a line to /etc/initiators.allow to permit your machine and your
partner's machine to access the shared storage:
iqn.2007-07.com.example:disk1 192.168.0.X, 192.168.0.Y
Also add a line to the file /etc/initiators.deny to deny access to any host on the
classroom network not listed in /etc/initiators.allow:
iqn.2007-07.com.example:disk1 192.168.0.0/24

6.

Use the iscsi-target init script to start the iSCSI target service and configure it to restart at
boot.

7.

Verify the iSCSI target is being exported properly:


[root@stationX]# cat /proc/net/iet/volume
tid:1 name:iqn.2007-07.com.example:disk1
lun:1 state:0 iotype:fileio path:/dev/vbds/iscsi


Sequence 2: Configuring an iSCSI initiator


Scenario:

In this sequence, you will set up a /dev/shared symlink in Domain-0 that points to an iSCSI disk device shared with your partner.

System Setup:

It is assumed that you have a working iSCSI target from the previous exercise.
Both you and your partner should follow the configuration steps in this
exercise on both stationX and stationY so that you are sharing the same storage
device being delivered by the iSCSI target.

Instructions:
1.

Configure a udev rule to ensure that both nodes have a consistent device name, /dev/shared, for the iSCSI target.

2.

Install the iscsi-initiator-utils package.

3.

Create an alias name for the initiator, using your machine's hostname for the alias.

4.

Use the iscsi init script to start the iSCSI initiator. Configure it to start on boot.
Check the command output and /var/log/messages for any errors and correct them
before continuing with the lab.

5.

Use iscsiadm to discover any targets being offered to your initiator by the target.

6.

Use iscsiadm to "login" to the iSCSI target.

7.

Use fdisk to see the newly available device. It should appear as a roughly 3.5 GB disk carrying vm1's existing partition table, since a snapshot presents the full size of its origin volume even though only 1 GB was allocated for it.

8.

Verify that the /dev/shared symlink is present.


Sequence 3: Live Migration


Scenario:

In this sequence, you will set up a new virtual machine which uses your iSCSI
disk and live migrate it from stationY to stationX.

System Setup:

Two classroom machines each configured with a /dev/shared symlink that points to a shared iSCSI disk device.

Instructions:
1.

On stationY, copy /etc/xen/vm1 to /etc/xen/vm5. Edit /etc/xen/vm5 so that:

Its name is vm5

It uses /dev/shared as its disk

Its MAC address is 00:16:3e:00:YY:05

Its uuid is replaced with a new one generated using uuidgen

Save the file and exit the editor.


2.

Start vm5 on stationY.

3.

Edit the xend configuration files to enable the relocation server on both machines.
Configure them to allow relocations to and from each other but not from other hosts in the
classroom.

4.

Restart the xend service so the changes take effect.

5.

From stationX, use ssh to log in to the virtual machine, vm5.stationY.example.com. Run a
command on vm5 that produces continuous output, such as watch -n1 date.

6.

Live migrate the vm5 virtual machine to the other machine. Monitor the ssh session while
you do this.

7.

Assuming /etc/xen/xend-config.sxp was edited on both machines, migrate vm5 back to the original machine.


Sequence 1 Solutions
1.

On the machine designated as your target machine (stationX), download and install the
iscsitarget and iscsitarget-kernel packages provided by the instructor's YUM
repository.
[root@stationX]# yum install iscsitarget iscsitarget-kernel

2.

On stationX, use depmod to probe all modules for dependencies.


[root@stationX]# depmod -a

3.

On stationX, create a 1 GB snapshot of /dev/vbds/vm1 named /dev/vbds/iscsi.
[root@stationX]# lvcreate -s -L 1G -n iscsi /dev/vbds/vm1

4.

Delete the entire contents of /etc/ietd.conf and replace with the following:
Target iqn.2007-07.com.example:disk1
Lun 01 Path=/dev/vbds/iscsi,Type=fileio
Make certain that the second line is indented with a Tab! This will configure our /dev/vbds/iscsi volume to be exported by our target to one or more initiators.

5.

We can limit which hosts have access to our iSCSI target by using a combination of two
files. Add a line to /etc/initiators.allow to permit your machine and your
partner's machine to access the shared storage:
iqn.2007-07.com.example:disk1 192.168.0.X, 192.168.0.Y
Also add a line to the file /etc/initiators.deny to deny access to any host on the
classroom network not listed in /etc/initiators.allow:
iqn.2007-07.com.example:disk1 192.168.0.0/24

6.

Use the iscsi-target init script to start the iSCSI target service and configure it to restart at
boot.
[root@stationX]# service iscsi-target start
[root@stationX]# chkconfig iscsi-target on

7.

Verify the iSCSI target is being exported properly:


[root@stationX]# cat /proc/net/iet/volume
tid:1 name:iqn.2007-07.com.example:disk1
lun:1 state:0 iotype:fileio path:/dev/vbds/iscsi


Sequence 2 Solutions
1.

Configure a udev rule to ensure that both nodes have a consistent device name, /dev/shared, for the iSCSI target.
Create a file named /etc/udev/rules.d/75-rh184.rules containing the following text, all on one line:
BUS=="scsi", KERNEL=="sd*[!0-9]", SYSFS{vendor}=="IET*", SYSFS{model}=="VIRTUAL-DISK*", SYMLINK+="shared"

2.

Install the iscsi-initiator-utils package.


# yum -y install iscsi-initiator-utils

3.

Create an alias name for the initiator, using your machine's hostname for the alias.
[root@stationX]# echo "InitiatorAlias=stationX" >>
/etc/iscsi/initatorname.iscsi
[root@stationY]# echo "InitiatorAlias=stationY" >>
/etc/iscsi/initatorname.iscsi

4.

Use the iscsi init script to start the iSCSI initiator. Configure it to start on boot.
# service iscsi start
# chkconfig iscsi on
Check the command output and /var/log/messages for any errors and correct them
before continuing with the lab.

5.

Use iscsiadm to discover any targets being offered to your initiator by the target.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.X:3260
The output of the iscsiadm discovery command should show the target volume that is
available to the initiator in the form:
192.168.0.X:3260,1 iqn.2007-07.com.example:disk1

6.

Use iscsiadm to "login" to the iSCSI target.


# iscsiadm -m node -T iqn.2007-07.com.example:disk1 -p
192.168.0.X:3260 -l

7.

Use fdisk to see the newly available device. It should appear as a roughly 3.5 GB disk carrying vm1's existing partition table, since a snapshot presents the full size of its origin volume even though only 1 GB was allocated for it.
The example here assumes that your local system disk is /dev/sda and that the iSCSI
disk appears as /dev/sdb.
# fdisk -l


Disk /dev/sda: 100.0 GB, 100030242816 bytes
255 heads, 63 sectors/track, 12161 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        1160     9213277+  8e  Linux LVM
/dev/sda3            1161        1225      522112+  82  Linux swap
/dev/sda4            1226       12161    87843420   8e  Linux LVM

Disk /dev/sdb: 3758 MB, 3758096384 bytes
255 heads, 63 sectors/track, 456 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   83  Linux
/dev/sdb2              14         456     3558397+  8e  Linux LVM

Note: DO NOT do so now, but should you ever wish to disconnect from the iSCSI target,
first discontinue use of the target, then execute the command:
# iscsiadm -m node -T iqn.2007-07.com.example:disk1 -p
192.168.0.X:3260 -u
Restarting the iscsi service will automatically re-discover and access the target device again.
Should you wish to more permanently disconnect from the target (requiring a manual rediscover to connect again), execute the following command after logging out of the target:
# iscsiadm -m node -o delete -T iqn.2007-07.com.example:disk1 -p
192.168.0.X:3260
8.

Verify that the /dev/shared symlink is present.


# ls -l /dev/shared
lrwxrwxrwx 1 root root 3 Aug 15 09:55 /dev/shared -> sdb


Sequence 3 Solutions
1.

On stationY, copy /etc/xen/vm1 to /etc/xen/vm5. Edit /etc/xen/vm5 so that:

Its name is vm5

It uses /dev/shared as its disk

Its MAC address is 00:16:3e:00:YY:05

Its uuid is replaced with a new one generated using uuidgen

Save the file and exit the editor.


[root@stationY]# uuidgen
2ebf56c4-950f-4552-9023-b2ffb12b9411
The /etc/xen/vm5 file should read something like this:
name = "vm5"
memory = "256"
disk = [ 'phy:/dev/shared,xvda,w', ]
vif = [ 'mac=00:16:3e:00:YY:05, bridge=xenbr0', ]
vfb = ["type=vnc,vncunused=1"]
uuid = "2ebf56c4-950f-4552-9023-b2ffb12b9411"
bootloader="/usr/bin/pygrub"
vcpus=1
on_reboot = 'restart'
on_crash = 'restart'
Note: The configuration file is only used during domain creation. As the vm5 domain will
already have been created/started when we begin the migration, there is no need to copy this
configuration file to the machine it is migrating to. If, however, there is a need to start or
restart the virtual machine on the remote machine, then its configuration file may need to be
copied over.
2.

Start vm5 on stationY.


[root@stationY]# xm create vm5

3.

Edit the xend configuration files to enable the relocation server on both machines.
Configure them to allow relocations to and from each other but not from other hosts in the
classroom.
Edit /etc/xen/xend-config.sxp. Enable the relocation server by setting the
directive:
(xend-relocation-server yes)


Next, specify the address xend should listen on for relocation requests (if blank or empty,
all interfaces are used), by uncommenting the line:
(xend-relocation-address '')
Finally, specify the host names of those machines allowed to connect to and interact with
the relocation server. It is critical to only allow trusted hosts to migrate domains to a node
for security reasons. Set the directive:
(xend-relocation-hosts-allow '^localhost$
^localhost\\.localdomain$ 192\\.168\\.0\\.X 192\\.168\\.0\\.Y')
Save the file and exit.
4.

Restart the xend service so the changes take effect.


# service xend restart

5.

Use ssh to log in to the virtual machine, vm5.stationY.example.com. Run a command on vm5 that produces continuous output, such as watch -n1 date.
[root@stationX]# ssh root@vm5.stationY.example.com
[root@vm5]# watch -n1 date

6.

Live migrate the vm5 virtual machine to the other machine. Monitor the ssh session while
you do this.
[root@stationY]# xm migrate -l vm5 stationX.example.com

7.

Assuming /etc/xen/xend-config.sxp was edited on both machines, migrate vm5 back to the original machine.
[root@stationX]# xm migrate -l vm5 stationY.example.com


Unit 6

Troubleshooting Virtualization

6-1

Objectives
Upon completion of this unit, you should be able to:
Collect system and network information
Boot DomU domains into recovery mode
Access the contents of disk image files
Collect DomU crash dumps
Diagnose common configuration issues
6-2

Troubleshooting in a Virtual Machine


Commands may behave differently in a virtual machine
Run from Dom0 if you want to see real hardware
Run from DomU to see virtual hardware, expect weird answers
lspci, dmidecode, hwclock, etc.

Kernel may behave differently in a virtual machine


Dom0 and paravirtual DomU use kernel-xen
Different CPU scheduling, memory handling, drivers, etc.

6-3

When troubleshooting in a virtual machine, it is important to remember that the environment presented to the guest
operating system is different from the normal environment on that hardware, even in Dom0. Generally speaking, if
information about physical hardware is important, check from Dom0. Remember, the DomU environment does not
have direct access to the actual hardware on the box, only the virtual environment presented by the hypervisor and
Dom0.
For example, the dmidecode command can get information from the system BIOS, but only from Dom0. In a
paravirtual DomU nothing is returned. In a fully virtual DomU information about a virtual BIOS is returned. Likewise,
lspci returns real information in Dom0, no information in a paravirtual DomU, and information about the emulated
device model hardware in a fully virtual DomU.
The kernel can also behave differently under virtualization than the standard kernels for the distribution. Both Dom0
and paravirtualized DomUs use the kernel-xen kernel in slightly different modes. One difference between kernel-xen
and the regular kernel is that kernel-xen does not distinguish between hyperthreads, CPU cores on the same socket, and CPU
cores on different sockets when scheduling processes. Virtual machines may be using paravirtualized drivers rather
than drivers which reflect the hardware on the system. Certain profiling tools may work differently or not at all in
virtual machines. The contents of /proc may be different than expected. These differences should be kept in mind
when debugging issues.
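As a quick illustration (using the vm2 guest from the labs as an example host name), compare the same commands in
Dom0 and in a paravirtualized DomU:
[root@stationX]# dmidecode | head
[root@stationX]# lspci
[root@vm2]# dmidecode
[root@vm2]# lspci
In Dom0 the first two commands report the real BIOS and PCI devices; in the paravirtualized guest they return nothing
useful.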


Networking Environment
When is eth0 not eth0?
Remember what changes in the standard networking setup
In Dom0: ethtool peth0
Beware ethtool eth0 in a domain

Use brctl on Dom0 to investigate virtual bridges


brctl show displays information about all bridges
brctl help

6-4

Remember that the virtual networking environment on a machine using Virtualization is not normal. By default, the
virtual eth0 interface on all domains, including Dom0, is connected to a vifD.0 port on a virtual bridge named
xenbr0 in Dom0. The real eth0 Ethernet interface is renamed peth0, is only visible from Dom0, and serves as the
uplink from the virtual bridge to the rest of the LAN.
Running utilities like ethtool eth0 under the standard network environment on Dom0 only displays link status,
because eth0 is the virtual NIC. To check speed and duplex settings for the physical Ethernet card, use ethtool peth0
instead. In a DomU, the information reported by ethtool will also reflect the virtual hardware.
It is also important to remember that a machine may not be using the standard networking configuration if it has been
modified by another system administrator. The brctl show command will display information about all virtual bridges
on the system and what interfaces are bound to those bridges as virtual bridge ports. Remember, it is easy to view and
adjust information about virtual machine MAC addresses by looking in the /etc/xen/domain configuration files.
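For example, on a machine with the default single-bridge setup, the output looks something like the following sketch
(the bridge ID and vif numbers are illustrative and will differ on your system):
[root@stationX]# brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.feffffffffff       no              peth0
                                                        vif0.0
                                                        vif2.0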


Virtual Machine Information


virsh dumpxml domain
XML description of current domain configuration

virsh nodeinfo
Basic information about system hardware
Xen-native xm info provides somewhat more detail

6-5

An active domain does not necessarily have a local configuration file which accurately describes its current
configuration. Domains can be started without configuration files or may have migrated from another system.
virsh dumpxml domain dumps the current configuration of an active domain to the terminal as a standardized
libvirt XML description. There is a Xen-native command that can dump the current configuration as an S-expression
description, xm list domain --long, but the format of the output is internal to Xen and not guaranteed to be stable
across revisions.
Basic commands to get information about the system hardware and the Xen hypervisor are also available. virsh
nodeinfo is the standard command, xm info the Xen-native command.
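For example, to save a domain's current configuration for inspection and to review host capabilities (vm2 is used
here only as an illustration):
[root@stationX]# virsh dumpxml vm2 > /tmp/vm2.xml
[root@stationX]# virsh nodeinfo
[root@stationX]# xm info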


Log Information
xm dmesg
Displays the hypervisor message buffer
Normal kernel dmesg output can also be useful

/var/log/xen/xend.log
Main event log collected by xend

/var/log/xen/xend-debug.log
Event errors from xend and other user-space Xen daemons

6-6

The Xen hypervisor maintains a log buffer very similar to the Linux kernel's message buffer. The native xm dmesg
command will display the contents of this message buffer, which is kept separate from the Dom0 kernel message
buffer displayed by the normal dmesg command.
A number of log files are maintained in the /var/log/xen directory. The xend.log file is the main log file
for the xend daemon. It collects information about domain operations and errors, and is normally the first log file to
check. The xend-debug.log file contains records of event errors from xend and the other userspace daemons such
as xenstored. Some of these errors may be in the form of Python tracebacks, since many of these daemons are written
in Python. The xend-hotplug.log file logs information about hotplug events as block devices and network
interfaces are added to and removed from domains.
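A reasonable first pass when investigating a problem is therefore something like:
[root@stationX]# xm dmesg | tail
[root@stationX]# tail -n 50 /var/log/xen/xend.log
[root@stationX]# less /var/log/xen/xend-debug.log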


Recovery Runlevels and PyGRUB


DomU uses pyGRUB as a bootloader
GRUB work-alike for virtual machines written in Python

pyGRUB menu displayed on virtual serial console


Use xm create -c domain to open console immediately
Pass emergency, S, or 1 as kernel option normally

Prompt will appear on /dev/console


Graphical console by default, can change with kernel options

6-7

The pyGRUB bootloader was developed by Red Hat to simplify management of the boot process for virtual machines.
pyGRUB is a GRUB work-alike written in Python, and the basic interface is intended to be similar to the normal
bootloader used by Red Hat Enterprise Linux.
pyGRUB displays its menu on the virtual serial console only, not the graphical console. In order to access it when
the domain is started, use xm create -c domain to open the console in your terminal. A normal-looking GRUB
menu should appear, and you can select alternative kernels or enter menu editing mode and pass options to the
kernel through this interface. This includes specifying a runlevel to init, such as the standard recovery runlevels
emergency, S (or single), and 1.
While pyGRUB sends its messages to the virtual serial console, (/dev/xvc0 for paravirtualized domains, /dev/
ttyS0 for HVM domains), init always sends its boot messages to the system console, which is /dev/console. By
default, the kernel makes the virtual graphical console the system console, just like a normal x86-based workstation.
Therefore, in the default configuration, once a recovery runlevel is passed to init on the kernel command line through
the pyGRUB interface, you need to switch to the virtual graphical console for the domain to see the boot messages
from init and the recovery prompt.
To force init to output its boot messages and the recovery prompt on the virtual serial console, the kernel must be
configured to use it and not the graphical console as the system console. To do this, the last console= directive on
the kernel command line used to boot the domain must be the domain's virtual serial console. Use console=xvc0
for paravirtualized domains, console=ttyS0,9600n8 for HVM domains.
After this change, since the virtual graphical console is no longer the system console, init will stop printing its boot
messages and recovery prompts to the graphical console; only the virtual serial console will be used. Normal login
prompts will still appear on the graphical console's virtual terminals once boot completes, exactly as specified by the
default /etc/inittab file.
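For example, a kernel line edited in the pyGRUB menu to boot a paravirtualized domain into single-user mode with its
system console on the virtual serial console might look like this (the kernel version and root device are illustrative):
kernel /vmlinuz-2.6.18-8.el5xen ro root=/dev/VolGroup00/LogVol00 console=xvc0 single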


Accessing Disk Images from Dom0


kpartx -av VBD-file creates devices in /dev/mapper
blkid and lvs to find file systems
vgchange -ay volgroup to activate LVs

kpartx -dv VBD-file removes devices in /dev/mapper


Unmount file systems and vgchange -an volgroup first!

If DomU is unbootable, use Dom0 as a rescue environment


If image contains a volume group whose name is already in use
in Dom0, that volume group will not be usable
6-8

If a DomU has been rendered unbootable, Dom0 can be used as a rescue environment. The kpartx utility reads a
disk image and creates temporary block devices in the /dev/mapper directory for each "partition" on the image.
The blkid utility can be used to verify their file system labels and they can be mounted as if they were normal disk
partitions.
If the image contains an LVM physical volume, it will be detected and its volume group added into the Dom0
environment. The logical volumes in the volume group need to be made active before they are mounted; vgchange -ay
volgroup accomplishes this. The lvs command can identify what logical volumes are available.
When done with the file systems on the image, unmount them. If logical volumes were in use, deactivate the volume
group with vgchange -an volgroup. Finally, remove the temporary devices with kpartx -dv VBD-file.
One ugly complication arises if the image file contains a volume group which happens to have the same name as a
volume group already active in Dom0. The LVM command-line utilities currently have trouble determining how to
access the logical volumes in the new volume group. To avoid this problem, it is best to ensure that names used for
volume groups in Dom0 are never used for volume groups in DomUs.
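A typical rescue session, sketched end to end (the device and volume group names here are examples only, following
the naming used in the labs):
[root@stationX]# kpartx -av /dev/vbds/guest
[root@stationX]# blkid /dev/mapper/guestp1
[root@stationX]# vgchange -ay GuestVG
[root@stationX]# lvs GuestVG
[root@stationX]# mount /dev/GuestVG/LogVol00 /mnt
(inspect or repair files under /mnt)
[root@stationX]# umount /mnt
[root@stationX]# vgchange -an GuestVG
[root@stationX]# kpartx -dv /dev/vbds/guest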


Collecting Crash Dumps


virsh dump domain /var/lib/xen/dump/core-file
Dumps image of domain's memory to disk
Xen-native equivalent xm dump-core domain

Paravirtualized domains can automatically crash dump


(enable-dump yes) in /etc/xen/xend-config.sxp
Fully virtualized domains must be manually dumped

Dom0 currently can not be crash dumped


6-9

To collect a crash dump of a DomU domain from Dom0, use the command virsh dump domain /var/lib/xen/
dump/core-file. The equivalent Xen-native command is xm dump-core domain. By default, the core file
dumped by Xen has a name of the form YYYY-MMDD-hhmm.ss-domname.domid.core.
These crash dumps can then be analyzed by tools installed in Dom0. One very useful utility is crash. Using crash
to debug a system running kernel-xen requires the matching version of the kernel-xen-debuginfo package. A yum
repository defined in /etc/yum.repos.d/rhel-debuginfo.repo can be enabled to allow yum to install the
relevant packages, or the packages can be freely downloaded from ftp.redhat.com. The basic syntax to start the utility
is crash /usr/lib/debug/lib/modules/kernel-version/vmlinux core-file. A whitepaper on the use of crash is
available at http://people.redhat.com/anderson/crash_whitepaper/.
Currently, it is not possible to use standard mechanisms such as kdump, netdump, or diskdumputils to crash dump
kernel-xen in Dom0. This limitation may be remedied in a later revision of RHEL 5.


SELinux and Virtualization


Virtualization components are confined by SELinux
xen_image_t for file-based images (in /var/lib/xen/images)
xend_var_log_t for log files (in /var/log/xen)
xend_var_lib_t for save files and core dumps (in /var/lib/xen)
etc_t and bin_t for configuration files and scripts (in /etc/xen)

Normal troubleshooting procedures apply


6-10

SELinux should be running in Enforcing mode on Dom0 to ensure the security of that critical domain. The
Virtualization software is confined by the default policy, and may refuse to function if files are mislabeled or are in
directories which have labels that the appropriate programs are not permitted to access.
Most of the Xen userspace daemons run with the SELinux type xend_t; xenconsoled runs as xenconsoled_t
and xenstored runs as xenstored_t instead. File-based images should be labeled with the type xen_image_t,
and are normally kept in /var/lib/xen/images. Log files should be labeled with the type xend_var_log_t
and are normally kept in the directory /var/log/xen. State files and core dumps are normally labeled
xend_var_lib_t and are saved to /var/lib/xen/save and /var/lib/xen/dump respectively.
Configuration files and network setup scripts are kept in various locations under /etc/xen and are labeled with
etc_t and bin_t respectively.
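For example, if an image file was copied into /var/lib/xen/images from somewhere else, its context can be
checked and reset to the policy default (the file name is illustrative):
[root@stationX]# ls -Z /var/lib/xen/images/vm9.img
[root@stationX]# restorecon -v /var/lib/xen/images/vm9.img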


Common Issues
xend will not start
Is Dom0 running kernel-xen?
Does the CPU support the PAE feature?

Dom0 always switches back to regular kernel on update


Check settings in /etc/sysconfig/kernel

Domain refuses to start


Ensure sufficient memory is available

6-11

This is a quick review of some basic configuration issues.


Symptom: the xm command returns the error "Error connecting to xend: connection refused". Verify that xend is running. If it is,
verify that the Unix domain socket interface is still enabled in the configuration file.
Symptom: xend fails to start at boot, or when service xend start is run the message "Could not obtain handle on
privileged command interface" is displayed. Verify that Dom0 has been booted using the kernel-xen kernel. The
uname -r command is useful for this. If it has, verify that the system's CPU supports the PAE feature. First-generation
Pentium M processors are known not to support PAE.
Symptom: Dom0 seems to prefer to switch its default kernel back to the kernel package rather than kernel-xen
when kernel packages are updated. Verify that the correct preference for kernel-xen has been specified in /etc/
sysconfig/kernel.
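The relevant lines in that file should look similar to the following sketch:
UPDATEDEFAULT=yes
DEFAULTKERNEL=kernel-xen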
Symptom: not all domains configured to start automatically at boot come up. Verify sufficient memory is available on
the system to run all of them. Xen does not permit overcommitment of memory to domains. Check the dom0-min-mem
directive in /etc/xen/xend-config.sxp to determine Dom0's minimum amount of memory and verify that
there is still memory free for new domains.
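The hypervisor's view of total and free memory can be checked from Dom0, for example:
[root@stationX]# xm info | grep -E '(total|free)_memory'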
For additional tips, check out the Red Hat Knowledgebase articles on Virtualization at
http://kbase.redhat.com/faq/


End of Unit 6
Questions and Answers
Summary
Differences in virtualized environment
pyGRUB and recovery modes
kpartx and disk images
Crash dump collection
Common troubleshooting issues

6-12

Lab 6
Virtualization and Troubleshooting
Goal: To learn more about troubleshooting in the Red Hat Virtualization environment.

System Setup: A computer installed with Red Hat Enterprise Linux and configured to run
Red Hat Virtualization with at least one paravirtualized virtual machine from previous labs.


Sequence 1: Using rescue runlevels with virtual machines


Scenario:

In this sequence, you will practice using the virtual serial console to boot a
domain into rescue runlevels.

Instructions:
1.

Check to see if the domain vm2 from your previous labs is currently running. If it is,
gracefully shut it down.

2.

Once vm2 has completely shut down, start it up again with xm create -c to immediately
open the virtual serial console in your terminal. You should see the pyGRUB menu appear.
Hit any key to stop the boot countdown. Then edit the boot menu item for your kernel to
boot to single-user mode.

3.

You will not see the boot process in the virtual serial console. That is because it is not the
system console on vm2. In virt-manager, open the graphical console. You should see vm2
boot to the single-user prompt in that window. Type Ctrl-d to exit the single-user shell and
allow vm2 to finish booting.

4.

Now shut down the domain vm0 if it is running, and repeat the above steps with that
domain in place of vm2. You should notice that this time the boot messages and single-user
prompt appear on the virtual serial console. That is because we configured that console as
the system console in an earlier lab.


Sequence 2: Using Dom0 as a virtual machine rescue environment


Scenario:

In this sequence, you will learn how to use tools such as kpartx to access
virtual block devices in Domain-0 so that you may access the file systems of
unbootable domains for repair.

Instructions:
1.

Shut down the vm2 domain again.

2.

Verify that vm2 has completely shut down. Remember that the vm2 domain uses /dev/
vbds/vm2 as the virtual block device for its hard drives, and it needs to be shut down
cleanly before we can safely mount its file systems from Domain-0. Use kpartx -a to add
temporary devices in /dev/mapper for the partitions in the disk image on the logical
volume /dev/vbds/vm2.

3.

Run blkid to look up information about the file systems on the temporarily-added partitions.
Is there information about all the partitions on the image here? Why not?

4.

Use pvs to check to see if any of the image partitions are LVM physical volumes. If so,
identify the new volume groups and logical volumes. Activate all logical volumes in the
new volume groups, then run blkid again to get information about them. Which one is
probably /?

5.

Mount the / file system of vm2 on /mnt. Then mount the /boot file system of vm2 on /
mnt/boot. Can you use chroot /mnt?

6.

Before starting vm2 again, we have to safely stop using its file systems and volume groups
from Domain-0. Unmount /mnt/boot and /mnt.

7.

Make the logical volumes in VolGroup00 unavailable.

8.

Use kpartx -d to remove the temporary devices in /dev/mapper for the image from /
dev/vbds/vm2.


Sequence 3: Collecting crash dumps from virtual machines


Scenario:

In this sequence, you will learn how to collect live memory dumps from virtual
machines, and how to configure a paravirtualized machine to crash dump on
kernel panic.

Instructions:
1.

NOTE: Throughout this lab, keep an eye on the amount of disk space remaining in /var/
lib/xen/dump. It is very easy to fill the small file system available with memory dumps,
even with the small 256 MB virtual machines we are dumping in this lab.

2.

Check to verify that the vm4 domain is currently running; if not, start it. Collecting a current
memory dump is as easy as running the command:
[root@stationX]# virsh dump vm4 /var/lib/xen/dump/vm4-dump
A memory dump should be located in /var/lib/xen/dump/vm4-dump, and vm4
should still be running.

3.

Next, enable automatic crash dump on kernel panic for virtual machines by editing the /
etc/xen/xend-config.sxp file and restarting xend.

4.

On vm4, crash the system with the SysRq mechanism. Do not crash Domain-0. Ensure
that you are logged into vm4 as root and run the command:
[root@vm4]# echo c > /proc/sysrq-trigger
The domain will freeze, then crash.

5.

On Domain-0, look in /var/lib/xen/dump. There should be a crash dump file with a
name of the form $(date +%Y-%m%d-%H%M.%S)-vm4.domid.core.

6.

The instructor may have kernel-debuginfo-common and kernel-xen-debuginfo packages available which match the version of kernel-xen that vm4 is
running. If so, you may investigate the crash utility to see the basic information it might
provide.


Sequence 1 Solutions
1.

Check to see if the domain vm2 from your previous labs is currently running. If it is,
gracefully shut it down.
[root@stationX]# virsh shutdown vm2

2.

Once vm2 has completely shut down, start it up again with xm create -c to immediately
open the virtual serial console in your terminal. You should see the pyGRUB menu appear.
Hit any key to stop the boot countdown. Then edit the boot menu item for your kernel to
boot to single-user mode.
[root@stationX]# xm create -c vm2
At the pyGRUB menu, select your kernel with the arrow keys and type e to edit its menu
entry. Select the kernel line and type e again to edit the line. Add the word single on
the end of the line, then hit Return. Type b to boot the kernel.

3.

You will not see the boot process in the virtual serial console. That is because it is not the
system console on vm2. In virt-manager, open the graphical console. You should see vm2
boot to the single-user prompt in that window. Type Ctrl-d to exit the single-user shell and
allow vm2 to finish booting.

4.

Now shut down the domain vm0 if it is running, and repeat the above steps with that
domain in place of vm2. You should notice that this time the boot messages and single-user
prompt appear on the virtual serial console. That is because we configured that console as
the system console in an earlier lab.


Sequence 2 Solutions
1.

Shut down the vm2 domain again.


[root@stationX]# virsh shutdown vm2

2.

Verify that vm2 has completely shut down. Remember that the vm2 domain uses /dev/
vbds/vm2 as the virtual block device for its hard drives, and it needs to be shut down
cleanly before we can safely mount its file systems from Domain-0. Use kpartx to add
temporary devices in /dev/mapper for the partitions in the disk image on the logical
volume /dev/vbds/vm2.
[root@stationX]# kpartx -av /dev/vbds/vm2
add map vm2p1 : 0 208782 linear /dev/vbds/vm2 63
add map vm2p2 : 0 7116795 linear /dev/vbds/vm2 208845
This indicates that two partitions were on that image, and that /dev/mapper/vm2p1 and
/dev/mapper/vm2p2 have been added for them temporarily.

3.

Run blkid to look up information about the file systems on the temporarily-added partitions.
Is there information about all the partitions on the image here? Why not?
Information about /dev/mapper/vm2p2 is missing because it is an LVM physical
volume.

4.

Use pvs to check to see if any of the image partitions are LVM physical volumes. If so,
identify the new volume groups and logical volumes. Activate all logical volumes in the
new volume groups, then run blkid again to get information about them. Which one is
probably /?
Run vgs. You will see that the volume group VolGroup00 from vm2 has been added to
the list of volume groups on your system. Run lvs. You should see two additional logical
volumes from that volume group, LogVol00 and LogVol01. Neither one is marked
active. Run vgchange -ay VolGroup00 to activate both logical volumes. When you rerun
blkid, you should see that LogVol01 is a swap space. That leaves /dev/VolGroup00/
LogVol00 as the remaining candidate for /.
Note. The automatic addition of volume groups from the image happens to work well
because the volume group names used by vm2 do not conflict with the volume group names
used by Domain-0. If there is a conflict, that makes this troubleshooting trick much harder
to use.

5.

Mount the / file system of vm2 on /mnt. Then mount the /boot file system of vm2 on /
mnt/boot. Can you use chroot /mnt? Exit the chroot shell.
[root@stationX]# mount /dev/VolGroup00/LogVol00 /mnt
[root@stationX]# mount /dev/mapper/vm2p1 /mnt/boot
[root@stationX]# chroot /mnt
[root@stationX]# exit


6.

Before starting vm2 again, we have to safely stop using its file systems and volume groups
from Domain-0. Unmount /mnt/boot and /mnt.
# umount /mnt/boot; umount -l /mnt

7.

Make the logical volumes in VolGroup00 unavailable.


[root@stationX]# vgchange -an VolGroup00

8.

Use kpartx -d to remove the temporary devices in /dev/mapper for the image from /
dev/vbds/vm2.
[root@stationX]# kpartx -dv /dev/vbds/vm2
If you check with vgs and lvs, you should see that the volume groups and logical volumes
belonging to vm2 are no longer listed.


Sequence 3 Solutions
1.

NOTE: Throughout this lab, keep an eye on the amount of disk space remaining in /var/
lib/xen/dump. It is very easy to fill the small file system available with memory dumps,
even with the small 256 MB virtual machines we are dumping in this lab.
[root@stationX]# df /var/lib/xen/dump

2.

Check to verify that the vm4 domain is currently running; if not, start it. Collecting a current
memory dump is as easy as running the command:
[root@stationX]# virsh dump vm4 /var/lib/xen/dump/vm4-dump
A memory dump should be located in /var/lib/xen/dump/vm4-dump, and vm4
should still be running.

3.

Next, enable automatic crash dump on kernel panic for virtual machines by editing the /
etc/xen/xend-config.sxp file and restarting xend.
a.

In /etc/xen/xend-config.sxp set the directive:


(enable-dump yes)

b.

[root@stationX]# service xend restart

4.

On vm4, crash the system with the SysRq mechanism. Do not crash Domain-0. Ensure
that you are logged into vm4 as root and run the command:
[root@vm4]# echo c > /proc/sysrq-trigger
The domain will freeze, then crash.

5.

On Domain-0, look in /var/lib/xen/dump. There should be a crash dump file with a
name of the form $(date +%Y-%m%d-%H%M.%S)-vm4.domid.core.

6.

The instructor may have kernel-debuginfo-common and kernel-xen-debuginfo packages available which match the version of kernel-xen that vm4 is
running. If so, you may investigate the crash utility to see the basic information it might
provide.
Information on how to debug kernel crash dumps is beyond the scope of this course.
A whitepaper is available at http://people.redhat.com/anderson/
crash_whitepaper/ which provides a good introduction to the capabilities of crash.
The crash package should already be installed in Domain-0. If so, install the other
packages from the instructor's YUM repository on server1 and run crash:
[root@stationX]# yum install kernel-xen-debuginfo kernel-debuginfo-common
[root@stationX]# crash /usr/lib/debug/lib/modules/2.6.18-8.el5xen/vmlinux /var/lib/xen/dump/2007-0814-0154.11-vm4.10.core


crash 4.0-3.14
[...]
KERNEL: /usr/lib/debug/lib/modules/2.6.18-8.el5xen/vmlinux
DUMPFILE: 2007-0814-0154.11-vm4.10.core
CPUS: 1
DATE: Tue Aug 14 01:54:11 2007
UPTIME: 00:00:55
LOAD AVERAGE: 0.56, 0.18, 0.06
TASKS: 73
NODENAME: vm4.stationX.example.com
RELEASE: 2.6.18-8.el5xen
VERSION: #1 SMP Fri Jan 26 14:42:21 EST 2007
MACHINE: i686 (1994 Mhz)
MEMORY: 264 MB
PANIC: "SysRq : Trigger a crashdump"
PID: 2064
COMMAND: "bash"
TASK: c0fac550 [THREAD_INFO: c0fab000]
CPU: 0
STATE: TASK_RUNNING (SYSRQ)
crash> bt
PID: 2064    TASK: c0fac550    CPU: 0    COMMAND: "bash"
#0 [c0fabf04] xen_panic_event at c0409404
#1 [c0fabf18] notifier_call_chain at c05f68d2
#2 [c0fabf2c] panic at c041b47c
#3 [c0fabf48] sysrq_handle_crashdump at c052751b
#4 [c0fabf50] __handle_sysrq at c052736f
#5 [c0fabf78] write_sysrq_trigger at c0493691
#6 [c0fabf84] vfs_write at c0462719
#7 [c0fabf9c] sys_write at c0462d08
#8 [c0fabfb8] system_call at c0404cf8
    EAX: ffffffda  EBX: 00000001  ECX: b7d2c000  EDX: 00000002
    DS:  007b      ESI: 00000002  ES:  007b      EDI: b7d2c000
    SS:  007b      ESP: bf87316c  EBP: bf87318c
    CS:  0073      EIP: 00162402  ERR: 00000004  EFLAGS: 00000246
crash> quit
The initial screen shows the basic state of the system at the time of the crash and the process
which was running. Then additional commands can be used to get additional information.
bt prints out a back trace of the current context, by default the last few functions before
the crash. ps lists all processes running on the crashed system. log displays the state of the
dmesg buffer at the time of the crash. rd can display arbitrary memory addresses. There are
many other useful commands; read the whitepaper/tutorial for details.


Unit 7

Hardware-Assisted Full Virtualization

7-1

Objectives
Upon completion of this unit, you should be able to:
Determine if your hardware supports HVM domains
Understand how HVM domains differ from PV domains
Install fully virtualized HVM domains
Troubleshoot basic problems with HVM domains
7-2

Introduction to HVM
Hardware-assisted full virtualization allows unmodified operating
systems to be used
Older systems like Red Hat Enterprise Linux 3.9
Closed systems like Microsoft Windows Server 2003

Generally slower than paravirtualization


Requires processors with virtualization support
7-3

Red Hat Virtualization has support for Hardware Virtual Machine (HVM) domains; these domains take advantage
of the virtualization extensions in modern x86 processors in order to run unmodified operating systems. Unlike
paravirtualized guests, HVM guest operating systems do not require any modification in order to be used in a domain.
Older operating systems such as Red Hat Enterprise Linux 3.9, or proprietary operating systems such as Microsoft
Windows Server 2003, can be installed and used in virtual machines.
Given the choice, however, paravirtualized domains are generally faster than HVM domains. One of the reasons for
this is that the hardware devices in HVM are emulated by Dom0 and the hypervisor in order to isolate the DomU
domains from each other. In the future, one solution to this problem will be the development of paravirtualized drivers
for various operating systems that might be running in an HVM. These drivers would be more efficient than simulated
hardware because their knowledge of the virtual machine allows them to perform operations in more streamlined ways
not available to an emulated real device. Even without special drivers, HVM domains are generally faster than fully
emulated solutions.
Another issue is that HVM domains do require modern x86 processors which have virtualization support, and these
processors are just beginning to appear in large numbers on production systems.


Processor Support for HVM Domains


Intel VT-x or AMD-V must be supported by CPU
Check /proc/cpuinfo for Intel VT-x or AMD-V support
svm flag indicates AMD-V support
vmx flag indicates Intel VT-x support

Intel VT-x or AMD-V must be enabled in BIOS


Xen guest architecture rules on x86/x86-64:
32-bit system can run 32-bit PV and HVM guests
64-bit system can run 64-bit PV and HVM guests, and 32-bit HVM guests

7-4

Before attempting to use hardware-assisted full virtualization on x86 to set up HVM guest operating systems, it is
critical to verify that the hardware CPU actually provides appropriate support for Intel Virtualization Technology
(VT-x) or AMD Virtualization (AMD-V). The x86info utility can provide this information, as can the file /proc/
cpuinfo. If Intel VT-x is supported, the flag vmx will be listed; if AMD-V is supported, the flag svm will appear.
Only one of these two flags need appear.
[root@stationX]# cat /proc/cpuinfo | egrep '(vmx|svm)'
flags           : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush
dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pni monitor ds_cpl
vmx est tm2 cx16 xtpr lahf_lm
In addition, the BIOS must have virtualization support enabled. Some vendors disable this at the factory for certain
models of hardware, some do not. The procedure for enabling virtualization support in the BIOS will vary from vendor
to vendor.
The type of processor architecture has an impact on what processor architectures are supported in the virtual machines.
For Xen 3.0.3, a machine running RHEL 5 for 32-bit x86 can run 32-bit paravirtualized and fully virtualized
guests, but not 64-bit x86-64 guests. A machine running RHEL 5 for 64-bit x86-64 can run 64-bit guests, and 32-bit
fully virtualized guests. A computer running a 64-bit version of RHEL 5 in Dom0 currently can not run 32-bit
paravirtualized guests. A computer running a 32-bit version of RHEL 5 on a 64-bit capable processor can run 32-bit
paravirtualized guests but can not run 64-bit guests. These limitations may be addressed in later updates of Red
Hat Enterprise Linux 5. The xen_caps line in the output of xm info can provide information on supported guest
architectures. The example output below indicates support for paravirtualized 32-bit PAE kernels, and HVM 32-bit
non-PAE and PAE kernels.
[root@stationX]# xm info | grep xen_caps
xen_caps                : xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p


Limitations of HVM Domains


2 GB RAM maximum per HVM domain
May not save state to disk or restore
Manually shutdown HVM domains before Dom0 shutdown

Domain migration is not supported at present


7-5

The version of Xen used by Red Hat Virtualization places some restrictions on HVM domains that do not apply to
paravirtualized domains. Some of these restrictions may be addressed in future updates of Red Hat Enterprise Linux 5.
A HVM domain may have at most 2 GB of main memory allocated.
The ability to save state to disk and to restore that state are not supported. The normal action for the xendomains init
script is to save all domains on Dom0 shutdown, so that they may be restored on the next boot. Since HVM guests do
not support save currently, this will cause problems. Either manually shut down HVM domains before shutting down
Dom0, or set XENDOMAINS_SAVE to an empty value in /etc/sysconfig/xendomains to force all running domains on the
system to shut down when Dom0 is shut down.
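A minimal sketch of the relevant change in /etc/sysconfig/xendomains (leave the other settings in the file alone):
XENDOMAINS_SAVE=""
With XENDOMAINS_SAVE empty, the xendomains script shuts running domains down at Dom0 shutdown instead of trying to
save them.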
HVM domains may not be migrated or live migrated to another node at present.
Unlike paravirtualized domains, HVM domains can not automatically crash dump on panic at this time. The domain
must be configured to persist in memory after the panic, and the crash dump taken manually with virsh dump or
similar utilities.


Device Model and QEMU-DM


Unmodified operating systems need to see "real" hardware
For unmodified drivers to use; disk, network, BIOS, etc.
Unsafe for guests to directly access actual hardware

A device model is used to emulate real peripherals


Code is still run on the real CPU, not emulated

A modified version of QEMU is used as the Xen device model


A separate qemu-dm process for each HVM domain

7-6

For unmodified operating systems to run in a HVM domain, they need to have "real" hardware that their unmodified
drivers can use. It is unsafe to allow guest domains to have direct access to physical hardware on the system for
two reasons. The first problem is that any device that allows DMA which can be directly used by a guest can be
used to bypass the hypervisor and access regions of physical memory allocated to other guests, compromising the
security and isolation of all guests on the node. This is a problem with mechanisms such as PCI pass-through on most
current x86 hardware. The other problem is that most hardware devices are not designed to be accessed from multiple
operating systems at once. In the long run, support for technology such as Intel's upcoming VT-d extensions and
AMD's advanced IOMMU may help address these issues.
In the meantime, the hypervisor does not permit DomU guests to have low-level access to real hardware. Instead, it
presents emulated hardware to the HVM guests through the device model. Only peripheral hardware such as network
interfaces, video cards, disk controllers, and the BIOS are emulated; the CPU is not emulated, and the code itself still
runs on the processor directly.
A modified version of the QEMU emulator, QEMU-DM, is used as the Xen device model. Each HVM domain runs
its own private qemu-dm process in Dom0 to represent its hardware devices and isolate them from other guests. Basic
hardware is presented: a PIIX3/4-based motherboard, Realtek RTL-8139 network card, and Cirrus Logic GD 5446
video card that can be supported by many operating systems.
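From Dom0, one qemu-dm process should be visible for each running HVM domain, for example:
[root@stationX]# ps -C qemu-dm -o pid,args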


Installation of HVM Domains


virt-manager
Select Fully Virtualized
Give path to ISO Image Location or CD-ROM or DVD device

virt-install
virt-install --hvm -n testing -r 512 -c /dev/cdrom -f /dev/vg0/testing --vnc

7-7

The standard utilities can be used to install HVM guests. With virt-manager, start domain installation normally,
but select Fully Virtualized instead of Paravirtualized when prompted. On the install media screen, either the path
to an ISO image of an installation CD-ROM or DVD, or the path to a CD-ROM or DVD device file loaded with an
installation CD-ROM or DVD will need to be specified. The remainder of the installation process continues normally;
when the installer starts it will "boot" the CD-ROM or image in the graphical console. (For example, a network
installation of RHEL 5 could be performed by using an appropriate boot.iso image file as the installation CD-ROM.)
With virt-install, two options need to be changed: --hvm instead of -p to specify full virtualization, and -c path
specifying the path to the ISO image or CD-ROM device being used for the install. The virt-install command can also
still be used interactively in the terminal if run without options.


HVM Domain Configuration Files


Configuration files for HVM domains differ slightly
Directives for setting up HVM environment
builder=, kernel=, device_model=

Use of file: for disk image files


Use of type=ioemu for virtual network interfaces
7-8

A configuration file for a HVM domain will look slightly different than one for a paravirtualized domain.
Several directives, builder=, kernel=, and device_model= are set up to configure the device model for this
domain in the HVM environment.
In disk directives, at present it is safe and normal to use file: instead of tap:aio: to specify the locations of
virtual block device image files, as it is implemented differently for HVM guests than it is for PV guests.
Network interfaces are marked with type=ioemu to indicate that the interface should be modeled with the device
model and not by using a paravirtualized device.
An example configuration file for a HVM domain:
[root@station5]# cat /etc/xen/manual-hvm
name = "manual-hvm"
builder = "hvm"
memory = "512"
disk = [ 'phy:/dev/VolGroup00/manual-hvm,hda,w', ]
vif = [ 'type=ioemu, mac=00:16:3e:5e:50:b0, bridge=xenbr0', ]
uuid = "c4bbf064-5d3d-4ee9-be5d-3356b7bd46ab"
device_model = "/usr/lib/xen/bin/qemu-dm"
kernel = "/usr/lib/xen/boot/hvmloader"
vnc=1
vncunused=1
apic=1
acpi=1
pae=1
vcpus=1
serial = "pty" # enable serial console
on_reboot = 'restart'
on_crash = 'restart'


Manual Installation of HVM Domains


Basic procedure similar to paravirtualized domain installation
HVM domain installation procedure
Acquire install media as ISO images or on CD-ROM/DVD-ROM media
Prepare virtual block devices as necessary
Create /etc/xen/domain configuration file
Set up install media as a secondary VBD in domain configuration file
Use xm create to install domain
Shutdown domain and remove install media's VBD from configuration

7-9

It is possible to use xm create to manually install a HVM domain in much the same way that it can be used to
manually install a paravirtualized domain. The basic procedure is as follows:
1.

Get an ISO image file or CD-ROM/DVD-ROM of the installation media for your operating system.

2.

Prepare the devices that will serve as the backend storage in the same way that you would for a paravirtualized
guest.

3.

Create a configuration file for the new HVM domain. Note that some directives are specified slightly differently
for HVM domains, and some additional directives may be needed. See the example configuration file from the
notes to the preceding slide for an example.

4.

Edit the disk line of your domain configuration file temporarily to add the installer ISO image file or CD-ROM
as a virtual disk.
disk = [ 'phy:/dev/VolGroup00/manual-hvm,hda,w',
         'phy:/dev/cdrom,hdc:cdrom,r', ]

5.

Boot the install from the installer media instead of the main virtual hard drive.
[root@station5]# xm create -c -f /etc/xen/manual-hvm boot=d

6.

After the install has completed, shut down the HVM domain and remove the installer media from the list of disk
devices.


Microsoft Windows Installation Tips


Windows Server 2003 and Windows XP can be installed with
virt-manager or virt-install
Some tips to make the installation process go smoothly:
When asked to press F6 if you need to install drivers, press F5
Select "Standard PC" from the menu
Install normally until Setup reaches the first reboot
Edit the configuration file to add the installation media as a VBD
Start the domain with xm create domain
After installation, remove installation media from domain configuration

7-10

Microsoft Windows XP and Microsoft Windows Server 2003 can be easily installed with virt-manager or virt-install. During the installation process, a few non-standard steps will need to be performed to make the installation go
smoothly.
Setup has some trouble detecting what type of machine is being installed for 32-bit versions of Windows. Very early
in the installation process, when asked to press F6 if any additional SCSI or RAID drivers need to be installed, type
F5. It is easy to miss this step; make sure your pointer has been captured by the window so that keyboard input goes to
the virtual graphical console. On the menu that appears, select "Standard PC". Otherwise Setup will likely hang at the
"Setup is starting Windows" stage.
Continue the normal installation process until the first reboot performed by Setup. Shut down the domain and edit its
configuration file to include the installation media as a VBD, for example:
disk = [ 'phy:/dev/vg0/win2k3,hda,w', 'phy:/dev/cdrom,hdc:cdrom,r', ]
Start the domain again with xm create domain and complete the installation. Once the installation is complete, shut
down the domain again and remove the installation media from the list of VBDs in the domain's configuration file.


HVM DomU Virtual Serial Console


Not used at all by default
Unlike paravirtualized domains

Configuration is similar to Dom0


Just like an actual serial console
Use ttyS0 not xvc0

7-11

Configuration of a HVM domain's virtual serial console is identical to the configuration of a normal system's serial
console. Follow the basic steps for configuration of a Dom0 serial console, only the /boot/grub/grub.conf file
will look more like this:
serial --unit=0 --speed=9600
terminal --timeout=10 serial console
default=0
timeout=5
#splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.18-8.el5)
root (hd0,0)
kernel /boot/vmlinuz-2.6.18-8.el5 ro root=LABEL=/ rhgb quiet console=tty0 console=ttyS0,9600n8
initrd /boot/initrd-2.6.18-8.el5.img
Remember to edit /etc/inittab and /etc/securetty if root logins on the virtual serial console should be
allowed.


Troubleshooting HVM Domains


Kernel hangs or environment seems unstable
Use normal kernels in RHEL-based HVM domain, not kernel-xen!
Does your OS support the device model hardware?
Is Dom0's kernel-xen up to date?
Is another hypervisor attempting to run?

Check /var/log/xen/qemu-dm.pid.log for device model issues
7-12

Some final tips on troubleshooting HVM domains in Xen. If the HVM domain kernel is hanging or seems unstable,
verify that you are running the normal kernel and not the kernel-xen version.
[root@station5]# uname -r
2.6.18-8.1.8.el5
The guest operating system may have buggy drivers for the hardware emulated by the device model, or may use the
devices in a non-standard way that the device model can not handle.
If the whole virtualization environment is having problems, ensure that Dom0's kernel-xen package is up to date.
There have been some fixes to the kernel and hypervisor since the release of RHEL 5 that may be relevant to your
configuration.
Another possibility is that another virtualization system that uses hardware virtualization extensions is being run on the
machine, such as KVM or recent versions of VMWare. There is no way for software to easily tell if another hypervisor
is already using the virtualization extensions, which can wreak havoc.
The file /var/log/xen/qemu-dm.pid.log, where pid is the PID of the qemu-dm process of interest, may
contain useful information on issues with a domain's device model.


End of Unit 7
Questions and Answers
Summary
Determining processor support
HVM domain limitations
Device model and QEMU-DM
Installation of HVM domains
Troubleshooting

7-13

Lab 7
Advanced Topics
Goal: To gain experience with advanced topics in virtualization beyond the basic scope of the class.

System Setup: See individual lab descriptions.


Sequence 1: Configuration of a private network for local Xen guests
Scenario:

In this sequence, you will set up a private network which is only visible to
the Xen domains running on your machine. You will set up a second software
bridge managed by Domain-0 which is not connected to the classroom
network. At least two of your local virtual machines will then be connected
to this bridge. You will verify that they can communicate over the private
network without passing traffic over the public classroom network.
Look in Unit 5 for more background on this sequence.

System Setup:

A workstation which still has domains vm2 and vm3 available as created in
earlier labs.

Instructions:
1.

On Domain-0, we need to modify xend to start a second network bridge, private0,
when it starts at boot time. In order to do this, in /etc/xen/scripts, create a text file
named network-custom containing the following shell script:
#!/bin/bash
# network-custom
# custom network script to create a private virtual network
#
# first we must activate the default network bridge, xenbr0
/etc/xen/scripts/network-bridge netdev=eth0 start
# next we create and activate a private bridge, private0
brctl addbr private0
ifconfig private0 up
Save the file and exit your text editor.

2.

Set executable permissions on the /etc/xen/scripts/network-custom script.

3.

Modify /etc/xen/xend-config.sxp to start Xen networking using your network-custom script in place of the default network-bridge script.

4.

Reboot your workstation so that your hypervisor changes take effect.

5.

Once your workstation has rebooted, log back in to Domain-0 as root. Use brctl to verify
that the private0 bridge has been configured and that the xenbr0 bridge is still present.

6.

If all has gone well up to this point, the hypervisor is providing a new private network with
its own bridge, but no virtual machines are using that network yet. We need to configure

some of your existing virtual machines with additional virtual network cards connected to
that network.
As root on Domain-0, make sure that your vm2 domain is shut down. Edit the
configuration file for your vm2 domain, adding a second network interface connected to
private0.
7.

Repeat the previous step with domain vm3 and its /etc/xen/vm3 configuration file.

8.

Boot domains vm2 and vm3.

9.

On Domain-0, run brctl to inspect your software bridges again. Do you see interfaces
associated with the private0 bridge?

10.

Log into vm2 as root. Verify that you see both eth0 and eth1. Temporarily configure
eth1 with the IP address 10.0.0.2 and the netmask 255.255.255.0.

11.

Now log into vm3 as root. Verify that you see both eth0 and eth1. Temporarily configure
eth1 with the IP address 10.0.0.3 and the netmask 255.255.255.0.

12.

Next, on Domain-0, run a network sniffer to monitor the classroom network for ICMP
traffic. You will use this to verify that your two bridges are on separate networks and that
traffic across private0 is not visible to machines on the classroom network or on the
xenbr0 bridge.
In another terminal window, log into Domain-0 as root. Ensure that you have the tcpdump
package installed, then run the following command:
[root@stationX]# tcpdump icmp -i eth0
Leave tcpdump running. You may see some traffic from other students working on this lab.

13.

On vm3, start a broadcast ping to the address 192.168.0.255. Since RHEL 5 ignores IPv4
broadcast pings by default, you should not get many responses from the classroom network,
but you will see the packets appear in the tcpdump output on Domain-0.

14.

Stop the ping from the previous step if necessary. Now, still on vm3, ping 10.0.0.255 (the
broadcast IP address on the eth1 network). As before, the ping command will get few
if any responses, but network traffic is being generated. You should not see traffic from
tcpdump on Domain-0. Try running tcpdump on the eth1 interface on vm2. Do you see
the network traffic now?


Sequence 2: Dynamically adding a VBD to a paravirtualized guest


Scenario:

In this sequence you will add and remove a block device from a running
paravirtualized Xen guest. This is a very useful trick to temporarily move files
from a block device visible to Domain-0 to one of its local guests.

System Setup:

A domain vm2 as set up in previous labs. Free space in the vbds volume
group for another logical volume.

Instructions:
1.

As root on Domain-0, create a small logical volume in the vbds volume group named
vm2b.

2.

Ensure that the vm2 domain is running, then use the xm command to attach /dev/vbds/vm2b
to the domain.

3.

Log into vm2 as root. Run fdisk -l. Do you see an unpartitioned /dev/xvdb device of the
right size?

4.

Now, if the /dev/cdrom device from Domain-0 had been attached as /dev/xvdb in
vm2, then we could simply mount it on an empty directory with a command like
mount /dev/xvdb /mnt. If this logical volume were actually a USB key, we would be able to see
any partitions on the USB key in much the same way. One critical thing to remember is that
the backend device in Domain-0 should never be mounted at the same time in Domain-0
as it is in a guest domain, unless a cluster filesystem like GFS is in use, or filesystem
corruption and data inconsistency is almost certain.
Now that the block device is available in vm2, you may experiment by putting a partition on
it and formatting it with an ext3 file system to prove that it is usable. Do not mount it, or if
you do, unmount it before the next step.

5.

Now we will remove /dev/xvdb from vm2. Ensure that no partitions on /dev/xvdb are
mounted or otherwise in use before you continue. We need to determine the VBD number
of the device in use. On vm2, look in the file /sys/block/xvdb/device/nodename
and record the number that appears after the last slash in the output.

6.

Next, as root on Domain-0, use xm block-detach to detach the device from vm2.

7.

As root back on vm2, run fdisk -l again and verify that /dev/xvdb is gone.


Sequence 3: Red Hat Network registration of Xen guests


Scenario:

This is an optional sequence.


In this sequence, you will register your Domain-0 and vm2 with Red Hat
Network. This sequence requires that you have either Internet access and
an appropriately entitled account on the centrally-hosted Red Hat Network
servers, or that the instructor has provided a version 5.0 or later Red Hat
Network Satellite server in the classroom environment with appropriate
accounts configured.

Instructions:
1.

First, use rhn_register to register your Domain-0 machine normally to Red Hat Network.

2.

Log into RHN or your RHN Satellite using the account your Domain-0 is registered
to, and adjust its channel subscriptions to include "RHEL Virtualization" and "Red Hat
Network Tools for RHEL Server".

3.

As root on Domain-0, use yum to install the rhn-virtualization-common and rhn-virtualization-host packages.

4.

Make sure that the vm2 domain is running, then run rhn_check and rhn-profile-sync as
root on Domain-0.

5.

As root on vm2, run rhn_register as you did for Domain-0 in the first step of this
sequence.

6.

In the RHN web interface, go to the Systems tab, click on your Domain-0's hostname link,
then on the detail page find the Virtualization link/tab and click on that. You should see
a table or list of the virtual machines running on Domain-0 when you updated its RHN
profile. Click on the vm2 link to see its RHN profile.


Sequence 1 Solutions
1.

On Domain-0, we need to modify xend to start a second network bridge, private0,
when it starts at boot time. In order to do this, in /etc/xen/scripts, create a text file
named network-custom containing the following shell script:
#!/bin/bash
# network-custom
# custom network script to create a private virtual network
#
# first we must activate the default network bridge, xenbr0
/etc/xen/scripts/network-bridge netdev=eth0 start
# next we create and activate a private bridge, private0
brctl addbr private0
ifconfig private0 up
Save the file and exit your text editor.

2.

Set executable permissions on the /etc/xen/scripts/network-custom script.


[root@stationX]# cd /etc/xen/scripts
[root@stationX]# chmod a+x network-custom
[root@stationX]# ls -l network-custom
-rwxr-xr-x 1 root root 302 Sep 30 18:40 network-custom

3.

Next, modify /etc/xen/xend-config.sxp to start networking using your
network-custom script in place of the default network-bridge script.
Open /etc/xen/xend-config.sxp with a text editor and change the line
(network-script network-bridge)
to read
(network-script network-custom)
This will be on about line 90 of the file. Save /etc/xen/xend-config.sxp and exit.

4.

Reboot your workstation so that your hypervisor changes take effect.


[root@stationX]# reboot

5.

Once your workstation has rebooted, log back in to Domain-0 as root. Use brctl to verify
that the private0 bridge has been configured and that the xenbr0 bridge is still present.
The exact output of the commands below will depend on which virtual machines you have
configured to start at boot.


[root@stationX]# virsh list
 Id Name                 State
----------------------------------
  0 Domain-0             running
  1 vm0                  blocked
[root@stationX]# brctl show
bridge name     bridge id               STP enabled     interfaces
private0        8000.000000000000       no
xenbr0          8000.feffffffffff       no              vif1.0
                                                        peth0
                                                        vif0.0

The output of virsh list shows that Domain-0 and a single guest VM, vm0, are running.
The output of brctl show shows that there are two software bridges configured, private0
and xenbr0. Right now, private0 has no interfaces associated with it. The xenbr0
bridge has three ports connected. vif0.0 goes to eth0 in Domain-0 (ID 0). vif1.0
goes to eth0 in vm0 (ID 1). peth0 goes to the physical connection to the classroom
network through the actual network card on your station.
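If you ever need to map a vif name back to a guest, one possible approach (using the libvirt tools introduced earlier in the course) is to translate the domain ID embedded in the interface name:
[root@stationX]# virsh domname 1      # vif1.0 belongs to the domain with ID 1
vm0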
6.

If all has gone well up to this point, the hypervisor is providing a new private network with
its own bridge, but no virtual machines are using that network yet. We need to configure
some of your existing virtual machines with additional virtual network cards connected to
that network.
As root on Domain-0, make sure that your vm2 domain is shut down. Edit the
configuration file for your vm2 domain, adding a second network interface connected to
private0.
You can use virsh shutdown vm2 to shut down the vm2 domain. Wait until the domain is
completely shut down before continuing. If you get error: failed to get domain
'vm2', it is likely the domain was not running and you may continue.
Open /etc/xen/vm2 in a text editor. Find the vif line and change it to read
vif = [ 'mac=00:16:3e:00:XX:02, bridge=xenbr0',
'mac=00:16:3e:00:XX:a2, bridge=private0', ]
where XX is your station number in decimal (station10 uses 10 not 0a, and so on.)

7.

Repeat the previous step with domain vm3 and its /etc/xen/vm3 configuration file.
As root on Domain-0, make sure that your vm3 domain is shut down. Edit the
configuration file for your vm3 domain, adding a second network interface connected to
private0.


You can use virsh shutdown vm3 to shut down the vm3 domain. Wait until the domain is
completely shut down before continuing. If you get error: failed to get domain
'vm3', it is likely the domain was not running and you may continue.
Open /etc/xen/vm3 in a text editor. Find the vif line and change it to read
vif = [ 'mac=00:16:3e:00:XX:03, bridge=xenbr0',
'mac=00:16:3e:00:XX:a3, bridge=private0', ]
where XX is your station number in decimal (station10 uses 10 not 0a, and so on.)
8.

Boot domains vm2 and vm3.


[root@stationX]# xm create vm2
[root@stationX]# xm create vm3

9.

On Domain-0, run brctl to inspect your software bridges again. Do you see interfaces
associated with the private0 bridge?
[root@stationX]# virsh list
 Id Name                 State
----------------------------------
  0 Domain-0             running
  1 vm0                  blocked
  3 vm2                  blocked
  4 vm3                  blocked
[root@stationX]# brctl show
bridge name     bridge id               STP enabled     interfaces
private0        8000.feffffffffff       no              vif4.1
                                                        vif3.1
xenbr0          8000.feffffffffff       no              vif4.0
                                                        vif3.0
                                                        vif1.0
                                                        peth0
                                                        vif0.0

We now see four new interfaces associated with the software bridges. vif3.0 and
vif4.0 are associated with the eth0 interfaces on domain ID 3 (vm2 at present) and ID
4 (vm3 at present). These two interfaces are bound to xenbr0, the bridge connected to
the classroom network. vif3.1 and vif4.1 are associated with the eth1 interfaces on
vm2 and vm3 respectively. These are bound to the bridge connected to the private virtual
network on your station.
10.

Log into vm2 as root. Verify that you see both eth0 and eth1. Temporarily configure
eth1 with the IP address 10.0.0.2 and the netmask 255.255.255.0.
[root@vm2]# ip link show dev eth0

3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:00:XX:02 brd ff:ff:ff:ff:ff:ff
[root@vm2]# ip link show dev eth1
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:00:XX:a2 brd ff:ff:ff:ff:ff:ff
[root@vm2]# ifconfig eth1 10.0.0.2 netmask 255.255.255.0 up
11.

Log into vm3 as root. Verify that you see both eth0 and eth1. Temporarily configure
eth1 with the IP address 10.0.0.3 and the netmask 255.255.255.0.
[root@vm3]# ip link show dev eth0
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:00:XX:03 brd ff:ff:ff:ff:ff:ff
[root@vm3]# ip link show dev eth1
4: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:16:3e:00:XX:a3 brd ff:ff:ff:ff:ff:ff
[root@vm3]# ifconfig eth1 10.0.0.3 netmask 255.255.255.0 up

12.

Next, on Domain-0, run a network sniffer to monitor the classroom network for ICMP
traffic. You will use this to verify that your two bridges are on separate networks and that
traffic across private0 is not visible to machines on the classroom network or on the
xenbr0 bridge.
In another terminal window, log into Domain-0 as root. Ensure that you have the tcpdump
package installed, then run the following command:
[root@stationX]# tcpdump icmp -i eth0
Leave tcpdump running. You may see some traffic from other students working on this lab.
To make sure that tcpdump is installed, run yum install tcpdump.

13.

On vm3, start a broadcast ping to the address 192.168.0.255. Since RHEL 5 ignores
IPv4 broadcast pings by default, you should get few if any responses from the classroom
network, but you will see the packets appear in the tcpdump output on Domain-0.
On vm3:
[root@vm3]# ping -b 192.168.0.255
On Domain-0:
[root@stationX]# tcpdump icmp -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode


listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
20:38:00.384494 IP 192.168.1.104 > 192.168.1.255: ICMP echo request, id 26892, seq 1, length 64
20:38:01.386831 IP 192.168.1.104 > 192.168.1.255: ICMP echo request, id 26892, seq 2, length 64
14.

Stop the ping from the previous step if necessary. Now, still on vm3, ping 10.0.0.255 (the
broadcast IP address on the eth1 network). As before, the ping command will get few
if any responses, but network traffic is being generated. You should not see traffic from
tcpdump on Domain-0. Try running tcpdump on the eth1 interface on vm2. Do you see
the network traffic now?
On vm3:
[root@vm3]# ping -b 10.0.0.255
On Domain-0:
[root@stationX]# tcpdump icmp -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
On vm2:
[root@vm2]# tcpdump icmp -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 96 bytes
20:41:20.242234 IP 10.0.0.3 > 10.0.0.255: ICMP echo request, id 12033, seq 1, length 64
20:41:21.244232 IP 10.0.0.3 > 10.0.0.255: ICMP echo request, id 12033, seq 2, length 64
To prove to yourself that network traffic is actually being generated, in both this and the
previous step you can configure vm2 and/or vm3 to respond to broadcast pings by running
the following command as root on those machines:
[root@vm2]# sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
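Note that sysctl -w only changes the running kernel. If you wanted the setting to survive a reboot (not required for this lab), the corresponding line in /etc/sysctl.conf would be:
net.ipv4.icmp_echo_ignore_broadcasts = 0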


Sequence 2 Solutions
1.

As root on Domain-0, create a small logical volume in the vbds volume group named
vm2b.
[root@stationX]# lvcreate -L 1G -n vm2b vbds
Logical volume "vm2b" created

2.

Ensure that the vm2 domain is running, then use the xm block-attach command to attach
/dev/vbds/vm2b to the domain in read-write mode as /dev/xvdb. Look at the man page for
hints or the solution for exact syntax.
[root@stationX]# virsh domstate vm2
blocked
[root@stationX]# xm block-attach vm2 phy:/dev/vbds/vm2b /dev/xvdb w

3.

Log into vm2 as root. Run fdisk -l. Do you see an unpartitioned /dev/xvdb device of the
right size?
[root@vm2]# fdisk -l
Disk /dev/xvda: 3758 MB, 3758096384 bytes
255 heads, 63 sectors/track, 456 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *           1          13      104391   83  Linux
/dev/xvda2              14         456     3558397+  8e  Linux LVM

Disk /dev/xvdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/xvdb doesn't contain a valid partition table
4.

Now, if the /dev/cdrom device from Domain-0 had been attached as /dev/xvdb in
vm2, then we could simply mount it on an empty directory with a command like
mount /dev/xvdb /mnt. If this logical volume were actually a USB key, we would be able to see
any partitions on the USB key in much the same way. One critical thing to remember is that
the backend device in Domain-0 should never be mounted at the same time in Domain-0
as it is in a guest domain, unless a cluster filesystem like GFS is in use, or filesystem
corruption and data inconsistency is almost certain.


Now that the block device is available in vm2, you may experiment by putting a partition on
it and formatting it with an ext3 file system to prove that it is usable. Do not mount it, or if
you do, unmount it before the next step.
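One possible way to run that optional experiment (the partition name /dev/xvdb1 assumes you create a single primary partition):
[root@vm2]# fdisk /dev/xvdb          # create one primary partition covering the disk
[root@vm2]# partprobe /dev/xvdb      # have the kernel reread the new partition table
[root@vm2]# mkfs -t ext3 /dev/xvdb1  # format the new partition with ext3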
5.

Now we will remove /dev/xvdb from vm2. Ensure that no partitions on /dev/xvdb are
mounted or otherwise in use before you continue. We need to determine the VBD number
of the device in use. On vm2, look in the file /sys/block/xvdb/device/nodename
and record the number that appears after the last slash in the output.
An example might look like this:
[root@vm2]# cat /sys/block/xvdb/device/nodename
device/vbd/51728
Here 51728 is the virtual block device number that xm block-detach expects; it encodes the xvd major number (202) shifted left eight bits plus the minor number of xvdb (16).

6.

Next, as root on Domain-0, use xm block-detach to detach the device from vm2.
[root@stationX]# xm block-detach vm2 51728

7.

As root back on vm2, run fdisk -l again and verify that /dev/xvdb is gone.
[root@vm2]# fdisk -l
Disk /dev/xvda: 3758 MB, 3758096384 bytes
255 heads, 63 sectors/track, 456 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *           1          13      104391   83  Linux
/dev/xvda2              14         456     3558397+  8e  Linux LVM


Sequence 3 Solutions
1.

First, as root on Domain-0, use rhn_register to register the domain normally to Red Hat
Network.
Click Forward, and on the Choose an update location screen select either Red Hat
Network or Red Hat Network Satellite as appropriate for the classroom. For the latter,
specify the appropriate URL for the Red Hat Network Location (possibly
https://server1.example.com or as specified by your instructor). Click Forward.
On the Enter your account information screen, specify the username and password for the
RHN account you intend to register the system to. Click Forward.
Select the defaults for the System Profile and click Forward twice and then Finish.

2.

Log into RHN or your RHN Satellite using the account your Domain-0 is registered
to, and adjust its channel subscriptions to include "RHEL Virtualization" and "Red Hat
Network Tools for RHEL Server".
Once you are logged in, click on the Systems tab, then the Domain-0 system's hostname
hyperlink on the table. This should bring up a detail page for the host. Find the Alter
Channel Subscriptions link and click on it. Put checkmarks in the "RHEL Virtualization"
and "Red Hat Network Tools for RHEL Server" channel check boxes and click on the
Change Subscriptions button.

3.

As root on Domain-0, use yum to install the rhn-virtualization-common and rhn-virtualization-host packages.
[root@stationX]# yum install rhn-virtualization-common rhn-virtualization-host

4.

Make sure that the vm2 domain is running, then run rhn_check and rhn-profile-sync as
root on Domain-0.
[root@stationX]# xm create vm2
[root@stationX]# rhn_check; rhn-profile-sync

5.

As root on vm2, run rhn_register as you did for Domain-0 in the first step of this
sequence.
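This step is a single command on the guest; the registration dialog then proceeds as it did for Domain-0 in step 1:
[root@vm2]# rhn_register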

6.

In the RHN web interface, go to the Systems tab, click on your Domain-0's hostname link,
then on the detail page find the Virtualization link/tab and click on that. You should see
a table or list of the virtual machines running on Domain-0 when you updated its RHN
profile. Click on the vm2 link to see its RHN profile.

