
HVM Dom0: Any Unmodified OS as dom0

Xiantao Zhang, Jun Nakajima, Dongxiao Xu


Speaker: Will Auld
Intel Corporation
Legal Disclaimer
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO
LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL
PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS
AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER,
AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF
INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A
PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR
OTHER INTELLECTUAL PROPERTY RIGHT. INTEL PRODUCTS ARE NOT INTENDED FOR USE IN
MEDICAL, LIFE SAVING, OR LIFE SUSTAINING APPLICATIONS.
Intel may make changes to specifications and product descriptions at any time, without notice.
All products, dates, and figures specified are preliminary based on current expectations, and are subject to
change without notice.
Intel, processors, chipsets, and desktop boards may contain design defects or errors known as errata, which
may cause the product to deviate from published specifications. Current characterized errata are available on
request.
Intel and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the
United States and other countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2013 Intel Corporation.
Outline
Para-virtualized Dom0 history & problems

Why HVM dom0?

Technologies in HVM dom0

Call to Action

Takeaways

Today's Xen Architecture (Since Xen 3.0)
[Architecture diagram: Dom0 (XenLinux) hosts the Device Manager & Control s/w plus native device drivers (AGP, ACPI, SMP, PCI) and back-end drivers; VM1-VM3 run unmodified user software on para-virtualized guest OSes (XenLinux) with front-end device drivers; a VT-x HVM domain runs an unmodified guest OS (Windows, Linux); the 32/64-bit Xen hypervisor (with VT-d) provides the control IF, safe HW IF, event channels, virtual CPU, and virtual MMU on top of the hardware (SMP, MMU, physical memory, Ethernet, SCSI/IDE).]


History of PV Dom0
PV Dom0 evolution:

XenLinux 2.6.18
XenLinux 2.6.27
Linux 2.6.32 + PVOPS patchset
PVOPS Linux pushed upstream as of Linux 3.0

Challenges:
Tremendous effort spent on pushing patches to Linux upstream.
Ongoing maintenance effort.
Hard to push certain features/fixes into Linux upstream:
Xen PAT
Xen ACPI
Xen RAS

Why was dom0 PV-only?

Problems
Old x86 architecture (pre-VT)
Virtualization was not considered in the design.
Many virtualization holes exist.
x86 architecture with 1st-generation VT
Lack of performance optimizations.
No hardware features to support memory/IO virtualization.
Solution: PV dom0
Modify dom0's kernel source code to address the virtualization holes.
Adopt PV interfaces to enhance system performance:
Network, storage, MMU, etc.
But: Linux only.

Limitations of PV Dom0
Dom0 kernel is modified Linux (XenLinux)
Depends on kernel changes
Hard to push some changes (RAS, PAT, ACPI) to upstream Linux
Can't support unmodified OSes (Windows, Mac OS, etc.)
Can't leverage VT for performance enhancement
Performance limitations of dom0
64-bit dom0 suffers from poor performance
Superpages can't be supported well
Fast system calls can't be supported either
Various unavoidable traps to the hypervisor:
Thread switches
FPU, TLS, stack switches
MMU update operations
Guest page faults

New Hardware Virtualization Technologies
CPU virtualization
Virtualization holes are finally closed by the architecture
CR access acceleration
TPR shadow/APICv
Memory virtualization
EPT/VPID: memory virtualization is done by hardware
EPT superpages supported
Unrestricted guest
I/O virtualization
VT-d supports direct I/O for guests
SR-IOV
Interrupt virtualization
APICv
Posted interrupts

HVM domains now achieve performance comparable to PV domains
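
These capabilities are enumerated through the IA32_VMX_* capability MSRs rather than CPUID leaves. Below is a minimal sketch, assuming a Linux machine with root privileges and the msr kernel module loaded (modprobe msr); MSR addresses and bit positions follow the Intel SDM, where the "allowed-1" settings live in each MSR's upper 32 bits.

/* Minimal sketch: probe VT-x capabilities via /dev/cpu/0/msr.
 * Assumes Linux, root, and a loaded `msr` module. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

#define IA32_VMX_PINBASED_CTLS   0x481
#define IA32_VMX_PROCBASED_CTLS2 0x48B

static uint64_t rdmsr(int fd, uint32_t reg)
{
    uint64_t val = 0;
    pread(fd, &val, sizeof(val), reg);   /* the file offset selects the MSR */
    return val;
}

int main(void)
{
    int fd = open("/dev/cpu/0/msr", O_RDONLY);
    if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

    /* Upper 32 bits = features that may be enabled ("allowed-1"). */
    uint32_t ctls2 = rdmsr(fd, IA32_VMX_PROCBASED_CTLS2) >> 32;
    uint32_t pin   = rdmsr(fd, IA32_VMX_PINBASED_CTLS)   >> 32;

    printf("EPT:                %s\n", ctls2 & (1u << 1) ? "yes" : "no");
    printf("VPID:               %s\n", ctls2 & (1u << 5) ? "yes" : "no");
    printf("Unrestricted guest: %s\n", ctls2 & (1u << 7) ? "yes" : "no");
    printf("APIC-reg. virt.:    %s\n", ctls2 & (1u << 8) ? "yes" : "no");
    printf("Posted interrupts:  %s\n", pin   & (1u << 7) ? "yes" : "no");

    close(fd);
    return 0;
}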

A Good Opportunity to Improve dom0

Goal
Remove PV dom0's limitations
Leverage new VT technologies to enhance dom0's performance
Options
PVH dom0: run a PV kernel in an HVM container, leveraging some VT technologies
(e.g. EPT) to enhance dom0's performance. Limited to Linux.

HVM dom0: run an unmodified OS (possibly with PV drivers) in an HVM container
with full VT technologies. Ideally, any unmodified OS can be supported.

Our choice: HVM dom0

Xen Architecture with HVM dom0
[Architecture diagram: HVM dom0 runs an unmodified OS (Linux/Windows/etc.) in VMX non-root mode, with Qemu (virtio backend) and the Device Manager & Control s/w in ring 3, and native plus back-end device drivers in ring 0; an HVM domain runs unmodified user software on an unmodified guest OS (Windows, Linux) with front-end device drivers (e.g. virtio); both domains reach the VMX-root Xen hypervisor (control IF, safe HW IF, event channel, virtual CPU, virtual MMU) through VM entries/exits and interrupts, with Xen sitting directly on the hardware.]

Benefits of HVM dom0

More choices for Dom0 (Windows, Mac OS, etc.).

Better performance compared with 64-bit PV Dom0.
Reduces the XenLinux kernel complexity and maintenance effort.
The Xen hypervisor becomes flexible enough to support more use cases:
Desktop virtualization to benefit Windows/Mac OS users.
New Xen client hypervisor usage.
Mobile virtualization.
Covers more virtualization models.
First Windows/Mac OS-based open-source type-1 hypervisor.

How to make HVM dom0 work?
CPU virtualization
Same as for an HVM domU.
Memory virtualization
Adopt EPT/NPT for memory virtualization.
Superpages are used for performance enhancement (see the EPT sketch after this list).
IO virtualization
With VT-d, all physical devices are assigned to dom0 by default.
Port I/O and MMIO accesses don't trigger VM exits.
Interrupt virtualization
Dom0 controls the physical IOAPIC.
Dom0's local APIC is virtualized.
The hypervisor owns the physical local APIC.
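
A minimal, self-contained sketch of the superpage idea: identity-mapping the first 1 GB of guest-physical memory with 2 MB EPT superpages. The table layout follows the Intel SDM; the helper names and the use of plain aligned_alloc() instead of real machine pages are illustrative assumptions, not Xen's actual implementation.

/* Sketch: build an identity-mapped EPT for the first 1 GB of
 * guest-physical memory using 2 MB superpages. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define EPT_R        (1ull << 0)   /* readable   */
#define EPT_W        (1ull << 1)   /* writable   */
#define EPT_X        (1ull << 2)   /* executable */
#define EPT_MT_WB    (6ull << 3)   /* memory type: write-back (leaf entries) */
#define EPT_2MB_PAGE (1ull << 7)   /* PDE maps a 2 MB superpage */

static uint64_t *alloc_table(void)
{
    void *p = aligned_alloc(4096, 4096);  /* EPT tables are 4 KB, 4 KB-aligned */
    memset(p, 0, 4096);
    return p;
}

/* Returns the PML4 address a hypervisor would fold into the VMCS EPT
 * pointer (EPTP). In real code these would be machine addresses. */
uint64_t build_identity_ept_1g(void)
{
    uint64_t *pml4 = alloc_table();
    uint64_t *pdpt = alloc_table();
    uint64_t *pd   = alloc_table();

    pml4[0] = (uint64_t)(uintptr_t)pdpt | EPT_R | EPT_W | EPT_X;
    pdpt[0] = (uint64_t)(uintptr_t)pd   | EPT_R | EPT_W | EPT_X;

    /* 512 x 2 MB superpage PDEs = 1 GB, identity mapped (GPA == HPA). */
    for (uint64_t i = 0; i < 512; i++)
        pd[i] = (i << 21) | EPT_2MB_PAGE | EPT_MT_WB | EPT_R | EPT_W | EPT_X;

    return (uint64_t)(uintptr_t)pml4;
}

Using 2 MB leaves removes one level of the EPT walk and reduces TLB pressure, which is a large part of the performance benefit the slide refers to.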

HVM dom0 Boot Sequence
EFI-based system as HVM Dom0 (e.g. Windows*)
Dynamically de-privilege the EFI shell to an HVM guest environment
Boot flow:
System power on → boot to EFI shell → execute startup.nsh → xen_loader.efi → Xen
entry point: start_xen() → construct_dom0() → prepare_hvm_context() →
VMLAUNCH back to the original EFI shell → load the OS as usual
startup.nsh:
xen_loader.efi xen.gz /boot/efi/ia32.efi
xen_loader.efi:
Used to dynamically de-privilege the EFI shell
A single EFI binary that loads Xen and sets up the return point from the hypervisor
After returning to the EFI shell, the EFI environment is inside an HVM guest

EFI is dynamically de-privileged into an HVM container
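
The slides only name xen_loader.efi, so the following gnu-efi sketch is a hypothetical illustration of that flow; load_xen_image() is an assumed stand-in for the real file-loading and decompression logic.

/* Hypothetical sketch of a xen_loader.efi-style binary (gnu-efi). */
#include <efi.h>
#include <efilib.h>

typedef void (*xen_entry_t)(void *boot_info);

/* Stub: a real loader would read xen.gz from the EFI system partition
 * (EFI_SIMPLE_FILE_SYSTEM_PROTOCOL) and decompress it. */
static xen_entry_t load_xen_image(CHAR16 *path)
{
    Print(L"loading %s ...\n", path);
    return NULL;  /* placeholder */
}

EFI_STATUS EFIAPI efi_main(EFI_HANDLE image, EFI_SYSTEM_TABLE *systab)
{
    InitializeLib(image, systab);

    xen_entry_t xen_entry = load_xen_image(L"xen.gz");
    if (!xen_entry)
        return EFI_LOAD_ERROR;

    /* Hand control to Xen. Xen then runs start_xen() ->
     * construct_dom0() -> prepare_hvm_context() and VMLAUNCHes back
     * here, so everything after this call executes de-privileged,
     * inside an HVM guest. */
    xen_entry(NULL);

    Print(L"EFI shell is now running inside HVM dom0\n");
    return EFI_SUCCESS;
}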

HVM dom0 Boot Sequence (Cont'd)

Linux as HVM dom0

System loaded by GRUB
Boot flow: System power on → GRUB → Xen entry point: start_xen() →
construct_dom0() → prepare_hvm_context() → VMLAUNCH to the kernel entry.

Similar to today's PV dom0

Multi-domain Support
Qemu is a must
Both Linux & Windows support Qemu
PV driver support
The Xen bus/event channel mechanism is needed
One virtual PCI device (the PCI platform device) is exposed to guests (see the detection sketch below)
Port the PCI platform device logic from Qemu to the hypervisor

Dom0 OS  | Back-end Driver | Event Channel/Xen Bus Driver* | Qemu      | User-land tools/libs
Linux    | Ready           | Ready                         | Ready     | Ready
Windows  | Virtio Ready    | Partially Ready*              | Not ready | Not ready
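
As a concrete note on the PCI platform device above: Qemu's Xen platform device carries vendor ID 0x5853 and device ID 0x0001, so a guest-side component can locate it by scanning PCI configuration space. A minimal Linux user-space sketch using the legacy 0xCF8/0xCFC mechanism (needs root for iopl()):

/* Scan PCI config space for the Xen platform device
 * (vendor 0x5853, device 0x0001). x86 Linux, root only. */
#include <stdio.h>
#include <stdint.h>
#include <sys/io.h>

static uint32_t pci_read32(int bus, int dev, int fn, int off)
{
    uint32_t addr = 0x80000000u | (bus << 16) | (dev << 11)
                  | (fn << 8) | (off & 0xfc);
    outl(addr, 0xCF8);          /* select bus/device/function/register */
    return inl(0xCFC);          /* read the config dword */
}

int main(void)
{
    if (iopl(3)) { perror("iopl"); return 1; }
    for (int bus = 0; bus < 256; bus++)
        for (int dev = 0; dev < 32; dev++) {
            /* Config dword 0: device ID in bits 31:16, vendor in 15:0. */
            uint32_t id = pci_read32(bus, dev, 0, 0);
            if (id == (0x0001u << 16 | 0x5853u))
                printf("Xen platform device at %02x:%02x.0\n", bus, dev);
        }
    return 0;
}

A real PV driver would then typically take over this device's BAR and IRQ to bootstrap the event channel and Xen bus.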

Call to Action

Port (or simplify) Xen's userland tools/libraries


For Windows
Enable PV drivers for DomU guests
For performance enhancement of DomU
Port the Xen-Qemu logic to Windows

Takeaways

Unmodified OS as HVM dom0


Only an add-on feature for the Xen project
Doesn't break Xen's existing usage models
Used only on new x86 platforms
Can resolve PV dom0's limitations
Can cover more usage models
New type of Xen client (with Windows HVM Dom0)
Creates a Trusted Execution Environment for a single Windows/Mac OS
Xen can be used in the PC market like today's type-2 VMMs

Questions?

Or contact xiantao.zhang@intel.com

