
A

SEMINAR REPORT
ON

VIRTUAL MACHINE AND ITS TOOLS


In partial fulfillment of
B.E. IV Year (Computer Science & Engineering)

Work Carried Out At

JIET, Jodhpur

Submitted To:
Nidhi Purohit

Submitted by:
Savan Kumar Vaishnav
CSE, IV Year

Acknowledgement
I am indebted to all my elders, lecturers and friends for inspiring me to give
a seminar with immense dedication. With great pleasure and gratefulness, I
extend my deep sense of gratitude to Ms. Nidhi Purohit for giving me an
opportunity to accomplish my seminar under her guidance and to increase
my knowledge.
I would also like to thank Mrs. Sugandha (HOD, Computer Science &
Engineering, Jodhpur Institute of Engineering & Technology) for giving her
precious time & guidance for the successful completion of the seminar.
Lastly, I wish to thank everyone involved in making this seminar successful.

Thanking You.
Savan K. Vaishnav
[B.E. (IV Year)
Computer Science & Engineering]

Preface
This seminar is on the topic "Virtual Machines and Their Tools". It deals with virtual machines, which are essentially software realizations of computer hardware. Virtual machines are becoming increasingly popular for running multiple guest operating systems on top of a host operating system, so this report covers the main points about virtual machines: what they are, what they are used for, and how they work.
It also discusses the tools used to create and manage virtual machines: how to use them, what their benefits are, and the business perspective on them.

Index
1. Introduction
2. History - Why VM?
3. Hypervisor
4. Categories
5. Techniques
6. Tools
7. Conclusion
8. Advantages
9. Disadvantages
10. Applications
11. Appendix
12. References

Introduction
A virtual machine was originally defined by Popek and Goldberg as "an efficient, isolated
duplicate of a real machine". Current use includes virtual machines which have no direct
correspondence to any real hardware. So a virtual machine is basically a
software implementation of a machine (i.e. a computer) that executes programs like a
physical machine.
E.g. - a program written in Java receives services from the java runtime
environment (JRE) software by issuing commands to, and receiving the expected results
from, the Java software. By providing these services to the program, the Java software is
acting as a "virtual machine", taking the place of the operating system or hardware for
which the program would ordinarily be tailored.
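To make the Java example concrete, here is a minimal sketch (the class name and printed strings are illustrative, not taken from the report): the program obtains a thread, console output and environment information from the JVM rather than from the underlying operating system directly, so the same compiled bytecode behaves the same way on any supported platform.

```java
// Minimal illustrative sketch: this program never talks to the OS directly.
// The JVM supplies threading, console output, and environment information,
// so the same compiled bytecode behaves identically on any supported platform.
public class JreServicesDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() ->
                System.out.println("Hello from a JVM-managed thread"));
        worker.start();
        worker.join();
        // The JVM reports which virtual machine and OS it is running on.
        System.out.println("VM: " + System.getProperty("java.vm.name"));
        System.out.println("Host OS: " + System.getProperty("os.name"));
    }
}
```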

History - Why VM?

- Early computers (mainframes) were large and expensive.
- The VMM approach allowed a single machine to be safely multiplexed among many different applications, as an alternative to multiprogramming.
- Early example: the IBM System/370.
  - VM/370 is the virtual machine monitor.
  - As each user logs on, a new virtual machine is created.
  - CMS, a single-user, interactive OS, was commonly run as the guest OS.
- Separation of powers:
  - The virtual machine interacts with user applications.
  - The virtual machine monitor manages hardware resources.
- 1980s & 1990s:
  - As hardware got cheaper and operating systems became better equipped to handle multitasking, the original motivation went away.
  - Hardware platforms gradually eliminated hardware support for virtualization.
- Late 1990s to today:
  - Massively parallel processors (MPPs) were developed during the 1990s; they were hard to program and did not support existing operating systems.
  - Researchers at Stanford used virtualization to make MPPs look more like traditional machines.
  - Result: VMware Inc., a supplier of VMMs for commodity hardware.

Rationale for VMMs Today

- Today, security and encapsulation are the most important reasons for using VMMs.
- VMMs give operating system developers another opportunity to develop functionality that is no longer practical in today's complex and ossified operating systems, where innovation moves at a geologic pace.

Hypervisor
In computing, a hypervisor, also called a virtual machine monitor (VMM), allows multiple operating systems to run concurrently on a host computer, a feature called hardware virtualization. The hypervisor presents the guest operating systems with a virtual platform and monitors their execution. In this way, multiple operating systems, including multiple instances of the same operating system, can share hardware resources. Unlike multitasking, which also allows applications to share hardware resources, the virtual machine approach using a hypervisor isolates failures in one operating system from the other operating systems sharing the hardware.
An example of a hypervisor is PR/SM (Processor Resource/System Manager), a type-1 hypervisor that allows multiple logical partitions (LPARs) to share physical resources such as CPUs, direct access storage devices (DASD), and memory. The underlying concepts go back to IBM's System/370 mainframes: IBM introduced full virtualization in 1972 with VM/370 on the large System/370 machines (such as those using the IBM 3168 Central Processing Unit), and PR/SM later carried the same ideas into logical partitioning. PR/SM runs directly at the machine level and allocates system resources across LPARs so that they can share the physical hardware. These concepts have become an important part of what is now called virtualization.

Goldberg classifies two types of hypervisor:

Type 1 (or native, bare-metal) hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. A guest operating system thus runs on another level above the hypervisor. This model represents the classic implementation of virtual machine architectures; the original hypervisor was CP/CMS, developed at IBM in the 1960s, the ancestor of IBM's z/VM.

Type 2 (or hosted) hypervisors run within a conventional operating system environment. With the hypervisor layer as a distinct second software level, guest operating systems run at the third level above the hardware.
Note: Microsoft Hyper-V (released in June 2008) exemplifies a type 1 product that
is often mistaken for a type 2. Both the free stand-alone version and the version that
is part of the commercial Windows Server 2008 product use a virtualized Windows
Server 2008 parent partition to manage the Type 1 Hyper-V hypervisor. In both
cases the Hyper-V hypervisor loads prior to the management operating system, and
any virtual environments created run directly on the hypervisor, not via the
management operating system.
The term hypervisor apparently originated in IBM's CP-370 reimplementation of CP-67 for the System/370, released in 1972 as VM/370. The term hypervisor call, or hypercall, referred to the paravirtualization interface by which a guest operating system could access services directly from the (higher-level) control program, analogous to making a supervisor call to the (same-level) operating system. The term supervisor refers to the operating system kernel, which runs in supervisor state on IBM mainframes.

Categories
System virtual machines:
System virtual machines (sometimes called hardware virtual machines) allow the sharing
of the underlying physical machine resources between different virtual machines, each
running its own operating system. The software layer providing the virtualization is
called a virtual machine monitor or hypervisor. A hypervisor can run on bare hardware
(Type 1 or native VM) or on top of an operating system (Type 2 or hosted VM).
The main advantages of system VMs are:
- multiple OS environments can co-exist on the same computer, in strong isolation from each other;
- the virtual machine can provide an instruction set architecture (ISA) that is somewhat different from that of the real machine;
- application provisioning, maintenance, high availability and disaster recovery become easier.

The main disadvantage of system VMs is:
- a virtual machine is less efficient than a real machine when it accesses the hardware indirectly.

Multiple VMs each running their own operating system (called guest operating system)
are frequently used in server consolidation, where different services that used to run on
individual machines in order to avoid interference are instead run in separate VMs on the
same physical machine. This use is frequently called quality-of-service isolation (QoS
isolation).
The desire to run multiple operating systems was the original motivation for virtual
machines, as it allowed time-sharing a single computer between several single-tasking
OSes. In some respects, a system virtual machine can be considered a generalization of
the concept of virtual memory that historically preceded it. IBM's CP/CMS, the first system to allow full virtualization, implemented time sharing by providing each user with a single-user operating system, CMS. Unlike virtual memory, a system virtual
machine allowed the user to use privileged instructions in their code. This approach had
certain advantages, for instance it allowed users to add input/output devices not allowed
by the standard system.
The guest OSes do not have to be all the same, making it possible to run different OSes
on the same computer (e.g., Microsoft Windows and Linux, or older versions of an OS in
order to support software that has not yet been ported to the latest version). The use of
virtual machines to support different guest OSes is becoming popular in embedded

systems; a typical use is to support a real-time operating system at the same time as a
high-level OS such as Linux or Windows.
Another use is to sandbox an OS that is not trusted, possibly because it is a system under
development. Virtual machines have other advantages for OS development, including
better debugging access and faster reboots.
Lighter-weight operating system-level virtualization techniques, such as Solaris Zones, provide a level of isolation within a single operating system, but the isolation is not as complete as that of a virtual machine.

Process virtual machines:
A process VM, sometimes called an application virtual machine, runs as a normal application inside an OS and supports a single process. It is created when that process is started and destroyed when it exits. Its purpose is to provide a platform-independent programming environment that abstracts away details of the underlying hardware or operating system, and allows a program to execute in the same way on any platform.
A process VM provides a high-level abstraction, that of a high-level programming language (compared to the low-level ISA abstraction of the system VM). Process VMs are implemented using an interpreter; performance comparable to compiled programming languages is achieved by the use of just-in-time (JIT) compilation.
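As a rough illustration of how a process VM executes a program, the sketch below interprets a tiny stack-based "bytecode". The opcodes and class name are invented for this example and do not correspond to any real VM's instruction set; real process VMs use the same dispatch-loop idea and add just-in-time compilation for frequently executed code.

```java
// Toy stack-based interpreter, illustrating the dispatch loop at the core of a
// process VM. The opcodes here are invented for the example.
import java.util.ArrayDeque;
import java.util.Deque;

public class TinyVm {
    static final int PUSH = 0, ADD = 1, MUL = 2, PRINT = 3, HALT = 4;

    static void run(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0;                      // program counter into the bytecode array
        while (true) {
            switch (code[pc++]) {
                case PUSH:  stack.push(code[pc++]); break;
                case ADD:   stack.push(stack.pop() + stack.pop()); break;
                case MUL:   stack.push(stack.pop() * stack.pop()); break;
                case PRINT: System.out.println(stack.peek()); break;
                case HALT:  return;
                default:    throw new IllegalStateException("bad opcode");
            }
        }
    }

    public static void main(String[] args) {
        // Computes (2 + 3) * 4 and prints 20.
        run(new int[] { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT });
    }
}
```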
This type of VM has become popular with the Java programming language, which is
implemented using the Java virtual machine. Other examples include the Parrot virtual
machine, which serves as an abstraction layer for several interpreted languages, and
the .NET Framework, which runs on a VM called the Common Language Runtime.
A special case of process VMs are systems that abstract over the communication
mechanisms of a (potentially heterogeneous) computer cluster. Such a VM does not
consist of a single process, but one process per physical machine in the cluster. They are
designed to ease the task of programming parallel applications by letting the programmer
focus on algorithms rather than the communication mechanisms provided by the
interconnect and the OS. They do not hide the fact that communication takes place, and
as such do not attempt to present the cluster as a single parallel machine.
Unlike other process VMs, these systems do not provide a specific programming
language, but are embedded in an existing language; typically such a system provides
bindings for several languages (e.g., C and FORTRAN). Examples are PVM (Parallel
Virtual Machine) and MPI (Message Passing Interface). They are not strictly virtual
machines, as the applications running on top still have access to all OS services, and are
therefore not confined to the system model provided by the "VM".
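To make the contrast concrete, the hedged sketch below shows the kind of explicit socket plumbing a parallel program would otherwise write by hand; libraries such as PVM and MPI wrap this communication behind send/receive primitives while still leaving the programmer aware that messages are exchanged. The port number and message text are arbitrary choices for the example.

```java
// Bare-bones point-to-point message exchange over TCP, the sort of plumbing
// that PVM/MPI-style libraries hide behind send/receive calls.
// Run with argument "server" on one machine and "client <host>" on another;
// port 5000 is an arbitrary choice for this example.
import java.io.*;
import java.net.*;

public class MiniMessagePassing {
    static final int PORT = 5000;

    public static void main(String[] args) throws IOException {
        if (args.length > 0 && args[0].equals("server")) {
            try (ServerSocket ss = new ServerSocket(PORT);
                 Socket peer = ss.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(peer.getInputStream()))) {
                System.out.println("received: " + in.readLine());
            }
        } else {
            String host = args.length > 1 ? args[1] : "localhost";
            try (Socket peer = new Socket(host, PORT);
                 PrintWriter out = new PrintWriter(peer.getOutputStream(), true)) {
                out.println("partial result from this node");
            }
        }
    }
}
```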

Techniques

Emulation of the underlying raw hardware (native execution):
This approach is described as full virtualization of the hardware, and can be implemented using a Type 1 or Type 2 hypervisor. (A Type 1 hypervisor runs directly on the hardware; a Type 2 hypervisor runs on another operating system, such as Linux.) Each virtual
machine can run any operating system supported by the underlying hardware. Users can
thus run two or more different "guest" operating systems simultaneously, in separate
"private" virtual computers.
The pioneer system using this concept was IBM's CP-40, the first (1967) version of
IBM's CP/CMS (1967-1972) and the precursor to IBM's VM family (1972-present). With
the VM architecture, most users run a relatively simple interactive computing single-user
operating system, CMS, as a "guest" on top of the VM control program (VM-CP). This
approach kept the CMS design simple, as if it were running alone; the control program
quietly provides multitasking and resource management services "behind the scenes". In
addition to CMS, VM users can run any of the other IBM operating systems, such
as MVS or z/OS. z/VM is the current version of VM, and is used to support hundreds or
thousands of virtual machines on a given mainframe. Some installations use Linux for
zSeries to run Web servers, where Linux runs as the operating system within many virtual
machines.
Full virtualization is particularly helpful in operating system development, when
experimental new code can be run at the same time as older, more stable, versions, each
in a separate virtual machine. The process can even be recursive: IBM debugged new
versions of its virtual machine operating system, VM, in a virtual machine running under
an older version of VM, and even used this technique to simulate new hardware.
The standard x86 processor architecture as used in modern PCs does not actually meet
the Popek and Goldberg virtualization requirements. Notably, there is no execution mode
where all sensitive machine instructions always trap, which would allow per-instruction
virtualization.
Despite these limitations, several software packages have managed to
provide virtualization on the x86 architecture, even though dynamic recompilation of
privileged code, as first implemented by VMware, incurs some performance overhead as
compared to a VM running on a natively virtualizable architecture such as the IBM
System/370 or Motorola MC68020. By now, several other software packages such
as Virtual PC, VirtualBox, Parallels Workstation and Virtual Iron manage to implement
virtualization on x86 hardware.
Intel and AMD have introduced features to their x86 processors to enable virtualization
in hardware.

Emulation of a non-native system:
Virtual machines can also perform the role of an emulator, allowing software applications and operating systems written for another computer processor architecture to be run.

Some virtual machines emulate hardware that only exists as a detailed specification. For
example:

- One of the first was the p-code machine specification, which allowed programmers to write Pascal programs that would run on any computer running virtual machine software that correctly implemented the specification.
- The specification of the Java virtual machine.
- The Common Language Infrastructure virtual machine at the heart of the Microsoft .NET initiative.
- Open Firmware allows plug-in hardware to include boot-time diagnostics, configuration code, and device drivers that will run on any kind of CPU.

This technique allows diverse computers to run any software written to that specification;
only the virtual machine software itself must be written separately for each type of
computer on which it runs.

Operating system-level virtualization:
Operating system-level virtualization is a server virtualization technology that virtualizes servers at the operating system (kernel) layer. It can be thought of as partitioning: a single physical server is sliced into multiple small partitions (otherwise called virtual environments (VEs), virtual private servers (VPS), guests, zones, etc.); each such partition looks and feels like a real server from the point of view of its users.
For example, Solaris Zones supports multiple guest environments running under the same OS (such as Solaris 10). All guests have to use the same kernel level and cannot run as different OS versions. Native Solaris Zones also require that the host OS be a version of Solaris; to run another OS inside a zone, Solaris Branded Zones must be used.
Another example is System Workload Partitions (WPARs), introduced in the IBM AIX
6.1 operating system. System WPARs are software partitions running under one instance
of the global AIX OS environment.
The operating system-level architecture has low overhead, which helps to maximize efficient use of server resources. The virtualization introduces only a negligible overhead and allows running hundreds of virtual private servers on a single physical server. In contrast, approaches such as full virtualization and paravirtualization (like Xen or UML) cannot achieve such density, due to the overhead of running multiple kernels. On the other hand, operating system-level virtualization does not allow running different operating systems (i.e. different kernels), although different libraries, distributions, etc. are possible.

Tools
VMware

VMware software provides a completely virtualized set of hardware to the guest operating
system. VMware software virtualizes the hardware for a video adapter, a network adapter, and
hard disk adapters. The host provides pass-through drivers for guest USB, serial, and parallel
devices. In this way, VMware virtual machines become highly portable between computers,
because every host looks nearly identical to the guest. In practice, a system administrator can
pause operations on a virtual machine guest, move or copy that guest to another physical
computer, and there resume execution exactly at the point of suspension. Alternately, for
enterprise servers, a feature called VMotion allows the migration of operational guest virtual
machines between similar but separate hardware hosts sharing the same storage. Each of these
transitions is completely transparent to any users on the virtual machine at the time it is being
migrated.
VMware Workstation, Server, and ESX take a more optimized path to running target operating systems on the host than emulators (such as Bochs), which simulate the function of each CPU instruction on the target machine one by one, or dynamic recompilation, which compiles blocks of machine instructions the first time they execute and then uses the translated code directly when the code runs subsequently. (Microsoft Virtual PC for Mac OS X takes this approach.) VMware software does not emulate an instruction set for hardware not physically present. This significantly boosts performance, but can cause problems when moving virtual machine guests between hardware hosts using different instruction sets (such as those found in 64-bit Intel and AMD CPUs), or between hardware hosts with a differing number of CPUs. Stopping the virtual machine guest before moving it to a different CPU type generally causes no issues.
VMware's products use the CPU to run code directly whenever possible (as, for example, when running user-mode and virtual 8086 mode code on x86). When direct execution cannot operate, such as with kernel-level and real-mode code, VMware products rewrite the code dynamically, a process VMware calls "binary translation" or BT. The translated code is stored in spare memory, typically at the end of the address space, which segmentation mechanisms can protect and make invisible. For these reasons, VMware operates dramatically faster than emulators, running at more than 80% of the speed at which the virtual guest operating system would run directly on the same hardware. In one study, VMware claims a slowdown over native ranging from 0 to 6 percent for the VMware ESX Server.
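The toy sketch below illustrates only the caching idea behind binary translation, not VMware's actual translator: the first time a "guest" block is reached it is translated into directly executable form, and later executions reuse the cached translation. Guest blocks are modelled as strings here purely for illustration; a real binary translator rewrites machine instructions.

```java
// Toy illustration of a translation cache: guest "blocks" are translated to
// host-executable Runnables the first time they run, then reused from the cache.
// The guest code and translation step are invented for the example.
import java.util.HashMap;
import java.util.Map;

public class TranslationCacheDemo {
    private final Map<Integer, Runnable> cache = new HashMap<>();
    private final String[] guestBlocks;          // stand-in for guest code blocks

    TranslationCacheDemo(String[] guestBlocks) {
        this.guestBlocks = guestBlocks;
    }

    void execute(int blockIndex) {
        // Translate on first use, then run the cached translation directly.
        Runnable translated = cache.computeIfAbsent(blockIndex, i -> {
            System.out.println("translating block " + i);
            String body = guestBlocks[i];
            return () -> System.out.println("executing translated: " + body);
        });
        translated.run();
    }

    public static void main(String[] args) {
        TranslationCacheDemo vm = new TranslationCacheDemo(
                new String[] { "block A", "block B" });
        vm.execute(0);   // translated, then run
        vm.execute(1);   // translated, then run
        vm.execute(0);   // cache hit: runs without re-translating
    }
}
```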

VMware's approach avoids some of the difficulties of virtualization on x86-based platforms. Virtual machines may deal with offending instructions by replacing them, or by simply running kernel code in user mode. Replacing instructions runs the risk that the code may fail to find the expected content if it reads itself; one cannot protect code against reading while allowing normal execution, and replacing in place becomes complicated. Running the code unmodified in user mode will also fail, as most instructions which just read the machine state do not cause an exception and will betray the real state of the program, and certain instructions silently change behavior in user mode. One must therefore always rewrite the code, performing a simulation of the current program counter in the original location when necessary and (notably) remapping hardware code breakpoints.
Although VMware virtual machines run in user-mode, VMware Workstation itself
requires the installation of various drivers in the host operating-system, notably to
dynamically switch the GDT and the IDT tables.
The VMware product line can also run different operating systems on a dual-boot system
simultaneously by booting one partition natively while using the other as a guest within
VMware Workstation.

VMware

Configuring VM

Here, as you can see, any property of the VM can be set, such as hard disk capacity, network adapter, USB devices, etc.

Running an openSUSE Linux VM in Windows XP

After configuring the VM, you just need to click "Start this virtual machine" to start it.

VirtualBox

Oracle VM VirtualBox is an x86 virtualization software package, originally created by the German software company Innotek, purchased by Sun Microsystems, and now developed by Oracle Corporation as part of its family of virtualization products. It is installed on an existing host operating system; within this application, additional guest operating systems, each known as a Guest OS, can be loaded and run, each with its own virtual environment.
Supported host operating systems include Linux, Mac OS X, Windows XP, Windows Vista, Windows 7 and Solaris; there is also an experimental port to FreeBSD. Supported guest operating systems include a small number of versions of NetBSD and various versions of DragonFly BSD, FreeBSD, Linux, OpenBSD, OS/2 Warp, Windows, Solaris, Haiku, Syllable, ReactOS and SkyOS.
According to a 2007 survey by DesktopLinux.com, VirtualBox was the third most popular
software package for running Windows programs on Linux desktops.

Feature set

- 64-bit guests (on 64-bit hosts with CPU virtualization extensions, or experimentally on 64-bit-capable 32-bit host operating systems)
- NCQ support for SATA raw disks and partitions
- Snapshots
- Seamless mode
- Shared clipboard
- Shared folders
- Special drivers and utilities to facilitate switching between systems
- Command line interaction (in addition to the GUI)
- Public API (Java, Python, SOAP, XPCOM) to control VM configuration and execution (a hedged Java sketch follows the lists below)
- Remote display (useful for headless host machines)
- Nested paging for AMD-V and Intel Core i7
- Raw hard disk access - allows physical hard disk partitions on the host system to appear in the guest system
- VMware Virtual Machine Disk Format (VMDK) support - allows VirtualBox to exchange disk images with VMware
- Microsoft VHD support
- 3D virtualization (limited support for OpenGL was added in v2.1, more support was added in v2.2, and OpenGL 2.0 and Direct3D support was added in VirtualBox 3.0)
- SMP support (up to 32 virtual CPUs), since version 3.0
- Teleportation (aka live migration), since version 3.1
- 2D video acceleration, since version 3.1

Only available in the full (closed-source) version:

- Remote Desktop Protocol (RDP) control of the VM (though the GPL version can be controlled remotely using, for example, NoMachine)
- USB support, with remote devices over RDP
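The Public API item in the feature list above can be sketched in Java against the bindings shipped with the VirtualBox SDK. The package name (shown here as org.virtualbox_6_1), the VirtualBoxManager helper class and the exact method signatures vary by SDK release, so treat every identifier below as an assumption to be checked against the SDK's own sample code rather than as the definitive API.

```java
// Hedged sketch of driving VirtualBox from Java via the SDK's "glue" bindings.
// Package and class names depend on the installed SDK version (assumed here);
// consult the SDK's bundled examples for the exact API of your release.
import org.virtualbox_6_1.IMachine;
import org.virtualbox_6_1.IVirtualBox;
import org.virtualbox_6_1.VirtualBoxManager;

public class ListAndStartVm {
    public static void main(String[] args) {
        VirtualBoxManager mgr = VirtualBoxManager.createInstance(null);
        try {
            IVirtualBox vbox = mgr.getVBox();
            System.out.println("VirtualBox version: " + vbox.getVersion());
            for (IMachine m : vbox.getMachines()) {
                System.out.println("registered VM: " + m.getName());
            }
            // Launch one VM headlessly, waiting up to 10 seconds for startup.
            if (args.length > 0) {
                mgr.startVm(args[0], "headless", 10_000);
            }
        } finally {
            mgr.cleanup();
        }
    }
}
```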

VirtualBox

Configuring VM

Installation of Windows Vista in VirtualBox

Microsoft Virtual PC

Windows Virtual PC (formerly Microsoft Virtual PC and Connectix Virtual PC) is a virtualization program for Microsoft Windows operating systems. In July 2006 Microsoft
released the Windows-hosted version as a free product. In August 2006 Microsoft
announced the Macintosh-hosted version would not be ported to Intel-based Macintosh
computers, effectively discontinuing the product as PowerPC-based Macintosh computers
are no longer manufactured. The newest release, Windows Virtual PC, is available only
for Windows 7 hosts.
Virtual PC virtualizes a standard PC and its associated hardware. Supported Windows
operating systems can run inside Virtual PC. Other operating systems like Linux may run,
but are not officially supported. The successor to Virtual PC 2007, Windows Virtual PC,
entered public beta testing on April 30, 2009, and was released alongside Windows
7. Unlike its predecessors, this version supports only Windows 7 host operating systems.
It originally required hardware virtualization support but on March 19th 2010 Microsoft
released an update to Microsoft Virtual PC which allows it to run on PCs without
Hardware Virtualisation Technology (VT).
Windows Virtual PC includes the following new features:

- USB support and redirection: connect peripherals such as flash drives and digital cameras, and print from the guest to host OS printers (USB flash drives that are already connected to the host could, however, be exposed as a network drive in older versions of the program)
- Seamless application publishing and launching: run Windows XP Mode applications directly from the Windows 7 desktop
- Support for multithreading: run multiple virtual machines concurrently, each in its own thread, for improved stability and performance
- Smart card redirection: use smart cards connected to the host
- Integration with Windows Explorer: manage all VMs from a single Explorer folder (%USER%\Virtual Machines)

It removes the following features:

- Official guest support for legacy operating systems earlier than Windows XP Professional
- Drag-and-drop file sharing between the guest and the host
- Ability to commit changes automatically to the VHD when saving or discarding changes; changes must now be committed to the VHD manually from the settings dialog
- Parallel ports are no longer supported
- Floppy disks are not supported from the user interface, but are supported using scripts
- Shared folders in DOS virtual machines no longer work; support for this was removed in Microsoft Virtual PC 2007, but would still work if the VPC 2004 VM Additions for DOS were installed

Microsoft Virtual PC

Configuring VM

Running VM

Conclusion

Virtual machines are essentially a software implementation of a physical computer. They are now used for:
- security and isolation;
- the ability to support several operating systems at the same time;
- the ability to experiment with new operating systems, or modifications of existing systems, while maintaining backward compatibility with existing operating systems.

Generally, system VMs are used for servers, while process VMs are used to give applications a platform-independent runtime environment. Full virtualization came first; today, non-native emulation and operating system-level techniques are also widely used. Among the tools discussed here, VMware offers the broadest guest operating system support.

Advantages
In the general case:
Security: Applications running in a virtual machine are more secure than those running directly on hardware. The VMM controls how guest operating systems use hardware resources; what happens in one VM doesn't affect any other VM. By virtualizing all hardware resources, a VMM can prevent one VM from even naming the resources of another VM, let alone modifying them.
Isolation: The software state of a virtual machine isn't dependent on the underlying hardware. Rosenblum and Garfinkel point out that this makes it possible to suspend and resume entire virtual machines, and even move them to other platforms:
- for load balancing
- for system maintenance
- etc.

Servers:

Conventionally, servers run on dedicated machines. This protects against another server or application crashing the OS, but it is wasteful of hardware resources. VMM technology makes it possible to support multiple servers, each running in its own VM, on a single hardware platform.

In the case of users:
- Run multiple operating systems on a single computer, including Windows, Linux and more.
- Let your Mac run Windows by creating a virtual PC environment for all your Windows applications.
- Get enhanced software support.
- Test different OSes without making any changes to the host system; changes are easy to revert.

In the case of an organization:
Get more out of your existing resources: Pool common infrastructure resources and break the legacy "one application to one server" model with server consolidation.
Reduce datacenter costs by reducing your physical infrastructure and
improving your server to admin ratio: Fewer servers and related IT hardware
means reduced real estate and reduced power and cooling requirements. Better
management tools let you improve your server to admin ratio so personnel
requirements are reduced as well.
Increase availability of hardware and applications for improved business
continuity: Securely backup and migrate entire virtual environments with no
interruption in service. Eliminate planned downtime and recover immediately
from unplanned issues.
Gain operational flexibility: Respond to market changes with dynamic resource
management, faster server provisioning and improved desktop and application
deployment.
Improve desktop manageability and security: Deploy, manage and
monitor secure desktop environments that users can access locally or remotely,
with or without a network connection, on almost any standard desktop, laptop or
tablet PC.

Disadvantages
In the general case:
Virtual machines have one main disadvantage: they are not as fast as real hardware.

In the case of users:
- Little to no 3D support.
- Setting up networking can sometimes be a pain (but is solvable).
- Sound support is sometimes a problem.

In the case of server virtualization, or for an organization:
Magnified physical failures
Imagine you have ten important servers running on one physical host and its RAID controller runs amok, wiping out all of your hard disks. Don't say that this is not very likely; we have already had two or three incidents with malfunctioning RAID controllers from well-known brands.
There are several ways to compensate for this downside. One is clustering, which
certainly entails extra effort. Another answer is to back up the virtual machines with a
CDP (Continuous Data Protection) solution. If your physical server goes down, it is
possible to restore all VMs quickly to another host. This solution implies that you have
enough capacity left on another host. Thus, if your virtual infrastructure is well planned,
physical failures may be less problematic. However, this means that you have to invest in
redundant hardware, which more or less eliminates one of the alleged advantages of
server virtualization.
Degraded performance
There is no doubt that virtualization requires extra hardware resources. The problem is
that it is almost impossible to estimate in advance how many extra resources will be
needed. I know that there are capacity planning guides and tools but from my experience
every piece of software behaves differently in a virtualized environment. We have
applications that are quite modest as long as they run on a physical server, but when they were virtualized their resource requirements multiplied.

You can't do much if you have such applications. In our case, we had no choice but to leave them on physical servers. Hence, the only solution to this problem is to thoroughly test
each application with the virtualization solution of your choice.
New skills
It sounds so easy: install a virtualization solution and then just deploy your servers as
you are used to. Not really! Many things are different in a virtual environment. I will give
you just one example. When we installed our first server virtualization solution, I
instructed our administrators to test some of their servers in the virtual environment. After
a week or so, an administrator told me that he could not test his server because there was
no more RAM available on the host. I was quite surprised, as this server had enough capacity for 10 VMs.
When I logged on, only 3 VMs were actually running. What happened? Some of his
fellow administrators had assigned the same amount of RAM to the virtual servers as
their physical servers had required. It took me quite some time to convince them to
change their working habits. When you buy a new physical server, it is common practice
to equip this server with as much memory as your budget allows. This makes sense, as it
takes time to order new memory modules and add them to the server. Even if you do not
require it now, you will most likely require more RAM very soon.
Of course, this situation is different in a virtual environment. I assign blame to myself, as
we should have discussed things in advance. I should have told the administrators that
they first need to figure out how much RAM their servers really need using a
performance monitoring tool. If their server requires more RAM later, it is not a big deal
to assign more. I chose this simple example because it demonstrates that you have to do
some rethinking when you work in a virtual environment. The fact that several
administrators share one physical server causes problems that didn't previously exist. Of
course, it is also necessary to acquire many new technical skills.
Complex root cause analysis
Virtualizing a server certainly implies big changes to the whole system. A new layer of
complexity is added and can cause new problems. However, the main difficulty is that if something doesn't work as it is supposed to, it can require considerable extra effort to find the cause of the problem. I have another example of this downside of server virtualization.

New management tools
Virtualization also has advantages, such as easier migration, cloning and snapshots.
However, you can only take advantage of these new capabilities if you have the proper
tools. Often, the tools that come with a virtualization solution are not enough, only
supporting basic management tasks. This means that you need additional utilities, which
cost both money and time. I am not only talking about such tools as VMware Virtual
Center or Microsoft Virtual Machine Manager (VMM).
Another important field is backup, or more precisely, disaster recovery. Of course, you
can use your current backup software to secure your virtualized servers. However, one of
the advantages of server virtualization is that disaster recovery becomes much easier and
faster provided you have a backup solution that is able to perform live backups of the
virtual machines and not just of the virtualized servers running in these VMs.
The problem is that there are no real standards when it comes to virtual server management, although there are standards for server management in general. For example,
there are many backup tools that allow you to secure your Windows, UNIX and Mac
machines, but it is difficult to find a disaster recovery solution that supports all the
various virtualization solutions out there. All in all, this means that your zoo of
management tools will grow, meaning more work for you.
Virtual machine sprawl
Even though virtual server management can get quite complex, installing a new virtual
machine is a piece of cake. You need a new server? Just clone your master image to a
new VM and you are done within a few seconds. The problem is that the number of
servers might grow faster than the number of admins who are supposed to manage them.
It is good that even virtual servers have physical limits. As soon as you reach the limit of
your virtual capacity, the virtual machine sprawl will naturally stop.
The number of servers in my department has grown significantly since we started
working with server virtualization. As a matter of fact, quite a few of them exist only
because it is so easy to create them in a virtual environment. Thus, you have to be very
careful that you don't waste the resources of virtual server hosts with unneeded virtual
machines.

Applications
Platform independence: Virtual machines are used for platform independence in many programming languages, such as Java and .NET. The virtual machine is machine-dependent, but the bytecode is machine-independent: source code is compiled to bytecode once, and that bytecode is then interpreted (or JIT-compiled) by the virtual machine on whatever machine the code has to be executed on.
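A minimal worked example of this pipeline (file and class names are illustrative): the source file is compiled once to bytecode with javac, and the resulting class file then runs unchanged on any machine that has a JVM.

```java
// Hello.java - compile once with "javac Hello.java"; the resulting Hello.class
// bytecode is machine-independent and runs wherever a JVM is installed.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Same bytecode, running on "
                + System.getProperty("os.name") + " / "
                + System.getProperty("os.arch"));
    }
}
```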

Server consolidation: Virtual machines are used for server consolidation, in which many virtual servers are hosted on a single physical server using virtual machines, for security and isolation, as illustrated below:

[Figure: server consolidation - a streaming media server, voice-over-IP server, application server, web server and database server each run in their own guest domain (Guest Domain 1, 2, 3, 4, ...) on a single virtualized host.]

Operating system debugging: Virtual machines are heavily used in operating system debugging, since a copy of a stable version can be kept running while a newer version is debugged simultaneously, as in the figure below:

[Figure: Guest 1 and Guest 2 run on a VMM, which runs on the host OS, which in turn runs on the hardware.]

Appendix
OS - Operating system
VM - Virtual machine
QoS - Quality of service
VMM - Virtual machine monitor
IP - Internet Protocol
VPS - Virtual private server
WPAR - Workload partition
RAID - Redundant array of independent disks

References
www.wikipedia.org/
vmware.com/
www.microsoft.com/windows/virtual-pc/
www.virtualbox.org
www.wisegeek.com/what-is-a-virtual-machine.htm
www.diit.unict.it
www.slideshare.net
