
UNIT III VIRTUALIZATION

Cloud deployment models: public, private, hybrid, community – Categories of cloud computing:
Everything as a service: Infrastructure, platform, software - Pros and Cons of cloud computing –
Implementation levels of virtualization – virtualization structure – virtualization of CPU, Memory
and I/O devices – virtual clusters and Resource Management – Virtualization for data centre
automation.

1. Cloud deployment models: public, private, hybrid, community


1.1. Cloud Computing
Defn-1: Cloud computing is a high-throughput computing (HTC) paradigm in which the
infrastructure provides services through large data centers or server farms.
Defn-2: Cloud computing applies a virtual platform with elastic resources put together
by dynamic, on-demand provisioning of hardware, software, and data sets.
 The concept of cloud computing has evolved from cluster, grid, and utility computing.
 Cluster and grid computing use many computers in parallel to solve problems of any
size. Utility computing provides computing resources as a service on a pay-per-use basis.
 the cloud computing model
 enables users to share access to resources from anywhere, at any time, through
their connected devices.
 avoids large data movement, resulting in much better network bandwidth
utilization.
 enhances resource utilization, increases application flexibility, and reduces the
total cost of using virtualized data-center resources.
 frees IT companies from the low-level tasks of setting up hardware (servers)
and managing system software.
Different types of Clouds/cloud deployment models

Fig-3.1: Public, private, and hybrid clouds


1. Public cloud
 is built over the Internet (i.e., the service provider offers resources, applications,
and storage to customers over the internet) and can be accessed by any user who
has paid for the service.
 are owned by service providers and are accessible through a subscription
 Examples: Amazon Elastic Compute Cloud (EC2), IBM’s Blue Cloud, Sun Cloud,
Google AppEngine(GAE) and Windows Azure Services Platform.
 delivers a selected set of business processes, i.e., the application and
infrastructure services are offered on a price-per-use basis.
 promotes standardization, preserves capital investment, and offers application
flexibility.
 Advantage
 It offers greater scalability.
 Its cost-effectiveness helps you save money.
 It offers reliability, which means no single point of failure will interrupt your
service.
 Services like SaaS, PaaS, and IaaS are easily available on a public cloud platform,
as it can be accessed from anywhere through any Internet-enabled device.
 It is location independent – the services are available wherever the client is
located.
 Disadvantage
 No control over privacy or security.
 Cannot be used for sensitive applications.
 Lacks complete flexibility, as the platform depends on the platform provider.
 No stringent protocols regarding data management.
 Pictorial representation

Fig-3.2: Public Cloud

2. Private Cloud
 is built within the domain of an intranet owned by a single organization.
 Therefore, it is client owned and managed, and its access is limited to the owning
clients and their partners
 give local users a flexible and agile private infrastructure to run service workloads
within their administrative domains.
 is supposed to deliver more efficient and convenient cloud services.
 attempts to achieve customization and offers higher efficiency, resiliency, security,
and privacy.
 Advantage
 Offers greater security and privacy
 Offers more control over system configuration as per the company’s need
 Greater reliability when it comes to performance
 Enhances the quality of service offered by the clients
 Saves money
 Disadvantage
 Expensive when compared to public cloud
 Requires IT Expertise
 Pictorial representation

Fig-3.3: Private Cloud


3. Hybrid Cloud
 is built with both public and private clouds.
 the Research Compute Cloud (RC2) is a private cloud, built by IBM, that
interconnects the computing and IT resources at eight IBM Research Centers
scattered throughout the United States, Europe, and Asia.
 provides access to clients, the partner network, and third parties
 operates in the middle, with many compromises in terms of resource sharing
 Advantage
 It is scalable
 It is cost efficient
 Offers better security
 Offers greater flexibility
 Disadvantage
 Infrastructure Dependency
 Possibility of security breach through public cloud
4. Community Cloud
 Community clouds are a hybrid form of private clouds built and operated
specifically for a targeted group. These communities have similar cloud
requirements, and their ultimate goal is to work together to achieve their business
objectives.
or
is a type of cloud hosting in which the setup is mutually shared between many
organisations that belong to a particular community, e.g., banks and trading firms.
 Pictorial representation

Fig-3.4:Community Cloud

Fig-3.5: Types of Cloud Computing

2. Categories of cloud computing: Everything as a service:


Infrastructure, platform, software
2.1 Cloud Service Models
 The services provided over the cloud to consumers, on a pay-as-you-go or
subscription basis, are categorized into three models. They are
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS).
 These models are offered based on various SLAs between providers and users
 the SLA for cloud computing is addressed in terms of service availability,
performance, and data protection and security
 pictorial representation of three cloud models at different service levels of the cloud

Fig-3.6: IaaS, PaaS, and SaaS cloud service models at different service levels

 SaaS is applied at the application end using special interfaces by users or clients.
 At the PaaS layer, the cloud platform must perform billing services and handle job
queuing, launching, and monitoring services.
 At the bottom layer of the IaaS services, databases, compute instances, the file
system, and storage must be provisioned to satisfy user demands
1. Infrastructure as a Service (IaaS)
 This model offers computing resources such as servers, storage, networks, and the
data center based on the demand of users.
or
IaaS takes the traditional physical computer hardware, such as servers, storage
arrays, and networking, and lets you build virtual infrastructure that mimics these
resources
 The user can deploy and run specific applications on multiple VMs running guest
OSes.
 the service is performed by rented cloud infrastructure
 The user does not manage or control the underlying cloud infrastructure, but can
specify when to request and release the needed resources.
 Examples: Amazon EC2, Windows Azure, Rackspace, Google Compute Engine.
2. Platform as a Service (PaaS)
 provides computing platforms, which typically include an operating system,
programming language execution environment, database, web server, etc.
or
This model enables the user to deploy (i.e., install) user-built applications onto
a virtualized cloud platform.
 PaaS includes hardware, software, middleware, databases, development tools, and
some runtime support such as Web 2.0 and Java.
 The user is freed from managing the cloud infrastructure.
 Examples: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App
Engine, Apache Stratos.
3. Software as a Service (SaaS)/Web-based software/ on-demand
software/hosted software
 refers to browser-initiated application software delivered to thousands of paid cloud
customers
or
software that is owned, delivered, and managed remotely by one or more providers.
or
is a way of delivering applications over the Internet—as a service. Instead of
installing and maintaining software, you simply access it via the Internet, freeing
yourself from complex software and hardware management.
 On the customer side, there is no upfront investment in servers or software licensing.
 On the provider side, costs are rather low, compared with conventional hosting of
user applications.
 E.g. Google Gmail and docs, Microsoft SharePoint, and the CRM software from
Salesforce.com
4. Mashup of Cloud Services
 Cloud mashups have resulted from the need to use multiple clouds simultaneously or
in sequence
 Pictorial representation of all services

Fig-3.7: Cloud Services
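The division of management responsibility across the three service models can be sketched as a small Python table (an illustrative sketch only; the layer names and split points are simplified assumptions, not from the source):

```python
# Illustrative sketch: who manages each layer of the stack under
# IaaS, PaaS, and SaaS. Layer names are simplified assumptions.
LAYERS = ["application", "data", "runtime", "middleware",
          "guest_os", "virtualization", "servers", "storage", "networking"]

# Index of the first layer managed by the provider; everything
# before it is still managed by the user.
MODEL_SPLIT = {"iaas": 5, "paas": 2, "saas": 0}

def managed_by_provider(model):
    """Return the layers the cloud provider manages under a given model."""
    return LAYERS[MODEL_SPLIT[model]:]

def managed_by_user(model):
    """Return the layers the customer still manages."""
    return LAYERS[:MODEL_SPLIT[model]]
```

Under SaaS the provider manages every layer; under IaaS the customer still manages everything above the virtualization layer.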


3. Pros and Cons of cloud computing
3.1 Advantages of Cloud Computing
1. Always Available: Users can access public or private cloud environments any time
of day or night, as long as they have the proper credentials.
2. Backup/Recovery: Files are backed up and ready to recover, even following a
disaster.
3. Convenient: Able to access applications and virtual environments from anywhere, at
any time.
4. Cost Savings: With cloud computing, capital costs can be saved with zero in-house
server storage and application requirements. The lack of on-premises infrastructure
also removes their associated operational costs in the form of power, air conditioning
and administration costs. You pay for what is used.
5. Durable/Dynamic: The cloud environment is easily adaptable and dynamic, making
it a durable choice for the future.
6. Easy to Use: No training sessions are needed to get people ready to use the Cloud.
7. Fast Deployment: Deploying a cloud application, database or infrastructure is much
quicker and easier than installing and developing an in-house infrastructure system.
8. Remote Access: Users can access all sorts of applications from anywhere.
9. Reliability: Cloud computing is much more reliable and consistent than in-house IT
infrastructure. Most providers offer a Service Level Agreement which guarantees
24/7/365 service and 99.99% availability. If a server fails, hosted applications and
services can easily be transitioned to any of the available servers.
3.2 Disadvantages of Cloud Computing
1. Downtime: As cloud service providers take care of a number of clients each day,
they can become overwhelmed and may even come up against technical outages
(i.e., failures). This can lead to business processes being temporarily suspended.
Additionally, if the internet connection is offline, applications, servers, or data
in the cloud cannot be accessed.

2. Security: Although cloud service providers implement the best security standards
and industry certifications, storing data and important files on external service
providers always opens up risks.

3. Vendor Lock-In: Although cloud service providers promise that the cloud will be
flexible to use and integrate, switching cloud services is something that hasn’t yet
completely evolved. Organizations may find it difficult to migrate their services from
one vendor to another. For instance, applications developed on Microsoft
Development Framework (.Net) might not work properly on the Linux platform.
4. Limited Control: Since the cloud infrastructure is entirely owned, managed and
monitored by the service provider, it transfers minimal control over to the customer.
The customer can only control and manage the applications, data and services
operated on top of that, not the backend infrastructure itself.

4. Implementation levels of virtualization


4.1 Virtualization

Defn: Virtualization is a computer architecture technology by which multiple virtual
machines (VMs) are multiplexed in the same hardware machine.
 The purpose of a VM is to enhance resource sharing by many users and to improve
computer performance (resource utilization and application flexibility).

4.2 Levels of Virtualization Implementation


 A traditional computer runs with a host operating system specially tailored for its
hardware architecture

Fig-3.8: Traditional computer

 After virtualization, different user applications managed by their own operating
systems (guest OSes) can run on the same hardware, independent of the host OS.
 This is done by adding a software layer called a virtualization layer / hypervisor /
virtual machine monitor (VMM).

Fig-3.9: After virtualization


 The main function of the software layer for virtualization is to virtualize the
physical hardware of a host machine into virtual resources to be used by the
VMs.

 Virtualization software creates the abstraction of VMs by interposing a
virtualization layer at various levels of a computer system.
 The operational levels at which a virtualization layer can be inserted are
1. Instruction set architecture (ISA) level
2. Hardware abstraction level(HAL)
3. Operating system level/Server Virtualization
4. Library support level/Middleware Support level
5. Application level

Fig-3.10: Various levels of Virtualization

4.2.1 Instruction Set Architecture (ISA) level


 ISA level virtualization is performed by emulating a given ISA by the ISA of the
host machine
 Therefore, it is possible to run a large amount of legacy binary code written for
various processors on any given new hardware host machine.
 There are two approaches
i) Code Interpretation
 It is the basic emulation method.
 An interpreter program interprets the source instructions to target instructions
one by one.
 This process is relatively slow.
ii) Dynamic Binary Translation
 translates basic blocks of dynamic source instructions to target instructions
 Instruction set emulation requires binary translation and optimization.
 A virtual instruction set architecture (V-ISA) requires adding a processor-specific
software translation layer to the compiler.
4.2.2. Hardware Abstraction Level
 generates a virtual hardware environment for a VM
 The idea is to virtualize a computer’s resources, such as its processors,
memory, and I/O devices.
 The intention is to upgrade the hardware utilization rate by multiple users
concurrently.
 The idea was implemented in the IBM VM/370 in the 1960s.
 Recently, the Xen hypervisor has been applied to virtualize x86-based machines to
run Linux or other guest OS applications
4.2.3 Operating System Level/Container Based Virtualization/Server
Virtualization/ Virtualization Support at the OS Level
Defn: refers to an abstraction layer between traditional OS and user applications

4.2.3.1 Why OS-Level Virtualization?


 There are many issues in hardware level virtualization. They are
 it is slow to initialize a hardware-level VM because each VM creates its own image
from scratch
 storing the VM images also becomes an issue
 full virtualization at the hardware level leads to slow performance and low
density.
 Therefore, OS-level virtualization provides a feasible solution to these hardware-level
virtualization issues.
4.2.3.2 OS-Level Virtualization
 inserts a virtualization layer inside an operating system to partition a machine’s
physical resources
 It enables multiple isolated VMs within a single operating system kernel.
 Also called a virtual execution environment (VE), Virtual Private System (VPS),
container, or single-OS-image virtualization.
 VEs look like real servers
 most OS-level virtualization systems are Linux-based

Fig-3.11: OpenVZ virtualization layer inside the host OS, which provides some OS images to create
VMs
4.2.3.3 Advantages of OS-Level Virtualization
i. have minimal startup/shutdown costs, low resource requirements, and high
scalability
ii. it is possible for a VM and its host environment to synchronize state changes when
necessary.

These advantages are achieved because
(1) all OS-level VMs on the same physical machine share a single operating system
kernel, and
(2) the virtualization layer can be designed so that processes in VMs can access as
many resources of the host machine as possible, but never modify them.
4.2.3.4 Disadvantages of OS-Level Virtualization
i. all the VMs at the operating system level on a single container must have the same
kind of guest operating system.
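The shared-kernel idea behind OS-level virtualization can be sketched as follows (a conceptual Python illustration with invented names, not a real container implementation): all containers share the host's single kernel, while their writes land only in per-container private state:

```python
# One host kernel object, shared by every OS-level VM on the machine.
HOST_KERNEL = {"version": "5.15", "modules": ["ext4", "netfilter"]}

class Container:
    """A toy OS-level VM: shares the kernel, isolates writable state."""
    def __init__(self, name):
        self.name = name
        self.kernel = HOST_KERNEL      # shared: single OS kernel
        self.private_fs = {}           # isolated: per-container writes

    def write(self, path, data):
        # Writes go to the container's private view, never the host's
        # (or another container's) state.
        self.private_fs[path] = data

a, b = Container("a"), Container("b")
a.write("/etc/app.conf", "port=80")
```

Because the kernel object is shared rather than duplicated, startup cost and memory footprint stay low, which mirrors the advantages listed above; it also explains the disadvantage that all containers must use the same kind of guest OS.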
4.2.4 Library Level Virtualization/Middleware Support for Virtualization
 also known as user-level Application Binary Interface (ABI) or API emulation
 can create execution environments for running alien programs on a platform rather
than creating a VM to run the entire operating system.
 API call interception and remapping are the key functions performed.
 Different library-level virtualization systems are
a. Windows Application Binary Interface (WABI): middleware to convert
Windows system calls to Solaris system calls
b. Lxrun: a system call emulator that enables Linux applications written for x86
hosts to run on UNIX systems.
c. WINE(Windows Emulator):Offers library support for virtualizing x86
processors to run Windows applications on UNIX hosts
d. Visual MainWin: Offers a compiler support system to develop Windows
applications using Visual Studio to run on some UNIX hosts.
e. vCUDA
 CUDA is a programming model and library for general-purpose GPUs
 vCUDA virtualizes the CUDA library and can be installed on guest OSes.
 CUDA applications are difficult to run on hardware-level VMs directly.
 When a CUDA application runs on a guest OS and issues a call to the CUDA API,
vCUDA intercepts the call and redirects it to the CUDA API running on the host OS.
 vCUDA employs a client-server model to implement CUDA virtualization
 It consists of three user space components
1. the vCUDA library
 resides in the guest OS as a substitute for the standard CUDA library.
 It is responsible for intercepting and redirecting API calls from the client to
the stub.
 vCUDA also creates vGPUs and manages them.
2. a virtual GPU in the guest OS (which acts as a client): functionality of a
vGPU
 It abstracts the GPU structure and gives applications a uniform view of the
underlying hardware
 when a CUDA application in the guest OS allocates a device’s memory, the
vGPU can return a local virtual address to the application and notify the
remote stub to allocate the real device memory
 the vGPU is responsible for storing the CUDA API flow
3. the vCUDA stub in the host OS (which acts as a server)
 receives and interprets remote requests and creates a corresponding
execution context for the API calls from the guest OS, then returns the
results to the guest OS.
 also manages actual physical resource allocation

Fig-3.12: Basic concept of the vCUDA architecture
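The client-server flow of vCUDA can be sketched in Python (a hypothetical illustration; the class and method names are invented, and only a single CUDA call is modeled):

```python
class HostStub:
    """Plays the vCUDA stub in the host OS (the 'server'): receives a
    remote request, runs the real API call, returns the result."""
    def execute(self, api_call, *args):
        real_api = {"cudaMalloc": lambda nbytes: f"dev_ptr({nbytes})"}
        return real_api[api_call](*args)

class GuestVCudaLib:
    """Plays the substitute CUDA library in the guest OS (the 'client'):
    intercepts API calls and redirects them to the host stub."""
    def __init__(self, stub):
        self.stub = stub
        self.call_log = []             # the vGPU stores the CUDA API flow

    def cudaMalloc(self, nbytes):
        self.call_log.append(("cudaMalloc", nbytes))   # intercept
        return self.stub.execute("cudaMalloc", nbytes) # redirect to host
```

The guest application calls what looks like the standard CUDA library, but every call is logged (the vGPU's API flow) and serviced on the host side.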

4.2.5. User-Application Level


 Virtualization at the application level virtualizes an application as a VM.
 On a traditional OS, an application runs as a process; therefore, this is also
called process-level virtualization.
 The most popular approach is to deploy (i.e., install) high-level language (HLL) VMs.
 At this level, the virtualization layer sits as an application program on top of
the operating system, and the layer exports an abstraction of a VM that
can run programs written and compiled for a particular abstract machine
definition, e.g., the Java Virtual Machine (JVM).
 Other forms of application-level virtualization are known as application
isolation, application sandboxing, or application streaming.
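The idea of an HLL VM can be illustrated with a minimal stack-based abstract machine in Python (a toy sketch; the bytecode format is invented and far simpler than real JVM bytecode):

```python
def run(bytecode):
    """Execute a tiny stack-based 'bytecode' program and return the
    value left on top of the stack."""
    stack = []
    for op, *arg in bytecode:
        if op == "PUSH":
            stack.append(arg[0])       # push a constant
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)        # pop two operands, push the sum
    return stack[-1]
```

A program compiled to this abstract machine runs anywhere the `run` loop exists, which is exactly the portability argument behind HLL VMs like the JVM.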
4.2.6 Relative Merits of Different Approaches
 The table below shows the relative merits of virtualization at the various levels
(more "X"s means higher merit, with a maximum of 5 Xs).

Note: "implementation complexity" refers to the cost to implement that particular
virtualization level; "application isolation" refers to the effort required to isolate
resources committed to different VMs.

Fig-3.13: Merits of Different Approaches

4.2.7 VMM Design Requirements and Providers


The design requirements of a VMM are
1. It provides an environment for programs that is essentially identical to the original
machine.
2. Programs running in this environment should show only minor decreases in speed.
3. The VMM should be in complete control of the system resources. This means
a) the VMM is responsible for allocating hardware resources for programs;
b) it is not possible for a program to access any resource not explicitly allocated to it;
c) it is possible, under certain circumstances, for the VMM to regain control of
resources already allocated.
The permitted differences from the original machine are
1. differences caused by the availability of system resources (arises when more than
one VM is running on the same machine)
2. differences caused by timing dependencies

5. Virtualization Structures/Tools and Mechanisms


Depending on the position of the virtualization layer, VM architectures are classified into
1. Hypervisor architecture
2. Para-virtualization
3. Host-based virtualization
5. 1 Hypervisor and Xen Architecture
Hypervisor
 Hypervisor supports hardware-level virtualization.
 The hypervisor (VMM) software sits directly between the physical hardware and the OS.
Based on functionality, hypervisors are classified into
a) a micro-kernel architecture like the Microsoft Hyper-V
 includes only the basic and unchanging functions (such as physical memory
management and processor scheduling).
 The device drivers and other changeable components are outside the
hypervisor
b) a monolithic hypervisor architecture like the VMware ESX.
 Includes all functions and device drivers.
 the size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a
monolithic hypervisor.

XEN Architecture
 is an open source hypervisor program
 is a micro-kernel hypervisor
 does not include any device drivers natively
 Provides a virtual environment located between the hardware and the OS.
 The core components of a Xen system are the hypervisor, kernel, and applications
 In the Xen hypervisor, not all guest OSes are created equal.
 The guest OS, which has control ability, is called Domain 0, and the others are
called Domain U.
Domain 0
 is a privileged guest OS of Xen.
 It is first loaded when Xen boots without any file system drivers being available.
 Domain 0 is designed to access hardware directly and manage devices.
 One of the responsibilities of Domain 0 is to allocate and map hardware resources for
the guest domains (the Domain U domains).
 Xen is based on Linux and its security level is C2. Therefore, security policies are
needed to improve the security of Domain 0.
 Pictorial representation of XEN architecture

Fig-3.14:Xen Architecture
5. 2 Binary Translation with Full Virtualization
Hardware virtualization can be classified into two categories based on implementation
technologies
1. Full virtualization
 does not need to modify the host OS.
 relies on binary translation to trap and to virtualize the execution of certain sensitive,
nonvirtualizable instructions
2. Host-based virtualization.
 both a host OS and a guest OS are used.
 A virtualization software layer is built between the host OS and guest OS.
5.2.1 Full Virtualization
 In full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by
software.
 Both the hypervisor and VMM are considered full virtualization
 Why are only critical instructions trapped into the VMM?
 binary translation can incur a large performance overhead
 Noncritical instructions do not control hardware or threaten the security of the
system, but critical instructions do.
 Running noncritical instructions on hardware not only can promote efficiency, but
also can ensure system security.
Note:
o The traditional x86 processor offers four instruction execution rings: Rings 0, 1, 2,
and 3.
o The lower the ring number, the higher the privilege of instruction being executed.
o The OS is responsible for managing the hardware and the privileged instructions
to execute at Ring 0, while user-level applications run at Ring 3.
5.2.2 Binary Translation of Guest OS Requests Using a VMM
 This approach was implemented by VMware
 Pictorial representation of this implementation

Fig-3.15: Indirect execution of complex instructions via binary translation of guest OS requests
using the VMM plus direct Execution of simple instructions on the same host
 VMware puts the VMM at Ring 0 and the guest OS at Ring 1.
 The VMM scans the instruction stream and identifies the privileged, control- and
behavior-sensitive instructions. Then these instructions are trapped into the VMM,
which emulates the behavior of these instructions.
 The emulation method is called binary translation.
 Full virtualization combines binary translation and direct execution.
 The guest OS is unaware that it is being virtualized.
 The performance of full virtualization on the x86 architecture is typically 80 percent
to 97 percent that of the host machine.
 Drawback: the performance of full virtualization may not be ideal, because
binary translation is time-consuming.
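The combination of direct execution and trap-and-emulate described above can be sketched as follows (a Python toy; the set of critical instructions is an assumption chosen for illustration):

```python
# Assumed set of critical/privileged instructions for this toy example.
CRITICAL = {"HLT", "LGDT", "OUT"}

def execute_stream(stream, trace):
    """Full virtualization's split: noncritical instructions run directly
    on hardware; critical instructions trap into the VMM, which emulates
    them in software."""
    for ins in stream:
        if ins in CRITICAL:
            trace.append(f"vmm-emulated:{ins}")   # trapped into the VMM
        else:
            trace.append(f"direct:{ins}")          # direct execution
    return trace
```

Only the (rare) critical instructions pay the emulation cost, which is why direct execution of the noncritical majority keeps performance in the 80 to 97 percent range the notes cite.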
5.2.3 Host-Based Virtualization
 Guest OS is aware that it is virtualized
 virtualization layer is installed on top of the host OS
 host OS is still responsible for managing the hardware
 The guest OSes are installed and run on top of the virtualization layer. Dedicated
applications may run on the VMs
 Advantages
 the user can install this VM architecture without modifying the host OS.
 the host-based approach appeals to many host machine configurations
 disadvantage
 performance is low (when an application requests hardware access, it involves
four layers of mapping, which downgrades performance significantly)
 When the ISA of a guest OS is different from the ISA of the underlying hardware,
binary translation must be adopted.
5.3 Para-Virtualization with Compiler Support
 Para-virtualization modifies the guest operating system.
 It reduces the virtualization overhead and improves performance by modifying
only the guest OS kernel.

 concept of a para-virtualized VM architecture

Fig-3.16:para-virtualized VM Architecture
 guest operating systems are paravirtualized and are assisted by an intelligent
compiler to replace the nonvirtualizable OS instructions by hypercalls.

Fig-3.17: Para-virtualized guest OS assisted by an intelligent compiler to replace
nonvirtualizable OS instructions by hypercalls

 The guest OS kernel is modified to replace the privileged and sensitive
instructions with hypercalls to the hypervisor or VMM.
 The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0.
 This implies that the guest OS may not be able to execute some privileged and
sensitive instructions.
 The privileged instructions are implemented by hypercalls to the hypervisor.
 After replacing the instructions with hypercalls, the modified guest OS emulates the
behavior of the original guest OS
 E.g., KVM(Kernel-Based VM)
 is a Linux para-virtualization system—a part of the Linux version 2.6.20 kernel
 Memory management and scheduling activities are carried out by the existing
Linux kernel and KVM does the rest
 KVM is a hardware-assisted para-virtualization tool, which improves performance
and supports unmodified guest OSes such as Windows, Linux, Solaris, and other
UNIX variants.
 Pictorial representation of KVM

Fig-3.18: KVM Architecture
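The compile-time rewriting at the heart of para-virtualization can be sketched in Python (the instruction names are invented; a real implementation rewrites the guest kernel's source or binary):

```python
# Assumed set of privileged/nonvirtualizable instructions for this sketch.
PRIVILEGED = {"CLI", "STI", "MOV_CR3"}

def paravirtualize(kernel_code):
    """Replace privileged instructions with hypercalls at 'compile time',
    before the guest kernel ever runs. Ordinary instructions pass
    through unchanged."""
    return [f"HYPERCALL({ins})" if ins in PRIVILEGED else ins
            for ins in kernel_code]
```

Contrast this with full virtualization, where the same instructions would instead be discovered and trapped at runtime; doing the replacement once at compile time is the source of para-virtualization's performance advantage.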


5.3.1 Para-Virtualization Architecture
Advantage
 Performance is high
Disadvantages
 compatibility and portability are concerns
 the cost of maintaining para-virtualized OSes is high, because they may require
deep OS kernel modifications
Table-3.1: Difference between full and para virtualization
S.No. | Full Virtualization | Para Virtualization
1. | Intercepts and emulates privileged and sensitive instructions at runtime | Replaces privileged and sensitive instructions with hypercalls at compile time
2. | The guest OS is not aware that it is virtualized | The guest OS is aware that it is virtualized
3. | Performance is low | Performance is high

6. Virtualization of CPU, Memory and I/O devices


 x86 processors employ a special running mode and instructions, known as
hardware-assisted virtualization.

6.1 H/W Support Virtualization


 Modern OS & processors allow multiple processes to run simultaneously.
 If no protection mechanism is given, then all instructions from different processes will
access h/w directly and cause a system crash
 Therefore, processors provide two modes. They are
o User mode
o Supervisor mode
 Instructions running in supervisor mode are called privileged instructions; others
are non-privileged instructions.
 Many h/w virtualization products are available.
 Example
o VMware Workstation is VM s/w that allows users to set up multiple x86 and x86-64
virtual computers and run one or more VMs simultaneously with the host OS.
o Xen is a hypervisor for use in IA-32, x86-64, Itanium, and PowerPC 970 hosts.
o KVM can support hardware-assisted virtualization using Intel VT-x or AMD-V and
the VirtIO framework.
Fig-3.19: Intel hardware support for virtualization of processor, memory, and I/O
devices

 Intel provides hardware-assist techniques to make virtualization easy and to
improve performance.
 For processor virtualization, Intel offers the VT-x or VT-i technique.
 VT-x adds a privileged mode (VMX Root Mode) and some instructions to
processors. This enhancement traps all sensitive instructions in the VMM
automatically.
 For memory virtualization, Intel offers the EPT (Extended Page Table), which
translates the virtual address to the machine’s physical addresses to improve
performance.
 For I/O virtualization, Intel implements VT-d (Virtualization Technology for
Directed I/O) and VT-c (Virtualization Technology for Connectivity).
6.2 CPU Virtualization
 A VM is a duplicate of an existing computer system in which a majority of the VM
instructions are executed on the host processor.
 Unprivileged instructions of VMs run directly on the host machine.
 Critical instructions must be handled carefully for correctness and stability.
 The critical instructions are divided into three categories:
o Privileged instructions
o Control-sensitive instructions
o Behavior-sensitive instructions
 Privileged instructions execute in a privileged mode and will be trapped if
executed outside this mode.
 Control-sensitive instructions attempt to change the configuration of resources
used.
 Behavior-sensitive instructions have different behaviors depending on the
configuration of resources, including the load and store operations over the
virtual memory.
 A CPU architecture is virtualizable if
o the VM’s privileged and unprivileged instructions run in the CPU’s user mode, and
o the VMM runs in supervisor mode.
 When privileged instructions are executed by a VM, they are trapped by the VMM
(i.e., it acts as a unified mediator).
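The trap mechanism that makes an architecture virtualizable can be sketched as follows (a toy Python illustration; the instruction names and mode strings are assumptions):

```python
def execute(ins, mode, privileged_set, vmm_log):
    """A virtualizable CPU: the VM runs in user mode, so any privileged
    instruction it issues traps to the VMM (the unified mediator), which
    handles it on the VM's behalf; everything else executes directly."""
    if ins in privileged_set and mode == "user":
        vmm_log.append(ins)        # trap: recorded/handled by the VMM
        return "trapped"
    return "executed"              # unprivileged: runs directly
```

If a privileged instruction silently did the wrong thing in user mode instead of trapping, the VMM could never regain control, and the architecture would not be virtualizable in this sense.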
6.3 Hardware-Assisted CPU Virtualization
 An additional privilege level (some people call it Ring -1) is added
to x86 processors.
 Therefore, operating systems can still run at Ring 0 and the hypervisor runs at
Ring -1.
 All the privileged and sensitive instructions are trapped in the hypervisor
automatically. This technique removes the difficulty of implementing binary
translation of full virtualization.
 E.g.: Intel Hardware-Assisted CPU Virtualization

Fig-3.20: Intel hardware-assisted CPU virtualization

 Intel calls this new privilege level of x86 processors VMX Root Mode.
 Transitions from the hypervisor to the guest OS occur through VM entry, and from
the guest OS back to the hypervisor through VM exit.
6.4 Memory Virtualization
 In a traditional execution environment, the OS maps virtual memory to machine
memory using page tables (one-stage mapping) and also makes use of the MMU and
TLB.
 In a virtual execution environment, virtual memory virtualization involves sharing
the physical system RAM and dynamically allocating it to the physical
memory of the VMs.
 A two-stage mapping process is maintained by the guest OS and the VMM:
i. Stage I: Guest virtual memory → Guest physical memory
ii. Stage II: Guest physical memory → Host machine memory

Fig-3.21: Two-stage Mapping process

 Each page table of the guest OS has a corresponding page table in the VMM; the
VMM's page table is called the shadow page table.
 Nested page tables add another layer of indirection to virtual memory.
 VMware uses shadow page tables to perform virtual memory to machine memory
address translation
 The processor uses TLB h/w to map virtual memory directly to machine
memory, avoiding the two levels of translation on every access.
 Example: Intel’s Extended Page Table (EPT) for memory virtualization
 Intel offers a Virtual Processor ID (VPID) to improve use of the TLB.
 The page table in the guest converts a guest virtual address (GVA) to a guest physical address (GPA).
 The GPA is then converted to the host physical address (HPA) using the EPT.

Fig-3.22: Memory Virtualization
6.5 I/O Virtualization
 I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical hardware.
 There are three ways to perform I/O virtualization:
o Full device emulation
o Para-virtualization
o Direct I/O virtualization
6.5.1 Full Device Emulation
 First approach to I/O virtualization.
 Emulates well-known, real-world devices.
 All the functions of a device or bus infrastructure, such as device enumeration, identification, interrupts, and DMA, are replicated in software.
 This software is located in the VMM and acts as a virtual device.
 I/O access requests of the guest OS are trapped in the VMM, which interacts with the real I/O devices.

Fig-3.23: Device emulation for I/O virtualization
6.5.2 Para virtualization
 Used in Xen.
 Consists of a front-end driver and a back-end driver.
 The front-end driver runs in Domain U and the back-end driver runs in Domain 0.
 They interact with each other via a block of shared memory.
 The front-end driver manages the I/O requests of the guest OS; the back-end driver is responsible for managing the real I/O devices and multiplexing the I/O data of different VMs.

Fig-3.24: Para-virtualization
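The front-end/back-end interaction over shared memory can be sketched as a toy Python model (the class names and the queue standing in for the shared-memory ring are illustrative, not Xen APIs):

```python
from collections import deque

# Toy sketch of the split-driver model. The deque stands in for the
# shared-memory ring between Domain U and Domain 0.
shared_ring = deque()

class FrontendDriver:
    """Runs in Domain U: forwards the guest OS's I/O requests to the ring."""
    def submit(self, request):
        shared_ring.append(request)

class BackendDriver:
    """Runs in Domain 0: drains the ring and drives the real device,
    multiplexing requests from (possibly many) guest VMs."""
    def __init__(self, real_device):
        self.real_device = real_device
    def process(self):
        results = []
        while shared_ring:
            req = shared_ring.popleft()
            results.append(self.real_device(req))  # touch real hardware here
        return results

frontend = FrontendDriver()
backend = BackendDriver(real_device=lambda req: f"done:{req}")
frontend.submit("read-block-7")
frontend.submit("write-block-9")
assert backend.process() == ["done:read-block-7", "done:write-block-9"]
```

Only Domain 0 ever touches the real device; guests see a clean virtual device interface, which is what makes the para-virtualized path both safe and fast.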
6.5.3 Direct I/O Virtualization
 The VM accesses the I/O device directly, achieving close-to-native performance.
6.5.4 Self Virtualized I/O(SV-I/O)
 Goal: harness (i.e., fully utilize) the rich resources of a multicore processor.
 All tasks associated with virtualizing an I/O device are encapsulated in SV-IO.
 Provides virtual devices and an associated access API to VMs, and a management API to the VMM.
 SV-IO defines a virtual interface (VIF) for every kind of virtualized I/O device, such as a virtual network interface, a virtual block device (disk), a virtual camera device, etc.
 The guest OS interacts through VIF device drivers.
7. VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT
 A physical cluster is a collection of servers (physical machines) interconnected by a
physical network such as a LAN.
 There are three critical design issues of virtual clusters:
1) Live migration of VMs
2) Memory and file migrations
3) Dynamic deployment of virtual clusters
 When a traditional VM is initialized, the administrator needs to manually write
configuration information or specify the configuration sources.
 When more VMs join a network, an inefficient configuration always causes problems
with overloading or underutilization.
 Note: Elastic computing is a concept in cloud computing in which computing
resources can be scaled up and down easily by the cloud service provider. Elastic
computing is the ability of a cloud service provider to provision flexible computing
power when and wherever required E.g., Amazon’s Elastic Compute Cloud(EC2)
 Virtual clusters are built with VMs installed at distributed servers from one or more
physical clusters.
 The VMs in a virtual cluster are interconnected logically by a virtual network across
several physical networks.
 Each virtual cluster is formed with physical machines or VMs hosted by multiple physical clusters.

Fig-3.25: The concepts of virtual clusters and physical clusters
 The provisioning (i.e., making available) of VMs to a virtual cluster is done dynamically, and has the following interesting properties / virtual cluster characteristics:
 The virtual cluster nodes can be either physical or virtual machines. Multiple VMs
running with different OS can be deployed on the same physical node.
 A VM runs with a guest OS, which is often different from the host OS that manages the resources in the physical machine where the VM is implemented.
 The purpose of using VMs is to consolidate multiple functionalities on the same
server. This will greatly enhance server utilization and application flexibility.
 VMs can be colonized (replicated) in multiple servers for the purpose of promoting
distributed parallelism, fault tolerance, and disaster recovery.
 The size (number of nodes) of a virtual cluster can grow or shrink dynamically.
 The failure of any physical nodes may disable some VMs installed on the failing
nodes. But the failure of VMs will not pull down the host system.
 To manage virtual clusters and to build a high-performance virtualized computing environment properly, the following are required:
1) Virtual cluster deployment (i.e., organizing)
2) Monitoring and management over a large-scale cluster
3) Resource scheduling
4) Load balancing
5) Server consolidation
6) Fault tolerance

Fig-3.26: Virtual cluster based on application partitioning
 As a large number of VM images might be present, the most important thing is to determine how to store those images in the system efficiently.
 Software packages can be preinstalled as templates (called template VMs). With
these templates, users can build their own software stacks.
 New OS instances can be copied from the template VM. User-specific components
such as programming libraries and applications can be installed to those instances.
 The physical machines are also called host systems. In contrast, the VMs are guest
systems. The host and guest systems may run with different operating systems .
 Each VM can be installed on a remote server or replicated on multiple servers
belonging to the same or different physical clusters.
 The boundary of a virtual cluster can change as VM nodes are added, removed, or
migrated dynamically over time.
7.1 Fast Deployment and Effective Scheduling
 The system should have the capability of fast deployment. Here, deployment means
two things:
1) To construct and distribute software stacks (OS, libraries, applications) to a
physical node inside clusters as fast as possible.
2) To quickly switch runtime environments from one user’s virtual cluster to
another user’s virtual cluster.
 If one user finishes using his system, the corresponding virtual cluster should shut
down or suspend quickly to save the resources to run other VMs for other users.
7.2 High-Performance Virtual Storage
 A template is a disk image that includes a preinstalled operating system with or
without certain application software.
 The template VM can be distributed to several physical hosts in the cluster to
customize the VMs.
 It is important to efficiently manage the disk spaces occupied by template software
packages. Some storage architecture design can be applied to reduce duplicated
blocks in a distributed file system of virtual clusters. Hash values are used to
compare the contents of data blocks.
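The hash-based deduplication of template blocks mentioned above can be sketched as follows (a minimal illustration, not any particular distributed file system; block size and image contents are invented):

```python
import hashlib

def dedup_store(images: dict[str, bytes], block_size: int = 4096):
    """Store template images as lists of block hashes; identical blocks
    (e.g., the shared preinstalled OS) are kept only once."""
    blocks = {}    # hash -> block content (the deduplicated store)
    recipes = {}   # image name -> ordered list of block hashes
    for name, data in images.items():
        hashes = []
        for i in range(0, len(data), block_size):
            block = data[i:i + block_size]
            h = hashlib.sha256(block).hexdigest()
            blocks.setdefault(h, block)   # duplicate blocks stored once
            hashes.append(h)
        recipes[name] = hashes
    return blocks, recipes

# Two templates share the same 8 KB base image and differ in one app block.
base = b"K" * 4096 + b"L" * 4096
images = {"tmplA": base + b"A" * 4096, "tmplB": base + b"B" * 4096}
blocks, recipes = dedup_store(images)
assert len(blocks) == 4          # 2 shared base blocks + 2 distinct app blocks
assert recipes["tmplA"][:2] == recipes["tmplB"][:2]
```

Comparing hashes instead of raw block contents is what makes duplicate detection cheap across a large template library.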
 Basically, there are four steps to deploy a group of VMs onto a target cluster: 1)
preparing the disk image, 2) configuring the VMs, 3) choosing the destination nodes,
and 4) executing the VM deployment command on every host.
 Users choose a proper template according to their requirements and make a
duplicate of it as their own disk image.
 Templates could implement the COW (Copy on Write) format. A new COW backup file
is very small and easy to create and transfer. Therefore, it definitely reduces disk
space consumption.
7.3 Live VM Migration Steps
 There are four ways to manage a virtual cluster:
o Guest-based manager: The cluster manager resides on a guest system; in this case, multiple VMs form a virtual cluster.
o Host-based manager: The cluster manager resides on the host systems and supervises the guest systems.
o Independent cluster manager: An independent cluster manager runs on both the host and guest systems, which makes infrastructure management more complex.
o Integrated cluster manager: An integrated manager on the guest and host systems; the manager must be designed to distinguish between virtualized resources and physical resources.
 A VM can be in one of the following four states:
o Inactive state: The VM is not enabled.
o Active state: The VM has been instantiated.
o Paused state: The VM has been instantiated but is disabled from processing a task, or is paused in a waiting state.
o Suspended state: The VM’s virtual resources are stored back to disk.
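The four states and their typical transitions can be sketched as a small state machine (the transition event names are illustrative, not any specific VMM’s API):

```python
# Minimal sketch of the four VM states; event names are made up for the example.
TRANSITIONS = {
    ("inactive",  "create"):   "active",
    ("active",    "pause"):    "paused",
    ("paused",    "unpause"):  "active",
    ("active",    "suspend"):  "suspended",  # virtual resources saved to disk
    ("suspended", "resume"):   "active",
    ("active",    "shutdown"): "inactive",
}

def step(state: str, event: str) -> str:
    """Apply one event; reject transitions the model does not allow."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} from {state}")

s = "inactive"
for event in ("create", "pause", "unpause", "suspend", "resume", "shutdown"):
    s = step(s, event)
assert s == "inactive"
```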
 Live VM migration steps:
1. Pre-migration
2. Reservation
3. Iterative precopy
4. Stop and copy
5. Commitment
6. Activation

Fig-3.27: Live migration process of a VM from one host to another
 Steps 0 and 1: Start migration. This step makes preparations for the migration,
including determining the migrating VM and the destination host
 Step 2: Transfer memory. Since the whole execution state of the VM is stored in
memory, sending the VM’s memory to the destination node ensures continuity of the
service provided by the VM.
o All of the memory data is transferred in the first round, and then the migration
controller recopies the memory data which is changed in the last round.
o These steps keep iterating until the dirty portion of the memory is small enough to
handle the final copy.
o Although precopying memory is performed iteratively, the execution of programs is not noticeably interrupted.
 Step 3: Suspend the VM and copy the last portion of the data.
o The migrating VM’s execution is suspended when the last round’s memory data is
transferred.
o During this step, the VM is stopped and its applications will no longer run.
o This “service unavailable” time is called the “downtime” of migration, which should be as short as possible so that it is negligible to users.
 Steps 4 and 5: Commit and activate the new host. After all the needed data is
copied, on the destination host, the VM reloads the states and recovers the execution
of programs in it, and the service provided by this VM continues.
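The iterative precopy and final stop-and-copy described in the steps above can be sketched as follows (a toy simulation: page counts and the shrinking dirty set are invented for illustration, and memory contents are assumed not to change in value):

```python
import random

def live_migrate(memory: dict, threshold: int = 4, max_rounds: int = 30):
    """Toy precopy loop: copy dirty pages while the VM keeps running,
    iterating until the dirty set is small, then stop the VM and copy
    the remainder (the 'downtime' transfer)."""
    dest = {}
    dirty = set(memory)                        # round 1: all pages are dirty
    rounds = 0
    while len(dirty) > threshold and rounds < max_rounds:
        for page in dirty:                     # copy while the VM runs
            dest[page] = memory[page]
        # model the still-running VM re-dirtying a shrinking working set
        dirty = set(random.sample(sorted(memory), k=len(dirty) // 4))
        rounds += 1
    # stop-and-copy: the VM is suspended for this final, small transfer
    for page in dirty:
        dest[page] = memory[page]
    return dest, rounds

random.seed(0)
memory = {page: f"data{page}" for page in range(64)}
dest, rounds = live_migrate(memory)
assert dest == memory    # destination ends with an identical memory image
assert rounds == 2       # dirty set: 64 -> 16 -> 4 (at/below the threshold)
```

The key property is that only the last (small) dirty set is transferred while the VM is stopped, which is why downtime can be kept negligible.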
7.4 Memory Migration, File System Migration, Network Migration
7.4.1 Memory Migration
 Moving the memory instance of a VM from one physical host to another.
 First technique: The Internet Suspend-Resume (ISR) technique deals with situations where the migration of live machines is not a necessity (cold migration).
 Second technique: Precopy, which follows a tree structure.
7.4.2 File System Migration
1. First technique: Copying the local file system
o Each VM has its own virtual disk with its own local file system.
o The VMM only accesses its local file system.
o The VMM has to store the contents of each VM’s virtual disks in its local files, which have to be moved.
2. Second technique: Global file system
o A global (distributed) file system spans all machines where a VM could be located. This removes the need to copy files from one machine to another.
3. Third technique: Smart copying
o Typically, people move between the same small number of locations, such as home and office. In these conditions, it is possible to transmit only the difference between the two file systems at the suspending and resuming locations.
o This technique significantly reduces the amount of actual physical data that has to be
moved.
7.4.3 Network Migration
 A migrating VM should maintain all open network connections
 Each VM must be assigned a virtual IP address known to other entities.
 Each VM can also have its own distinct virtual MAC address.
 The VMM maintains a mapping of the virtual IP and MAC addresses to their
corresponding VMs.
 If the source and destination machines of a VM migration are connected to a single switched LAN,
o an unsolicited ARP reply from the migrated host advertises that the IP has moved to a new location.
 Alternatively, on a switched network,
o Migrating OS can keep its original Ethernet MAC address and rely on the
network switch to detect its move to a new port.
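The VMM’s mapping of virtual IP/MAC addresses to VMs, together with the ARP announcement after a move, can be sketched as follows (all addresses, host names, and method names are illustrative):

```python
# Toy model of the VMM's virtual-IP/MAC-to-VM mapping; not a real VMM API.
class VMMNetwork:
    def __init__(self):
        self.table = {}   # virtual IP -> (virtual MAC, current host)

    def register(self, ip: str, mac: str, host: str):
        self.table[ip] = (mac, host)

    def migrate(self, ip: str, new_host: str) -> str:
        """Update the mapping and emit an unsolicited (gratuitous) ARP
        reply advertising that the IP is now reachable at the new host."""
        mac, _ = self.table[ip]
        self.table[ip] = (mac, new_host)
        return f"ARP reply: {ip} is-at {mac} (now via {new_host})"

net = VMMNetwork()
net.register("10.0.0.5", "02:00:00:aa:bb:cc", "hostA")
msg = net.migrate("10.0.0.5", "hostB")
assert net.table["10.0.0.5"] == ("02:00:00:aa:bb:cc", "hostB")
assert "hostB" in msg
```

Because the virtual MAC stays constant across the move, open TCP connections survive: peers keep addressing the same virtual MAC/IP, and only the switch/ARP state learns the new location.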
 Example: Live migration of VMs between two Xen-enabled hosts

Fig-3.28: Live migration of VM

8. Virtualization for Data Centre Automation
 Data centre automation means that the huge volumes of hardware, software, and database resources in data centres can be allocated dynamically to millions of Internet users simultaneously, with QoS and cost-effectiveness.
 Virtualization provides:
o High availability
o Backup services
o Workload balancing
8.1 Server Consolidation in Data Centers:
o Heterogeneous workloads are divided into two categories:
 Chatty workloads: busy at some times and silent at others
 Non-interactive workloads: do not require people’s efforts
o Most servers in data centers are underutilized (A large amount of hardware, space,
power, and management cost of these servers is wasted.)
o Server consolidation is an approach to improve the low utility ratio of hardware resources by reducing the number of physical servers.
o Server consolidation techniques are
 Centralized and physical consolidation
 Virtualization-based server consolidation
o Server virtualization has the following side effects:
 Enhance resource utilization & facilitates backup services & disaster recovery.
 The total cost of ownership is reduced.(Purchases of new servers, low
maintenance cost, lower power, cooling & cabling).
 Improves availability and business continuity.
o To automate data center operations, one must consider
 Resource scheduling
 Architectural support
 Power Management
 Automatic or Autonomic resource management etc.
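Virtualization-based server consolidation is essentially a bin-packing problem: place VM workloads onto as few physical servers as possible. A first-fit-decreasing sketch (VM names and loads are invented percentages):

```python
def consolidate(vm_loads: dict[str, int], capacity: int = 100):
    """First-fit-decreasing sketch of server consolidation: pack VM loads
    (percent CPU, illustrative) onto as few physical servers as possible."""
    servers = []   # each server: [remaining_capacity, [vm names]]
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for server in servers:
            if server[0] >= load:             # first server with room
                server[0] -= load
                server[1].append(vm)
                break
        else:                                 # no server fits: add a new one
            servers.append([capacity - load, [vm]])
    return [names for _, names in servers]

# Five underutilized VMs (200% total load) fit on two servers instead of five.
loads = {"vm1": 60, "vm2": 50, "vm3": 40, "vm4": 30, "vm5": 20}
assert len(consolidate(loads)) == 2
```

Real consolidation engines must additionally respect memory, I/O, and availability constraints; this sketch shows only the CPU-packing core of the idea.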
8.2 Virtual Storage Management
 Virtual storage is managed by the VMM and the guest OS.
 Two categories of data:
i) VM images
ii) Application data
 The virtualization layer is present between the hardware and the OS.
 The guest OS performs storage management as if it were operating on a real hard disk.
 Many VMs running on a VMM refer to hard disks on a single physical machine, so storage management by the VMM is complex.

Fig-3.29: Parallax Providing Virtual Disks to Client VMs
 Parallax itself runs as a user-level application in the storage appliance VM. It provides virtual disk
images (VDIs) to VMs.
Table-3.2: VI Managers and Operating Systems for Virtualizing Data Centers
 These VI managers are used to create VMs and aggregate them into virtual clusters
as elastic resources
Example: Eucalyptus for Virtual Networking of Private Cloud
 Eucalyptus is an open source software system for supporting Infrastructure as a Service (IaaS) clouds.
 Supports virtual networking and the management of VMs.
 Virtual storage is not supported.
 Builds private clouds that can interact with end users through Ethernet or the
Internet.
 The system also supports interaction with other private clouds or public clouds over
the Internet
 The system is short on security features.

Fig-3.30: Eucalyptus for building private clouds by establishing virtual networks over the VMs
 Instance Manager: Controls the execution, inspection, and termination of VM instances on the host where it runs.
 Group Manager: Gathers information about and schedules VM execution on specific
instance managers, as well as manages virtual instance network.
 Cloud Manager: Entry-point into the cloud for users and administrators. It queries
node managers for information about resources, makes scheduling decisions, and
implements them by making requests to group managers.
8.3 Trust Management in Virtualized Data Centres
 Management of VMs is done either by the VMM or by a VM that monitors other VMs (e.g., Domain 0).
 The system is in danger when a hacker attacks the VMM or the management VM.
 VM-based intrusion detection (intrusion: unauthorized access):
 An IDS can be implemented as
o a host-based IDS (HIDS): implemented in the monitored system.
o a network-based IDS (NIDS): based on the flow of network traffic, which cannot detect fake actions.
o A virtualization-based intrusion detection system is of two types:
o The IDS is an independent process in each VM, or a high-privileged VM on the VMM.
o The IDS is integrated into the VMM and has the same privilege to access the hardware.
 The VMM monitors and audits access requests for hardware and system software.

Fig-3.31: The architecture of intrusion detection using a dedicated VM

 The VM-based IDS contains a policy engine and a policy module.
 The policy framework can monitor events in different guest VMs through an operating system interface library.
 PTrace indicates tracing according to the security policy of the monitored host.
 Analysis of IDS output is important after an intrusion. E.g., logs are used; the IDS log system is based on the OS kernel.
 Honeypots and honeynets are other intrusion detection systems.