

Hardware Virtualization Role -- WS2016

First introduced in WS2008.
Enhanced in WS2016 with new features.
Hypervisor controls access to hardware.
Host = the physical server where Hyper-V is installed.
Guest OS = VM -- WS2008 and later; Vista SP2 and later; Linux; FreeBSD.
HPV subdivides the hardware capacity of a single physical computer and allocates that capacity
to multiple VMs.
Each VM runs independently of the host and of other VMs.

The hypervisor is inserted into the boot process of the VM -- it controls access to the physical hardware.
Hardware drivers are installed only in the host OS (parent partition).
The VMs communicate only with virtualized hardware.

New HOST Features in 2016:

Host Resource Protection: prevents a VM from monopolizing all of the resources on a
HPV host. Ensures the host and all the VMs have sufficient resources to function. Not enabled by default.

HPV Manager improvements: can manage previous versions of HPV. Uses HTTP for management
instead of RPC, to simplify connectivity.

Nested Virtualization: enables the HPV role inside VMs. Not for production environments.

Rolling HPV Cluster upgrade: allows upgrading a 2012 R2 cluster to 2016 by adding 2016 nodes to
the existing cluster. VMs can be moved between 2012 R2 nodes and coexisting 2016 nodes.

Shielded VM: the entire VM is encrypted and is accessible only to the administrators of the VM.

Startup order priority: defines a specific startup order for VMs. Reduces contention and
allows you to start the most important VMs first.

Storage QoS: improves storage performance by allowing you to assign storage QoS policies on a
Scale-Out File Server. A virtual hard disk stored on a Scale-Out File Server can be limited to, or
can be guaranteed, an amount of storage throughput.

PowerShell Direct: runs PowerShell cmdlets on a VM from the HPV host. No need to configure any
network connectivity.
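A minimal PowerShell Direct sketch, run on the HPV host. The VM name "VM1" is a placeholder; the credential must be valid inside the guest:

```powershell
# Run a command inside the guest with no network path to it.
# Requires a WS2016+ host, a Windows 10 / WS2016+ guest, and a running VM.
$cred = Get-Credential   # an account that exists inside the guest

Invoke-Command -VMName "VM1" -Credential $cred -ScriptBlock {
    Get-Service | Where-Object Status -eq "Running"
}

# Or open an interactive session:
Enter-PSSession -VMName "VM1" -Credential $cred
```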


New Virtual Machines features in 2016:

Discrete device assignment: allows VMs to directly access PCIe devices connected to the host.

Hot add or remove, for network and memory: network adapters and virtual memory can be
added to or removed from a running VM.
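A sketch of hot add and remove, assuming a running Generation 2 VM named "VM1" and a switch named "External":

```powershell
# Hot-add a network adapter to the running VM (Generation 2 only).
Add-VMNetworkAdapter -VMName "VM1" -SwitchName "External"

# Adjust memory on the running VM.
Set-VMMemory -VMName "VM1" -StartupBytes 8GB

# Hot-remove the adapter again (removes adapters with the default name).
Remove-VMNetworkAdapter -VMName "VM1" -Name "Network Adapter"
```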
Integration services delivered through W. Update: delivering the most recent version of
integration services through W. Update.

Key storage drive: allows Generation 1 VMs to store BitLocker Drive Encryption keys.

Linux Secure Boot: increases security of Linux VMs. Secure Boot verifies digital signatures on files
during the boot process to prevent malware. This feature was already available for Windows VMs.

Memory and processor capacity improvements: a VM now supports 12 TB of RAM and 240 virtual processors.

Production checkpoints: applications are in consistent state when the checkpoint is created.
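A sketch of enabling and taking a production checkpoint; "VM1" and the checkpoint name are placeholders:

```powershell
# Configure the VM to take production checkpoints (application-consistent,
# via backup technology in the guest), falling back to standard if needed.
Set-VM -Name "VM1" -CheckpointType Production

# Take the checkpoint.
Checkpoint-VM -Name "VM1" -SnapshotName "Before-Update"
```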

VM configuration file format: increases efficiency of read and write operations to the VM
configuration file by using a binary format. Prevents manual editing of the VM configuration file.
VM configuration version: controls VM compatibility with 2012 R2. VMs migrated from 2012 R2
(rolling cluster upgrade) are not automatically upgraded from version 5 to version 8, to retain
backward compatibility. Version 8 VMs can run only on 2016 hosts.
W. Server Containers & Docker in HPV:

VM provides -- Hardware virtualization.

Containers provide OS virtualization.

. Isolated namespace (includes computer name, files, network address)
. Controlled access to hardware (doesn't monopolize resources)
Benefits of Containers:
. Faster startup (the OS kernel is already running)
. Higher deployment density (a single OS instance)
Docker is the management software for containers.
HPV Containers -- greater isolation.

WS Containers -- (2016) run multiple apps independently within a single OS instance. The OS
kernel is shared by multiple containers. OS virtualization -- in effect, a virtual operating system.

Docker can retrieve containers from, and store containers in, a repository.
Containers are layered together to provide an entire app.
E.g.: a container for the OS, a container for the web software, a container for the web-based app.
Docker can retrieve all containers required for the app and deploy them.

Storage for Containers :::: if a lower-layer container for an OS is updated, it
invalidates any upper-layer containers that rely on it. Updating
the lower layer forces updating the upper layers.

Hyper-V Containers:
Greater level of isolation for containers. Each container has its own OS kernel.
In development environments, performance is more important than stability, so
WS Containers are used for app development. However, in production,
where stability is critical, use HPV Containers.

Installing HP-V.
Verify the host meets the requirements -- Systeminfo.exe

Nested Virtualization.
MAC address spoofing enabled (configured on the VM that acts as the host).
4GB static memory enabled.
VM configuration version 8.0

MAC address spoofing not enabled: network packets from nested guest virtual machines
will not be recognized as legitimate and will be blocked. Not required for VMs connected to
private networks.
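The nested-virtualization prerequisites above can be applied to a stopped VM as follows; "VM1" is a placeholder:

```powershell
# Expose virtualization extensions to the guest and set 4 GB static memory.
Set-VMProcessor -VMName "VM1" -ExposeVirtualizationExtensions $true
Set-VMMemory    -VMName "VM1" -StartupBytes 4GB -DynamicMemoryEnabled $false

# Enable MAC address spoofing so packets from nested guests aren't blocked.
Set-VMNetworkAdapter -VMName "VM1" -MacAddressSpoofing On
```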

Features not available in nested virtualization guests:

Virtualization based security -- dynamic memory -- device guard -- hot add static memory --
checkpoints -- live migration -- save or restore state

Configuring Storage on Hyper-V host Servers:

Hard disks for VM: .vhd -- .vhdx -- .vhds

Virtual Hard disks: Fixed-size -- Dynamically Expanding -- Pass-through -- Differencing

Virtual Hard disk -- Special file format -- represents a traditional hard disk.

Create Virtual Hard Disk:

. Hyper-V Manager console

. Disk management Console
. DiskPart Command-line tool
. New-VHD cmdlet
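A sketch using the New-VHD cmdlet; the paths and sizes are placeholders:

```powershell
# Dynamically expanding .vhdx (space is allocated as data is written).
New-VHD -Path "D:\VHDs\Data.vhdx" -SizeBytes 100GB -Dynamic

# Fixed-size .vhdx (all space is allocated up front).
New-VHD -Path "D:\VHDs\Fixed.vhdx" -SizeBytes 50GB -Fixed
```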

Virtual Hard Disk Formats:

.VHD: -- WS2008 / WS2008 R2 -- limited to 2 TB -- limited performance.

.VHDX:

. WS2012 and later

. Up to a 64 TB file
. Less chance the disk will become corrupted if the host suffers an unexpected power outage.

. .vhdx format supports better alignment when deployed to large-sector disks.

. .vhdx allows larger block sizes for dynamically expanding and differencing disks -- provides
better performance.
Create .vhdx unless you need backward compatibility with 2008/R2.



.Shared Virtual HD.

. Multiple virtual machines can access it simultaneously. Used for high availability with guest clustering.

. Can convert between hard disk formats. A new virtual disk is created (the contents of the
existing disk are copied into it).
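A sketch of a format conversion with Convert-VHD; the paths are placeholders, and the VM using the disk must be shut down:

```powershell
# Converting creates a new disk file and copies the contents into it.
Convert-VHD -Path "D:\VHDs\Legacy.vhd" -DestinationPath "D:\VHDs\Legacy.vhdx"
```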


. Fixed Size: allocates all of the space immediately. Minimizes fragmentation. Enhances performance.

. Dynamically expanding: allocates space as required. More efficient use of storage.

.vhdx disks can shrink when data is removed. This occurs automatically when the VM is shut down.
A dynamically expanding .vhdx-formatted hard disk is suitable for production (offers almost
the same level of performance as fixed size).
The free disk space shown by a dynamically expanding virtual disk does not equal the physical free space.

. Pass-through: provides direct access to a physical disk or iSCSI logical unit number (LUN).
Sometimes offers better performance.

. Differencing: this type of dynamically expanding virtual hard disk stores data that has
changed compared to a parent disk. Used to reduce data storage requirements.
Multiple layers of linked differencing disks decrease performance.
If you modify a parent disk, the differencing disk is no longer valid.
You can move a parent virtual disk, but you must relink it with the differencing disk.
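A sketch of creating and relinking a differencing disk; the paths are placeholders:

```powershell
# The child disk stores only the changes relative to the parent.
New-VHD -Path "D:\VHDs\Child.vhdx" -ParentPath "D:\VHDs\Parent.vhdx" -Differencing

# If the parent was moved, relink the differencing disk to its new location.
Set-VHD -Path "D:\VHDs\Child.vhdx" -ParentPath "E:\VHDs\Parent.vhdx"
```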

Fibre Channel support in HP-V:

Virtual Fibre Channel adapter: virtual hardware that you can add to a virtual machine.

. Allows a vm to connect to a fibre channel SAN directly.

. Requires the HPV host to have a Fibre Channel HBA.
. Requires the Fibre Channel HBA driver to support virtual FC.
. Virtual Fibre Channel adapters support port virtualization by exposing HBA ports in the guest
OS. This allows the VM to access the SAN by using a standard World Wide Name (WWN) that is
associated with the VM.

. Can deploy up to four virtual Fibre Channel adapters on each VM.

Planning Storage for Hyper-V:

.High Performance Connectivity to storage:

Can locate virtual hard disk files on local or remote storage. For remote storage, ensure adequate
bandwidth and minimal latency between the host and the remote storage.

.Redundant Storage:
The volume on which the virtual hard disk files are stored should be fault tolerant. Replacing failed
disks shouldn't affect the operation of the HPV host or guests.

.High Performance Storage:

The storage on which you keep virtual disks should have excellent I/O performance, e.g., SSD
drives in RAID 1+0.

.Adequate growth Space:

Virtual disks grow automatically --> ensure adequate space into which the files can grow.

Storing Virtual Machines on SMB 3.0 shares:

SMB 3.0 --> WS2012 and later.

HPV can store the following data on an smb 3.0 file share:
.Configuration files.
.Virtual hard disks.
.Checkpoint files.
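A sketch of creating a VM whose files live on an SMB 3.0 share; the share path "\\FS1\VMStore", the VM name, and the sizes are placeholders:

```powershell
# Both the VM configuration and the new virtual disk live on the file share.
New-VM -Name "VM1" `
       -MemoryStartupBytes 2GB `
       -Path "\\FS1\VMStore" `
       -NewVHDPath "\\FS1\VMStore\VM1\VM1.vhdx" `
       -NewVHDSizeBytes 60GB
```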

Scale-Out File Server:

.Provides highly available file shares.
.Supports storage QoS policies.

SMB 3.0 file shares provide an alternative to storing VM files on iSCSI or Fibre Channel SAN
devices. When creating a VM, you can specify a network share for the VM location and for the
virtual hard disk location. Can attach disks stored on SMB 3.0 file shares (.vhd, .vhdx, .vhds).

Use SMB 3.0 file shares rather than creating a SAN. You have to segregate access to the file shares;
client network traffic should not be on the same VLAN.

Scale-Out File Server

--> High availability for file shares storing VM files.
--> Redundant servers for accessing the file share.
--> Faster performance compared with accessing files through a single server.
--> All servers are active at the same time.
--> 2016 uses Storage QoS to manage QoS policies for HPV and Scale-Out File Servers. Allows
deployment of QoS policies for SMB 3.0 storage.
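A sketch of a storage QoS policy; the policy name, IOPS values, and "VM1" are placeholders:

```powershell
# On the Scale-Out File Server cluster: define a policy with IOPS bounds.
$policy = New-StorageQosPolicy -Name "Gold" -MinimumIops 500 -MaximumIops 5000

# On the HPV host: attach the policy to the VM's virtual hard disks.
Get-VMHardDiskDrive -VMName "VM1" |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```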
Types of HP-V Networks:

A virtual switch can:
.Configure VLANs.
.Capture data traveling through the switch.
.Filter data traveling through the switch.

Virtual Switch
--> Managed with Virtual Switch Manager.
--> Controls how network traffic flows between VMs.
--> Controls how network traffic flows between VMs and the rest of the network.

External switch:
--> Maps a network to a specific network adapter or network adapter team in the HPV host.
--> Provides VMs with access to a network to which the host is connected.
--> Supports mapping an external network to a wireless network adapter. (Adapters must be …)

Internal switch:
--> Allows communication between the VMs on the HPV host.
--> Allows communication between the VMs and the host.

Private switch:
--> Allows communication only between VMs on the HPV host.

.An external switch associates the management OS with the network.
.Extend existing VLANs (external network) to VLANs within the HPV host network switch.
.VLANs can be used to partition network traffic.
.VLANs function as separate logical networks.
.Traffic can pass between VLANs only if it passes through a router.

Extensions for each virtual switch type:

Network Driver Interface Specification (NDIS) capture: this extension allows the capture of
data that travels across a virtual switch.

Windows Filtering Platform: this extension allows filtering of data that travels across a virtual
switch.

Best Practices for configuring Hyper-V Virtual Networks:

.Use NIC Teaming on the HPV host to ensure connectivity to VMs if an adapter fails.
Configure multiple teams with network adapters that are connected to different switches.

.Enable bandwidth management --> so that no single VM is able to monopolize the network.

Enable minimum bandwidth allocation to guarantee that each VM has a minimum bandwidth.

.Use Network Adapters -->support VMQ (virtual machine queue)

VMQ uses hardware packet filtering to deliver network traffic directly to a VM. This improves
performance because packets don't need to be copied from the host OS to the VM.
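A sketch of enabling VMQ on a VM's network adapter; "VM1" is a placeholder:

```powershell
# VMQ is controlled per virtual network adapter: weight 0 disables it,
# 1-100 sets the relative priority for hardware queue assignment.
Set-VMNetworkAdapter -VMName "VM1" -VmqWeight 100
```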

.Use network virtualization to isolate VMs without using VLANs.

It is not necessary to configure VLANs on all the switches that are connected to the HPV host.
Use network virtualization to isolate large numbers of VMs when hosting them.


New HP-V networking features in WS2016:


.RDMA for Virtual Switches

.Switch-Embedded teaming:
(Adapters must be identical)
New-VMSwitch -Name "ExternalTeam" -NetAdapterName "NIC1","NIC2"

.NAT Virtual Switch:

New-VMSwitch -Name "NATSwitch" -SwitchType NAT

Helps to ensure that all VMs are able to obtain a minimum level of networking capacity when
the network is congested.
VM Multi Queues:
VMQ: a feature that enhances network performance for VMs. When it is enabled on the NIC, VMQ
passes network packets directly from the external network to the VM. Each VM gets a queue
for delivery of its packets. (WS2008 and later.)
VMMQ: (2016) allocates multiple queues per virtual machine and spreads
traffic across the queues.

Remote Direct Memory Access for Virtual Switches: RDMA -- SMB Direct

.Feature requires hardware support in the NIC.

.A NIC with RDMA functions at full speed with low resource utilization and higher throughput
(high-speed NICs).
.In 2012, RDMA could be used for NICs in a HPV host that accessed virtual hard disks over
SMB. RDMA couldn't be used for adapters attached to a virtual switch ---> VMs couldn't use
RDMA for connectivity with clients.
.In 2016, RDMA can be used for network adapters that are attached to a HPV virtual switch.
(Enhanced network performance for VMs.)

Switch-Embedded Teaming: SET

Compatible with RDMA.
SET works with VMQ --> provides high performance and high availability.
Can combine NICs into a team by creating a virtual switch with up to 8 NICs.
All NICs must be identical.
SET is enabled automatically when multiple NICs are used.