Table of Contents
Lab Overview - HOL-1808-01-HCI - vSAN v6.6 - Getting Started
    Lab Guidance
Module 1 - vSAN 6.6 Setup and Enablement (15 Minutes, Beginner)
    Introduction
    VMware vSAN Overview
    VMware vSAN 6.6 Requirements
    Prepare VMware vSAN Cluster
    Conclusion
Module 2 - vSAN Scale Out with Configuration Assist (30 Minutes, Beginner)
    Introduction
    vSAN Cluster Capacity Scale Out and vSAN Config Assist
    Conclusion
Module 3 - vSAN All Flash Capabilities (30 Minutes, Beginner)
    Introduction
    Storage Policy Based Management - RAID 5/6
    New Sparse VM Swap Object
    Conclusion
Module 4 - vSAN iSCSI Target (30 Minutes, Beginner)
    Introduction
    iSCSI Target Configuration
    Conclusion
Module 5 - vSAN Encryption (30 Minutes, Beginner)
    Introduction
    Configuring the Key Management Server
    Enabling vSAN Encryption
    vSAN Encryption Health Check
    Conclusion
Module 6 - vSAN PowerCLI and ESXCLI (30 Minutes, Beginner)
    Introduction
    PowerCLI Overview
    PowerCLI vSAN Commands
    PowerCLI vSAN Automation
    ESXCLI Enhancements
    Conclusion
Module 7 - vSAN Stretched Cluster (30 Minutes, Beginner)
    Introduction
    vSAN 6.6 - Stretched Cluster Overview
    Creating a New vSAN 6.6 - 2 Node Stretched Cluster
    Monitoring a vSAN 6.6 Stretched Cluster
    vSAN Site Affinity
    Conclusion
Lab Overview - HOL-1808-01-HCI - vSAN v6.6 - Getting Started
Lab Guidance
Note: This lab will take more than 90 minutes to complete. You should
expect to finish only 2-3 of the modules during your time. The modules are
independent of each other, so you can start at the beginning of any module
and proceed from there. You can use the Table of Contents to access any
module of your choosing.
The Table of Contents can be accessed in the upper right-hand corner of the
Lab Manual.
Lab Captains:
This lab manual can be downloaded from the Hands-on Labs Document site found here:
http://docs.hol.vmware.com
This lab may be available in other languages. To set your language preference and have
a localized manual deployed with your lab, you may utilize this document to help guide
you through the process:
http://docs.hol.vmware.com/announcements/nee-default-language.pdf
1. The area in the RED box contains the Main Console. The Lab Manual is on the tab
to the Right of the Main Console.
2. A particular lab may have additional consoles found on separate tabs in the upper
left. You will be directed to open another specific console if needed.
3. Your lab starts with 90 minutes on the timer. The lab cannot be saved. All your
work must be done during the lab session. However, you can click EXTEND to
increase your time. If you are at a VMware event, you can extend your lab time
twice, for up to 30 minutes. Each click gives you an additional 15 minutes.
Outside of VMware events, you can extend your lab time up to 9 hours and 30
minutes. Each click gives you an additional hour.
During this module, you will input text into the Main Console. Besides typing it
in directly, there are two very helpful methods of entering data that make it
easier to enter complex text.
You can also click and drag text and Command Line Interface (CLI) commands directly
from the Lab Manual into the active window in the Main Console.
You can also use the Online International Keyboard found in the Main Console.
1. Click on the Keyboard Icon found on the Windows Quick Launch Task Bar.
In this example, you will use the Online Keyboard to enter the "@" sign used in email
addresses. The "@" sign is Shift-2 on US keyboard layouts.
When you first start your lab, you may notice a watermark on the desktop indicating
that Windows is not activated.
One of the major benefits of virtualization is that virtual machines can be moved and
run on any platform. The Hands-on Labs utilizes this benefit and we are able to run the
labs out of multiple datacenters. However, these datacenters may not have identical
processors, which triggers a Microsoft activation check through the Internet.
Rest assured, VMware and the Hands-on Labs are in full compliance with Microsoft
licensing requirements. The lab that you are using is a self-contained pod and does not
have full access to the Internet, which is required for Windows to verify the activation.
Without full access to the Internet, this automated process fails and you see this
watermark.
Please check that your lab has finished all of the startup routines and is ready for
you to start. If you see anything other than "Ready", please wait a few minutes. If after
5 minutes your lab has not changed to "Ready", please ask for assistance.
Introduction
What is VMware vSAN?
vSAN is a storage solution from VMware, released as a beta version back in 2013, made
generally available to the public in March 2014, and reached version 6.5 in November
2016. vSAN is fully integrated with vSphere. It is an object-based storage system and a
platform for Virtual Machine Storage Policies that aims to simplify Virtual Machine
storage placement decisions for vSphere administrators. It fully supports and is
integrated with core vSphere features such as vSphere High Availability (HA), vSphere
Distributed Resource Scheduler (DRS), and vMotion.
As a component of vSphere, vSAN extends the hypervisor to pool and abstract server
based storage resources, much the way vSphere pools and abstracts compute
resources. It is designed to be much simpler and more cost-effective than traditional
external storage arrays. Users of vSphere should be able to learn vSAN and become
productive quickly.
vSAN is fully integrated with vSphere, and supports almost all popular vSphere
functionality: DRS, HA, vMotion and more. vSAN is also integrated with the vRealize
suite.
Administrators define storage policies, and assign them to VMs. A storage policy will
define availability, performance and provisioning requirements (e.g. thin). When a VM is
provisioned, vSAN will interpret the storage policy, and configure the underlying storage
devices to satisfy the policy automatically (e.g. RAID 1). When the storage policy is
changed, vSAN will automatically reconfigure resources to satisfy the new policy.
Key points:
• Uses internal server components to create a shared storage pool across a single
cluster
Technical characteristics:
• Scales to 62TB VMDKs, 64 nodes, 35 capacity devices per node, 200 VMs per node
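As a rough illustration, the scale limits above can be encoded as a small configuration check. This is a hypothetical sketch, not a VMware tool: the function name and structure are invented, and only the limit values come from the text.

```python
# vSAN 6.6 configuration maximums quoted in the text above.
VSAN_66_MAXIMUMS = {
    "vmdk_size_tb": 62,
    "nodes_per_cluster": 64,
    "capacity_devices_per_node": 35,
    "vms_per_node": 200,
}

def check_design(nodes, capacity_devices_per_node, vms_per_node, largest_vmdk_tb):
    """Return a list of limit violations (an empty list means the design fits)."""
    violations = []
    if nodes > VSAN_66_MAXIMUMS["nodes_per_cluster"]:
        violations.append("too many nodes in the cluster")
    if capacity_devices_per_node > VSAN_66_MAXIMUMS["capacity_devices_per_node"]:
        violations.append("too many capacity devices per node")
    if vms_per_node > VSAN_66_MAXIMUMS["vms_per_node"]:
        violations.append("too many VMs per node")
    if largest_vmdk_tb > VSAN_66_MAXIMUMS["vmdk_size_tb"]:
        violations.append("VMDK exceeds 62 TB")
    return violations
```

For example, a design at exactly the maximums (64 nodes, 35 capacity devices, 200 VMs per node, 62 TB VMDKs) passes with no violations.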
Customer Benefits
Simple
High Performance
vSAN's deep integration with the vSphere kernel and use of flash dramatically improves
application performance as compared to traditional storage solutions. Applications that
require even higher levels of predictable performance can use all-flash configurations.
Lower TCO
vSAN can lower TCO by up to 50% by using a streamlined management model as well as
cost effective server storage components. Expanding either capacity or performance
involves simply adding more resources to the cluster: flash, disks or servers.
vSAN Adoption
vCenter Server
vSAN 6.6 requires ESXi 6.5d and vCenter Server 6.5d. vSAN can be managed by both
the Windows version of vCenter Server and the vCenter Server Appliance (VCSA).
vSAN is configured and monitored via the vSphere Web Client and this also needs to be
version 6.5d.
vSphere ESXi
vSAN requires at least 3 vSphere hosts (where each host has local storage) in order to
form a supported vSAN cluster. This is to allow the cluster to meet the minimum
availability requirements of tolerating at least one host failure. The vSphere hosts must
be running vSphere 6.5. With fewer hosts there is a risk to the availability of virtual
machines if a single host goes down. The maximum number of hosts supported is 64.
Each vSphere host in the cluster that contributes local storage to vSAN must have at
least one hard disk drive (HDD) and at least one solid state disk drive (SSD).
• One SAS or SATA host bus adapter (HBA), or a RAID controller that is in passthrough or
RAID 0 mode.
• Hybrid disk group configuration: At least one flash cache device, and one or more
SAS, NL-SAS or SATA magnetic disks.
• All-flash disk group configuration: One SAS or SATA solid state disk (SSD) or PCIe flash
device used for caching, and one or more flash devices used for capacity.
• In a vSAN hybrid cluster, the SSD provides both a write buffer (30%) and a read
cache (70%). The more SSD capacity in the host, the greater the performance, since
more I/O can be cached.
• In a vSAN all-flash cluster, 100% of the cache is allocated to writes; read performance
from the capacity flash is more than sufficient.
• Not every node in a vSAN cluster needs to have local storage although a balanced
configuration is recommended. Hosts with no local storage can still leverage the
distributed vSAN datastore.
• Each host must have minimum bandwidth dedicated to vSAN: 1 GbE for hybrid
configurations, 10 GbE for all-flash configurations.
• A Distributed Switch can be optionally configured between all hosts in the vSAN
cluster, although VMware Standard Switches (VSS) will also work.
• A vSAN VMkernel port must be configured for each host. With a Distributed Switch,
Network I/O Control can also be enabled to dedicate bandwidth to the vSAN network.
• For vSAN 6.6 to use unicast networking mode, all ESXi hosts must be upgraded to
vSAN 6.6 and the on-disk format must be upgraded to version 5.
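To make the hybrid 70/30 cache split noted in the list above concrete, here is a minimal arithmetic sketch. It is illustrative only; the function name is invented.

```python
def hybrid_cache_split(ssd_gb):
    """In a vSAN hybrid cluster, 70% of the cache SSD serves as read cache
    and 30% as write buffer, per the requirements list above."""
    return ssd_gb * 0.70, ssd_gb * 0.30

# Hypothetical 400 GB cache SSD: roughly 280 GB read cache, 120 GB write buffer.
read_cache_gb, write_buffer_gb = hybrid_cache_split(400)
```

The same device in an all-flash cluster would instead dedicate 100% of the cache to writes.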
The VMkernel port is labeled vSAN. This port is used for intra-cluster node
communication, and for reads and writes when one of the vSphere hosts in the cluster
owns a particular virtual machine but the actual data blocks making up the virtual
machine files are located on a different vSphere host in the cluster. In this case, I/O will
need to traverse the network configured between the hosts in the cluster.
• A Virtual SAN cluster must include a minimum of three ESXi hosts. For a Virtual
SAN cluster to tolerate host and device failures, at least three hosts that join the
Virtual SAN cluster must contribute capacity to the cluster. For best results,
consider adding four or more hosts contributing capacity to the cluster.
• Only ESXi 5.5 Update 1 or later hosts can join the Virtual SAN cluster.
• All hosts in the Virtual SAN cluster must have the same on-disk format.
• Before you move a host from a Virtual SAN cluster to another cluster, make sure
that the destination cluster is Virtual SAN enabled.
• To be able to access the Virtual SAN datastore, an ESXi host must be a member of
the Virtual SAN cluster.
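The three-host minimum in the list above follows from vSAN's RAID-1 placement rule, stated later in this lab: tolerating n failures requires n+1 copies of each object and 2*n+1 hosts contributing storage. A minimal sketch, with invented helper names:

```python
def ftt_requirements(failures_to_tolerate):
    """Copies and hosts needed for RAID-1 mirroring with FTT = n."""
    n = failures_to_tolerate
    copies = n + 1        # data replicas of each virtual machine object
    hosts = 2 * n + 1     # hosts contributing storage (the odd host carries witness data)
    return copies, hosts

print(ftt_requirements(1))  # (2, 3): the default, and why 3 hosts is the minimum
```

FTT=2 gives (3, 5), which is why clusters designed for higher tolerance need more hosts contributing capacity.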
1. Click on the Chrome Icon on the Windows Quick Launch Task Bar.
1. On the vSphere Web Client login screen, select "Use Windows session
authentication"
2. Click Login
You will be presented with the vSphere Web Client Home page.
To minimize or maximize the Recent Tasks, Alarms or Work In Progress panes, click
the pin.
If the Home page is not the initial screen that appears, select Home from the top menu
in the vSphere Web Client.
Enable vSAN
In your lab environment, vSAN is currently disabled. In this lesson we will show you how
to enable or turn on vSAN in a few easy steps.
A quick note about the lab environment: the cluster called RegionA01-COMP01
currently contains 3 ESXi hosts that will contribute storage in the form of cache and
capacity to form the vSAN datastore.
1. Select RegionA01-COMP01
2. Select Configure
3. Select vSAN > General
4. Select Configure
Configure vSAN
If you select Allow Reduced Redundancy, vSAN will be able to reduce the protection
level of your VMs, if needed, while Deduplication and Compression are being enabled.
This option only comes into play if your setup is at the limit of the protection level
configured by the Storage Policy of a specific VM.
3. In the Fault Domains and Stretched Cluster section, verify Do not configure
is selected
Click Next
Network Validation
Checks are run to verify that VMkernel adapters are configured and that the
vSAN network service is enabled.
Click Next
Claim Disks
Select which disks should be claimed for Cache and which for Capacity in the vSAN
cluster.
The disks are grouped by model and size or by host. The recommended selection has
been made based on the available devices in your environment.
You can expand the lists of the disks for individual disk selection.
The number of capacity disks must be greater than or equal to the number of cache
disks claimed per host.
Do Not click Next just yet. Move to the next step in the Lab Manual.
Claim Disks
This is a view of the storage from the Host perspective. In this exercise, we will create
One Disk Group on each ESXi Host. The Disk Group will contain one 5 GB Cache disk
and 2 x 10 GB Capacity disks.
2. Select the ESXi Host called esx-01a.corp.local and expand the host to see the
available disks.
3. Select Do not claim for the following disks (in the Claim For column, select the
drop-down for the following disks and choose Do not claim):
mpx.vmhba1:C0:T1:L0
mpx.vmhba3:C0:T1:L0
mpx.vmhba4:C0:T1:L0
Repeat Step 3 for the additional ESXi hosts called esx-02a.corp.local and
esx-03a.corp.local
Click Next
Ready to Complete
1. Here we can see that we will create a vSAN datastore with a capacity of 60
GB.
The vSAN datastore uses the Capacity disks for the vSAN datastore capacity. The
Caching disks are not taken into account.
2. This is an All-Flash vSAN cluster, where both the Cache and Capacity disks are
SSD/flash devices.
Click Finish
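The 60 GB figure follows directly from the lab's disk layout described earlier: only the capacity-tier disks count toward the datastore, while the 5 GB cache disks contribute nothing. As a quick sanity check:

```python
# Disk layout used in this lab: 3 hosts, each with one disk group of
# one 5 GB cache disk and two 10 GB capacity disks.
hosts = 3
capacity_disks_per_host = 2
capacity_disk_gb = 10
cache_disk_gb = 5  # cache devices are excluded from the capacity calculation

raw_capacity_gb = hosts * capacity_disks_per_host * capacity_disk_gb
print(raw_capacity_gb)  # 60
```

Note this is raw capacity before vSAN overhead, which the summary later in this module discusses.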
Refresh Display
Click the Refresh icon to see the changes. (If you see Misconfiguration detected,
you may need to Refresh a couple of times.)
After the refresh you should see all 3 hosts in the vSAN cluster
Recent Tasks
You can review the tasks that were carried out by opening the Recent Tasks pane in
the vSphere Web Client.
These tasks consist of creating the vSAN cluster, creating the Disk Groups,
and adding the disks to the Disk Groups.
1. Select RegionA01-COMP01
2. Select Configure
3. Select vSAN -> Disk Management
The vSAN Disk Groups on each of the ESXi hosts are listed.
You may have to scroll down through the list to see all the disk groups.
Towards the lower part of the screen, you can see the Drive Types and the Disk Tier
that make up these disk groups.
To summarize:
Once you have formed the vSAN Cluster, a vsanDatastore has also been created.
The capacity shown is an aggregate of the capacity devices taken from each of the
ESXi hosts in the cluster (less some vSAN overhead - in vSAN 6.5 overhead is 1% of
physical disk capacity + deduplication metadata which is highly variable and will
depend on the data set stored in the vSAN datastore).
The flash devices used as cache are not considered when the capacity calculation is
made.
A Storage Provider is created so that each ESXi host is aware of the capabilities of
vSAN and so that vCenter can communicate with the storage layer. Each ESXi host has
a storage provider once the vSAN cluster is formed.
The storage providers are registered automatically with the SMS (Storage Management
Service) by vCenter. However, it is best to verify that the storage provider on one of
the ESXi hosts has successfully registered and is active, and that the storage
providers from the remaining ESXi hosts in the cluster are registered and in standby
mode.
Should the active provider fail for some reason, one of the standby storage providers
will take over.
Conclusion
In this module, we showed you how to enable vSAN in just a few clicks. As part of the
vSAN enablement, we verified that vSAN Network was configured correctly, we
demonstrated how to select the disks for the vSAN Disk Groups and we also enabled
Deduplication and Compression on the vSAN Datastore.
• Module 2 - vSAN Scale Out with Configuration Assist (30 Minutes, Beginner)
Module 2 explores how to scale out your vSAN environment with Configuration
Assist.
Module 3 explores all flash array capabilities along with storage policy based
management integration with Virtual SAN
Module 4 demonstrates the iSCSI feature in the v6.6 release and the use cases. This
feature provides block storage for physical and virtual workloads using the iSCSI
protocol.
Module 5 explores how to add a Key Management Server and how to enable vSAN
Encryption.
Module 6 unveils the PowerCLI 6.5 Release 1 and esxcli enhancements that have been
introduced to help automate, manage and monitor VMware Virtual SAN Environments.
Learn how to create a 2 Node Direct Connect vSAN Stretched Cluster with Witness
Traffic Separation.
If you would like to end your lab click on the END button.
Introduction
The vSAN Setup Wizard takes care of specific tasks when setting up a vSAN cluster:
configuration settings like Deduplication & Compression (as well as Encryption in vSAN
6.6), whether or not the cluster is 2 Node/Stretched, and claiming disks.
What about some of the normal vSAN recommendations?
Some of the normal vSAN recommendations/checks that are not configured as part of
the vSAN cluster wizard include:
• vSphere Availability
• vSphere Distributed Resource Scheduler (DRS)
• vSphere Distributed Switch for vSAN Traffic
• vMotion configuration
• Ensuring all available disks are claimed
• Appropriate host controller tools are present
• Appropriate host controller firmware
To configure each of these, tasks must be performed in different parts of the vSphere
Web Client. Configuration Assist allows these to be done from a single location in the
UI. Previously, configuring VMkernel interfaces for vSAN or vMotion traffic required
creating them individually on each host or through the vSphere Distributed Switch
wizard. They are now part of Configuration Assist.
Lab Preparation
If you have completed Module 1 by completing the steps as outlined, then you can skip
the next few steps to prepare your environment for this lesson.
If you have skipped to this module, we will use our Module Switcher PowerCLI
Application to prepare the environment.
Module Switcher
Module 2 Start
Module 2 Progress
Please note that you cannot 'go back' and take modules prior to the one you are
currently in unless you end the lab and start it over again (for example: if you
start Module 4, you cannot use the Module Switcher to start Modules 1, 2 or 3).
Virtual SAN Configuration Assist enables you to verify the configuration of cluster
components, resolve issues, and troubleshoot problems. The configuration checks
cover hardware compatibility, network, and Virtual SAN configuration options.
Adding hosts to the vSAN cluster is quite straightforward. Of course, you must ensure
that the host meets vSAN requirements or recommendations, such as a 1 Gb
dedicated network interface card (NIC) port (10 GbE recommended) and at least one
cache tier device and one or more capacity tier devices if the host is to provide
additional storage capacity. Also, pre-configuration steps such as creating a VMkernel
port for vSAN communication should be considered, although these can be done after
the host is added to the cluster.
Switch back to the "Hosts & Clusters" view in the vSphere Web Client Navigator pane
If ESXi does not automatically recognize its devices as flash/SSD, you can mark them as
flash/SSD devices. ESXi does not recognize certain devices as flash when their vendors
do not support automatic flash disk detection. The Drive Type column for the devices
shows HDD as their type.
Note: Marking HDD disks as flash disks could degrade the performance of
datastores and services that use them. Mark disks as flash disks only if you
are certain that those disks are flash disks.
1. Select esx-04a.corp.local
2. Select Configure
3. Select Storage -> Storage Devices
In the Storage Devices list, you can see the disks that will contribute storage to the
vSAN datastore.
Although this is an All-Flash vSAN cluster, we still need one SSD for Cache and at
least one SSD for Capacity.
If Drag and Drop does not seem to be working for you, right-click the ESXi host
called esx-04a.corp.local and select Move To.... Select the cluster called
RegionA01-COMP01.
If there are Virtual Machines running on the ESXi host, you may see the following
message. If this screen does not appear, move to the next step.
1. Select the default option"Put all of this host's virtual machines in the
cluster's root resource pool. Resource pools currently present on the
hosts will be deleted."
Click OK
You may see warning messages against the ESXi hosts already in the cluster; these
messages will self-heal after a while.
If the Exit Maintenance Mode option is not instantly available, you may have to wait a
little while or refresh the vSphere Web Client.
We can see that the Networking Configuration check has Failed, specifically for the
ESXi host esx-04a.corp.local.
This is the ESXi host that we have just added to the vSAN cluster.
At the bottom of the screen, you will see the affected host (esx-04a.corp.local). We
will now configure the vSAN vmknic from the vSAN Configuration Assistant.
Here is the list of the ESXi hosts in our vSAN cluster.
Click Next
If the Next button is not available on the bottom of the screen, double click the Grey
bar of the dialog box and it should appear.
Click Next
Click Next
Click Finish
After a few moments, once the vSAN Network has been configured on the ESXi host
esx-04a.corp.local, the configuration alert will turn green.
You can also manually run the test by clicking on the Retest button.
Now that we have configured the network, we need to turn our attention to the
Storage.
Claiming disks was accomplished via the vSAN cluster wizard, but only upon initial
setup. Adding additional disks required manual intervention.
Configuration Assist will show all disks that have not been claimed, even after hosts
have been added to an existing cluster.
The disks listed here are additional disks that are available to the ESXi hosts if we
wanted to scale up the ESXi hosts; in other words, add additional disks to an existing
Disk Group, or add additional Disk Groups to an ESXi host.
3. Select Do not claim for the following disks (in the Claim For column, select the
drop-down for the following disks and choose Do not claim):
mpx.vmhba1:C0:T1:L0
mpx.vmhba3:C0:T1:L0
mpx.vmhba4:C0:T1:L0
Click OK
You can monitor the Disk Group creation task in the Recent Tasks
1. Even though we have created a Disk Group on the ESXi host called
esx-04a.corp.local, the Health test for All disks claimed will remain a
Warning.
2. This is because there are additional disks on the ESXi hosts that have not
been claimed for vSAN use.
1. Select RegionA01-COMP01
2. Select Configure
3. Select vSAN -> Disk Management
The vSAN Disk Groups on each of the ESXi hosts are listed.
4. Verify that the Disk Group has been created on the ESXi host called
esx-04a.corp.local
Configuration Assist
Configuring settings like vSphere HA/DRS are also accomplished from the
Configuration Assist UI.
Configuration Assist even allows updating tools and firmware for storage controllers
from some OEM vendors.
Before Configuration Assist, customers would often have to update firmware out of
band, often through remote consoles, or through custom processes.
Conclusion
Configuration Assist is a great new feature of vSAN 6.6 that provides a central location
for initial and ongoing vSAN cluster configuration tasks.
The ability to make changes both to configuration settings and to controller
firmware provides a more uniform and consistent management experience.
Module 3 explores all flash array capabilities along with storage policy based
management integration with Virtual SAN
Module 4 demonstrates the iSCSI feature in the v6.6 release and the use cases. This
feature provides block storage for physical and virtual workloads using the iSCSI
protocol.
Module 5 explores how to add a Key Management Server and how to enable vSAN
Encryption.
Module 6 unveils the PowerCLI 6.5 Release 1 and esxcli enhancements that have been
introduced to help automate, manage and monitor VMware Virtual SAN Environments.
Learn how to create a 2 Node Direct Connect vSAN Stretched Cluster with Witness
Traffic Separation.
If you would like to end your lab click on the END button.
Introduction
In this module we will take a look at some of the VMware vSAN all-flash features enabled
through storage policy based management. This module will more specifically
concentrate on the failure tolerance method, which specifies whether the data
replication method optimizes for Performance or Capacity. The Number of failures to
tolerate plays an important role when we plan and size storage capacity for vSAN.
RAID 5 or RAID 6 erasure coding enables vSAN to tolerate the failure of up to two
capacity devices in the datastore. You can configure RAID 5 on all-flash clusters with
four or more fault domains. You can configure RAID 6 on all-flash clusters with
six or more fault domains. RAID 5 or RAID 6 erasure coding requires less additional
capacity to protect your data than RAID 1 mirroring. For example, a VM protected by a
Number of failures to tolerate value of 1 with RAID 1 requires twice the virtual disk size,
but with RAID 5 it requires 1.33 times the virtual disk size.
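The capacity overheads quoted above can be captured in a small helper. This is an illustrative sketch: the 2x (RAID 1, FTT=1) and 1.33x (RAID 5, 3 data + 1 parity) figures come from the text; the 1.5x RAID 6 (4+2) and 3x RAID 1 FTT=2 figures are standard vSAN values added for comparison, along with the host minimums discussed later in this module.

```python
# Raw datastore capacity required per GB of virtual disk, by protection scheme.
CAPACITY_MULTIPLIER = {
    "raid1-ftt1": 2.0,   # FTT=1 mirroring: two full copies (needs 3+ hosts)
    "raid1-ftt2": 3.0,   # FTT=2 mirroring: three full copies
    "raid5": 4 / 3,      # 3 data + 1 parity erasure coding (~1.33x, needs 4+ hosts)
    "raid6": 1.5,        # 4 data + 2 parity erasure coding (needs 6+ hosts)
}

def raw_capacity_needed(vmdk_gb, scheme):
    """Raw vSAN capacity consumed by a VMDK under the given scheme."""
    return vmdk_gb * CAPACITY_MULTIPLIER[scheme]

print(raw_capacity_needed(100, "raid1-ftt1"))        # 200.0
print(round(raw_capacity_needed(100, "raid5"), 1))   # 133.3
```

So for the same single-failure tolerance, RAID 5 saves roughly a third of the raw capacity that RAID 1 would consume, at the cost of the extra hosts and some performance.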
Lab Preparation
If you have completed the previous modules by completing the steps as outlined, then
you can skip the next few steps to prepare your environment for this lesson.
If you have skipped to this module, we will use our Module Switcher PowerCLI
Application to prepare the environment.
Module Switcher
Module 3 Start
This startup routine can take a few minutes to complete - thank you for your patience!
Monitor Progress
Please note that you cannot 'go back' and take modules prior to the one you are
currently in unless you end the lab and start it over again (for example: if you start
Module 4, you cannot use the Module Switcher to start Modules 1, 2 or 3).
The failure tolerance method is used in conjunction with number of failures to tolerate.
The purpose of this setting is to allow administrators to choose between performance
and capacity. If performance is the absolute end goal for administrators, then RAID-1
(which is still the default) is the tolerance method that should be used.
If administrators do not need maximum performance and are more concerned with
capacity usage, then RAID-5/6 is the tolerance method that should be used.
The easiest way to explain the behavior is to display the various policy settings and the
resulting object configuration.
Number of disk stripes per object - The number of capacity devices across which
each replica of a virtual machine object is striped. A value higher than 1 might result in
better performance, but also results in higher use of system resources.
Flash read cache reservation - Flash capacity reserved as read cache for the virtual
machine object. Specified as a percentage of the logical size of the virtual machine disk
(vmdk) object. Reserved flash capacity cannot be used by other objects. Unreserved
flash is shared fairly among all objects. This option should be used only to address
specific performance issues.
Primary level of failures to tolerate - For non-stretched clusters, defines the number of
disk, host, or fault domain failures a storage object can tolerate. For n failures tolerated,
n+1 copies of the virtual machine object are created, and 2*n+1 hosts contributing
storage are required.
Force provisioning - If the option is set to Yes, the object will be provisioned even if
the policy specified in the storage policy is not satisfiable by the datastore. Use this
parameter in bootstrapping scenarios and during an outage when standard provisioning
is no longer possible.
Object space reservation - Percentage of the logical size of the virtual machine disk
(vmdk) object that should be reserved, or thick provisioned when deploying virtual
machines.
Disable object checksum - If the option is set to No, the object calculates checksum
information to ensure the integrity of its data. If this option is set to Yes, the object will
not calculate checksum information. Checksums ensure the integrity of data by
confirming that each copy of a file is exactly the same as the source file. If a checksum
mismatch is detected, Virtual SAN automatically repairs the data by overwriting the
incorrect data with the correct data.
Failure tolerance method - Specifies whether the data replication method optimizes
for Performance or Capacity. If you choose Performance, Virtual SAN uses more disk
space to place the components of objects but provides better performance for accessing
the objects. If you select Capacity, Virtual SAN uses less disk space, but reduces the
performance.
IOPS limit for object - Defines the IOPS limit for a disk. IOPS is calculated as the
number of IO operations, using a weighted size. If the system uses the default base size
of 32KB, then a 64KB IO represents two IO operations. When calculating IOPS, read and
write are considered equivalent, while cache hit ratio and sequentiality are not
considered. If a disk’s IOPS exceeds the limit, IO operations will be throttled. If the IOPS
limit for object is set to 0, IOPS limits are not enforced.
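The arithmetic behind two of these settings can be sketched in a few lines. This is an illustrative helper of our own (the function names are not a vSAN API), assuming the default 32 KB base size:

```python
import math

def mirror_requirements(failures_to_tolerate):
    """RAID-1 mirroring: tolerating n failures means n+1 copies of the
    object and 2*n+1 hosts contributing storage (the extra hosts hold
    witness components)."""
    n = failures_to_tolerate
    return {"copies": n + 1, "min_hosts": 2 * n + 1}

def weighted_io_ops(io_size_kb, base_size_kb=32):
    """IOPS limits use a weighted size: with the default 32 KB base size,
    a 64 KB I/O counts as two operations. Reads and writes weigh the same."""
    return math.ceil(io_size_kb / base_size_kb)

print(mirror_requirements(1))  # {'copies': 2, 'min_hosts': 3}
print(weighted_io_ops(64))     # 2
```

So a policy with Primary level failures to tolerate = 2 needs 3 copies on 5 hosts, and a stream of 64 KB I/Os consumes its IOPS limit twice as fast as 32 KB I/Os would.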
Note that there is a requirement on the number of hosts needed to implement RAID-5
or RAID-6 configurations on vSAN.
For RAID-5, a minimum of 4 hosts is required; for RAID-6, a minimum of 6 hosts
is required.
The objects are then deployed across the storage on each of the hosts, along with a
parity calculation. The configuration uses distributed parity, so there is no dedicated
parity disk. When a failure occurs in the cluster, and it impacts the objects that were
deployed using RAID-5 or RAID-6, the data is still available and can be calculated using
the remaining data and parity if necessary.
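The host minimums and capacity multipliers for erasure coding can be computed as a rough sketch. The helper below is ours (not a vSAN API); the stripe widths and host counts come from the vSAN documentation:

```python
def erasure_coding_layout(ftt):
    """vSAN erasure coding: FTT=1 maps to RAID-5 (3 data + 1 parity,
    minimum 4 hosts, ~1.33x capacity); FTT=2 maps to RAID-6
    (4 data + 2 parity, minimum 6 hosts, 1.5x capacity)."""
    layouts = {
        1: {"raid": "RAID-5", "data": 3, "parity": 1, "min_hosts": 4},
        2: {"raid": "RAID-6", "data": 4, "parity": 2, "min_hosts": 6},
    }
    layout = layouts[ftt]
    layout["multiplier"] = (layout["data"] + layout["parity"]) / layout["data"]
    return layout

r5 = erasure_coding_layout(1)
# A 100 GB VMDK under RAID-5 consumes ~133.33 GB of raw capacity
print(r5["min_hosts"], round(100 * r5["multiplier"], 2))
```

Compare this with RAID-1, where the same FTT=1 object would consume 200 GB (two full mirrors); that trade-off is exactly what the Failure tolerance method setting controls.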
A new policy setting has been introduced to accommodate the new RAID-5/RAID-6
configurations.
This new policy setting is called Failure Tolerance Method. This policy setting takes
two values: performance and capacity. When it is left at the default value of
performance, objects continue to be deployed with a RAID-1/mirror configuration for the
best performance. When the setting is changed to capacity, objects are now deployed
with either a RAID-5 or RAID-6 configuration.
First we need to create a VM Storage Policy that will define the Failure Tolerance
method of Raid 5/6.
PFTT=1-Raid5
Click Next
Select VSAN as the Storage Type and add rules for the Primary level of failures to
tolerate and the Failure tolerance method.
Review the Storage Consumption Model on the right-hand side of the screen. Notice
that the storage space used would be 200 GB, based on a virtual disk of 100 GB.
Click Next
Here we can see that the vsanDatastore is compatible with the VM Storage Policy
that we are about to create.
Depending on how many vSAN Disk Groups you created on each ESXi host in your vSAN
Cluster, the Total Capacity of the vSAN Datastore may be different.
Click Next
Click Finish
1. Select PFTT=1-Raid5
2. Select Manage
3. Select Rule-Set-1:VSAN
Here we can see the rules that make up our VM Storage Policy.
Make a note of the capacity figures here. (Our vsanDatastore is essentially empty.)
Depending on how many vSAN Disk Groups you created on each ESXi host in your vSAN
Cluster, the Total Capacity of the vSAN Datastore may be different.
We will clone the VM called core-A (which currently resides on a local VMFS datastore)
to the vSAN Datastore and apply the VM Storage Policy (PFTT=1-Raid5) that we have
just created.
1. Expand the ESXi host called esxi-07a.corp.local and right click the VM called
core-A
2. Select Clone
3. Select Clone to Virtual Machine
PFTT=1-Raid5
Click Next
Click Next
The resulting list of compatible datastores will be presented, in our case the
vsanDatastore
In the lower section of the screen we can see that the Virtual SAN storage
consumption would be 1.33 MB of disk space and 0.00 B of reserved Flash space.
Since we have a VM with a 100 MB disk and a RAID-5 VM Storage Policy, the vSAN disk
consumption will be 133.33 MB of disk.
Click Next
Click Finish
Check the Recent Tasks for a status update on the Clone virtual machine task.
• Here we can see that the VM Storage Policy for this VM is set to PFTT=1-Raid5
and the policy is compliant.
Notice with this VM Storage Policy, we have a Raid 5 disk placement, made up of 4
Components.
1. Select RegionA01-COMP01
2. Select Monitor
3. Select vSAN
4. Select Capacity
If we focus on the Capacity Overview first of all, we can see the full size of the VSAN
datastore. This is approximately 80 GB in size. We can also see Deduplication and
compression overhead.
The amount of space Used – Total on the VSAN datastore refers to how much space
has been physically written (as opposed to logical size). This is a combination of Virtual
disks, VM home objects, Swap objects, Performance management objects and Other
items that may reside on the datastore. Other items could be ISO images, unregistered
VMs, or templates, for example.
The Deduplication and Compression overview on the top right gives administrators
an idea around the space savings and deduplication ratio that is being achieved, as
well as the amount of space that might be required if an administrator decided that they
wanted to disable the space efficiency features on VSAN and re-inflate any deduplicated
and compressed objects.
The space savings ratio increases as more “similar” VMs are deployed.
This is telling us that without deduplication and compression, it would have required
~11 GB of capacity to deploy the current workloads. With deduplication and
compression, we’ve achieved it with ~3.6 GB.
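The ratio shown in the UI is simply logical capacity over physical capacity. A quick sketch using the lab's approximate figures (the helper function is ours, for illustration only):

```python
def dedup_ratio(logical_gb, physical_gb):
    """Space-savings ratio as shown in the Deduplication and Compression
    overview: capacity that would be needed without space efficiency,
    divided by what is physically consumed."""
    return logical_gb / physical_gb

# Lab figures: ~11 GB of workloads stored in ~3.6 GB of physical capacity
print(round(dedup_ratio(11, 3.6), 2))  # ~3.06, i.e. roughly a 3x ratio
```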
Towards the bottom of the Capacity Screen, we will get a breakdown of the objects.
1. Select RegionA01-COMP01
2. Select Monitor
3. Select vSAN
4. Select Capacity
File system overhead: Any overhead consumed by the on-disk file system (VirstoFS) on
the capacity drives that is not attributed to deduplication, compression, or
checksum overhead. When deduplication and compression are enabled, file system
overhead is increased 10X to reflect the increase in the logical size of the vSAN
datastore.
Checksum overhead: Overhead to store all the checksums. When deduplication and
compression are enabled, checksum overhead is increased 10X to reflect the increase in
the logical size of the VSAN datastore. When a VM and a template are deployed on the
vSAN datastore, more objects appear:
Virtual disks: Capacity consumed by virtual machine disk (VMDK) objects that reside
on the vSAN datastore.
Swap objects: Capacity consumed by VM swap space that resides on the vSAN
datastore when a Virtual Machine is powered on.
Your Lab environment is currently running a 4 Node vSAN Cluster. To implement Raid
6, you would require a minimum of 6 hosts in the vSAN Cluster.
The VM Storage Policy will have a Failure Tolerance Method of Raid 5/6 - (
Erasure Coding ) - Capacity and the Primary Level of failures to tolerate set to
2.
In a RAID-6 configuration you will consume 1.5 times the storage assigned to the VM.
In the Raid 6 configuration, there are 6 components and they are spread out
across the 6 ESXi hosts in the Cluster.
By default, swap objects are provisioned 100% up front, without the need to set object
space reservation to 100% in the policy. This means, in terms of admission control,
vSAN will not deploy the VM unless there is enough disk space to accommodate the full
size of the VM swap object. In vSAN 6.2, a new advanced host option
SwapThickProvisionDisabled has been created to allow the VM swap option to be
provisioned as a thin object. If this advanced setting is set to true, the VM swap objects
will be thinly provisioned.
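To get a feel for what sparse swap saves, the reserved swap capacity can be estimated as below. This is a sketch with hypothetical VM sizes; `swap_capacity_gb` is our own helper, not a vSAN API:

```python
def swap_capacity_gb(vms, thick=True, ftt=1):
    """Each powered-on VM gets a swap object sized to its unreserved memory.
    Thick (the default) reserves the full size up front; with
    SwapThickProvisionDisabled set, the object is thin and starts near zero.
    With FTT=1 mirroring, the swap object is stored twice."""
    replicas = ftt + 1
    total = 0.0
    for mem_gb, reserved_gb in vms:
        unreserved = mem_gb - reserved_gb
        total += unreserved * replicas if thick else 0.0
    return total

fleet = [(8, 0), (16, 4), (32, 0)]  # (memory GB, memory reservation GB)
print(swap_capacity_gb(fleet, thick=True))   # 104.0 GB reserved up front
print(swap_capacity_gb(fleet, thick=False))  # 0.0 GB initially, grows on demand
```

Even this small hypothetical fleet would tie up over 100 GB of datastore capacity in swap reservations with the default thick behavior.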
To show this example, the only VM that we need powered on in our environment is the
VM called PFTT=1-Raid5 that we created earlier. If the VM is powered-off, power it on
now.
If you have other VM's running in the RegionA01-COMP01 cluster, power them off
now.
In the VM called PFTT=1-Raid5, we can see that we have 256 MB memory assigned.
Note the ESXi host that the VM is running on, it may be different than shown
here.
1. Select RegionA01-COMP01
2. Select Monitor
3. Select vSAN
4. Select Capacity
Scroll to the bottom of the Capacity view to the Used Capacity Breakdown section.
Here we can see the Swap Objects are taking around 548 MB
Power Off VM
1. Select RegionA01-COMP01
2. Select Monitor
3. Select vSAN
4. Select Capacity
As expected, there are no VM swap objects consuming space on the vSAN datastore
as the Virtual Machine is powered off.
Open a puTTY session to the ESXi host that the PFTT=1-Raid5 VM is registered on.
The first thing to note is that this advanced setting needs to be set on each ESXi
host that is in the vSAN cluster.
In our environment, we will set it only on the ESXi Host that will run the VM.
Note : You can drag and drop the command from the manual or use the "send
text" top menu option.
To check the current value:
esxcfg-advcfg -g /VSAN/SwapThickProvisionDisabled
To enable it:
esxcfg-advcfg -s 1 /VSAN/SwapThickProvisionDisabled
Power On VM
1. Select RegionA01-COMP01
2. Select Monitor
3. Select vSAN
4. Select Capacity
We can see that the Swap objects are now consuming only 36 MB of disk, instead
of the original 548 MB.
The savings will depend on how many VMs you have deployed, and how large each VM swap
space is (essentially the size of the unreserved memory assigned to the VM).
Since we only enabled this vSAN advanced setting on one ESXi host, the vSAN Health
Check will report this as vSAN Configuration out of sync.
Return back to the PuTTY Session and run the following command to disable the setting
:
esxcfg-advcfg -s 0 /VSAN/SwapThickProvisionDisabled
Conclusion
In this module we demonstrated some of the VM Storage Policies features that are in
VMware vSAN 6.6.
We started by showing the Failure tolerance method where we could specify whether
the data replication method optimizes for Performance or Capacity. If you choose
Performance, vSAN uses more disk space to place the components of objects but
provides better performance for accessing the objects. If you select Capacity, vSAN
uses less disk space, but reduces the performance.
We then looked at the Sparse VM Swap Object. This new feature can provide a
considerable space-saving on capacity space consumed, meaning the VM swap objects
will be thinly provisioned.
Module 4 demonstrates the iSCSI feature in the v6.6 release and its use cases. This
feature provides block storage for physical and virtual workloads using the iSCSI
protocol.
Module 5 explores how to add a Key Management Server and how to enable vSAN
Encryption.
Module 6 unveils the PowerCLI 6.5 Release 1 and esxcli enhancements that have been
introduced to help automate, manage and monitor VMware Virtual SAN Environments.
Learn how to create a 2 Node Direct Connect vSAN Stretched Cluster with Witness
Traffic Separation.
If you would like to end your lab click on the END button.
Introduction
iSCSI SANs use Ethernet connections between computer systems, or host servers, and
high performance storage subsystems. The SAN components include iSCSI host bus
adapters (HBAs) or Network Interface Cards (NICs) in the host servers, switches and
routers that transport the storage traffic, cables, storage processors (SPs), and storage
disk systems.
iSCSI SAN uses a client-server architecture. The client, called iSCSI initiator, operates on
your host. It initiates iSCSI sessions by issuing SCSI commands and transmitting them,
encapsulated into iSCSI protocol, to a server. The server is known as an iSCSI target. The
iSCSI target represents a physical storage system on the network. It can also be
provided by a virtual iSCSI SAN, for example, an iSCSI target emulator running in a
virtual machine. The iSCSI target responds to the initiator's commands by transmitting
required iSCSI data.
Specifically, "iSCSI targets on vSAN" are managed the same as other objects with
Storage Policy Based Management (SPBM) which means functionality such as
deduplication, compression, mirroring, and erasure coding can be utilized.
Active-active storage system - Allows access to the LUNs simultaneously through all
the storage ports that are available without significant performance degradation. All
the paths are active at all times, unless a path fails.
Active-passive storage system - A system in which one storage processor is actively
providing access to a given LUN. The other processors act as backup for the LUN and
can be actively providing access to other LUN I/O.
I/O can be successfully sent only to an active port for a given LUN. If access through the
active storage port fails, one of the passive storage processors can be activated by the
servers accessing it.
Virtual port storage system - Allows access to all available LUNs through a single
virtual port. These are active-active storage devices, but hide their multiple
connections through a single port. ESXi multipathing does not make multiple
connections from a specific port to the storage by default. Some storage vendors
supply session managers to establish and manage multiple connections to their
storage. These storage systems handle port failover and connection balancing
transparently. This is often referred to as transparent failover.
vSAN provides both enterprise-class scale and performance as well as new capabilities
that broaden its applicability to a wide variety of use cases. vSAN is well suited to be
the storage for all your VMs, and with vSAN iSCSI targets you can now extend this to
physical workloads.
• Business-critical applications
• End user computing (VDI)
• Disaster recovery
• Remote office/branch office (ROBO)
Lab Preparation
If you have completed previous modules by completing the steps as outlined, then you
can skip the next few steps to prepare your environment for this lesson.
If you have skipped to this module, we will use our Module Switcher PowerCLI
Application to prepare the environment.
Module Switcher
If you have not completed previous lessons, the Module Switcher is a way in which we
can prepare the lab environment for you to carry out the steps in this lesson.
Module 4 Start
This Startup Routine can take a few minutes to complete - thank you for your patience!
Monitor Progress
Please Note that you cannot 'go back' and take Modules prior to the one you are
currently in unless you end the lab and start it over again
For example: If you Start Module 4, you cannot use the Module Switcher to
Start Labs 1, 2 or 3).
Click OK
Note: Before proceeding to the next step please make sure ALL of the tasks
are completed. Review the vSphere WebClient tasks for proper status.
Click OK
Note: Target IQN (unique iSCSI Qualified Name) may differ from lab to lab
The last step is adding initiator names to an initiator group, which controls access to the
target, as shown here.
Click OK
iSCSI Initiators
To access iSCSI targets, your host uses iSCSI initiators. The initiators transport SCSI
requests and responses, encapsulated into the iSCSI protocol, between the host and the
iSCSI target.
VMware's iSCSI adapter is built into the VMkernel. It allows your host to connect to the
iSCSI storage device through standard network adapters. The software iSCSI adapter
handles iSCSI processing while communicating with the network adapter. With the
software iSCSI adapter, you can use iSCSI technology without purchasing specialized
hardware.
A hardware iSCSI adapter is a third-party adapter that offloads iSCSI and network
processing from your host. Hardware iSCSI adapters are divided into categories.
• Dependent Hardware iSCSI Adapter - This type of adapter can be a card that presents
a standard network adapter and iSCSI offload functionality for the same port. The iSCSI
offload functionality depends on the host's network configuration to obtain the IP, MAC,
and other parameters used for iSCSI sessions.
• Independent Hardware iSCSI Adapter - Implements its own networking and iSCSI
configuration and management interfaces. This type of adapter is a card that either
presents only iSCSI offload functionality, or iSCSI offload functionality and standard
NIC functionality. The iSCSI offload functionality has independent configuration
management that assigns the IP, MAC, and other parameters used for the iSCSI
sessions.
This lesson will emulate a physical server connecting to an iSCSI vSAN cluster.
(Note: this is for demo purposes only, since attaching a vSAN iSCSI volume is
supported only for physical hosts.)
Example:
iqn.1991-05.com.microsoft:controlcenter.corp.local
1. Paste (CTRL-V) the Initiator Name string in the Member initiator name and
click Add
Click OK
Verify that the Initiator has been added to the vSAN iSCSI Initiator Group
1. Locate one of the ESXi hosts in the vSAN Cluster. In this example, we will use
esx-01a.corp.local.
2. Select Configure
3. Select Networking -> VMkernel Adapters
4. Note the IP Address of the vSAN VMkernel port group. ( 192.168.130.51 )
Click OK
Open the Computer Management Tool; Double click the Computer Management
shortcut on the desktop
1. Right click on Disk 1 the unallocated disk ( 2 GB ) and select New Simple
Volume
2. Accept all Wizard defaults and give the volume a new name ( vSAN iSCSI
Volume)
3. Click on Next and Finish to complete the wizard
1. Windows will prompt you (see Taskbar) to format the disk; click "Format Disk" to
begin the process
Note: Wait for the task to complete before starting the Format Process (next step) to
prevent an error.
Open Windows Explorer (shortcut on the taskbar) and Navigate to the C:\ folder
Conclusion
In this module we demonstrated the new iSCSI feature which is part of the latest
VMware vSAN release. We started by showing you how to enable the iSCSI target
services and how to configure the iSCSI Initiator. Once these were configured correctly
we demonstrated how to attach a host to the iSCSI Volume which can provide
additional, simple, cost effective storage solutions for physical hosts.
Module 5 explores how to add a Key Management Server and how to enable vSAN
Encryption.
Module 6 unveils the PowerCLI 6.5 Release 1 and esxcli enhancements that have been
introduced to help automate, manage and monitor VMware Virtual SAN Environments.
Learn how to create a 2 Node Direct Connect vSAN Stretched Cluster with Witness
Traffic Separation.
If you would like to end your lab click on the END button.
Module 5 - vSAN
Encryption (30 Minutes,
Beginner)
Introduction
You can use data at rest encryption to protect data in your Virtual SAN cluster.
Virtual SAN can perform data at rest encryption. Data is encrypted after all other
processing, such as deduplication, is performed. Data at rest encryption protects data
on storage devices, in case a device is removed from the cluster.
Using encryption on your Virtual SAN cluster requires some preparation. After your
environment is set up, you can enable encryption on your Virtual SAN cluster.
Virtual SAN encryption requires an external Key Management Server (KMS), the vCenter
Server system, and your ESXi hosts. vCenter Server requests encryption keys from an
external KMS. The KMS generates and stores the keys, and vCenter Server obtains the
key IDs from the KMS and distributes them to the ESXi hosts.
vCenter Server does not store the KMS keys, but keeps a list of key IDs.
Lab Preparation
If you have completed the previous modules by completing the steps as outlined, then
you can skip the next few steps to prepare your environment for this lesson.
If you have skipped to this module, we will use our Module Switcher PowerCLI
Application to prepare the environment.
Module Switcher
Module 5 Start
Module 5 Progress
Please Note that you cannot 'go back' and take Modules prior to the one you are
currently in unless you end the lab and start it over again
For example: If you Start Module 4, you cannot use the Module Switcher to
Start Labs 1, 2 or 3).
Before you can encrypt the vSAN Datastore, you must set up a KMS cluster to support
encryption. That task includes adding the KMS to vCenter Server and establishing trust
with the KMS.
The vCenter Server provisions encryption keys from the KMS cluster.
The KMS must support the Key Management Interoperability Protocol (KMIP) 1.1
standard.
vSAN Encryption is data at rest encryption; this means data is encrypted at rest on both
the caching and capacity devices. Enabling encryption is a single-click operation and is
designed to work seamlessly with any of the other vSAN and vSphere features
(for example vMotion, HA and DRS). It is the first hyper-converged infrastructure offering
included in a DISA STIG.
vSAN encryption is technically designed to work with any KMS server that communicates
via KMIP 1.1 (or above). However, we explicitly certify KMS servers from our partners to
provide a consistent user experience.
When you enable encryption, Virtual SAN encrypts everything in the Virtual SAN
datastore. All files are encrypted, so all virtual machines and their corresponding data
are protected. Only administrators with encryption privileges can perform encryption
and decryption tasks.
• vCenter Server requests an AES-256 Key Encryption Key (KEK) from the KMS.
vCenter Server stores only the ID of the KEK, but not the key itself.
• The ESXi host encrypts disk data using the industry standard AES-256 XTS mode.
Each disk has a different randomly generated Data Encryption Key (DEK).
• Each ESXi host uses the KEK to encrypt its DEKs, and stores the encrypted DEKs
on disk. The host does not store the KEK on disk. If a host reboots, it requests the
KEK with the corresponding ID from the KMS. The host can then decrypt its DEKs
as needed.
• A host key is used to encrypt core dumps, not data. All hosts in the same cluster
use the same host key. When collecting support bundles, a random key is
generated to re-encrypt the core dumps. Use a password when you encrypt the
random key.
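The key hierarchy described above can be illustrated with a toy sketch. This is not real cryptography (vSAN uses AES-256; here a simple XOR stands in for the wrap/unwrap operation) and the names are ours, purely for illustration:

```python
import secrets

def xor(data, key):
    """Toy stand-in for AES key wrapping; for illustration only."""
    return bytes(a ^ b for a, b in zip(data, key))

# KMS side: generates and stores the KEK; vCenter keeps only its ID.
kms = {"kek-001": secrets.token_bytes(32)}

# Host side: a random DEK per disk; only the wrapped DEK is stored on disk.
dek = secrets.token_bytes(32)
wrapped_dek = xor(dek, kms["kek-001"])

# After a reboot, the host requests the KEK by ID from the KMS and unwraps
# its DEKs before it can mount its disk groups.
recovered = xor(wrapped_dek, kms["kek-001"])
assert recovered == dek
```

The important design point the sketch captures: the host never persists the KEK, and vCenter never persists any key at all, so compromising a single stolen disk yields only wrapped DEKs.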
When a host reboots, it does not mount its disk groups until it receives the KEK. This
process can take several minutes or longer to complete. You can monitor the status of
the disk groups in the Virtual SAN health service, under Physical disks > Software
state health.
• The password recrypts core dumps that use internal keys to use keys that are
based on the password. You can later use the password to decrypt any encrypted
core dumps that might be included in the support bundle. Unencrypted core dumps
or logs are not affected.
• The password that you specify during vm-support bundle creation is not persisted
in vSphere components. You are responsible for keeping track of passwords for
support bundles.
There are three parties involved in vSAN encryption: (1) the Key Management Server or
KMS server (the entity that generates the keys), (2) vCenter, and (3) the vSAN
host or ESXi host.
Before we attempt to encrypt any data on vSAN, the first step is to set up a domain of
trust among the 3 parties (KMS, vCenter and vSAN host).
Setting up the domain of trust follows the standard Public Key Infrastructure (PKI) based
management of digital certificates. The exact steps are dependent on the KMS provider.
Once the domain of trust is set up, the KMS, vCenter and the vSAN host can begin
communicating with each other. The exchange of keys happens between the vSAN host
and the KMS server.
The vSAN host provides a key reference or key ID to the KMS server, and the KMS server
in response provides the key that is associated with that key ID.
You add a Key Management Server (KMS) to your vCenter Server system from the
vSphere Web Client.
vCenter Server creates a KMS cluster when you add the first KMS instance. If you
configure the KMS cluster on two or more vCenter Servers, make sure you use the same
KMS cluster name.
• When you add the KMS, you are prompted to set this cluster as a default. You can
later change the default cluster explicitly.
• After vCenter Server creates the first cluster, you can add KMS instances from the
same vendor to the cluster.
• You can set up the cluster with only one KMS instance.
• If your environment supports KMS solutions from different vendors, you can add
multiple KMS clusters.
Note: Do not deploy your KMS servers on the Virtual SAN cluster you plan to encrypt. If
a failure occurs, hosts in the Virtual SAN cluster must communicate with the KMS.
A Key Management Server (KMS) cluster provides the keys that you can use to encrypt
the Virtual SAN datastore.
Before you can encrypt the Virtual SAN datastore, you must set up a KMS cluster to
support encryption.
That task includes adding the KMS to vCenter Server and establishing trust with the
KMS. vCenter Server provisions encryption keys from the KMS cluster.
To use vSAN Encryption, a Key Management Server (KMS) is required. Nearly all KMIP
1.1-compliant KMS vendors are compatible, with specific testing completed for vendors
such as HyTrust®, Gemalto®, Thales e-Security®, CloudLink®, and Vormetric®. These
solutions are commonly deployed in clusters of hardware appliances or virtual
appliances for redundancy and high availability.
Click OK
Click Trust
Verify the Connection Status is Normal and the Certificate Status has a valid
certificate that will expire some time in the future.
After you add the KMS to the vCenter Server system, you must establish a trusted
connection. The exact process depends on the certificates that the KMS accepts, and
on company policy.
1. Select the KMS instance with which you want to establish a trusted connection. (
KMS-01 )
2. Click Establish trust with KMS ...
Select the option appropriate for your server and complete the steps.
Different KMS vendors require different means to trust the digital certificates of vCenter
and ESXi hosts. You should contact your Key Management Server vendor for your
Certificate option.
1. After establishing the trust with the Key Management server, click the Refresh
icon to update the Web Client status
Verify the Connection Status is Normal and the Certificate Status has a valid
certificate that will expire some time in the future for both the KMS Cluster and the
KMS Server.
vSAN Encryption is the industry’s first native HCI encryption solution; it is built right
into the vSAN software. With a couple of clicks, it can be enabled or disabled for all
items on the vSAN datastore, with no additional steps.
Because it runs at the hypervisor level and not in the context of the virtual machine, it
is virtual machine agnostic, like VM Encryption.
You can enable encryption by editing the configuration parameters of an existing Virtual
SAN cluster.
This can take a considerable amount of time – especially if large amounts of existing
data must be migrated as the rolling reformat takes place.
Enabling vSAN Encryption has an option to Erase disk before use. Do not enable
this option.
Click on the information button (i) for these options to get additional
information on these options.
Click OK
The Erase disks before use option will significantly reduce the possibility of data
leakage and increase an attacker's cost to reveal sensitive data. This option will also
increase the time required to prepare the disks for use.
You can monitor the vSAN Encryption process from the Recent Tasks window.
If you get an error and vSAN Encryption fails, turn off vSAN Encryption and
enable vSAN Encryption again. In this lab environment we are using an Open
Source KMIP server to showcase this feature. In a customer production
environment with a supported Key Management Server, you should not see
this error.
This process is repeated for each of the Disk Groups in the vSAN Cluster.
You can also monitor the vSAN Encryption process from Configure -> vSAN ->
General.
Enabling vSAN Encryption will take a little time. Each of the Disk Groups in the vSAN
Cluster have to be removed and recreated.
Once the rolling reformat of all the disk groups task has completed, Encryption of data
at rest is enabled on the Virtual SAN cluster.
Virtual SAN encrypts all data added to the Virtual SAN datastore.
You have the option to generate new encryption keys, in case a key expires or
becomes compromised.
To show that vSAN Encryption is enabled on the disks, we can use the following
command :
From the output we can verify that Encryption is enabled and the Disk Key is loaded:
• Encryption : true
• DiskKeyLoaded: true
There are vSAN Health Checks to verify that your vSAN Encryption is enabled and
healthy.
This check verifies whether the ESXi hosts in the vSAN cluster have the CPU AES-NI
feature enabled.
1. Select vCenter and all hosts are connected to Key Management Servers
2. Select vCenter KMS status
This vSAN Health Check verifies that the vCenter Server can connect to the Key
Management Servers
This vSAN Health Check verifies that the ESXi hosts can connect to the Key
Management Servers
Conclusion
With the addition of vSAN Encryption in vSAN 6.6 and with VM Encryption introduced in
vSphere 6.5, native data-at-rest encryption can be easily accomplished on
hyper-converged infrastructure (HCI) powered by vSAN storage or any other vSphere storage.
While vSAN Encryption and VM Encryption meet similar requirements, they do so a bit
differently, each with use cases they excel at.
Most importantly, they provide customers a choice when deciding how to provide
data-at-rest encryption for their vSphere workloads.
Module 6 unveils the PowerCLI 6.5 Release 1 and esxcli enhancements that have been
introduced to help automate, manage and monitor VMware Virtual SAN Environments.
Learn how to create a 2 Node Direct Connect vSAN Stretched Cluster with Witness
Traffic Separation.
If you would like to end your lab click on the END button.
Module 6 - vSAN
PowerCLI and ESXCLI (30
Minutes, Beginner)
Introduction
In this Module you will learn about the latest release of VMware PowerCLI (6.5 Release 1)
and the enhancements that have been introduced to help automate, manage and
monitor VMware Virtual SAN Environments.
We promise that there will be no, "Hello, World!" examples for you to work through. :)
• PowerCLI Overview
• PowerCLI vSAN Commands
• PowerCLI vSAN Automation
• ESXCLI Enhancements
Lab Preparation
We will use our Module Switcher PowerCLI Application to prepare the environment.
Module Switcher
Module 6 Start
Module 6 Progress
Please Note that you cannot 'go back' and take Modules prior to the one you are
currently in unless you end the lab and start it over again
For example: If you Start Module 4, you cannot use the Module Switcher to
Start Labs 1, 2 or 3).
PowerCLI Overview
VMware PowerCLI is a command-line and scripting tool built on Windows PowerShell,
providing more than 500 cmdlets for managing and automating vSphere, vSAN, Site
Recovery Manager, vRealize Operations Manager, vSphere Automation SDK, vCloud
Director, vCloud Air, vSphere Update Manager and VMware Horizon environments.
In this lesson we will examine our Lab PowerCLI environment and perform a few vSphere
Administrative Tasks.
Launch PowerCLI
Confirm Version
1. Type the following cmdlet name to retrieve our PowerCLI version information:
Get-PowerCLIVersion
You will notice that the Get-PowerCLIVersion command is being deprecated, so let's
run the Get-Module cmdlet.
Get-Module
You'll notice that we are running the latest PowerCLI release/build and you can also see
a list of installed VMware Components. These components contain the various cmdlets
to manage their respective areas, for example, the vSAN cmdlets that you'll be using in
this Lab are contained within the 'VMware Storage PowerCLI component'.
Note: PowerCLI commands are not case sensitive. You can also press Tab at any
time to attempt autocompletion. This is a valuable timesaver, so take advantage of it!
Connect to vCenter
Connect-VIServer vcsa-01a.corp.local
The Connect-VIServer cmdlet can be used to connect and query across multiple
vCenter instances.
Powershell Providers
1. Type the following command to list the Powershell Providers that are available for
usage:
Get-PSProvider
Inventory Provider
When you connect to a server with Connect-VIServer, the cmdlet builds two default
inventory drives: vi and vis. The vi inventory drive shows the inventory on the last
connected server. The vis drive contains the inventory of all vSphere servers connected
within the current PowerCLI session. You can use the default inventory drives or create
custom drives based on the default ones.
1. Change to the vi: inventory drive:
cd vi:
2. List contents:
ls
3. Change into the RegionA01 Datacenter:
cd RegionA01
4. List contents:
ls
5. Change into the vm folder:
cd vm
6. List contents:
ls
Datastore Provider
The Datastore Provider is designed to provide access to the contents of one or more
datastores. The items in a datastore are files that contain configuration, virtual disk, and
the other data associated with a virtual machine. When you connect to a server with
Connect-VIServer, the cmdlet builds two default datastore drives: vmstore and
vmstores. The vmstore drive provides a list of the datastores available on the vSphere
server that you last connected to.
If you establish multiple connections to the same vSphere server, the vmstore drive is
not updated. The vmstores drive contains all datastores available on all vSphere servers
that you connected to within the current PowerCLI session. You can use the default
datastore drives or create custom drives based on the default ones.
1. Change to the vmstore: datastore drive:
cd vmstore:
2. List contents:
ls
3. Change into the RegionA01 Datacenter:
cd RegionA01
4. List contents (notice that we have two Datastores present -- a shared iSCSI
Datastore and our vsanDatastore):
ls
5. Return to the C: drive:
cd c:
PowerCLI Cmdlets
We used the 'Connect-VIServer' cmdlet earlier. Cmdlets are small, pre-compiled
programs provided for your use.
Let's use a few cmdlets to examine our vCenter environment by typing these commands
(remember that you can use the Tab key to autocomplete if desired).
Get-Datacenter
Get-Cluster
Get-VM
Get-Datastore
Cmdlets, cont.
1. Type the following command to pipe the output of Get-VM to the Format-Table
cmdlet and return only the Name and PowerState columns:
2. We can also pipe the result of Get-VM to the Where-Object cmdlet to filter
on specific information (like power state):
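The commands themselves appear in the manual's screenshots; based on the descriptions above, they are likely of this form (the 'PoweredOn' filter value is one illustrative example, not necessarily the lab's exact filter):

```powershell
# Return only the Name and PowerState columns
Get-VM | Format-Table Name, PowerState

# Filter VMs on power state ('PoweredOn' is an example value)
Get-VM | Where-Object { $_.PowerState -eq "PoweredOn" }
```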
Get-Help
You can use the Get-Help cmdlet to view the description, syntax information and
examples for any cmdlet that you are interested in learning more about.
Tip: You will need to scroll up the page to get to the beginning of the help text.
Get-Help Get-VM
For the final step of this Lesson we will clone an existing VM using the New-VM cmdlet
(this VM will be used in a later automation lesson on using Storage Policy Based
Management).
1. Type the following command and monitor the clone progress (you can also simply
highlight the entire command in your manual then drag and drop it into your
PowerCLI window if you prefer):
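The exact clone command is shown in the manual's screenshot; a sketch of a New-VM clone operation looks like the following (the VM, host and datastore names here are placeholders, not the lab's actual values):

```powershell
# Clone an existing VM with New-VM; all names below are hypothetical
New-VM -Name "app-01a-clone" -VM "app-01a" `
       -VMHost "esx-01a.corp.local" -Datastore "vsanDatastore"
```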
• Get-VsanDisk
• Get-VsanDiskGroup
• New-VsanDisk
• New-VsanDiskGroup
• Remove-VsanDisk
• Remove-VsanDiskGroup
Let's look at what has been added with PowerCLI 6.5 R1, next.
What's New
Get-Command *vsan*
Get-Command *spbm*
3. You can also view all of the vSphere Storage related cmdlets contained within the
VMware.VimAutomation.Storage Module if desired (not shown in screenshot):
1. To make things easier, let's create a Variable named $cluster and set it equal to
the value of the Get-Cluster cmdlet:
$cluster = Get-Cluster
$cluster
Get-VsanClusterConfiguration $cluster
Note that we can see a few high level properties of our vSAN Cluster (vSAN is enabled,
Stretched Cluster is not, etc.)
Get-VsanClusterConfiguration
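Step 1, capturing the configuration into a variable, is shown in the manual's screenshot; it is presumably:

```powershell
# Store the cluster's vSAN configuration for later inspection
$vsanConfig = Get-VsanClusterConfiguration $cluster
```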
2. Pipe $vsanConfig into the Get-Member cmdlet to see all of the Methods and
Properties that are available:
$vsanConfig | Get-Member
Get-VsanClusterConfiguration, cont.
1. You can directly view individual Properties by appending their name to your
$vsanConfig variable. For example, try one or more of these:
$vsanConfig.HealthCheckEnabled
$vsanConfig.PerformanceServiceEnabled
$vsanConfig.VsanDiskClaimMode
2. To view all of the Properties and their results you can simply pass the
$vsanConfig variable to the Format-List cmdlet:
$vsanConfig | Format-List
The ability to Test vSAN Health and Performance was previously available within the
vSphere Web Client -- this functionality has now been made accessible via PowerCLI 6.5
as well.
These Health Tests check all aspects of a Virtual SAN configuration including hardware
compatibility, networking configuration and operations, advanced Virtual SAN
configuration options, storage device health as well as virtual machine object health.
The health check will provide two main benefits to administrators of Virtual SAN
environments:
• It will give administrators peace of mind that their Virtual SAN deployment is fully
supported, functional and operational
• It will provide immediate indications to a root cause in the event of a failure,
leading to speedier resolution times
Let's introduce a failure condition in our vSAN Cluster by running an existing PowerCLI
Script. We will then use one of our new 'Test-' cmdlets to troubleshoot the condition.
cd c:\hol
.\module4break.ps1
Test-VsanClusterHealth
1. Let's set a variable named $vsanHealth equal to the result of running the 'Test-
VsanClusterHealth' cmdlet against our vSAN Cluster:
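Assuming the $cluster variable set earlier in this lesson, the command would be:

```powershell
# Run the vSAN health test against our cluster and store the result
$vsanHealth = Test-VsanClusterHealth -Cluster $cluster
```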
Note: In our shared lab environment it is possible for this cmdlet to take a few
minutes to complete (thank you for your patience)!
2. Output the result of this test by typing the $vsanHealth variable and pressing
enter:
$vsanHealth
Test-VsanClusterHealth, cont.
We know that the test Failed; however, we still do not understand the specific reason
why.
1. Let's dig deeper and examine the Properties that Test-VsanClusterHealth is aware
of by using the Get-Member cmdlet:
$vsanHealth | Get-Member
$vsanHealth.OverallHealthDescription
Test-VsanClusterHealth, cont.
$vsanHealth.NetworkHealth
Notice that we are getting a False result for VsanVmknicPresent (each vSphere Host
participating in a vSAN Cluster must have a vmknic adapter enabled for vSAN Traffic).
Test-VsanClusterHealth, cont.
$vsanHealth.NetworkHealth.HostResult
Ah-ha! Notice that the Host, 'esx-03a.corp.local' does not have a vSAN vmknic
configured (you can compare this against one of the other Hosts returned).
Fix Host
Let's run a script to re-enable vSAN traffic on our impacted vSphere Host.
cd c:\hol
.\module4fix.ps1
Note: The command that is utilized to enable vSAN Traffic on the impacted host is
output in the console (and is also shown in the Lab Manual screenshot above).
Extra Credit (Optional): Re-run the previous steps beginning with the Test-
VsanClusterHealth cmdlet to confirm that the vSAN Cluster is now healthy and that
the vmknic has been properly enabled for the impacted host. You may receive a
'warning' result via Test-VsanClusterHealth (this is expected as we are running vSAN
in a nested ESXi environment on virtual hardware).
Test-VsanVMCreation
This test creates a very simple, tiny virtual machine on every ESXi host in the Virtual
SAN cluster. If that creation succeeds, the virtual machine is deleted and it can be
concluded that a lot of aspects of Virtual SAN are fully operational (the management
stack is operational on all hosts, the Virtual SAN network is plumbed and is working, the
creation, deletion and I/O to objects is working, etc.).
By performing this test, an administrator can reveal issues that the passive health
checks may not be able to detect. By doing so systematically, it is also very easy to
isolate any particular faulty host and then take steps to remediate the underlying problem.
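Step 1, shown in the manual's screenshot, presumably captures the test result in a variable:

```powershell
# Run the VM creation test against our cluster and store the result
$testVM = Test-VsanVMCreation -Cluster $cluster
```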
2. Output the result of this test by typing the $testVM variable and pressing enter:
$testVM
Test-VsanVMCreation, cont.
$testVM | Get-Member
$testVM.HostResult
Notice that the Test Virtual Machine was successfully created on each vSphere Host.
Test-VsanNetworkPerformance
Warning: This test should only be run while the Virtual SAN cluster (or even the
physical switch attached to the Virtual SAN cluster) are not running in production. It is
advisable to run during a maintenance window or before placing the Virtual SAN cluster
into production. The reason for this is because this test will flood the network with
multicast packets, trying to find where an issue lies. If other users need bandwidth, they
may not get enough bandwidth while this test is running.
This test is designed to assess connectivity and multicast speed between the hosts in
the Virtual SAN cluster. It verifies that the multicast network setup can satisfy Virtual
SAN's requirements.
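Step 1, shown in the manual's screenshot, presumably captures the test result in a variable:

```powershell
# Run the vSAN network performance test and store the result
$testNetwork = Test-VsanNetworkPerformance -Cluster $cluster
```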
2. Output the result of this test by typing the $testNetwork variable and pressing
enter:
$testNetwork
Note: This network test may report a 'Failed' status if the cloud environment where our
Lab is running is overly busy. The command can be run multiple times if needed.
Test-VsanStoragePerformance
• Burn-in hardware to detect faulty hardware. As the test is very stressful to all
aspects of the Virtual SAN stack, including the network, flash devices, storage
capacity devices and storage controllers, it should be able to detect unreliable
hardware.
• A simple-to-use tool to assess the performance characteristics of a Virtual SAN
cluster. The test can run a number of different workloads, varying between
random and sequential, small and large I/O, and different mixes of read and write
I/O.
You can highlight this command then drag and drop it into the PowerCLI window if you
prefer:
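The command referenced here, shown in the manual's screenshot, is presumably of this form:

```powershell
# Run the vSAN storage performance test and store the result
$testStorage = Test-VsanStoragePerformance -Cluster $cluster
```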
2. Output the result of this test by typing the $testStorage variable and pressing
enter:
$testStorage
Test-VsanStoragePerformance, cont.
$testStorage | Get-Member
$testStorage.HostResult
Notice that we gain visibility into all sorts of interesting information: IssueFound, Latency,
IOPS, etc.
Virtual SAN APIs can also be accessed through PowerCLI cmdlets. IT administrators can
automate common tasks such as assigning storage policies and checking storage policy
compliance. Consider a repeatable task such as deploying or upgrading two-node Virtual
SAN clusters at 100 retail store locations. Performing each one manually would take a
considerable amount of time. There is also a higher risk of error leading to non-standard
configurations and possibly downtime. vSphere PowerCLI can instead be used to ensure
all of the Virtual SAN clusters are deployed with the same configuration. Lifecycle
management, such as applying patches and upgrades, is also much easier when these
tasks are automated.
In this Lesson, we will walk through a few automation examples and will also highlight
referencing links for further information on end-to-end vSAN automation.
• Get-VsanSpaceUsage (NEW with PowerCLI 6.5; can be used to monitor vSAN disk
capacity)
Update-VsanHclDatabase
As its name implies, the Update-VsanHclDatabase cmdlet can be used to grab the
latest vSAN HCL Database file either online (requires Internet access) or from
a locally staged .json file.
Once updated, you can use the Test-VsanClusterHealth cmdlet that we learned about
previously to validate compatibility.
Since our Lab does not have external Internet access, we have staged an HCL .json file
locally.
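The update and re-test steps shown in the manual's screenshots are presumably along these lines (the local file path is a placeholder for the path staged in the lab):

```powershell
# Import the locally staged HCL database file (path is a placeholder)
Update-VsanHclDatabase -FilePath <path-to-hcl.json>

# Re-run the health test so its HclInfo property reflects the updated database
$testVSAN = Test-VsanClusterHealth -Cluster $cluster
```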
$testVSAN.HclInfo
4. Examine the test result if desired (note that we receive a 'Warning' in our Lab
since we are running vSAN in a nested Virtualized Environment):
$testVSAN
Get-VsanSpaceUsage
$vsanUsage = Get-VsanSpaceUsage
$vsanUsage
Get-VsanSpaceUsage, cont.
$vsanUsage | Get-Member
Get-VsanSpaceUsage, cont.
1. Enter this simple script to check the amount of disk free and respond accordingly.
Note: You can highlight then drag and drop the above script contents to your PowerCLI
window if you prefer.
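The script in the manual's screenshot checks free capacity and responds accordingly; a minimal sketch, assuming the FreeSpaceGB property returned by Get-VsanSpaceUsage and an arbitrary 20 GB threshold, might be:

```powershell
# Check vSAN datastore free space (20 GB threshold is an arbitrary example)
$vsanUsage = Get-VsanSpaceUsage
if ($vsanUsage.FreeSpaceGB -lt 20) {
    Write-Host "Warning: vSAN datastore is low on free space!"
} else {
    Write-Host "vSAN datastore free space is OK."
}
```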
Storage Policy Based Management (SPBM) enables precise control of storage services.
Virtual SAN provides services such as availability level, striping for performance, and the
ability to limit IOPS. Policies that contain one or more rules can be created using the
vSphere Web Client and/or PowerCLI.
These policies are assigned to virtual machines and individual objects such as a virtual
disk. Storage policies can easily be changed and/or reassigned if application
requirements change. These changes are performed with no downtime and without the
need to migrate (Storage vMotion) virtual machines from one location to another.
Applying new Storage Policies could be very cumbersome if you had to apply them
manually to individual Virtual Machines. In this section we will create a new Storage
Policy and illustrate how easy it is to apply it to multiple Virtual Machines.
This new Storage Policy will set an IOPS Limit of 500 per VM -- this could be helpful if
you wanted to prioritize certain VMs over others.
2. Set a Variable named $vms equal to all Virtual Machines that start with the word
'photon', then confirm variable contents:
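The command is presumably a wildcard Get-VM query:

```powershell
# Capture all VMs whose names start with 'photon'
$vms = Get-VM -Name "photon*"
```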
$vms
Start-VM $vms
New-SpbmStoragePolicy
Set-SpbmStoragePolicy
1. Apply the newly created Storage Policy to our multiple Virtual Machines:
Note: This command may take a while to complete in our Lab environment. In the
meantime, please feel free to continue on to our final section of this Lesson.
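The policy creation and assignment commands appear in the manual's screenshots; a hedged sketch is below. The policy name is an assumption, and Set-SpbmEntityConfiguration is used here as the cmdlet that applies a policy to entities:

```powershell
# Create a vSAN policy with a 500 IOPS limit (policy name is hypothetical)
$policy = New-SpbmStoragePolicy -Name "500-IOPS-Limit" `
    -RuleSet (New-SpbmRuleSet -AllOfRules `
        (New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.iopsLimit") -Value 500))

# Apply the policy to all of the photon VMs captured in $vms
Get-SpbmEntityConfiguration $vms | Set-SpbmEntityConfiguration -StoragePolicy $policy
```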
ESXCLI Enhancements
VMware Virtual SAN has several documented ESXCLI commands that can be used to
explore & configure individual ESXi hosts.
In this lesson, we will provide some useful commands to use with Virtual SAN. Feel free
to follow along. Do note that if you run any commands outside the scope of this lesson,
you could potentially have an adverse effect on the lab and may not be able to continue
with any remaining modules or the remainder of this module. We will use some of these
commands later in this module, too.
Launch PuTTY
Choose esx-01a.corp.local
Type the following:
esxcli vsan
This will give you a list of all the possible esxcli commands related to Virtual SAN, with a
brief description for each.
1. To view details about the Virtual SAN Cluster, like its health or whether the host is a
Master or Backup node, you can type the following:
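The command for this is:

```shell
esxcli vsan cluster get
```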
Please note that the UUID typically used to reference the VSAN cluster is listed as the
"Sub-Cluster UUID".
If you ever were to issue the corresponding "esxcli vsan cluster join" command you
would furnish this value for the UUID.
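The VMkernel details described next come from listing the vSAN network configuration:

```shell
esxcli vsan network list
```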
Here we can see that the Network VmkNic is vmk3 and the Traffic Type on this
VMKernel port is vsan.
By the way, if you run an esxcli vsan network list, multicast information will
still be displayed even though it may not be used.
To view the details on the physical storage devices on this host that are part of the vSAN
Cluster, you can use this command:
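The command referenced here is:

```shell
esxcli vsan storage list
```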
Please note that this command does NOT list the storage devices available in the ESXi
host - it only reports those storage devices that have already been assigned to VSAN as
part of VSAN Disk Group. If no disks are configured for vSAN on the ESXi host, then the
output from this command will be blank.
1. To view the policies in effect, such as how many failures the Virtual SAN cluster can
tolerate, execute the following command:
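The command referenced here is:

```shell
esxcli vsan policy getdefault
```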
Notice that the policy may contain different capabilities for different VSAN object types -
here this is reflected as specifying the additional capability of "forceProvisioning"
exclusively for the vmswap object. This makes sense for vmswap object type since it is
not a permanent attribute of the VM and will be recreated if the VM needs to migrate to
another host in the cluster (vMotion, DRS, etc.)
The following two ESXCLI commands have been added to support vSAN Health Checks
on an individual ESXi host:
• esxcli vsan health cluster get
• esxcli vsan health cluster list
1. To get a summary view of all vSAN Health Checks, you can run the following
command:
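The summary command referenced here is:

```shell
esxcli vsan health cluster list
```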
esxcli vsan health cluster get -t "ESXi vSAN Health service installation"
esxcli vsan health cluster get -t "All hosts have a vSAN vmknic configured"
A new esxcli command to assist with troubleshooting has also been added to the latest
vSphere release.
This output will be quite lengthy, depending on how many objects are present in your lab
environment.
Use the "vsan debug object" command to get the health of the vSAN components,
component configuration, the owner host information and other information like the VM
Storage Policy, component state and type.
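For example:

```shell
esxcli vsan debug object list
```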
The following new esxcli command will tell you which hosts are using unicast (note that
it does not list the host where the command is being run from):
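The command referenced here is:

```shell
esxcli vsan cluster unicastagent list
```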
Conclusion
In this Module you have spent time learning about PowerCLI 6.5 and how it can be used
to monitor, manage and automate VMware Virtual SAN.
We hope that this information has sparked ideas around how you can utilize PowerCLI in
your own environments.
As you would expect, there is a wealth of additional information available to assist you
in your PowerCLI with vSAN journey.
Learn how to create a 2 Node Direct Connect vSAN Stretched Cluster with Witness
Traffic Separation.
If you would like to end your lab click on the END button.
Module 7 - vSAN
Stretched Cluster (30
Minutes, Beginner)
Introduction
In this Module we will learn about how to setup a 2 Node vSAN Stretched Cluster with
Witness Traffic Separation.
vSAN 6.5 introduced support for the use of network crossover cables in 2-node configurations.
This is especially beneficial in use cases such as remote office and branch office (ROBO)
deployments where it can be cost prohibitive to procure, deploy, and manage 10GbE
networking equipment at each location. This configuration also reduces complexity and
improves reliability. In the VMware Hands On Labs platform we aren't able to fully
simulate this configuration, but the steps in this lab module show how to prepare a
2-node stretched cluster and separate the Witness VM traffic just as one would do in a
direct-connect cluster.
Preferred domain/preferred site is simply a directive for vSAN. The "Preferred" site is the
site that vSAN will keep running when there is a failure and the sites can no
longer communicate. One might say that the "Preferred" site is the site expected to
have the most reliability.
Since virtual machines can run on either of the two sites, if network connectivity is lost
between site 1 and site 2, but both still have connectivity to the Witness, the preferred
site is the one that survives and its components remain active, while the storage on
the non-preferred site is marked as down and components on that site are marked as
absent.
Since virtual machines deployed on vSAN Stretched Cluster will have compute on one
site, but a copy of the data on both sites, vSAN will use a read locality algorithm to read
100% from the data copy on the local site, i.e. the same site where the compute resides.
This is not the regular vSAN algorithm, which reads in a round-robin fashion across all
replica copies of the data.
This new algorithm for vSAN Stretched Clusters will reduce the latency incurred on read
operations.
If latency is less than 5ms and there is enough bandwidth between the sites, read
locality could be disabled. However please note that disabling read locality means that
the read algorithm reverts to the round robin mechanism, and for Virtual SAN Stretched
Clusters, 50% of the read requests will be sent to the remote site. This is a significant
consideration for sizing of the network bandwidth. Please refer to the sizing of the
network bandwidth between the two main sites for more details.
Read locality is enabled by default when vSAN Stretched Cluster is configured – it should
only be disabled under the guidance of VMware’s Global Support Services organization,
and only when extremely low latency is available across all sites.
If not already open from a prior Lab Module, launch the vSphere Web Client using the
Google Chrome icon in the Windows Taskbar.
When configuring your vSAN stretched cluster, only the data hosts should be placed in
the cluster object in vCenter.
1. The vSAN Witness Host must remain outside of the cluster, and must not be
added to the cluster at any point. In your lab environment, we have already
deployed the vSAN Witness host.
Thus for a 1 (host) +1 (host) +1 (witness) configuration, there is one ESXi host at each
site and one ESXi witness host.
Networking
The vSAN Witness Appliance contains two network adapters that are connected to
separate vSphere Standard Switches (VSS).
The vSAN Witness Appliance Management VMkernel is attached to one VSS, and the
WitnessPG is attached to the other VSS. The Management VMkernel (vmk0) is used to
communicate with the vCenter Server for appliance management. The WitnessPG
VMkernel interface (vmk1) is used to communicate with the vSAN Network. This is the
recommended configuration. These network adapters can be connected to different, or
the same, networks, provided they have connectivity to their appropriate services.
The Management VMkernel interface could be tagged to include vSAN Network traffic
as well as Management traffic. In this case, vmk0 would require connectivity to both
vCenter Server and the vSAN Network. In many nested ESXi environments (such as the
platform VMware uses for this Hands On Lab), there is a recommendation to enable
promiscuous mode to allow all Ethernet frames to pass to all VMs that are attached to
the port group, even if it is not intended for that particular VM. The reason promiscuous
mode is enabled in many nested environments is to prevent a virtual switch from
dropping packets for (nested) vmnics that it does not know about on nested ESXi hosts.
The Witness has a portgroup pre-defined called witnessPg. Here the VMkernel port to be
used for vSAN traffic is visible. If there is no DHCP server on the vSAN network (which is
likely), then the VMkernel adapter will not have a valid IP address.
The final step before a vSAN Stretched Cluster can be configured is to ensure there is
connectivity among the hosts in each site and the Witness host. It is important to verify
connectivity before attempting to configure vSAN Stretched Clusters.
When using vSAN 6.1, 6.2, or 6.5 (without a specified gateway), administrators must
implement static routes. Static routes, as highlighted previously, tell the TCPIP stack to
use a different path to reach a particular network. Now we can tell the TCPIP stack on
the data hosts to use a different network path (instead of the default gateway) to reach
the vSAN network on the witness host. Similarly, we can tell the witness host to use an
alternate path to reach the vSAN network on the data hosts rather than via the default
gateway.
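A static route is added with a command of this form (the network and gateway values are placeholders for your environment):

```shell
# Route the witness vSAN network via an alternate gateway
# (network/prefix and gateway IP are placeholders)
esxcli network ip route ipv4 add -n <witness-vsan-network>/24 -g <gateway-ip>
```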
Note once again that the vSAN network is a stretched L2 broadcast domain between the
data sites as per VMware recommendations, but L3 is required to reach the vSAN
network of the witness appliance. Therefore, static routes are needed between the data
hosts and the witness host for the vSAN network, but they are not required for the data
hosts on different sites to communicate to each other over the vSAN network.
In vSphere 6.5, a default gateway can be specified for each VMkernel interface and does
not require static routes when specifying a default route for the vSAN tagged VMkernel
interfaces.
Other useful commands are esxcfg-route -n, which will display the network neighbors on
various interfaces, and esxcli network ip route ipv4 list, which displays gateways for
various networks. Make sure this step is repeated for all hosts.
The first step is to create a vSphere Cluster for the 2 ESXi hosts that we will use to form
the 2 Node vSAN Stretched Cluster.
2-Node-Stretched-Cluster
Click OK
Once we have the vSphere Cluster created, move the 2 ESXi hosts called
esx-05a.corp.local and esx-06a.corp.local into the vSphere Cluster.
1. Drag the ESXi host and drop it on top of the vSphere cluster called 2-Node-
Stretched-Cluster
or
2. Right click the ESXi host and select Move To..., select the vSphere cluster called
2-Node-Stretched-Cluster and click OK
Repeat these steps for the other ESXi host in the vSphere cluster called 2-Node-
Stretched-Cluster
Verify that your 2-Node-Stretched-Cluster looks like the screenshot before we continue.
Verify that you have a vSphere Cluster containing 2 ESXi hosts and that they are not in
Maintenance Mode.
Verify Networking
Verify that each of the ESXi hosts has a VMkernel port for vSAN and that the vSAN traffic
service is enabled.
2. Select Configure
3. Select Networking -> VMkernel Adapters
4. Select vmk3 ( vSAN enabled port-group )
5. Verify that the vSAN service is enabled on the port-group.
Verify Storage
Verify that each of the ESXi hosts has storage devices available to create the vSAN
Disk Groups and enable the creation of a vSAN Datastore.
As shown in the screenshot, we will use the 2 x 5 GB disks for the cache tier and the 4 x
10 GB disks for the Capacity tier when creating the vSAN Disk Groups.
VMware vSAN 6.5 and later supports the ability to directly connect two vSAN data nodes
using one or more crossover cables.
Metadata traffic destined for the Witness vSAN VMkernel interface can be sent through
an alternate VMkernel port. This is called "Witness Traffic Separation" (or WTS).
With the ability to directly connect the vSAN data network across hosts, and send
witness traffic down an alternate route, there is no requirement for a high speed switch
for the data network in this design.
This lowers the total cost of infrastructure to deploy 2 Node vSAN. This can be a
significant cost savings when deploying vSAN 2 Node at scale.
To prepare the ESXi hosts for the 2 Node vSAN Stretched Cluster, open a PuTTY session
to the following hosts.
You will find the PuTTY application on the taskbar of your Main Console.
esx-05a.corp.local
esx-06a.corp.local
2. Here you will see that we have a Traffic Type : vsan configured on each host.
To use ports for vSAN today, VMkernel ports must be tagged to have “vsan” traffic. This
is easily done in the vSphere Web Client.
To tag a VMkernel interface for “Witness” traffic, today it has to be done at the
command line.
To add a new interface with Witness traffic as the type, the command is:
Note : Remember it is the Management Network that we are going to use for
the Witness Traffic, which in our environment is vmk0
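The command referenced here, using the Management VMkernel port vmk0, is:

```shell
# Tag vmk0 for vSAN witness traffic
esxcli vsan network ip add -i vmk0 -T=witness
```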
Here you will see that we have a Traffic Type : vsan and Traffic Type : witness
configured on each host.
Now that we have configured the networking, let's create our 2 Node vSAN Stretched
Cluster.
The following steps should be followed to install a new vSAN stretched cluster. This
example is a 1+1+1 deployment, meaning one ESXi host at the preferred site, one
ESXi host at the secondary site and one witness host.
The initial wizard allows for choosing various options like disk claiming method, enabling
Deduplication and Compression (All-Flash architectures only with Advanced or greater
licensing), as well as configuring fault domains or stretched cluster.
Click Next
Validate Network
Network validation will confirm that each host has a VMkernel interface with vSAN
traffic enabled.
Select Next.
Claim Disks
Disks will be selected for their appropriate role ( cache and capacity ) in the vSAN
cluster.
As shown in the screenshot, the 5 GB disks from each of the ESXi hosts have been
selected as the Cache tier, and the 10 GB disks have been selected for the Capacity
tier.
Select Next
The Witness host detailed earlier must be selected to act as the witness to the two
Fault Domains.
Click Next
Just like physical vSAN hosts, the witness needs a cache tier and a capacity tier.
Note: The witness does not actually require SSD backing and may reside on a
traditional mechanical drive.
Click Next
Ready to Complete
Select Finish.
Monitor Tasks
You will see tasks for Reconfigure Virtual SAN cluster, Creating disk groups,
Converting to Stretched Cluster and Adding disks to the Disk groups.
Let's now verify that we have created the vSAN stretched cluster.
Disk Management
Let's now have a look at the Disk Groups that have been created.
1. Select 2-Node-Stretched-Cluster
2. Select Configure
3. Select vSAN > Disk Management
4. We can see that we have a disk group on the ESXi hosts called
esx-05a.corp.local and esx-06a.corp.local. We also have a disk group on
esx-08a.corp.local which is the vSAN witness host in our Stretched Cluster
configuration.
Let's now have a look at the Fault Domains and Stretched Cluster configuration.
1. Select 2-Node-Stretched-Cluster
2. Select Configure
3. Select vSAN > Fault Domains and Stretched Cluster
4. vSAN Stretched Cluster is enabled with the witness host esx-08a.corp.local.
5. We can also see the 2 Fault Domains that have been created and their
respective ESXi hosts.
Conclusion
This concludes the lesson on creating a vSAN 6.6 2 Node Stretched Cluster with witness
traffic separation.
The vSAN Health check runs a comprehensive set of tests on your vSAN environment to
verify that it is running correctly, and will alert you to any inconsistencies it finds
along with options on how to fix them.
Let's have a look at how the health check works and what we can report on.
1. Select 2-Node-Stretched-Cluster
2. Select Monitor
3. Select vSAN
4. Select Health
Here you will see the high level list of the vSAN Health checks that can be performed.
5. To re-run the vSAN Health Check at any time, you can click the Retest button.
Towards the bottom of the screen, you will see the results of these tests.
Spend some time having a look at the other tests and the data that we return from the
tests.
Conclusion
The vSAN Health Check is a great help in digging deeper into the performance and
health of vSAN installations. It should be the first place you go to monitor your vSAN
environment.
It is good practice to re-run the vSAN Health Check so that you retrieve the current state
of the environment.
With Local Affinity, customers can use policies to keep data on a single site. In this case
Primary Failures to Tolerate (PFTT) = 0.
This ensures that objects are not replicated to the secondary site thereby reducing the
bandwidth required between sites.
For example, to test local affinity, you can set PFTT = 0, SFTT = 2, FTM = RAID 5. The
outcome of this test is that all IOs should be done locally and not on the secondary site.
This way, you can seamlessly achieve host/disk protection for objects that do not
require site protection.
Click Next
Click Next
Click Next
Local Affinity VM
Click Next
Click Next
Click Next
1. Verify that the Virtual Machine called Local Affinity VM has been created in the
2-Node-Stretched-Cluster.
2. Select Summary tab
3. Verify that the VM Storage Policy called Single Site with Mirroring has been
applied to the VM and the policy is Compliant.
2. Select Configure
3. Select vSAN -> Fault Domains & Stretched Cluster
4. Note the Preferred fault domain and the ESXi host in the Preferred Fault Domain.
In the example shown here, the ESXi host called esx-06a.corp.local is in the
Preferred fault domain.
Policy Compliance
1. From the Home button of the vSphere Web Client , select Policies and Profiles
2. Select VM Storage Policies
3. Select Single Site with Mirroring
4. Click Edit
1. Select Rule-Set 1
2. Change the Affinity setting from Preferred Fault Domain to Secondary Fault
Domain
Click OK
1. From the Home page of the vSphere Web Client, select Hosts and Clusters
2. Select the VM called Local Affinity VM
3. Select Monitor
4. Select Policies
5. Select Hard Disk 1
6. Select Physical Disk Placement
7. Verify that the Component is now on the Secondary Fault Domain, which in our
case is the ESXi host called esx-05a.corp.local
Conclusion
In this lesson we looked at how to configure a 2 Node vSAN Stretched Cluster. We gave
you some background and some important features to understand before you configure
your stretched vSAN Cluster environment.
One of the features that we wanted to show here was the Witness and vSAN data traffic
separation. We showed you how to configure a Management VMkernel port for Witness
traffic.
We then completed a 2 Node vSAN Cluster Stretched Cluster configuration. In the end
we showed you how to monitor the vSAN Health and how to run the vSAN Health
Checks.
Additional information is available here on vSAN Clusters and vSAN Stretched Clusters :
• VMware Blogs
• VMware vSAN
• vSAN on YouTube
If you would like to end your lab click on the END button.
Conclusion
Thank you for participating in the VMware Hands-on Labs. Be sure to visit
http://hol.vmware.com/ to continue your lab experience online.
Version: 20171101-190411