
Migrating to vSAN

First Published On: 10-09-2017 Last Updated On: 04-25-2018


Copyright © 2018 VMware, Inc. All rights reserved.


Table of Contents

1. Migrating from VMFS/NFS

1.1.Introduction

1.2.Modes of vMotion

1.3.Preparation

1.4.Migration Scenarios

1.5.Limits and Considerations

1.6.References

1.7.About the Authors

2. Migrating RDMs to vSAN

2.1.Introduction

2.2.Migrating non-shared RDMs to vSAN

2.3.Migrating Windows shared disk quorum to File Share Witness

2.4.Migrating VMs with shared RDMs to vSAN

2.5.About the Author

3. Migrating physical machines to vSAN

3.1.Migrating physical machines to vSAN

4. Orchestrating a mass migration to vSAN

4.1.Introduction

4.2.vSphere Replication interoperability with vSAN

4.3.Using SRM to migrate virtual machines with vSphere Replication

5. Native WSFC support on vSAN via iSCSI Target

5.1.Introduction

5.2.Reference Architecture

5.3.Demonstration

5.4.Concepts and Architecture

5.5.Configuration


1. Migrating from VMFS/NFS

Outlines the simple migration from VMFS and NFS based datastores to vSAN.


1.1 Introduction

Introduction

Migration strategies and options for vSAN are numerous, depending on your environment and implementation of vSphere. This article discusses the native options for migrating virtual machine workloads to vSAN. The methodologies presented are valid for vSAN in general, for vSAN ReadyNode clusters, and for hyper-converged infrastructure (HCI) appliances such as Dell EMC VxRail™ Appliances.

Minimal or no reconfiguration is emphasized, as is maintaining virtual machine uptime and avoiding downtime where possible.

While third-party options and solutions, such as backup, recovery, and replication, are valid, they are out of scope for this document due to the extra cost and resources involved to deploy, configure and implement them. Recommendations presented are based on current VMware best practices.

We will cover topics including migration within an existing data center with both shared and non-shared storage, migration from physical servers direct to vSAN, and migration between physically disparate data centers.

1.2 Modes of vMotion

Modes of vMotion

Migration of a virtual machine can be either compute only, storage only or both simultaneously. Also, you can use vMotion to migrate virtual machines across: vCenter Server instances; virtual and physical data centers; and subnets. vMotion operations are transparent to the virtual machine being migrated. If errors occur during migration, the virtual machine reverts to its original state and location.

Compute vMotion

Compute mode vMotion operations usually occur within the same logical vSphere cluster. The two hosts involved in a vMotion can, however, reside in separate logical or physical clusters.

Storage vMotion

Storage vMotion is the migration of the files that belong to a running virtual machine from one discrete datastore to another.

Combined vMotion

When you choose to change both the host and the datastore, the virtual machine state moves to a new host and the virtual disks move to another datastore.

Shared-nothing vMotion

Also known as vMotion without shared storage, this allows you to use vMotion to migrate virtual machines to a different compute resource and datastore simultaneously. Unlike Storage vMotion, which requires a single host to have access to both the source and destination datastore, you can migrate virtual machines across storage accessibility boundaries.

vMotion does not require shared storage. This is useful for performing cross-cluster migrations when the target cluster machines might not have access to the source cluster's storage.


Cross-vCenter vMotion

Also known as vMotion between vCenter instances and long-distance vMotion, this allows for the migration of VMs across vCenter boundaries, both within and outside an SSO domain, as well as over links with up to a 150ms RTT (Round Trip Time).

Migration between two vCenter Servers within the same SSO domain is accomplished within the vSphere web interface, which leverages Enhanced Linked Mode (ELM), while migration between two vCenter Servers that are members of different SSO domains requires initiation via the APIs/SDK.

Migration of VMs between vCenter instances moves VMs to new virtual networks; the migration process issues checks to verify that the source and destination networks are similar. vCenter performs network compatibility checks to prevent the following misconfigurations:

MAC address incompatibility on the destination host

vMotion from a distributed switch to a standard switch

vMotion between distributed switches of dierent versions

vMotion to an isolated network

vMotion to a distributed switch that is not functioning properly

Despite these checks, however, it is prudent to ensure that:

Source and destination distributed switches are in the same broadcast domain

Source and destination distributed switches have the same services configured

1.3 Preparation

Preparation

To allow for a successful migration of VM workloads onto vSAN, a review of your current virtual infrastructure is advised. Extension of the existing vMotion network into the new vSAN environment is required, allowing for migration of the VM workload from its current location to the new vSAN infrastructure.

There are many possible valid configurations for compute and storage, but for migration into vSAN there are specific requirements listed below:

Source Environment

Licensing

Essentials Plus or higher for vMotion feature

Enterprise Plus or higher for Cross-vCenter vMotion

Enterprise Plus for long-distance vMotion

NTP

Uniform time synchronization is required for the vCenter and ESXi hosts

vCenter Topology

One vCenter, one SSO domain

Two vCenters, one SSO domain


Two vCenters, two SSO domains

Networking

L2 (Layer two) adjacency between source and destination VM networks

VSS or VDS configuration at, or greater than, version 6.0.0

ESXi

ESXi v6.0 or above for Cross-vCenter migration

Clusters

If EVC (Enhanced vMotion Compatibility) is enabled, the source cluster must be at a lower or equal EVC level to the target cluster

Virtual Machine

Application dependencies

RDMs – either converted to VMFS or migrated to in-guest iSCSI

VMware Tools will require an update if the VM is migrated to a newer ESXi version

Destination Environment

The destination vSphere environment requires network access for the virtual machine matching the source environment; for example, VLAN access and IP addresses must be considered. Additionally, advanced configurations such as DRS affinity rules and storage policies will need to be re-created on the target environment if they are still required.

Cold Migration Considerations

While this operation can be done "live," organizations may choose to migrate with VMs powered off. vMotion of a powered-off or suspended virtual machine is known as a cold migration and can be utilized to move virtual machines from one data center to another. A cold migration can be performed manually or via a scheduled task.

By default, data migrated in a cold state via vMotion, cloning, and snapshots is transferred through the management network. This traffic is called provisioning traffic and is not encrypted.

On a host, you can dedicate a separate VMkernel interface to provisioning traffic, for example, to isolate this traffic on another VLAN. A provisioning VMkernel interface is useful if you plan to transfer high volumes of virtual machine data that the management network cannot accommodate, or if you have a dedicated network for migration data between clusters or data centers.

For information about enabling provisioning traffic on a separate VMkernel adapter, see the vSphere networking documentation.
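Where the change needs to be scripted across many hosts, the same tagging can be driven from PowerCLI by calling esxcli through Get-EsxCli. This is a minimal sketch only: the host name and vmk name are placeholders, and the esxcli argument keys are assumptions to verify against your PowerCLI version before use.

# Tag an existing VMkernel adapter (assumed to be vmk2 here) for provisioning traffic
$esxcli  = Get-EsxCli -VMHost (Get-VMHost 'esx01.lab.local') -V2
$tagArgs = $esxcli.network.ip.interface.tag.add.CreateArgs()
$tagArgs.interfacename = 'vmk2'          # VMkernel adapter dedicated to provisioning traffic
$tagArgs.tagname       = 'Provisioning'  # tag name as exposed by esxcli
$esxcli.network.ip.interface.tag.add.Invoke($tagArgs)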

1.4 Migration Scenarios


Migration Scenarios

The previous sections highlighted compute, network and virtual machine configuration recommendations and requirements; we will now focus on the vCenter and SSO configuration. The main migration topologies supported are listed below.

Topology A: Single vCenter, Single SSO domain

Topology B: Two vCenters, Single SSO domain

Topology C: Two vCenters, Two SSO domains

We recommend that the source vCenter be v6.0 or higher. If using a VDS, it must be version 6.0 or above for cross vCenter migration. The initiation of the vMotion operations can be via the vSphere Web Client or API (PowerCLI).

In addition to the supported topologies, there are source and destination vCenter versions that need to be adhered to:

Source vCenter version | Target vCenter version | Supported | Method
6.0                    | 6.0                    | Yes       | UI and API
6.0                    | 6.5                    | Yes       | API
6.5                    | 6.0                    | No        | N/A
6.5                    | 6.5                    | Yes       | UI and API

Single vCenter, Single SSO Domain

The migration is initiated from the vSphere web interface. As both clusters are in the same vCenter, no special considerations need to be made. A migration of both compute resource and storage takes place with no shared storage available across both clusters. The source datastore is not accessible from the target cluster.



Two vCenters, Two SSO Domains

The migration is initiated from the API (via PowerCLI), as the vCenter Servers are in different SSO domains. A migration of both compute resource and storage takes place with no shared storage available across both clusters. The source datastore is not accessible from the target cluster.


Two vCenters, Single SSO Domain

The migration is initiated from the vSphere web interface; Enhanced Linked Mode (ELM) is utilized, meaning the vCenter Servers are in the same SSO domain. A migration of both compute resource and storage takes place with no shared storage available across both clusters. The source datastore is not accessible from the target cluster.


1.5 Limits and Considerations

Limits and Considerations

Simultaneous Migrations

vCenter places limits on the number of simultaneous VM migration and provisioning operations that can occur on each host, network, and datastore. Each operation, such as a migration with vMotion or cloning a VM, is assigned a resource cost. Each host, datastore, or network resource, has a maximum cost that it can support at any one time. Any new migration or provisioning operation that causes a resource to exceed its maximum cost is queued until the other in-flight operations reach completion.

Each of the network, datastore, and host limits must be satisfied for the operation to proceed. vMotion without shared storage, the act of migrating a VM to a different host and datastore simultaneously, is a combination of vMotion and Storage vMotion. This migration inherits the network, host, and datastore costs associated with both of those operations.

Network Limits

Network limits apply only to migrations with vMotion. Network limits depend on the version of ESXi and the network type.


Operation | ESXi Version            | Network Type | Maximum concurrent vMotions per Host
vMotion   | 5.0, 5.1, 5.5, 6.0, 6.5 | 1 GbE        | 4
vMotion   | 5.0, 5.1, 5.5, 6.0, 6.5 | 10 GbE       | 8

Considerations must be made for the uplink speed of the NIC assigned to the vMotion service. For example, if you are migrating from a 1GbE source vMotion network to a vSAN target destination with 10GbE, you will be throttled to the lower of the two speeds.

Datastore Limits

Datastore limits apply to migrations with vMotion and with Storage vMotion. Migration with vMotion and Storage vMotion each have individual resource costs against a VM's datastore. The maximum number of operations per datastore is listed below.

Operation       | ESXi Version            | Max per Datastore
vMotion         | 5.0, 5.1, 5.5, 6.0, 6.5 | 128
Storage vMotion | 5.0, 5.1, 5.5, 6.0, 6.5 | 8

Host Limits

Host limits apply to migrations with vMotion, Storage vMotion, and other provisioning operations such as cloning, deployment, and cold migration. All hosts have a maximum number of operations they can support. Listed below are the number of operations that are supported per host - note that combinations of operations are allowed and are queued and executed automatically by vCenter when resources are available to the host.

Operation                     | ESXi Version            | Max operations per Host
vMotion                       | 5.0, 5.1, 5.5, 6.0, 6.5 | 8
Storage vMotion               | 5.0, 5.1, 5.5, 6.0, 6.5 | 2
Shared-Nothing vMotion        | 5.1, 5.5, 6.0, 6.5      | 2
Other provisioning operations | 5.0, 5.1, 5.5, 6.0, 6.5 | 8

1.6 References

References

PowerCLI

An example migration script for moving VMs between vCenters and SSO domains, using PowerCLI, is shown below. The script moves myVM from myVC1 to myVC2, onto target port group myPortGroup and datastore vsanDatastore.


Connect-VIServer 'myVC1' -Username <username> -Password <pass>
Connect-VIServer 'myVC2' -Username <username> -Password <pass>
$vm = Get-VM 'myVM' -Location 'hostOnVC1'
$destination = Get-VMHost 'hostOnVc2'
$networkAdapter = Get-NetworkAdapter -VM $vm
$destinationPortGroup = Get-VDPortgroup -VDSwitch 'VDSOnVC2' -Name 'myPortGroup'
$destinationDatastore = Get-Datastore 'vsanDatastore'
$vm | Move-VM -Destination $destination -NetworkAdapter $networkAdapter -PortGroup $destinationPortGroup -Datastore $destinationDatastore

More information and detail on the Move-VM command can be found here: https://

KBs and Whitepapers

vMotion Shared Storage Requirements

vSphere vMotion Networking Requirements

Networking Best Practices for vSphere vMotion

Enhanced vMotion Compatibility

“EVC and CPU Compatibility FAQ” - https://kb.vmware.com/kb/1005764

“Enhanced vMotion Compatibility (EVC) processor support" - https://kb.vmware.com/

“Long Distance vMotion Requirements” - https://kb.vmware.com/kb/2106949

“Cross vCenter vMotion Requirements in vSphere 6.0” - https://kb.vmware.com/

1.7 About the Authors

About The Authors

Vuong Pham

Vuong Pham is a Senior Solutions Architect who has been in IT for 19 years, working across many aspects of IT: pre-sales, design, implementation, and operations of small, medium and enterprise environments across multiple industries. He is an SME in virtualization, data protection and storage solutions for multiple vendors. His current focus is HCIA VxRail solutions. He holds VCP 3, 4, 5, VCAP Design, VCAP Administration, and EMCIE certifications. You can follow Vuong on Twitter as: @Digital_kungfu

Myles Gray

Myles Gray is a Senior Technical Marketing Architect for VMware in the Storage and Availability business unit, primarily focused on storage solutions, with a background as a customer and partner in infrastructure engineering, design, operations, and pre-sales roles. He is a VCIX6-NV and VCAP6-DCV. You can find him on Twitter as: @mylesagray


2. Migrating RDMs to vSAN

Outlines moving VMs with physical and virtual mode RDMs to vSAN.


2.1 Introduction

Introduction

Traditionally, there have been two particular reasons why people use RDMs in a vSphere environment:

To allow the addition of disks to VMs that were larger than 2TB in size;

For shared disks, such as quorum and shared-data drives for solutions like SQL FCI and Windows CSVs.

The first of these is trivial to address - the limitation for 2TB VMDKs was removed with ESXi 5.5 and VMFS-5. The limit is now the same as with RDMs at 62TB, and as such RDMs should no longer be considered for this use case.

The second is the main reason RDMs may still be in use today: Shared-disk quorum and data between VMs.

In this section, we will address the migration of non-shared disk RDMs to native vSAN objects, as well as the transition of shared-disks from the legacy RDM based approach to in-guest iSCSI initiators.

2.2 Migrating non-shared RDMs to vSAN

Virtual Mode

Non-shared RDMs are trivial to migrate to vSAN, as they can be live Storage vMotioned to VMDKs. To start with, your RDMs must be in virtual compatibility mode to leverage a Storage vMotion conversion to VMDK. After converting any physical mode RDMs you have to virtual mode, you may then initiate a Storage vMotion to vSAN directly. You can see in the below example that I Storage vMotion a VM with a virtual mode RDM, live, to a vSAN datastore and its RDM is converted to a native vSAN object:


Choose Migrate and Change storage only:



Change the policy to your chosen SPBM policy and choose the target vSAN datastore:


After the migration has completed, you will notice that the disk type is no longer RDM; rather, it is listed as a VMDK and is editable, as it is now a first-class citizen of the datastore:
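The same "change storage only" migration can be scripted with PowerCLI. The sketch below is a minimal example with placeholder VM and datastore names; specifying a thin disk format during the move is what converts the virtual mode RDM into a regular VMDK on the vSAN datastore.

# Storage vMotion the VM and its virtual mode RDM to vSAN, converting the RDM to a thin VMDK
$vm = Get-VM 'sql-c-01'
$ds = Get-Datastore 'vsanDatastore'
$vm | Move-VM -Datastore $ds -DiskStorageFormat Thin
# A vSAN storage policy can be applied afterwards, for example with Set-SpbmEntityConfiguration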



Physical Mode

If you have physical mode RDMs, the LUN contents cannot be migrated live and would require a cold migration. Given that most physical mode RDMs are created for large data sets, to minimize downtime from a cold migration we recommend converting the RDMs to virtual mode first, then carrying out the necessary Storage vMotion to convert the disk to a VMDK, which can be done while the VM is operational.

The process for this can be found in our KB here: KB 1006599

Bus-sharing SCSI Controllers

N.B: If any of the SCSI controllers in the VM are engaged in bus-sharing (they shouldn't be if the disks are not shared between VMs), whether physical or virtual mode, the storage vMotion will fail validation and not allow the migration to vSAN with the below error:


2.3 Migrating Windows shared disk quorum to File Share Witness


Introduction

Shared disk RDMs in either virtual or physical compatibility mode have typically been used to provide support for guest OS clustering quorum mechanisms. Since Windows Server 2008, a dedicated shared quorum disk has not been necessary. Instead, you can use an FSW (File Share Witness); the FSW can be a standard Windows server on a vSAN datastore.

File Share Witness fault-detection provides the same level of redundancy and failure detection as traditional shared-disk quorum techniques, without the additional operational and installation complexity that those solutions command.

Migration

Below you can see I have a SQL FCI cluster with two nodes, currently utilizing a shared-disk for cluster quorum:


We are going to convert this cluster to File Share Witness quorum. I have a file server in the environment (file01) and have created a standard Windows file share on it called sql-c-quorum. N.B: This can be done live and is not service affecting.

Firstly, right-click on the cluster and go to More Actions -> Configure Cluster Quorum Settings



Then we will select "select the quorum witness":


Tell the cluster we are going to use a FSW:



Insert the file share we configured when prompted:


You will see a dialogue telling you that cluster voting is enabled and the configuration was successful:



We can then verify we are operating in FSW mode on the main dialogue of the Failover Cluster Manager:
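For reference, the same quorum change can also be made with a single cmdlet from the FailoverClusters PowerShell module; this is a minimal sketch assuming Windows Server 2012 or later and the example share created above.

# Reconfigure the cluster quorum to use the file share witness (run on any cluster node)
Set-ClusterQuorum -FileShareWitness '\\file01\sql-c-quorum'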


The VM no longer requires the RDMs used for cluster quorum or voting and they can be removed - this VM can now be migrated to vSAN by a simple storage vMotion and no downtime is required for the entire operation.

2.4 Migrating VMs with shared RDMs to vSAN


Introduction

Shared RDMs have traditionally been an operational blocker to any migration or maintenance due to the complexity they create in an environment, as well as the version dependencies they introduce and the specific VM configurations they command. Organizations may wish to simplify their operations by having all of their VMs operating under a single compute cluster with homogeneous configurations at the vSphere level.

Detailed below is the process for migrating VMs with existing shared RDMs, in physical or virtual mode, to in-guest iSCSI initiators instead; this allows clustered VMs to be migrated into a vSAN environment to reduce operational complexity while leaving data in place on the existing SAN.

Example Setup

The use case covered is a WSFC (Windows Server Failover Cluster) for a SQL FCI. In the below figure, there are three disks shared between the VMs for data access: SQL Data, Logs, and Backups. The volumes are presented to the VMs using physical mode RDMs.


Figure 1 - Computer Management layout of cluster disks

Note: in the below example, the RDMs are in physical mode and are on Virtual Device Node "SCSI controller 1". This information is essential to record for later, as it will be necessary to remove this SCSI controller after removing the RDMs from the VM configuration.



Figure 2 - Disk attachment via physical mode RDMs

As a point of reference, RDMs are provided in this environment via an EMC Unity array with iSCSI connectivity on four uplink ports (Ethernet Port 0-3) with IPs of 10.0.5.7-10 respectively.


Figure 3 - iSCSI target connectivity on the array side

Preparation


To migrate existing RDMs, whether in physical or virtual mode, the simplest option is to move the LUNs to an in-guest iSCSI initiator. Given that RDMs are simply raw LUNs mapped through to a VM directly, storage presentation to the VM remains the same. VMs will have the same control over LUNs as they would have with an RDM, and application operations will be unaffected by the migration.

In preparation, there are a few steps that must be completed on each VM in the cluster to allow for iSCSI connectivity to the SAN presented LUNs. Firstly, we will need to add a NIC connected to the iSCSI network to the VM.


Figure 4 - iSCSI network attached to VM via separate NIC

Next, the Windows iSCSI Initiator needs to be initialized. When prompted to have the iSCSI service start automatically on boot, select Yes.


Figure 5 - iSCSI initiator service auto-start prompt

In the following window, add one of the SAN's iSCSI targets into the Quick Connect section of the dialogue box. There is no need to add every target here; after MPIO is configured the array should


communicate all target paths that can be used for LUN connectivity to the iSCSI Initiator, providing load balancing and failover capabilities.


Figure 6 - Adding in iSCSI targets to Windows iSCSI Initiator

At this point, you can apply MPIO policies specific to your array and OS version. Refer to your vendor's documentation for configuring MPIO in a Windows environment. Next, add the VM’s iSCSI initiator into the SAN’s zoning policy for the RDM LUNs. This again will vary from vendor to vendor. You can see below that the host object has been created on the SAN and has been given access to the three LUNs that are used for shared data between VMs.


Figure 7 - Allowing the iSCSI initiator access to the RDM LUNs
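If you prefer to script the in-guest initiator configuration, a short PowerShell sketch is shown below. It assumes the built-in iSCSI initiator cmdlets on Windows Server 2012 or later and uses one of the example array portal IPs from Figure 3; MPIO policy configuration remains vendor-specific as noted above.

# Register one of the array's iSCSI portals and connect to the discovered target with MPIO enabled
New-IscsiTargetPortal -TargetPortalAddress '10.0.5.7'
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true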


VM RDM Reconguration

At this point, migration from RDM to in-guest termination can begin. It would be prudent to start with the secondary node in the cluster, and given that WSFC role transfer is not transparent, carrying out this work during a maintenance window is advised. Firstly, place the node undergoing reconfiguration into "Paused" mode from the Failover Cluster Manager console, choosing to "Drain Roles" during maintenance.

choosing to "Drain Roles" during maintenance. Figure 8 - Pausing node membership in a WSFC Shut

Figure 8 - Pausing node membership in a WSFC

Shut down the secondary VM and remove the RDMs and the shared SCSI controller from it. It is important to note that when you are deleting the disks from this node, you should not click "delete from datastore"; remember, these are still in use by the primary node in the WSFC. Navigate to the VM in the vSphere Web Client, choose "Edit Settings", remove the disks from here, and click "OK".



Figure 9 - Removing RDMs from VM conguration on the secondary node
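Where PowerCLI is preferred for this step, the RDM removal can be scripted as sketched below; the VM name is a placeholder, the disks are detached without deleting the pointer files, and the now-unused bus-sharing SCSI controller still has to be removed through Edit Settings as described next.

# With the secondary node powered off, detach its RDM disks but keep the pointer files on the datastore
$vm = Get-VM 'sql-c-02'
Get-HardDisk -VM $vm | Where-Object { $_.DiskType -match 'Raw' } | Remove-HardDisk -Confirm:$false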

It is necessary to enter "Edit Settings" once more, now that the bus-sharing SCSI controller we recorded at the start is unused, and remove it. N.B: using controller SCSI0:* is not supported for shared/clustered RDMs, so RDMs should always be on a tertiary SCSI controller - you can verify this by checking the sharing mode on the controller.


Figure 10 - Removing the bus-sharing SCSI controller previously used by the RDMs

Power up the secondary VM and log in. Currently, the shared disks are not presented to the VM. Open up the iSCSI Initiator dialogue; your targets should all have connected at this point.



Figure 11 - iSCSI targets all reconnected on boot

Navigate to the "Volumes and Devices" section, and click "Auto Congure", this will mount the disks and display their MPIO identiers in the Volume List.



Figure 12 - Volume Auto Congure list

Opening up the Windows disk management dialogue, you should now be able to see the disks connected but in the "Reserved" state. The reserved and offline state is expected, as this node is not the active node in the cluster; once a role transfer is complete you will see these disks listed via their volume identifiers (D:\, E:\, F:\). By right-clicking on one of the disks and selecting "Properties", you will be able to see each disk's LUN ID as well as specifics on MPIO, multi-pathing policies, and partition type.



Figure 13 - Disk management dialogue showing the disks re-presented via iSCSI
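The same checks can be run from PowerShell inside the guest as a quick sanity check alongside the GUI:

# Confirm the iSCSI sessions are connected and the shared disks are visible over iSCSI
Get-IscsiSession | Select-Object TargetNodeAddress, IsConnected
Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' } | Select-Object Number, FriendlyName, OperationalStatus, Size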

Reintroduce the VM into the WSFC: open the Failover Cluster Manager, right-click the secondary node that has been undergoing maintenance, and choose "Resume", selecting "Do not fail back roles".

"Resume" selecting "Do not fail back roles". Figure 14 - Adding the secondary node back into

Figure 14 - Adding the secondary node back into the WSFC


Ensure the WSFC console says the cluster is healthy and that both nodes are "Up", then transfer any roles from the primary to the secondary node. The disks will automount via iSCSI at this time as the volume signature has remained the same. To transfer the roles over to the secondary node, navigate to "Roles", right-click, choose "Move" and "Select Node", then choose the reconfigured node.

Figure 15 - Transferring roles over to the secondary node

Ensure your services are operating as expected. As mentioned earlier, in disk manager on the secondary node the volumes will now be listed with their volume identifiers.



Figure 16 - Disk manager showing the volumes as active and identified correctly
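The cluster operations used throughout this walkthrough (pausing, resuming and moving roles) can also be driven from the FailoverClusters PowerShell module. A brief sketch with placeholder node and role names:

# Drain a node before maintenance, resume it afterwards without failing roles back, then move the role
Suspend-ClusterNode -Name 'sql-c-02' -Drain
Resume-ClusterNode -Name 'sql-c-02' -Failback NoFailback
Move-ClusterGroup -Name 'SQL Server (MSSQLSERVER)' -Node 'sql-c-02'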

As before, enter the node to be migrated into "Paused" mode and choose "Drain Roles," then shut down the VM.

and choose "Drain Roles," then shut down the VM. Figure 17 - Draining roles from the

Figure 17 - Draining roles from the node to undergo maintenance


In the vSphere console, locate the VM (in this case, sql-c-01) and choose "Edit Settings". Remove the RDMs as before, but this time choose “delete from datastore”; this is safe to do as no other nodes are actively using these RDM pointer files anymore. Note: choosing "delete from datastore" does not delete data from the underlying LUN, which remains unaffected; this operation only removes the RDM pointer files from the VMFS upon which they are situated.


Figure 18 - Removing and deleting RDM pointer les from the VM

As before, navigate back into "Edit Settings" and delete the bus-sharing SCSI controller from the VM's configuration.


Figure 19 - Deleting the bus-sharing SCSI controller from the VM configuration

Power the VM on and open the iSCSI Initiator dialogue, then verify that the targets are all listed as "Connected". Navigate to the Volumes and Devices dialogue and click "Auto Configure". The volumes will now show up in the volume list, detailed by their MPIO identifiers.


Figure 20 - Volume list detailing the MPIO identifiers for the iSCSI mounted volumes

Verify the disks show up in the Windows disk management snap-in and exhibit a "Reserved" and an offline state; again, this is normal for the passive node in the cluster, as only the active node mounts the volumes.



Figure 21 - Volumes are shown in disk manager as Reserved and offline

Open the Failover Cluster Manager dialogue again and navigate to the "Nodes" section, then resume the node's participation in the cluster, choosing "Do not fail back roles". Ensure the cluster is reformed healthily and both nodes indicate a status of "Up".


Figure 22 - WSFC is shown as healthy, and both nodes are in the "Up" state


At this point the WSFC disk migration is complete, both VMs have had their RDMs removed and now rely on in-guest iSCSI initiators for connectivity to shared disks. You can optionally transfer the WSFC roles back to the primary node, as a matter of preference.

Migration to vSAN

With the RDMs and bus-sharing SCSI controllers gone, we can now migrate the VM to vSAN. Note: this only migrates the VM's objects that are accessible to vSphere (VMX, swap, namespace, OS and non-shared VMDKs); the data for the shared disks still resides on the SAN. Please refer to the documentation on migrating a VM residing on VMFS/NFS to vSAN.

Rollback

If you wish to migrate a VM back from the new mode of operation to the previous one, this is achievable by Storage vMotioning the VM from the vSAN datastore to a VMFS volume (required for RDM and bus-sharing compatibility) and following the below steps:

Enter secondary node into "Paused" mode in WSFC

Attach a bus-sharing SCSI controller to the secondary node in your chosen mode (physical/virtual)

“Disconnect” active iSCSI sessions on the secondary node and remove all iSCSI Initiator configuration

Connect RDMs to the secondary node in the same mode as the SCSI controller

Check the volumes show up as "Reserved" in disk manager

Failover WSFC roles from primary to secondary

Enter primary node into "Paused" mode in WSFC

Attach a bus-sharing SCSI controller to the primary node in your chosen mode (physical/virtual)

“Disconnect” active iSCSI sessions on the primary node and remove all iSCSI Initiator configuration

Connect RDMs to the primary node in the same mode as the SCSI controller

Check the volumes show up as "Reserved" in disk manager

Optionally, fail WSFC roles back to the primary node

2.5 About the Author

About the Author

Myles Gray

Myles Gray is a Senior Technical Marketing Architect for VMware in the Storage and Availability business unit, primarily focused on storage solutions, with a background as a customer and partner in infrastructure engineering, design, operations, and pre-sales roles. He is a VCIX6-NV and VCAP6-DCV. You can find him on Twitter as: @mylesagray


3. Migrating physical machines to vSAN

Migrating legacy physical hardware based machines to VMs on vSAN.


3.1 Migrating physical machines to vSAN

Migrating physical machines to vSAN

VMware Converter supports the migration of physical Windows and Linux hosts, as well as VMs, to a new virtual environment with minimal downtime (limited to source shutdown and destination startup); as such, this is an out-of-hours migration procedure. This support has been extended to vSAN and enables organizations to migrate their existing physical hosts directly to vSAN with no interim steps required.

When using VMware Converter, choose the vSAN Datastore as the target location for the converted machine - it will migrate to the datastore with the default vSAN storage policy.


Figure 1: Select vsanDatastore as the destination for the new VM

Ensure you change all disk types to thin during the migration: in the options section, select Advanced, then adjust all disk types to thin.



Figure 2: Enter Advanced disk modification mode


Figure 3: Change all disk types to thin


For additional information, please consult the VMware Converter documentation.

Constraints

Be aware that if your physical machines participate in a WSFC or shared-disk clustering, please reference our guide on migrating these machines to in-guest iSCSI termination before attempting a migration to vSAN, to ensure supportability throughout and after the migration process. The process on physical machines is similar to what it would be on VMs with RDMs.


4. Orchestrating a mass migration to vSAN

Details the use of Site Recovery Manager and vSphere Replication for migrations to vSAN


4.1 Introduction

Introduction

Customers may wish to migrate hundreds or thousands of VMs in a predictable and repeatable fashion. There are a number of ways to orchestrate the migration of large numbers of VMs; in this section we will cover the use of vSphere Replication and Site Recovery Manager to migrate large numbers of VMs to a destination vSAN datastore.

Included in this coverage will be migrations to a vSAN datastore within the same data center (in a different cluster), in another vCenter, and in a separate SSO domain.

4.2 vSphere Replication interoperability with vSAN

vSphere Replication interoperability with vSAN

vSphere Replication supports vSAN in its entirety, as both a source and destination datastore. Storage Policy Based Management is also supported with vSphere Replication, which allows customers to select disparate storage policies for source and destination datastores. Utilizing storage policies in this way allows for storage efficiencies and cost savings.

For example: if at the primary site you have a large cluster utilizing a storage policy with FTT (Failures to Tolerate) set to two, this allows for extra redundancy in the event of hardware failures. However, if on the secondary site a smaller cluster is utilized to save costs, VMs can be replicated with a storage policy specifying that FTT is set to one in order to save space on the smaller copies. The lower redundancy on the target site can save on ongoing capital and operational expenses while still providing an effective replication target for DR.

For more information on using vSphere Replication with vSAN, check out our Tech Note here. A click-through demo is also available to demonstrate this capability.

4.3 Using SRM to migrate virtual machines with vSphere Replication

Using SRM to migrate VMs with vSphere Replication

Site Recovery Manager can be used in conjunction with vSphere Replication to migrate large numbers of VMs from an existing vSphere cluster to a new vSAN based one. This approach has the caveat of requiring VM downtime, but it can be used effectively when vMotion cannot be used or a large-scale migration of VMs is required; it is prudent to schedule this work during a maintenance window.

Migrating large numbers of VMs

The migration of large numbers of VMs usually requires orchestration - SRM provides this capability when paired with vSphere Replication through the use of Protection Groups and Recovery Plans. It is important to note that in order to use SRM, the target vSAN cluster must be in a separate vCenter instance from the source vCenter; this is a limitation imposed by SRM from a continuity and DR perspective.

vSphere Replication is limited to the recovery/migration of a single VM at once - SRM, conversely, can support concurrent migration of up to 2,000 VMs. SRM also provides the ability to orchestrate changes to VMs upon migration, for example, IP addresses if the migrations are across L3 (Layer 3


network) boundaries. In addition to these benefits, the migration plan can also be tested multiple times, with no ill effect on production workloads, providing predictability and peace of mind to the process.

Demo

For an example of migrating large numbers of VMs concurrently with SRM on vSAN, there is a demo you can find here showing the recovery of 1000 VMs in 26 minutes with vSphere Replication and SRM, on top of vSAN.

It is wise to note that when replicating VMs with SRM and vSphere Replication, the policy selected when initially creating the replicas will be applied to the target VM container from then on; any subsequent changes to SPBM policy will only be replicated to the target VM once it has been recovered via a failover. Again, the testing process will allow you to account for this and model any rebuild traffic generated on the target side post-failover.

Migration testing

SRM offers the unique ability to test migration or failover scenarios prior to actually enacting any change. This is especially useful in the case of large-scale migrations where multiple applications and dependencies are affected. The ability to test the logic and operation of a migration to a new environment prior to actually doing the migration is invaluable. With SRM, users can test their application group failovers by remotely connecting to a test bubble environment and ensuring applications are operating as expected prior to an actual production migration taking place.


5. Native WSFC support on vSAN via iSCSI Target

Describes how to set up supported native WSFC deployments on vSAN. This can be used for greenfield deployments or for migrations from any of the other storage methods or platforms listed in this section.


5.1 Introduction

Introduction

vSAN 6.7 introduces support for Windows Server Failover Clusters (WSFC) using the vSAN iSCSI target service. If you currently host WSFC instances on your infrastructure that use RDMs for shared disks in use cases such as quorum, SQL Failover Cluster Instances (FCI) and Scale-Out File Server (SOFS), these can now be fully migrated to vSAN without the use of RDMs. For more details on migrating RDMs to VMFS or vSAN via iSCSI, see this section on StorageHub.

vSAN has supported Microsoft applications with native data replication (such as SQL AAG and Exchange DAG) since the start; however, legacy clusters and FCI instances weren’t supported until this release.

As of this release, fully transparent failover of LUNs is now possible with the iSCSI service for vSAN when used in conjunction with WSFC. This feature is incredibly powerful as it can protect against scenarios in which the host that is serving a LUN's I/O fails. This failure might occur for any reason: power, hardware failure or link loss. In these scenarios, the I/O path will now transparently fail over to another host with no impact to the application running in the WSFC.

5.2 Reference Architecture

Reference Architecture

The vSAN solutions team has developed a reference architecture for two WSFC roles (Scale-Out File Server and SQL FCI) on vSAN using the iSCSI target service, which is available to view here. This is a very useful resource if you want to learn best practices when it comes to planning and scaling a production deployment that uses WSFCs on vSAN. In addition to the reference architecture, a KB is also available here.


5.3 Demonstration


Demonstration

A demonstration outlining a total failure of a host that is serving I/O to a SQL FCI instance running on vSAN is available to view below. It shows the environment configuration, database load testing, as well as the failure and transparent recovery processes that take place when WSFC is used with the vSAN iSCSI service.


5.4 Concepts and Architecture

Concepts and Architecture

Targets to configure per guest

When configuring iSCSI targets within Windows that are resident on a vSAN cluster, it is important to take note of the configuration maximums supported for the vSAN iSCSI Target service when used in conjunction with WSFCs.

A maximum of 16 targets, each with 16 LUNs, are supported when used with WSFCs in order to support transparent failover within Windows timeouts. In addition, a maximum of 128 iSCSI sessions per host is supported.

As such, when adding targets to Windows clusters we recommend only adding enough to satisfy the SPBM requirements (add FTT+1 vSAN iSCSI targets to each Windows iSCSI initiator).

A number of examples are illustrated below using the FTT+1 calculation, where SPBM is configured with:

• FTT=1 - add two targets to the Windows iSCSI initiator.


• FTT=2 - add three targets to the Windows iSCSI initiator.

• FTT=1, multiple fault domains - add two targets per fault domain to the Windows iSCSI initiator.

• FTT=2, multiple fault domains - add three targets per fault domain to the Windows iSCSI initiator.

These configurations ensure that the iSCSI targets will always be available as long as the SPBM policy is not violated. It is not necessary to configure more than FTT+1 targets in the Windows iSCSI Initiator, as the FTT level of the vSAN objects defines the data availability level; exceeding this would just ensure the iSCSI target is available even though the data would not be.

5.5 Configuration

Configuration

vSAN

In order to take advantage of the support for WSFC on vSAN we need to enable the iSCSI target service within the vCenter UI. This can be found by navigating to the vSAN cluster -> Configure -> iSCSI Target Service. From here, click “Enable” as shown below to configure the service on all hosts in the cluster.


You will be prompted for some information on how the iSCSI target service should be set up. We recommend using a dedicated VMkernel port for iSCSI target traffic (preferably on a dedicated physical NIC) – it is important to note down the IPs associated with these vmks as they will need to be put into the Windows iSCSI initiator later in the process.

At this point you can choose to change the SPBM policy associated with this target, however this can be changed at any time in the future as with all vSAN native objects.



After clicking Apply, the option to create a new iSCSI Target will be shown; for this example we will configure a single iSCSI target with four LUNs attached to it.


The configuration we are using in this example is shown below. We have given the target an Alias to make it easy for us to identify, as well as setting the storage policy and VMkernel port we will use. (Note: the IQN section will automatically get filled in.)



Within the target we will now be able to set up the LUNs to be presented to the Windows iSCSI initiator by clicking the “Add” button under vSAN iSCSI LUNs.


As an example, I have filled out a quorum disk’s configuration below; this would simply be repeated for every disk to be presented (in this case, four disks: logs, backups, data and quorum).
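Recent PowerCLI releases also expose the vSAN iSCSI target service through cmdlets such as Set-VsanClusterConfiguration, New-VsanIscsiTarget and New-VsanIscsiLun, so the same setup can be scripted. Treat the sketch below as an outline only: the cluster name, alias and LUN size are placeholders, and the exact parameter names should be checked against Get-Help in your PowerCLI version.

# Enable the vSAN iSCSI target service, create a target and add a quorum LUN (parameter names assumed)
$cluster = Get-Cluster 'vsan-cluster'
Set-VsanClusterConfiguration -Configuration $cluster -IscsiTargetServiceEnabled:$true
$target = New-VsanIscsiTarget -Cluster $cluster -Alias 'wsfc-target'
New-VsanIscsiLun -Target $target -Name 'quorum' -CapacityGB 1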



Configuration up to this point has been the same as it is when you configure the vSAN iSCSI Target normally and follows our documentation found here.

If desired, initiator groups can be set up to restrict access to targets to specific groups of hosts – if none are set then all hosts can see all vSAN-presented iSCSI targets. For the sake of simplicity in this document, we will be using the default and allowing visibility to all iSCSI targets from all hosts.

Windows

Within Windows there are a few things that need to be set up. Assuming the Windows instances are fresh installations, they will need the iSCSI initiator service enabled, the MPIO and Windows Server Failover Clustering features enabled, and MPIO settings brought in line with our supported figures.

We have made an automated script that will do all of these, requiring just a single reboot of the target Windows machine; it is available here.

If you prefer to manually configure your Windows instances, the configuration steps are listed below.

Open Windows Server Manager and click Manage -> Add Roles and Features and add the “Failover Clustering” and “Multipath I/O” features.



Allow the installation to complete, then navigate to the Windows Start menu, search for “iSCSI Initiator” and open it – you will be prompted if you want the service to start with Windows; again, click Yes.

After completing this action, once again navigate to the Start menu and open the “MPIO” service. Click the “Discover Multi-Paths” tab and check the “Add support for iSCSI devices” checkbox. Reboot the guest when prompted.
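A minimal PowerShell sketch of these manual steps, assuming Windows Server 2012 R2 or later, is shown below. It covers the same ground as the automated script referenced earlier but is not that script.

# Install the required features, enable the iSCSI initiator service, and claim iSCSI devices for MPIO
Install-WindowsFeature Failover-Clustering, Multipath-IO -IncludeManagementTools
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
Enable-MSDSMAutomaticClaim -BusType iSCSI   # equivalent of "Add support for iSCSI devices"
Restart-Computer                            # reboot for the MPIO claim to take effect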



Once the guest has rebooted, open a PowerShell command prompt and paste in the following:

Set-MPIOSetting -CustomPathRecovery Enabled -NewPathRecoveryInterval 20 -NewRetryCount 60 -NewPDORemovePeriod 60 -NewPathVerificationPeriod 30

This command sets iSCSI path timeouts and retry counts and is required in order for WSFC to be supported on the vSAN iSCSI Target service – a KB detailing what each parameter does can be found here.

If you want to check what your current MPIO settings are, this can be done by running the following command in PowerShell:

Get-MPIOSetting


Given that the vSAN iSCSI Target service only supports “Fail Over Only” as the path selection policy within Windows, we need to set that explicitly via the CLI in Windows with the following command:


mpclaim -l -t "VMware Virtual SAN" 1

Again, as before, the current MPIO claim failover policies can be listed by running the following (correct configuration is indicated by “LB Policy”: “FOO”):

mpclaim.exe -s -t


At this point we are ready to add targets to the Windows iSCSI Initiator. Open the iSCSI Initiator dialogue, navigate to the “Discovery” tab, click “Discover Portal” and add the vmk IP addresses of the iSCSI targets on your chosen hosts (remember, it is only necessary to add FTT+1 hosts per fault domain into the Windows iSCSI Initiator; adding more provides no benefit).


Navigating back to the “Targets” tab, you should see the IQN of the target that we created earlier on the vSAN iSCSI Target service – select this target, click “Properties”, and from the popup dialogue open the “Add Session” section.

From this screen we will add multiple paths (one per configured discovery portal). Check the “Enable multi-path” checkbox for every session you wish to add. Click the “Advanced” button and change the “Target portal IP” to one of the target IP addresses. Repeat this until a session has been created for each target IP listed.

This configuration ensures multi-pathing works as expected and can fail over if the host serving I/O fails.
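The portal registration and per-portal sessions can also be created from PowerShell inside the guest. A sketch assuming two target vmk IPs (replace with the addresses you noted when enabling the service):

# Register each vSAN iSCSI target portal, then create one MPIO-enabled, persistent session per portal
$portals = '192.168.10.11', '192.168.10.12'
foreach ($ip in $portals) { New-IscsiTargetPortal -TargetPortalAddress $ip }
$target = Get-IscsiTarget    # the vSAN target IQN created earlier
foreach ($ip in $portals) {
    Connect-IscsiTarget -NodeAddress $target.NodeAddress -TargetPortalAddress $ip -IsMultipathEnabled $true -IsPersistent $true
}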



If all is set up as expected, in Devices -> MPIO you should see a session listed for each target configured: one Active path and a Standby path for every other session. The Load Balance policy should also be listed as “Fail Over Only”.


Back in the iSCSI Initiator dialogue, open the “Volumes and Devices” tab and click “Auto Configure” – each of your LUNs should now show up here with a format similar to the below (this indicates the MPIO driver is used).

\\?\mpio#disk&ven_vmware&prod_virtual_san



At this point, your disks will be available within the Windows Computer Management console (compmgmt.msc) under the Disk Management section. They can be formatted as normal (we recommend a 64K block size) and then added to the Failover Cluster Manager by clicking “Add Disk” – the disks are now ready for use by the SQL FCI installation.
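These final steps can likewise be scripted on the active node; the sketch below assumes the iSCSI LUNs are the only uninitialized disks on the system.

# Initialize and format the iSCSI LUNs with a 64K allocation unit, then add them to the cluster
Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' -and $_.PartitionStyle -eq 'RAW' } |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536
Get-ClusterAvailableDisk | Add-ClusterDisk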


Creating a SQL FCI instance is outside the scope of this document; however, we recommend following the official Microsoft guide found here.
