
FOR ALL WORKLOADS

SAN BASED

For workloads running on FCP or iSCSI (offline migration), the CirrusData Data Migration Server (DMS)
appliance can be used to migrate data from existing storage systems into Nimble Storage over the SAN
fabric. This is achieved by inserting Data Migration Server appliances into a host's data path using a
patented Transparent Data Interception technology, which requires no production downtime. The target
LUNs on the Nimble Storage are automatically provisioned by the appliance based on the source LUNs'
requirements. The appliances intercept I/O and track data changes, enabling online data migration to be
performed on production systems. For remote sites that do not have direct FC connectivity, a remote
DMS appliance can be used to migrate over a WAN via TCP/IP. Quality of Service features in the DMS
appliance ensure that there is no performance impact while data is being migrated. These can be set at the
beginning of each migration job in the GUI to migrate minimally, moderately or aggressively based on
time of day and day of week (all customizable). The appliance actively monitors the arrays being
migrated and dynamically yields to production I/O. It also monitors the arrays' idle indicators (pending I/O,
throughput, etc.).
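As a rough mental model of this behaviour, the following Python sketch maps day-of-week and time-of-day windows to a copy aggressiveness level and backs off when production I/O is high. It is a conceptual illustration only; the class names, levels and thresholds are assumptions, not the DMS API or its actual policy engine.

```python
from dataclasses import dataclass
from datetime import datetime, time

# Conceptual sketch only -- names, levels and thresholds are assumptions,
# not the CirrusData DMS API or policy engine.

@dataclass
class QosWindow:
    days: set          # e.g. {"Sat", "Sun"}
    start: time        # window start (inclusive)
    end: time          # window end (exclusive)
    level: str         # "minimal", "moderate" or "aggressive"

SCHEDULE = [
    QosWindow({"Sat", "Sun"}, time(0, 0), time(23, 59), "aggressive"),
    QosWindow({"Mon", "Tue", "Wed", "Thu", "Fri"}, time(22, 0), time(23, 59), "moderate"),
    QosWindow({"Mon", "Tue", "Wed", "Thu", "Fri"}, time(8, 0), time(18, 0), "minimal"),
]

def migration_level(now: datetime, production_iops: int, busy_threshold: int = 500) -> str:
    """Pick a copy rate for this moment, yielding to production I/O when the array is busy."""
    day = now.strftime("%a")
    for window in SCHEDULE:
        if day in window.days and window.start <= now.time() < window.end:
            # Dynamically back off if the source array is under production load.
            if production_iops > busy_threshold and window.level == "aggressive":
                return "moderate"
            return window.level
    return "minimal"   # default outside any configured window

print(migration_level(datetime(2023, 7, 1, 10, 0), production_iops=120))  # a Saturday -> "aggressive"
```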
The Data Migration Server (DMS) appliance consists of three types of ports:

 Nexus Upstream (Target) – connect to Fibre Channel switches or hosts
 Nexus Downstream (Initiator) – connect directly to production storage controller ports or Fibre Channel switches
 Storage Ports (Initiators) – connect to Fibre Channel switches or to the Nimble Storage array(s).

[Diagram: DMS port types – Destination Storage Ports, Nexus Downstream Port, Nexus Upstream Port]

If the Nimble Storage is connected via a switch, it needs to be zoned to the DMS appliance.



The diagram below depicts an overview of how the appliances are connected (with Host Side
Insertion) to perform data migration for a host or a group of hosts.

There are four connection methods that determine the insertion mode of the Data Migration
Server appliances, as listed below.
Physical Insertion
 Host Side Insertion
 Storage Side Insertion
Logical Insertion
 VSAN Insertion
 Zoning Insertion
Physical Insertion

 Host Side Insertion

This is ideal when there is a small number of hosts or migration is carried out in groups of servers.
This mode is also ideal for a large-scale hosting data center with multi-tenant storage farm
environments. Host side insertion guarantees that only the hosts, and the assigned storage,
owned by the customer being migrated are affected. With host side insertion, the appliance is
physically chained between the host and the FC switch. All FC paths from all hosts to the FC
switches must be inserted in order to capture all of the I/O to the set of LUNs being migrated for
those hosts. The insertion process disconnects the host FC cable from the switch port and connects
it to the upstream port of the appliance. The downstream port of the appliance is then connected
to the switch port to which the host was originally connected, as shown below.

[Diagram: Host Side Insertion – Source Storage]

 Storage Side Insertion

This is ideal when there is a larger number of hosts or migration is carried out in groups of more
than 8 servers. Storage side insertion causes less disruption and enables all hosts connected to
the storage to be migrated in one go. With storage side insertion, the appliance is physically
chained between the source storage controller ports and the FC switch. Every storage controller
port involved in presenting the set of source LUNs being migrated (known as the “migration
set”) must be inserted, so that all I/O goes through the Data Migration Server (DMS)
appliances. The insertion process disconnects one cable from a downstream port on the FC
switch and then reconnects the path through a set of DMS ports (consisting of an upstream and
a downstream FC port). Therefore, if the source storage uses two ports from each controller
to present the migration set of LUNs, it is necessary to use four Nexus ports (two on each fabric)
on the DMS. In a high-availability configuration, the four ports should be spread across two DMS
appliances, as shown below:

[Diagram: Storage Side Insertion – Source Storage]



Logical Insertion

 VSAN Insertion

When existing FC switches support virtual SAN, CDS appliances are inserted into data paths by
connecting to available switch ports. VSANs are added to include the ports to be intercepted as well
as the nexus ports to use. The desired data paths are thereby intercepted transparently, in an
identical manner to the physical intercept method. Multiple links – either host or storage – can be
intercepted by a single nexus.

This requires no physical manipulation of cables in the SAN and provides the flexibility to insert into large
environments and selectively intercept a large number of data paths with fewer nexus, reducing
configuration complexity. It allows the same configuration to support host-side or storage-side insertion.
The existing switches must support virtual SANs and have available (free) switch ports.



 Zoning Insertion

In zoning insertion, CDS appliances are inserted between unused storage ports and unused switch
ports. After this, selected hosts are moved logically to the inserted ports through zoning.
This requires no physical cabling changes on active ports. Only the hosts that need to be intercepted are
included in the intercepted paths. This mode is optimized for operations that require projects to be
done in waves.
It requires unused, inactive ports on the storage as well as free switch ports in the fabric. It also requires
LUN masking changes, zoning changes, and a host rescan for new paths.

The steps below provide high-level details on performing a migration.

Step 1 – Insertion of the Data Migration Server (DMS)

In this step, based on one of the scenarios above, the DMS needs to be inserted into the fabric so that it can
discover the storage landscape. While the steps are not materially different for the other modes, Host Side
Insertion is assumed in this scenario. Each FC cable needs to be inserted sequentially, with each insertion
followed by a path validation on all the DMS appliances.

Step 2 – Verification (DMS)

The DMS appliance automatically gathers all of the discovered initiator WWPNs, target WWPNs, and LUNs
from all Nexus ports to build a picture of the SAN environment. The host initiator WWPNs are correlated
so that all of the initiators that “see” the same LUN are assumed to belong to a single host entity (a
conceptual sketch of this grouping follows the checklist below). Once the DMS appliance is inserted, the
configuration needs to be validated. Using the DMS console:
a. Verify that the ports are all connected
b. Verify that the storage controller and initiator WWPN details are correct
c. Verify the SAN configuration – discovered initiator WWPNs, target WWPNs, and LUNs
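The grouping described above can be pictured as merging initiator WWPNs whose visible LUN sets overlap. The Python sketch below is a conceptual illustration under that assumption; the data structures and sample WWPNs are hypothetical and do not represent the DMS data model.

```python
# Conceptual sketch: group initiator WWPNs that "see" the same LUNs into
# host entities. Data structures and values are hypothetical, not the DMS model.

# Discovered view: initiator WWPN -> set of LUN identifiers (e.g. SCSI WWIDs) it can see.
discovered = {
    "10:00:00:90:fa:aa:aa:01": {"lun-wwid-0001", "lun-wwid-0002"},
    "10:00:00:90:fa:aa:aa:02": {"lun-wwid-0001", "lun-wwid-0002"},  # second HBA, same host
    "10:00:00:90:fa:cc:cc:01": {"lun-wwid-0042"},                   # a different host
}

def correlate_hosts(paths):
    """Merge initiators whose visible LUN sets overlap into one host entity (greedy, simplified)."""
    hosts = []  # each entry: {"initiators": set of WWPNs, "luns": set of LUN ids}
    for initiator, luns in paths.items():
        match = next((h for h in hosts if h["luns"] & luns), None)
        if match:
            match["initiators"].add(initiator)
            match["luns"] |= luns
        else:
            hosts.append({"initiators": {initiator}, "luns": set(luns)})
    return hosts

for i, host in enumerate(correlate_hosts(discovered), start=1):
    print(f"Host entity {i}: initiators {sorted(host['initiators'])} see LUNs {sorted(host['luns'])}")
```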
Step 3 – System Configuration

This step comprises configuring the DMS appliance and setting the migration parameters required for
migration. It involves setting up user credentials, network and security settings, alerts, and the migration
calendar (times when migration can happen).



This is performed in the web-based console by clicking the “Settings” tab. The “Storage
Configuration” section allows WWPNs to be added in case they are not discovered automatically. This is
useful for offline migration when the host is not connected. If the original LUN masking remains on the
storage, the initiators can be added back as virtual initiators so that the storage configuration can be
reconstructed to what it used to be, allowing migration to be performed without reconfiguring the existing
storage system.
Sometimes initiators need to be re-assigned (typically for cluster systems) when two different initiators
are assigned to the same LUN but reside on different physical hosts.
This section also helps rename LUNs and hosts to meaningful names to improve operational efficiency
during the migration activity.
The “Storage Resources” tab in the web console provides details of, and helps configure, the storage device
LUNs that need to be migrated and the destination LUNs (local or remote) they are to be migrated to. It
also provides information on LUN activity, storage ports and paths.
The “SAN Configuration” tab in the web console displays information about the discovered configuration
between the hosts (i.e., application servers, file servers, etc.) and their primary storage (the storage
currently being used by the host and containing the data to be intercepted) in different ways. The SAN
Configuration screen shows detailed information about each LUN, including current activity, status,
throughput, and paths.
The following views are available:
Topology explorer – Displays a logical diagram of the discovered SAN configuration
Host-centric view – Displays information from the perspective of each host
Storage-centric view – Displays information from the perspective of each storage device
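The host-centric and storage-centric views are essentially two groupings of the same discovered path records. As a rough illustration (the field names and sample values are assumptions, not the DMS schema), the Python sketch below builds both views from a flat list of discovered paths.

```python
from collections import defaultdict

# Rough illustration only -- field names and sample values are assumptions, not the DMS schema.
# Each discovered path: (host initiator WWPN, storage target port, LUN id)
paths = [
    ("host-A-hba0", "array1-ctrlA-p1", "lun-01"),
    ("host-A-hba1", "array1-ctrlB-p1", "lun-01"),
    ("host-B-hba0", "array1-ctrlA-p2", "lun-07"),
]

host_view = defaultdict(set)     # host initiator -> LUNs it reaches
storage_view = defaultdict(set)  # storage port   -> LUNs it presents

for initiator, target, lun in paths:
    host_view[initiator].add(lun)
    storage_view[target].add(lun)

print("Host-centric view:   ", dict(host_view))
print("Storage-centric view:", dict(storage_view))
```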

Step 4 – Migration

A migration session is a policy that defines the source LUNs (the LUNs to be migrated), the target LUNs, and
the conditions under which migration will occur. The conditions include the schedule and the general
aggressiveness (Quality of Service) of the data copy process. Creating a session in the console comprises the
steps below (a conceptual sketch of such a session policy follows the list).
a. Creating a new migration session
b. Selecting the source LUNs to migrate
c. Specifying the destination type (Local, Remote, Swing Box)
d. Pairing destination LUNs with each source LUN
e. Entering a description and specifying migration options such as holiday schedule, QoS, thin migration,
time zone, etc.
f. Specifying the schedule for migration and the mode for each time frame
g. Verifying license usage and clicking “Finish” to start the migration

Progress of migration and details can be viewed in the Migration tab.
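To make the shape of such a session policy concrete, the sketch below models it as a plain Python data structure. The field names and values are assumptions chosen for illustration; the actual session is created through the DMS web console wizard described above.

```python
from dataclasses import dataclass, field

# Illustrative model of a migration session policy -- field names and values are
# assumptions, not the DMS object model; sessions are actually created in the
# DMS web console wizard.

@dataclass
class MigrationSession:
    description: str
    source_luns: list                  # LUNs to be migrated
    destination: str                   # "local", "remote" or "swing_box"
    lun_pairs: dict                    # source LUN -> destination LUN
    qos: str = "moderate"              # minimal / moderate / aggressive
    thin_migration: bool = True        # skip unallocated blocks
    timezone: str = "UTC"
    schedule: list = field(default_factory=list)   # (days, start, end, mode) windows
    holidays: list = field(default_factory=list)   # dates when migration is paused

session = MigrationSession(
    description="ERP cluster wave 1",
    source_luns=["src-lun-01", "src-lun-02"],
    destination="local",
    lun_pairs={"src-lun-01": "nimble-lun-01", "src-lun-02": "nimble-lun-02"},
    schedule=[({"Sat", "Sun"}, "00:00", "23:59", "aggressive")],
)
print(session)
```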

Step 5 – Validation of Migrated Data

LUN verification validates data on a migrated LUN by comparing it with the data on the source LUN.



During migration, hashes are created and stored. After a valid image is obtained on the destination, LUN
verification can be manually run. During verification, the stored hashes are compared with the hashes
calculated using data read from the destination LUNs. If there is any indication the data on the destination
device may not be consistent with the source, the LUN verification process will detect the discrepancies.
All detected data inconsistencies will be reported and marked for repair in the changed data map and the
next resynchronization will correct any bad data.
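Conceptually, the verification pass is a block-by-block hash comparison between hashes stored during the copy and hashes recomputed from the destination. The Python sketch below illustrates the idea only; the fixed 1 MiB block size, SHA-256 and the placeholder paths are assumptions, not the DMS hashing scheme.

```python
import hashlib

BLOCK_SIZE = 1 << 20  # 1 MiB blocks -- an assumption for illustration

def block_hashes(device_path, block_size=BLOCK_SIZE):
    """Yield (block_index, digest) for each block of a device or image file."""
    with open(device_path, "rb") as dev:
        index = 0
        while True:
            block = dev.read(block_size)
            if not block:
                break
            yield index, hashlib.sha256(block).hexdigest()
            index += 1

def verify(stored_hashes, destination_path):
    """Compare hashes stored during migration with hashes read back from the destination."""
    mismatches = []
    for index, digest in block_hashes(destination_path):
        if stored_hashes.get(index) != digest:
            mismatches.append(index)   # mark the block for repair / resynchronization
    return mismatches

# Placeholder paths -- point these at the real source and destination devices or images.
stored = dict(block_hashes("/tmp/source.img"))       # hashes captured while copying the source
bad_blocks = verify(stored, "/tmp/destination.img")  # recomputed from the destination
print("Blocks needing resync:", bad_blocks)
```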
Test View Devices enable mounting of a data image of a migrated LUN to test data validity and integrity.
This is done by creating a Test View Device and assigning it to any host with an iSCSI Initiator. This allows
the host to use an actual application to test the migrated data. The image testing can accommodate both
read and write. The migrated data image is not changed; any data written is written to the Test View
Device and will be discarded once the test is completed. Before creating a Test View Device, at least one
migration session must be complete or pending complete and a metadata LUN (for capturing writes) must
have been enlisted to become a Test View Device. This is performed on the Storage Resources screen.
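A Test View Device behaves like a copy-on-write overlay: reads fall through to the migrated image, while test writes land on the metadata LUN and are thrown away afterwards. The Python sketch below models that behaviour conceptually; it is not the DMS implementation.

```python
# Conceptual copy-on-write overlay illustrating how a Test View Device preserves
# the migrated image: reads fall through to the image, writes are kept separately
# (on the metadata LUN) and discarded after the test. Not the DMS implementation.

class TestViewDevice:
    def __init__(self, migrated_image: dict):
        self.image = migrated_image      # block index -> data (treated as read-only)
        self.overlay = {}                # writes captured during the test

    def read(self, block):
        # Prefer test writes, otherwise fall through to the migrated image.
        return self.overlay.get(block, self.image.get(block))

    def write(self, block, data):
        # The migrated image itself is never modified.
        self.overlay[block] = data

    def discard(self):
        # End of test: all test writes are thrown away.
        self.overlay.clear()

view = TestViewDevice({0: b"original-boot-block", 1: b"original-data"})
view.write(1, b"scratch-data-from-test")
assert view.read(1) == b"scratch-data-from-test"
view.discard()
assert view.read(1) == b"original-data"      # migrated image unchanged
print("Test View behaviour verified")
```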

Step 6 – Migration Completion (Cut Over)

Once the initial synchronization has completed and the data has been validated, production is almost ready
to move over to the migrated disks. A final synchronization must be performed for data that has changed
since the last synchronization. This involves the following steps (a host-side rescan sketch for Linux follows
the list).

a. Prior to the cut-over time, check the amount of data awaiting migration and run “Sync-Delta” manually,
or schedule it periodically, to minimize the delta during cut-over.
b. Stop the application on the source system and unmount the source LUN / volume / drive.
c. Complete the migration session to trigger the last round of synchronization. This completes the data
migration.
d. Un-assign the source LUNs from the hosts and rescan to confirm they are no longer visible.
e. Use the “Auto-Provision” feature to have the new storage remove the destination LUNs from the DMS
system and present them to the host entities (which are also created automatically). Create the
required zoning to present the new LUNs directly to the hosts.
f. Rescan the host for the new drives / LUNs. Windows should automatically assign the correct drive
letters. For Linux and Unix file systems mounted by disk label, the existing mount points should still
be valid; otherwise, adjust the host's mount configuration (e.g., /etc/fstab) for the new drives / LUNs.
Start the application.
g. Once the application has been validated, the former production storage can be retired or, optionally,
scrubbed using DMS (separate license required).
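On Linux hosts, steps (d) and (f) can be spot-checked from the command line. The Python sketch below (Linux-specific, run as root) triggers a SCSI rescan through sysfs and checks which persistent device identifiers are visible; the WWIDs are placeholders to replace with your own, and stale source devices may still need to be removed with your distribution's tooling.

```python
import glob
import os

# Linux-only sketch for spot-checking steps (d) and (f). Run as root.
# The WWIDs below are placeholders -- substitute the real source and
# destination LUN identifiers for your environment.

OLD_SOURCE_WWIDS = ["360000000000000000000000000000001"]
NEW_DEST_WWIDS = ["260000000000000000000000000000099"]

def rescan_scsi_hosts():
    """Ask every SCSI/FC host adapter to scan its bus for newly presented LUNs."""
    for scan_file in glob.glob("/sys/class/scsi_host/host*/scan"):
        with open(scan_file, "w") as f:
            f.write("- - -")

def visible_ids():
    """Persistent device identifiers currently visible to the host."""
    return os.listdir("/dev/disk/by-id")

rescan_scsi_hosts()                 # note: stale removed devices may need explicit cleanup
ids = " ".join(visible_ids())
for wwid in OLD_SOURCE_WWIDS:
    print(f"old source LUN {wwid}: {'STILL VISIBLE' if wwid in ids else 'gone'}")
for wwid in NEW_DEST_WWIDS:
    print(f"new LUN {wwid}: {'visible' if wwid in ids else 'NOT FOUND'}")
```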

Details of each of these steps and how they need to be executed in the web console are available in the
DMS User Guide.



FOR VIRTUAL WORKLOADS
HYPERVISOR BASED - VMWARE

For workloads running on the VMware hypervisor, Storage vMotion can be used to migrate data from
existing arrays to the new Nimble Storage array. Storage vMotion relocates virtual machine disk files
from one shared storage location to another with zero downtime, continuous service availability and
complete transaction integrity. Storage vMotion is fully integrated with VMware vCenter Server to provide
easy migration and monitoring.

The following provides a detailed four-step migration solution to move VMs from existing storage to
Nimble Storage across datacenters using Storage vMotion and Nimble replication.

Step 1 – Configure a New Datastore

Publish storage volumes from the Nimble Storage (which would be part of the swing kit) to the vSphere
cluster and configure the LUNs as a new datastore in vCenter. Ensure that there is adequate space in the
new datastore to host all the required VMs.
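One way to check the space requirement is to compare the committed and uncommitted storage of the VMs to be moved against the free space on the new datastore. The pyvmomi sketch below shows such a check; the vCenter address, credentials, datastore and VM names are placeholders, and it assumes the pyvmomi SDK is installed.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholders -- substitute your vCenter, credentials, datastore and VM names.
VCENTER, USER, PWD = "vcenter.example.com", "administrator@vsphere.local", "secret"
NEW_DATASTORE = "nimble-ds-01"
VMS_TO_MOVE = ["app-vm-01", "app-vm-02"]

ctx = ssl._create_unverified_context()          # lab only; validate certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
content = si.RetrieveContent()

def find_all(vimtype):
    """Collect all managed objects of the given type from the inventory."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    objs = list(view.view)
    view.DestroyView()
    return objs

datastore = next(ds for ds in find_all(vim.Datastore) if ds.name == NEW_DATASTORE)
vms = [vm for vm in find_all(vim.VirtualMachine) if vm.name in VMS_TO_MOVE]

required = sum(vm.summary.storage.committed + vm.summary.storage.uncommitted for vm in vms)
free = datastore.summary.freeSpace

print(f"Required: {required / 2**30:.1f} GiB, free on {NEW_DATASTORE}: {free / 2**30:.1f} GiB")
print("OK to migrate" if free > required else "Not enough space on the new datastore")

Disconnect(si)
```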

Step 2 – Trigger a Storage vMotion

Right-click the VM that needs to be migrated and select “Migrate”. On the next screen, select the option
“Change Datastore”, then select the datastore configured on the Nimble Storage. Once the configuration
changes have been reviewed, click “Finish” to start the Storage vMotion.
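The same operation can also be driven through the vSphere API. The pyvmomi sketch below is the rough equivalent of the “Change Datastore” wizard; the vCenter address, credentials, VM name and datastore name are placeholders, and error handling is omitted.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholders -- substitute your vCenter details, VM name and target datastore.
VCENTER, USER, PWD = "vcenter.example.com", "administrator@vsphere.local", "secret"
VM_NAME, TARGET_DATASTORE = "app-vm-01", "nimble-ds-01"

ctx = ssl._create_unverified_context()          # lab only; validate certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Look up a managed object of the given type by its inventory name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, VM_NAME)
target_ds = find_by_name(vim.Datastore, TARGET_DATASTORE)

# Equivalent of Migrate -> "Change Datastore": relocate only the storage.
spec = vim.vm.RelocateSpec(datastore=target_ds)
WaitForTask(vm.RelocateVM_Task(spec=spec))
print(f"{VM_NAME} relocated to {TARGET_DATASTORE}")

Disconnect(si)
```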



Step 3 – Replicate the Nimble Storage to the Remote Datacenter

Set up replication using Nimble Storage SmartReplicate between the array in the old datacenter and
the array in the new datacenter. Setting up replication between two Nimble Storage arrays is simple: by
creating a replication partner on each of the two arrays, replication is automatically set up. Replication is
automatic, based on the protection schedule assigned to the volume or volume collection. The details of
creating a replication partner can be found in the Admin Guide.



Step 4 – Bring up VMs in the New Datacenter

Once all the VMs have been migrated to the Nimble Storage datastore at the source and the data has been
replicated to the Nimble Storage in the target DC, a handover can be initiated. The Nimble handover
process automatically pauses the replication partnership and transfers volume ownership, allowing the
VMs to be brought online in the target DC.

