White Paper
September 2012
Table of Contents
Introduction
Tested Solution Components
    Hardware Components
        Hitachi Virtual Storage Platform
        Hitachi Compute Blade 500
        Brocade Ethernet Network
        Brocade Fibre Channel Fabric
    Software Components
        Hitachi Unified Compute Platform for VMware vSphere integrated solution
        Hitachi TrueCopy Heterogeneous Remote Replication Bundle
        Hitachi Universal Replicator
        Hitachi ShadowImage Heterogeneous Replication
        VMware vSphere 5.1
        VMware vCenter Site Recovery Manager 5.1
Solution Implementation
    Configure the Storage Area Network
    Configure the Storage Replication Link
        Hitachi TrueCopy Remote Replication Bundle
        Hitachi Universal Replicator
        Creating Journal Groups for Hitachi Universal Replicator
    Configure Storage
        Create and Attach Datastore to Host via UCP
        Create LDEVs within the Management Pool (Journal Vol, & Command Device)
        Add Volumes to Management Node HSDs
        Create a Command Device
    UCP Pro for VMware vSphere SRM environment
    Configure Command Control Interface (CCI) for Replication
        Install Command Control Interface
        Configure Command Devices
        Configure Remote Replication Using Command Control Interface
        (Optional) Configure Local Replication Using Command Control Interface
    Implement Storage Replication for Site Recovery Manager
        Enabling replication for TrueCopy Synchronous
        Enabling replication for Universal Replicator
        (Optional) Enabling replication for ShadowImage
    Configure Storage Replication Adapter with VMware Site Recovery Manager
    Other Implementation Actions
Configuration Files
    Hitachi TrueCopy Remote Replication Bundle Files
    Hitachi Universal Replicator Files
    Hitachi ShadowImage Heterogeneous Replication Files
For More Information
Introduction
When deploying a virtualized data center with Hitachi Unified Compute Platform (UCP) Pro for VMware vSphere, an IT infrastructure gains several benefits, including multiple layers of protection. Expanding data centers across multiple locations provides an opportunity to increase these layers of protection beyond a single data center.
VMware vCenter Site Recovery Manager is a business continuity and disaster recovery solution that integrates VMware vSphere infrastructures with data replication on storage systems to protect large sites and business-critical applications. Using an automated and tested recovery plan, a production data center can perform a disaster recovery failover in an organized and proven manner.
Remote data replication is a key function in building out stable and reliable disaster recovery environments. Replicating data to
a remote secondary site represents the most effective insurance policy against a catastrophic failure. Although you can
perform data replication at the server level, you can perform data replication more effectively within the storage infrastructure.
Hitachi Virtual Storage Platform is an integral piece in building out a robust business continuity and disaster recovery solution.
VMware vCenter Site Recovery Manager integrates tightly with Hitachi Virtual Storage Platform using the Hitachi Storage
Replication Adapter. The advanced functions found in the Virtual Storage Platform fulfill the requirements of a virtual
infrastructure and provide reliable protection by managing data replication across data centers.
This paper is intended for IT administrators responsible for the storage, deployment, or administration of the Hitachi UCP Pro platform. It assumes familiarity with storage area network (SAN)-based storage systems, VMware vSphere, Hitachi data replication technologies, and common IT storage practices.
Tested Solution Components
Hardware Components
The following list describes the hardware needed to deploy this solution. The UCP Pro solution at each site includes all of the components listed:
Hitachi Compute Blade 500 chassis with compute node server blades
Hitachi CR210 1U servers
Brocade 5460 in-chassis Fibre Channel switch modules
Brocade 6510 Fibre Channel switches
Brocade 6746 Ethernet switches (access layer)
Brocade 6720 Ethernet switches (aggregation layer)
Hitachi Virtual Storage Platform (VSP), one per site
For purposes of this paper, we refer to each UCP Pro instance as Site A and Site B respectively. The figure below illustrates a high-level configuration of the connectivity between the two UCP sites. Detailed cabling designs within a base compute rack and its associated base storage rack can be found in the UCP Hardware Assembly & Configuration Manual ({official doc number here}), while inter-site connectivity and configuration are detailed later in this paper.
[Figure: High-level connectivity between Site A and Site B across the LAN/WAN, showing 8 Gb/sec Fibre Channel links, 16 Gb/sec Fibre Channel inter-switch links, and 10 Gb/sec IP links]
Hitachi Virtual Storage Platform
Scale Up: Meet increasing demands by dynamically adding processors, connectivity, and capacity in a single unit. Provide the highest performance for both open and mainframe environments.
Scale Out: Meet multiple demands by dynamically combining multiple units into a single logical system with shared resources. Support increased demand in virtualized server environments. Ensure safe multi-tenancy and quality of service through partitioning of cache and ports.
Scale Deep: Extend storage value by dynamically virtualizing new and existing external storage systems. Extend the advanced functions of Hitachi Virtual Storage Platform to multivendor storage. Offload less-demanding data to external tiers to save costs and to optimize the availability of tier-one resources.
For more information, see Hitachi Virtual Storage Platform on the Hitachi Data Systems website.
Hitachi Compute Blade 500
For more information, see Hitachi Compute Blade Family on the Hitachi Data Systems website.
Brocade Ethernet Network
The Brocade VDX 6720 is a high-performance, ultra-low-latency, wire-speed 10 Gigabit Ethernet (GbE) fixed-configuration switch. The Brocade VDX 6720 is an ideal platform for a variety of top-of-rack (ToR) fabric deployments. Within the UCP Pro base compute rack, there are two 6720s configured as aggregation layer switches.
The Brocade VDX 6746 is an OEM ultra-low-latency, wire-speed 10 Gigabit Ethernet (GbE) in-chassis switch. For each CB 500 compute chassis, there are two 6746s implemented as access layer switches and connected to the 6720s at the top of the UCP Pro base compute rack.
Brocade Fibre Channel Fabric
The Brocade 6510 Fibre Channel switch meets the demands of hyper-scale, private cloud storage environments by delivering market-leading 16 Gb/sec Fibre Channel technology and capabilities that support highly virtualized environments.
The Brocade 5460 in-chassis Fibre Channel switch is an OEM-developed switch based on the popular 5000-series Fibre Channel switch line.
Software Components
The table below lists the software needed to deploy this solution.

Software            Version
VMware vCenter      5.1 build 799733
UCP Director        7.3.1-02
                    7.3.0-02
                    02.01.3
                    01-28-03/05 (Note: Ver. 01-24/03/13 or later is required)
                    Microcode dependent
Hitachi Universal Replicator
Replication across any distance without significant negative effect on host performance: Hitachi Universal Replicator has been used successfully in replication configurations that span thousands of miles.
No acknowledgement dependencies from the secondary site: Hitachi Universal Replicator replicates to a remote site without the performance impact of waiting to acknowledge each individual record. Instead, Hitachi Universal Replicator manages the remote relationship at a controller level. During a disruption of communication to the remote unit, or while exceeding the capability of the replication circuit, Hitachi Universal Replicator retains the replicated write data in local journals and then sends the retained write data to the remote site when the condition is corrected.
There is a potential for some data lag between remote and primary sites, particularly at longer distances. Manage the recovery
point objective with the configuration of the data communication lines. When I/O activity at the primary site exceeds the
capacity of the communication channel, the data is staged and moved to the secondary site in the same order as it was written
at the primary site.
VMware vSphere 5.1
ESXi 5.1: This is a hypervisor that loads directly on a physical server. It partitions one physical machine into many virtual machines that share hardware resources. UCP Pro uses VMware's Auto Deploy feature to minimize footprint by using stateless ESXi, where the hypervisor is loaded directly into memory and is not booted from disk.
vCenter Server: This allows management of the vSphere environment through a single user interface. With vCenter, features such as vMotion, Storage vMotion, Distributed Resource Scheduler, high availability, and fault tolerance are available.
Solution Implementation
Although this paper details the configuration of two UCP Pro sites with Compute Blade 500 chassis that are fully populated with all 8 blades, the following section focuses on demonstrating replication for a single host. The steps and concepts can be expanded to include all blades within the site.
Follow these steps to deploy this solution:
1. Configure the Storage Area Network for virtual fabric and replication links
2. Configure the Storage Replication Link
3. Configure Storage
4. Configure the vSphere environment
5. Configure Command Control Interface (CCI) for Replication
6. Implement Storage Replication for Site Recovery Manager
Configure the Storage Area Network
[Figure: Virtual fabric and storage port layout for Site A and Site B. The Brocade 6510 fabrics at each site carry the management virtual fabric (vFab 1), the replication virtual fabric (vFab 2), and the compute virtual fabric (vFab 128), with 8 Gb/sec Fibre Channel host connections and 16 Gb/sec two-channel inter-switch links between the sites]
Each server blade at the local and remote site uses dual-port Fibre Channel mezzanine cards. These connect to the internal Fibre Channel switch modules located in the Hitachi Compute Blade 500 chassis, as depicted earlier in figure 3. Four inter-switch links from each internal Fibre Channel switch module connect to a Brocade 6510. In turn, each Brocade 6510 switch connects to eight target ports on the Hitachi Virtual Storage Platform.
The Hitachi Virtual Storage Platform supports active-active multipath connectivity. UCP best practices ensure that at least four unique paths exist from the ESXi host to the storage system to maximize availability. The multipathing policy was set to round robin in ESXi 5.1.
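The path selection policy can also be checked or set from the ESXi command line. The following is a sketch only; the device identifier is a hypothetical placeholder, and it assumes the VSP LUNs are claimed by the default active-active SATP in ESXi 5.1.

esxcli storage nmp device list
esxcli storage nmp device set --device naa.60060e8005275100000027510000100a --psp VMW_PSP_RR
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR

The first command lists the devices and their current policy, the second sets round robin for a single LUN, and the third makes round robin the default for all devices claimed by that SATP.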
The tables below show the zoning configuration for the protected site and the recovery site.
Protected Site (Site A)

Compute1, HBA Port 0, fabric 6510 #1 (vFab #128):
    Port0_Compute1_CL1B_VSP_10_20_90_66    storage port CL1B
    Port0_Compute1_CL2B_VSP_10_20_90_66    storage port CL2B
Compute1, HBA Port 1, fabric 6510 #2 (vFab #128):
    Port1_Compute1_CL3B_VSP_10_20_90_66    storage port CL3B
    Port1_Compute1_CL4B_VSP_10_20_90_66    storage port CL4B
Mgmt1, HBA Port 0, fabric 6510 #1 (vFab #1):
    Port0_Mgmt1_CL1A_VSP_10_20_90_66       storage port CL1A
    Port0_Mgmt1_CL2A_VSP_10_20_90_66       storage port CL2A
Mgmt1, HBA Port 1, fabric 6510 #2 (vFab #1):
    Port1_Mgmt1_CL3A_VSP_10_20_90_66       storage port CL3A
    Port1_Mgmt1_CL4A_VSP_10_20_90_66       storage port CL4A
Mgmt2, HBA Port 0, fabric 6510 #1 (vFab #1):
    Port0_Mgmt2_CL1A_VSP_10_20_90_66       storage port CL1A
    Port0_Mgmt2_CL2A_VSP_10_20_90_66       storage port CL2A
Mgmt2, HBA Port 1, fabric 6510 #2 (vFab #1):
    Port1_Mgmt2_CL3A_VSP_10_20_90_66       storage port CL3A
    Port1_Mgmt2_CL4A_VSP_10_20_90_66       storage port CL4A

Table 1 - Site A zoning, Management virtual fabric (1) & Compute virtual fabric (128)
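For reference, zones such as those in Table 1 can also be created from the Brocade Fabric OS command line on the 6510 switches. This is a sketch only; the WWPNs and the zoning configuration name (UCP_SiteA_cfg) are hypothetical placeholders, and the logical switch context must match the virtual fabric being zoned.

setcontext 128
zonecreate "Port0_Compute1_CL1B_VSP_10_20_90_66", "10:00:00:00:c9:aa:bb:01; 50:06:0e:80:05:27:51:10"
cfgadd "UCP_SiteA_cfg", "Port0_Compute1_CL1B_VSP_10_20_90_66"
cfgsave
cfgenable "UCP_SiteA_cfg"

Repeat the zonecreate and cfgadd commands for each zone in the table, on each fabric and at each site. If the zoning configuration does not yet exist, create it with cfgcreate instead of cfgadd.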
Recovery Site (Site B)

Compute1, HBA Port 0, fabric 6510 #1 (vFab #128):
    Port0_Compute1_CL1B_VSP_10_20_90_67    storage port CL1B
    Port0_Compute1_CL2B_VSP_10_20_90_67    storage port CL2B
Compute1, HBA Port 1, fabric 6510 #2 (vFab #128):
    Port1_Compute1_CL3B_VSP_10_20_90_67    storage port CL3B
    Port1_Compute1_CL4B_VSP_10_20_90_67    storage port CL4B
Mgmt1, HBA Port 0, fabric 6510 #1 (vFab #1):
    Port0_Mgmt1_CL1A_VSP_10_20_90_67       storage port CL1A
    Port0_Mgmt1_CL2A_VSP_10_20_90_67       storage port CL2A
Mgmt1, HBA Port 1, fabric 6510 #2 (vFab #1):
    Port1_Mgmt1_CL3A_VSP_10_20_90_67       storage port CL3A
    Port1_Mgmt1_CL4A_VSP_10_20_90_67       storage port CL4A
Mgmt2, HBA Port 0, fabric 6510 #1 (vFab #1):
    Port0_Mgmt2_CL1A_VSP_10_20_90_67       storage port CL1A
    Port0_Mgmt2_CL2A_VSP_10_20_90_67       storage port CL2A
Mgmt2, HBA Port 1, fabric 6510 #2 (vFab #1):
    Port1_Mgmt2_CL3A_VSP_10_20_90_67       storage port CL3A
    Port1_Mgmt2_CL4A_VSP_10_20_90_67       storage port CL4A

Table 2 - Site B zoning, Management virtual fabric (1) & Compute virtual fabric (128)
In addition, replication requires dedicated Fibre Channel connections between storage systems at the local and the remote
site. For this solution, each Hitachi Virtual Storage Platform uses a total of two initiator ports and two RCU target ports.
The following table and diagram show the storage replication paths and zoning between the storage systems at each site.
Protected Site VSP1 port 5A (Initiator), fabric 6510 #1 (both sites), vFab #2:
    zone VSP1_5A_VSP2_5A    Recovery Site VSP2 port 5A (RCU Target)
Protected Site VSP1 port 6A (RCU Target), fabric 6510 #1 (both sites), vFab #2:
    zone VSP1_6A_VSP2_6A    Recovery Site VSP2 port 6A (Initiator)
Protected Site VSP1 port 7A (RCU Target), fabric 6510 #2 (both sites), vFab #2:
    zone VSP1_7A_VSP2_7A    Recovery Site VSP2 port 7A (Initiator)
Protected Site VSP1 port 8A (Initiator), fabric 6510 #2 (both sites), vFab #2:
    zone VSP1_8A_VSP2_8A    Recovery Site VSP2 port 8A (RCU Target)
[Figure: Replication path configuration between VSP Site A and VSP Site B. Ports 5A and 8A on Site A are initiators paired with RCU target ports 5A and 8A on Site B; ports 6A and 7A on Site A are RCU targets paired with initiator ports 6A and 7A on Site B]
Configure the Storage Replication Link

Replication Type                    Recovery Point Objective
Hitachi TrueCopy Synchronous        Low RPO
Hitachi Universal Replicator        Flexible RPO
The following describes how to configure the storage replication link between the local and remote storage systems for the
respective replication technologies.
Hitachi TrueCopy Remote Replication Bundle
From the Actions menu, point to Remote Copy, then point to TrueCopy, and then click RCU Operation.
Modify settings on the storage ports.
1 Click the Pen button in the top right area of the pane to enter Modify mode.
2 Click the Port option.
3 Right-click the storage ports to change the port attribute to Initiator or RCU Target, as appropriate. This solution uses
ports for Initiator and RCU Target as depicted above in figure 5.
4 Click the MCU&RCU option.
5 Click CU Free from the navigation tree. TrueCopy software selects the first available control unit to be the RCU.
6 Right-click the blank space on the right pane. A shortcut menu opens.
Add the RCU (Fibre) connections.
1 Click RCU Operation and then click Add RCU (Fibre) from the shortcut menu. The Add RCU (Fibre) dialog box
opens.
2 Provide the following information in the Add RCU (Fibre) dialog box:
1 Type the serial number for the remote storage system in S/N.
2 Click the LDKC of the remote storage system from the LDKC list.
3 Click 6 (VSP) from the Controller ID list.
4 Select the Default check box for the Path Group ID.
5 Click the local storage system port and its corresponding pair port in the MCU-RCU Path area. This solution uses
ports for Initiator and RCU Target as depicted above in figure 5.
3 Set options.
1 Click Option.
2 Type values in Minimum Paths, RIO MIH Time (sec.) and Round Trip Time (ms).
3 Click Set.
Accept the changes and complete the changes.
1 Click Apply. A confirmation dialog box opens.
2 Click OK to confirm applying the changes. Another confirmation dialog box opens.
3 Click OK.
Repeat these steps on the remote storage system to add an RCU (Fibre) connection for reverse replication. This is necessary
for VMware Site Recovery Manager to perform failback operations.
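As an alternative to the Storage Navigator steps above, the port attributes can be scripted with the raidcom command from Command Control Interface once a command device is available. This is a sketch only, using the Site A port roles from the replication path table; the credentials are placeholders, and the attribute keywords should be verified against the CCI reference for your microcode.

raidcom -login maintenance-user secret-password -IH0
raidcom modify port -port CL5-A -port_attribute MCU -IH0
raidcom modify port -port CL8-A -port_attribute MCU -IH0
raidcom modify port -port CL6-A -port_attribute RCU -IH0
raidcom modify port -port CL7-A -port_attribute RCU -IH0
raidcom -logout -IH0

In raidcom terms, MCU corresponds to an initiator port and RCU to an RCU target port.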
Hitachi Universal Replicator
From the Actions menu, point to Remote Copy, then point to Universal Replicator, and then click DKC Operation.
Modify settings on the storage ports.
1 Click the Pen button in the top right area of the pane to enter Modify mode.
2 Click the Port option.
3 Right-click the storage ports to change the port attribute to Initiator or RCU Target, as appropriate. This solution uses
ports for Initiator and RCU Target as depicted above in figure 4.
4 Click the DKC option. The LDKC navigation tree displays in the left pane.
5 Click LDKC#00 on the navigation tree.
Repeat these steps on the remote storage system to add a DKC connection for reverse replication. This is necessary for
VMware Site Recovery Manager to perform failback operations.
Creating Journal Groups for Hitachi Universal Replicator
From the Actions menu, point to Remote Copy, then point to Universal Replicator, and then click Journal Operation.
Modify settings for the journals.
1 Click the Pen button in the top right area of the pane to enter Modify mode.
2 In the Journals tree, click to expand Free.
3 Select (highlight) a free journal group from Free. Details of the selected free journal group display in the right area.
4 Right-click the highlighted group in the right area, and then click Edit JNL VOLs from the shortcut menu. The Edit JNL
Volumes dialog box opens.
5 Display the free volumes.
1 Click the CU option.
2 From the menu on the right, click a CU. The Free Volumes pane populates.
6 Populate the journal volumes.
1 Select one or more volumes in the Free Volumes area. The Add button becomes available.
2 Click Add. The JNL Volumes area populates.
Save the changes.
1 Click Set and then click Apply. A confirmation dialog box opens.
2 Click Yes. A final confirmation opens.
3 Click OK.
Repeat these steps on the remote storage system to add a journal group.
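Journal groups can also be created with raidcom instead of the Journal Operation pane. This is a minimal sketch, assuming journal ID 0 and a hypothetical journal volume LDEV ID (decimal 64774, that is 00:FD:06); verify the exact syntax against the CCI reference for your microcode before use.

raidcom add journal -journal_id 0 -ldev_id 64774 -IH0

Run the equivalent command against the remote storage system's instance to create the remote journal group.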
Configure Storage
This is how to configure your storage for this solution using Hitachi Dynamic Provisioning on Hitachi Virtual Storage Platform.
The storage configuration for this solution includes the following:
Over provisioning
Wide striping
On-line expansion of dynamic provisioning pools
This solution uses a dynamic provisioning pool comprised of a single RAID group with eight 600 GB 10k RPM SAS drives in a RAID-6 (6D+2P) configuration on each storage system. To meet increased performance and capacity needs of your virtual environment, add more RAID groups to the dynamic provisioning pool.
Using a RAID-6 configuration lowers the risk of data loss or pool failure, which is a primary concern for virtual machines protected by VMware Site Recovery Manager. For further details on pool configuration and its benefits, refer to the Hitachi Virtual Storage Platform architecture guide at http://www.hds.com/assets/pdf/hitachi-architecture-guide-virtual-storage-platform.pdf.
The figure below shows the configuration for the dynamic provisioning pool on each storage system. For the sake of simplicity in this paper, LDEV IDs are identical between sites.
[Figure: Dynamic provisioning pool layout. The Management Pool contains Datastore 1, Datastore 2, the journal volumes, and the command device (CMD)]
The UCP Pro solution at each site ships with the Management Pool and its volumes pre-configured at the factory, including two shared datastores that hold the UCP management virtual machines.
The compute pool is configured to customer specifications onsite during the UCP system installation and configuration. Upon completion of the UCP Pro system installation, the entire server/storage/network stack is in an operational state. For the purposes of this paper, this solution provisions three additional LDEVs to the VMware infrastructure at both sites:
A datastore for the virtual machines to be protected
A journal volume
A command device
The creation and presentation of these LDEVs on the storage system, and their attachment through UCP, are described in detail in the following sections.
Create and Attach Datastore to Host via UCP
1. From within the vSphere Client, right-click the server that you want to add a volume to, and then click the Configure Host Storage link.
2. Select the Create New Volume option.
3. Click Next.
4. On the Create New Volume step:
   In the Disk Size field, type the total amount of space, in GB, to use to create the volume.
   To format the volume as a datastore, select the Format as Datastore option. If this option is not selected, the created volume must be formatted later before it can be used.
   To manually select the array ports to use, select the Manually Select Array Ports option, and then select the appropriate ports for each fabric.
   In the Pools table, select the pool to create the datastore from.
5. Click Finish.
6. If this is the first and only volume created within the array so far, the volume number (LDEV ID) should be 00:00:00. Verify the LDEV ID, and if it is not 00:00:00, record the actual value for later use in the replication configuration.
Create LDEVs within the Management Pool (Journal Vol, & Command Device)
This procedure assumes completion of dynamic pool creation during UCP Pro initial configuration in your environment.
To create volumes that will be used for Journal Volume and Command Device, follow these steps on BOTH sites.
1. From the Actions menu, point to Logical Device and then click Create LDEVs. The Create LDEVs window opens.
2. From the Provisioning Type list, click Dynamic Provisioning.
3. From the Emulation Type list, click OPEN-V.
4. From the RAID Level list, click RAID-6 (6D+2P). These options allow you to filter the available RAID group volumes.
5. Select the pool options:
   Select the pool that represents the Management Pool (Pool 0) in the Available Pools area and click OK.
   Type the capacity in LDEV Capacity and click a unit of measure from the list. This solution uses 5 GB LDEVs for the command device.
   In Number of LDEVs, type the number of LDEVs of that size to create.
   In the LDEV Name area, type a prefix in Prefix and type an initial number in Initial Number.
6. Add the LDEVs:
   Click Next.
   Click Apply.
7. Repeat steps 1 through 6 for the journal volume. The capacity for the journal volume in this solution is 50 GB.
Add Volumes to Management Node HSDs
1. From the Actions menu, point to Logical Device and then click Add LUN Paths. The Add LUN Paths window opens.
2. From the Available LDEVs list, select the 5 GB volume that will become the command device, and click Add to add it to the Selected LDEVs list.
3. Click Next.
4. From the Available Host Groups list, select all 8 management HSDs on ports CL1-A, CL2-A, CL3-A, and CL4-A (one for each management host).
5. Click Add to add the host groups to the Selected Host Groups list.
6. Click Next.
7. Verify the LUN paths, and click Finish.
Create a Command Device
Create an LDEV in the Hitachi Dynamic Provisioning pool as described in "Create LDEVs within the Management Pool (Journal Vol, & Command Device)" and map it to the created host groups. Then convert the LDEV to a command device in Storage Navigator by following these steps:
1. Select the LDEV from the Explorer > Logical Devices area.
2. From the Actions menu, point to Logical Device and then click Edit Command Devices.
3. In the Edit Command Devices window, click the Enable option.
4. Click Finish.
5. Click Apply.
[Figure: UCP Pro management cluster before adding the SRM components. At Site A, ESXi01 and ESXi02 host the AD, VC5, SQL, UCP, HCS, and Util virtual machines on shared Management Datastore1 (00:FD:03) and Management Datastore2 (00:FD:04) on VSP Site A, with a boot volume per host; Site B mirrors this layout on VSP Site B]
UCP Pro for VMware vSphere SRM environment
For this SRM/SRA solution, an additional set of components must be configured in the management cluster and in the pool for the management volumes on both sites, including a Site Recovery Manager virtual machine at each site and a command device LDEV presented to that virtual machine as a raw device mapping. Once completed, the management cluster will look like the following figure:
[Figure: Management cluster at Site A and Site B after adding the SRM components. Each site adds an SRM VM on the management hosts and a command device LDEV (00:FD:05) presented as an RDM, alongside Management Datastore1 (00:FD:03) and Management Datastore2 (00:FD:04)]
Looking at both site configurations, with the management environment in conjunction with the compute environment and the SRM protection group, the UCP Pro / SRM solution looks like the following figure:
[Figure: Complete UCP Pro / SRM solution. The management VMs (AD, VC5, SQL, UCP, SRM, Util, and HCS) and the command device RDM (00:FD:05) run on the management hosts at each site; a compute datastore at Site A is replicated to Site B with TrueCopy or HUR replication, and the SRM protection group maps the P-VOL and its VMs at Site A to the S-VOL and placeholder VMs at Site B]
Configure Command Control Interface (CCI) for Replication
Install Command Control Interface
Install Command Control Interface on the Site Recovery Manager server at each site. Repeat these steps on the primary site and the recovery site.
1. Insert the installation media for Command Control Interface into an I/O device (for example, a CD-ROM drive) connected to the virtual machine.
2. Run Setup.exe on the installation media and follow the on-screen instructions to complete the installation.
3. Verify installation of the latest version of Command Control Interface by typing the following command at a command prompt from the Hitachi Open Remote Copy Manager installation directory (C:\HORCM\etc):
raidqry -h
The output looks something like the following:
Model: RAID-Manager/WindowsNT
Ver&Rev: 01-25-03/11
Usage : raidqry [options] for HORCM
Configure Command Devices
1. From the VMware vSphere Client, add the command device LDEV to the Site Recovery Manager virtual machine as a physical RDM virtual disk.
2. Configure the command device in the guest operating system:
   In Microsoft Windows 2008, from the Server Manager menu, point to Storage and then click Disk Management.
   Right-click the RDM disk and click Online.
   Right-click the RDM disk and click Initialize Disk.
   Click MBR (Master Boot Record) as the partition style.
Do not create a volume on the disk. Define and configure the command device as a raw device with no file system and no mount operation.
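The same online and initialize steps can also be performed from the command line with diskpart. This is a minimal sketch, assuming the RDM appears as disk 1 in the guest; confirm the disk number with list disk first.

diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> attributes disk clear readonly
DISKPART> online disk
DISKPART> convert mbr
DISKPART> exit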
Configure Remote Replication Using Command Control Interface
Command Control Interface uses components on both the storage system and the server:
Storage system: Command devices and the Hitachi TrueCopy or Hitachi Universal Replicator volumes (P-VOL and S-VOL)
Server: Hitachi Open Remote Copy Manager (HORCM), configuration definition files (for example, horcm0.conf), and Command Control Interface commands
The Hitachi Open Remote Copy Manager operates as a daemon process on the VMware Site Recovery Manager virtual
machines. When activated, Open Remote Copy Manager refers to the configuration definition files. The instance
communicates with the storage sub-system and remote servers.
Modify the services file in the C:\Windows\System32\drivers\etc folder to register the port name and number for each Open
Remote Copy Manager instance on each server. The port name entries for Open Remote Copy Manager in the services file
must be the same on all the servers. For example, if the service number for port name horcm0 is 11000/udp, the service
number for port name horcm1 must be 11001/udp in the services file on the Site Recovery Manager server at the local and the
remote site.
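For example, with the service numbers above, the following lines could be appended to the services file on the Site Recovery Manager server at each site (the same entries on both servers):

horcm0    11000/udp    # HORCM instance 0
horcm1    11001/udp    # HORCM instance 1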
The figure below shows the two Site Recovery Manager servers and their HORCM instance configuration used with Command Control Interface to manage TrueCopy or Universal Replicator replication.
The Open Remote Copy Manager configuration file defines the communication path and the logical units to be controlled.
Each instance has its own configuration file saved in the C:\Windows directory.
The content of the horcm0.conf file and the horcm1.conf file used to define a TrueCopy pair is available in "Hitachi TrueCopy Remote Replication Bundle Files."
The content of the horcm0.conf file and the horcm1.conf file used to define a Universal Replicator pair are available in "Hitachi
Universal Replicator Files."
After adding the port name entries and defining the configuration files, start the Hitachi Open Remote Copy Manager instance.
At the command prompt, type the following:
cd c:\HORCM\etc
horcmstart.exe *
An asterisk [*] is used in place of the instance number in the command. For example, for the server containing horcm0.conf,
type horcmstart.exe 0.
After executing the instructions described in the previous section at both sites, Open Remote Copy Manager instances should
be running on the Site Recovery Manager servers at both sites. Verify the pair relationship by running the pairdisplay
command from the horcm0 instance on the primary site's Site Recovery Manager server:
pairdisplay.exe -g <grp> -IH<HORCM instance #> -fcx
Initially, the volumes are in simplex (SMPL) mode. The volumes are not paired and synchronized until running the paircreate
command.
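For reference, the initial pairings can be created from the local site's horcm0 instance with commands along the following lines. This is a sketch only; the group names (VMFS_TC and VMFS_HUR) and the journal IDs are assumptions that must match your configuration definition files and journal groups.

paircreate -g VMFS_TC -vl -f never -IH0
paircreate -g VMFS_HUR -vl -f async -jp 0 -js 0 -IH0
pairdisplay -g VMFS_TC -IH0 -fcx

The -vl option makes the local volume the P-VOL, -f sets the fence level (never for the TrueCopy pair, async for the Universal Replicator pair), and -jp and -js identify the primary and secondary journal groups for Universal Replicator. Monitor the copy progress with pairdisplay until the status shows PAIR.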
The figures below show the pairdisplay.exe output of the TrueCopy pair defined in "Hitachi TrueCopy Remote Replication Bundle Files" and the Universal Replicator pair defined in "Hitachi Universal Replicator Files."
Figure 15 - Pairdisplay output showing copy status of TrueCopy pair during paircreate
Figure 16 - Pairdisplay output showing copy status of HUR pair during paircreate
Figure 17 - Pairdisplay output showing copy status of ShadowImage pair during paircreate
Configure Storage Replication Adapter with VMware Site Recovery Manager
1. From the Site Recovery Manager management window, click Array Managers.
2. In the right pane, click the SRAs tab and click Rescan SRAs. As shown in Figure 18, the output shows the installed Hitachi adapter version and lists the supported array models.

Figure 18 - SRA

3. In the Array Managers pane, click the local site and click Add Array Manager. The Add Array Manager wizard opens.
4. Type a Display Name, click RAID Manager Storage Replication Adapter from the SRA Type list, and then click Next.
5. In the Connection to HORCM Server window, add the connection parameters:
   If user authentication is not configured on the command device, type user for Username and password for Password.
   If user authentication is configured on the command device, use your own user name and password.
6. Click Next. The wizard reports that the Array Manager was added successfully.
7. Click Finish.
8. Enable the use of the array pairs. Click one of the array managers, click the Array Pairs tab, and then click Enable to use the array pair with VMware Site Recovery Manager.
9. Click the Devices tab to confirm that Site Recovery Manager reports the storage replication correctly.
Figure 20 shows a Site Recovery Manager environment configured to monitor a TrueCopy and a Universal Replicator pair. If the storage replication status does not report correctly, verify the pair relationship with the pairdisplay command, as described in "Configure Remote Replication Using Command Control Interface."
For more information about setting up Site Recovery Manager, see the Site Recovery Manager Administration Guide
(http://www.vmware.com/support/pubs/srm_pubs.html).
Configuration Files
These are the configuration files you need for this implementation.
Hitachi TrueCopy Remote Replication Bundle Files
Each configuration definition file contains the following sections:
HORCM_MON: Contains the information needed to monitor a HORCM instance, such as the IP address, the HORCM instance or service, the polling interval for monitoring paired volumes, and the timeout period for communication with the remote server
HORCM_CMD: Contains the device path information for the command device
HORCM_LDEV: Defines the storage system device address for the paired logical volume names
HORCM_INST: Defines the network address of the remote server
Copy horcm0.conf, shown in Figure 21, to the C:\Windows folder on the Site Recovery Manager server at the local site. Copy horcm1.conf, shown in Figure 22, to the C:\Windows folder on the Site Recovery Manager server at the remote site. Use the IP addresses and serial numbers for your environment in the configuration files.
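The actual files used in this solution appear in Figure 21 and Figure 22. As a structural sketch only, a horcm0.conf for the local Site Recovery Manager server might look like the following; the IP addresses, serial number, LDEV ID, group name, and command device path shown here are hypothetical placeholders.

HORCM_MON
#ip_address     service   poll(10ms)   timeout(10ms)
192.0.2.10      horcm0    1000         3000

HORCM_CMD
#dev_name
\\.\PhysicalDrive1

HORCM_LDEV
#dev_group   dev_name      Serial#   CU:LDEV(LDEV#)   MU#
VMFS_TC      datastore01   53001     00:20

HORCM_INST
#dev_group   ip_address    service
VMFS_TC      192.0.2.20    horcm1

The horcm1.conf file on the remote server mirrors this structure, with the remote storage system's serial number and S-VOL LDEV ID in HORCM_LDEV and the local server's address in HORCM_INST.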
Figure 21 - horcm0.conf file for the TrueCopy pair
Figure 22 - horcm1.conf file for the TrueCopy pair
Hitachi Universal Replicator Files
Each configuration definition file contains the following sections:
HORCM_MON: Contains the information needed to monitor a HORCM instance, such as the IP address, the HORCM instance or service, the polling interval for monitoring paired volumes, and the timeout period for communication with the remote server
HORCM_CMD: Contains the device path information for the command device
HORCM_LDEV: Defines the storage system device address for the paired logical volume names
HORCM_INST: Defines the network address of the remote server
Copy horcm0.conf, shown in Figure 23, to C:\Windows on the Site Recovery Manager server at the local site. Copy horcm1.conf, shown in Figure 24, to C:\Windows on the Site Recovery Manager server at the remote site. Use the IP addresses and serial numbers for your environment in the configuration files.
Figure 23 - horcm0.conf file for the Universal Replicator pair
Figure 24 - horcm1.conf file for the Universal Replicator pair
Hitachi ShadowImage Heterogeneous Replication Files
Each configuration definition file contains the following sections:
HORCM_MON: Contains the information needed to monitor a HORCM instance, such as the IP address, the HORCM instance or service, the polling interval for monitoring paired volumes, and the timeout period for communication with the remote server
HORCM_CMD: Contains the device path information for the command device
HORCM_LDEV: Defines the storage system device address for the paired logical volume names
HORCM_INST: Defines the network address of the remote server
This pair definition uses the TrueCopy pair's S-VOL as the ShadowImage pair's P-VOL and modifies the TrueCopy horcm1.conf configuration file from "Hitachi TrueCopy Remote Replication Bundle Files." Specify mirror unit numbers (MU#) only when defining ShadowImage pairs. Initiate ShadowImage replication only after establishing TrueCopy replication.
Copy the horcm1.conf (Figure 25) and horcm2.conf (Figure 26) configuration files to C:\Windows on the Site Recovery Manager server at the remote site. Use the IP addresses and serial numbers for your environment in the configuration files.
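As a structural sketch of that modification, the HORCM_LDEV section of horcm1.conf might list the same LDEV twice, once in the TrueCopy group without a mirror unit number and once in a ShadowImage group with MU# 0; the group names, serial number, and LDEV ID here are hypothetical placeholders.

HORCM_LDEV
#dev_group   dev_name      Serial#   CU:LDEV(LDEV#)   MU#
VMFS_TC      datastore01   53002     00:20
VMFS_SI      datastore01   53002     00:20            0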
Figure 25 - horcm1.conf file for the ShadowImage pair
Figure 26 - horcm2.conf file for the ShadowImage pair
Corporate Headquarters
750 Central Expressway
Santa Clara, California 95050-2627 USA
www.hds.com
Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in
the United States and other countries. All other trademarks, service marks and company names in this document or on this Web site are properties of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be
offered by Hitachi Data Systems Corporation.
Hitachi Data Systems Corporation 2010. All Rights Reserved. Month YYYY