
Business Continuity and Disaster Recovery

Solution
V100R002C10

User Guide (Active-Active Mode)

Issue 02
Date 2016-01-06

HUAWEI TECHNOLOGIES CO., LTD.


Copyright © Huawei Technologies Co., Ltd. 2016. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means without prior written
consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions

HUAWEI and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.

Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.

The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.

Huawei Technologies Co., Ltd.


Address: Huawei Industrial Base
Bantian, Longgang
Shenzhen 518129
People's Republic of China

Website: http://e.huawei.com

Issue 02 (2016-01-06) Huawei Proprietary and Confidential i


Copyright © Huawei Technologies Co., Ltd.
Business Continuity and Disaster Recovery Solution
User Guide (Active-Active Mode) Contents

Contents

1 About This Document.................................................................................................................. 1


2 Solution Description.....................................................................................................................3
2.1 Positioning...................................................................................................................................................................... 4
2.2 Network Diagrams..........................................................................................................................................................4
2.3 Highlights....................................................................................................................................................................... 6

3 Configuration Guide.....................................................................................................................8
3.1 Configuration Process...................................................................................................................................................13
3.2 Configuration Preparations...........................................................................................................................................16
3.3 Configuration Planning.................................................................................................................................................18
3.4 Configuring Switches................................................................................................................................................... 21
3.4.1 Configuring Ethernet Switches..................................................................................................................................21
3.4.2 Configuring Fibre Channel Switches........................................................................................................................ 23
3.5 Configuring Load Balancers.........................................................................................................................................25
3.5.1 Configuring GSLBs................................................................................................................................................... 25
3.5.2 Configuring Local Load Balancers............................................................................................................................26
3.6 Configuring Middleware.............................................................................................................................................. 28
3.7 Configuring the VIS6600T Cluster.............................................................................................................................. 30
3.8 Configuring Storage Arrays..........................................................................................................................................33
3.9 Configuring the VIS6600T........................................................................................................................................... 34
3.9.1 Configuring Storage Virtualization............................................................................................................................35
3.9.2 Configuring a Mirror................................................................................................................................................. 38
3.9.3 Configuring Quorum Policies....................................................................................................................................40
3.9.4 Configuring Active-Active Storage...........................................................................................................................40
3.10 Configuring UltraPath................................................................................................................................................ 42
3.11 Configuring ReplicationDirector................................................................................................................................ 44
3.11.1 Configuring ReplicationDirector (4-Node VIS6600T Cluster)............................................................................... 44
3.11.2 Configuring ReplicationDirector (8-Node VIS6600T Cluster)............................................................................... 46
3.12 Checking the Configuration Result............................................................................................................................ 47

A Appendix......................................................................................................................................51
A.1 Product Introduction.................................................................................................................................................... 52
A.1.1 Storage Arrays (OceanStor 18000 Series)................................................................................................................ 52
A.1.2 Storage Arrays (OceanStor V3 Converged Storage Systems)..................................................................................52


A.1.3 Storage Arrays (OceanStor T Series)....................................................................................................................... 53


A.1.4 Storage Virtualization Gateways (OceanStor VIS6600T)........................................................................................ 53
A.1.5 Fibre Channel Switches (OceanStor SNS Series).................................................................................................... 54
A.1.6 Multipathing Software (OceanStor UltraPath)......................................................................................................... 55
A.1.7 DR management software (OceanStor ReplicationDirector)................................................................................... 55
A.1.8 Load Balancer (L2800).............................................................................................................................................56
A.2 Zone Division for Fibre Channel Switches................................................................................................................. 56
A.3 Glossary....................................................................................................................................................................... 62


1 About This Document

Overview
The HUAWEI Business Continuity and Disaster Recovery Solution includes four sub-solutions:
the Disaster Recovery Data Center Solution (Active-Active Mode), the High-Availability (HA)
Solution, the Disaster Recovery Data Center Solution (Active-Passive Mode), and the Disaster
Recovery Data Center Solution (Geo-Redundant Mode). This document describes the
positioning and characteristics of the Disaster Recovery Data Center Solution (Active-Active
Mode), as well as its configuration process and steps.

Intended Audience
This document is intended for:
l Technical support engineers
l Maintenance engineers

Symbol Conventions
The symbols that may be found in this document are defined as follows:

DANGER: Indicates an imminently hazardous situation which, if not avoided, will result in
death or serious injury.

WARNING: Indicates a potentially hazardous situation which, if not avoided, could result in
death or serious injury.

CAUTION: Indicates a potentially hazardous situation which, if not avoided, may result in
minor or moderate injury.

NOTICE: Indicates a potentially hazardous situation which, if not avoided, could result in
equipment damage, data loss, performance degradation, or unexpected results. This symbol
does not indicate risk of personal injury.

NOTE: Calls attention to important information, best practices, and tips. NOTE addresses
information not related to personal injury, equipment damage, or environment deterioration.

Change History
Changes between document issues are cumulative. The latest document issue contains all the
changes in earlier issues.

Issue 02 (2016-01-06)
This issue is the second official release.

Issue 01 (2015-06-30)
This issue is the first official release.


2 Solution Description

About This Chapter

This chapter describes the positioning, highlights, and involved products of the Disaster
Recovery Data Center Solution (Active-Active Mode).

2.1 Positioning
The Disaster Recovery Data Center Solution (Active-Active Mode) is an end-to-end solution
that enables storage systems, applications, and networks to work in active-active mode,
ensuring zero data loss and ongoing services.
2.2 Network Diagrams
This section describes the network diagrams of the Disaster Recovery Data Center Solution
(Active-Active Mode) and explains how active-active mode is achieved at the storage,
application, and network layers.
2.3 Highlights
The Disaster Recovery Data Center Solution (Active-Active Mode) features robust reliability,
wide compatibility, and flexible scalability.


2.1 Positioning
The Disaster Recovery Data Center Solution (Active-Active Mode) is an end-to-end solution
that enables storage systems, applications, and networks to work in active-active mode,
ensuring zero data loss and ongoing services.
As enterprise services grow, service interruptions have an increasingly negative impact on
corporate image and operations, and enterprises have ever more demanding requirements for
business continuity and 24/7 availability of critical services.
According to statistics, traditional data center disaster recovery (DR) solutions have the
following problems:
l Long recovery periods with a certain amount of data loss
l Long service downtime due to manual service switchover upon faults
l Low resource utilization but high total cost of ownership (TCO)
Constructing active-active DR systems has become a trend in the medical, social security,
finance, and government sectors, which gives rise to the Disaster Recovery Data Center
Solution (Active-Active Mode). This solution has two data centers that work in active-active
mode and provide services at the same time, improving the service capability and resource
utilization of data centers. The two data centers serve as backup for each other. When either
data center fails, services are failed over to the other to ensure business continuity.
This solution achieves active-active operation at the storage layer, application layer, and
network layer, eliminating single points of failure and ensuring business continuity.

2.2 Network Diagrams


This section describes the network diagrams of the Disaster Recovery Data Center Solution
(Active-Active Mode) and explains how active-active mode is achieved at the storage,
application, and network layers.
Figure 2-1 shows the solution network diagram.


Figure 2-1 Network diagram of the Disaster Recovery Data Center Solution (Active-Active
Mode)

[Figure: A third-party quorum site connects to data centers A and B over the WAN. Network
layer: each data center has a router, a WDM device, a GSLB, core switches, an SLB, and
access switches. Application layer: Oracle RAC, VMware, and FusionSphere clusters span the
two data centers. Storage layer: a VIS6600T cluster and Fibre Channel switches connect to a
storage system in each data center. Connections: network cable, optical fiber, raw fiber.
GSLB: Global Server Load Balance; SLB: Server Load Balancing; RAC: Real Application
Clusters; FC: Fibre Channel]

The Disaster Recovery Data Center Solution (Active-Active Mode) is an end-to-end solution
that covers the network layer, application layer, and storage layer.
l Network layer
The global server load balancer (GSLB) balances loads between the data centers based on
latency and service traffic. The server load balancer (SLB) balances loads among application
servers within each data center. The IP and Fibre Channel networks of the two data centers
communicate with each other. When one data center breaks down, its services are
automatically switched to the other data center, ensuring business continuity.
l Application layer
Host clusters, database clusters, and application clusters run concurrently in the two data
centers and serve as backup for each other. When one data center breaks down, its services
are automatically switched to the other data center, ensuring business continuity.


At the same time, UltraPath is installed on application servers to improve data transmission
reliability, ensuring secure paths between application servers and storage systems.
l Storage layer
The VIS6600T devices in the two data centers form a VIS cluster. The virtualization function of
the VIS6600T is used to take over heterogeneous storage arrays in a unified manner and
organize heterogeneous storage resources in the data centers into resource pools, thereby
consolidating and optimizing storage resources for higher resource use efficiency.
Meanwhile, the mirroring function of the VIS6600T enables data to be written onto both
data centers at the same time, allowing data to be synchronized between data centers in
real time. In this way, the two data centers are available at the same time. When any
device at the storage layer of the two data centers becomes faulty, upper-layer services
are automatically switched to another device for failover, without affecting upper-layer
applications.
The VIS6600T uses its virtualization function to consolidate heterogeneous storage
systems into a unified resource pool to provide virtual volumes for hosts. The VIS6600T
volume management enables a volume to span over multiple physical disks from
heterogeneous arrays, and overcomes the limitations and differences imposed by
hardware devices.
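The synchronous mirroring behavior described above, where a write is acknowledged only after it lands in both data centers (giving RPO = 0), can be sketched in a simplified model. This is an illustrative sketch, not VIS6600T code; the `Array` class and its `write` method are hypothetical.

```python
import concurrent.futures

class Array:
    """Hypothetical stand-in for a storage array behind the VIS cluster."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block, data):
        self.blocks[block] = data
        return True

def mirrored_write(block, data, arrays):
    """Write one block to every array in parallel; acknowledge the host
    only after all mirror copies succeed, so both data centers always
    hold identical data."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda arr: arr.write(block, data), arrays))
    return all(results)

dc_a = Array("DC-A storage array")
dc_b = Array("DC-B storage array")
acknowledged = mirrored_write(42, b"payload", [dc_a, dc_b])
# The host sees the write as complete only once both mirrors match.
```

Because every acknowledged write exists in both data centers, a failure of either site loses no committed data, which is the basis of the solution's zero-data-loss claim.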

NOTICE
If the two data centers are interconnected by a wavelength division multiplexing (WDM)
device in 1+1 link protection mode, a switchover from the active link to the standby link upon
failure of the active link has the following impacts:
l Links between the Fibre Channel switches are interrupted unexpectedly.
l Link recovery takes about 15 seconds when the Fibre Channel switches are cascaded. As a
result, the VIS6600T cluster heartbeat times out and arbitration starts.
l After the link switchover is complete, restore the two data centers to active-active status.
l Services are not affected, but system performance deteriorates until the system is restored.
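The arbitration triggered by a heartbeat timeout can be illustrated with a simplified split-brain model: when the inter-site heartbeat is lost, each VIS node races to reserve the quorum LUN at the third-party quorum site, and only the winner keeps serving I/O. This is an illustrative sketch, not VIS6600T code; the `QuorumLUN` class and its reservation interface are hypothetical stand-ins for the real quorum mechanism.

```python
import threading

class QuorumLUN:
    """Simplified third-party quorum disk: the first reservation wins."""
    def __init__(self):
        self._lock = threading.Lock()
        self.owner = None

    def try_reserve(self, node):
        # Atomically grant ownership to the first node that asks;
        # repeated requests by the owner still succeed.
        with self._lock:
            if self.owner is None:
                self.owner = node
                return True
            return self.owner == node

def on_heartbeat_timeout(node, quorum):
    """Called when the VIS cluster heartbeat times out: the node that wins
    the quorum reservation continues serving I/O, the loser fences itself
    to avoid split-brain."""
    return "serve" if quorum.try_reserve(node) else "fence"

q = QuorumLUN()
winner = on_heartbeat_timeout("VIS-DC-A", q)
loser = on_heartbeat_timeout("VIS-DC-B", q)
# Only one site keeps serving; the other is fenced until links recover.
```

Once the WDM link is restored and the heartbeat recovers, the fenced site can rejoin and the administrator restores active-active status, as the notice above describes.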

2.3 Highlights
The Disaster Recovery Data Center Solution (Active-Active Mode) features robust reliability,
wide compatibility, and flexible scalability.

l Robust reliability
The active-active architecture ensures zero data loss and zero service interruption (RPO = 0,
RTO = 0) when one data center breaks down.
l Wide compatibility
The VIS6600T is widely compatible with storage systems from EMC, IBM, HDS, HP,
and Sun, making full use of storage resources while protecting existing investments.
l Flexible scalability


The solution integrates value-added features such as remote replication and can be
smoothly upgraded to the Disaster Recovery Data Center Solution (Geo-Redundant
Mode), enhancing the DR capability.


3 Configuration Guide

About This Chapter

This chapter describes the prerequisites, the configuration process, and the detailed
configuration steps for the core devices involved in the Disaster Recovery Data Center
Solution (Active-Active Mode).
An enterprise uses the Disaster Recovery Data Center Solution (Active-Active Mode). The
distance between the two data centers is shorter than 25 km. Figure 3-1 shows the typical
network.


Figure 3-1 Typical network of the Disaster Recovery Data Center Solution (Active-Active
Mode)

[Figure: A third-party quorum site connects to data centers A and B over the WAN. Network
layer: each data center has a router, an F5 Big-IP, core switches in a CSS (cascaded across
the data centers), L2800 load balancers, and access switches in an iStack. Application layer:
hosts 1, 2, and 3 form an Oracle RAC cluster across the data centers; a management server is
also deployed. Storage layer: a VIS6600T cluster and four Fibre Channel switches (FC
switches 1 to 4, two per data center, cascaded across the data centers) connect to a 6800 V3
storage array in each data center. Connections: network cable, optical fiber.]

Table 3-1 describes the components and their functions in the Disaster Recovery Data Center
Solution (Active-Active Mode).

Table 3-1 Components and their functions in the Disaster Recovery Data Center Solution
(Active-Active Mode)

l Global server load balancer (GSLB) (F5 Big-IP): One GSLB is deployed in each data
center for service load balancing between data centers.
l Local load balancer (L2800): Two local load balancers (active and standby) are deployed
in each data center for service load balancing within the data center.
l Core switch (CE12800): Two core switches are deployed in each data center.
l Access switch (CE6800): Two access switches are deployed in each data center.
l Middleware (Apache, Oracle WebLogic, and IBM WebSphere Application Server (WAS)):
The middleware in each data center forms a cluster. Web servers in the two data centers
serve as backup resource pools for each other. When a data center fails, its services are
automatically switched to the other data center.
l Database service (Oracle RAC 11g R2): -
l Application server (Oracle RAC 11g R2 server): The Oracle RAC servers form an Oracle
RAC cluster. When an application server fails, its services are automatically switched to
another application server.
l Management server (common server): One management server is deployed in the preferred
data center, that is, the data center expected to survive first in a disaster. The
ReplicationDirector disaster recovery management software is installed on the management
server. In this example, Oracle RAC is deployed in 2 + 1 mode: two hosts in data center A
and one host in data center B. When Oracle RAC encounters a heartbeat failure, data center
A, which has more RAC nodes (hosts in Figure 3-1), survives first. In other scenarios, plan
the preferred data center based on service requirements.
l Fibre Channel switch (SNS2248): Two Fibre Channel switches are deployed in each data
center for cascaded connections across data centers and active-active storage at the storage
layer.
l Storage virtualization gateway (VIS6600T): One storage virtualization gateway is deployed
in each data center. A 4-node cluster (each VIS6600T device has two nodes) is formed
across the data centers for unified storage resource management.
l Storage array (6800 V3 or other mainstream storage arrays): One storage array is deployed
in each data center. Data is written concurrently to the two storage arrays by the VIS6600T
HyperMirror function.
l Multipathing software (UltraPath): The multipathing software is installed on each
application server to improve data transfer reliability and ensure the security of paths
between application servers and storage arrays.
l Disaster recovery management software (ReplicationDirector): The disaster recovery
management software is installed on the management server for unified disaster recovery
system management.
l Third-party quorum site: Quorum LUNs must be configured at the third-party quorum site
and mapped to the VIS6600T cluster. See the related document to complete the
configuration.
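The 2 + 1 RAC deployment rationale above (the data center holding the majority of RAC nodes is expected to survive an inter-site heartbeat failure) can be sketched as a simple majority check. This is an illustrative model of the planning rule, not how Oracle Clusterware is actually configured; the site names are examples.

```python
def surviving_site(nodes_per_site):
    """Return the site expected to survive an inter-site heartbeat
    failure: the one holding more RAC nodes. A symmetric 2 + 2 layout
    would have no majority, which is why the example deploys 2 + 1."""
    ranked = sorted(nodes_per_site.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return None  # tie: no deterministic survivor without extra arbitration
    return ranked[0][0]

deployment = {"data center A": 2, "data center B": 1}
preferred = surviving_site(deployment)
# With 2 nodes in A and 1 in B, data center A is the preferred survivor.
```

This is why the management server in Table 3-1 is placed in the data center with the larger node count: that site is the one expected to keep running after a split.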

NOTE

l This document describes how to configure the core devices (especially the devices at the storage
layer) involved in the Disaster Recovery Data Center Solution (Active-Active Mode). The existing
network infrastructure, upper-layer hosts, and applications must be prepared by users, application
providers, or integrators.
l This document only helps you configure the basic active-active storage architecture and does not
cover users' service systems or data migration. Configuring service systems and migrating data
require independent professional services or can be completed by users.
l The OceanStor V3 series can be used in this solution. Storage systems of different models in the
same series are configured in the same way. This guide uses the OceanStor 6800 V3 as an example
to describe how to configure the Disaster Recovery Data Center Solution (Active-Active Mode).
l This document uses the typical network in Figure 3-1 as an example. Plan and configure the
application layer based on actual services. The configuration methods used at the storage layer and
network layer are similar to those described in this guide.
l For the device connection diagram, see the Networking Assistant.

3.1 Configuration Process


Before configuring the Disaster Recovery Data Center Solution (Active-Active Mode), you
must know the configuration process to ensure smooth operations.
3.2 Configuration Preparations


This section describes prerequisites and documents that you must prepare before you
configure the Disaster Recovery Data Center Solution (Active-Active Mode).
3.3 Configuration Planning
This section describes items that you must plan before the configuration, including the service
IP addresses of L2800 devices and application servers, storage arrays, and volumes of the
VIS6600T cluster.
3.4 Configuring Switches
This section describes the configuration requirements of core switches and access switches at
the network layer and the configuration procedure of Fibre Channel switches at the storage
layer. Ethernet switches serve as the core switches and access switches at the network layer
and Fibre Channel switches are used at the storage layer.
3.5 Configuring Load Balancers
This section describes the configuration requirements for load balancing between data centers
and how to configure service load balancers in a data center.
3.6 Configuring Middleware
This section describes middleware configuration requirements. The middleware is used to
connect applications to databases. Apache, Oracle WebLogic, and IBM WebSphere
Application Server (WAS) are all middleware.
3.7 Configuring the VIS6600T Cluster
Two VIS6600T devices form a 4-node cluster. To ensure proper working of the cluster, you
must set the cluster heartbeat mode to external heartbeat mode.
3.8 Configuring Storage Arrays
Storage array configuration includes creating a LUN/LUN group, a host/host group, and a
mapping relationship between storage arrays and the VIS6600T cluster. In this way, the
VIS6600T cluster can centrally manage storage resources.
3.9 Configuring the VIS6600T
This section describes how to configure quorum policies, storage virtualization, mirroring,
and active-active storage. After the configuration, a mirroring relationship is established
between the storage arrays, and the two data centers undertake services concurrently. When
the heartbeat between the two VIS6600T devices is interrupted, services can be quickly
recovered based on the configured quorum policy.
3.10 Configuring UltraPath
This section describes how to configure the UltraPath software. In the Disaster Recovery Data
Center Solution (Active-Active Mode), the UltraPath software is configured to improve the
I/O processing efficiency and reduce the access latency.
3.11 Configuring ReplicationDirector
This section describes how to configure the OceanStor ReplicationDirector disaster recovery
management software for unified management of the Disaster Recovery Data Center Solution
(Active-Active Mode).
3.12 Checking the Configuration Result
This section describes how to verify the configuration of the Disaster Recovery Data Center
Solution (Active-Active Mode). After configuring the solution, you must verify that read and
write operations are performed based on the path that has been designed to ensure that the
solution can work properly.
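The local priority policy configured for UltraPath in this solution (prefer paths through the local data center's VIS node; use remote paths only as a fallback, which reduces cross-site access latency) can be sketched as follows. This is an illustrative model, not the UltraPath implementation; the path records are hypothetical.

```python
def pick_path(paths):
    """Choose an I/O path the way a local-priority policy would:
    prefer an available path to the local data center and fall back to
    a remote path only when no local path is up."""
    usable = [p for p in paths if p["state"] == "up"]
    if not usable:
        raise RuntimeError("no usable path to the VIS cluster")
    # Rank local paths before remote ones so remote links carry I/O
    # only when every local path has failed.
    return min(usable, key=lambda p: 0 if p["locality"] == "local" else 1)

paths = [
    {"name": "fc0 -> local VIS node", "locality": "local", "state": "down"},
    {"name": "fc1 -> remote VIS node", "locality": "remote", "state": "up"},
]
chosen = pick_path(paths)
# With the local path down, the remote path is selected as the fallback.
```

Keeping I/O on local paths in the normal case is what lets the active-active mirror avoid paying the inter-site round trip on every read.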


3.1 Configuration Process


Before configuring the Disaster Recovery Data Center Solution (Active-Active Mode), you
must know the configuration process to ensure smooth operations.

Figure 3-2 shows the configuration process of the Disaster Recovery Data Center Solution
(Active-Active Mode).

Figure 3-2 Configuration process of the Disaster Recovery Data Center Solution (Active-
Active Mode)

[Flowchart: Start → Prepare for the configuration → Plan the configuration → Configure
switches (Ethernet switches; Fibre Channel switches) → Configure load balancers (GSLBs;
local load balancers) → Configure middleware (these steps set up the service load balancing
policies and active-active links for the two data centers) → Configure the VIS6600T cluster
→ Configure storage arrays → Configure the VIS6600T (storage virtualization; a mirror; a
quorum policy; active-active storage) → Configure the UltraPath software (local priority
policies for services) → Configure ReplicationDirector for a 4-node or 8-node VIS6600T
cluster (the unified disaster recovery management platform) → Check the configuration
result → End. Legend: configuration item; configuration sub-item; optional configuration
sub-item.]

Table 3-2 describes the configuration process of the Disaster Recovery Data Center Solution
(Active-Active Mode).

Table 3-2 Configuration process of the Disaster Recovery Data Center Solution (Active-
Active Mode)

1. 3.2 Configuration Preparations: Prepare the prerequisites and documents required before
you configure the Disaster Recovery Data Center Solution (Active-Active Mode).
Operation location: -
2. 3.3 Configuration Planning: Plan the items required before the configuration, including
the service IP addresses of L2800 devices and application servers, storage arrays, and
volumes of the VIS6600T cluster. Operation location: -
3. 3.4 Configuring Switches: Configure the core switches and access switches at the network
layer and the Fibre Channel switches at the storage layer. Operation location: 3.4.1
Configuring Ethernet Switches on the core and access switches in each data center; 3.4.2
Configuring Fibre Channel Switches on either cascading Fibre Channel switch.
4. 3.5 Configuring Load Balancers: Configure load balancing between data centers and
service load balancers within each data center. Operation location: 3.5.1 Configuring
GSLBs on the global server load balancer (GSLB); 3.5.2 Configuring Local Load
Balancers on any L2800 device in each data center.
5. 3.6 Configuring Middleware: Configure the middleware that connects applications to
databases. Apache, Oracle WebLogic, and IBM WebSphere Application Server (WAS) are
all middleware. Operation location: depends on the actual conditions of the middleware in
use.
6. 3.7 Configuring the VIS6600T Cluster: Two VIS6600T devices form a 4-node cluster.
You must set the cluster heartbeat mode to external heartbeat mode for the cluster to work
properly. Operation location: the two VIS6600T devices.
7. 3.8 Configuring Storage Arrays: Create a LUN/LUN group, a host/host group, and a
mapping relationship between each storage array and the VIS6600T cluster so that the
VIS6600T cluster can manage storage resources in a unified manner. Operation location:
6800 V3.
8. 3.9 Configuring the VIS6600T: Configure quorum policies, storage virtualization,
mirroring, and active-active storage. After the configuration, a mirroring relationship is
established between the storage arrays, and the two data centers undertake services
concurrently. If the heartbeat between the two VIS6600T devices is interrupted, services
recover quickly based on the configured quorum policy. Operation location: either
VIS6600T device, following 3.9.1 Configuring Storage Virtualization, 3.9.2 Configuring a
Mirror, 3.9.3 Configuring Quorum Policies, and 3.9.4 Configuring Active-Active Storage.
9. 3.10 Configuring UltraPath: Configure the UltraPath software to improve I/O processing
efficiency and reduce access latency. Operation location: UltraPath on each application
server.
10. 3.11 Configuring ReplicationDirector: Configure the OceanStor ReplicationDirector
disaster recovery management software for unified management of the Disaster Recovery
Data Center Solution (Active-Active Mode). Operation location: ReplicationDirector on
the management server.

Issue 02 (2016-01-06) Huawei Proprietary and Confidential 15


Copyright © Huawei Technologies Co., Ltd.
Business Continuity and Disaster Recovery Solution
User Guide (Active-Active Mode) 3 Configuration Guide

No. Configuration Description Operation Location


Procedure

11 3.12 Checking Describes how to verify the Any VIS6600T, 6800


the configuration of the Disaster V3, and heterogeneous
Configuration Recovery Data Center Solution storage array
Result (Active-Active Mode). After
configuring the solution, you must
verify that read and write operations
are performed based on the path that
has been designed to ensure that the
solution can work properly.

3.2 Configuration Preparations


This section describes prerequisites and documents that you must prepare before you
configure the Disaster Recovery Data Center Solution (Active-Active Mode).

Prerequisites
Prerequisites for configuring the Disaster Recovery Data Center Solution (Active-Active
Mode) are as follows:
l All products have been installed.
– Hardware and software installation of all products has been completed.
– Products have been connected based on the low-level design (LLD) of the solution.
– The required licenses have been applied for and installed based on the license operation guide.
– The UltraPath software and the ReplicationDirector Agent have been installed on the application hosts, and the agent service has been started.
– OceanStor ReplicationDirector Server has been installed on the management server
and can be accessed by devices through their management network ports.
l The IP/Fibre Channel network is working properly between the two data centers.
l The third-party quorum LUN has been deployed in the third-party quorum site and
mapped to the VIS6600T cluster.
l The compatibility between the VIS6600T and the heterogeneous storage array has been
checked.
NOTE

For details about the models of heterogeneous storage arrays that are compatible with the
VIS6600T, see the Huawei OceanStor Virtual Intelligent Storage Interoperability Matrix. Log in
to http://enterprise.huawei.com. In the search box, enter a document name and click the search
button. Select Collateral to download the document.

Obtaining Related Documentation


Before the configuration, log in to http://enterprise.huawei.com and obtain the documents listed in Table 3-3.


Table 3-3 Documentation list

- Document name: OceanStor SNS2124&SNS2224&SNS2248&SNS3096&SNS5192&SNS5384 FC Switch V100R002C01 Product Documentation
  Application scenario: 3.4.2 Configuring Fibre Channel Switches
  How to obtain: Choose Support > Product Support > IT > Storage > Disk Storage > SNS Fibre Channel Switch > OceanStor SNS2248 to download the document.

- Document name: OceanStor 5300 V3&5500 V3&5600 V3&5800 V3&6800 V3&6900 V3 Storage System V300R002 Basic Storage Service Guide
  Application scenario: 3.8 Configuring Storage Arrays
  How to obtain: Choose Support > Product Support > IT > Storage > Disk Storage > V3 Series Unified Storage > OceanStor 6800 V3 to download the document.
  NOTE
  In this document, the V3 series is used as an example. To obtain documentation of the OceanStor 18000/T series storage arrays, choose Support > Product Support > IT > Storage > Disk Storage.

- Document name: OceanStor VIS6600T V200R003 Product Documentation
  Application scenario: 3.9 Configuring the VIS6600T
  How to obtain: Choose Support > Product Support > IT > Storage > Disk Storage > VIS Virtual Storage > OceanStor VIS6600T to download the document.

- Document name: OceanStor UltraPath for Linux V100R008C0 User Guide
  Application scenario: 3.10 Configuring UltraPath
  How to obtain: Choose Support > Product Support > IT > Storage > Storage Software > Storage Management Software > UltraPath to download the document.

- Document name: OceanStor ReplicationDirector V100R003C10 Product Documentation
  Application scenario: 3.11 Configuring ReplicationDirector
  How to obtain: Choose Support > Product Support > IT > Storage > Storage Software > Data Replication Software > OceanStor ReplicationDirector to download the document.

- Document name: L2800 Load Balancer V100R001C00 Product Documentation
  Application scenario: 3.5.2 Configuring Local Load Balancers
  How to obtain: Choose Support > Product Support > IT > Server > APP Server > L2800 to download the document.

NOTE

This document uses the product versions in Table 3-3 as examples. If the actual product versions differ from those listed, obtain the corresponding documentation and complete the configuration accordingly.


3.3 Configuration Planning


This section describes items that you must plan before the configuration, including the service
IP addresses of L2800 devices and application servers, storage arrays, and volumes of the
VIS6600T cluster.

L2800 Planning
Table 3-4 describes the service IP address planning of L2800 devices.

Table 3-4 Service IP address planning of L2800 devices

- Host name: L2800-Master
  Service network port IP address/mask:
  - 192.168.10.1/24 (service network port IP address)
  - 192.168.20.1/24 (floating service IP address)
  Network port name: ServGE0
  Description: Master L2800; port: 8080.

- Host name: L2800-Standby
  Service network port IP address/mask: 192.168.10.2/24
  Network port name: ServGE0
  Description: Standby L2800.

Application Host Planning


Table 3-5 describes the service IP address planning of application servers.

Table 3-5 Service IP address planning of application servers


Host Name Service Network Port IP Address/Mask Port Number

App-Server01 192.168.10.101/24 80

App-Server02 192.168.10.102/24 80

App-Server03 192.168.10.103/24 80

App-Server04 192.168.10.104/24 80

App-Server05 192.168.10.105/24 80
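As a quick plausibility check of the preceding plan, the following snippet verifies that the application-server service IPs in Table 3-5 sit in the same /24 subnet as the L2800 service network port from Table 3-4. This is an illustrative sketch only, not part of any Huawei tool:

```python
# Illustrative check: all planned service IPs from Table 3-5 must fall in
# the same /24 as the L2800 service network port (192.168.10.1/24).
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/24")
servers = ["192.168.10.%d" % i for i in range(101, 106)]  # App-Server01..05
assert all(ipaddress.ip_address(ip) in subnet for ip in servers)
print("all application servers are on the L2800 service subnet")
```

Note that the floating service IP address 192.168.20.1/24 deliberately lies in a different subnet, so it would (correctly) fail this check.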

Table 3-6 describes the storage planning of the application servers. The storage planning is the same on all application servers.


Table 3-6 Planning application hosts (SQL Server)

- Drive letter: C:\
  Source: Local disk
  Size: 50 GB
  File system: NTFS
  Description: Disk type: basic. Used to store the operating system and software.

- Drive letter: E:\
  Source: Remaining local space
  Size: 100 GB
  File system: NTFS
  Description: Disk type: basic. Used to store common software.

- Drive letter: F:\
  Source: External data sharing disk
  Size: 1 TB
  File system: NTFS
  Description: Disk type: basic. Storage space shared by the storage arrays for application hosts; used to store database files.

- Drive letter: G:\
  Source: External data sharing disk
  Size: 1 TB
  File system: NTFS
  Description: Disk type: basic. Storage space shared by the storage arrays for application hosts; used to store database files.

Storage Array Planning


The configuration planning of the storage arrays in the two data centers is the same. Table 3-7
shows an example.

Table 3-7 Planning storage arrays

- Disk domain: DiskDomain01
- Disk type: SAS
- Disk quantity: 10
- Storage pool: Storagepool01
- RAID: RAID 6 (8D+2P)
- RAID capacity (GB): 4800
- LUN names and capacities:
  - LUN_Data01: 1 TB
  - LUN_Data02: 1 TB
  - LUN_DCO: 1 GB
  - Fendisk: 2 GB
  LUN_Data01 and LUN_Data02 are data disks that store database files. LUN_DCO is the DCO disk that stores DCOs. Fendisk is the quorum disk; its capacity is generally set to 2 GB.
  NOTE
  Recommended sizes of DCO disks are as follows (G indicates the total capacity of the LUNs used for storing database files):
  - G < 6 TB: DCO capacity ≥ 1 GB
  - 6 TB ≤ G < 64 TB: DCO capacity ≥ 10 GB
  - 64 TB ≤ G < 64 TB x 16: DCO capacity ≥ 160 GB
  - G ≥ 64 TB x 16: DCO capacity ≥ 640 GB
- LUN group: LUN_Group (includes all LUNs)
- Host: VIS_Node01, VIS_Node02, VIS_Node03, VIS_Node04 (you are advised to configure each VIS6600T node as a host)
- Host group: VIS6600T (includes all hosts; used to map LUNs to application hosts)
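The DCO sizing rule and the RAID capacity in Table 3-7 can be sanity-checked numerically. The helper below is a hypothetical sketch; the function name and the 600 GB member-disk size are our assumptions, not values from this guide:

```python
# Hypothetical helper: minimum DCO disk size (GB) from the sizing rule in
# Table 3-7, where total_lun_tb is the total capacity, in TB, of the LUNs
# that store database files.
def min_dco_gb(total_lun_tb):
    if total_lun_tb < 6:
        return 1
    if total_lun_tb < 64:
        return 10
    if total_lun_tb < 64 * 16:
        return 160
    return 640

# Two 1 TB data LUNs as planned in Table 3-7 -> a 1 GB DCO LUN suffices.
print(min_dco_gb(2))             # → 1

# RAID 6 (8D+2P): 8 data columns out of 10 disks. With assumed 600 GB
# member disks this reproduces the 4800 GB "RAID capacity (GB)" row.
print(8 * 600)                   # → 4800
```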

VIS6600T Cluster Planning


Table 3-8 and Table 3-9 describe the VIS6600T cluster planning.


Table 3-8 Logical Disk Group planning

- Logical disk group: ArrayGroup
  Logical disk aliases: 6800V3_LUNData01, 6800V3_LUNData02, 6800V3_DCO, otherdisk_LUNData01, otherdisk_LUNData02, otherdisk_DCO
  Description: The LUN names that correspond to the aliases of the logical disks are as follows:
  - 6800V3_LUNData01: LUN_Data01 (data center A)
  - 6800V3_LUNData02: LUN_Data02 (data center A)
  - 6800V3_DCO: LUN_DCO (data center A)
  - otherdisk_LUNData01: LUN_Data01 (data center B)
  - otherdisk_LUNData02: LUN_Data02 (data center B)
  - otherdisk_DCO: LUN_DCO (data center B)

Table 3-9 Volume planning

- Volume name: Data_1
  Logical disk alias: 6800V3_LUNData01
  LUN name: LUN_Data01 (data center A)
  Description: 6800V3_LUNData01 and otherdisk_LUNData01 are configured as a mirrored logical disk.

- Volume name: Data_2
  Logical disk alias: 6800V3_LUNData02
  LUN name: LUN_Data02 (data center A)
  Description: 6800V3_LUNData02 and otherdisk_LUNData02 are configured as a mirrored logical disk.

3.4 Configuring Switches


This section describes the configuration requirements of core switches and access switches at
the network layer and the configuration procedure of Fibre Channel switches at the storage
layer. Ethernet switches serve as the core switches and access switches at the network layer
and Fibre Channel switches are used at the storage layer.

3.4.1 Configuring Ethernet Switches


Ethernet switches are used as access switches and core switches in the two data centers. This
section describes how to configure Ethernet switches to achieve information exchange in and
between the two data centers.


Network Planning
Figure 3-3 shows the network planning in and between the two data centers in the Disaster
Recovery Data Center Solution (Active-Active Mode).

Figure 3-3 Ethernet planning
(The figure shows data centers A and B connected to a WAN through egress routers. In each data center, the core switches form a CSS and are cascaded with the core switches of the peer data center, and the access switches form an iStack that connects the application hosts and the management server. The legend distinguishes network cables from optical fibers.)

Configuration Requirements
The Ethernet switch configuration requirements are as follows:
l Core switches are cascaded to enable L2 interconnection between the two data centers.
– Cascading configuration for the two data centers with a distance shorter than 25 km
Single-mode optical fibers are used to cascade core switches. Single-mode optical
modules must be configured on the core switches for long-distance transmission.
This section uses this configuration mode as an example.
NOTE

You must ensure that the used switches support the single-mode optical modules of the
matching model and the longest distance supported by the optical modules must be larger
than the actual transmission distance.
– Cascading configuration for the two data centers with a distance equal to or larger
than 25 km
Dense wavelength division multiplexing (DWDM) devices are used for
interconnection.
l Virtual local area networks (VLANs) are used to isolate different services.
l Core switches are configured as a CSS to build a loop-free Ethernet network.


l Access switches are configured as an iStack to build a loop-free Ethernet network.


l Core switches are used to deploy active-active gateways.

NOTE

In this solution, you are advised to use HUAWEI CloudEngine 12800 series and 6800 series as core
switches and access switches respectively.
Based on the preceding configuration requirements, see the corresponding documents to configure the
Ethernet switches. Log in to http://support.huawei.com and obtain the related documents from the
following paths:
l CE12800: Support > Product Support > Fixed Network > Carrier IP > Switch&Security >
Carrier Switch > CloudEngine 12800
l CE6800: Support > Product Support > Fixed Network > Carrier IP > Switch&Security >
Carrier Switch > CloudEngine 6800

3.4.2 Configuring Fibre Channel Switches


Fibre Channel switch configuration includes setting domain IDs and creating zones. Zones enable interworking among the VIS6600T nodes, between the VIS6600T and the application servers, and between the VIS6600T and the controllers of the storage arrays. The SNS2248 is used as an example to describe how to configure Fibre Channel switches.

Prerequisites
The accounts and passwords for logging in to Fibre Channel switches have been obtained.

Context
Fibre Channel switches are cascaded across the data centers, and zones are created to enable interworking between the nodes in the VIS6600T cluster and the storage arrays, as well as between the application servers and the VIS6600T nodes. The resulting redundant links ensure ongoing services in the event of a single point of failure.
l Cascaded connection across data centers
Four Fibre Channel switches are cascaded across the active-active data centers, building a foundation for the mirroring relationship between the storage arrays in the two data centers. When Fibre Channel switches are cascaded, domain IDs must be set to prevent ID conflicts on the network. A domain ID is the unique identifier of a Fibre Channel switch in a fabric. Table 3-10 describes the domain ID planning.
l Zone division
Only the switch ports and devices in the same zone can communicate with each other. On two cascaded Fibre Channel switches, the ports of each link form a zone. For details, see A.2 Zone Division for Fibre Channel Switches.

Table 3-10 Domain ID planning

Fibre Channel Switch (Example) Domain ID (Example) Location

Fibre Channel switch 1 1 Data Center A


Fibre Channel switch 2 2 Data Center B


NOTE
Fibre Channel switches 1 and 2 are
cascaded.

Fibre Channel switch 3 3 Data Center A

Fibre Channel switch 4 4 Data Center B


NOTE
Fibre Channel switches 3 and 4 are
cascaded.
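The intent of the planning in Table 3-10 can be expressed as a simple check. The following snippet is our illustrative sketch, not a Huawei tool; it flags a domain ID conflict before the switches are cascaded:

```python
# Illustrative check of Table 3-10: in a cascaded fabric, every Fibre
# Channel switch must carry a unique domain ID to prevent ID conflicts.
plan = {"FC switch 1": 1, "FC switch 2": 2,   # switches 1 and 2 cascaded
        "FC switch 3": 3, "FC switch 4": 4}   # switches 3 and 4 cascaded

def has_conflict(domain_ids):
    # A conflict exists when two switches share the same domain ID.
    return len(set(domain_ids.values())) != len(domain_ids)

print(has_conflict(plan))                      # → False: the plan is valid
```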

Procedure
Step 1 Log in to Fibre Channel switch 1.

Step 2 Set domain IDs.


1. On the menu bar, choose Configure > Switch Admin.
2. In the Switch Status area, select Disable.
3. In the Switch Name and Domain ID area, enter Domain ID.
4. In the Switch Status area, select Enable.

Step 3 Create a zone.


1. On the Zone Administration page of the SNS series switches, select a format to display
zone members on the Member Selection List tab page.
2. On the Zone tab page, click New Zone.

The Create New Zone dialog box is displayed.


3. Enter the zone name and click OK.

In the Name list, the new zone information is displayed.


4. In Member Selection List, click the member that you want to add to the zone and click the right arrow.

The selected member is moved to the Zone Members page.


5. Repeat Step 3.4 to add another member to the zone.
6. Choose Zoning Actions > Save Config to save the configuration.

The zone is created successfully.


7. Repeat Step 3.2 to Step 3.6 to create other zones.
NOTE

For more information, see the contents about zone management in the OceanStor
SNS2124&SNS2224&SNS2248&SNS3096&SNS5192&SNS5384 FC Switch V100R002C01 Product
Documentation.

Step 4 Set the mode of ports to long-distance mode.


You must set the mode of each port used in the Disaster Recovery Data Center Solution
(Active-Active Mode) to long-distance mode.

1. Open the Switch Administration page.


2. Click Show Advanced Mode.
3. Click the Extended Fabric tab.
4. In Long Distance, select LS.
NOTE

For the LS option, you can enter a desired buffer value in Buffer Needed. When changing the
buffer value, you cannot change the values in Frame Size and Desired Distance.
5. Double-click Desired Distance (km) and enter a desired distance.

The distance must match the port transfer rate. The matching principles are as follows:
– When the speed is 8 Gbit/s, the value of the distance (km) ranges from 10 to 63.
– When the speed is 4 Gbit/s, the value of the distance (km) ranges from 10 to 125.
– When the speed is 2 Gbit/s, the value of the distance (km) ranges from 10 to 250.
– When the speed is 1 Gbit/s, the value of the distance (km) ranges from 10 to 500.
6. After configuring the Long Distance and Desired Distance (km) parameters of all
ports, click Apply.
7. Click Yes to confirm the modification.

Step 5 Log in to Fibre Channel switch 3 and perform operations from Step 2 to Step 4.

----End
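The distance/speed matching rule from Step 4 can be captured in a few lines. This is an illustrative sketch of the table values in this guide, not switch firmware logic:

```python
# Maximum desired distance (km) per port speed (Gbit/s), as listed in
# Step 4; the valid range starts at 10 km for every speed.
LIMITS_KM = {8: 63, 4: 125, 2: 250, 1: 500}

def distance_ok(speed_gbit, desired_km):
    """Return True if desired_km is valid for the given port speed."""
    return 10 <= desired_km <= LIMITS_KM[speed_gbit]

print(distance_ok(8, 25))    # → True: a 25 km link at 8 Gbit/s is in range
print(distance_ok(4, 200))   # → False: exceeds the 125 km limit at 4 Gbit/s
```

Note how the limit roughly halves each time the port speed doubles: the buffer credits available on a port bound the product of distance and speed, which is why long-distance mode requires extra buffers.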

3.5 Configuring Load Balancers


This section describes the configuration requirements for load balancing between data centers
and how to configure service load balancers in a data center.

3.5.1 Configuring GSLBs


GSLBs are used to balance service loads between two data centers. In this solution, F5 GTM
GSLBs are used.

One F5 GTM GSLB is deployed in each data center. The two GSLBs are configured in active-
standby mode for service load balancing between the two data centers.

Configuration requirements:
l The active F5 GTM GSLB must be connected to the standby F5 GTM GSLB in one-armed mode.
l The active and standby F5 GTM GSLBs are reachable on an L3 network. There is no need to use heartbeat cables to connect them.
l The standby F5 GTM GSLB detects the status of the active F5 GTM GSLB through network heartbeats. When the active F5 GTM GSLB fails, the standby F5 GTM GSLB becomes active. The switchover time must be at the millisecond level.
l When the active F5 GTM GSLB fails, the upper-level DNS automatically selects the standby F5 GTM GSLB in round robin mode.


l An F5 GTM GSLB can be connected to or be used to replace the DNS in a data center. When the GSLB is used to replace the DNS, the wide IP (WIP) is mapped to the application domain name.
l F5 GTM GSLBs can flexibly allocate services to the two data centers based on the DNS
address, region, and egress bandwidth of a wide area network (WAN).

NOTE

Based on the preceding configuration requirements, see F5 GTM GSLB documents to configure the
GSLBs.
Go to http://www.f5.com to obtain the F5 Big-IP GSLB documents.

3.5.2 Configuring Local Load Balancers


In this solution, L2800 devices are used as local load balancers to achieve service load
balancing in data centers.

Prerequisites
The account and password for logging in to the L2800 Load Balancer Management System
(LBMS) have been obtained.

Context
Two L2800 Software Load Balancers (SLBs) are deployed in each data center in active-standby mode for service load balancing in the data center. When configuring the two L2800 SLBs in each data center, configure the active L2800 SLB first and then synchronize the configuration to the standby L2800 SLB.
The configuration method is the same in the two data centers. This section uses one data
center as an example to describe how to configure local load balancers.

Procedure
Step 1 Log in to the L2800 LBMS.

Step 2 Add an application server.


1. Choose Configuration > Real Server.
The configuration page is displayed, as shown in Figure 3-4.

Figure 3-4 Page for adding a server


2. Click Add Server.

The server to be added is a web server, such as an IBM HTTP Server (IHS) or Apache server.
3. Enter the name, service IP address, and port number of the web server.
4. Click Submit.

The application server is added.


5. To add other web servers, repeat Step 2.1 to Step 2.4.

Step 3 Configure a resource pool.


1. Choose Configuration > Pool.
2. Click Add Pool.
3. On the Basic Configuration page, enter the name, health check type, check period, and
timeout period of the resource pool.
4. On the Traffic Control Setting page, configure the upstream traffic and downstream
traffic based on your requirements.
5. Add all application servers to the resource pool.

Select real servers in Available Servers and click > to add them to Enabled Servers.
6. Click Submit.

The resource pool is configured.

Step 4 Configure a virtual service.


1. Choose Configuration > Virtual Service.
2. Click Add Service.
3. On the Basic Configuration page, enter the name, port, and virtual service IP address (the floating IP address of the L2800) of the virtual service.
4. On the Customized Configuration page, configure Configuration Template, Persist
Type, and Persist Timeout.

– Configuration Template: Select Layer 7 HTTP or Layer 4 Host.


– Persist Type: Select Source.
– Persist Timeout: Set the value to 1800 seconds.
5. On the Resource Configuration page, configure the name, pattern, and scheduler of the
resource pool.

The name of the resource has been configured in Step 3. In Pattern, enter * * * *. In
Scheduler, select Round Robin. For details about parameter meanings, see the L2800
product documentation.
6. Click Submit.

The virtual service is configured.

Step 5 Synchronize configuration between the active SLB and standby SLB.
1. Choose Configuration > Synchronization.

The page for synchronizing configuration is displayed, as shown in Figure 3-5.


Figure 3-5 Synchronizing configuration

2. Enter the management IP addresses of the active L2800 and standby L2800.
3. Click Submit.
The configuration is saved to the background system of the SLB.
4. Click Sync.
The configuration information about the application server, resource pool, and virtual service is synchronized to the standby SLB.

----End
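The scheduling behavior configured in Step 4 (Round Robin with Persist Type set to Source and Persist Timeout set to 1800 seconds) can be sketched as follows. This is an illustrative model only, not L2800 code; the class and method names are ours:

```python
# Illustrative model: round-robin scheduling with source-IP persistence.
import itertools, time

class Pool:
    def __init__(self, servers, persist_timeout=1800):
        self.rr = itertools.cycle(servers)
        self.persist_timeout = persist_timeout
        self.sessions = {}  # source IP -> (server, last-seen timestamp)

    def pick(self, source_ip, now=None):
        now = time.time() if now is None else now
        entry = self.sessions.get(source_ip)
        if entry and now - entry[1] < self.persist_timeout:
            server = entry[0]          # sticky: reuse the persisted server
        else:
            server = next(self.rr)     # otherwise round robin
        self.sessions[source_ip] = (server, now)
        return server

pool = Pool(["192.168.10.101", "192.168.10.102", "192.168.10.103"])
a = pool.pick("10.0.0.1", now=0)
b = pool.pick("10.0.0.2", now=1)
c = pool.pick("10.0.0.1", now=2)       # same client within 1800 s
print(a == c, a != b)                  # → True True
```

The point of source persistence is that repeated requests from the same client IP keep landing on the same web server until the timeout expires, while new clients are still spread round robin.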

3.6 Configuring Middleware


This section describes middleware configuration requirements. The middleware is used to
connect applications to databases. Apache, Oracle WebLogic, and IBM WebSphere
Application Server (WAS) are all middleware.

Introduction to Middleware
Figure 3-6 shows the middleware positions.


Figure 3-6 Middleware positions
(The figure shows data centers A and B. In each data center, an L2800 sits in front of the web servers, the application servers sit behind the web servers, and the database cluster nodes RAC1, RAC2, and RAC3 sit at the bottom.)

The types of web servers include the IBM HTTP Server (IHS) and Apache. The types of
application servers include IBM WAS and Oracle WebLogic. Active-active web servers are
available in the following two ways:
l IHS cooperates with WAS to make active-active web servers available.
The Hypertext Transfer Protocol (HTTP) requests from IHS to WAS are balanced. WAS
is used to deploy J2EE applications. It provides an elaborate environment for deploying
application programs. Comprehensive application program services and functions are
provided, covering transaction management, security, cluster, performance, availability,
connectivity, and scalability.
l Apache cooperates with Oracle WebLogic to make active-active middleware available.
Apache runs on all widely used computer platforms and is one of the most popular web servers worldwide. For an Oracle WebLogic cluster, a general control end (AdminServer) is configured. The HTTP requests are balanced between WebLogic nodes in random or round robin mode.

Configuration Requirements
l Serving as cluster resource pools, the web servers in two data centers are connected to
the L2800 devices to achieve load balancing between the web servers.
l Being configured as a cluster, application servers in two data centers are connected to
web servers to achieve load balancing between the application servers.
l Web servers between two data centers serve as backup resource pools for each other.


NOTE

Based on the preceding configuration requirements, see the corresponding documents to configure the
middleware.
Obtain the related documents from the following paths:
l Apache: http://www.apache.org
l Oracle WebLogic: http://www.oracle.com
l IBM WAS and IHS: http://www.ibm.com

3.7 Configuring the VIS6600T Cluster


Two VIS6600T devices form a 4-node cluster. To ensure proper working of the cluster, you
must set the cluster heartbeat mode to external heartbeat mode.

Prerequisites
The account and password for logging in to VIS6600T have been obtained and VIS6600T has
been logged in successfully.

Context
The VIS6600T devices are connected using an Ethernet switch. The VIS6600T heartbeat
ports are connected to the Ethernet switch and belong to the same VLAN.
After the VIS6600T cluster hardware cables are connected, the heartbeat mode is internal
heartbeat mode by default. You must change the mode to external heartbeat mode.

Procedure
Step 1 Check the items in Table 3-11.
Before changing the heartbeat mode of the VIS6600T cluster, ensure that the items in Table
3-11 are correct. If an item fails to pass the check, check hardware cable connections and
switch port configuration to ensure that all items pass the check. Otherwise, the VIS6600T
cluster configuration fails.


Table 3-11 Check items of the VIS6600T cluster

1. Check whether the VIS6600T heartbeat status is normal.
   Check method:
   1. In OceanStor ISM of the VIS6600T, choose All Devices > VIS6600T > Device Info > Service Control Unit > Ports and check whether Running Status is Link Up on the iSCSI Host Ports page.
      NOTE
      By default, the ports for the VIS6600T cluster heartbeat are P1 and P3 in slot 1 on each node.
   2. In OceanStor ISM of the VIS6600T, check whether Running Status is Link Up on the FC Host Ports page.
      NOTE
      By default, the ports for the VIS6600T cluster heartbeat are P0 and P1 in slot 1 on each node.
   3. Log in to the CLI of a VIS6600T node and run the vxping command with the heartbeat IP addresses of the other nodes to check the network cable connections among the nodes. If the IP addresses can be pinged, the connections are normal.
   Expected result: The port running status is Link Up, and the other VIS6600T nodes can be pinged by running the vxping command.

2. Check whether the Spanning Tree Protocol (STP) function of the VIS6600T cluster's Ethernet switch is disabled.
   Check method: On the heartbeat GE switch, run the display current-configuration command to check whether the value of the STP function parameter is disable. If the STP function is enabled, you must set it to disable.
   Expected result: The value of the STP function parameter is disable.

3. Check whether the running rate of the heartbeat ports is normal (1000 Mbit/s).
   Check method: In OceanStor ISM of the VIS6600T, select the heartbeat ports on the iSCSI Host Ports page and check whether Running Rate is 1000 Mbit/s. If the running rate is not 1000 Mbit/s, you must connect the heartbeat cables to GE ports on the Ethernet switch.
   Expected result: Running Rate of the heartbeat ports is 1000 Mbit/s.

4. Check whether the IP addresses of the heartbeat ports are cleared.
   Check method: In OceanStor ISM of the VIS6600T, check whether IP addresses are displayed on the iSCSI Host Ports page. If IP addresses are displayed, select the ports whose IP addresses are displayed and choose IP Address > Clear IPv4 Address or IP Address > Clear IPv6 Address to clear the IP addresses.
   Expected result: No heartbeat port IP address is displayed.

5. Check whether the heartbeat Ethernet switch has network requirements.
   Check method: By default, the heartbeat IP addresses are 127.127.11.0 and 127.127.12.0. If the used switch restricts these IP addresses, modify the heartbeat IP addresses, for example, by running the chgheartnet -i 10.252.0.0 command. When running the chgheartnet command, you only need to specify the first 16 bits of the IP addresses; the last 16 bits are automatically generated by default.
   Expected result: -

6. Check whether the versions of the two VIS6600T devices are the same.
   Check method: Log in to the CLI of each VIS6600T device and run the showcontroller command. The command outputs on the two devices must be the same. Otherwise, upgrade the two VIS6600T devices to the same version.
   Expected result: The versions of the two VIS6600T devices are the same.

7. Check whether the domain names of the two VIS6600T devices are the same.
   Check method: Log in to each VIS6600T device and run the showdomain command. If the domain names are different, see Step 2 to open the Heartbeat Mode and Domain Management dialog box and modify the domain names.
   Expected result: The domain names of the two VIS6600T devices are the same.

Step 2 On the two VIS6600T devices, change the heartbeat mode to external heartbeat mode.
1. In the navigation tree, select the Settings node.
2. Click Heartbeat Mode and Domain Management under Advanced in the function
pane.
The Heartbeat Mode and Domain Management dialog box is displayed.
3. Set Heartbeat Mode to External Heartbeat.
4. Click OK.
The 4-node VIS6600T cluster is set up successfully.

----End


3.8 Configuring Storage Arrays


Storage array configuration includes creating a LUN/LUN group, a host/host group, and a
mapping relationship between storage arrays and the VIS6600T cluster. In this way, the
VIS6600T cluster can centrally manage storage resources.

Context
The storage arrays in the two data centers must be configured. OceanStor 6800 V3 is used as
an example to describe its basic configuration. For details about how to configure other
storage arrays based on the storage configuration planning, see the corresponding documents.
NOTE

For details about how to configure a heterogeneous storage array, see the related operation guide.

Figure 3-7 shows the configuration process and operation portal for configuring storage arrays in DeviceManager.

Figure 3-7 Storage array configuration process and operation portal in DeviceManager

The configuration process is as follows:
1. Create a disk domain.
2. Create a storage pool.
3. Create a LUN.
4. Create a LUN group.
5. Create a host.
6. Create a host group.
7. Create a mapping view.


Procedure
Step 1 Create a disk domain.

NOTE

For details, see the OceanStor 5300 V3&5500 V3&5600 V3&5800 V3&6800 V3&6900 V3 Storage
System V300R002 Basic Storage Service Guide. For example, open the documentation and search
for Creating a Disk Domain to find the detailed operation procedure.

Step 2 Create a storage pool.

Step 3 Create a LUN.


Create all LUNs that are planned, including data LUNs, DCO LUNs, and quorum LUNs.
Step 4 Create a LUN group.
Create a LUN group that contains all LUNs.
Step 5 Create a host.
Create a VIS6600T host on the two storage arrays and select the Linux operating system. You
are advised to use each VIS6600T node as a host.
Step 6 Create a host group.
Add the created host into a host group. In the follow-up operations, you can directly map the
LUN/LUN group to the host group.
Step 7 Create a mapping view to set up a mapping relationship between the host group and LUN
group.

----End

Follow-up Procedure
Log in to the device management software of the storage array in data center B and see the
corresponding product documentation to complete all planned configurations.

NOTICE
- The storage array configuration takes effect only after logical disks are scanned on the
  OceanStor ISM of the VIS6600T.
- After the storage array configuration is adjusted, for example, the controller to which
  LUNs belong is changed or a new LUN is added, logical disks must be scanned again on the
  OceanStor ISM of the VIS6600T for the configuration to take effect. Otherwise, system
  performance may be adversely affected by the original configuration.

3.9 Configuring the VIS6600T


This section describes how to configure quorum policies, storage virtualization, mirroring,
and active-active storage. After the configuration, a mirroring relationship is established
between the storage arrays in the two data centers, which undertake services concurrently. When the


heartbeat is interrupted between two VIS6600T devices, services can be quickly recovered
based on the quorum policy that has been configured.

Context
This section configures only the VIS6600T in one data center.

3.9.1 Configuring Storage Virtualization


This section describes how to configure storage virtualization. The VIS6600T uses storage
virtualization to manage the storage array space, implementing resource consolidation and
optimal configuration of heterogeneous arrays.

Context
Storage virtualization means that the VIS6600T masks the differences between
heterogeneous storage arrays and consolidates them into a unified resource pool
(logical disk group) that provides virtual volumes for application hosts.
Figure 3-8 shows the storage virtualization configuration process and operation portal on the
ISM interface.


Figure 3-8 Storage virtualization configuration process and operation portal on the ISM
interface

The configuration process is as follows:
1. Scan for logical disks.
2. Create a logical disk group.
3. (Optional) Change the alias of a logical disk.
4. Create a volume.
5. Create a host group.
6. Create a host.
7. Add a mapping to a host group.

Procedure
Step 1 Scan for logical disks.
After you scan for logical disks, the displayed logical disks are the LUNs created on the
storage array.
The names of logical disks are automatically generated. You can use the unique WWN of a
LUN to determine which logical disk corresponds to the LUN. Figure 3-9 and Figure 3-10
show an example of the relationship between logical disks and LUNs: huawei-hvs85t3_66 = LUN_data,
huawei-hvs85t3_64 = LUN_DCO, huawei-hvs85t3_65 = LUN_FD.
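Conceptually, this WWN-based matching is a simple lookup. The sketch below uses placeholder WWN values (not from a real system) to show how logical-disk names can be paired with LUN names:

```python
# Placeholder WWNs for illustration only; on a real system they are read
# from the storage array and from the VIS6600T respectively.
luns = {                      # WWN -> LUN name on the storage array
    "wwn-0001": "LUN_data",
    "wwn-0002": "LUN_DCO",
    "wwn-0003": "LUN_FD",
}
logical_disks = {             # logical disk name on the VIS6600T -> WWN
    "huawei-hvs85t3_66": "wwn-0001",
    "huawei-hvs85t3_64": "wwn-0002",
    "huawei-hvs85t3_65": "wwn-0003",
}

# Pair each logical disk with the LUN that shares its WWN.
mapping = {disk: luns[wwn] for disk, wwn in logical_disks.items()}
for disk, lun in sorted(mapping.items()):
    print(f"{disk} = {lun}")
```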


Figure 3-9 WWN of a LUN on a storage array

Figure 3-10 WWN of a LUN on the VIS6600T

NOTE

For details about how to configure storage virtualization, see the OceanStor VIS6600T V200R003 Product
Documentation. For example, open the documentation and search for Scanning for Logical Disks to find
the detailed operation procedure.

Step 2 Create a logical disk group.


Create a logical disk group that contains all data disks and DCO disks mapped by two storage
arrays. Quorum disks are excluded. Set Retain Data to No for DCO disks, as shown in
Figure 3-11.


Figure 3-11 Creating a logical disk group

Step 3 (Optional) Change the alias of a logical disk.
Only the names of logical disks that are mapped on the storage arrays are displayed on the
VIS6600T, which makes the logical disks difficult to identify. You can change the aliases of
logical disks for easy maintenance.
Step 4 Create a volume.
The number of volumes is the same as that of data disks on a single storage array. Set the
volume size to the maximum size of a logical disk.
When creating a volume, select a data disk of data center A from the available logical disks.
Step 5 Create a host group.
Set an Oracle RAC cluster as a host group. For example, the host group can be named toOracle.
Step 6 Create a host.
Use each Oracle RAC node as a host.
Step 7 Add a mapping to a host group (add a volume mapping to a host group).
Map the volume that you have created to the host group.

----End

3.9.2 Configuring a Mirror


This section describes how to configure a mirror. Mirroring generates data copies for data
redundancy. Each copy (or mirror) is stored on a logical disk different from the source


volume. Mirroring ensures that data is not lost upon a failure of the source volume or a
mirror, improving data integrity and reliability.

Procedure
Step 1 Create a mirror.
1. Figure 3-12 shows the operation portal for creating a mirror.

Figure 3-12 Operation portal for creating a mirror

2. See Table 3-12 to complete mirror configuration.

Table 3-12 Mirror configuration description

Volume name: Data_1
- Data disk mirroring relationship:
  - Source disk: 6800V3_LUNData01 (disk that is selected when the volume is created)
  - Mirrored disk: otherdisk_LUNData01
- DCO disk mirroring relationship: Set a DCO disk.
  - DCO disk: Select the DCO disk (6800V3_DCO) in data center A.
  - DCO mirrored disk: Select the DCO disk (otherdisk_DCO) in data center B.

Volume name: Data_2
- Data disk mirroring relationship:
  - Source disk: 6800V3_LUNData02 (disk that is selected when the volume is created)
  - Mirrored disk: otherdisk_LUNData02
- DCO disk mirroring relationship: -

NOTE

For details, see the OceanStor VIS6600T V200R003 Product Documentation. For example, open the
documentation and search for Creating a Mirror to find the detailed operation procedure.

----End


3.9.3 Configuring Quorum Policies


If heartbeats between the two VIS6600T devices are interrupted, services are recovered based on
the quorum policy that you have configured.

Procedure
Step 1 Go to the operation portal for configuring quorum policies.

Figure 3-13 shows the operation portal for configuring quorum policies.

Figure 3-13 Operation portal for configuring quorum policies


Step 2 Click Set I/O fencing policy.

The Config I/O fencing policy dialog box is displayed.

Step 3 Follow the system configuration wizard to complete configuration.

Configuration parameters are described as follows:

- Quorum mode: Select Quorum Disk.
- Quorum disk: Select three logical disks, that is, the three quorum disks that have been
  planned.

NOTE

For details, see the OceanStor VIS6600T V200R003 Product Documentation. For example, open the
documentation and search for Initially Configuring a Storage System to find the detailed operation procedure.

----End

3.9.4 Configuring Active-Active Storage


With active-active storage enabled, both data centers are active and bear services together to
improve overall serviceability and resource utilization. Mutual backup is available to the two
data centers. When one data center fails, the other one automatically takes over services,
ensuring service continuity.


Procedure
Step 1 Go to the Active-Active Storage Management portal.
Figure 3-14 shows the location of Active-Active Storage Management.

Figure 3-14 Active-Active Storage Management portal

Step 2 Click Active-Active Storage Management.

Step 3 In the Site Information area, click Configure Active-Active Sites.


The Configure Active-Active Sites page is displayed.

NOTE

In the Disaster Recovery Data Center Solution (Active-Active Mode), the two data centers are deployed
as site A and site B.

Step 4 Configure active-active storage parameters.


Follow the system configuration wizard to complete configuration. Table 3-13 describes
active-active storage configuration parameters.


Table 3-13 Active-active storage configuration parameters

Site name: DC_A
- Node ID: Select the IDs of the two VIS6600T controllers (with the same serial number) in data center A based on the IP addresses that are allocated to the VIS6600T controllers.
- Preferred site: Yes
- Preferentially reading and writing local storage units: Yes
- Storage unit: Select the storage array in data center A based on its WWN. Obtain the WWN of the storage array from the DeviceManager home page.

Site name: DC_B
- Node ID: Select the IDs of the two VIS6600T controllers (with the same serial number) in data center B based on the IP addresses that are allocated to the VIS6600T controllers.
- Preferred site: No
- Preferentially reading and writing local storage units: Yes
- Storage unit: Select the storage array in data center B based on its WWN. Obtain the WWN of the storage array from the device management software.

NOTE

For details, see the OceanStor VIS6600T V200R003 Product Documentation. For example, open the
documentation and search for Configuring Active-Active Storage to find the detailed operation procedure.

Step 5 Enable active-active storage.


On the Active-Active Storage Management page, click Enable.
Complete the active-active storage configuration as prompted.

----End

3.10 Configuring UltraPath


This section describes how to configure the UltraPath software. In the Disaster Recovery Data
Center Solution (Active-Active Mode), the UltraPath software is configured to improve the
I/O processing efficiency and reduce the access latency.

Prerequisites
- The account and password for logging in to UltraPath have been obtained and you have
  logged in to UltraPath successfully.
- No multipathing software other than UltraPath is installed on application hosts.


If other multipathing software is installed on an application host, UltraPath may fail to
manage its drive letters because they are managed by the other multipathing software.
NOTE

Linux has its own multipathing software. Before installing UltraPath, uninstall the multipathing
software or disable its process. For details about how to uninstall the software or disable its
process, see the documentation of the operating system. The following example shows how to
disable the multipathing service process:
- Versions earlier than Red Hat 7.0 and CentOS 7.0: Run the chkconfig --level 35 multipathd
  off command to disable the process, and restart the system for the setting to take effect.
- Red Hat 7.0 and CentOS 7.0: Run the systemctl disable multipathd.service command to
  disable the process, and restart the system for the setting to take effect.
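The version-dependent commands above follow a simple rule that can be sketched as follows. The helper function is hypothetical (not part of UltraPath or the operating system) and only encodes the Red Hat/CentOS rule stated in this note:

```python
def multipathd_disable_command(major_version):
    """Return the command that disables the native multipathd process on
    Red Hat/CentOS with the given major version. In both cases the system
    must be restarted afterwards for the setting to take effect."""
    if major_version >= 7:
        return "systemctl disable multipathd.service"
    return "chkconfig --level 35 multipathd off"

print(multipathd_disable_command(6))
print(multipathd_disable_command(7))
```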

Context
In UltraPath, the VIS6600T node in the local data center is set to be preferentially accessed.
Services preferentially use the local VIS6600T node to process I/Os. When the local
VIS6600T node fails, services will use the VIS6600T node in the other data center. In this
way, service response efficiency is improved and access latency is reduced.

UltraPath must be configured on all application servers.

NOTE

This section uses a Linux-based application server to describe how to configure UltraPath. For other
operating systems, see the corresponding user guide based on your operating system.
- OceanStor UltraPath for Linux V100R008C00 User Guide
- OceanStor UltraPath for AIX V100R008C00 User Guide
- OceanStor UltraPath for Solaris V100R008C00 User Guide
- OceanStor UltraPath for vSphere V100R008C00 User Guide
- OceanStor UltraPath for Windows V100R008C00 User Guide

Procedure
Step 1 Set the controllers on the VIS6600T in the local data center to local end and controllers on the
VIS6600T in the other data center to remote end.

UltraPath CLI #1 > set remote_controller array_id=ID tpg_id=ID1,ID2... [remote|local]

Table 3-14 describes the parameters in the set remote_controller command.

Table 3-14 Parameters in the set remote_controller command


Parameter: array_id
Description: ID that is allocated to the VIS6600T by UltraPath. Run the show array command
to obtain the array_id information.

Parameter: tpg_id
Description: ID of a VIS6600T controller. Multiple controller IDs can be specified at the
same time. You can check the IDs on the VIS6600T. The IDs of the controllers in a 4-node
VIS6600T cluster are 0, 1, 2, and 3.


Parameter: remote|local
Description: Status of a controller. Possible values are local and remote.
- local: indicates a local controller.
- remote: indicates a remote controller.

Configuration instance

Assume that the ID of the VIS6600T in the data center where the current application servers
reside is 0 and its controller IDs are 0 and 1. In the other data center, the VIS6600T ID is 1
and its controller IDs are 2 and 3. Set controllers 0 and 1 to local controllers and
controllers 2 and 3 to remote controllers.

Run the upadmin command to log in to the CLI, and then run the following commands:
UltraPath CLI #1 > set remote_controller array_id=0 tpg_id=0,1 local     //Set controllers 0 and 1 to local controllers.
UltraPath CLI #1 > set remote_controller array_id=1 tpg_id=2,3 remote    //Set controllers 2 and 3 to remote controllers.

Step 2 Set the load balancing mode to load balancing among controllers.
UltraPath CLI #1 > set workingmode=0

Step 3 Set the load balancing algorithm to round robin.
UltraPath CLI #1 > set loadbalancemode -m round-robin

----End

3.11 Configuring ReplicationDirector


This section describes how to configure the OceanStor ReplicationDirector disaster recovery
management software for unified management of the Disaster Recovery Data Center Solution
(Active-Active Mode).

3.11.1 Configuring ReplicationDirector (4-Node VIS6600T Cluster)

This section describes how to configure the OceanStor ReplicationDirector disaster recovery
management software for a 4-node VIS6600T cluster.

Prerequisites
The account and password for logging in to ReplicationDirector have been obtained and you
have logged in to ReplicationDirector.

Context
Figure 3-15 shows the procedure and operation portal for configuring ReplicationDirector for
a 4-node VIS6600T cluster.


Figure 3-15 Procedure and operation portal for configuring ReplicationDirector for a 4-node
VIS6600T cluster

The configuration process is as follows:
1. Discover resources.
2. Create a site.
3. Create a protected group.

Procedure
Step 1 Discover resources.

To configure the disaster recovery protection service, discover resources first.


ReplicationDirector discovers the hosts where protected objects such as databases reside and
associated storage devices before implementing a variety of data protection and disaster
recovery methods for the protected objects.

Resource discovery includes discovering storage devices and hosts. Table 3-15 shows the
objects that must be added to ReplicationDirector.

Table 3-15 Protected objects of ReplicationDirector

Type: Storage
Name:
- Storage array in data center A
- Storage array in data center B
- VIS6600T in data center A
- VIS6600T in data center B
Description: Obtain the IP addresses, user names, and passwords used to access the devices
and add the storage devices.

Type: Host
Name: Oracle RAC node
Description: Obtain the IP addresses, user names, and passwords used to access the Oracle
RAC nodes.
NOTE: If VMware is used at the application layer, add the vCenter server.


NOTE

For details about how to configure ReplicationDirector for a 4-node VIS6600T cluster, see the
OceanStor ReplicationDirector V100R003C10 Product Documentation. For example, open the
documentation and search for Discovering Resources to find the detailed operation procedure.

Step 2 Create a site.

Create a local site for the Disaster Recovery Data Center Solution (Active-Active Mode). The
resources that must be added to the local site include all devices in Step 1.

Step 3 Create a protected group.

You can add multiple protected objects to the same protected group based on your service
type and set a unified protection policy to protect the objects. In this example, an Oracle
protected group is created.

----End

3.11.2 Configuring ReplicationDirector (8-Node VIS6600T Cluster)

This section describes how to configure the OceanStor ReplicationDirector disaster recovery
management software for an 8-node VIS6600T cluster.

Prerequisites
The two 4-node VIS6600T clusters have been configured and the configurations are the same.

Context
When configuring the Disaster Recovery Data Center Solution (Active-Active Mode) for an
8-node VIS6600T cluster in ReplicationDirector, you can configure and associate two 4-node
VIS6600T clusters.

Procedure

Step 1 On the menu bar, click Settings.

Step 2 In the navigation tree, choose Site Management > All Sites.

Step 3 In the right function pane, select the site that has been created.

Step 4 Click Storage.

Step 5 Select the added VIS6600T cluster and click Configure Cluster.

After the two 4-node VIS6600T clusters are associated, the configuration of the 8-node
VIS6600T cluster is completed.

----End


3.12 Checking the Configuration Result


This section describes how to verify the configuration of the Disaster Recovery Data Center
Solution (Active-Active Mode). After configuring the solution, you must verify that read and
write operations are performed based on the path that has been designed to ensure that the
solution can work properly.

Prerequisites
- The OceanStor Toolkit (Toolkit for short) has been obtained.
- The IOmeter test tool has been obtained and installed on the test server.

NOTE

- Method for obtaining the OceanStor Toolkit: Log in to http://enterprise.huawei.com and
  choose Support > Product Support > IT > Storage > Tools and Platform > OceanStor Toolkit.
- Method for obtaining IOmeter: IOmeter is free software and can be obtained from the Internet.

Context
If the storage arrays in the two data centers have the same I/O operations after the
configuration, the configuration is correct.
Figure 3-16 shows the process of verifying the configuration.


Figure 3-16 Process of verifying the configuration

The verification process is as follows:
1. Execute inspection: use Toolkit to check the VIS6600T cluster.
2. Configure the test environment: create a test volume, create a test host, and map the
   test volume to the test host.
3. Verify I/Os and deliver I/Os to the two storage arrays: deliver I/Os, check volume I/Os,
   and check storage array I/Os.

Procedure
Step 1 Use Toolkit to inspect the VIS6600T cluster.

If an item does not pass the inspection or an alarm is generated, see the corresponding
document to resolve the problem and ensure that all items pass the inspection and no alarm is
generated.

Step 2 Log in to either VIS6600T device.

Step 3 Create a test volume.
1. In the navigation tree, choose All Devices > VIS6600T > Logical Disk Groups > XXXX.
   XXXX is the name of the logical disk group.
2. In the function pane, click the Volumes tab.
3. Click Create.
   The Create Volume dialog box is displayed.
4. Configure the basic information about the volume.


   Set the volume name to test_volume.
5. Select logical disks for the volume based on your requirements.
6. Click OK.
   The test volume is created.
Step 4 Create a test host.
1. Use the IP address, user name, and password of a VIS6600T node to log in to the
VIS6600T ISM.
2. In the navigation tree, choose All Devices > VIS6600T > Mappings > Hosts.
3. In the function pane, click Create.
The Create Host dialog box is displayed.
4. Set the host name to test_host.
5. Click Next to start the configuration of the Fibre Channel initiator.
6. Add an initiator to the test host.
7. Click Next. On the Summary page, view the information about the new test host.
8. Click OK.
The test host is created.
Step 5 Map the test volume to the test host.
1. In the navigation tree, expand the Logical Disk Groups node.
2. Select a logical disk group whose volume you want to map.
3. In the function pane, click the Volumes tab.
4. Select the volume (test_volume) that you want to map.
5. Choose Mapping > Map to Host.
The Volume Mapping to Host dialog box is displayed.
6. Select the host (test_host) to which you want to map the volume.
7. Click OK.
A message is displayed indicating that the test volume is mapped to the test host
successfully.
Step 6 Use IOmeter to perform I/O operations on the volume (test_volume).
Step 7 In the VIS6600T, check I/O operations of the volumes.
1. In the VIS6600T, enable Performance Monitoring.
Choose Settings > Performance Monitoring > Set Performance Monitoring. On the
Set Performance Monitoring page, click Enable.
2. Choose Performance > Performance Monitoring.
3. On the Performance Monitoring page, check the I/O operations of volumes.
Step 8 On the two storage arrays, check the I/O operations of LUNs.
1. In the OceanStor 6800 V3, enable Performance Monitoring Settings.
Choose Viewing and Settings > Performance Monitoring Settings. On the
Performance Monitoring Settings page, click Enable.


2. Choose Performance Monitoring > Real-Time Performance Monitoring.


3. On the Real-Time Performance Monitoring page, check the I/O operations of LUNs.
NOTE

This section describes how to check I/O operations of LUNs in the OceanStor 6800 V3. For details
about how to check the I/O operations of the heterogeneous storage array, see the related product
document.

If the I/O operations of LUNs are the same on the two storage arrays, the solution is working
properly. Otherwise, check the configuration.
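The final comparison can be thought of as checking that both arrays report the same per-LUN I/O counters. The sketch below uses invented counter values; on a real system the numbers come from each array's performance monitoring page:

```python
def arrays_consistent(io_counts_a, io_counts_b):
    """Return True when the two arrays report I/O on the same LUNs with
    the same counts, as expected when mirrored writes reach both sites."""
    return io_counts_a == io_counts_b

# Invented example counters (LUN name -> I/O count) for illustration.
array_a = {"LUN_data": 120000, "LUN_DCO": 350}
array_b = {"LUN_data": 120000, "LUN_DCO": 350}
print(arrays_consistent(array_a, array_b))
```

A result of True corresponds to the solution working properly; False means the configuration should be checked.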

----End


A Appendix

A.1 Product Introduction


This chapter describes the products involved in the Disaster Recovery Data Center Solution
(Active-Active Mode).
A.2 Zone Division for Fibre Channel Switches
This chapter describes the principles for dividing zones on Fibre Channel switches and
provides examples.
A.3 Glossary


A.1 Product Introduction


This chapter describes the products involved in the Disaster Recovery Data Center Solution
(Active-Active Mode).

A.1.1 Storage Arrays (OceanStor 18000 Series)


HUAWEI OceanStor 18000 series enterprise storage systems (the OceanStor 18000 series for
short) are optimum storage platforms for next-generation data centers. Being secure, flexible,
and efficient, the OceanStor 18000 series meets the demanding core business requirements of
industries including finance, government sector, energy, manufacturing, transportation,
education, and telecommunications.
HUAWEI OceanStor 18000 series enterprise storage systems are high-end flagship products
and include three models: OceanStor 18500, OceanStor 18800, and OceanStor 18800F. The
OceanStor 18000 series inherits a flexible and scalable design and adopts the Smart Matrix
architecture. The architecture has a horizontal expansion system of multiple engines (each
engine containing two controllers) and provides up to eight system bays and two disk bays for
enterprise data centers. The hardware is seamlessly integrated into enterprise data centers,
boosting their efficiency and scalability to perfectly meet requirements of Online Transaction
Processing/Online Analytical Processing (OLTP/OLAP), high-performance computing
(HPC), digital media, Internet-based operation, centralized storage, backup, disaster recovery,
and data migration.
Figure A-1 shows the exterior of the OceanStor 18000 series.

Figure A-1 Exterior of the OceanStor 18000 series

A.1.2 Storage Arrays (OceanStor V3 Converged Storage Systems)


HUAWEI OceanStor V3 converged storage systems are the next-generation unified storage
products oriented to enterprise-class applications.
Leveraging a storage operating system built on a cloud-oriented architecture, a powerful new
hardware platform, and suites of intelligent management software, the OceanStor V3 mid-
range storage systems deliver industry-leading functionality, performance, efficiency,
reliability, and ease-of-use. They provide data storage for applications such as large-database
OLTP/OLAP and file sharing and can be widely applied to industries ranging from government
and finance to telecommunications, energy, media, and entertainment.
Additionally, the V3 mid-range storage systems can provide a wide range of efficient and
flexible backup and disaster recovery solutions to ensure business continuity and data
security, delivering excellent storage services.


Figure A-2 shows the exterior of the V3 series.

Figure A-2 Exterior of the V3 series

A.1.3 Storage Arrays (OceanStor T Series)


HUAWEI OceanStor T series unified storage systems (T series for short) are the next-
generation unified storage products oriented to enterprise-class applications.
The OceanStor T series unified storage systems are brand-new storage products designed for
mid-range and high-end storage applications. The T series is an integration of the architecture,
storage protocols, and platforms. It also provides outstanding performance and unique
software features and supports efficient resource utilization, delivering an industry-leading
performance and all-around unified storage solution to meet your application needs in large-
database OLTP/OLAP, HPC, digital media, Internet operation, centralized storage, backup,
disaster recovery, and data migration, maximizing your ROI.
Figure A-3 shows the exterior of the T series.

Figure A-3 Exterior of the T series

A.1.4 Storage Virtualization Gateways (OceanStor VIS6600T)


HUAWEI OceanStor VIS6600T is a best-in-class product with powerful storage
virtualization capabilities.
As an industry-leading storage virtualization product, the OceanStor VIS6600T meets
customers' requirements for heterogeneous storage integration, unified management of storage
space, and construction of a multi-level disaster recovery system. Specific functions that
address these requirements include storage resource integration, unified resource
management, data migration, and multiple data protection mechanisms. The VIS6600T is
built to provide customers with an open storage system featuring solid security, robust
reliability, flexible management, and superb performance.

Figure A-4 shows the exterior of the VIS6600T.

Figure A-4 Exterior of the VIS6600T

Powerful storage virtualization capabilities of the VIS6600T:

- Consolidation of heterogeneous storage arrays
  The network layer-based heterogeneous storage virtualization function integrates
  heterogeneous storage resources into a unified storage resource pool. Users can directly
  access and share storage space in the resource pool without needing to know what the
  resources are or where they come from.
- Seamless integration of Fibre Channel and IP networks
  Both Fibre Channel and iSCSI host and array ports, including mainstream 10GE iSCSI
  ports, are fully supported.

A.1.5 Fibre Channel Switches (OceanStor SNS Series)


HUAWEI OceanStor SNS series Fibre Channel switches are oriented to small-scale
independent SANs and data centers and are designed to reduce enterprises' SAN costs while
improving SAN scalability and ease of use.

The OceanStor SNS (SNS for short) Fibre Channel switches combine a superior hardware and
software architecture, meeting enterprise requirements for robust reliability, outstanding
performance, and high availability.

In this solution, the SNS2224, SNS2248, and SNS3096 are recommended.

- The SNS2224/2248 is oriented to small-scale independent SANs and edge topologies of
  large-scale core switching networks. It features Inter-Switch Link (ISL) aggregation,
  high bandwidth, and port self-adaptation, meeting enterprises' requirements on Fibre
  Channel switches.
- The SNS3096 is oriented to data centers and used in widely practiced dedicated basic
  network architectures. It uses fifth-generation Fibre Channel technology, meeting all
  requirements for high-density server virtualization, cloud architecture, and
  next-generation storage architecture.

Figure A-5 and Figure A-6 show the exteriors of the SNS2224/2248 and SNS3096
respectively.


Figure A-5 Exterior of the SNS2224/2248

Figure A-6 Exterior of the SNS3096

A.1.6 Multipathing Software (OceanStor UltraPath)


The OceanStor UltraPath software (UltraPath for short) provides a multipathing solution for
application servers to access a storage system, improving the security, reliability, and
maintainability of enterprise data.

UltraPath improves data transfer reliability, ensures security of paths between an application
server and a storage system, and provides customers with an easy-to-use and highly efficient
path management solution to bring the performance of application hosts and storage systems
into full play, maximizing the return on investment (ROI).

NOTE

UltraPath is installed on application hosts.

A.1.7 DR Management Software (OceanStor ReplicationDirector)


OceanStor ReplicationDirector is visual DR management software for data center storage
systems.

OceanStor ReplicationDirector (ReplicationDirector for short) is storage disaster
recovery (DR) management software oriented to enterprise-class data centers. It
enables you to manage DR environments of the Disaster Recovery Data Center Solution
(Active-Active Mode), Disaster Recovery Data Center Solution (Active-Passive Mode), and
Disaster Recovery Data Center Solution (Geo-Redundant Mode), as well as the DR of
various database applications and virtualization environments. With ReplicationDirector, you
can configure DR tasks with ease, monitor the operating status of DR services in a visual way,
and perform DR and DR testing quickly.

ReplicationDirector provides DR topologies and end-to-end monitoring to simplify DR
management. The intuitive display of DR solution status and changes, together with
real-time monitoring of device parts, enables you to identify and troubleshoot faults before a
service failover, preventing unnecessary failovers that adversely affect ongoing services and
increase DR costs.


A.1.8 Load Balancer (L2800)


The L2800 connects, schedules, and manages enterprise-wide application server clusters and
balances load across back-end application servers.
The Huawei L2800 load balancer provides a feasible and efficient way to enhance network
flexibility and availability based on the existing infrastructure. The L2800 employs a carrier-
class intelligent server load balancing (SLB) engine and provides eight GE ports and two
10GE ports, enabling you to establish a large-capacity, intelligent local or global load
balancing network between application servers and clients. It also uses the load balancing
management system to centrally manage the deployment of, and statistics on, applications
and servers, helping you cope with network congestion with ease. Moreover, the HACS
dual-host module improves service reliability, meeting the demanding business continuity
requirements of critical services and delivering smooth 24/7 application access.
Figure A-7 shows the appearance of the L2800.

Figure A-7 Appearance of the L2800

A.2 Zone Division for Fibre Channel Switches


This section describes the principles for dividing zones on Fibre Channel switches and
provides examples.
Only the switch ports and devices in the same zone can communicate with each other. Fibre
Channel switches are cascaded across data centers, and zones are created to interconnect the
nodes in the VIS6600T cluster with the storage arrays, and the application servers with the
VIS6600T nodes, forming redundant links that keep services running in the event of a
single point of failure.
The four nodes in the VIS6600T cluster are numbered 0, 1, 2, and 3. The domain IDs of the
two Fibre Channel switches in data center A are set to 1 and 3 respectively, and those of the
two Fibre Channel switches in data center B are set to 2 and 4 respectively. Table A-1 and
Table A-2 describe the ports on the Fibre Channel switches.

Table A-1 Description of ports on Fibre Channel switches in data center A

Port | Connection Device (Fibre Channel Switch 1)       | Connection Device (Fibre Channel Switch 3)
0    | Port P0 in slot 2 on node 0 of the VIS6600T      | Port P1 in slot 2 on node 0 of the VIS6600T
1    | Port P2 in slot 2 on node 0 of the VIS6600T      | Port P3 in slot 2 on node 0 of the VIS6600T
2    | Port P0 in slot 1 on node 0 of the VIS6600T      | Port P1 in slot 1 on node 0 of the VIS6600T
3    | Port P0 on controller A of the OceanStor 6800 V3 | Port P1 on controller A of the OceanStor 6800 V3
4    | Fibre Channel port 1 of application host 1       | Fibre Channel port 2 of application host 1
5    | Fibre Channel port 1 of application host 2       | Fibre Channel port 2 of application host 2
12   | Port P0 in slot 2 on node 1 of the VIS6600T      | Port P1 in slot 2 on node 1 of the VIS6600T
13   | Port P2 in slot 2 on node 1 of the VIS6600T      | Port P3 in slot 2 on node 1 of the VIS6600T
14   | Port P0 in slot 1 on node 1 of the VIS6600T      | Port P1 in slot 1 on node 1 of the VIS6600T
15   | Port P0 on controller B of the OceanStor 6800 V3 | Port P1 on controller B of the OceanStor 6800 V3

Table A-2 Description of ports on Fibre Channel switches in data center B

Port | Connection Device (Fibre Channel Switch 2)  | Connection Device (Fibre Channel Switch 4)
0    | Port P0 in slot 2 on node 2 of the VIS6600T | Port P1 in slot 2 on node 2 of the VIS6600T
1    | Port P2 in slot 2 on node 2 of the VIS6600T | Port P3 in slot 2 on node 2 of the VIS6600T
2    | Port P0 in slot 1 on node 2 of the VIS6600T | Port P1 in slot 1 on node 2 of the VIS6600T
3    | Port P0 on controller A of a storage array  | Port P1 on controller A of a storage array
4    | Fibre Channel port 1 of application host 3  | Fibre Channel port 2 of application host 3
12   | Port P0 in slot 2 on node 3 of the VIS6600T | Port P1 in slot 2 on node 3 of the VIS6600T
13   | Port P2 in slot 2 on node 3 of the VIS6600T | Port P3 in slot 2 on node 3 of the VIS6600T
14   | Port P0 in slot 1 on node 3 of the VIS6600T | Port P1 in slot 1 on node 3 of the VIS6600T
15   | Port P0 on controller B of a storage array  | Port P1 on controller B of a storage array

Fibre Channel switch 1 in data center A and Fibre Channel switch 2 in data center B are
cascaded. Fibre Channel switch 3 in data center A and Fibre Channel switch 4 in data center
B are cascaded. On two Fibre Channel switches that are cascaded, the ports of each link form
a zone. For details, see Table A-3 and Table A-4.

Table A-3 Zone division on Fibre Channel switches 1 and 2

Zone Name Example        | Zone Member (a)      | Description
VIS_NODE0_P0_6800V3_A_1  | 1:0; 1:3             | Connects node 0 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center A
VIS_NODE0_P0_6800V3_B_1  | 1:0; 1:15            | Connects node 0 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center A
VIS_NODE1_P0_6800V3_A_1  | 1:12; 1:3            | Connects node 1 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center A
VIS_NODE1_P0_6800V3_B_1  | 1:12; 1:15           | Connects node 1 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center A
VIS_NODE2_P0_6800V3_A_1  | 2:0; 1:3             | Connects node 2 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center A
VIS_NODE2_P0_6800V3_B_1  | 2:0; 1:15            | Connects node 2 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center A
VIS_NODE3_P0_6800V3_A_1  | 2:12; 1:3            | Connects node 3 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center A
VIS_NODE3_P0_6800V3_B_1  | 2:12; 1:15           | Connects node 3 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center A
VIS_NODE0_P0_disk_A_1    | 1:0; 2:3             | Connects node 0 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center B
VIS_NODE0_P0_disk_B_1    | 1:0; 2:15            | Connects node 0 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center B
VIS_NODE1_P0_disk_A_1    | 1:12; 2:3            | Connects node 1 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center B
VIS_NODE1_P0_disk_B_1    | 1:12; 2:15           | Connects node 1 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center B
VIS_NODE2_P0_disk_A_1    | 2:0; 2:3             | Connects node 2 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center B
VIS_NODE2_P0_disk_B_1    | 2:0; 2:15            | Connects node 2 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center B
VIS_NODE3_P0_disk_A_1    | 2:12; 2:3            | Connects node 3 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center B
VIS_NODE3_P0_disk_B_1    | 2:12; 2:15           | Connects node 3 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center B
VIS_NODE0_P2_Host1_1     | 1:1; 1:4             | Connects node 0 in the VIS6600T cluster to application host 1 in data center A
VIS_NODE1_P2_Host1_1     | 1:13; 1:4            | Connects node 1 in the VIS6600T cluster to application host 1 in data center A
VIS_NODE2_P2_Host1_1     | 2:1; 1:4             | Connects node 2 in the VIS6600T cluster to application host 1 in data center A
VIS_NODE3_P2_Host1_1     | 2:13; 1:4            | Connects node 3 in the VIS6600T cluster to application host 1 in data center A
VIS_NODE0_P2_Host2_1     | 1:1; 1:5             | Connects node 0 in the VIS6600T cluster to application host 2 in data center A
VIS_NODE1_P2_Host2_1     | 1:13; 1:5            | Connects node 1 in the VIS6600T cluster to application host 2 in data center A
VIS_NODE2_P2_Host2_1     | 2:1; 1:5             | Connects node 2 in the VIS6600T cluster to application host 2 in data center A
VIS_NODE3_P2_Host2_1     | 2:13; 1:5            | Connects node 3 in the VIS6600T cluster to application host 2 in data center A
VIS_NODE0_P2_Host3_1     | 1:1; 2:4             | Connects node 0 in the VIS6600T cluster to application host 3 in data center B
VIS_NODE1_P2_Host3_1     | 1:13; 2:4            | Connects node 1 in the VIS6600T cluster to application host 3 in data center B
VIS_NODE2_P2_Host3_1     | 2:1; 2:4             | Connects node 2 in the VIS6600T cluster to application host 3 in data center B
VIS_NODE3_P2_Host3_1     | 2:13; 2:4            | Connects node 3 in the VIS6600T cluster to application host 3 in data center B
VIS_InterConnect         | 1:2; 1:14; 2:2; 2:14 | Interconnects the four nodes in the VIS6600T cluster

a: Zone members are separated by semicolons (;). Members are expressed in the format
Domain ID of Fibre Channel switch:Port number. For example, 1:0 indicates port 0 on
Fibre Channel switch 1.
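As an illustration only (this procedure is not part of this guide), the first two zones in Table A-3 might be created on a switch that supports Brocade-style Fabric OS commands; note that such CLIs typically write zone members as Domain,Port rather than the Domain:Port notation used in the tables:

```
zonecreate "VIS_NODE0_P0_6800V3_A_1", "1,0; 1,3"
zonecreate "VIS_NODE0_P0_6800V3_B_1", "1,0; 1,15"
cfgcreate "DC_A_CFG", "VIS_NODE0_P0_6800V3_A_1; VIS_NODE0_P0_6800V3_B_1"
cfgsave
cfgenable "DC_A_CFG"
```

The actual commands and member notation depend on the switch model and firmware version; consult the switch documentation before configuring zones.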

Table A-4 Zone division on Fibre Channel switches 3 and 4

Zone Name Example        | Zone Member (a)      | Description
VIS_NODE0_P1_6800V3_A_2  | 3:0; 3:3             | Connects node 0 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center A
VIS_NODE0_P1_6800V3_B_2  | 3:0; 3:15            | Connects node 0 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center A
VIS_NODE1_P1_6800V3_A_2  | 3:12; 3:3            | Connects node 1 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center A
VIS_NODE1_P1_6800V3_B_2  | 3:12; 3:15           | Connects node 1 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center A
VIS_NODE2_P1_6800V3_A_2  | 4:0; 3:3             | Connects node 2 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center A
VIS_NODE2_P1_6800V3_B_2  | 4:0; 3:15            | Connects node 2 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center A
VIS_NODE3_P1_6800V3_A_2  | 4:12; 3:3            | Connects node 3 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center A
VIS_NODE3_P1_6800V3_B_2  | 4:12; 3:15           | Connects node 3 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center A
VIS_NODE0_P1_disk_A_2    | 3:0; 4:3             | Connects node 0 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center B
VIS_NODE0_P1_disk_B_2    | 3:0; 4:15            | Connects node 0 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center B
VIS_NODE1_P1_disk_A_2    | 3:12; 4:3            | Connects node 1 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center B
VIS_NODE1_P1_disk_B_2    | 3:12; 4:15           | Connects node 1 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center B
VIS_NODE2_P1_disk_A_2    | 4:0; 4:3             | Connects node 2 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center B
VIS_NODE2_P1_disk_B_2    | 4:0; 4:15            | Connects node 2 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center B
VIS_NODE3_P1_disk_A_2    | 4:12; 4:3            | Connects node 3 in the VIS6600T cluster to controller A of the OceanStor 6800 V3 in data center B
VIS_NODE3_P1_disk_B_2    | 4:12; 4:15           | Connects node 3 in the VIS6600T cluster to controller B of the OceanStor 6800 V3 in data center B
VIS_NODE0_P3_Host1_2     | 3:1; 3:4             | Connects node 0 in the VIS6600T cluster to application host 1 in data center A
VIS_NODE1_P3_Host1_2     | 3:13; 3:4            | Connects node 1 in the VIS6600T cluster to application host 1 in data center A
VIS_NODE2_P3_Host1_2     | 4:1; 3:4             | Connects node 2 in the VIS6600T cluster to application host 1 in data center A
VIS_NODE3_P3_Host1_2     | 4:13; 3:4            | Connects node 3 in the VIS6600T cluster to application host 1 in data center A
VIS_NODE0_P3_Host2_2     | 3:1; 3:5             | Connects node 0 in the VIS6600T cluster to application host 2 in data center A
VIS_NODE1_P3_Host2_2     | 3:13; 3:5            | Connects node 1 in the VIS6600T cluster to application host 2 in data center A
VIS_NODE2_P3_Host2_2     | 4:1; 3:5             | Connects node 2 in the VIS6600T cluster to application host 2 in data center A
VIS_NODE3_P3_Host2_2     | 4:13; 3:5            | Connects node 3 in the VIS6600T cluster to application host 2 in data center A
VIS_NODE0_P3_Host3_2     | 3:1; 4:4             | Connects node 0 in the VIS6600T cluster to application host 3 in data center B
VIS_NODE1_P3_Host3_2     | 3:13; 4:4            | Connects node 1 in the VIS6600T cluster to application host 3 in data center B
VIS_NODE2_P3_Host3_2     | 4:1; 4:4             | Connects node 2 in the VIS6600T cluster to application host 3 in data center B
VIS_NODE3_P3_Host3_2     | 4:13; 4:4            | Connects node 3 in the VIS6600T cluster to application host 3 in data center B
VIS_InterConnect         | 3:2; 3:14; 4:2; 4:14 | Interconnects the four nodes in the VIS6600T cluster

a: Zone members are separated by semicolons (;). Members are expressed in the format
Domain ID of Fibre Channel switch:Port number. For example, 1:0 indicates port 0 on
Fibre Channel switch 1.
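The Domain:Port member convention used in the zone tables can be modeled with a short sketch. This example is ours, not part of the guide: the helper name and the redundancy check are illustrative, and only a few zones from Table A-3 are included.

```python
def parse_member(member):
    """Split a 'domain:port' zone member into (domain ID, port number)."""
    domain, port = member.split(":")
    return int(domain), int(port)

# A few zones from Table A-3 (Fibre Channel switches 1 and 2).
zones = {
    "VIS_NODE0_P0_6800V3_A_1": {"1:0", "1:3"},   # node 0 -> controller A, DC A
    "VIS_NODE0_P0_6800V3_B_1": {"1:0", "1:15"},  # node 0 -> controller B, DC A
    "VIS_NODE0_P0_disk_A_1":   {"1:0", "2:3"},   # node 0 -> controller A, DC B
    "VIS_InterConnect":        {"1:2", "1:14", "2:2", "2:14"},
}

# Port 0 on switch 1 (VIS node 0, slot 2, port P0) should reach both
# controllers of the data center A array through separate zones -- the
# redundant links the section describes.
reachable = {m for members in zones.values() if "1:0" in members
             for m in members} - {"1:0"}
assert {"1:3", "1:15"} <= reachable
assert parse_member("1:0") == (1, 0)
```

The same check can be extended to every node and host pairing to verify a full zoning plan before it is applied on the switches.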

A.3 Glossary
C

CHAP Challenge Handshake Authentication Protocol (CHAP). A method to periodically
verify the identity of the peer using a three-way handshake. During the establishment of a
link, the authenticator sends a "challenge" message to the peer. The peer responds with
a value calculated using a one-way hash function. The authenticator checks the
response against its own calculation of the expected hash value. If the values match,
the authentication is acknowledged. CHAP provides protection against replay
attacks.
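The exchange described above can be sketched in a few lines. This is our illustration, not part of the guide: per RFC 1994 the response is an MD5 hash over the CHAP identifier, the shared secret, and the challenge; the function name and example secret are ours.

```python
import hashlib
import os

def chap_response(identifier: bytes, secret: bytes, challenge: bytes) -> bytes:
    # One-way hash over identifier || secret || challenge (RFC 1994).
    return hashlib.md5(identifier + secret + challenge).digest()

secret = b"shared-secret"   # known to both ends; never sent on the wire
identifier = b"\x01"
challenge = os.urandom(16)  # a fresh random challenge defeats replay

peer_value = chap_response(identifier, secret, challenge)  # computed by the peer
expected = chap_response(identifier, secret, challenge)    # computed by the authenticator
assert peer_value == expected  # values match: authentication acknowledged
```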
CSS Cluster Switch System (CSS) is a network virtualization technology that is applied to
modular configuration switches. It uses dedicated or general cables to connect
multiple switches that support the CSS technology. Then the switches are virtualized
into one large-scale switch.

DCO Data Change Object.


DNS Domain Name Server.


DWDM Dense Wavelength Division Multiplexing.

FC Fibre Channel (FC). A high-speed transport technology used to build SANs. FC is
primarily used for transporting SCSI traffic from servers to disk arrays, but it can also
be used on networks carrying ATM and IP traffic. FC supports single-mode and
multi-mode fiber connections, and can run on twisted-pair copper wires and coaxial
cables. FC provides both connection-oriented and connectionless services.

GSLB Global Server Load Balance.

iSCSI Internet Small Computer Systems Interface (iSCSI). A transport protocol that provides
for the SCSI protocol to be carried over a TCP-based IP network. Standardized by the
Internet Engineering Task Force and described in RFC.
ISL Inter-Switch Link.
ISM Integrated Storage Manager.

LLD Low Level Design (LLD). The activities of further detailing the design content,
providing data configuration rules, and guiding data planning according to the
network topology solution, signaling channel routing solution, and service
implementation solution developed at the network planning stage.
LUN Logical Unit Number.

NAS Network Attached Storage (NAS). File-level computer data storage connected to a
computer network, providing data access to heterogeneous clients. A NAS server
contains storage devices such as disk arrays, CD/DVD drives, tape drives, or portable
storage media, and provides an embedded operating system to share files across
platforms. Network storage has developed in two directions: SAN and NAS. NAS
follows the Ethernet tradition of data access and is modeled on the network file
server.

OLAP Online Analytical Processing.


OLTP Online Transaction Processing.


R
RAC Real Application Clusters. A component of the Oracle database that allows a database
to be installed across multiple servers and to run any packaged or customized
application software without any modification.
RAID Redundant Array of Independent Disks.
RPO Recovery Point Objective. A service switchover policy that minimizes data loss.
It takes the data recovery point as the objective and ensures that the data used for the
service switchover is the latest backup data.
RTO Recovery Time Objective. A service switchover policy that minimizes the
switchover time. It takes the recovery time as the objective and ensures that the
standby system can take over services as quickly as possible.

S
SAN Storage Area Network (SAN). An architecture to attach remote computer storage
devices such as disk array controllers, tape libraries and CD arrays to servers in such a
way that to the operating system the devices appear as locally attached devices.
SAS Serial Attached SCSI (SAS). A SCSI interface standard that provides for attaching
HBAs and RAID controllers to both SAS and SATA disk and tape drives, as well as
other SAS devices.
SLB Server Load Balancing.

STP Spanning Tree Protocol (STP). A protocol used in LANs to eliminate loops. On
redundant networks, STP blocks undesirable redundant paths through its algorithms,
pruning a looped network into a loop-free tree.

T
TCO Total Cost of Operation.

V
VIS Virtual Intelligent Storage.
