Solution
V100R002C10
Issue 02
Date 2016-01-06
HUAWEI and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://e.huawei.com
Contents
3 Configuration Guide
3.1 Configuration Process
3.2 Configuration Preparations
3.3 Configuration Planning
3.4 Configuring Switches
3.4.1 Configuring Ethernet Switches
3.4.2 Configuring Fibre Channel Switches
3.5 Configuring Load Balancers
3.5.1 Configuring GSLBs
3.5.2 Configuring Local Load Balancers
3.6 Configuring Middleware
3.7 Configuring the VIS6600T Cluster
3.8 Configuring Storage Arrays
3.9 Configuring the VIS6600T
3.9.1 Configuring Storage Virtualization
3.9.2 Configuring a Mirror
3.9.3 Configuring Quorum Policies
3.9.4 Configuring Active-Active Storage
3.10 Configuring UltraPath
3.11 Configuring ReplicationDirector
3.11.1 Configuring ReplicationDirector (4-Node VIS6600T Cluster)
3.11.2 Configuring ReplicationDirector (8-Node VIS6600T Cluster)
3.12 Checking the Configuration Result
A Appendix
A.1 Product Introduction
A.1.1 Storage Arrays (OceanStor 18000 Series)
A.1.2 Storage Arrays (OceanStor V3 Converged Storage Systems)
Overview
HUAWEI Business Continuity and Disaster Recovery Solution includes four sub-solutions:
Disaster Recovery Data Center Solution (Active-Active Mode), High-Availability (HA)
Solution, Disaster Recovery Data Center Solution (Active-Passive Mode), and Disaster
Recovery Data Center Solution (Geo-Redundant Mode). This document describes the
positioning and characteristics of the Disaster Recovery Data Center Solution (Active-Active
Mode), as well as its configuration process and steps.
Intended Audience
This document is intended for:
l Technical support engineers
l Maintenance engineers
Symbol Conventions
The symbols that may be found in this document are defined as follows:
Change History
Changes between document issues are cumulative. The latest document issue contains all the
changes in earlier issues.
Issue 02 (2016-01-06)
This issue is the second official release.
Issue 01 (2015-06-30)
This issue is the first official release.
2 Solution Description
This chapter describes the positioning, highlights, and involved products of the Disaster
Recovery Data Center Solution (Active-Active Mode).
2.1 Positioning
The Disaster Recovery Data Center Solution (Active-Active Mode) is an end-to-end solution
that enables storage systems, applications, and networks to work in active-active mode,
ensuring zero data loss and ongoing services.
2.2 Network Diagrams
This section describes the network diagrams of the Disaster Recovery Data Center Solution
(Active-Active Mode) and how active-active is achieved on the storage layer, application
layer, and network layer.
2.3 Highlights
The Disaster Recovery Data Center Solution (Active-Active Mode) features robust reliability,
wide compatibility, and flexible scalability.
2.1 Positioning
The Disaster Recovery Data Center Solution (Active-Active Mode) is an end-to-end solution
that enables storage systems, applications, and networks to work in active-active mode,
ensuring zero data loss and ongoing services.
As enterprise services grow, service breakdown exerts increasing negative impact on
corporate image and operations. Enterprises have more demanding requirements for business
continuity and 24/7 availability of critical services.
According to statistics, traditional data center disaster recovery (DR) solutions have the
following problems:
l Long recovery periods with a certain amount of data loss
l Long service downtime due to manual service switchover upon faults
l Low resource utilization but high total cost of ownership (TCO)
Constructing active-active DR systems has become a trend in the medical, social security,
finance, and government sectors, which gives rise to the Disaster Recovery Data Center
Solution (Active-Active Mode). This solution has two data centers that work in active-active
mode and provide services at the same time, improving the service capability and resource
utilization of data centers. The two data centers serve as backup for each other. When either
data center fails, services are failed over to the other to ensure business continuity.
This solution achieves active-active operation at the storage layer, application layer, and
network layer, eliminating single points of failure and ensuring business continuity.
Figure 2-1 Network diagram of the Disaster Recovery Data Center Solution (Active-Active
Mode)
The figure shows the network layer (GSLBs at the two sites interconnected over WDM), the
application layer (FusionSphere/Oracle), and the storage layer (VIS6600T cluster).
The Disaster Recovery Data Center Solution (Active-Active Mode) is an end-to-end solution
that covers the network layer, application layer, and storage layer.
l Network layer
The global server load balancer (GSLB) balances loads between data centers based on
the latency and service traffic. The server load balancer (SLB) balances loads between
application servers in data centers. IP and Fibre Channel networks between data centers
can communicate with each other. When one data center breaks down, its services are
automatically switched to another data center for failover, ensuring business continuity.
l Application layer
Host clusters, database clusters, and application clusters work at the same time on the
two data centers and serve as backup for each other. When one data center breaks down,
its services are automatically switched to another data center for failover, ensuring
business continuity.
NOTICE
If the two data centers are interconnected using a wavelength division multiplexing (WDM)
device in 1+1 link protection mode, a switchover between the active link and standby link will
have the following impacts upon the breakdown of the active link:
l Fibre Channel links are interrupted unexpectedly.
l When Fibre Channel switches are cascaded, link recovery takes about 15 seconds. As a
result, the VIS6600T cluster heartbeat times out and arbitration starts.
l After the link switchover is completed, restore the two data centers in the system to active-
active status.
l Services are not affected, but system performance deteriorates until the system is recovered.
2.3 Highlights
The Disaster Recovery Data Center Solution (Active-Active Mode) features robust reliability,
wide compatibility, and flexible scalability.
l Robust reliability
The active-active architecture ensures zero data loss and zero service interruption (RPO = 0,
RTO = 0) when one data center breaks down.
l Wide compatibility
The VIS6600T is widely compatible with storage systems from EMC, IBM, HDS, HP,
and Sun, making full use of storage resources while protecting the existing investment.
l Flexible scalability
The solution integrates value-added features such as remote replication and can be
smoothly upgraded to the Disaster Recovery Data Center Solution (Geo-Redundant
Mode), enhancing the DR capability.
3 Configuration Guide
This chapter describes the prerequisites, configuration process, and detailed configuration steps of
the core devices involved in the Disaster Recovery Data Center Solution (Active-Active
Mode).
An enterprise uses the Disaster Recovery Data Center Solution (Active-Active Mode). The
distance between the two data centers is shorter than 25 km. Figure 3-1 shows the typical
network.
Figure 3-1 Typical network of the Disaster Recovery Data Center Solution (Active-Active
Mode)
The figure shows routers and F5 Big-IP devices at the network layer, L2800 load balancers,
core switches in iStack stacks, an Oracle RAC cluster at the application layer, and the
VIS6600T cluster at the storage layer.
Table 3-1 describes the components and their functions in the Disaster Recovery Data Center
Solution (Active-Active Mode).
Table 3-1 Components and their functions in the Disaster Recovery Data Center Solution
(Active-Active Mode)
Component | Product | Description
Local load balancer | L2800 | Two local load balancers (active and standby) are deployed in each data center for service load balancing within the data center.
Application server | Oracle RAC 11g R2 | Oracle RAC servers form an Oracle RAC cluster. When an application server fails, its services are automatically switched to another application server.
Fibre Channel switch | SNS2248 | Two Fibre Channel switches are deployed in each data center for cascaded connection across data centers and active-active storage at the storage layer.
NOTE
l This document describes how to configure the core devices (especially the devices at the storage
layer) involved in the Disaster Recovery Data Center Solution (Active-Active Mode). Existing
network infrastructure, upper-layer hosts, and applications must be prepared by users, application
providers, or integrators.
l This document only helps you configure the basic active-active storage architecture and does not
cover users' service systems or data migration. The configuration of users' service systems and
data migration requires independent professional services or can be completed by users.
l In this solution, the OceanStor V3 series can be used. For different models of storage systems in the
same series, the configurations are the same. The configuration guide uses the OceanStor 6800 V3
as an example to describe how to configure the Disaster Recovery Data Center Solution (Active-
Active Mode).
l This document uses the typical network in Figure 3-1 as an example. Plan and configure the
application layer based on actual services. The configuration methods used at the storage layer and
network layer are similar to the methods described in this guide.
l For the device connection diagram, see the Networking Assistant.
3.2 Configuration Preparations
This section describes prerequisites and documents that you must prepare before you
configure the Disaster Recovery Data Center Solution (Active-Active Mode).
3.3 Configuration Planning
This section describes items that you must plan before the configuration, including the service
IP addresses of L2800 devices and application servers, storage arrays, and volumes of the
VIS6600T cluster.
3.4 Configuring Switches
This section describes the configuration requirements of core switches and access switches at
the network layer and the configuration procedure of Fibre Channel switches at the storage
layer. Ethernet switches serve as the core switches and access switches at the network layer
and Fibre Channel switches are used at the storage layer.
3.5 Configuring Load Balancers
This section describes the configuration requirements for load balancing between data centers
and how to configure service load balancers in a data center.
3.6 Configuring Middleware
This section describes middleware configuration requirements. The middleware is used to
connect applications to databases. Apache, Oracle WebLogic, and IBM WebSphere
Application Server (WAS) are all middleware.
3.7 Configuring the VIS6600T Cluster
Two VIS6600T devices form a 4-node cluster. To ensure proper working of the cluster, you
must set the cluster heartbeat mode to external heartbeat mode.
3.8 Configuring Storage Arrays
Storage array configuration includes creating a LUN/LUN group, a host/host group, and a
mapping relationship between storage arrays and the VIS6600T cluster. In this way, the
VIS6600T cluster can centrally manage storage resources.
3.9 Configuring the VIS6600T
This section describes how to configure quorum policies, storage virtualization, mirroring,
and active-active storage. After the configuration, a mirroring relationship is established
between storage arrays and two data centers that undertake services concurrently. When the
heartbeat is interrupted between two VIS6600T devices, services can be quickly recovered
based on the quorum policy that has been configured.
3.10 Configuring UltraPath
This section describes how to configure the UltraPath software. In the Disaster Recovery Data
Center Solution (Active-Active Mode), the UltraPath software is configured to improve the
I/O processing efficiency and reduce the access latency.
3.11 Configuring ReplicationDirector
This section describes how to configure the OceanStor ReplicationDirector disaster recovery
management software for unified management of the Disaster Recovery Data Center Solution
(Active-Active Mode).
3.12 Checking the Configuration Result
This section describes how to verify the configuration of the Disaster Recovery Data Center
Solution (Active-Active Mode). After configuring the solution, you must verify that read and
write operations are performed based on the path that has been designed to ensure that the
solution can work properly.
3.1 Configuration Process
Figure 3-2 shows the configuration process of the Disaster Recovery Data Center Solution
(Active-Active Mode).
Figure 3-2 Configuration process of the Disaster Recovery Data Center Solution (Active-
Active Mode)
The flowchart shows the following sequence:
1. Complete the configuration preparations.
2. Configure the VIS6600T cluster.
3. Configure storage arrays.
4. Configure the VIS6600T: configure storage virtualization, configure a mirror, configure a
quorum policy, and configure active-active storage.
5. Configure ReplicationDirector (for a 4-node or an 8-node VIS6600T cluster) as the unified
disaster recovery management platform.
6. Check the configuration result to verify the configuration.
Table 3-2 describes the configuration process of the Disaster Recovery Data Center Solution
(Active-Active Mode).
Table 3-2 Configuration process of the Disaster Recovery Data Center Solution (Active-
Active Mode)
No. | Configuration Procedure | Description | Operation Location
3.2 Configuration Preparations
Prerequisites
Prerequisites for configuring the Disaster Recovery Data Center Solution (Active-Active
Mode) are as follows:
l All products have been installed.
– Hardware and software installation of all products has been completed.
– Products have been connected based on the low-level design (LLD) of the solution.
– The required licenses have been applied for and installed based on the license operation
guide.
– The UltraPath software and ReplicationDirector Agent have been installed on
application hosts, and the agent service has been started.
– OceanStor ReplicationDirector Server has been installed on the management server
and can be accessed by devices through their management network ports.
l The IP/Fibre Channel network is working properly between the two data centers.
l The quorum LUN has been deployed at the third-party quorum site and mapped to the
VIS6600T cluster.
l The compatibility between the VIS6600T and the heterogeneous storage array has been
checked.
NOTE
For details about the models of heterogeneous storage arrays that are compatible with the
VIS6600T, see the Huawei OceanStor Virtual Intelligent Storage Interoperability Matrix. Log in
to http://enterprise.huawei.com. In the search box, enter a document name and click the search
button. Select Collateral to download the document.
Table 3-3 Related documents
Document | Related Section | How to Obtain
OceanStor 5300 V3&5500 V3&5600 V3&5800 V3&6800 V3&6900 V3 Storage System V300R002 Basic Storage Service Guide | 3.8 Configuring Storage Arrays | Choose Support > Product Support > IT > Storage > Disk Storage > V3 Series Unified Storage > OceanStor 6800 V3 to download the document. NOTE: In this document, the V3 series is used as an example. To obtain documentation of the OceanStor 18000/T series storage arrays, choose Support > Product Support > IT > Storage > Disk Storage.
OceanStor VIS6600T V200R003 Product Documentation | 3.9 Configuring the VIS6600T | Choose Support > Product Support > IT > Storage > Disk Storage > VIS Virtual Storage > OceanStor VIS6600T to download the document.
OceanStor UltraPath for Linux V100R008C00 User Guide | 3.10 Configuring UltraPath | Choose Support > Product Support > IT > Storage > Storage Software > Storage Management Software > UltraPath to download the document.
L2800 Load Balancer V100R001C00 Product Documentation | 3.5.2 Configuring Local Load Balancers | Choose Support > Product Support > IT > Server > APP Server > L2800 to download the document.
NOTE
This document uses the product versions in Table 3-3 as examples. If the actual product versions are
different from those in this document, obtain the corresponding documentation and complete the
configuration accordingly.
3.3 Configuration Planning
L2800 Planning
Table 3-4 describes the service IP address planning of L2800 devices.
Application Server | Service IP Address | Port
App-Server01 | 192.168.10.101/24 | 80
App-Server02 | 192.168.10.102/24 | 80
App-Server03 | 192.168.10.103/24 | 80
App-Server04 | 192.168.10.104/24 | 80
App-Server05 | 192.168.10.105/24 | 80
Table 3-6 describes the storage array planning for application servers. The storage planning is
the same on all application servers.
Disk quantity | 10 | -
Network Planning
Figure 3-3 shows the network planning in and between the two data centers in the Disaster
Recovery Data Center Solution (Active-Active Mode).
The figure shows the two data centers connected to a WAN through egress routers. Core
switches in CSS stacks are cascaded across the data centers, and access switches form iStack
stacks within each data center.
3.4.1 Configuring Ethernet Switches
Configuration Requirements
The Ethernet switch configuration requirements are as follows:
l Core switches are cascaded to enable L2 interconnection between the two data centers.
– Cascading configuration for the two data centers with a distance shorter than 25 km
Single-mode optical fibers are used to cascade core switches. Single-mode optical
modules must be configured on the core switches for long-distance transmission.
This section uses this configuration mode as an example.
NOTE
Ensure that the switches support single-mode optical modules of the matching model and that the
maximum distance supported by the optical modules is greater than the actual transmission
distance.
– Cascading configuration for the two data centers with a distance equal to or larger
than 25 km
Dense wavelength division multiplexing (DWDM) devices are used for
interconnection.
l Virtual local area networks (VLANs) are used to isolate different services.
l Core switches are stacked using the Cluster Switch System (CSS) to form a loop-free Ethernet network.
NOTE
In this solution, you are advised to use HUAWEI CloudEngine 12800 series and 6800 series as core
switches and access switches respectively.
Based on the preceding configuration requirements, see the corresponding documents to configure the
Ethernet switches. Log in to http://support.huawei.com and obtain the related documents from the
following paths:
l CE12800: Support > Product Support > Fixed Network > Carrier IP > Switch&Security >
Carrier Switch > CloudEngine 12800
l CE6800: Support > Product Support > Fixed Network > Carrier IP > Switch&Security >
Carrier Switch > CloudEngine 6800
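For illustration only, the following is a minimal sketch of the VLAN isolation requirement on a
CloudEngine core switch. The VLAN IDs and interface name are assumptions, not values from this
solution's LLD; verify the commands against the CloudEngine documentation for your software version.
<HUAWEI> system-view
[~HUAWEI] vlan batch 101 102 //Example service VLANs (assumed IDs).
[~HUAWEI] interface 10GE1/0/1 //Example port; use the actual downlink port.
[~HUAWEI-10GE1/0/1] port link-type trunk
[~HUAWEI-10GE1/0/1] port trunk allow-pass vlan 101 102
[~HUAWEI-10GE1/0/1] commit //CloudEngine switches use two-stage configuration.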
3.4.2 Configuring Fibre Channel Switches
Prerequisites
The accounts and passwords for logging in to Fibre Channel switches have been obtained.
Context
Fibre Channel switches are cascaded across data centers, and zones are created so that the
VIS6600T cluster nodes can communicate with the storage arrays and the application servers
can communicate with the VIS6600T nodes over redundant links, ensuring ongoing services
in the event of a single point of failure.
l Cascaded connection across data centers
Four Fibre Channel switches are cascaded across active-active data centers, building a
foundation for the mirroring relationship between storage arrays in the two data centers.
When Fibre Channel switches are cascaded, domain IDs must be set to prevent ID
conflicts on the network. A domain ID is the unique identifier of a Fibre Channel switch in a
fabric. Table 3-10 describes the domain ID planning.
l Zone division
Only the ports and devices in the same zone can communicate with each other. On two Fibre
Channel switches that are cascaded, the ports of each link form a zone. For details, see
A.2 Zone Division for Fibre Channel Switches.
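As a hedged illustration only: the SNS Fibre Channel switches are Brocade-based, so port zones can
typically be created from the switch CLI as sketched below. The zone and configuration names and the
"Domain ID,Port" members are hypothetical examples; the authoritative procedure is in the FC switch
product documentation.
zonecreate "VIS_to_Array_A", "1,0; 2,0" //Members are "Domain ID,Port" pairs (example values).
cfgcreate "DC_AA_CFG", "VIS_to_Array_A" //Group the zones into a zone configuration.
cfgenable "DC_AA_CFG" //Activate the zone configuration on the fabric.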
Procedure
Step 1 Log in to Fibre Channel switch 1.
For more information, see the contents about zone management in the OceanStor
SNS2124&SNS2224&SNS2248&SNS3096&SNS5192&SNS5384 FC Switch V100R002C01 Product
Documentation.
You must set the mode of each port used in the Disaster Recovery Data Center Solution
(Active-Active Mode) to long-distance mode.
For the LS option, you can enter a desired buffer value in Buffer Needed. When changing the
buffer value, you cannot change the values in Frame Size and Desired Distance.
5. Double-click Desired Distance (km) and enter a desired distance.
The distance must match the port transfer rate. The matching principles are as follows:
– When the speed is 8 Gbit/s, the value of the distance (km) ranges from 10 to 63.
– When the speed is 4 Gbit/s, the value of the distance (km) ranges from 10 to 125.
– When the speed is 2 Gbit/s, the value of the distance (km) ranges from 10 to 250.
– When the speed is 1 Gbit/s, the value of the distance (km) ranges from 10 to 500.
6. After configuring the Long Distance and Desired Distance (km) parameters of all
ports, click Apply.
7. Click Yes to confirm the modification.
Step 5 Log in to Fibre Channel switch 3 and perform operations from Step 2 to Step 4.
----End
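For reference, a hedged CLI equivalent of the long-distance setting on Brocade-based SNS switches is
sketched below. The port number, distance level, and exact syntax are assumptions that must be checked
against the FOS version running on your switches.
portcfgshow 0 //Check the current settings of port 0 (example port).
portcfglongdistance 0 LS 1 25 //LS mode with a desired distance of 25 km (example values).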
3.5.1 Configuring GSLBs
One F5 GTM GSLB is deployed in each data center. The two GSLBs are configured in active-
standby mode for service load balancing between the two data centers.
Configuration requirements:
l The active F5 GTM GSLB must be connected to the standby F5 GTM GSLB in one-
armed mode.
l The active and standby F5 GTM GSLBs are reachable on an L3 network. There is no
need to use heartbeat cables to connect them.
l The standby F5 GTM GSLB detects the status of the active F5 GTM GSLB through
heartbeat. When the active F5 GTM GSLB fails, the standby F5 GTM GSLB
becomes active. The switchover time must be at the millisecond level.
l When the active F5 GTM GSLB fails, the upper-level DNS automatically selects the
standby F5 GTM GSLB in round robin mode.
l An F5 GTM GSLB can be connected to or used to replace the DNS in a data center.
When the GSLB is used to replace the DNS, the wide IP (WIP) is configured as the
application domain name.
l F5 GTM GSLBs can flexibly allocate services to the two data centers based on the DNS
address, region, and egress bandwidth of a wide area network (WAN).
NOTE
Based on the preceding configuration requirements, see F5 GTM GSLB documents to configure the
GSLBs.
Go to http://www.f5.com to obtain the F5 Big-IP GSLB documents.
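As a hedged illustration of the expected behavior (not an F5-documented procedure), the GSLB-based
distribution can be spot-checked from any client with standard DNS tools; the domain name below is
hypothetical.
nslookup app.example.com    # Hypothetical wide IP domain name.
# Repeated queries should return the service addresses published by data center A
# and data center B according to the configured balancing policy.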
3.5.2 Configuring Local Load Balancers
Prerequisites
The account and password for logging in to the L2800 Load Balancer Management System
(LBMS) have been obtained.
Context
Two L2800 Software Load Balancers (SLBs) in each data center are deployed in active-standby
mode for service load balancing in the data center. When configuring the two L2800 SLBs in
each data center, configure the active L2800 SLB first and then synchronize the configuration
to the standby L2800 SLB.
The configuration method is the same in the two data centers. This section uses one data
center as an example to describe how to configure local load balancers.
Procedure
Step 1 Log in to the L2800 LBMS.
The server to be added is a web server, such as IBM HTTP Server (IHS) or Apache.
3. Enter the name, service IP address, and port number of the web server.
4. Click Submit.
Select real servers in Available Servers and click > to add them to Enabled Servers.
6. Click Submit.
The name of the resource has been configured in Step 3. In Pattern, enter * * * *. In
Scheduler, select Round Robin. For details about parameter meanings, see the L2800
product documentation.
6. Click Submit.
Step 5 Synchronize configuration between the active SLB and standby SLB.
1. Choose Configuration > Synchronization.
2. Enter the management IP addresses of the active L2800 and standby L2800.
3. Click Submit.
Synchronize the configuration result to the background system of the SLB.
4. Click Sync.
Synchronize the configuration information about the application server, resource pool,
and virtual service to the standby SLB.
----End
3.6 Configuring Middleware
Introduction to Middleware
Figure 3-6 shows the middleware positions.
The figure shows web servers (WEB Server) in front of application servers (APP Server) in
each data center.
The types of web servers include the IBM HTTP Server (IHS) and Apache. The types of
application servers include IBM WAS and Oracle WebLogic. Active-active web servers are
available in the following two ways:
l IHS cooperates with WAS to make active-active web servers available.
The Hypertext Transfer Protocol (HTTP) requests from IHS to WAS are balanced. WAS
is used to deploy J2EE applications. It provides an elaborate environment for deploying
application programs. Comprehensive application program services and functions are
provided, covering transaction management, security, cluster, performance, availability,
connectivity, and scalability.
l Apache cooperates with Oracle WebLogic to make active-active middleware available.
Apache can work on almost all widely used computer platforms and is one of the most
widely used web server programs worldwide. For an Oracle WebLogic cluster, a general
control end (AdminServer) is configured. The HTTP requests are balanced between
WebLogic nodes in random or round robin mode.
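To make the Apache-to-WebLogic load balancing concrete, here is a minimal sketch using Apache's
generic mod_proxy_balancer mechanism. This is an assumption for illustration, not the solution's
mandated method; the file path, context path, and member addresses (reusing the example service IPs
with WebLogic's default port 7001) are hypothetical.
cat >> /etc/httpd/conf.d/weblogic_balancer.conf <<'EOF'
# Requires mod_proxy, mod_proxy_http, mod_proxy_balancer, and mod_lbmethod_byrequests.
<Proxy "balancer://wls_cluster">
    BalancerMember "http://192.168.10.101:7001"
    BalancerMember "http://192.168.10.102:7001"
</Proxy>
ProxyPass "/app" "balancer://wls_cluster"
EOF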
Configuration Requirements
l Serving as cluster resource pools, the web servers in two data centers are connected to
the L2800 devices to achieve load balancing between the web servers.
l Being configured as a cluster, application servers in two data centers are connected to
web servers to achieve load balancing between the application servers.
l Web servers between two data centers serve as backup resource pools for each other.
NOTE
Based on the preceding configuration requirements, see the corresponding documents to configure the
middleware.
Obtain the related documents from the following paths:
l Apache: http://www.apache.org
l Oracle WebLogic: http://www.oracle.com
l IBM WAS and IHS: http://www.ibm.com
3.7 Configuring the VIS6600T Cluster
Prerequisites
The account and password for logging in to the VIS6600T have been obtained and you have
successfully logged in to the VIS6600T.
Context
The VIS6600T devices are connected using an Ethernet switch. The VIS6600T heartbeat
ports are connected to the Ethernet switch and belong to the same VLAN.
After the VIS6600T cluster hardware cables are connected, the heartbeat mode is internal
heartbeat mode by default. You must change the mode to external heartbeat mode.
Procedure
Step 1 Check the items in Table 3-11.
Before changing the heartbeat mode of the VIS6600T cluster, ensure that the items in Table
3-11 are correct. If an item fails to pass the check, check hardware cable connections and
switch port configuration to ensure that all items pass the check. Otherwise, the VIS6600T
cluster configuration fails.
No. | Check Item | Check Method | Expected Result
2 | The Spanning Tree Protocol (STP) function of the VIS6600T cluster's Ethernet switch is disabled. | On the heartbeat GE switch, run the display current-configuration command to check whether the value of the STP function parameter is disable. If the STP function is enabled, set it to disable. | The value of the STP function parameter is disable.
6 | The versions of the two VIS6600T devices are the same. | Log in to the CLI of each VIS6600T device and run the showcontroller command. The command outputs on the two devices must be the same; otherwise, upgrade the two VIS6600T devices to the same version. | The versions of the two VIS6600T devices are the same.
7 | The domain names of the two VIS6600T devices are the same. | Log in to each VIS6600T device and run the showdomain command. If the domain names are different, see Step 2 to open the Heartbeat Mode and Domain Management dialog box and modify the domain names. | The domain names of the two VIS6600T devices are the same.
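The commands listed in Table 3-11 can be run in sequence as shown below; output filtering with
"| include" is standard on Huawei VRP switches, and showcontroller and showdomain are the VIS6600T
CLI commands from the table.
display current-configuration | include stp //On the heartbeat GE switch: the STP function must be disable.
showcontroller //On each VIS6600T: version information must match.
showdomain //On each VIS6600T: domain names must match.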
Step 2 On the two VIS6600T devices, change the heartbeat mode to external heartbeat mode.
1. In the navigation tree, select the Settings node.
2. Click Heartbeat Mode and Domain Management under Advanced in the function
pane.
The Heartbeat Mode and Domain Management dialog box is displayed.
3. Set Heartbeat Mode to External Heartbeat.
4. Click OK.
The 4-node VIS6600T cluster is set up successfully.
----End
3.8 Configuring Storage Arrays
Context
The storage arrays in the two data centers must be configured. OceanStor 6800 V3 is used as
an example to describe its basic configuration. For details about how to configure other
storage arrays based on the storage configuration planning, see the corresponding documents.
NOTE
For details about how to configure a heterogeneous storage array, see the related operation guide.
Figure 3-7 shows the configuration process and operation portal for configuring storage arrays
in DeviceManager.
Figure 3-7 Storage array configuration process and operation portal in DeviceManager
The process includes creating a LUN (step 3), creating a host (step 5), and creating a mapping
view (step 7) in DeviceManager.
Procedure
Step 1 Create a disk domain.
NOTE
For details, see the OceanStor 5300 V3&5500 V3&5600 V3&5800 V3&6800 V3&6900 V3 Storage
System V300R002 Basic Storage Service Guide. For example, open the documentation and search for
"Creating a Disk Domain" to find the detailed operation procedure.
----End
Follow-up Procedure
Log in to the device management software of the storage array in data center B and see the
corresponding product documentation to complete all planned configurations.
NOTICE
l The storage array configuration takes effect after logical disks are scanned on the
OceanStor ISM of the VIS6600T.
l After the storage array configuration is adjusted, for example, when the owning controller
of a LUN is changed or a new LUN is added, logical disks must be scanned on the
OceanStor ISM of the VIS6600T for the configuration to take effect. Otherwise,
system performance may be adversely affected by the original configuration.
3.9 Configuring the VIS6600T
This section describes how to configure quorum policies, storage virtualization, mirroring,
and active-active storage. When the heartbeat is interrupted between two VIS6600T devices,
services can be quickly recovered based on the quorum policy that has been configured.
Context
The configuration in this section is performed on the VIS6600T in one data center only.
3.9.1 Configuring Storage Virtualization
Context
Storage virtualization means that the VIS6600T masks the differences between
heterogeneous storage arrays and consolidates them into a unified resource pool (logical disk
group) that provides virtual volumes for application hosts.
Figure 3-8 shows the storage virtualization configuration process and operation portal on the
ISM interface.
Figure 3-8 Storage virtualization configuration process and operation portal on the ISM
interface
The process includes scanning for logical disks and creating a host (step 6), each with its
operation portal on the ISM interface.
Procedure
Step 1 Scan for logical disks.
After you scan for logical disks, the displayed logical disks are the LUNs created on the
storage array.
The names of logical disks are automatically generated. You can use the unique WWN of a
LUN to determine the logical disk that corresponds to the LUN. Figure 3-9 and Figure 3-10
show a relationship between a logical disk and a LUN: huawei-hvs85t3_66=LUN_data,
huawei-hvs85t3_64=LUN_DCO, huawei-hvs85t3_65=LUN_FD.
NOTE
For details about how to configure storage virtualization, see the OceanStor VIS6600T V200R003 Product
Documentation. For example, open the documentation and search for "Scanning for Logical Disks" to find
the detailed operation procedure.
----End
3.9.2 Configuring a Mirror
Context
A mirror maintains a copy of a source volume's data on a mirror volume. The mirroring
ensures that data is not lost upon a failure of the source volume or mirror, improving data
integrity and reliability.
Procedure
Step 1 Create a mirror.
1. Figure 3-12 shows the operation portal for creating a mirror.
NOTE
For details, see the OceanStor VIS6600T V200R003 Product Documentation. For example, open the
documentation and search for "Creating a Mirror" to find the detailed operation procedure.
----End
3.9.3 Configuring Quorum Policies
Procedure
Step 1 Go to the operation portal for configuring quorum policies.
NOTE
For details, see the OceanStor VIS6600T V200R003 Product Documentation. For example, open the
documentation and search for "Initially Configuring a Storage System" to find the detailed operation
procedure.
----End
3.9.4 Configuring Active-Active Storage
Procedure
Step 1 Go to the Active-Active Storage Management portal.
Figure 3-14 shows the location of Active-Active Storage Management.
NOTE
In the Disaster Recovery Data Center Solution (Active-Active Mode), the two data centers are deployed
as site A and site B.
DC_A | Select the IDs of two VIS6600T controllers (with the same serial number) in data center A based on the IP addresses that are allocated to the VIS6600T controllers. | Yes | Yes | Select the storage array in data center A based on its WWN. Obtain the WWN of the storage array from the DeviceManager home page.
NOTE
For details, see the OceanStor VIS6600T V200R003 Product Documentation. For example, open the
documentation and search for "Configuring Active-Active Storage" to find the detailed operation
procedure.
----End
3.10 Configuring UltraPath
Prerequisites
l The account and password for logging in to UltraPath have been obtained and you have
successfully logged in to UltraPath.
l No multipathing software is installed on application hosts.
Linux has its own multipathing software. Before installing UltraPath, uninstall the multipathing
software or disable its process. For details about how to uninstall the software or disable its
process, see the documentation of the operating system. The following example shows how to
disable the multipathing service process:
l Versions earlier than Red Hat 7.0 and CentOS 7.0: Run the chkconfig --level 35 multipathd
off command to disable the process and restart the system to enable the setting to take effect.
l Red Hat 7.0 and CentOS 7.0: Run the systemctl disable multipathd.service command to
disable the process and restart the system to enable the setting to take effect.
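To confirm that the native multipathing service stays disabled after a restart, the following standard
Red Hat/CentOS status checks can be used (treat the exact output as version-dependent):
chkconfig --list multipathd              # Versions earlier than 7.0: all runlevels should show "off".
systemctl is-enabled multipathd.service  # 7.0 and later: the expected output is "disabled".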
Context
In UltraPath, the VIS6600T node in the local data center is set to be preferentially accessed.
Services preferentially use the local VIS6600T node to process I/Os. When the local
VIS6600T node fails, services will use the VIS6600T node in the other data center. In this
way, service response efficiency is improved and access latency is reduced.
NOTE
This section uses a Linux-based application server to describe how to configure UltraPath. For other
operating systems, see the corresponding user guide based on your operating system.
l OceanStor UltraPath for Linux V100R008C00 User Guide
l OceanStor UltraPath for AIX V100R008C00 User Guide
l OceanStor UltraPath for Solaris V100R008C00 User Guide
l OceanStor UltraPath for vSphere V100R008C00 User Guide
l OceanStor UltraPath for Windows V100R008C00 User Guide
Procedure
Step 1 Set the controllers of the VIS6600T in the local data center as local controllers and the
controllers of the VIS6600T in the other data center as remote controllers.
Configuration instance
Assume that the ID of the VIS6600T in the data center where the current application servers
reside is 0 and its controller IDs are 0 and 1. In the other data center, the VIS6600T ID is 1
and its controller IDs are 2 and 3. Set controllers 0 and 1 as local controllers and controllers 2
and 3 as remote controllers.
Run the upadmin command to log in to the CLI, and then run the following commands:
UltraPath CLI #1 > set remote_controller array_id=0 tpg_id=0,1 local //Set controllers 0 and 1 to local controllers.
UltraPath CLI #1 > set remote_controller array_id=1 tpg_id=2,3 remote //Set controllers 2 and 3 to remote controllers.
Step 2 Set the load balancing mode to load balancing among controllers.
UltraPath CLI #1 > set workingmode=0
----End
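A hedged verification step, assuming the show upconfig query available in the UltraPath for Linux CLI
(check your UltraPath version's user guide for the exact command name and output fields):
UltraPath CLI #1 > show upconfig //Verify that the working mode and local/remote controller (TPG) settings match the plan.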
3.11.1 Configuring ReplicationDirector (4-Node VIS6600T Cluster)
Prerequisites
The account and password for logging in to ReplicationDirector have been obtained and you
have logged in to ReplicationDirector.
Context
Figure 3-15 shows the procedure and operation portal for configuring ReplicationDirector for
a 4-node VIS6600T cluster.
Figure 3-15 Procedure and operation portal for configuring ReplicationDirector for a 4-node
VIS6600T cluster
The process consists of three steps: (1) discover resources, (2) create a site, and (3) create a
protected group.
Procedure
Step 1 Discover resources.
Resource discovery includes discovering storage devices and hosts. Table 3-15 shows the
objects that must be added to ReplicationDirector.
Object | Items to Add | How to Add
Storage | Storage array in data center A; storage array in data center B; VIS6600T in data center A; VIS6600T in data center B | Obtain the IP addresses, user names, and passwords used to access the devices and add the storage.
Host | Oracle RAC nodes | Obtain the IP addresses, user names, and passwords used to access the Oracle RAC nodes. NOTE: If VMware is used at the application layer, add the vCenter server.
NOTE
For details about how to configure ReplicationDirector for a 4-node VIS6600T cluster, see the
OceanStor ReplicationDirector V100R003C10 Product Documentation. For example, open the
documentation and search for "Discovering Resources" to find the detailed operation procedure.
Create a local site for the Disaster Recovery Data Center Solution (Active-Active Mode). The
resources that must be added to the local site include all devices in Step 1.
You can add multiple protected objects to the same protected group based on your service
type and set a unified protection policy to protect the objects. In this example, an Oracle
protected group is created.
----End
3.11.2 Configuring ReplicationDirector (8-Node VIS6600T Cluster)
Prerequisites
The two 4-node VIS6600T clusters have been configured and the configurations are the same.
Context
When configuring the Disaster Recovery Data Center Solution (Active-Active Mode) for an
8-node VIS6600T cluster in ReplicationDirector, you can configure and associate two 4-node
VIS6600T clusters.
Procedure
Step 2 In the navigation tree, choose Site Management > All Sites.
Step 3 In the right function pane, select the site that has been created.
Step 5 Select the added VIS6600T cluster and click Configure Cluster.
After the two 4-node VIS6600T clusters are associated, the configuration of the 8-node
VIS6600T cluster is completed.
----End
3.12 Checking the Configuration Result
Prerequisites
l The OceanStor Toolkit (Toolkit for short) has been obtained.
l The IOmeter test tool has been obtained and installed on the test server.
Context
If the storage arrays in the two data centers have the same I/O operations after the
configuration, the configuration is correct.
Figure 3-16 shows the process of verifying the configuration.
The process is as follows: use Toolkit to inspect the VIS6600T cluster, deliver I/Os, and then
check the I/O operations on the two storage arrays.
Procedure
Step 1 Use Toolkit to inspect the VIS6600T cluster.
If an item does not pass the inspection or an alarm is generated, see the corresponding
document to resolve the problem and ensure that all items pass the inspection and no alarm is
generated.
This section describes how to check I/O operations of LUNs in the OceanStor 6800 V3. For details
about how to check the I/O operations of the heterogeneous storage array, see the related product
document.
If the I/O operations of LUNs are the same on the two storage arrays, the solution is working
properly. Otherwise, check the configuration.
----End
A Appendix
A.1 Product Introduction
A.1.1 Storage Arrays (OceanStor 18000 Series)
The OceanStor 18000 series enterprise storage systems are built to provide customers with an
open storage system featuring solid security, robust reliability, flexible management, and
superb performance.
The OceanStor SNS (SNS for short) Fibre Channel switches combine superior hardware and
software architectures to meet enterprise requirements for robust reliability, outstanding
performance, and high availability.
Figure A-5 and Figure A-6 show the exteriors of the SNS2224/2248 and SNS3096
respectively.
UltraPath improves data transfer reliability, ensures security of paths between an application
server and a storage system, and provides customers with an easy-to-use and highly efficient
path management solution to bring the performance of application hosts and storage systems
into full play, maximizing the return on investment (ROI).
A.2 Zone Division for Fibre Channel Switches
Fibre Channel switch 1 in data center A and Fibre Channel switch 2 in data center B are
cascaded. Fibre Channel switch 3 in data center A and Fibre Channel switch 4 in data center
B are cascaded. On two Fibre Channel switches that are cascaded, the ports of each link form
a zone. For details, see Table A-3 and Table A-4.
a: Zone members are separated by semicolons (;). Members are expressed in the format of
Domain ID of Fibre Channel switch:port Number. For example, 1:0 indicates port 0 on
Fibre Channel switch 1.
A.3 Glossary
I
iSCSI Internet Small Computer Systems Interface (iSCSI). A transport protocol that allows
the SCSI protocol to be carried over a TCP-based IP network, standardized by the
Internet Engineering Task Force and described in an RFC.
ISL Inter-Switch Link.
ISM Integrated Storage Manager.
L
LLD Low Level Design (LLD). The activities of further detailing the design content,
providing data configuration rules, and guiding data planning according to the
network topology solution, signaling channel routing solution, and service
implementation solution developed at the network planning stage.
LUN Logical Unit Number.
N
NAS Network Attached Storage (NAS). File-level computer data storage connected to a
computer network, providing data access to heterogeneous clients. An NAS server
contains storage devices, such as disk arrays, CD/DVD drives, tape drives, or portable
storage media, and provides an embedded operating system to share files across
platforms. Network storage develops in two directions: SAN and NAS. NAS follows
the tradition of Ethernet data access; its model comes from the network file server.
R
RAC Real Application Clusters. A component of the Oracle database that allows a database
to be installed across multiple servers and to run any packaged or customized
application software without any modification.
RAID Redundant Array of Independent Disks.
RPO Recovery Point Objective. A service switchover policy that ensures the least data loss.
It takes the data recovery point as the objective and ensures that the data used for the
service switchover is the latest backup data.
RTO Recovery Time Objective. A service switchover policy that ensures the shortest
switchover time. It takes the recovery time point as the objective and ensures that the
redundant machine can take over services as quickly as possible.
S
SAN Storage Area Network (SAN). An architecture to attach remote computer storage
devices such as disk array controllers, tape libraries and CD arrays to servers in such a
way that to the operating system the devices appear as locally attached devices.
SAS Serial Attached SCSI (SAS). A SCSI interface standard that provides for attaching
HBAs and RAID controllers to both SAS and SATA disk and tape drives, as well as
other SAS devices.
SLB Server Load Balancing.
STP Spanning Tree Protocol (STP). A protocol used in a LAN to eliminate loops. STP
applies to redundant networks, blocking undesirable redundant paths through certain
algorithms and pruning a looped network into a loop-free tree network.
T
TCO Total Cost of Ownership.
V
VIS Virtual Intelligent Storage.