Hitachi Universal Replicator - Open Systems
TSI0150
Courseware Version 6.1


Notice: This document is for informational purposes only, and does not set forth any warranty, express or
implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems. This
document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data
Systems being in effect, and that may be configuration-dependent, and features that may not be currently
available. Contact your local Hitachi Data Systems sales office for information on feature and product
availability.
Hitachi Data Systems sells and licenses its products subject to certain terms and conditions, including limited
warranties. To see a copy of these terms and conditions prior to purchase or license, please call your local sales
representative to obtain a printed copy. If you purchase or license the product, you are deemed to have
accepted these terms and conditions.
THE INFORMATION CONTAINED IN THIS MANUAL IS DISTRIBUTED ON AN "AS IS" BASIS
WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NONINFRINGEMENT. IN NO EVENT WILL HDS BE LIABLE TO THE END USER OR ANY THIRD PARTY
FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS MANUAL, INCLUDING,
WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, GOODWILL OR LOST DATA,
EVEN IF HDS HAS BEEN EXPRESSLY ADVISED OF SUCH LOSS OR DAMAGE.
Hitachi Data Systems is registered with the U.S. Patent and Trademark Office as a trademark and service
mark of Hitachi, Ltd. The Hitachi Data Systems logotype is a trademark and service mark of Hitachi, Ltd.
The following terms are trademarks or service marks of Hitachi Data Systems Corporation in the United
States and/or other countries:
Hitachi Data Systems Registered Trademarks
Hi-Track, ShadowImage, TrueCopy, Essential NAS Platform, Universal Storage Platform

Hitachi Data Systems Trademarks


HiCard, HiPass, Hi-PER Architecture, HiReturn, Hi-Star, iLAB, NanoCopy, Resource Manager, SplitSecond,
TrueNorth, Universal Star Network

All other trademarks, trade names, and service marks used herein are the rightful property of their respective
owners.
NOTICE:
Notational conventions: 1KB stands for 1,024 bytes, 1MB for 1,024 kilobytes, 1GB for 1,024 megabytes, and 1TB
for 1,024 gigabytes, as is consistent with IEC (International Electrotechnical Commission) standards for
prefixes for binary and metric multiples.
© Hitachi Data Systems Corporation 2013. All Rights Reserved
HDS Academy 1033

Contact Hitachi Data Systems at www.hds.com.



Contents

INTRODUCTION ............................................................................................................. VII


Introductions ..............................................................................................................................vii
Course Description ................................................................................................................... viii
Supported Storage Systems.......................................................................................................ix
Prerequisites ............................................................................................................................... x
Course Objectives.......................................................................................................................xi
Agenda...................................................................................................................................... xiii
Learning Paths..........................................................................................................................xiv
HDS Academy Is on Twitter and LinkedIn .................................................................................xv
Collaborate and Share ..............................................................................................................xvi
Terms and Acronyms............................................................................................................... xvii
Replication Terminology ...........................................................................................................xix

1. OVERVIEW ............................................................................................................. 1-1


Module Objectives ................................................................................................................... 1-1
Purpose.................................................................................................................................... 1-2
Key Features............................................................................................................................ 1-3
Group Associations.................................................................................................................. 1-8
Documents............................................................................................................................. 1-10
Components........................................................................................................................... 1-11
Module Review ...................................................................................................................... 1-12

2. SPECIFICATIONS ..................................................................................................... 2-1


Module Objectives ................................................................................................................... 2-1
Common Storage System Specifications ................................................................................ 2-2
Volume Specifications.............................................................................................................. 2-3
System Option Modes.............................................................................................................. 2-6
Pair Volume Status ................................................................................................................ 2-11
Module Review ...................................................................................................................... 2-16

3. ARCHITECTURE AND INTERNAL OPERATIONS ............................................................ 3-1


Module Objectives ................................................................................................................... 3-1
Bitmap Areas ........................................................................................................................... 3-2
Moving Data Between Storage Systems ................................................................................. 3-7
Journal Volume Structure ......................................................................................................3-16
Performance Considerations and Restrictions ...................................................................... 3-19
Best Practices ........................................................................................................................ 3-21
Journal Cache Specifications ................................................................................................ 3-29
Configuration Planning........................................................................................................... 3-31
Review ................................................................................................................................... 3-33
Module Review ...................................................................................................................... 3-36

4. HITACHI STORAGE NAVIGATOR CONFIGURATION ...................................................... 4-1


Module Objectives ................................................................................................................... 4-1
Preparation Checklist............................................................................................................... 4-2
Preparation .............................................................................................................................. 4-3
Configuration.......................................................................................................................... 4-12
Journal Group Configuration Details...................................................................................... 4-30


Review................................................................................................................................... 4-49
Module Review ..................................................................................................................... 4-50

5. STORAGE NAVIGATOR FOR OPERATIONS ..................................................................5-1


Module Objectives .................................................................................................................. 5-1
Preparation for Operations ...................................................................................................... 5-2
Commands Overview ............................................................................................................. 5-3
paircreate................................................................................................................................ 5-6
Detailed Information ............................................................................................................. 5-10
pairdisplay ............................................................................................................................ 5-13
pairsplit ................................................................................................................................. 5-14
pairresync ............................................................................................................................. 5-16
Change Pair Option .............................................................................................................. 5-18
pairsplit ................................................................................................................................. 5-19
Usage Monitor ...................................................................................................................... 5-22
History................................................................................................................................... 5-27
Options ................................................................................................................................. 5-28
Troubleshooting.................................................................................................................... 5-29
Commands and Status Review ............................................................................................ 5-31
Module Review ..................................................................................................................... 5-32

6. COMMAND CONTROL INTERFACE CONFIGURATION AND OPERATIONS .........................6-1


Module Objectives .................................................................................................................. 6-1
Overview.................................................................................................................................. 6-2
Checklist .................................................................................................................................. 6-6
Configuration .......................................................................................................................... 6-7
Commands ........................................................................................................................... 6-27
Scripted Commands for Disaster Recovery ......................................................................... 6-45
Microsoft Windows Subcommands ...................................................................................... 6-47
Configuration Setting Commands ........................................................................................ 6-51
Troubleshooting.................................................................................................................... 6-53
Module Summary ................................................................................................................. 6-57
Module Review ..................................................................................................................... 6-58

7. DATA PROTECTION CONCEPTS AND PRACTICES ........................................................7-1


Module Objectives .................................................................................................................. 7-1
Bundles.................................................................................................................................... 7-2
Planning Considerations ........................................................................................................ 7-3
Planning Considerations: Rolling Disaster ............................................................................. 7-8
Planning Considerations ........................................................................................................ 7-9
Failover ................................................................................................................................. 7-16
Best Practices....................................................................................................................... 7-22
Module Review ..................................................................................................................... 7-30

8. THREE DATA CENTER OPERATIONS .........................................................................8-1


Module Objectives .................................................................................................................. 8-1
Purpose of 3DC Replication .................................................................................................... 8-2
Configurations ........................................................................................................................ 8-3
Operations ............................................................................................................................ 8-18
Disaster Recovery with 3DC ................................................................................................. 8-20
3DC Disaster Recovery........................................................................................................ 8-25
Module Review ..................................................................................................................... 8-26


9. DELTA RESYNCHRONIZATION .................................................................................. 9-1


Module Objectives .................................................................................................................. 9-1
Concepts .................................................................................................................................. 9-2
Specifications.......................................................................................................................... 9-9
Configuration......................................................................................................................... 9-11
CCI Support for Delta Resync .............................................................................................. 9-21
Specifications........................................................................................................................ 9-22
Sequence of Delta Resync ................................................................................................... 9-23
Configuration......................................................................................................................... 9-26
Sequence of Delta Resync ................................................................................................... 9-28
Review .................................................................................................................................. 9-29
Module Review ..................................................................................................................... 9-30

10. DATA TRANSPORT TECHNOLOGIES ...................................................................... 10-1


Module Objectives ................................................................................................................ 10-1
Supported Link Topologies .................................................................................................... 10-2
Transport Technologies ......................................................................................................... 10-7
SAN Extension to MAN........................................................................................................ 10-10
Optical Flow Control............................................................................................................ 10-15
SAN Extension to WAN ....................................................................................................... 10-19
WAN Technology ................................................................................................................ 10-22
SAN Extension Technology Comparison............................................................................ 10-23
Module Review ................................................................................................................... 10-24

11. HITACHI REPLICATION MANAGER OVERVIEW ........................................................ 11-1


Module Objectives ................................................................................................................ 11-1
Purpose and Benefits............................................................................................................ 11-2
Components.......................................................................................................................... 11-6
Initial Setup ........................................................................................................................... 11-8
Prerequisite Software............................................................................................................ 11-9
Launching ........................................................................................................................... 11-10
Register Information Sources ............................................................................................. 11-12
Refresh Configuration from Information Sources ............................................................... 11-15
Managing Users and User Permissions .............................................................................. 11-17
Sites Views ......................................................................................................................... 11-18
Launching Views................................................................................................................. 11-20
Launching Views................................................................................................................. 11-21
Universal Replicator Operations ......................................................................................... 11-27
TrueCopy Operations.......................................................................................................... 11-37
ShadowImage Replication Operations ............................................................................... 11-41
Copy-on-Write Snapshot /Thin Image ................................................................................ 11-43
Alerts ................................................................................................................................... 11-46
Create Alert Setting Wizard ................................................................................................. 11-50
Setting Up Alerts ................................................................................................................. 11-52
Alert Status ......................................................................................................................... 11-55
Application Replicas............................................................................................................ 11-57
Application Backup and Restore Features ......................................................................... 11-59
Module Review ................................................................................................................... 11-60

12. UNIVERSAL REPLICATOR MXN CONSISTENCY GROUPS ......................................... 12-1


Module Objectives ................................................................................................................ 12-1
Licensing ............................................................................................................................... 12-2
Concepts ............................................................................................................................... 12-3
Managing Extended Consistency Groups ............................................................................ 12-8


Managing Extended Consistency Group............................................................................... 12-9


Managing............................................................................................................................ 12-10
Restrictions......................................................................................................................... 12-15
Module Review ................................................................................................................... 12-16

NEXT STEPS.............................................................................................................. N-1


GLOSSARY ................................................................................................................ G-1
EVALUATING THIS COURSE ........................................................................................ E-1



Introduction
Introductions

Name
Position
Experience
What you expect from the course


Course Description

This 5-day course covers the use of Hitachi Universal Replicator (HUR) and Hitachi TrueCopy in three data center (3DC) configurations, as well as the Universal Replicator Delta Resync function. Details on the use of MxN Consistency (CT) Groups, Remote Replication Data Transport, and Hitachi Replication Manager are also presented.

The classroom sessions are supported by lab exercises where the participants will learn to install, configure, and use HUR, 3DC, 3DC Async, 3DC Delta Resync, and MxN CT Groups on Hitachi storage platforms.


Supported Storage Systems

Hitachi Virtual Storage Platform (VSP)
Hitachi Unified Storage VM (HUS VM)
Hitachi Universal Storage Platform V (USP V)
Hitachi Universal Storage Platform VM (USP VM)
Hitachi Universal Storage Platform (USP), Network Storage Controller


Prerequisites

Prerequisite Coursework (recommended)
• CSI0147 - Hitachi Enterprise Storage Replication
Supplemental Coursework (recommended)
• CCE1879 - Hitachi Data Systems Business Continuity Fundamentals
• CCI1999 - Managing Hitachi Virtual Storage Platform with Storage Navigator, or
• CCE2288 - Hitachi Unified Storage Fundamentals
Prerequisite Knowledge
• This training requires a basic knowledge of VSP, USP V, and USP VM
• In addition, the learner should be familiar with Hitachi Storage Navigator and CCI software


Course Objectives

Upon completion of this course, you should be able to:


• Describe Hitachi Universal Replicator key features and benefits
• Describe Universal Replicator components and specifications
• Describe Universal Replicator journals, including sizing requirements and
restrictions on journal volumes
• Identify Universal Replicator performance factors
• Use Hitachi Storage Navigator to configure Universal Replicator
replication links, remote storage systems, and journals
• Issue Storage Navigator commands for Universal Replicator operations
• Set up CCI command devices, create CCI configuration files and operate
Universal Replicator with CCI commands


Upon completion of this course, you should be able to: (continued)


• Identify data protection best practice techniques
• Operate 3DC Configurations including 3DC Async
• Configure and operate Universal Replicator Delta Resync feature
• Discuss Remote Replication Data Transport considerations
• Describe the features and use of Hitachi Replication Manager software
• Create and use Universal Replicator MxN CT Groups


Agenda

Modules:
1. Overview
2. Specifications
3. Architecture and Internal Operations
4. Hitachi Storage Navigator Configuration
5. Storage Navigator Pair Operations
6. Command Control Interface Configuration and Operations
7. Data Protection Concepts and Practices
8. Three Data Center Operations
9. Delta Resynchronization
10. Data Transport Technologies
11. Hitachi Replication Manager Overview
12. Universal Replicator MxN Consistency Groups

Lab Activities:
1. Hitachi Universal Replicator Configuration and Operations
2. Managing Universal Replicator with CCI
3. Universal Replicator in 3DC Operations
4. Universal Replicator Delta Resync Operations
5. Universal Replicator MxN Consistency Group Operations


Learning Paths

Are a path to professional certification
Enable career advancement
Are for customers, partners, and employees
• Available on HDS.com, Partner Xchange, and HDSnet
Are available from the instructor
• Details or copies

HDS.com: http://www.hds.com/services/education/
Partner Xchange Portal: https://portal.hds.com/
HDSnet: http://hdsnet.hds.com/hds_academy/
Please contact your local training administrator if you have any questions regarding
Learning Paths or visit your applicable website.


HDS Academy Is on Twitter and LinkedIn

Follow the HDS Academy on Twitter for regular training updates.

LinkedIn is an online community that enables students and instructors to actively participate in online discussions related to Hitachi Data Systems products and training courses.

These are the URLs for Twitter and LinkedIn:
http://twitter.com/#!/HDSAcademy
http://www.linkedin.com/groups?gid=3044480&trk=myg_ugrp_ovr


Collaborate and Share

Learn what’s new in the Academy


Ask the Academy a question
Discover and share expertise
Shorten your time to mastery
Give your feedback
Participate in forums

Academy in theLoop!

theLoop:
http://loop.hds.com/community/hds_academy/course_announcements_and_feedback_community ― HDS internal only


Terms and Acronyms

LDEV = Logical partition of an array group by emulation type. Size depends on emulation
LUN = LDEV that is mapped to a port
Emulation = Partitioning of an array group into OPEN-X LDEVs
• Only one Emulation Type per Array Group is allowed
LCU = Logical Control Unit
• Number of LCUs is dependent on cache size and the number of Array Groups installed
• LDEVs are assigned to Logical Control Units at install time
• Max 256 LDEVs per LCU
SSID = Storage Subsystem ID — Unique ID assigned to each LCU; needed for TrueCopy


FED = Front-End Director = Channel Adapter = CHA
BED = Back-End Director = ACP Pair = Array Control Processor = DKA = Disk Adapter
Array Group = RAID Group = Parity Group
• 4 or 8 HDDs in RAID-1, RAID-5, or RAID-6
DKC = Disk Controller
DKU = Disk Unit = Disk Frame
HDD = Hard Disk Device = disk drive


Replication Terminology

Main Control Unit (MCU, M-DKC)
• Contains Primary Volumes (P-VOLs) and Master Journal Volumes (M-JNL)
Remote Control Unit (RCU, R-DKC)
• Contains Secondary Volumes (S-VOLs) and Restore Journal Volumes (R-JNL)
P-VOL (Primary Volume)
• Active, online LUN (in MCU)
S-VOL (Secondary Volume)
• Remote copy of the P-VOL (in RCU)
Replication Links (Paths)
• Fibre Channel connection to carry replication control and data
Journal Volume
• Buffer for journal updates if necessary
Journal Groups
• Contain both journal volumes and data volumes, grouped by application or server
• Allow Consistency Group operation on multiple data volumes with one command



1. Overview
Module Objectives

Upon completion of this module, you should be able to:


• Define the purpose of Hitachi Universal Replicator
• Identify the key features of Universal Replicator
• Discuss Universal Replicator group associations
• Describe Universal Replicator documentation
• Define and review Universal Replicator components


Purpose

Hitachi Universal Replicator
• Provides an asynchronous replication solution
  Business continuity and disaster recovery
  Data migration
• Provides one-to-one copies at any distance between Hitachi enterprise storage systems
• Allows production volumes to stay online during normal Universal Replicator operations
• Supports 3DC replication when paired with Hitachi TrueCopy Synchronous Replication software or with another instance of Universal Replicator (3DC Async configuration)
• Supports 4DC configurations

Universal Replicator is a disaster recovery solution for large amounts of data which
span multiple volumes. The Universal Replicator group-based update sequence
consistency solution enables fast and accurate database recovery, even after a rolling
disaster, without the need for time-consuming data recovery procedures.
During normal data replication operations, the primary data volumes remain online to
all hosts and continue to process both read and write I/O operations. In the event of a
disaster or system failure, the secondary copy of data can be rapidly invoked to allow
recovery with a very high level of data integrity. Universal Replicator can also be used
for data duplication and migration tasks.
Once Universal Replicator operations are established, duplicate copies of data are
automatically maintained asynchronously. Universal Replicator enables fast and
accurate database recovery even after disasters, such as earthquakes, without the
time-consuming data recovery procedures.
Supported Hitachi enterprise storage systems:
Hitachi Virtual Storage Platform
Hitachi Universal Storage Platform V or VM
Hitachi Universal Storage Platform or Hitachi Network Storage Controller


Key Features

Compatibility

Universal Replicator Asynchronous Remote Replication


• HUR volumes can be basic or dynamic provisioning volumes
• ShadowImage and Thin Image Snapshot can provide multiple local mirrors in each site
• All replication volumes can be externalized on any supported storage system
• Configurations can be defined with SNM, Hitachi Replication Manager (HRpM), or with RAIDCOM

[Diagram: Universal Replicator asynchronous remote replication over unlimited distance. A P-VOL on a supported storage system (VSP / HUS VM / USP V / USP) at the local site pairs with an S-VOL on a supported storage system at the remote site; each site is managed with CCI, HRpM, or RAIDCOM, and ShadowImage or Thin Image S-VOLs provide local mirrors at either site.]


Journals

Journal Cache
• Cache space in which Universal Replicator can build control information (metadata)
and temporarily store overwritten write data blocks
Journal Volumes
• Offline physical OPEN-V LDEVs on storage system
• Required on M-DKC (primary) and R-DKC (remote) storage arrays
• Buffer for journal updates (metadata and data) during replication
• Enhances Universal Replicator ability to survive communication failure between
sites
Journal Groups - Provide Hardware-level consistency grouping
• Contains journal volumes and data volumes assigned by application or by server
• All multi-volume data generated by an application must be in the same journal group
Journal volumes and data volumes in the same journal group can be
assigned to different CLPRs
For VSP, journal groups should be managed by a dedicated Virtual Director
Blade

When Universal Replicator is used, data to be copied is temporarily stored in journal volumes, which are ordinary physical logical devices. Universal Replicator enables you to configure and manage highly reliable data replication systems by using journal volumes to reduce the chance that copy operations are suspended; copy operations can be suspended due to restrictions on data transfers from the primary site to the remote site.

The updates (sometimes called update data) that are stored in journal volumes are called journal data or sometimes differential data. Because journal data is stored in journal volumes, you can perform and manage highly reliable remote copy operations without suspension of remote copy. For example, even if a communication path between the primary storage system and the remote storage system fails temporarily, remote copy operations can continue, because the journal volumes provide buffering capability that maintains the replication until the communication path is recovered.

If data transfer from hosts to the primary storage system is temporarily faster than data transfer between the primary and remote storage systems, remote copy operations can still continue. Because journal volumes can hold much more update data than cache memory, replication keeps running even when host writes outpace the inter-site link for a relatively long period of time.
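As a rough sizing sketch (the write rate and outage duration below are illustrative assumptions, not values from this course), the minimum journal capacity needed to ride out a link outage is approximately the sustained host write rate multiplied by the outage time:

    # Hypothetical journal sizing sketch; substitute your own measured values.
    WRITE_MBPS=50                # assumed sustained host write rate, in MB/s
    OUTAGE_SEC=$((2 * 3600))     # assumed link outage to survive: 2 hours
    echo "$((WRITE_MBPS * OUTAGE_SEC / 1024)) GB minimum journal capacity"
    # -> 351 GB, before adding a safety margin for bursts and journal metadata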


Note: If you add a journal volume when a remote copy operation is in progress (that
is, when at least one data volume pair exists for data copying), the metadata area of
the journal volume will be unused and only the journal data area will be used.
To make the metadata area usable, you need to split (suspend) all the data volume
pairs in the journal group and then restore (resynchronize) the pairs.
Adding journal volumes during a remote copy operation will not decrease the
metadata usage rate if the metadata usage rate is high.
Adding journal volumes during a remote copy operation may not change the
journal data usage rate until the journal volumes are used.


MxN Consistency Group (MxN CT Group)

An MxN Consistency Group can consist of multiple storage systems at either primary or secondary sites
Secondary volume consistency is preserved across multiple journal groups (may extend across multiple storage systems)
Requires Command Control Interface software for creation and operations

[Diagram: CCI instances at the primary and secondary sites coordinate multiple Universal Replicator journal groups; consistency is maintained among two or more journal groups, which may span storage systems.]


MxN Consistency Group (MxN CT Group)

Delivers asynchronous 2DC support for:


• Up to four storage systems per site
• Intermix of VSP, HUS VM, USP V, or USP VM at both sites
• 32K pairs in a single MxN consistency group spread over four storage
systems (see note)
• Allows split of multiple journal groups as a single consistency group split
based on consistency markers (CTQ-Markers)
• Requires latest CCI version to support HUS VM

Note: 8K pairs per Journal Group X 4 Journal Groups per MxN CT Group X 2 LDKC
per storage system = 64K pairs per MxN CT Group
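As a hedged illustration, pairs join an MxN consistency group when they are created with an explicit consistency group ID in CCI; the device group name, CTG ID, and journal IDs below are hypothetical:

    # Sketch only: create UR pairs in CT group 10, using journal 00 at both sites.
    paircreate -g mxn_grp -f async 10 -vl -jp 00 -js 00
    # -f async <CTG ID> assigns the consistency group; -jp/-js select the master
    # and restore journal IDs; -vl creates the pair from the local (P-VOL) side.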


Group Associations

Universal Replicator group associations

[Diagram: journal group association rules. Allowed: each journal group replicates in a single direction to one partner journal group, including mirrored (bidirectional) replication where each storage system holds a P-VOL journal group and a separate S-VOL journal group. Not allowed: a single journal group containing both P-VOLs and S-VOLs with opposite copy directions. Not allowed (Open Systems): a single journal group whose volumes replicate to more than one RCU.]

Allowable configurations

[Diagram: allowed configuration — one MCU with two journal groups (00 and 01), each paired with a different RCU; the two journal groups are combined into a single MxN Consistency Group (CT Group).]


Documents

MK-90RD7032
• Hitachi Virtual Storage Platform Universal Replicator User Guide

MK-92HM7019
• Hitachi Unified Storage VM Universal Replicator User Guide

MK-96RD624
• Hitachi Universal Storage Platform V, Hitachi Universal Storage Platform VM Hitachi Universal
Replicator User's Guide

MK-90RD7008, MK-90RD7009, MK-90RD7010 CCI documents


For operations with TrueCopy and ShadowImage
• MK-90RD7029 Hitachi Virtual Storage Platform TrueCopy User’s Guide
• MK-90RD7024 Hitachi Virtual Storage Platform ShadowImage User’s Guide
• MK-96RD622 Hitachi Universal Storage Platform V, Hitachi Universal Storage Platform
VM TrueCopy User’s Guide
• MK-96RD618 Hitachi Universal Storage Platform V, Hitachi Universal Storage Platform
VM ShadowImage User’s Guide
• MK-92HM7018 Hitachi Unified Storage VM Hitachi TrueCopy User Guide
• MK-92HM7013 Hitachi Unified Storage VM Hitachi ShadowImage User Guide

If required, the equivalent documentation for the USP/NSC models:


MK-94RD223 Hitachi TagmaStore Universal Storage Platform and Network
Storage Controller Universal Replicator User and Reference Guide
MK-94RD215 Hitachi TagmaStore Universal Storage Platform and Network
Storage Controller TrueCopy User and Reference Guide
MK-94RD204 Hitachi TagmaStore Universal Storage Platform and Network
Storage Controller ShadowImage User’s Guide


Components

Equivalent microcode levels on both MCU and RCU
Storage Navigator / Device Manager access to both MCU and RCU
Disaster Recovery Bundle - licenses for Universal Replicator and TrueCopy Synchronous
Disaster Recovery Extended bundle for 3DC and UR MxN Consistency Groups
Optional, but recommended for disaster recovery environments:
• Hitachi Command Control Interface
• Host failover software
• Hitachi Device Manager and Replication Manager


Module Review

1. State one of the reasons to use Universal Replicator.


2. What are the two primary means of determining which data
volumes should be grouped together?
3. What licenses are required to use Universal Replicator in a 3DC
environment?



2. Specifications
Module Objectives

Upon completion of this module, you should be able to:


• Describe Hitachi Universal Replicator specifications
• Discuss System Option Modes that pertain to Universal Replicator
• Describe Universal Replicator volume status conditions


Common Storage System Specifications

Supported Storage and Emulation Types

Storage systems: Virtual Storage Platform, Universal Storage Platform V, Universal Storage Platform VM, Universal Storage Platform, Network Storage Controller
• Controller emulation: 3990-6 (Basic) is not supported; 2107 is mandatory for USP V and VSP
RAID level: RAID-5 (3D+1P and 7D+1P), RAID-6 (6D+2P and 14D+2P), RAID-1 (2D+2D)
• RAID groups can be concatenated (up to 4)
Data volume LDEV emulation type: OPEN-V (Open); 3390-1, -2, -3, 3390-3R, -9, -L, -M (M/F)
Journal volume LDEV emulation type: OPEN-V (both Open and M/F)

For Open Systems environments, Controller Emulation has no effect, but must be set.


Volume Specifications

Universal Replicator usage with other features


• Hitachi Dynamic Provisioning: P-VOLs and S-VOLs
• Externalized volumes (virtual LUNs): P-VOLs and S-VOLs
• TrueCopy: P-VOLs and S-VOLs
• ShadowImage: P-VOLs and S-VOLs
• Copy-on-Write Snapshot: P-VOLs only
• Volume Migrator: P-VOLs, if mapped to ports
• Data Retention Utility: Certain volumes; refer to Data Retention Utility
documentation
• LUSE (Logical Unit Size Expansion) volumes: P-VOL and S-VOL must
have the same size and structure
• Cache Residency Manager volumes
• Server Priority Manager volumes: P-VOLs and S-VOLs

Notes on LUSE usage:


Two LUSE volumes can be assigned to a Universal Replicator (HUR) pair. Both
of the LUSE volumes that are assigned to primary and secondary data volumes
must consist of the same number of LDEVs and must have the same capacity.
If you want to perform LUSE operation to primary or secondary data volumes in
an existing UR pair, you must delete the pair first to return the volumes to SMPL
status. For detailed information about LUN Expansion (LUSE), see the LUN
Expansion User's Guide.
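For example, releasing a Universal Replicator pair before a LUSE operation might look like the following CCI sketch (the device group name is hypothetical):

    # Delete the UR pair so both volumes return to SMPL status.
    pairsplit -g urgrp0 -S
    pairdisplay -g urgrp0    # confirm SMPL before performing the LUSE operation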


Maximum number of pairs:


• Virtual Storage Platform: 32,768
• Hitachi Unified Storage VM: 16,384
• Universal Storage Platform V and VM: 32,768
• Universal Storage Platform and NSC: 16,384

Journal volumes and data volumes in the same journal group can belong to different
CLPRs. All journal volumes must belong to a single CLPR and all data volumes
must belong to a single CLPR.
NSC = Hitachi Network Storage Controller


Journal Group specifications for Hitachi enterprise storage system

Number of Journal Groups: up to 256 (00 - FF) per storage system
Data Volumes per Journal Group: up to 8,192
Journal Volumes per Journal Group: up to 64 (VSP, HUS VM, and USP V); up to 16 (USP)
Mirror IDs per UR Data Volume: 0 to 3 (reserve 0 for TrueCopy)


System Option Modes

Contact Tech Support for information


Note: Changes require tech support involvement

[The System Option Mode tables for Universal Replicator span several pages and are not reproduced in this extract; the final table lists modes new for VSP and HUS VM.]

Pair Volume Status

Volume status conditions


SMPL
• Description: This volume is not currently assigned to a Universal Replicator data volume pair. The volume does not belong to a journal group. When this volume is added to a Universal Replicator data volume pair, its status changes to COPY.
• P-VOL access: Read/Write. S-VOL access: Read/Write.

COPY
• Description: The initial copy operation (or resync copy) for this pair is in progress. The data volume pair is not yet synchronized. When the initial copy is complete, the status changes to PAIR.
• P-VOL access: Read/Write. S-VOL access: N/A.

PAIR
• Description: This data volume pair is synchronized. Updates to the primary data volume are journaled and sent to the secondary data volume.
• P-VOL access: Read/Write. S-VOL access: N/A.

PSUS
• Description: This data volume pair is suspended as a result of the pairsplit command (pairsplit -r). While the pair is split, MCU and RCU bitmaps denote changes to the P-VOL and S-VOL.
• P-VOL access: Read/Write. S-VOL access: Read Only (default).

PSUS Status: Pair Suspended Synchronized


This data volume pair is not synchronized, because the user has split this pair (pairsplit -r), or because the user has released this pair from the remote storage system (pairsplit -S). For Universal Replicator pairs, the primary storage system and remote storage system keep track of any journal data that were discarded during the pairsplit -r operation. While a pair is split, the primary storage system and remote storage system keep track of the primary data volume and secondary data volume tracks which are updated.
When you split a pair from the primary storage system, the primary storage system
changes the status of the primary data volume and secondary data volume to PSUS.
When you split a pair from the remote storage system, the remote storage system
changes the status of the secondary data volume to PSUS. The primary storage
system detects this (if path status is normal) and changes primary data volume
status to PSUS.
When you release a pair from the remote storage system, the remote storage system
changes the status of the secondary data volume to SMPL. The primary storage
system detects this (if path status is normal) and changes primary data volume
status to PSUS. You must release the pair from the primary storage system in order
to change the primary data volume status to SMPL.
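The operations described above map to CCI commands along these lines (a sketch; the device group name urgrp0 is hypothetical):

    pairsplit -g urgrp0 -r    # split the pair; the S-VOL becomes read-only (PSUS)
    pairsplit -g urgrp0 -rw   # split the pair with the S-VOL write option enabled
    pairsplit -g urgrp0 -S    # release the pair; the S-VOL returns to SMPL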


Volume status conditions


PSUE
• Description: This data volume pair is suspended by the MCU or RCU due to an error condition. The MCU and RCU keep track of any journal data that were discarded during the suspension operation. The MCU keeps track of the primary data volume tracks that are updated while the pair is suspended.
• P-VOL access: Read/Write. S-VOL access: Read Only (default).

PFUL
• Description: Universal Replicator monitors the total amount of data in the journal volume. If the amount of data exceeds the threshold (80%), the pair status changes from COPY or PAIR to PFUL. If Inflow Control is ON, host I/O is delayed to prevent the journal volumes from filling. If Inflow Control is OFF, a message is generated but host I/O is not delayed.
• Note: The PFUL status is displayed by CCI (Command Control Interface). Storage Navigator displays this status as PAIR.
• P-VOL access: Read/Write. S-VOL access: Read Only (default).

PSUE: Pair Suspended Error


This data volume pair is not synchronized, because the primary storage system or
remote storage system has suspended the pair due to an error condition. For
Universal Replicator pairs the primary storage system and remote storage system
keep track of any journal data that were discarded during the suspension operation.
The primary storage system keeps track of the primary data volume tracks which are
updated while the pair is suspended.
If the primary storage system detects a Universal Replicator suspension condition (see Suspension Condition), it changes the primary data volume and secondary data volume status to PSUE.
If the remote storage system detects a Universal Replicator suspension condition, it changes the secondary data volume status to PSUE. The primary storage system detects this (if path status is normal) and changes the primary data volume status to PSUS.


Volume status conditions


PFUS
• Description: Indicates pair suspension because the journal volumes filled. With Inflow Control OFF, PFUS occurs when the journal reaches 100% utilization. With Inflow Control ON, PFUS occurs if the journal threshold (80% utilization) is reached and the Data Overflow Watch timer period expires.
• Note: The PFUS status is displayed by CCI (Command Control Interface). Storage Navigator displays this status as PSUS.
• P-VOL access: Read/Write. S-VOL access: Read Only (default); Write if the write option is enabled.

SSWS
• Description: Result of a suspend-swap (pairsplit -RS) operation in the RCU, which conditions S-VOLs for takeover. Data can be written only to the secondary data volume, which is reassigned as the primary data volume during resynchronization (takeover) processing.
• Note: The SSWS status is displayed by CCI (Command Control Interface). Storage Navigator displays this status as PSUS or PSUE.
• P-VOL access: Read Only. S-VOL access: Read/Write.

PFUS (Pair Full Suspended)


SSWS (Secondary Swap Suspended) - Takeover function is active.
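A minimal takeover sketch with CCI, run from the secondary site (the device group name is hypothetical):

    pairsplit -g urgrp0 -RS       # suspend-swap: conditions the S-VOLs for takeover (SSWS)
    # ... production I/O now runs at the secondary site ...
    pairresync -g urgrp0 -swaps   # swap P-VOL/S-VOL roles and resync in the reverse direction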


Volume status conditions


Suspending
• Description: This pair is in transition from PAIR or COPY to PSUS/PSUE. When the split/suspend pair operation is requested, the status of all affected pairs changes to Suspending. When the split/suspend operation is complete, the status changes to PSUS/PSUE.
• P-VOL access: Read/Write. S-VOL access: N/A.

Deleting
• Description: This pair is in transition from PAIR, COPY, or PSUS/PSUE to SMPL. When the pairsplit -S operation is requested, the status of all affected pairs changes to Deleting. When the pairsplit -S operation is complete, the status changes to SMPL.
• P-VOL access: Read/Write. S-VOL access: N/A.


Transition from PAIR to PFUS


• PFUL = Pool (Journal) full
• PFUS = Pool (Journal) full, suspended
• If journal data exceeds 80%, group status changes to PFUL
• Depending on the Inflow Control setting, if journal data reaches 100%, or if it exceeds 80% for the user-set period of time, group status changes to PFUS (suspended)

Universal Replicator monitors the amount of journal data. If the amount of data
exceeds the threshold (80%), the pair status changes to PFUL. If the amount of
journal data exceeds the threshold for a certain period of time, the volume status
changes to PFUS, and the group is suspended.
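Because CCI and Storage Navigator label these states differently, a quick way to see the CCI-side status (PAIR, PFUL, PFUS, and so on) is pairdisplay; the device group name below is hypothetical:

    pairdisplay -g urgrp0 -fcx
    # The status column shows PFUL/PFUS on the CCI side where Storage Navigator
    # would show PAIR/PSUS; -fc adds copy progress, -fx prints LDEV numbers in hex.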


Module Review

1. Any OPEN emulation type can be used for journal volumes. True or False?
2. How many journal volumes can be created in one Journal Group?
3. What does the status PSUE signify?
4. What does the status PFUS signify?



3. Architecture and
Internal Operations
Module Objectives

Upon completion of this module, you should be able to:


• Describe how Universal Replicator uses bitmap areas during pair
operations
• Describe how Universal Replicator moves data between storage systems
• Demonstrate how journal volumes are utilized to replicate data
• Describe the structure of a journal volume
• Identify Universal Replicator performance considerations and restrictions
• Identify best practices for specifying journal volumes
• Identify journal cache specifications
• Describe Universal Replicator MxN Consistency Group functionality
• Discuss configuration planning steps and considerations


Bitmap Areas

Bitmap tracking mechanism is invoked when pairs are created


• Denotes locations of changes on P-VOL while data is copied (Base
Journal and resync)
• The Base Journal process reads the P-VOL, creates metadata, and sends data to the RCU at the request of the RCU's Read Journal operation
• Base Journal makes one pass through the bitmap
  When a write comes in, its location is checked to see if the track has already been sent
  If it has not been sent, that track is sent next with a sequence number, and the update follows with the next sequence number
  If it has been sent, the write is sequenced next and handled as a journal update
• In remote replication, differential bitmaps are now called bitmap areas
• Created in control memory (VSP, HUS VM), or shared memory (USP V)
VSP and USP V bitmaps denote those data locations (track or
cylinder) that have been changed by host I/O

Note: The number of bitmap areas affects the maximum possible number of pairs
that can be created in the disk storage system.


HUS VM bitmaps denote tracks (256KB blocks) changed by host I/O
Bitmaps are also invoked when pairs are split or suspended
• When data volumes are resynchronized, P-VOL and S-VOL bitmaps are
merged and changed data are read from the P-VOL and sent to the
remote storage system

Bitmap area usage during Base Journal


Best Practice: Use Bitmap 1 (Mirror ID 1) for the UR pair, leaving the other
bitmaps available for future implementation of TrueCopy and Delta Resync

Primary Volume Bitmap (X denotes data locations changed by host I/O during Base
Journal)
X | | X | | | | X | | X | | | | | | | |.............

Secondary Volume Bitmap (No host I/O access allowed to S-VOL until pairs are split)

| | | | | | | | | | | | | | | | | | ................

Note: Bitmaps are initially set to all ones. As each cylinder is copied during Base
Journal process, the bit corresponding to that cylinder is changed to zero.


Bitmap usage when pairs have been split


If desired, write I/O on S-VOL during pairsplit is allowed
• When resync occurs:
MCU merges primary and secondary bitmaps with OR operation
All changes are sent to RCU under control of P-VOL bitmaps in a
process similar to Base Journal operation

Caution: Resynchronization will overwrite any changes made to the S-VOL while pairs were split.

For additional flexibility, Universal Replicator provides a secondary data volume write option (S-VOL Write), which enables write I/O to the secondary data volume of a split Universal Replicator pair. The secondary data volume write option can be selected by the user during the pairsplit -r operation and applies only to the selected pairs. The secondary data volume write option can be accessed only when you are connected to the primary storage system.
When you resync a split Universal Replicator pair which has the secondary data
volume write option enabled, the remote storage system sends the secondary data
volume bitmap to the primary storage system, and the primary storage system
merges the primary data volume and secondary data volume bitmaps to determine
which cylinders are out-of-sync. All changed cylinders are then sent from P-VOL to
S-VOL. This ensures proper resynchronization of the pair. Note that this will
overwrite any changes to the S-VOL that occurred while pairs were split.
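The merge itself is a bitwise OR of the two bitmaps; the toy shell arithmetic below only illustrates the idea and is not the DKC's internal representation:

    # Toy example: each bit marks a changed cylinder; 1 = must be recopied on resync.
    P_BITMAP=0xA4   # changes tracked on the P-VOL while split (assumed values)
    S_BITMAP=0x48   # changes tracked on the S-VOL while split
    printf 'merged bitmap: 0x%X\n' $((P_BITMAP | S_BITMAP))   # -> 0xEC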


Bitmap area usage during resynchronization


When pairs are resynchronized, P-VOL and S-VOL bitmaps are merged in MCU and
all changed data locations are copied from P-VOL to S-VOL

P-VOL Bitmap
X | | X | | | | X | | X | | | | | | | | ............

S-VOL Bitmap (if S-VOL Write is enabled)

| X | | X | | | | | | | | | | X | | | .............

Merged Bitmap (logical OR of the P-VOL and S-VOL bitmaps)
X | X | X | X | | | X | | X | | | X | | | | | .................


Bitmap areas as Mirror Unit Numbers (mirror IDs)


• Each HUR P-VOL and S-VOL can have up to four bitmap areas
• Remote Replication bitmaps are separate from ShadowImage/CoW/Thin
Image differential bitmaps and do not share mirror IDs

[Diagram: mirror ID assignments. The Universal Replicator primary and secondary volumes each expose mirror IDs 0-3, while the ShadowImage L1 S-VOLs attached to them use mirror unit numbers 0-2.]

The example shows the mirror ID association of a Universal Replicator pair with
multiple ShadowImage mirrors. Note that Universal Replicator has four Mirror IDs
per volume while ShadowImage has three.


Moving Data Between Storage Systems

Journal Volumes provide buffer space: Allows HUR to survive more severe link outages compared to other async replications
Journal Groups contain multiple Journal Volumes along with data
volumes (P-VOLs or S-VOLs)
When pairs are created within the Journal Groups, a Consistency
Group ID is assigned to the resulting M-JNL/R-JNL association
As a best practice, always issue pair operation commands to the
Journal Group when:
Suspending all pairs in a journal group (pairsplit command)
Resyncing all suspended pairs in a journal group (pairresync
command)
If managed properly, HUR guarantees multi-volume consistency on
S-VOLs

Journal groups enable update sequence consistency to be maintained across a
journal group of volumes. The primary data volumes and secondary data volumes of
the pairs in a journal group must be located within one physical primary storage
system and one physical remote storage system (1-to-1 requirement).
When more than one data volume is updated, the order in which the data volumes are
updated is managed within the journal group that the data volumes belong to.
Consistency in data updates is maintained among paired journal groups. UR uses
journal groups to maintain data consistency among data volumes.
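As a sketch of group-level operation with CCI, assuming the journal's pairs are defined as device group URGRP in horcm.conf (the name is illustrative):

    pairsplit -g URGRP          # suspend every pair in the journal group together
    pairresync -g URGRP         # resynchronize all suspended pairs in the group
    pairdisplay -g URGRP -fcx   # verify pair status and copy progress for the group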


Base Journal operation


• Differential bitmaps are created
• Copy of P-VOL data is initiated
• Copy Pace parameter: controls amount of data sent (track or cylinder)
• Host write operations to P-VOL are managed by bitmaps during Base
Journal and while pairs are split
Journal Obtain operation
• Sends new write data to journal volume, in preparation for transport
• Two update modes
Asynchronous - Normal update mode
Synchronous - Only invoked with Inflow Control ON (default, but not
recommended)
Journal Copy operation
• R-DKC requests that primary journal (P-JNL) data be sent to remote
storage system


Journal Restore operation


• Updates S-VOL with data stored in secondary journal (S-JNL)
• Maintains write order consistency of data on S-VOLs
Base Journal process - Initial Copy
• The Base Journal process reads P-VOL data, creates metadata, and
sends data
P-VOL is online, accepting host I/O. Changes in the data are noted in
the bitmaps
When all data have been copied (including all changes that occur while
copying), pair status changes to PAIR
Nocopy option is available if desired (see note)
• Base Journal Metadata - MCU creates metadata pointers for P-VOL
data locations as it is read. Those pointers provide a restart point if the
Base Journal is interrupted

Initial copy operations synchronize data in the primary data volume and data in
the secondary data volume. They are performed independently of host
I/O, when you create a data volume pair or when you resynchronize a
suspended pair. The initial copy operation copies the Base Journal data that is
obtained from the primary data volume at the Primary storage system to the
Secondary storage system.
The Primary storage system reads all data of the primary data volume as the
Base Journal data, in sequence. The Base Journal contains a replica of the entire
data volume or a replica of updates to the data volume.
The Base Journal process denotes changes in the primary volume by noting changed
data locations in the Bitmap Areas. Once all changes are copied, PAIR status is
declared.
When a data volume pair is suspended, the primary volume Bitmap Areas are
again used to note changes to the primary volume.
Note: You can specify None as the copy mode for initial copy operations. If the None
mode is selected, only volume identification information will be copied. Full Base
Journal operations will not be performed. The None mode must be used only when
you are sure that data in the primary data volume is identical to data in the
secondary data volumes, or when only the volume identification information is
desired.
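In CCI, the copy mode is chosen at pair creation. A hedged sketch, assuming device group URGRP and journal ID 0 on both sides (names and IDs are illustrative):

    # Full Base Journal initial copy, issued from the primary (local) side
    paircreate -g URGRP -f async -vl -jp 0 -js 0
    # None/no-copy mode: only when P-VOL and S-VOL are already known to be identical
    paircreate -g URGRP -f async -vl -jp 0 -js 0 -nocopy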


Journal Obtain process


• Begins after Base Journal is complete (PAIR status is achieved)
• Builds Journal Updates as data comes in from host
• Metadata (control data) is created in Journal Cache for each update
• Metadata includes Sequence numbers (and CTQ-Markers furnished by
CCI if MxN Consistency Group is specified) to preserve write order
consistency across all volumes in the R-JNL groups

• Metadata and data blocks are queued for destage to the primary journal
volumes (M-JNL Group)
• Journal Obtain notification is sent to RCU
• Metadata and data blocks are held in cache as long as possible before
destaging to M-JNL volumes


Journal Obtain process (continued)


• Contingent upon Inflow Control setting, Journal Obtain operates
differently when Journal Volume Threshold is reached.
Inflow Control OFF - Normal operation mode
• Host I/O will not be delayed
• Journal groups will suspend if journal cache threshold or journal volume
threshold is exceeded
• Also called Asynchronous Mode
Inflow control ON - (not recommended) Will delay host I/O by the
time required to destage journal updates to Journal Volumes when:
• Journal volumes have exceeded the 80% threshold due to either link failure,
or host write I/O rate exceeding the MCU/RCU transfer rate
• Groups will be suspended if Overflow Watch Timer value is exceeded
• Also called Synchronous Mode - occurs only when the M-JNL volumes
reach the 80% threshold with Inflow Control ON


Journal Copy process


• Upon receipt of Journal Obtain notification, secondary storage system
(RCU) responds with Read Journal Command to pull data from primary
storage system (MCU)

• MCU responds by sending journal data (from cache if possible) as the


Journal Copy
Data is sent with Sequence Numbers (and CTQ-Markers if an MxN
Consistency Group is set)
Even if Journal Copy sends journal data from cache, destage to the
M-JNL occurs
• Sent data is received in RCU Cache, waiting to be settled. Data may be
stored on secondary journal volumes (R-JNL) if necessary because of
RCU constraints. (See note)

• Read Journal commands are sent repeatedly to primary storage system


until MCU sends highest sequence number

Note: Among other causes, journal updates arriving at the RCU may be written to R-
JNL due to inadequate RCU Journal Cache, or low data volume write I/O
throughput due to fewer or slower disks used on RCU side than MCU side.


Journal Restore process


• After data is received in RCU Journal Cache, RCU initiates Journal
Restore. Updates are reassembled by sequence number (see note)
• RCU notifies MCU of highest sequence number, along with the number of
updates received
• Sequence numbers are compared to ensure that all updates sent by MCU
were in fact received by RCU
• After successful comparison of sequence numbers between RCU and
MCU:
RCU writes data to S-VOLs in sequence
RCU discards metadata and any retained R-Journal entries
MCU discards entries in M-Journal

Note: RCU may have to buffer Journal Copy to R-JNL volumes if the amount of data
exceeds what can be handled in the RCU Journal Cache. An example would be if a
partial link failure causes a significant increase in M-JNL utilization. When all links
come up, the M-JNL will attempt to flush all retained updates (oldest first) because
the RCU has already requested them. The inrush of journal updates into the RCU
may be more than can be handled in the RCU journal cache. The excess will buffer to
the R-JNL volumes, and then be brought into RCU Journal Cache as space becomes
available.


Sequencing guarantees that write order fidelity is maintained for each JNL group

[Diagram: write data (1) through (5) for the P-VOLs of JNL Groups 0 and 1 flows from host to MCU cache, is journaled with sequence-numbered metadata to the JNL-VOLs, is transferred by Read JNL commands issued from the RCU (possibly in a different order from the write order), and is restored in sequence to the paired S-VOLs in JNL Groups 2 and 3 on the RCU.]

1. P-VOL receives write command from the host.


2. MCU stores the received write data to the cache.
3. For each write data, MCU obtains a sequence# in the JNL Group and creates the
metadata including the sequence#. MCU also allocates an area in the JNL VOL and
records the location of that area in the metadata. (The write data and their write
order are managed with metadata for each JNL Group.)

MCU also creates, from the write data, the JNL that is stored in the JNL VOL.
4. RCU issues Read Journal Commands and MCU sends the metadata and the JNL
as a response to the commands. These transfers can be performed in a different
order from the write order.

5. RCU stores the metadata and the JNL to the cache.


6. RCU sorts the JNL into write order by referring to the metadata (sequence#) for
each JNL Group and reflects (writes) them to the paired S-VOL according to the
write order.

Hitachi Remote I/O (RIO) operation


• Replication links use RIO
Maximum of 64 updates per Read Journal command
Up to 1MB transfer length
Independent of source or target device
• Data packed and sent from different source volumes
• Simultaneous transfers from a single source volume
Up to 32 Read Journal commands can be handled concurrently on a
single link

[Diagram: journal updates from several source volumes are packed together and carried concurrently across the MCU-to-RCU links.]

Journal Volume Structure

Overview
• Journal volumes are buffers for write updates that allow Universal
Replicator to survive extended periods of reduced link bandwidth or
complete link failure without suspending the replication

Journal volumes store journal data (metadata and data blocks created
by Universal Replicator Journal Obtain process)
Up to 64 Journal volumes can be added to a group (VSP, HUS VM,
and USP V) or 16 (USP)
• Journal volumes are divided into extents
By default there are 33 extents
• One extent is reserved for metadata
• 32 extents are reserved for journal data
• Extents allow Universal Replicator to read data from journal volumes in
parallel

In MCU and RCU, journals are stored in journal volumes. One journal group can
contain up to 64 journal volumes. A journal volume consists of metadata area and
journal data area. The ratio of metadata area to journal data area is fixed at 1 to 32. A
journal data area is divided into 32 extents and stores journal data. In the metadata
extent, metadata is stored sequentially so that multiple metadata with neighboring
sequence numbers can be read to the cache memory from the disk.
Journal data are stored in a round-robin manner so that multiple journal data with
neighboring sequence numbers can be transferred to RCU in parallel. The journal
data can also be read from / written to the disk in parallel. In the Journal Copy
function, when an MCU receives a Read Journal command from an RCU, the MCU
sends the oldest journal (with the lowest sequence number) first.


All data volumes within a journal group share the journal volumes
assigned to that group
Journal data extents are written in round-robin manner across
available JNL volumes
Starting positions are placed differently to minimize disk actuator
movement
Metadata and data extents have fixed size
When additional journal volumes are added, extents are redistributed
• This allows journal volumes to be added nondisruptively

[Diagram: journal volumes JNL #1 through JNL #64, each containing one metadata area and a journal data area of 32 extents, together forming the group's journal capacity.]


Metadata structure

Journal type - Type of journal (for example, base journal or update journal)
Original data storing position - The primary data volume slot number, and the start and end sub-block numbers (data length)
LDEV Number (data) - The number of the primary data volume that stores the original data
Journal data storing position - The slot number of the primary journal volume, and the start sub-block number
Journal sequence number - The sequence number that is assigned when the journal is obtained
LDEV Number (journal) - The volume number of the primary journal volume that stores the journal data
CTQ-Marker (for MxN Consistency Group) - Functions similar to a timestamp for journal creation


Performance Considerations and Restrictions

Universal Replicator Link Bandwidth


• The total bandwidth between MCU and RCU must exceed the average
throughput for write I/O (for data to be replicated) from hosts to the MCU
Note: If there is to be no delay in transmission of data and RPO is to be
minimized, then total bandwidth should exceed the peak throughput

[Diagram: host write data enters MCU cache at the average write I/O data rate, is journaled, and crosses extended Fibre Channel links (max 4Gb/sec) between MCU and RCU; the total bandwidth available to the replication spans the extenders between the two systems.]

The following factors should be considered to provide greater bandwidth between
MCU and RCU than the highest throughput between the hosts and MCU:
Type of data link (DWDM/T3/ATM/IP)
Number of data links
Number of Fibre Channel links
Maximum number of links = 8 Initiators and 8 RCU Targets per MCU/RCU
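As a worked illustration (hypothetical figures): if hosts write an average of 60MB/sec of replicated data with peaks of 120MB/sec, total link bandwidth above 60MB/sec keeps the journals from growing without bound, while bandwidth above 120MB/sec minimizes RPO; a single 4Gb/sec Fibre Channel link offers roughly 400MB/sec raw, before extender and distance effects are considered.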


Data flow and performance factors of Universal Replicator


• M-JNL group throughput must exceed the peak inflow data rate for data to be
replicated
• If JNL VOL throughput is less than the peak data rate, Journal Cache is the
only buffer available
• Journal group configuration planning is essential to achieve performance
demands
• Best case: Read Journal command always finds data in MCU cache. (No read
access from journal volume is required for Journal Copy)

[Diagram: data flows from P-VOL through MCU cache and the M-JNL group, across Read Journal / Journal Copy transfers, into RCU cache, the R-JNL, and finally the S-VOL; M-JNL throughput gates the flow.]

Best Practices

JNL Capacity and JNL Throughput

M-JNL capacity must be sufficient to absorb change rate spikes


M-JNL throughput must be greater than peak change rate

[Chart: data change rate over time. The typical change rate sits below link bandwidth; an occasional peak above link bandwidth is absorbed by M-JNL capacity (RPO), so JNL VOL throughput must remain above the peak change rate.]


Sizing Journal Volumes

The M-JNL Throughput must be greater than the peak data change
rate to absorb higher than normal spikes
Requirement for JNL VOL Capacity
• Journals provide increased resiliency by allowing Universal Replicator to
maintain PAIR relationship longer than other asynchronous replications
• Journaling will occur when the replicated data inflow rate (change rate)
exceeds available replication link bandwidth
• When link down occurs, all data changes go to M-JNL
• When links come up:
Inflow data shares link bandwidth with the M-JNL updates being sent
across the links
This can result in R-JNL utilization increasing


Sizing Journal Volumes

M-JNL VOL Capacity (see note)


• Journal volumes should have enough capacity to absorb the increase in
journal data generated by the following conditions:
Occasional higher-than-average peak in data change rate
Occasional link degradation or total loss of links

(P - L) x T = C

Where:
P = Peak replicated data rate (highest aggregate inflow rate in MB/sec)
L = Link bandwidth (outflow rate in MB/sec, M-R throughput; worst case L = 0 if all links are down)
T = Expected duration of peak input data in seconds (customer specified)
C = Capacity of journal volumes (in MB)

Note: Essentially you calculate your maximum expected data change for the period
of time you wish to avoid HUR suspension. For example, if you want to protect
against suspension for 6 hours, you calculate your journal volume capacity to be
your maximum expected data change in 6 hours.
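As a worked illustration of the formula under the note's assumptions (figures are hypothetical): with a peak replicated inflow of P = 50MB/sec, a total link outage (L = 0), and a desired protection window of T = 6 hours (21,600 seconds), C = (50 - 0) x 21,600 = 1,080,000MB, or roughly 1TB of journal capacity for the group.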


Sizing Journal Volumes

RCU Journal Group considerations


• In general, R-JNL has to absorb the data sent during Journal Copy
• If R-JNL fills, RCU stops replying to RIO, causing M-JNLs to fill
• In data protection environments designed to allow replication direction
reversal, match R-JNL capacities and throughputs with M-JNL

[Diagram: MCU journal volume throughput, MCU-RCU link throughput, and RCU journal volume throughput sit in series between the P-VOL (M-JNL Group) and the S-VOL (R-JNL Group).]

If: Throughput of the JNL groups in both MCU and RCU exceeds the peak data
rate for data to be replicated
Then: MCU-RCU link throughput does not have to exceed the data rate for data to
be replicated. Journal volumes will buffer the excess data rate.
If the user does not need a takeover environment, the performance and capacity of
the R-JNL group can be less than the M-JNL group (not recommended)


Journal Volume Physical Characteristics

JNL volumes and data volumes in same parity group


• Not recommended! Neither JNL volume nor data volumes will achieve sufficient
and stable performance (see note)

Recommendations:
• Best practice: Assign no more than one JNL volume from any given parity group to
a JNL group
• Second Best Practice: Two journal volumes per parity group per journal group

Monitor JNL utilization, parity group utilization, and back end director to
identify bottlenecks
Ensure parity groups containing journal volumes are distributed across all
available BEDs
[Diagram: a single parity group's HDDs hosting both a JNL VOL and other volumes.]

Note: The data transfer speed of a journal volume depends on the data transfer
speed of the Parity group that the journal volume belongs to. One Parity group can
consist of one or more logical volumes, including journal volumes. Therefore, if
frequent access is made to non-journal volumes in a Parity group, relatively fewer
accesses can be made to journal volumes in the same Parity group. This can cause a
drop in the data transfer speed of journal volumes. To avoid that, consider
relocating the data volumes to another parity group.


Journal Volume Physical Characteristics

Physical disk type


• Journal volume access is primarily sequential write and random read
Random read access requires seek time and rotational latency to
position accessed data
• Therefore performance of physical disk (seek time, rotation speed, and
transfer speed) affects JNL VOL throughput
High performance HDDs are preferable because of faster access time

A Journal Group can consist of physical volumes with differing rotational speeds,
capacities, and RAID configurations (for example, RAID-1 and RAID-5). Data
transfer speed of Parity groups is affected by the physical volumes and RAID
configurations.


Journal Volume Physical Characteristics

JNL Volume Considerations


• Extents: JNL entries are written by extents across all Journal volumes in
round-robin manner
• Number of Journal Volumes: More Journal Volumes in the journal group
decreases extent size (given that the same overall journal capacity is
used), therefore forcing more even use of all volumes in the group

• Size: If all JNL volumes in a group are the same size, all extents are the
same size
• Throughput: Total JNL throughput of a JNL group is the sum of the
throughput for each JNL VOL in the group
• Physical Drive Types: Parity Groups contributing JNL volumes to
Journal Groups should have equivalent performance

JNL entries are distributed across all volumes to achieve maximum performance
from assigned JNL VOLs
Therefore total JNL throughput of a JNL group is the sum of each JNL VOL
performance. More VOLs, more throughput
One JNL group can have up to 64 JNL VOLs
To obtain stable throughput, equivalent performance characteristics for all JNL
Volumes are desirable. Lower performance of some JNL VOLs may strongly
degrade overall JNL group throughput.
In general, best performance of the Journal Group will be realized when as many
volumes as possible are incorporated into the Journal Group and all volumes are
approximately the same size.
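As a simple illustration of the additive throughput rule (figures are hypothetical): a journal group built from four journal volumes, each sustaining about 50MB/sec from its parity group, yields on the order of 200MB/sec of group throughput, provided the parity groups carry no significant competing I/O.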


Journal Volume Physical Characteristics

RAID configuration
• RAID-5 (7D+1P) is recommended
7D+1P parity groups are more efficient, spreading I/O across more
spindles and BEDs (see note)


Better than two RAID-5 (3D+1P) groups of same capacity
RAID-1 sequential write performance is low in comparison

JNL VOL on RAID-5 (7D+1P): Best
JNL VOL on RAID-5 (3D+1P): Better
JNL VOL on RAID-1 (2D+2D): Worst

Note: RAID-6 14D+2P is also supported. No guidance on suitability for Journals has
been provided.


Journal Cache Specifications

Generally Universal Replicator requires a smaller amount of cache than other asynchronous remote replications
• The recommended minimum is 25% to 30% additional cache for
replicated data, plus an additional 1GB per Journal Group
JNL Cache — MCU
• In normal use, MCU journal cache contains mainly control (metadata)
information
• If the data change rate rises above link throughput, MCU journal cache
write pending ratio will climb
• Destaging of journal cache to journal disk will begin after Cache Write
Pending reaches the 30-40% range
• If JNL volume throughput is insufficient, journal cache write
pending may reach the priority destage level


Use of Cache parameter setting: Determines whether or not the RCU will use journal cache. Default is USE
• Requires 25% to 30% additional cache in RCU, plus 1GB per journal
group
• Provides significant RCU performance benefit
• If set to NOT USE, all journal updates go to R-JNL
For both MCU and RCU, the only way to increase journal cache size
is to increase total cache size

If journals are RAID-1, the Use of Cache setting does not apply.


Configuration Planning

Replication planning considerations:


• Investigate I/O patterns (with Performance Monitor and other data
collection tools)
For example, look at peak IOPS, average IOPS, read/write ratio,
read/write cache hit ratio, sequential/random pattern
• Consider DKC configuration
For example, look at number of host paths, number of DKC-DKC paths,
JNL groups, RAID level
• Design MCU and RCU configuration to meet desired RPO/RTO
objectives
For example, number of FEDs and BEDs, number of host paths, JNL
groups, RAID level, replication link bandwidth


Identify applications to be replicated


Determine amount of data to be replicated
Estimate inflow data rate for data to be replicated
Establish Link Bandwidth need to meet desired RPO/RTO
Perform necessary calculations to determine journal volume size
Identify suitable storage systems
Identify suitable parity groups for data volumes and journal volumes
Order and install necessary hardware and software
Configure MCU and RCU
Test replication
Begin production replication
Monitor, evaluate, modify

Thorough configuration planning is necessary to achieve performance demands.


Review

How Universal Replicator Works

Replication process
• Initial Copy (also called Base Journal Copy)
During Initial Copy process, metadata pointers to data on the P-VOL
and write sequence numbers are stored in the metadata area of the
journal volume

The base journal data is obtained from the P-VOL and sent to the R-DKC
in reply to Read Journal commands issued by the R-DKC
The data in the S-VOL synchronizes with the data in the P-VOL using
the sequence numbering scheme stored as metadata on the primary
journal volume

Only one synchronization pass is required for the initial copy and
resynchronization
This operation is conceptually similar to Initial Copy in TrueCopy

Journal Obtain is the function that stores data already residing on the primary data
volume as a base journal in the journal volume at the primary site. The same function
stores write data as journal data in the journal volume with every update of the
primary data volume, according to the write instructions from the host. The journal
obtain operation is performed according to the instruction of a pair create or pair
resync operation from the primary site. The write sequence number from the host is
assigned to the journal data; with this information, write sequence consistency at the
remote site can be maintained. The update data from the host is kept in cache, so
journal obtain for the update data is performed at a different time from the receipt of
the update data from the host or the storage of the data to the data volume.

How Universal Replicator Works

Journal Update process - After Base Journal completes:


1. Journal Obtain process is invoked. JNL data is queued for destage to
MCU JNL volumes
2. MCU sends Journal Obtain notification to RCU
3. RCU issues Read Journal Command to initiate Journal Copy
4. Journal Copy pulls from MCU cache (may have to initiate disk access if
JNL data has been destaged to JNL volume) and sends data to RCU
5. RCU executes Journal Restore and sorts JNL data by sequence
number
6. When complete, RCU compares sequence numbers with MCU. If
agreement is reached, data is written to S-VOLs and JNL data on both
M-JNL and R-JNL volumes are discarded

1. Update Copy starts as the Journal Obtain process is invoked when data is
written as journal data to cache and then the journal volume. Control
information (metadata) is attached.

2. MCU then sends Journal Obtain notification to RCU. This tells the RCU that
pending data is now ready. Data will remain in MCU cache until it is destaged to
Journal Volume.
3. RCU then pulls data from MCU with Read Journal command.
4. If available in cache, Journal Copy pulls from MCU cache and sends data to RCU
cache to be stored on secondary journal volume. If not in cache, data will come
from MCU Journal Volume.

5. RCU executes Journal Restore to begin assembling the Journal Data into
sequence number order
6. After the journal data is sequenced, RCU compares sequence numbers with
MCU. If both agree on number of blocks sent and the highest number sent, both
discard their retained journal data.


Best Practice Sizing for Journal Volumes

JNL VOL Throughput — Requirement


• JNL VOL throughput of M-JNL group must exceed the peak inflow data
rate for data to be replicated
• If JNL VOL throughput is less than the peak data rate, Universal
Replicator will suspend groups (PFUS)
• Components of JNL VOL throughput:
Journal Volume performance
• High performance HDD
• Number of journal volumes in the group (more is better)
• Size of journal volumes (roughly equal is better)
• Overall parity group utilization (low I/O load of other volumes in the parity
group)
• Best Practice: One JNL Volume per parity group per Journal Group
Transfer Bandwidth
• Number of transfer paths (links)
• Speed of transfer pipe


Module Review

1. Synchronous write to M-JNL is the normal mode for Universal Replicator. True or False?
2. How many journal volumes can be added to a USP V Journal
Group?
3. Two large journal volumes will give better performance than many
small volumes. True or False? Why?
4. How does greatly varying size of journal volumes affect Journal
Group?
5. Universal Replicator replicates data synchronously. True or False?
6. Update data blocks must arrive in the RCU in the original order as
sent by MCU. True or False?
7. What are the three basic operations Universal Replicator uses to
move data from P-VOL to S-VOL?
8. What determines the size of a journal volume data extent?



4. Hitachi Storage Navigator Configuration
Module Objectives

Upon completion of this module, you should be able to:


• Prepare the storage systems for Hitachi Universal Replicator operation
• Map LDEVs to ports for use as P-VOLs and S-VOLs
• Configure Universal Replicator paths and add the remote system
• Configure Journal groups


Preparation Checklist

With Hitachi Storage Navigator Program


• If necessary, create volumes to be used as replication P-VOLs and S-VOLs
using the appropriate Storage Navigator functions
• Use appropriate procedure for your storage platform
Map candidate P-VOL and S-VOL logical devices (LDEVs) to the
desired ports
Record port/LUN information
• Confirm necessary license keys are installed
Universal Replicator, TrueCopy, ShadowImage, Disaster Recovery
Extended
• Open Universal Replicator feature in both primary and secondary
systems
Set port attributes for replication links
Add remote DKC
Create journals
Perform pair operations

Note:
LDEV numbers as displayed by Storage Navigator
An LDEV number that ends with a “#” mark indicates that the LDEV is an
external volume (for example: 00:00:01#)
An LDEV number that ends with a letter “X” indicates that the LDEV is a virtual
volume used by Dynamic Provisioning (for example 00:00:01X)
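Where CCI is configured (VSP and HUS VM), the port/LUN mapping can also be recorded from the command line. A sketch, assuming an authenticated raidcom session, port CL1-A, and an illustrative LDEV ID:

    raidcom get lu -port CL1-A          # list LUN-to-LDEV mappings on port CL1-A
    raidcom get ldev -ldev_id 0x2001    # inspect a candidate P-VOL/S-VOL LDEV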


Preparation

Connect to Storage Navigator

For VSP, launch browser


• Enter IP address of SVP
• Login to Storage Navigator


Connect to Storage Navigator

For HUS VM, launch browser


• Enter IP address of SVP
• Login to Storage Navigator

Connect to Storage Navigator

For USP V, connect to the storage system:
1. Launch browser
2. Select desired system from
Storage Device List

3. Login to Storage Navigator


Map Logical Devices

For use as P-VOLs and S-VOLs


• Universal Replicator data volumes must be LUNs, that is, LDEVs mapped
to ports
• The primary and remote volumes must be the same size
• LDEVs must be mapped to ports that have the default target attribute
setting
• Verify that the Host Mode setting is correct for the operating system of
choice

Verify Fibre Channel Paths

VSP and HUS VM


• From Storage Navigator
browser window, select
Actions > Component >
View Port Location
• You can also access from
the GUI with Go >
System Information >
Port Location
• Connected ports have
yellow highlights


Verify Fibre Channel Paths

USP V
• After mapping LDEVs to
ports, verify
connectivity:
1. Open Storage
Navigator on
both MCU and RCU
2. Select Port Status
3. Confirm correct ports
have highlights

Check Licenses

VSP and HUS VM - Settings dropdown menu


• Select Environmental Settings > License Keys


Check Licenses

USP V
• Go Menu > Environmental Settings > License Keys

Check Licenses

If additional keys are needed


1. Change to Modify
2. Change the Mode to File
3. Browse for the license
key file
4. Click Install
5. Click Apply
6. Refresh when prompted


Replication links

Universal Replicator requires multiple Fibre Channel paths between MCU and RCU
• Forward Paths are the control paths
MCU ports must be set to Initiator (Transmitting Port)
RCU ports are set to RCU Target (Receiving Ports)
• Reverse Paths carry data
MCU ports must be set to RCU Target
RCU ports are set to Initiator
• An operational minimum configuration would comprise at least two control
paths and two data paths, spread across available Front-End Directors
(and Clusters)

Replication links

M-DKC to R-DKC
• Forward links are used for control functions between MCU and RCU
• Reverse links carry journal data
[Diagram: M-DKC (MCU) and R-DKC (RCU), each containing P-VOL/S-VOL pairs, JNL-VOLs, and JNL Groups. Forward paths from MCU Initiator ports to RCU Target ports carry control information; Read JNL commands are issued, and JNL data is sent, over paths from RCU Initiator ports to MCU RCU Target ports. Maximum 8 Initiators per DKC; 16 paths per MCU/RCU.]

Replication links

[Diagram: primary storage system (M-DKC) and remote storage system (R-DKC), each with an SVP reached over the Storage Navigator LAN. Control flows from M-DKC Initiator ports to R-DKC RCU Target ports; data flows from R-DKC Initiator ports to M-DKC RCU Target ports, between the M-JNL and R-JNL Groups. Initiator = transmitting port; RCU Target = receiving port. Maximum 8 Initiator ports per M-DKC and maximum 8 RCU Target ports per R-DKC.]


Replication links

One-port Front-End Director (FED)


• All path hardware (cables, switches, extenders and converters) must be
in place before beginning Universal Replicator Path configuration
Note: Zone the FC switches so that no other device on the SAN
can access the ports used for Universal Replicator

[Diagram: one-port FED configuration. Cluster 1 FED ports 1A/3A/5A/7A and Cluster 2 FED ports 2A/4A/6A/8A, with an Initiator assigned on each cluster; each port has its own channel control processor.]

Processor = Channel Control processor for the model of storage system

Channel Control processor:


VSP: LR (Logical Router)
HUS VM: CHB
USP V: MP


Replication links

2-port Front-End Director (FED) card


• Note the port layout on the CHA card before deciding which ports to use
for Universal Replicator paths
• All path hardware (cables, switches, extenders and converters) must be
in place before beginning Universal Replicator Path configuration
[Diagram: 2-port FED cards. Ports 1A/3A/5A/7A on the Cluster 1 FED and 2A/4A/6A/8A on the Cluster 2 FED share processors in port pairs; Initiator assignments are spread so that paired ports are considered together.]

Processor = Channel Host Interface Processor on FC card


Configuration

Overview

In Storage Navigator: (see note)


• Configure port attributes on both MCU and RCU storage systems
• Add DKC definition on both MCU and RCU storage systems
This will create and test the paths
Note that the Add DKC operation must be done on both storage
systems so that paths are established in the correct direction
• Create journal groups on both MCU and RCU storage systems
Set appropriate Journal group options

Note: These operations can also be done with RAIDCOM (VSP and HUS VM only)
and Hitachi Replication Manager.


Overview

Virtual Storage Platform and HUS VM


• Launching Universal Replicator from the Storage Navigator browser window and
from inside the Storage Navigator GUI
Note: All setup must be done on both MCU and RCU storage systems

[Screenshots: launching from the browser window and launching from the Storage Navigator GUI]

Overview

For Universal Storage Platform V, select Universal Replicator from Storage Navigator on both MCU and RCU storage systems

[Screenshot callout: select the Open Systems Universal Replicator option, not the Mainframe option]


Overview

For USP or NSC, select Universal Replicator from Storage Navigator on both MCU and RCU storage systems

[Screenshot callout: select the Open Systems Universal Replicator option, not the Mainframe option]

Inside Storage Navigator, select the Universal Replicator. To do so, click on the
Universal Replicator menu option.
Note: There are two forms of Universal Replicator: Open Systems and Mainframe.
The Open Systems Universal Replicator instance is the top Universal Replicator
option.


Port Attributes

Configure port attributes on both MCU and RCU


1. Select Subsystem, CHA card
slot, or attribute in the tree
view on the left
2. Right-click the selected ports
3. Set initiator and RCU target
attributes
4. Click Apply
5. Verify the initiator and RCU
target creation

Do any of the following. Ports will be displayed in the upper-right list.


Select Subsystem
Select a channel adapter from the tree view. The port for that channel adapter
will be listed on the upper right of the panel
Select a port attribute (RCU Target or Initiator) from the tree view
Now Select a port in the upper-right list in the panel. Select a Fibre Channel port
only.
Right-click the selected port and select the desired port attribute (RCU Target or
Initiator) from the pop-up menu. The rightmost column of the list displays
Modified and the Preset list displays the changes that you have made.
See the Preset list to check the settings that you have made. If you want to
change the attribute of a port, select and right-click the port from the list and
then select the new attribute. If you want to cancel a change in the attribute of a
port, select and right-click the port in the Preset list and then select Cancel. If you
want to cancel changes in attributes of all ports, right-click on the Preset list and
select the Cancel All command.
Click the Apply button on the lower right in the Universal Replicator panel to
enable all the settings. To disable all the operations and restore the list, click the
Cancel button.
Review port setting for replication


If using USPV or USP 2-port cards, changing port attributes will change two
ports. Be sure the associated port is not being used by a host or other storage
system
The associated port can be used for a different replication path (either Universal
Replicator or TrueCopy Remote Replication software).
Not recommended for performance reasons
Make sure to change port attributes on both MCU and RCU storage systems
Make sure all cables, switches, extenders and converters are in place
Only ports assigned to SLPR0 may be used for replication


Add DKC

1. Select DKC

2. Select DKC Operation


> Add DKC


Add DKC

Enter the serial number of the remote DKC
Select Controller ID
• VSP = 6
• HUS VM = 19
• USP V = 5
• USP = 4
Enter Path Group ID (see note)
Set the ports you specified as
Initiator and RCU Target
Click Option
Repeat on other storage system

Note:
Port column — Initiator ports
Pair-Port column — RCU Target ports

When you assign Logical Paths, use the port allocations you set as Initiator and RCU
Target. Make sure an Initiator and RCU Target are assigned together. If two
Initiators are grouped together, this will cause an error.
Display Features
DKC S/N: Allows you to enter the five-digit serial number of the remote storage
system
LDKC: Enter “00”
Controller ID: Allows you to enter the controller ID (that is, storage system
family ID) of the remote storage system
Note: The controller ID for a Universal Storage Platform disk storage system is 4.
The controller ID for a Universal Storage Platform V disk storage system is 5.
Path Gr. ID: Allows you to enter the path group ID. Path group IDs are used for
identifying groups of logical paths. One path group can contain up to eight
logical paths.
Note: In older microcode versions, you cannot enter path group IDs. Also, you
cannot clear the Default check box. The number of path groups per one remote
storage system in this case is always 1.
M-R Path: Allows you to specify logical paths from initiator ports on the primary
storage system to RCU Target ports on the remote storage system


Port: Displays a list of initiator ports on the primary storage system. Select an
initiator port from this drop-down list, or type in a Port Number.
Pair-Port: Displays a list of all ports on the remote storage system. Select an RCU
Target port on the remote storage system from this drop-down list, or type in a
port number
Note: When specifying a port, you can use the keyboard to enter the port number.
When you enter the port number, you can abbreviate the port number into two
characters. For example, you can enter 1A instead of CL1-A. You can use uppercase
and lowercase letters.
Option: Opens the DKC Option panel
Cancel: Cancels the settings you made on the Add DKC panel and then closes
the panel


Add DKC

Options: All models


1. Keep the default settings and click Set
2. When the main screen appears, click Apply

The DKC Option panel displays the following:


Minimum Paths option allows you to specify the minimum number of paths
required for each remote storage system connected to the primary storage system
(default = 1)
An error occurs if you enter a larger number than the number of paths already
set on the Add DKC panel, or if the number of paths falls below this number (for
example, due to path failures). When an error occurs, the primary storage system
suspends all affected Universal Replicator (and Universal Replicator for z/OS)
pairs to prevent remote copy operations from adversely affecting performance due
to the inadequate number of paths.
If the primary storage system contains Universal Replicator pairs which contain
critical data for disaster recovery, set the minimum number of paths to one, so that
Universal Replicator operations continue even if there is only one path to a remote
storage system
If you need high performance at the primary storage system, set the minimum
number of paths to two or more, depending on the number of pairs managed by the
primary storage system
The RIO MIH Time setting specifies the RIO MIH timer value, which is the wait
time until data transfer from the primary storage system to the remote storage
system is complete. The RIO MIH time value must be from 10 to 100 seconds. The
default setting is 15 seconds.


Note: RIO MIH is an acronym for remote I/O missing interrupt handler. Not all
operating systems have Missing Interrupt Handler routines. In that case, this
function still works and provides a notification of abnormally long response times
for the RIO. It can be used to help identify possible intermittent link failures.


Add DKC

Apply Add DKC function


• At this point, both Fibre paths should be fully functional and able to
replicate data
You can confirm that the paths are functional on the DKC Operation
panel

[Screenshot callouts: remote system shown with its serial number; Check Status of DKC]

Display Features
Tree: Lists remote storage systems. The following information appears to the right of
the icon:
Controller ID of a remote storage system (The controller ID is a storage
system family ID)
Serial number of the remote storage system
Path group ID
The icon indicates the status of logical paths between the primary storage system
and the remote storage system
All the logical paths are in normal status. A failure occurred at some of the logical
paths.
Controller ID: Displays the controller ID of a remote storage system. The
controller ID is a storage system family ID of a disk storage system. The icon
indicates the status of logical paths between the primary storage system and the
remote storage system:
All the logical paths are in normal status.
Note: The controller ID for a TagmaStore Universal Storage Platform disk storage
system is 4 and a Universal Storage Platform V and VM disk storage system is 5.
S/N: Displays the five-digit serial number of the remote storage system.


Path Gr. ID: Displays the path group ID.


Note: In the current version, the path group ID is set to the default value.
M-R Path: Indicates the channel type of the logical paths between the primary
storage system and the remote storage system. This column always displays
Fibre.
Status:
Normal: No failure occurs to the logical paths
Failed: All the logical paths fail
Warning: Some of the logical paths fail
Num of Path: Indicates the number of logical paths


Add DKC

DKC Status: All Models


• See Troubleshooting
Section in User Guide

[Screenshot callout: Check Path Status]

Path Status: Here are some commonly encountered conditions. There are many
more. Check the appropriate Universal Replicator User Guide for comprehensive
troubleshooting information.
Normal -This path has been successfully established and can be used for
Universal Replicator remote copy activities
Initialization Failed - An error occurred with initialization of connection
between the primary and the remote storage system. Possible causes:
No cable is connected to the primary storage system
No cable is connected to the remote storage system
No cable is connected to the network device that comes between the primary
and the remote storage system
Serial Number Mismatch - The serial number of the storage system connected to
this logical path does not match the serial number specified by the Add DKC panel.
Delete and re-create the DKC.
Invalid Port - The port is not an initiator port
Pair-Port Number Mismatch
The specified port number is incorrect
The port in the remote storage system is physically disconnected from the
primary storage system


Pair-Port Type Mismatch - The port on the remote storage system is not an RCU
Target port
DKC S/N: Indicates the serial number of the remote storage system
Path Gr. ID: Indicates a path group ID
M-R Path: Indicates the type of channel interface between the primary and the
remote storage systems. This column displays Fibre
Minimum Paths: Indicates the minimum possible number of paths between the
primary and the remote storage systems
RIO MIH Time: Indicates the remote I/O missing interrupt handler (RIO MIH)
timer value, which is the wait time until data transfer from the primary storage
system to the remote storage system is complete
DKC Registered: Indicates the date and time when the primary and the remote
storage systems are associated to each other
Last Updated: Indicates the date and time when the last operation on a logical
path to the remote storage system was performed
Refresh the DKC Operations tab after this panel is closed: If you select this
checkbox, information in the DKC Operation panel will be refreshed after you
close the DKC Status panel


Journal groups

Create Journal groups in both MCU and RCU systems


Best practice: Define unique M-JNL and R-JNL group IDs across all participating storage systems (see note)

1. Click the Journal


Operation tab
2. Select a free journal group
3. Right-click on the journal
group
4. Select Edit JNL VOLs

Note: HUR Journal groups can be defined in CCI as MxN Consistency groups.
Journal group IDs in MxN Consistency groups must be unique in the group and the
MxN Consistency group may extend across storage systems.
To avoid possible conflicts, avoid duplicate Journal group IDs in your replication
environment.
However, M-JNL and R-JNL IDs can be identical within a group, as well as within
an MxN Consistency group.
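On VSP and HUS VM, journal registration can also be scripted from CCI. A hedged sketch, assuming an authenticated raidcom session, a free journal ID 0, and an unused LDEV 0x2001 on a suitable parity group (IDs are illustrative):

    raidcom add journal -journal_id 0 -ldev_id 0x2001   # register the LDEV as a journal volume
    raidcom get journal                                  # confirm the journal group configuration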


Journal groups

Create
1. Select the CU number or Parity group
2. Select LDEV
3. Click Add
4. Click Set
5. Repeat this step for
the other storage system

3DC TC-UR is the default 3DC configuration. If a 3DC UR-UR configuration is desired, the UR 3DC setting is required for all participating Journal groups. 2DC Cascade and UR 3DC are not available on HUS VM.
The Edit JNL Volumes panel displays similar information about:


JNL Volumes - Journal Volumes
Free Volumes - Not registered in journal groups
For each category of volumes:
Parity Group: indicates the parity group or the external volume group where a
journal volume belongs.
Note: If the letter ‘E’ is displayed at the beginning of a group, the group is an
external volume group. In the current version, however, the panel does not
display external volumes.
CU:LDEV: Indicates the CU number and the LDEV number of a volume. The CU
number is displayed to the left of the colon (:). The LDEV number is displayed to the
right of the colon.
Note: If a pound sign (#) is displayed at the end of a volume, the volume is an
external volume. In the current version, however, the panel does not display
external volumes. Consider the implications carefully before selecting an external
volume as a Journal Volume.
Capacity: Indicates the capacity of a journal volume in gigabytes
Emulation: Indicates the emulation type of the volume
Operation: Displays one of the following:


Blank: This column is usually blank


Add: Indicates a volume to be added to a journal group
Delete: Indicates a volume to be deleted from a journal group
JNL Volume buttons
Add: To register volumes in a journal group, select the volumes from Free
Volumes and click Add
Delete: To delete volumes from a journal group. Select the volumes from JNL
Volumes and click Delete
Parity Group/CU change
To change the display in the Free Volumes list:
Parity Group: Select to display volumes belonging to a parity group. Specify a
parity group number in the text boxes to the right and then click the Show button
CU: Select to display volumes belonging to a CU. Then select a CU from the list
to the right


Journal groups

Confirm Initial Status


• Note that VSP displays status for all mirror IDs

Note that both Attribute and Status conditions are displayed. In general, attributes
provide high-level indication of the Journal group, while status provides more detail
about the current condition of the Journal group.


Journal Group Configuration Details

VSP

Journal Status


VSP and HUS VM

Journal Options
1. In the Journal Operation panel, select the Mirror ID you want to change
2. Select Journals > Change JNL Option


VSP and HUS VM

Journal Options (continued)


3. Specify Yes or No (NO is recommended) for Inflow Control setting
If Inflow Control is Yes, Data Overflow Watch is active. Specify the
number of seconds for the system to monitor write data to the journal
volume when the journal volume threshold (80%) is reached

Recommended - When Inflow Control is No, Data Overflow Watch is disabled
Use of Cache:
• Use: Journal data will be stored in the RCU cache (see note)
• Not Use: Journal data is not stored in the cache

Note: When there is insufficient space in the cache, journal data will also be stored
into the journal volume. This setting only takes effect on RAID-5 or RAID-6 journal
volumes.

VSP and HUS VM

Changing Mirror Options


1. In the Journal Operation panel, select the Mirror ID you wish to change
2. Select Mirrors > Change Mirror Option


VSP and HUS VM

Changing Mirror Options (continued)


• Unit of Path Watch Time - See note
• Path Watch Time - Specify how long to defer group suspend after path failure
(should not be less than the buffering time provided by the calculated M-JNL
capacity; see the worked example after these notes)
• Forward Path Watch Time - Set Path Watch Time on RCU
• Copy Pace - Low, Medium, High
• Transfer Speed - Set line speed of data transfer in Mb/sec (for takeover)
• Delta Resync Failure - Action taken if Delta Resync operation fails

Notes
Unit of Path Watch Time - Specify the unit: minute, hour, or day.
Path Watch Time - Specify the interval from the time a path becomes blocked to
when the mirror is split (suspended). The interval must be the same for master and
restore journals in the same mirror (see next item).
Note: If you want a mirror to split immediately after a path becomes blocked, ask
Hitachi Data Systems Support Center to set system option mode 448 to ON and set
system option mode 449 to OFF.
Forward Path Watch Time
Yes: The Path Watch Time value will be forwarded to the restore journal
No: The Path Watch Time value will not be forwarded to the restore journal.
No is the default.
Blank: The current setting of Forward Path Watch Time will remain
unchanged
Copy Pace - Specify the pace for initial copy activity per volume. This field
cannot be specified on the remote system. Low is the default.
When specifying Medium, ensure that write I/O is 10 Mb/sec or less per parity
group. If it exceeds 10 Mb/sec, pairs may be suspended


When specifying High, ensure that I/O will not occur. If update I/O occurs,
pairs may be suspended
Transfer Speed - Specify the line speed (in Mb/sec) of data transfer. Specify one
of the following: 256, 100, or 10
Recommended values are as follows:
10 is recommended if the transfer speed is from 10 Mb/sec to 99 Mb/sec
100 is recommended if the transfer speed is from 100 Mb/sec to 255 Mb/sec
256 is recommended if the transfer speed is 256 Mb/sec or more
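As a worked sizing example with hypothetical numbers: if the master journal is
sized at 500GB and the sustained host write rate during a link outage is
50MB/sec, the journal can buffer roughly 500,000MB / 50MB/sec = 10,000 seconds,
or about 2.8 hours, of updates. A Path Watch Time shorter than this suspends the
mirror before the journal capacity is used; a value far longer adds little,
because the journal will fill and force a suspend first.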


HUR (VSP, HUS VM and USP V)

System Option Modes 448 and 449


• If you do not want to suspend immediately after path failure, set system
option mode 448 to OFF and set system option mode 449 to ON. (Always
request Hitachi Data Systems Support Center assistance when changing
system option modes.)

HUR (VSP, HUS VM and USP V)

System Option Modes 448 and 449 (continued)


• Relationship to Path Watch Time Setting
• Best Practice: SOM 449 ON, Use M-JNL to buffer Journal Updates

SOM 449   SOM 448      Path Watch Time   Result
ON        Don’t care   Don’t care        Suspend when JNL fills
OFF       ON           Don’t care        Suspend now
OFF       OFF          Used              Suspend when timer expires


USP V

Changing Options for a Journal Group


• In the Journal Operation panel, do either of the following:
In the tree, right-click a journal group and then select JNL Groups and
Change JNL Option from the pop-up menu
In the upper-right list, right-click the desired journal group and then
select JNL Groups and Change JNL Option from the pop-up menu
• In the Change JNL Option panel, change journal group options and then select Set
• See the Preset list in the Journal Operation panel to check the settings
that you have made


USP V

Changing Options for a Journal Group (continued)


• Inflow Control - Slows response to host by invoking Synchronous Mode
• Data Overflow Watch - Interval between journal volume full and suspension of
groups (0-600 sec). Active only when Inflow Control is set
• Copy Pace - Low, Medium, High
• Path Watch Time - Specify how long to defer group suspend after path failure.
To enable, set SVP Mode 449 off
• Forward Path Watch Time - Set Path Watch Time on RCU
• Use of Cache - Hold journal data in RCU journal cache
• Speed of Line - Set line speed of data transfer in Mb/sec (for takeover)
• Delta Resync Failure - Action taken if Delta Resync operation fails

Journal Options can be changed before pairs are created, or when all pairs are
suspended.
Inflow Control: Allows you to specify whether to restrict inflow of update I/Os
to the journal volume (in other words, whether to delay response to the hosts).
Yes indicates inflow will be restricted. No indicates inflow will not be
restricted.
Note: If Yes is selected and the metadata or the journal data is full, the update
I/Os may stop (Journal groups suspended).
Data Overflow Watch: Allows you to specify the time (in seconds) for
monitoring whether metadata and journal data are full. This value must be
within the range of 0 to 600 seconds.
Note: If Inflow Control is No, Data Overflow Watch does not take effect and
does not display anything.
Copy Pace: Allows you to specify the pace (speed) for an initial copy activity for
one volume. The default is Low.
Low: The speed of the initial copy activity is slower than Medium and High.
Medium: The speed of the initial copy activity is faster than Low and slower
than High.


If you specify Medium, make sure that the amount of update I/Os is 10
Mb/sec or less per one parity group. If it exceeds 10 Mb/sec, data volume
pairs may become split (suspended).
High: The speed of the initial copy activity is faster than Low and Medium. If
you specify High, make sure that update I/Os will not occur. If update I/Os
occur, data volume pairs may become split (suspended).
Note: The remote storage system administrator cannot specify this option.
Unit of Path Watch Time: Allows you to specify the interval from when a path
gets blocked to when a mirror gets split (suspended). This value must be within the
range of 1 to 60 minutes.
Note: Make sure that the same interval is set to both the master and restore journal
groups in the same mirror, unless otherwise required. If the interval differs between
the master and restore journal groups, these journal groups will not be suspended
simultaneously. For example, if the interval for the master journal group is 5
minutes and the interval for the restore journal group is 60 minutes, the master
journal group will be suspended in 5 minutes after a path gets blocked and the
restore journal group will be suspended in 60 minutes after a path gets blocked.
Caution: By default, the factory enables (turns ON) SVP mode 449, disabling the path
watch time option. If you’d like to enable the path watch time option, please disable
mode 449 (turn it OFF).
Note: If you want to split a mirror (suspend) immediately after a path becomes
blocked, please disable SVP modes 448 and 449 (turn OFF).
Path Watch Time: Indicates the time for monitoring blockade of paths to the
remote storage system.
Forward Path Watch Time: Allows you to specify whether to forward the Path
Watch Time value of the master journal group to the restore journal group. If the
Path Watch Time value is forwarded, the two journal groups will have the same
Path Watch Time value.
Yes: The Path Watch Time value will be forwarded to the restore journal
group.
No: The Path Watch Time value will not be forwarded to the restore journal
group. No is the default.
Blank: The current setting of Forward Path Watch Time will remain
unchanged.
Caution: This option cannot be specified in the remote site.
Use of Cache: Allows you to specify whether to store journal data in the restore
journal group into the cache.
Use: Journal data will be stored into the cache.
Note: When there is insufficient space in the cache, journal data will also be
stored into the journal volume.


Not Use: Journal data will not be stored into the cache.
Blank: The current setting of Use of Cache will remain unchanged.
Caution: This setting does not take effect on master journal groups. However, if
the CCI horctakeover command is used to change a master journal group into a
restore journal group, this setting will take effect on the journal group.
Speed of Line: Allows you to specify the line speed of data transfer. The unit is
Mb/sec (megabits per second).
You can specify one of the following: 256, 100, or 10.
Caution: This setting does not take effect on master journal groups. However, if the
CCI horctakeover command is used to change a master journal group into a
restore journal group, this setting will take effect on the journal group.
Delta resync Failure: Allows you to specify the processing that would take place
when delta resync operation cannot be performed.
Entire: All the data in primary data volume will be copied to remote data
volume when delta resync operation cannot be performed. The default is
Entire.
None: No processing will take place when delta resync operation cannot be
performed. Therefore, the remote data volume will not be updated. If Delta
Resync pairs are desired, they will have to be created manually.
Caution: This option cannot be specified in the remote site.


VSP and HUS VM

Journals after pair creation


• Master Journal - primary data volumes
• Restore Journal - secondary data volumes


VSP and HUS VM

Journal Detail Information


1. In the Journal Operation panel navigation tree, select a journal group
2. Right-click the selected journal group and then select JNL Groups and
JNL Detail from the pop-up menu
3. View detailed information about the journal group (see notes)

The Journal Detail panel displays the following:


Journal (LDKC): Indicates the number of a journal group
Attribute: Indicates the attribute of the journal group
Initial: A journal group in initial status. Journal volumes are registered in this
journal group, but no primary data volumes or remote data volumes
Master: A master journal group. Journal volumes and primary data volumes
are registered in this journal group
Restore: A restore journal group. Journal volumes and remote data volumes
are registered in this journal group
Blank: Neither journal volumes nor data volumes are registered in this
journal group
JNL Volumes: Indicates the number of journal volumes registered in the journal
group
JNL Capacity: Indicates the total capacity of all the registered journal volumes
Data Volumes: Indicates the number of data volumes associated with the journal
group
Data Capacity: Indicates the total capacity of all the data volumes
Inflow Control: Indicates whether to restrict inflow of update I/Os to the journal
volume (whether to delay a response to the hosts)


Data Overflow Watch: Indicates the time (in seconds) for monitoring whether
metadata and journal data are full
Use of Cache: Indicates whether to store journal data in the restore journal group
into the cache
JNL Volumes: Displays a list of registered journal volumes:
Parity Group: indicates the parity group where a journal volume belongs.
CU:LDEV: Indicates the CU number and the LDEV number of a journal
volume.
Capacity: Indicates the capacity of a journal volume in gigabytes.
Emulation: Indicates the emulation type of a journal volume.
CLPR: Indicates the number and the name of the CLPR where the journal
volume belongs.
Mirrors: Displays a list of mirrors:
Mirror ID: Indicates a mirror ID. This column is blank if the attribute of the
journal group is neither Master nor Restore.
Attribute: Indicates whether the Mirror is a Master or Restore Mirror
Status: Indicates the status of a journal group (or a mirror) in the primary storage
system.
Initial: A journal group in initial status. Journal volumes are registered in this
journal group, but not primary data volumes or remote data volumes. When you
create a Universal Replicator volume pair, data volumes will be registered in a
journal group. The status of the journal group will change to Active.
Active: Either of the following:
Initial copy is in progress. The primary data volume and the remote data
volume are not synchronized.
Initial copy is finished. The primary data volume and the remote data
volume are synchronized.
Note: If a journal group is in Active status, some of the data volume pairs
in the journal group might be split. If this happens, the word Warning is
displayed. To restore such data volume pairs, use the Pair Operation
panel.
Halt Accept: An operation for splitting the mirror has been started. The
status of the mirror will immediately change to Halting.
Note: Halt Accept can indicate status of restore journal groups, but cannot
indicate status of master journal groups.
• Halting: An operation for splitting or deleting the mirror is in progress. The
primary data volume and the remote data volume are not synchronized. When
you split a mirror, the status will change in the following order: Halting, Halt,


Stopping and finally Stop. When you delete a mirror, the status will change in
the following order: Halting, Halt, Stopping, Stop and finally Initial.
• Halt: An operation for splitting or deleting the mirror is in progress. The primary
data volume and the remote data volume are not synchronized.
• Stopping: An operation for splitting or deleting the mirror is in progress. The
primary data volume and the remote data volume are not synchronized.
• Stop: Either of the following:
• An operation for splitting the mirror is finished.
• The operation for deleting the mirror is in progress. The primary data
volume and the remote data volume are not synchronized.
• Blank: Neither journal volumes nor data volumes are registered in this journal
group.
CTG: Indicates the number of a consistency group to which the mirror belongs.
This column is blank if there is no consistency group.
S/N: Indicates the serial number of the remote storage system. This column is
blank if the attribute of the journal group is neither Master nor Restore.
Pair JNLG: Indicates the number of a journal group in the remote storage system.
This column is blank if the attribute of the journal group is neither Master nor
Restore.
Controller ID: Indicates the controller ID (that is, storage system family ID) of
the remote storage system. This column is blank if the attribute of the journal
group is neither Master nor Restore.
Path Watch Time: Indicates the time for monitoring blockade of paths to the
remote storage system.
Pairs: Indicates the number of data volume pairs in the mirror
Capacity: Indicates the total capacity of the data volumes
Copy Pace: Indicates the setting specified when the pair was created
Transfer Speed: Indicates the setting given in the paircreate parameters
Delta Resync Failure: Indicates the configured setting
Remote Command Device: Indicates its location, if defined


USP V

Journal Group Detail


1. In the Journal Operation panel navigation tree, select a journal group
2. Right-click the selected journal group and then select JNL Group and
JNL Status from the pop-up menu
3. View detailed information about the journal group (see notes)

The JNL Group Detail panel displays the following:


JNL Group: Indicates the number of a journal group.
Attribute: Indicates the attribute of the journal group.
Initial: A journal group in initial status. Journal volumes are registered in this
journal group, but no primary data volumes or remote data volumes.
Master: A master journal group. Journal volumes and primary data volumes
are registered in this journal group.
Restore: A restore journal group. Journal volumes and remote data volumes
are registered in this journal group.
Blank: Neither journal volumes nor data volumes are registered in this
journal group.
JNL Volumes: Indicates the number of journal volumes registered in the journal
group.
JNL Capacity: Indicates the total capacity of all the registered journal volumes.
Data Volumes: Indicates the number of data volumes associated with the journal
group.
Data Capacity: Indicates the total capacity of all the data volumes.
Meta/Data Ratio: Indicates the ratio of metadata area to journal data area.
Currently fixed at 32.


Extent: Indicates the number of extents. Currently 32.


Inflow Control: Indicates whether to restrict inflow of update I/Os to the journal
volume (whether to delay a response to the hosts).
Yes indicates inflow will be restricted. No indicates inflow will not be
restricted.
Note: Inflow Control displays nothing if the journal group is a restore journal
group.
Data Overflow Watch: Indicates the time (in seconds) for monitoring whether
metadata and journal data are full.
Note: Data Overflow Watch displays nothing when one of the following conditions is
satisfied:
Inflow Control is No.
The journal group is a restore journal group.
Use of Cache: Indicates whether to store journal data in the restore journal group
into the cache.
Use: Journal data will be stored in the cache. Requires additional cache in
RCU but has significant performance benefits.
Note: When there is insufficient space in the cache, journal data will also be
stored into the journal volume.
Not Use: Journal data will not be stored into the cache.
Caution: This setting does not take effect on master journal groups. However, if
the CCI horctakeover command is used to change a master journal group into a
restore journal group, this setting will take effect on the journal group.
JNL Volumes: Displays a list of registered journal volumes:
Parity Group: indicates the parity group where a journal volume belongs.
CU:LDEV: Indicates the CU number and the LDEV number of a journal volume.
Capacity: Indicates the capacity of a journal volume in gigabytes.
Emulation: Indicates the emulation type of a journal volume.
CLPR: Indicates the number and the name of the CLPR where the journal
volume belongs.
Mirrors: Displays a list of mirrors:
Mirror ID: Indicates a mirror ID. This column is blank if the attribute of the
journal group is neither Master nor Restore.
Status: Indicates the status of a journal group (or a mirror) in the primary storage
system.
Initial: A journal group in initial status. Journal volumes are registered in this
journal group, but not primary data volumes or remote data volumes. When
you create a Universal Replicator volume pair, data volumes will be
registered in a journal group. The status of the journal group will change to
Active.


Active: Either of the following:


Initial copy is in progress. The primary data volume and the remote data
volume are not synchronized.
Initial copy is finished. The primary data volume and the remote data
volume are synchronized.
Note: If a journal group is in Active status, some of the data volume pairs in the
journal group might be split. If this happens, the word Warning is displayed. To
restore such data volume pairs, use the Pair Operation panel.
• Halt Accept: An operation for splitting the mirror has been started. The status of
the mirror will immediately change to Halting.
Note: Halt Accept can indicate status of restore journal groups, but cannot
indicate status of master journal groups.
• Halting: An operation for splitting or deleting the mirror is in progress. The
primary data volume and the remote data volume are not synchronized. When
you split a mirror, the status will change in the following order: Halting, Halt,
Stopping and finally Stop. When you delete a mirror, the status will change in
the following order: Halting, Halt, Stopping, Stop and finally Initial.
• Halt: An operation for splitting or deleting the mirror is in progress. The primary
data volume and the remote data volume are not synchronized.
• Stopping: An operation for splitting or deleting the mirror is in progress. The
primary data volume and the remote data volume are not synchronized.
• Stop: Either of the following:
• An operation for splitting the mirror is finished.
• The operation for deleting the mirror is in progress. The primary data
volume and the remote data volume are not synchronized.
• Blank: Neither journal volumes nor data volumes are registered in this journal
group.
CTG: Indicates the number of a consistency group to which the mirror belongs.
This column is blank if there is no consistency group.
S/N: Indicates the serial number of the remote storage system. This column is
blank if the attribute of the journal group is neither Master nor Restore.
Pair JNLG: Indicates the number of a journal group in the remote storage system.
This column is blank if the attribute of the journal group is neither Master nor
Restore.
Controller ID: Indicates the controller ID (that is, storage system family ID) of
the remote storage system. This column is blank if the attribute of the journal group
is neither Master nor Restore. Note: The controller ID for a TagmaStore™ USP and
NSC disk storage system is 4.
Path Watch Time: Indicates the time for monitoring blockade of paths to the
remote storage system.


To delete Journal Volumes, the group Attribute must be Initial (no pairs), or
the group Status must be Stop (all pairs suspended) or Hold (Delta Resync
pairs suspended)
• In the Journal Operation panel navigation tree, select a journal group
• Select one or more Journal Volumes

Note: The last Journal Volume in the group cannot be deleted if there are any
pairs in the group, regardless of the pair status


Review

Preparation Checklist

1. Install necessary license keys on both storage systems


2. Map logical devices to ports if necessary
3. Establish physical paths between storage systems
a. Make sure all cables, switches, extenders and converters are in place
4. Configure replication port attributes on both MCU and RCU
storage systems
5. ADD DKC on both MCU and RCU storage systems
• This will create and test the Universal Replicator links between MCU and
RCU
• Note that the ADD DKC operation must be done on both storage systems
so that paths are established in the correct direction
6. Create journal groups on both MCU and RCU storage systems
• Set appropriate Journal Options after creating journal groups
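On VSP and HUS VM, journal creation (step 6) can also be scripted with the CCI
raidcom command instead of Storage Navigator. A minimal sketch, in which the
journal ID and LDEV number are illustrative only:

raidcom add journal -journal_id 0 -ldev_id 0x2000   # register LDEV 0x2000 in journal 0
raidcom get journal                                 # list journals to confirm the result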


Module Review

1. If the minimum path setting is higher than the number of Universal
Replicator paths available, what will occur?
2. When Universal Replicator is configured through Storage
Navigator, what is the first step?
3. What is the minimum number of replication links used for Universal
Replicator?
4. When a Journal group is created, what Journal group attribute is
shown?
5. What LDEV emulation types can be used for Journal Volumes?



5. Storage Navigator for
Operations
Module Objectives

Upon completion of this module you should be able to:


• Prepare storage systems and use Hitachi Universal Replicator commands
to perform operations in Hitachi Storage Navigator
• Describe the features and operation of Usage Monitor
• Invoke Universal Replicator history function
• Describe troubleshooting procedures for path failures and suspended
pairs


Preparation for Operations

Configuration Review
• Universal Replicator license keys installed on candidate primary (MCU)
and remote (RCU) Hitachi enterprise storage systems
• At least two logical fibre paths configured and activated between the
storage systems
• At least one Journal Group present in each MCU and RCU
• A list of candidate LDEVs for P-VOLs and associated S-VOLs showing
current port mapping details:
Port ID
Host Group ID
LUN number

Hitachi enterprise storage systems include:


Virtual Storage Platform
Universal Storage Platform
Universal Storage Platform V
Universal Storage Platform VM
Network Storage Controller


Commands Overview

Command Function/Description
Pairdisplay To view detailed information about a pair of data volumes
Status transition: N/A
Paircreate Creates a Universal Replicator volume pair
Status transition: SMPL > COPY > PAIR
Pairsplit -S Deletes a Universal Replicator volume pair
Status transition: Any status/SMPL > SMPL
Pairsplit -r Splits a pair
Status transition: Any status/SMPL and PSUE > PSUS
Pairresync Resynchronizes a pair
Status transition: PSUS/PSUE > COPY > PAIR
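These panel operations and status transitions correspond to the CCI commands of
the same names, covered in Module 6. For reference only, a hedged command-line
sketch; it assumes a CCI device group named UR1 is already defined in the HORCM
configuration files and that journal ID 0 is used on both sides:

paircreate -g UR1 -f async -vl -jp 0 -js 0   # SMPL > COPY > PAIR
pairdisplay -g UR1 -fcx                      # view status and copy progress
pairsplit -g UR1                             # PAIR > PSUS
pairresync -g UR1                            # PSUS/PSUE > COPY > PAIR
pairsplit -g UR1 -S                          # any status > SMPL (delete)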


Pair Operations
1. Launch Universal Replicator on the primary storage system (MCU)
2. Click on the Pair Operation tab
3. Select the desired port from the tree view
4. Select a candidate P-VOL from the list on the right
5. Right-click and select the desired operation

To begin data replication:


Open Storage Navigator in Modify mode and navigate to the Universal
Replicator interface.
Click on the Pair Operation tab. All mapped LUNs are displayed on the left side
of the window.
Select the Universal Replicator production volumes from the list.
Note: Universal Replicator only supports OPEN-V type volumes.


paircreate

Paircreate panel

• Volume information

• Journal Information

• Detail Information

Note: You can select and right-click more than one volume if you want to create
more than one pair at one time. Choose all the remote volumes from the same
remote storage system.


Volume information
• P-VOL is selected already.
• Select S-VOL by entering:
Port ID
Host Group ID
LUN number
• If S-VOL information is not known, open Storage Navigator on remote
storage system and look at LUN manager.

When the dialog box appears, select the appropriate S-VOL.


S-VOL Values:
Port - S-VOL Port
You can specify the port number with two characters. For instance, you can
abbreviate CL1-A to 1A (not case-sensitive).
GID - Host Group number
LUN - LUN number
If you need a reference, please look at the LUN map listing in the Pair Operation tab to
find the S-VOL you are looking for.
If a logical volume is an external volume, the symbol "#" appears after the LDEV
number. For detailed information about external volumes, please refer to the
Universal Volume Manager User's Guide.
If you selected more than one primary data volume, select the remote data volume
for the primary data volume being displayed. The remote data volumes for the rest
of the primary data volumes are automatically assigned according to the LUN.
For example, if you select three primary data volumes and select LUN01 as the S-
VOL for the first primary data volume, the remote data volumes for the two
other primary data volumes will be LUN02 and LUN03.


Journal Group information


• M-JNL: JNL Group in Master DKC
• Mirror ID: Recommended: Use Mirror ID 1 for UR pairs (see note)
• R-JNL: JNL Group in Remote DKC
• CT Group: Consistency Group Number for this M-JNL/R-JNL association
(see note)
• DKC: Select appropriate remote DKC

Mirror:
M-JNL: Master Journal Group.
Mirror ID: Set to 1 even if defining a 2DC configuration. This will allow 3DC
configuration at a later date. (TC requires Mirror ID 0.)
R-JNL: Restore Journal Group
CT Group:
Assign a Consistency Group number for this particular M-JNL/R-JNL association.
Ensure that the CT Group selected is not in use by ShadowImage or TrueCopy
Async.
If a Universal Replicator volume pair already exists in the Journal Group, the CT
Group setting will have already been made and does not have to be set again. There
will be an asterisk (*) next to the C/T group number. Also, the corresponding pairs of
journal volumes will appear automatically.


Detail information
• Initial Copy:
Entire - all cylinders
None - no cylinders
Delta - create Delta Resync pairs
• Select data copy scheduling Priority
1-256 where 1 is highest

• Error Level:
Mirror (Group if USP V) - All volume pairs in the group suspend on error
LU - Only the affected pair will suspend on error

Set/Apply
• Volume status changes from SMPL to Copy
• When copy completes, refresh screen to see status change to Pair


Detailed Information

Displays detailed pair information


• Sync Rate = Base Journal
• Copy Progress
• Journal groups
• Mirror ID
Can also be displayed through Storage Navigator on the remote storage system

Status: Indicates the status of the pair. If the pair is split (or suspended), the suspend
type is displayed. If the pair is waiting for initial copy, the word Queuing is
displayed.
Sync Rate: If the volume in the primary storage system is a primary data volume,
progress of an initial copy operation is displayed. If the volume in the primary
storage system is a remote data volume, Sync Rate displays information in the
following ways:
If the volume pair is not split, nothing is displayed.
If the volume pair is split and therefore is in PSUS or PSUE status, Sync. usually
displays synchronization rate (that is, concordance rate) between the remote data
volume before it became split and the remote data volume after it became split.
For example, the synchronization rate is 100 percent if the contents of the remote
data volume are the same before and after the volume pair became split.
Note: If a failure in the initial copy operation causes the volume pair to be split, Sync.
displays nothing. If a failure occurs in the initial copy operation, the Detailed
Information dialog box displays the phrase "Initial copy failed."
P-VOL: Indicates the primary data volume. The first line displays the port number,
the GID, the LUN and LDKC:CU:LDEV (the number of LDKC, the number of CU,


and the number of LDEV) of the primary data volume; the GID is a group number
for a host group.
If the primary data volume exists in the primary storage system, the first line also
displays the CLPR number and the CLPR name.
If the primary data volume is an LUSE volume, the LUN is the LDEV number of
the top LDEV (that is, the smallest LDEV number in the group of LDEVs that are
combined as an LUSE volume).
The second line displays the device emulation type. The third line displays
the volume capacity.
S-VOL: Indicates the remote data volume.
The first line displays the port number, the GID, the LUN and LDKC:CU:LDEV
(the number of LDKC, the number of CU, and the number of LDEV) of the
remote data volume; the GID is a group number for a host group.
If the remote data volume exists in the primary storage system, the first line also
displays the CLPR number and the CLPR name.
If the remote data volume is an LUSE volume, the LUN is the LDEV number of
the top LDEV (that is, the smallest LDEV number in the group of LDEVs that are
combined as an LUSE volume).
The second line displays the device emulation type. The third line displays the
volume capacity.
Notes:
If a volume is an external volume, the pound (#) appears after the LDEV number.
For detailed information about external volumes, see the Universal Volume
Manager User's Guide.
If the remote data volume is a volume of Hitachi Universal Storage Platform™
and Hitachi Network Storage Controller, “00” is displayed as the LDKC number.
If there is no remote data volume in the primary storage system, the port ID, GID,
and LUN of the volume that you specified in the remote storage system when
creating the pair will be displayed. If you change or delete the port ID, GID, or
LUN of the volume in the remote storage system, incorrect information will be
displayed. So, unless you have any special circumstances, do not change or
delete the port ID, GID, or LUN that you specified when creating the pair.
CLPR: Indicates the CLPR number and the CLPR name of the volume in the
primary storage system.
M-JNL Group: Indicates the master journal group.
R-JNL Group: Indicates the restore journal group.
Mirror ID: Indicates the mirror ID.
CT Group: Indicates the consistency group number.


S/N (CTRL ID): displays the five-digit serial number and the control ID of the
remote storage system. The control ID is enclosed by parentheses.
Path Type: Indicates the channel type of the path interface between the storage
systems (Fiber).
Note: In the current version, the channel type is always displayed as Fiber.
Initial Copy Priority: Indicates priority (scheduling order) of the initial copy
operations. The value can be within the range of 1 to 256 (disabled when the status
becomes PAIR).
Error Level: Indicates the range used for splitting a pair when a failure occurs. The
default is Group.
Group: If a failure occurs with a pair, all pairs in the consistency group where the
pair belongs will be split.
LU: If a failure occurs with a pair, only the pair will be split.
S-VOL Write: Indicates whether write I/O to the remote data volume is enabled or
disabled (enabled only when the pair is split).
Other Information: Displays the following:
Established Time: Indicates the date and time when the volume pair was created.
Updated Time: Indicates the date and time when the volume pair status was last
updated.
Refresh the Pair Operation window after this window is closed: If this check box is
selected, the Pair Operation window will be updated when the Detailed Information
dialog box closes.
Previous: Displays the pair status information for the previous pair in the list (the
pair in the row above).
Next: Displays the pair status information for the next pair in the list (the pair in the
row below)
Notes:
The Display Filter settings (see Filtering Information in the List in the Pair
Operation Window) can affect how Previous or Next is recognized.
The list displays a maximum of 1,024 rows at once. The Previous and Next
buttons on the Detailed Information dialog box can only be used for the currently
displayed 1,024 rows.


pairdisplay

JNL Status after paircreate


• Note that the Master and Restore Journals correspond to the Mirror ID
specified during paircreate


pairsplit

S-VOL Write:
• Disabled by default
• Enabled: Allows R/W access to the S-VOL after the split
Range: Suspend Mirror (group) or LU (volume)
Suspend Mode:
• Flush - Send pending updates to the S-VOL
• Purge - Convert pending journal update data to changed cylinders and
suspend immediately

Result: PSUS for all pairs

S-VOL Write: Allows you to specify whether to permit hosts to write data to the
remote volume. The default is Disable (that is, do not permit):
Disable: Hosts cannot write data to the remote volume while the pair is split.
Enable: Hosts can write data to the remote volume while the pair is split. This
option is available only when the selected volume is a primary volume.
Range: Allows you to specify the split range. The default is Mirror (all volumes in
the consistency group)
LU: Only the specified pairs will be split.
Note: If you select pairs with PAIR status and other than PAIR status in the same
consistency group, an unexpected suspension may occur during the pair operations
(Pairsplit-r, Pairsplit-S, and Pairresync) under heavy I/O load conditions. You can
estimate whether the I/O load is heavy or not from the rate of journal cache (around
30%), or if you cannot see the journal cache rate, from the frequency of host I/O. The
suspend pair operations should be performed under light I/O load conditions.
Group: All pairs in the same consistency groups as the selected pairs will be split.
Note: If the following two conditions are satisfied and you select Apply, a warning
message will be displayed and processing cannot be continued:


The Preset list contains two or more pairs belonging to the same consistency
group.
The Range column displays Group for at least one of the above pairs.
To be able to continue processing, do either of the following:
Ensure that the Range column displays LU for all pairs in the same
consistency group.
In the Preset list, select all but one pair in the same consistency group,
right-click the selected pairs, and then select Delete.
Suspend Mode: Allows you to specify how to deal with update data that has not
been copied to the remote volume. The default is Flush:
Flush: When you split the pair, update data will be copied to the remote volume.
Purge: When you split the pair, update data will not be copied to the remote
volume. Instead it will convert to changed cylinders in P-VOL differential bitmap.
If you resync the pair later, the changed P-VOL data will be copied to the remote
volume.
Set: Applies the settings to the Preset list in the Pair Operation panel.


pairresync

Select and right-click the split pair that you want to resync
From the pop-up menu, select Pairresync
Range: Specify Mirror or LU
Resync Mode: Default is Normal. May also be Delta or Return to Standby
(see student notes)
Expected result: PAIR status

If any pair was suspended due to an error condition (use the Pairdisplay panel to
view the suspend type), make sure that the error condition has been removed. The
primary storage system will not resume the pairs until the error condition has been
removed.
The Pairresync panel displays the following:
Range: Allows you to specify the restore range.
LU: Only the specified pairs will be restored.
Mirror: Default setting. All pairs in the same consistency groups as the selected
pairs will be restored.
Priority: Allows you to specify the desired priority (1-256) (scheduling order) for the
pair-restoring operations.
Note: If the Range is Mirror, you cannot change the Priority option.
DKC: Indicates the storage system.
Resync Mode: Indicates the processing after recovery of the pairs.
Normal: Split pair whose status is PSUS or PSUE will be recovered.
Delta: Delta resync operation will be performed.


Return to standby: The status of pairs will be recovered from HLDE to HOLD.
Error Level: Allows you to specify the range used for splitting a pair when a failure
occurs:
Mirror: If a failure occurs with a pair, all pairs in the consistency group where
the pair belongs will be split.
LU: If a failure occurs with a pair, only the pair will be split.
Note: If the Range is Mirror, you cannot change the Error Level option.


Change Pair Option

Allows change of Error Level from Mirror to LU (VSP only)


pairsplit

pairsplit -S delete function
Range: Mirror or LU
If LU is selected, only that pair is deleted

The Pairsplit-S panel displays the following:


Range: Allows you to specify the delete range. The default is Mirror.
LU: Only the specified pairs will be deleted.
If you select pairs with PAIR status and other than PAIR status in the same
consistency group, an unexpected suspension may occur during the pair
operations (Pairsplit-r, Pairsplit-S, and Pairresync) under heavy I/O load
conditions. You can estimate whether the I/O load is heavy or not from the
rate of journal cache (around 30%), or if you cannot see the journal cache rate,
from the frequency of host I/O. The pair operations should be performed
under light I/O load conditions.
Group: All pairs in the same consistency groups as the selected pairs will be
deleted.
Note: Do not use this option when deleting pairs at the remote storage system during
disaster recovery.
Note: If the following two conditions are satisfied and you click Set, a warning
message is displayed and processing cannot continue:


The Preset list contains two or more pairs belonging to the same consistency
group
The Range column displays Group for at least one of the above pairs
To be able to continue processing, do either of the following:
Ensure that the Range column displays LU for all pairs in the same
consistency group.
In the Preset list, select all but one pair in the same consistency group,
right-click the selected pairs, and then select Delete.
Delete Mode: Allows you to specify whether to delete the pairs forcibly. When the
status of the pairs to be deleted is SMPL or Deleting, the default setting is Force.
Otherwise, the default setting is Normal.
Force: The pairs will forcibly be deleted even if the primary storage system is
unable to communicate with the remote storage system.
Note: If issued on the S-VOL, Force will delete S-VOLs without regard for (or
changing) P-VOL status. To recover the pair, delete the P-VOL and issue paircreate again.
Normal: The pairs will be deleted only if the primary storage system is able to
change the pair status of the primary and remote volumes to SMPL
Set: Applies the settings to the Preset list in the Pair Operation panel


pairsplit -S Options
• Delete Mode:
Normal
Force - When issued at the remote site on S-VOLs, forces S-VOLs to
simplex without regard for P-VOL status


Usage Monitor

Real-time monitoring operations


Displays up to four Usage Graphs
Usage Monitor data cannot be saved
Use Performance Monitor to collect and archive Universal Replicator
data


Start Usage Monitoring on VSP and HUS VM
1. On the Storage Navigator browser screen, navigate to Performance Monitor
2. Scroll right and select Edit Monitoring Switch
3. Set Monitoring Switch to Enable

If you set 1 minute for Gathering Interval, the sampling data will be held one day. If
you set 15 minutes for Gathering Interval, the sampling data will be held 15 days.
When Gathering Interval is changed, the data obtained before changing is deleted.


Start Usage Monitoring on USP V
1. Navigate to Performance Manager > Monitoring Options
2. Select Enable in the Monitoring Switch Current Status box
3. Set the desired Gathering Interval
4. Apply the change
USP - Enable and apply Usage Monitor directly in the URz Monitor page

1. Select the Usage Monitor tab
2. Right-click on one of the quadrants
3. Select Display Item from the dialog


Display Item panel


1. Select the appropriate item in the Select Volume section
2. In the Monitor Data box, select the appropriate items. You must select at
least one
3. Useful items to monitor:
Initial Copy Average Transfer Rate
M and R JNL Data Utilization Rate
4. Click Set to close the Display Item panel
The Usage Operations panel now displays a graph showing the selected I/O
statistics data for the selected LUs

Sample graphs


History

LIFO file - Export function saves operation history into a .tgz file
• Use File > Refresh to start logging history data


Options

Options Tab: Allows changing the default maximum number of Base Journal
operations (initial copies) that can be started at one time


Troubleshooting

The Universal Replicator User Guide, Section 10, “Troubleshooting,” has
tables for:
• General Troubleshooting
• Troubleshooting Logical Paths (replication links)
• Troubleshooting suspended pairs
By Suspension Type
By Hardware Problem
• Command Control Interface Error Codes
• Service Information Messages pertinent to replication


Example of information in User Guide

Suspend Type: PSUE, by RCU
Applies To: Primary data volume
Description: The primary storage system suspended a pair because the primary
storage system detected an error condition at the remote storage system. The
suspend type for the remote data volume is by MCU.
Corrective Action: Clear the error condition at the remote storage system or
remote data volume. If you need to access the remote data volume, delete the
pair from the primary storage system. If data in the remote data volume has been
changed, delete the pair from the primary storage system and then recreate the
pair by using the Paircreate panel. If data in the remote data volume has not
been changed, restore the pair from the primary storage system.

Suspend Type: PSUE, S-VOL Failure
Applies To: Primary data volume
Description: The primary storage system detected an error during communication
with the remote storage system or detected an I/O error during update copy. In
this case, the suspend type for the remote data volume is usually by MCU.
Corrective Action: Check the path status on the DKC Status panel (see Table
10-2). Clear any error conditions at the remote storage system and the remote
data volume. If you need to access the remote data volume, delete the pair from
the remote storage system. If data in the remote data volume has been changed,
delete the pair from the primary storage system and then re-create the pair by
using the Paircreate panel. If data in the remote data volume has not been
changed, restore the pair from the primary storage system.


Commands and Status Review

Command Function/Description
Pairdisplay To view detailed information about a pair of data volumes
Status transition: None
Paircreate Creates a Universal Replicator volume pair
Status transition: SMPL > COPY > PAIR
Pairsplit -S Deletes a Universal Replicator volume pair
Status transition: Any status > SMPL
Pairsplit -r Splits a pair
Status transition: Any status (including PSUE) > PSUS
Pairresync Resynchronizes a pair
Status transition: PSUS/PSUE > COPY > PAIR


Module Review

1. List the Universal Replicator commands provided by Storage Navigator.
2. With a pair split, what is the default Read/Write attribute set on an
S-VOL?
3. A normal pair split is performed using the pairsplit -S option.
True or False?
4. Describe the steps to start the Usage Monitor.
5. A volume pair can be resynchronized if the storage system has
reported an error and the error has not been fixed.
True or False? Why?



6. Command Control
Interface
Configuration and
Operations
Module Objectives

Upon completion of this module, you should be able to:


• Describe the history of Command Control Interface (CCI), sometimes
called RAID Manager
• Identify CCI software components
• Describe the checklist used to configure CCI
• Create a physical configuration
• Create and revise the configuration files
• Describe the command set
• Identify CCI Microsoft Windows® subcommands
• Discuss configuration setting commands
• Perform troubleshooting and corrective actions for operations


Overview

Origins
• Hitachi Open Remote Copy (HORC) is the original name for Hitachi
TrueCopy Remote Replication
• Hitachi Open Remote Copy Manager (HORCM) is the original name for
the management software now called Command Control Interface (CCI)
• Hitachi Multi-RAID Coupling Facility (HMRCF) is the original name for
Hitachi ShadowImage In-System Replication
• Both products were managed with HORCM, now CCI
• The acronyms HORCM and MRCF are still used internally by CCI


CCI hardware components


• Command Device
Communication path for CCI
Accepts commands from CCI for execution by the replication
• Remote Command Device
Externally attached command device to allow CCI to remotely manage
a replication
• Replication Licenses must be installed on all participating storage
systems
All Hitachi storage systems support CCI software operations.
• CCI software versions relate to storage microcode level
• Check documentation for current versions

CCI documentation for all platforms


• MK-90RD7008 — Hitachi Command Control Interface Installation and
Configuration Guide
• MK-90RD7009 — Hitachi Command Control Interface Command
Reference
• MK-90RD7010 — Hitachi Command Control Interface User and
Reference Guide


CCI software
• Installs on SAN-attached or network-attached servers
• Communicates with the storage systems using FC paths (SAN-attached)
or TCP/IP (network-attached) to the command devices
• Requires no communication with the devices containing data to be
replicated

CCI internals
• HORCM Instances
Primary instance manages P-VOLs
Secondary instance manages S-VOLs
• HORCM Configuration Files - two required (minimum). Each file:
• Defines the location of the local CCI server and the service name of the
local instance
• Defines the location of the Command Device
• Defines the devices used by the replication (either P-VOLs or S-VOLs)
• Defines the location of the remote server running the remote instance,
along with its service name


Two Servers - Two HORCM Instances

WAN

HORCM HORCM
HORCM0.conf HORCM1.conf
Commands Commands

Server HORCM Server HORCM


Software & Instance0 Software & Instance1
Application Application

Command Command
Device Device
Primary Secondary
Volume Volume
VSP, USP V, USP VSP, USP V, USP

Shown here are the four components mentioned on the previous slide. The
relationships between these components include:
There are always at least two instances, a sending instance and a receiving
instance
Instance 0 is the sending instance and Instance 1 is the receiving instance.
Each instance relies on a configuration file in order to communicate with the
other instance, as well as to communicate with the system.
The configuration file defines the volumes that will be paired up
If you have two instances, you will have two corresponding configuration files.
When a command is issued, usually via a script, the instance sends the command to
the CMD Device. The system then actuates the command.


Checklist

Install CCI software


Map P-VOLs and S-VOLs as LUNs to desired host ports
• Note that CCI only needs access to Command Device, not to P-VOLs and
S-VOLs
Create and map CMD Device to port accessible to CCI
Edit and save (or create new) HORCM configuration files
• Minimum two files will be required
Edit the Services File
Set Environment Variables (optional)
Start Instances
Issue Commands
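A hedged example of the last four checklist steps on a UNIX host; the service
names, UDP port numbers, and instance numbers follow the class conventions and
are not requirements:

# /etc/services - one UDP port per HORCM instance
horcm0    50000/udp
horcm1    50001/udp

# Start both instances (Windows: horcmstart 0 1)
horcmstart.sh 0 1

# Optionally set the default instance, then issue commands
export HORCMINST=0
pairdisplay -g UR1 -fcx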


Configuration

Install CCI
• Microsoft Windows Server
Download latest version

Copy to appropriate folder

Run Setup.exe

• UNIX
ftp files in binary mode with mget
mkdir /HORCM
Run the ./RMinstsh installation script
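An end-to-end sketch of the UNIX installation steps above; the archive name and
transfer method are placeholders that vary by release and platform:

mkdir /HORCM
cd /HORCM
# transfer the distribution files in binary mode (ftp mget), then:
tar xf RMHORC          # extract the CCI files (archive name varies by release)
./RMinstsh             # run the installation script
raidqry -h             # verify the installed CCI version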


All Hitachi replications require LUNs (mapped LDEVs)


If not available:
• Create candidate P-VOLs and S-VOLs
• Use procedures appropriate to the model of storage system, map desired
LDEVs to the appropriate ports
• Record Port, Host Group, LUN numbers
• Release any ShadowImage RESERVED volumes
Reserved Volumes cannot be used by CCI

CCI depends on S-VOLs being defined ahead of time using LUN Manager. You
cannot define LUNs through CCI.
Reserved Volumes cannot be used by CCI. Remember that Storage Navigator
allows setting of ShadowImage S-VOL reserves. However, reserving S-VOLs blocks
access to the LDEV by the host. Since CCI resides on the host, the S-VOLs that were
reserved through the GUI will be blocked.
Make sure S-VOL LUNs are mapped to the appropriate ports. An example backup
scenario might be a P-VOL is mapped to production server ports and S-VOL is
mapped to backup server ports.


Command Device

Create a Command Device


• Use appropriate procedure for your storage platform to:
Create minimum size volume (46MB)
Map the device to the correct ports
Mark it as the Command Device
• Right-click and select CMD DEV

Note: The Command Device is a raw device. No file systems or partitions are
created on the device.

46MB is the smallest OPEN-V LDEV that can be created. The volume designated as the
command device is used only by the primary storage system and is a raw device.
Multiple command devices in one storage system are allowed.
Procedure:
Use Storage Navigator to create the smallest possible OPEN-V volume.
Set Command Device attribute ON.
Map the device to the host port(s).
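One hedged way to verify from the host that the command device is visible is
the inqraid command (the device path is an example only):

ls /dev/rdsk/* | inqraid -CLI
(a command device reports the OPEN-V-CM product string in the output)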


VSP and HUS VM Command Device

Set Command Device using Storage Navigator browser window

Step 1

Step 3

Step 2


VSP and HUS VM Command Device

Set VSP Command Device using Storage Navigator browser window


• If IP Command Device will be used, enable User Authentication

Step 1

Step 2

Step 3

Remote Command Device

CCI Instance can operate replication in remote storage system


through the remote command device mapped in the local storage
system
Useful in 3DC Delta Resync and 2DC Cascade environments
Usage will be discussed in the Delta Resync Module


Remote Command Device

VSP Remote Command Device

HORCM.CONF File Overview


• Provides a definition of hosts, command devices, groups, and volumes to
the Command Control Interface instance
• After CCI installation, a sample text file is located at
/HORCM/etc/horcm.conf
• Edit the sample file to remove all commented lines except for the lines
associated with these parameters
• Create two files named horcm0.conf and horcm1.conf and copy to:
/etc directory (UNIX)
/HORCM/etc and /WINNT (NT) or /WINDOWS (XP)
• Assigned numbers are arbitrary. Class examples are:
horcm0.conf controls P-VOLs
horcm1.conf controls S-VOLs

Ensure that there are no hidden characters or spaces at the end of lines.
When saving horcm.conf files in Microsoft Windows, do not save as .txt files.


Create HORCM configuration files


• Required parameters:
HORCM_MON
HORCM_CMD
HORCM_LDEV (or HORCM_DEV if desired)
HORCM_INST
HORCM_CTQM (for MxN Consistency Group) see note
• Minimum of two HORCM files are required
• Best Practices:
Instance numbers defined in file name horcmN.conf (for example,
horcm0.conf)
Define P-VOLs in even instances and S-VOLs in odd instances.
Make liberal use of comments

MxN Consistency Group will be covered in a later module.


Sample HORCM configuration file (CCI V01-17-03 and later)

HORCM_MON - Identifies the primary host
#IP            service name (or number)   poll   timeout
172.16.0.44    horcm0 (50000)             1000   3000

HORCM_CMD - Identifies the command device
#Windows          UNIX
\\.\CMD-30095     \\.\CMD-30095:/dev/rdsk/

HORCM_LDEV* - Identifies the volumes
#dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
UR1          dev1       30095     00:00            1
UR1          dev2       30095     00:01            1

HORCM_INST - Identifies the remote host
#dev_group   IP            service name (or number)
UR1          172.16.0.45   horcm1 (50001)

New parameters have been added to support Open UR MxN Consistency


Group. See MxN Consistency Group Module for details.

*CCI Version 01-17-03 introduced the HORCM_LDEV parameter. Absolute LUN


numbers are no longer required, which eases the work required to produce valid
HORCM configuration files.


Windows Command Device


• Old Method: \\.\PHYSICALDRIVEX (can change if Disk Manager
changes the disk order)
• New Method: Not susceptible to re-ordering of Windows disk
\\.\CMD-30095
• Use the first Command Device found on
Subsystem Serial Number 30095
\\.\CMD-30095-250
• Use LDEV 250 (decimal)
\\.\CMD-30095-250-CL1-A-1
• Port-Host Group

Command Device - Solaris

Solaris Command Device Example: Issue format command. Find


disk with the CM string (For example: c2t0d1 <HITACHI - Open-V -
CVS - CM>)
Defining the Command Device in the horcm.conf files
• Old Method: Raw disk pathname - /dev/rdsk/c2t0d1s2
• New methods:
\\.\CMD-30095:/dev/rdsk/
- Use first available command device

\\.\CMD-30095-250:/dev/rdsk/
- Use LDEV#250

\\.\CMD-30095-250-CL1-A-1:/dev/rdsk/
- Full specification for S/N 30095, LDEV 250 (decimal) connected to Port
CL1-A, host group 1


Command Device - Other UNIX Platforms

HP-UX: /dev/rdsk/* or /dev/rdisk/disk* (See note)


AIX: /dev/rhdisk*
Linux: /dev/sd*
z/Linux: /dev/sd*
MPE/iX: /dev/...
Tru64: /dev/rrz*c or /dev/rdisk/dsk*c or /dev/cport/scp*
IRIX64: /dev/rdsk/*vol or /dev/rdsk/node_wwn/*vol/*

Note: Substitute directory containing device file name for * in the above
statements

Out of Band Command Device

Uses the SVP IP Address (VSP and HUS VM only)


CMD User Authentication ON must be set in Storage Navigator
• Example:
\\.\IPCMD-10.17.104.86-31001
Will require the SVP login and password when the instance starts

Format: \\.\IPCMD-<SVP IP address>-<31001>[-Unit ID]


• SVP IP address: Same as used for Storage Navigator (IPv6 supported)
• 31001: The default SVP UDP communication port number. This value is
fixed
• Optional [-Unit ID]: Use to specify the unit ID of the storage system when
using the multiple unit connection configuration. This can be omitted.


For CCI version 01-16-00 and earlier, find absolute LUN numbers
• Enter correct information in HORCM_MON and HORCM_CMD
parameters.
• Leave HORCM_DEV and HORCM_INST parameters commented out.
• Start the instance (example: # horcmstart 0).
• Execute raidscan -p CL1-A -fx.

HORCM_MON
#IP service poll timeout
172.16.0.44 horcm0 1000 3000

HORCM_CMD
\\.\PHYSICALDRIVE8 (must use old CMD DEV spec)

#HORCM_DEV
#dev_group dev_name port TID LUN MU#

#HORCM_INST
#dev_group IP service

Absolute LUN Numbers

Group and Device Names

Naming the groups and devices

• Each DEV_GROUP is made up of one or more DEV_NAMEs
• Try to be simple yet descriptive
• Names can be up to 31 characters
• Device names must be unique within each file
Match names across both files to ensure that:
• Device names match for both the P-VOLs and the S-VOLs
• Device group names match for both P-VOLs and S-VOLs


Restrictions

Configuration File Considerations


• CCI reads the config files at startup. Anytime you make a change or
update to the configuration file, stop the HORCM instance, then restart it.
• Watch out for extra spaces and tabs in the file.
• Group names and device names are:
Case sensitive
Must be unique within the file
Must match in both files
• Make sure the HORCM configuration files have been saved in the correct
directory.
UNIX: /etc
Windows: Highest level Windows folder

Mirror IDs

[Diagram: Mirror Unit (bitmap) number assignments. An HUR P-VOL and the HUR
S-VOL/SI P-VOL each carry their own set of mirror unit numbers; ShadowImage
L1 S-VOLs and a ShadowImage L2 S-VOL consume additional mirror unit numbers.]
Bitmap numbers are also known as Mirror Unit Numbers.


For CCI V01-17-03 and later, use HORCM_LDEV parameter to


create HORCM files for both P-VOLs and S-VOLs

Best Practice: Universal Replicator MU# must be 1 in both configuration files

horcm0.conf (P-VOLs):

HORCM_MON
#ip_address    service   poll(10ms)   timeout(10ms)
10.17.105.4    horcm0    1000         3000

HORCM_CMD
#dev_name
\\.\CMD-10145

HORCM_LDEV
#dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
ur1          pair1      10145     00:00            1
ur1          pair2      10145     00:01            1

HORCM_INST
#dev_group   ip_address    service
ur1          10.17.105.5   horcm1

horcm1.conf (S-VOLs):

HORCM_MON
#ip_address    service   poll(10ms)   timeout(10ms)
10.17.105.5    horcm1    1000         3000

HORCM_CMD
#dev_name
\\.\CMD-10156

HORCM_LDEV
#dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
ur1          pair1      10156     02:00            1
ur1          pair2      10156     02:01            1

HORCM_INST
#dev_group   ip_address    service
ur1          10.17.105.4   horcm0

When naming these files, they should be called horcm + decimal number + .conf. This
is necessary for CCI to detect which configuration files to use during the process.
CCI needs the specified number to start the instance and recognize which
configuration file to use with that instance.
Mirror Unit Number (MU#) for 3DC TrueCopy/Universal Replicator
environment:
Defines the mirror unit number (0 - 3) of one of four possible TrueCopy/Universal
Replicator bitmap associations for an LDEV in a Cascaded or Multi-remote 3DC
environment. If this number is omitted, it is assumed to be zero (0).
The MU# for TrueCopy must specify either blank or 0
The MU# for Universal Replicator must specify h1 in both horcm files
Best practice: Always use Mirror ID h1 for UR. This will allow a 2DC UR
replication to be converted easily to 3DC at a later date


Services File

Edit the Services File


• UNIX: /etc/services
• Microsoft Windows: C:\Windows\system32\drivers\etc\services
• This step can be omitted if service numbers are coded in the conf files

For Microsoft Windows: Make sure to


include a CR/LF at the end of the last line.

Edit Services file to add the horcm instances.
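For example, using the class port numbers from the sample configuration files,
the added entries could look like this (port assignments are site specific):

horcm0   50000/udp   # CCI instance 0
horcm1   50001/udp   # CCI instance 1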


mkconf

CCI Configuration Files: mkconf usage


• Example: mkconf.sh -g ORA -i 0 -m 1 will use as input the results of the raidscan
command and create a horcm0.conf file for the dev_group ORA with Mirror Unit
numbers set to 1

HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
127.0.0.1 52323 1000 3000
HORCM_CMD
#dev_name
#UnitID 0 (Serial# 30095)
/dev/rdsk/c23t3d0
HORCM_DEV
#dev_group dev_name port# TargetID LU# MU#

# /dev/rdsk/c23t0d0 SER = 30095 LDEV = 192 [ FIBRE FCTBL = 4 ]


ORA ORA_000 CL2-B 0 0 1
HORCM_INST
#dev_group ip_address service
ORA 127.0.0.1 52323

Actual output of mkconf usage on Microsoft Windows server is below. Note that
the procedure differs from the UNIX procedure shown in a previous slide. Some
lines in the Device File were deleted for clarity.
C:\HORCM\etc>
C:\HORCM\etc>raidscan -p cl1-a-0 -CLI -fx
PORT# /ALPA/C TID# LU# Seq# Num LDEV# P/S Status Fence P-Seq# P-
LDEV#
CL1-A-0 ef 5 1 0 20068 1 40 SMPL - - - -
CL1-A-0 ef 5 1 1 20068 1 41 SMPL - - - -
CL1-A-0 ef 5 1 2 20068 1 42 SMPL - - - -
CL1-A-0 ef 5 1 3 20068 1 43 SMPL - - - -
CL1-A-0 ef 5 1 4 20068 1 44 SMPL - - - -
CL1-A-0 ef 5 1 5 20068 1 45 SMPL - - - -
CL1-A-0 ef 5 1 6 20068 1 46 SMPL - - - -
CL1-A-0 ef 5 1 7 20068 1 47 SMPL - - - -
CL1-A-0 ef 5 1 8 20068 1 48 SMPL - - - -
CL1-A-0 ef 5 1 9 20068 1 49 SMPL - - - -


CL1-A-0 ef 5 1 10 20068 1 4a SMPL - - - -


CL1-A-0 ef 5 1 11 20068 1 4b SMPL - - - -
CL1-A-0 ef 5 1 12 20068 1 700 SMPL - - - -
C:\HORCM\etc>raidscan -p cl1-a-0 -CLI -fx >> temp
# Note that raidscan output is sent to a temp file. Next we pipe temp to the
mkconf command as shown below:
C:\HORCM\etc>type temp | c:\horcm\tool\mkconf.exe -g test -i 5 -m 0
starting HORCM inst 5
HORCM inst 5 starts successfully.
HORCM Shutdown inst 5 !!!
A CONFIG file was successfully completed.
starting HORCM inst 5
HORCM inst 5 starts successfully.
DEVICE_FILE Group PairVol PORT TARG LUN M SERIAL LDEV
Harddisk1 test test_000 CL1-A 1 00 20068 64
Harddisk5 test test_001 CL1-A 1 40 20068 68
Harddisk1 test test_000 CL1-A 1 00 20068 64
Harddisk1 test test_000 CL1-A 1 00 20068 64
Harddisk1 test test_000 CL1-A 1 00 20068 64
Harddisk5 test test_001 CL1-A 1 40 20068 68
Harddisk12 test test_011 CL1-A 1 11 0 20068 75
Harddisk1 test test_000 CL1-A 1 00 20068 64
HORCM Shutdown inst 5 !!!
Please check
'C:\HORCM\etc\horcm5.conf','C:\HORCM\etc\log5\curlog\horcm_*_log.txt',
and modify 'ip_address and service'.
C:\HORCM\etc>
The mkconf command creates the following file. Note that the loopback IP
address is used, along with the default service number instead of a configuration
file name. Using the service number eliminates the need to edit the etc/services file.
Some commented entries were removed for clarity:
# Created by mkconf on Thu Oct 25 13:02:44
HORCM_MON


#ip_address service poll(10ms) timeout(10ms)


127.0.0.1 52323 1000 3000
HORCM_CMD
#dev_name
#UnitID 0 (Serial# 20068)
\\.\PhysicalDrive13
HORCM_DEV

#dev_group dev_name port# TargetID LU# MU#


# Harddisk1 SER = 20068 LDEV = 64 [ FIBRE FCTBL = 2 ]
test test_000 CL1-A 1 0 0
# Harddisk5 SER = 20068 LDEV = 68 [ FIBRE FCTBL = 2 ]
test test_001 CL1-A 1 4 0
# Harddisk2 SER = 20068 LDEV = 65 [ FIBRE FCTBL = 2 ]
test test_002 CL1-A 1 1 0
# Harddisk3 SER = 20068 LDEV = 66 [ FIBRE FCTBL = 2 ]
test test_003 CL1-A 1 2 0
# Harddisk4 SER = 20068 LDEV = 67 [ FIBRE FCTBL = 2 ]
test test_004 CL1-A 1 3 0
# Harddisk6 SER = 20068 LDEV = 69 [ FIBRE FCTBL = 2 ]
test test_005 CL1-A 1 5 0
# Harddisk7 SER = 20068 LDEV = 70 [ FIBRE FCTBL = 2 ]
test test_006 CL1-A 1 6 0
# Harddisk8 SER = 20068 LDEV = 71 [ FIBRE FCTBL = 2 ]
test test_007 CL1-A 1 7 0
# Harddisk9 SER = 20068 LDEV = 72 [ FIBRE FCTBL = 2 ]
test test_008 CL1-A 1 8 0

# Harddisk10 SER = 20068 LDEV = 73 [ FIBRE FCTBL = 2 ]


test test_009 CL1-A 1 9 0
# Harddisk11 SER = 20068 LDEV = 74 [ FIBRE FCTBL = 2 ]
test test_010 CL1-A 1 10 0
# Harddisk12 SER = 20068 LDEV = 75 [ FIBRE FCTBL = 2 ]
test test_011 CL1-A 1 11 0

# ERROR [LDEV LINK] Harddisk1 SER = 20068 LDEV = 64 [ FIBRE


FCTBL = 2 ]
HORCM_INST
#dev_group ip_address service
test 127.0.0.1 52323


Command Device Security

Enhanced command device defined using the Edit Command Device


function (or SNMP).
Defined for each command device created
CCI recognizes the attribute on startup.
CCI recognizes only volumes permitted by the facility
Called Command Device Protection on USP V

Start Instances
• Microsoft Windows
horcmstart <instance number>
• UNIX
horcmstart.sh <instance number>
Commands
• pairdisplay - Important! Confirm copy direction
• paircreate
• pairsplit
• pairresync


Environment parameters can now be set in command line.

• -I[instance#] - specifies Instance number

Instance 0 example: pairdisplay -g ur1 -I0

• Remote Replication
• -IH[instance#] or -ITC[instance#] - specifies the command as a
Universal Replicator or TrueCopy Remote Replication operation,
Instance 0 example: pairdisplay -g ur1 -fxce -IH0

• In-system Replication
• -IM[instance#] or -ISI[instance#] - specifies the command as
ShadowImage, CoW, or Thin Image command
Instance 10 example: pairdisplay -g sigrp -IM10
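Alternatively, a default instance can be set once per shell with the HORCMINST
environment variable. A sketch for a UNIX shell:

export HORCMINST=0
pairdisplay -g ur1 -fxce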


Commands

Universal Replicator enhanced

pairdisplay displays pair information


• Enhanced for MxN Consistency Group
paircreate creates Universal Replicator pairs
• Enhanced for MxN Consistency Group
pairsplit splits a Universal Replicator pair
pairresync resynchronizes a Universal Replicator pair
pairevtwait waits for an event transition
pairmon monitors and reports on pair events
raidscan displays port configuration
raidqry displays CCI host configuration
raidar displays port I/O activity


Universal Replicator enhanced

raidvchkscan checks volume information


pairsyncwait waits for split pairs to synchronize
inqraid displays relation between special files and PDEVs
paircurchk checks the currency of remote volumes by evaluating
the data consistency based on pair status and fence level
horctakeoff converts 3DC Multi-remote to 3DC Cascade
horctakeover reverses MCU/RCU relationship (failover)


pairdisplay - specific options for HUR


-fxce - Displays LDEVs in hex, % synchronized, journal and external
information
#pairdisplay -g ur1 -fxce -IH0

Seq# - Serial Number of the storage system
LDEV - CU and DEV numbers specified in the horcm files
% - Percent synchronized
CTG - CT group ID for UR pair (MxN Consistency Group if defined)
JID - UR journal group ID
AP - Number of active paths for UR links
External volume information

CCI supports the -fxce option with the pairdisplay command so that you can discover the
external LUNs on the pair volume. This will show you additional information on the pair
volumes.
Output of the pairdisplay command:
Group = group name (dev_group) as described in the configuration definition file
Pair Vol(L/R) = paired volume name (dev_name) as described in the configuration
definition file. (L) = local host; (R) = remote host
(P,T#,L#) (TrueCopy/Universal Replicator) = port, TID, and LUN as described in the
configuration definition file.
Seq# = serial number of the RAID storage system
LDEV# = logical device number
P/S = P-VOL / S-VOL attribute
Status = status of the paired volume
Fence (TrueCopy/Universal Replicator) = fence level
% = copy operation completion, or percent pair synchronization
P-LDEV# = Partner LDEV#
M = Write status of S-VOL, W = read/write, - (dash) = read only
CTG = Consistency Group ID
JID = Journal ID
AP = Number of active remote replication links


pairdisplay - other options


-v jnl - Displays journal information such as Q-MARKER (sequence number) and Q-COUNT
(remaining sequence numbers)

-v jnlt - Displays journal information plus timeout value settings


Equivalent to same parameters in raidvchkscan command, discussed later.

-v ctg - displays Inflow Control setting, Timer settings

pairdisplay (within a shell script)

pairevtwait -g $datagroup -s psus -t 600
if [ $? -ne 0 ]
then
    pairdisplay -g $datagroup -fxc -IH0
    exit 1
else
    echo "$W Pair '$datagroup' already Split"
fi


paircreate UR Example

#paircreate -g ur1 -vl -f async 00 -jp 00 -js 01 -IH0

-g CCI grp_name
-vl Establishes Local to Remote (normal) copy direction (required)
-f async [CTGID] — Specify Fence Level async for Universal Replicator. CTGID
defaults to next sequential ID if not specified.
-jp <id> — M-JNL group ID
-js <id> — R-JNL group ID
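
A hedged follow-up to confirm the result of the paircreate, using commands
covered in this module:

pairevtwait -g ur1 -s pair -t 7200 -IH0   (wait for the initial copy to reach PAIR)
pairdisplay -g ur1 -fxce -IH0             (confirm copy direction and status)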

pairsplit
• -P Purge remaining Journal data to bitmaps without updating S-VOL
• -R Force S-VOL to simplex (SMPL) mode. Issued at secondary site
• -RS Set S-VOLs to SSWS status. Allows full host access
• -RB Change S-VOL from SSWS to SSUS (normal split status)
• -l Split local volume only. Issued at primary site


pairsplit — examples
• # pairsplit -g ur1 -IH0
• # pairsplit -g ur1 -d pair1 -IH0
• # pairsplit -g ur1 -rw -IH0
• # pairsplit -g ur1 -S -IH0
• # pairsplit -g ur1 -RS -IH1 (Executed at secondary site to force S-VOLs
to SSWS status)

• Example:
pairsplit -g ur1 -IH0
pairdisplay -g ur1 -fxce -IH0

In the event of a temporary primary site outage, pairsplit -RS can be executed at the
Local or Remote Site to force S-VOLs into SSWS status so that host applications can use
the S-VOLs.


pairsplit (within a shell script)

for datagroup in $VGNAMES


do
pairsplit -g $datagroup -IH0
if [ $? -ne 0 ]
then
echo "Error Pairsplit command failed for '$datagroup' \n"
exit 1
else
echo "Split command '$datagroup' successful"
fi
pairevtwait -g $datagroup -s psus -t 7200
if [ $? -ne 0 ]
then
echo "Error Pairsplit not complete for '$datagroup' \n"
exit 1
else
echo "Split '$datagroup' successful"
fi
done

pairresync
• Examples
# pairresync -g ur1 -IH0
# pairresync -g ur1 -d pair1 -IH0

• Pairdisplay example after pairresync -g ur1 -IH0


Pairresync - Manual takeover operation


• Parameters (Note: the horctakeover command is recommended):
-swaps— Executed from the secondary site to reverse replication
direction. (Swaps P-VOL and S-VOL)

Result: the Primary Site now holds S-VOLs; the Secondary Site holds P-VOLs

-swapp — Executed from the new primary site to set the original
primary volume back to P-VOL (reverses replication direction back to
the original direction)

Result: the Primary Site again holds P-VOLs; the Secondary Site holds S-VOLs

-l — Sets local volume (must be P-VOL) back to PAIR status


regardless of S-VOL status

Note: pairresync -swaps is the second command of a failover operation at a remote


site. It attempts to change S-VOLs in SSWS status to P-VOLs. If successful,
MCU/RCU relationship is reversed.


pairresync (within a shell script)

for datagroup in $VGNAMES
do
    pairresync -g $datagroup -IH6
    if [ $? -ne 0 ]
    then
        echo "Error Pairresync command failed for '$datagroup' \n"
        exit 1
    else
        echo "Start of resynchronization of '$datagroup' successful"
    fi
    echo "Wait for end of resynchronization"
    pairevtwait -g $datagroup -s pair -t 7200
    if [ $? -ne 0 ]
    then
        echo "Error Resync not complete \n"
        exit 1
    fi
    pairdisplay -g $datagroup -fcx
done


raidvchkscan parameters
• -v jnl

• Q-Marker: Displays the sequence # of the journal group ID, called the
Q-marker
For P-JNL, the Q-Marker shows the latest sequence number generated
For S-JNL, the Q-Marker shows the latest sequence number settled to
remote data volume cache
• Q-CNT: Displays the number of remaining Q-Markers within each journal
volume
For P-JNL, Q-CNT shows how many sequence numbers (and
associated updates) are waiting to be sent
For S-JNL, Q-CNT shows how many sequence numbers are waiting to
settle to the S-VOL

JNLS: Displays the status of the journal group
- SMPL: journal group contains no data volumes, same as INITIAL status
- PJNN: Primary Journal Normal Normal
- SJNN: Secondary Journal Normal Normal
- PJSN: Primary Journal Suspend Normal
- SJSN: Secondary Journal Suspend Normal
- PJNF: Primary Journal Normal Full
- PJSF: Primary Journal Suspend Full
- SJSF: Secondary Journal Suspend Full
- PJSE: Primary Journal Suspend Error (including group suspension caused by
link failure)
- SJSE: Secondary Journal Suspend Error (including group suspension caused by
link failure)
AP: Displays the number of active paths
U(%): Displays the usage rate of the journal data
D-SZ: Displays the capacity for the journal data on the journal volume
Seq#: Displays the serial number of the RAID storage system
Num: Displays the number of LDEVs configured in the journal volume
LDEV#: Displays the first LDEV number configured in the journal volume


raidvchkscan parameters
• -v jnlt: displays Universal Replicator timer settings

Additional details of raidvchkscan -v jnlt


• DOW - Data Overflow Watch
• PBW - Path Blockade Watch
• APW - Active Path Watch


pairevtwait
• Waits until a specific pair status is achieved before returning control
• Useful for scripts that need to wait until a specific pair status is achieved

pairevtwait -g <group> -s <status> -t <timeout>

Example: A pairresync command terminates before resynchronization of the remote


(or primary) volume is complete. Use pairevtwait to verify that the resync operation
completed successfully (status changes from COPY to PAIR).
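
A sketch of that sequence with the class group name ur1:

pairresync -g ur1 -IH0
pairevtwait -g ur1 -s pair -t 7200 -IH0   (returns 0 when the status reaches PAIR)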


pairmon
• Obtains the pair status transition of each volume pair and reports it
• Runs in background
• Issues messages on pair status changes
• -allsnd
Reports all events if there is pair status transition information.
# pairmon -allsnd -nowait
Group Pair vol Port targ# LUN# LDEV# Oldstat code > Newstat code
oradb oradb1 CL1-A 1 5 145 SMPL 0x00 > COPY 0x01
oradb oradb2 CL1-A 1 6 146 PAIR 0x02 > PSUS 0x04

raidqry
• Displays the CCI version and information about the connected host and
storage system.


raidar
• Displays port statistics

IOPS: # of I/Os (read/write) per second (total I/O rate)


HIT(%): Hit rate for read I/Os (read hit rate)
W(%): Ratio of write I/Os to total I/Os (percent writes)
IOCNT: number of write and read I/Os


pairsyncwait
• -fq This option is used to display the number of remaining Q-Markers
within CT group.
• -m [Q-marker] can confirm the transfer of a specific sequence number to
remote site. In other words, determines whether or not pairs are
synchronized while in PAIR STATUS.

# pairsyncwait -g oradb -nowait -m 01003408e0 -fq -IH7


UnitID CTGID Q-Marker Status Q-Num QM-Cnt
0 3 01003408e0 NOWAIT 2 105

# pairsyncwait -g oradb -t 50 -fq -IH6


UnitID CTGID Q-Marker Status Q-Num QM-Cnt
0 3 01003408ef TIMEOUT 2 5

Note: Pairsyncwait is typically used to determine if remote site S-VOLs have reached a
desired point in time. For example, to ensure that application awareness state
(VSS, Oracle hot backup, etc.) has reached the remote site.
Details of pairsyncwait output table:
UnitID: The Unit ID, in case of multiple DKC connection
CTGID: The CTGID within the UnitID
Q-Marker: The latest sequence # of the MCU P-VOL when the command was
received
Status: The status after the execution of the command
Q-Num: The number of processes queued to wait for synchronization within the
CTGID of the Unit
QM-Cnt: The number of remaining Q-Marker within CT group of the Unit
TrueCopy/Async sends a token called a "dummy record set" at a regular
interval. Therefore QM-Cnt always shows "2" or "3" even if the host is not writing.
When specifying "-nowait -fq", QM-Cnt is shown as the number of remaining
Q-Markers at this time within the CT group.
When specifying "-nowait -m <marker> -fq", QM-Cnt is shown as the number of
remaining Q-Markers from the specified <marker> within the CT group.


When the status is "TIMEOUT" (command issued without -nowait), QM-Cnt is
shown as the number of remaining Q-Markers at the timeout within the CT group.
QM-Cnt is shown as "-" if the status for the Q-Marker is invalid (that is, status
is "BROKEN" or "CHANGED").


inqraid -fw
Displays cascading volume status (3DC)

Example:

Note: P = P-VOL, S=S-VOL, s=simplex

This option is added to display all of the cascading volume status.


paircurchk
(Pair Currency Check)

The CCI paircurchk command checks the currency of remote volumes by evaluating
the data consistency based on pair status and fence level.


Scripted Commands for Disaster Recovery

horctakeover
• Useful for temporary loss of primary or remote site
• Execution:
Checks the specified volume or group attributes (paircurchk)
Decides which takeover function to implement based on the attributes
Takeover Switch executes the chosen takeover function
• Temporary loss of Remote Site - P-VOL-takeover
• Both sites operational - Swap-takeover
• Temporary loss of primary site - S-VOL-takeover

Note: See Data Protection module for detailed description


horctakeoff
• Change 3DC multi-target to 3DC cascade while host applications are
running
• Conditions: Temporary primary site failure, failover to Local site has
occurred
• Execute horctakeoff
• If successful, execute horctakeover to switch operations to the Remote
site without affecting host operations at the Local site
• Operates on either individual volume or volume group

Note: See Data Protection module for detailed description

After horctakeoff execution, horctakeover command will failover replication from


the Local Site to the Remote Site without stopping the host application.


Microsoft Windows Subcommands

Provides UNIX-like functionality for Microsoft Windows hosts


Can be executed by appending
-x <subcommand> to any CCI command
Recommendation: Use a non-destructive command (such as
pairdisplay) when using these subcommands
• findcmddev
• drivescan
• portscan
• sync
• syncd
• mount
• umount
• umountd


sync and syncd


• Flushes the system cache to disk
• syncd flushes the system cache to disk and waits (30 sec) for delayed
(paging) IO then dismounts the drive
• sync and syncd do not propagate to volume mount points under a
specified drive letter

pairdisplay -x sync all


pairdisplay -x sync D:
pairdisplay -x sync D:\mountpoint
pairdisplay -x syncd hdisk3 hdisk4
pairdisplay -x syncd \Vol2

Mount
• Mounts the specified volume to a drive letter or volume mount point

pairdisplay -x mount
pairdisplay -x mount D: hdisk3 p1
pairdisplay -x mount E: \Vol2
pairdisplay -x mount F:\mountpoint \Vol3


Umount
• Unmount volume (deletes drive letter or volume mount point mapping)
• Will flush the system cache to disk prior to unmounting

pairdisplay -x umount D:
pairdisplay -x umount \Vol2
pairdisplay -x umount F:\mountpoint

Special Facilities for Windows 2008/2003/2000 Systems


• Signature changing facility (see CCI section 4.20.1)
• Directory mount facility (section 4.20.2)
• CCI supports saving/restoring the GUID Diskid of the GPT Basic disk to
the inqraid command


Special Facilities for Windows 2008/2003/2000 Systems (cont.)


• LDM (Logical Disk Manager) Volume Discovery
Volume Discovery Function
• Physical level - Use $Physical as Key Word for the discovery
• LDM volume level - Use $Volume as Key Word for the discovery
• Drive letter level - Use $LETALL as Key Word for the discovery

• System Buffer Flushing Function


• System buffers associated with logical drives can be flushed


Configuration Setting Commands

RAIDCOM Commands — CCI configuration setting commands:


• Enable non-replication operations to be executed by CCI
• Supported operations:
Creation, modifying, and deletion of Dynamic Provisioning and Copy-
on-Write Pools and virtual volumes
Setting remote replication link attribute, creation of RCU and R-DKC
Creation, modification, and deletion of UR Journals
Creation, modification, and deletion of external storage paths and
external volumes
• For overview of CCI Configuration Setting operations, see “Provisioning
Operations with CCI” section in Hitachi Command Control Interface User
and Reference Guide, MK-90RD7010

• For details on individual commands, see “Configuration Setting


Commands” section in Hitachi Command Control Interface Command
Reference, MK-90RD7009


RAIDCOM configuration file example


• If desired, create minimum horcm.conf file
• Command Device User Authentication must be ON

HORCM_MON
#ip_address   service   poll(10ms)   timeout(10ms)
127.0.0.1     horcm99   10024        3000
HORCM_CMD
#dev_name dev_name dev_name
\\.\CMD-53238 or \\.\IPCMD-10.4.9.1-31001

Release MODIFY mode on Storage Navigator


Start instance and login using SVP login and password
Execute RAIDCOM commands

RAIDCOM Command example from the Lab Activity


• Add new volumes to an existing UR Journal group
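
A sketch of such a session (the journal ID, LDEV number, and login are
illustrative assumptions, not the lab values):

raidcom -login <user> <password> -I99
raidcom add journal -journal_id 0 -ldev_id 512 -I99
raidcom get journal -I99
raidcom -logout -I99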


Troubleshooting

Instance will not start


• Check CCI logs:
HORCM/logx/curlog should have useful information
Common items to check
• Check that CMD device path is correct in the configuration file
• Check that CMD device is labeled; if not label it
• Check that the configuration file is saved in the correct OS directory and
that file attribute is .conf
• Check environment variables
• Check configuration files for hidden characters
• Check that service names have been entered in the services files
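
On a UNIX host, a few quick checks cover several of these items (instance 0 is
assumed; exact log file names vary by platform and CCI version):

cat /HORCM/log0/curlog/horcm_*_log.txt   (startup log for instance 0)
grep horcm0 /etc/services                (service name registered?)
echo $HORCMINST                          (environment variable set?)
cat -v /etc/horcm0.conf                  (reveals hidden characters)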


If paircreate fails, check to see if:


• Correct devices are specified in the configuration file:
Use raidscan
• Remember that LUNs are displayed in hex in LUN Manager so LUN 10 is
actually LUN 16 in the configuration file
• Remember that there may be absolute LUNs to consider
Run raidscan to find the absolute LUN value if using an older
version of CCI
• S-VOLs are not reserved through Storage Navigator ShadowImage
function
• Links are in place and working properly
• Check command error log


Troubleshooting Information

Problem                        Information Type   Filename
Problem starting HORCM (CCI)   Startup Log        /HORCM/Logn/Curlog (n = Instance #)
Problem with CCI commands      Command Log        /HORCM/Logn/horcc_HOST.Log
Command errors                 Error Log          /HORCM_Log/horcmlog_HOST/horcm.log

$HORCM_Log Default Directory: /HORCM/Logn/Curlog (N=Instance #)

$HORCC_Log Default Directory: /HORCM/Logn


CCI Error Codes


• Check /HORCM/log*/curlog/horcmlog_HOST/horcm.log where:
* is the instance number.
HOST is the host name.
• Example: 11:06:03-37897-10413- SSB = 0xSSB1,SSB2
• Error codes appear on the right of the equal symbol (=)

Error codes are specific to each replication and each storage


platform.
• Consult the Hitachi Universal Replicator User Guide for your particular
model of Hitachi enterprise storage
• For error codes not listed, contact the Hitachi Data Systems Support
Center


Module Summary

In this module, you should have learned to:


• Describe the history of CCI, sometimes called RAID Manager
• Identify CCI software components
• Describe the checklist used to configure CCI
• Create a physical configuration
• Create and revise the configuration files
• Describe the command set
• Identify CCI Microsoft Windows subcommands
• Discuss configuration setting commands
• Perform troubleshooting and corrective actions for operations


Module Review

1. What Storage Navigator operations are required before RAID


Manager CCI can be used to manage Universal Replicator pairs?
2. The Command Device must be mapped to a port so that user
applications can access it. True or False?
3. If using the new HORCM_LDEV parameter, do you need to code
the absolute LUN numbers?



7. Data Protection Concepts and Practices
Module Objectives

Upon completion of this module, you should be able to:


• Identify current Data Protection bundle specifications
• Evaluate Data Protection planning considerations for:
• Rolling Disaster
• Data Consistency
• RPO and RTO
• Analyze Remote Copy Solutions
• Minimize the Remote Copy Resynchronization Vulnerability
• Perform failover operations with Hitachi CCI software
• Identify best practices for Data Protection


Bundles

VSP and USP V:


• Disaster Recovery Bundle includes these software products:
Hitachi TrueCopy Remote Replication
Hitachi Universal Replicator Remote Replication
• Disaster Recovery Extended bundle enables the following features:
3 Data Center Cascade and Multi-target including Delta Resync

Universal Replicator MxN Consistency Groups

Hitachi Unified Storage


• Remote Replication
• Remote Replication Extended


Planning Considerations

Regulatory requirements driving data recovery


• Sarbanes-Oxley - Public Company Accounting Reform and Investor
Protection Act of 2002, Title III Corporate Responsibility, Section 404
Assessment of Internal Control

Commonly interpreted to require specific information technology


controls on data archiving and retention
• Basel II, III - International banking accord
Defines operational risk guidelines for international banking
Establishes internal archiving requirements
• Email archiving - Litigation defense under business records laws
Required by many regulatory entities and law enforcement
organizations


Weigh cost of recovery against recovery


time, cost and value of the data


Two categories of data protection

Disasters of scale
• Point disaster representing a single event at a single point in time.
Isolated storage system hardware failure, for example
• Site-wide disaster impacting operations at an entire facility, caused by
fire, earthquake or hurricane
Disasters of time include
• Immediate disaster where a single, distinct event impacts all
components at the same time, such as a tsunami or explosion (see note)
• Rolling disaster where several components fail at different points in
time, for example a fire takes down a server, then storage, network,
eventually the entire site

Typically this is the most severe type of disaster

Note: Technically, even in a major disruption event, technology components will fail at
different times (in a more "rolling disaster" fashion).


Typical Data Protection Tiers

Technology Tier            Typical RPO Range   Typical RTO Range   Disk Usage   Distance

Tape Backup                24-168 hrs          2-168 hrs           None         Usually local
Virtual Tape               12-48 hrs           1-24 hrs            Disk Pool    Any (replicated)
Disk Point-in-Time         2-36 hrs            0.25-12 hrs         2 / Pool     Any
Synchronous Remote Copy    0-2 mins            1-8 hrs             2            Local
Asynchronous Remote Copy   0-10 mins           0.5-8 hrs           2            Any
3 Data Center with Sync    0-2 mins            1-8 hrs             3            Local/Any
3 DC Async                 0-10 mins           1-8 hrs             3            Any/Any

Note: Actual RPO and RTO Ranges are dependent on features such as host clustering,
automated failover packages, and other factors.


Rolling Disaster — Worst Case

A rolling disaster occurs when an unplanned outage event takes


place over a span of time — anywhere from a few minutes to several
hours. For example:
• A fire starts in the network area
• This causes a link failure while remote backups are running
• A few moments later, fire reaches the other converter cabinet and the
other links fail
• Fire spreads to the data center floor and storage systems are affected

[Diagram: Primary Site replicating to Remote Site over multiple links; the
numbered failures (1: first link, 2: remaining links, 3: storage) strike at
different points in time.]

During a rolling disaster, not all components fail at precisely the same moment.
In this situation, a system may still be able to process transactions and issue
updates to primary storage devices, but due to earlier failures, updates may not
replicate successfully to the secondary site.
Rolling disasters pose a challenge because they may result in corrupted and
unusable data at the remote site, requiring difficult and very lengthy recovery
processes.
To protect against rolling disasters, a data replication technology must be able to
freeze remote replicas at a point in time prior to or during the onset of the outage.
This ability to create point-in-time images of data is what differentiates remote copy
technology from simple mirroring.
Because the remote and local I/O of a synchronous replication succeed or fail
together, this replication approach does not introduce data inconsistencies
following a disaster. Rolling disasters are primarily a challenge for remote
asynchronous replication, and one of the principal areas of concern is write order
fidelity.


Planning Considerations: Rolling Disaster

Rolling Disaster — Worst Case

If there is no control over dependent writes to the backup volumes,


out of sequence data is committed to the remote volumes

[Diagram: Primary Site to Remote Site replication; with components failing at
different times (1-3), dependent writes arrive at the remote site out of
sequence.]

Data received at the remote disk control unit (RCU) will be


incomplete, with dependent writes missing
Solution:
• Preserve write order fidelity at the remote site
• Maintain Data Consistency in a rolling disaster

In the context of data replication, data consistency represents the ability to


recover from a failure or disruptive event. A fundamental concept of data
consistency that enables quick recovery is the dependent write, the pervasive logic
among complex data structures comprising databases, file systems, etc., that
determines the sequence in which writes are issued.
Preserving dependent writes maintains the consistency of the data and allows
systems and applications to restart after a sudden failure.


Planning Considerations

Data Consistency

Represents the ability to recover from a failure or disruptive event by


preserving dependent writes
• A dependent write is a data update that cannot be executed until a
previous write, on which it is dependent has been executed, thus
preserving write order fidelity

Three levels of data consistency with different implications within the


application and data architecture


Data Consistency Levels

1. I/O consistency, or crash recovery consistency, refers to remote


data that is not necessarily transaction consistent, but is still in a
restartable state.

• Provided by Hitachi remote replication products and other vendors'
replication products

2. Transaction consistency - A transaction is a logical unit of work


that may include hundreds or thousands of updates
• Achieved when:
An application is shut down (quiesced), allowing the replication to
“catch up”
The application/database or other system component rolls back or rolls
forward after a restart
• Local mirroring can assist by providing generational rollback data

Data Consistency Levels

3. Application consistency
• Implies multiple transaction streams generated by one (or more)
applications that have each been recovered to a common consistent state
• Collectively, the streams need to be synchronized based on the
application requirements. This can be thought of as “user consistency”
• Provided by application and operating system tools


RPO and RTO

Evaluation of risk tolerance is stated in terms of how much data must


be recovered to resume operations, the RPO, and the outage
duration, the RTO

• RPO: The worst case time between the interruption in operations and the
last recoverable backup, where potentially lost data weighs against cost
Represents a measure of the amount of data that has to be recovered
• RTO: The time to resume operations after the interruption

When evaluating the cost of business continuity solutions, the greatest cost
component is usually the bandwidth needed to support remote replication. The
greatest benefit of a high bandwidth to data change rate ratio is maintaining a
minimal RPO.


RPO and RTO

To maintain a continuous replica copy, bandwidth must exceed the


average write workload that occurs during any given RPO interval
subject to the capacity limitations of the buffering mechanism

• Establish desired RPO


• Identify the matching interval with greatest write activity
• Calculate link bandwidth and buffer capacity required to keep up with this
traffic
• In practice, data on hand can be used to revise resource requirements
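
A worked illustration with assumed numbers: if the desired RPO is 10 minutes and
the busiest 10-minute interval averages 80MB/sec of writes, a 50MB/sec link
leaves 30MB/sec of excess inflow. The journal (or other buffer) must then absorb
roughly 30MB/sec x 600 sec = 18GB during that interval, and that backlog must
drain before the RPO is restored.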

RPO and RTO

M-JNLs and RPO


• Capacity must be sufficient to absorb inflow change rate spikes
• But, any M-JNL data accumulation represents an increase in RPO

[Chart: data change rate over time versus link bandwidth. During a typical
change rate peak, inflow exceeds link bandwidth; journal volume throughput
absorbs the excess, and the accumulated journal data represents the RPO.]


Compare Remote Copy Solutions

Universal Replicator: Hardware Asynchronous Remote Copy


• Benefits:
Journals provide buffers to increase resistance to link problems
RPO less dependent on link bandwidth than other Async remote
replications (see note)
Maintains consistency of remote data with hardware consistency
grouping and sequence numbering scheme
No CPU involved; storage system does copy
• Restrictions:
Non-zero RPO due to loss of in-flight Update Copy data; RTO will be a few
minutes (high availability environment with takeover function) to 8 hours
(manual bring-up operations)

Journal volume throughput becomes a limiting factor (see note)


Exclude temp volumes (depending on customer requirements)
Available only for USP, USP V, VSP and later models

Note: When evaluating the cost of business continuity solutions, the greatest cost
component is usually the bandwidth needed to support remote replication.
Universal Replicator reduces the cost outlay for link bandwidth. Some of that cost is
transferred to the one-time cost of providing maximum possible journal volume
throughput. Link bandwidth is still important, because unsent data on the MCU’s
Journal Volume represents an increase in RPO.


Compare Remote Copy Solutions

TrueCopy: Hardware Synchronous Remote Copy


• Benefits: Zero RPO, low RTO
No CPU involvement in remote copy
Writes go to remote cache:
• Automatically maintains single-volume data consistency
• Restrictions:
Host performance impact:
• Holds I/O in MCU cache until reply comes back from RCU
• Turnaround times become unacceptable at longer distances
No hardware level grouping function. Must use CCI Software
Consistency Preservation function or Replication Manager to provide
multi-volume consistency of secondary volumes
Best Practice is to exclude Temp data from replication based on
customer requirements

Note: Hitachi Replication Manager (HRpM) can create and manage TrueCopy
Consistency Groups.


Resynchronization Vulnerability

Affects remote replications that use bitmaps to track changes during


pair suspension
Loss of primary bitmap during resync will result in inconsistent data
on S-VOLs
Solution: Minimize Resync Time with At-Time Split function
• ShadowImage groups matched pair-for-pair with Universal Replicator
R-JNL Groups
• Provide a “gold copy” of replicated data at remote site
• Minimizes the length of time pairs are split

Recovery

File Recovery:
• When the primary or secondary storage system suspends a remote pair
due to a disaster, the secondary data volume may contain in-process,
inconsistent data

• Neither the Storage System nor the replication can help recover the files
• File recovery procedures will be similar to those used for recovering data
volumes that become inaccessible due to control unit failure

Lost Updates:
• No hardware remote replication provides any function for retrieving lost
updates
• Requires application features such as database log file
• Verify currency of files used for file and database recovery


Failover

horctakeover command

Checks the specified volume’s or group’s attributes (paircurchk)


Determines where it is running
Decides takeover function based on type of replication (TC or UR)
Executes the chosen takeover function, and returns the result
Takeover-switch process determines which of the following functions
to execute:
• Swap takeover - reverses replication direction during planned switch of
production to secondary site
• P-VOL takeover - for TrueCopy Sync only. Splits P-VOL
• S-VOL takeover - puts S-VOLs in SSWS state when primary site is
temporarily unusable. Allows full access to S-VOLs after production is
switched to the secondary site

Horctakeover:
Operates at either volume or group level
If S-VOL-takeover (pairsplit -RS) is specified for a group, the data consistency
check is executed for all volumes in the group, and all inconsistent volumes are
found in the execution log file and displayed (same as paircurchk command).
When switching production to the secondary site, the takeover command allows
planned swapping of the primary and secondary volumes, (SWAP-Takeover) so
that replication operations can be continued using the reversed volumes.
When control is handed back to the original primary node, swapping the
volumes again eliminates the need to copy them. The takeover command also
allows the secondary volume to be separated for disaster recovery operations.
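
A hedged invocation example, executed from the CCI instance at the surviving
site (the group name, timeout, and instance number are class-style examples):

horctakeover -g ur1 -t 300 -IH1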


horctakeover command

[Flowchart: horctakeover decision logic]
Execute horctakeover on the primary site or the remote site; Takeover-Switch
evaluates the status of the P-VOL and S-VOL:
• If the local volume is the P-VOL:
  With fence level Data or Status and the remote node unavailable,
  execute P-VOL Takeover; otherwise, No Operation
• If the local volume is the S-VOL:
  If the primary node is available, execute Swap Takeover;
  if the primary node is unavailable, execute S-VOL Takeover

horctakeover command

P-VOL Takeover - TrueCopy Sync only


• Generated when Secondary site is unreachable
• P-VOL-PSUE takeover:
Changes the primary volume to the suspend (PSUE, PSUS) state,
which enables write I/Os to all primary volumes of the group
The action of the P-VOL-PSUE takeover may cause PSUE and/or
PSUS states to be intermingled in the group
• P-VOL-SMPL takeover:
If the P-VOL-PSUE takeover function fails, the P-VOL-SMPL takeover
function is executed

The P-VOL-takeover function operates only on TC Synchronous remote replication. It


releases the pair state to avoid fencing P-VOLs when the TC fence level is set to data
or status.
This function allows the takeover node to use the primary volume (for example,
reading and writing are enabled), on the assumption that the remote node
(possessing the secondary volume) cannot be used. P-VOL takeover can be specified
for a paired volume or a group.
For Universal Replicator or TC Synchronous with fence level never, P-VOL-takeover
will not be executed.


horctakeover command

swaptakeover
• Reverses replication direction during planned switch of production to
secondary site when all hardware at both sites is still operational
• This function internally executes the following commands to swap the
primary and secondary volumes:
pairsplit -RS forces the secondary volume into SSWS mode
pairresync -swaps swaps the secondary volume with the primary
volume
• If successful:
S-VOL becomes P-VOL (MCU - RCU relationship is reversed)
New S-VOL is resynchronized

When the P-VOL status of the remote node is PAIR and the S-VOL data is consistent, a
planned swap of primary and secondary volumes can be executed. Swaptakeover can
be specified for a paired volume or a group.
This reverses the remote copy direction and it will synchronize the pair. To move
back to the original state issue swap takeover again, at the new secondary site.
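Expressed as the equivalent manual commands (a sketch; the group and instance
numbers are the class examples, executed at the secondary site):

pairsplit -g ur1 -RS -IH1        (force S-VOLs into SSWS)
pairresync -g ur1 -swaps -IH1    (swap: S-VOLs become P-VOLs)
pairdisplay -g ur1 -fxce -IH1    (confirm the reversed copy direction)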


horctakeover command

S-VOL Takeover
• Invoked when the primary site is temporarily unusable and production is
switched to the secondary site. The S-VOL can be used without restriction,
while noting all changed data locations in its bitmap

• Can be executed manually with the pairsplit -RS command, which
forces the secondary volume into SSWS mode
• S-VOL takeover can be specified for a paired volume or a group
When specified for a group, a data consistency check is executed for
all volumes in the group, and all inconsistent volumes are displayed
• When access to the primary site is restored, execute pairresync -swaps
command to reverse copy direction and resync the data to the new
S-VOLs (at the original primary site)

Using CCI

Suspend Swap (pairsplit -RS at remote Site)


• Checks the pair status of secondary data volumes and splits groups (or
individual volumes) to ensure consistency of secondary data volumes
• S-VOLs in SSWS (SWAPPING status)
• Remote volumes can now be used without restriction
• Useful for Data Protection testing that requires updating on the secondary
site in the Suspend status while production stays up at the primary site


Using CCI

Resync Swap (pairresync -swaps at remote site)


• Reverses primary data volumes and secondary data volumes if primary
site is still usable
Attempts to do a resync of the new secondary data volume (now in the
former primary site)
If successful, reverses the MCU/RCU relationship and prepares the
secondary site to function as if it were a primary site
• Operations can be started at the new primary site (the former secondary site),
using secondary data volumes that are now primary volumes
• Before switching production back to the original primary site:
Execute horctakeover at original primary site

Using CCI

pairresync -swapp at new primary site


• Reverses a remote replication that has already been swapped
• Issue pairsplit -RS at new secondary site
• Execute pairresync -swapp at new primary site after production comes
down but before bringing up production at original primary site
• If successful, reverses the MCU/RCU relationship again. The P-VOLs are
now back in the primary site
• The function is exactly the same as horctakeover executed at the
original primary site
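
A sketch of the failback sequence just described (group name ORA assumed):

   # At the new secondary site (the original primary site):
   pairsplit -g ORA -RS
   # At the new primary site, after production there has come down:
   pairresync -g ORA -swapp     # P-VOLs return to the original primary site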


Best Practices

Situation: MCU consistency group has P-VOL = PAIR or COPY
after pairsplit -RS, and S-VOLs are in an error state
1. pairsplit -RS was executed while communication is unavailable
(network failure, MCU power down), or…
2. pairsplit -RS was executed while all pairs in the RCU are suspended

(Diagram: P-VOLs in PAIR status at the MCU remain paired with S-VOLs in SSUE
status at the RCU while the application host continues writing.)

• Fix: Reverse Copy Direction
After the MCU and network recover, execute pairsplit -RS again, then
issue resync swap (pairresync -swaps)
• Pair status in the RCU consistency group does not change, but pair status in
the MCU consistency group changes to suspend
• Execute resync swap again to change the MCU volumes back to P-VOLs


Situation: Some pairs in the consistency group did not participate in
Takeover
• Individual pairs had been split before Takeover was issued

(Diagram: one pair remains PSUS/SSUS in the original direction while the
other pairs show PAIR with the copy direction reversed toward the
application host's site.)

• Fix: Reverse Copy Direction for individual pairs


execute resync swap (pairresync -swaps) for the affected pair

Situation: P-VOL = COPY or PAIR, S-VOL = suspend
• Reasons:
RCU power down
RCU internal error without deleting the pair
• Fix:
Restart by issuing pairsplit, then resync

(Diagram: P-VOLs remain in PAIR status while their S-VOLs are in PSUE
status.)


Situation: P-VOL = PSUE, S-VOL = COPY
• Possible Reasons:
Failure during paircreate
• Fix: Issue paircreate again
If the restart is unsuccessful, issue pairsplit -S, then execute paircreate
again

(Diagram: a pair stuck in PSUE/COPY is restarted so that both volumes return
to COPY status.)

Monitoring

Remote Copy
• TrueCopy - Monitor pair status, consistency group status
• Universal Replicator - Universal Replicator resists extended link failure
by buffering data in journal volumes. Monitor pair status, Journal Group
status, Journal Volume utilization
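
One way to script this monitoring with CCI (group name ORA is an assumption;
output columns vary by microcode level):

   pairdisplay -g ORA -fcx      # pair status and copy progress, in hex
   pairdisplay -g ORA -v jnl    # journal group status and journal usage (UR)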


Monitoring

Remote Copy (continued)


• TrueCopy and Universal Replicator
At remote site, check secondary volume consistency
Consistency is good if:
• The CCI consistency preservation function was used
• All pairs were in PAIR status at the time of the disaster
• There is no volume based pair operation


TrueCopy Sync Consistency Group

Use CCI TrueCopy Consistency Preservation Function
• -fg parameter with Consistency Group ID defines the TrueCopy
Sync group
• Provides multi-volume write order consistency for TrueCopy volumes (see
note)
• Create CCI script using these commands to control TrueCopy Sync
Consistency Group

Use UR MxN Consistency Groups to ensure consistency of data
across multiple journal groups and multiple storage systems

Note: For TrueCopy copy groups with no C/T group ID specified, the update order
is only preserved by copy pairs on the hardware. When Fence Level is Never, if a
failure occurs on a copy destination volume within the copy group, group
consistency might be lost because copy pairs other than the failed copy pair within
the same copy group continue processing. When applied to a planned outage, the
suspend operation takes place for each copy pair. Consequently, consistency within
the copy group might be broken.
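
A hedged sketch of creating a TrueCopy Sync consistency group with the CCI
consistency preservation function (the group name TCGRP, fence level data,
and CTG ID 5 are assumptions):

   paircreate -g TCGRP -vl -fg data 5   # -fg defines the TC Sync group
   pairdisplay -g TCGRP -fcx            # verify all pairs reach PAIR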


Data Protection Configuration

Generic Recovery Configuration - Remote replication combined with
ShadowImage replication
• ShadowImage at remote site provides remote backup copies for rapid
recovery, split-mirror backup, decision support, and testing and
development

• Provides a remote disaster restart and recovery capability, as well as
reducing or eliminating remote copy resynchronization vulnerability

(Diagram: the TrueCopy or UR primary volume at the primary site replicates to
the TrueCopy or UR secondary volume at the remote site, where ShadowImage
copies are taken and fed to a remote tape archive.)

In the diagram, UR stands for Universal Replicator.


ShadowImage At-Time Split

At-Time Split function provides a consistent ShadowImage copy of UR
S-VOL data without splitting UR groups:
When pairsplit issued to SI consistency group using UR S-VOLs as
P-VOLs:
• Universal Replicator settles data to secondary volumes
• ShadowImage splits with multi-volume consistent data on ShadowImage
S-VOLs

Requirement: UR groups must match ShadowImage consistency
groups exactly
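
A minimal sketch of the at-time split (the SI group name SIGRP is an
assumption; -IM selects the ShadowImage instance of CCI):

   # Issued against the SI consistency group whose P-VOLs are UR S-VOLs;
   # UR settles in-flight journal data before the split completes:
   pairsplit -g SIGRP -IM0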


Review

Universal Replicator Remote Copy Monitoring


• UR does not have a heartbeat timeout
Resists extended link failure by storing data in JNL VOL
• At the remote site, check S-VOL consistency
• Consistency of S-VOLs is good if:
All pairs were in PAIR status at the time of the disaster
There is no volume-based pair operation
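
A sketch of checking S-VOL currency from the remote site (group name ORA
assumed):

   paircurchk -g ORA    # reports whether the S-VOL data is current/consistent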


Module Review

1. What are the two CCI commands used sequentially to invoke
secondary site takeover?
2. What should be the status condition of the secondary volume
following the issuance of the pairsplit -RS?
3. What is the expected result of the two commands?



8. Three Data Center
Operations
Module Objectives

Upon completion of this module, you should be able to:


• Describe the purpose of three data center (3DC) replication
• Identify the different 3DC configurations
• Discuss 3DC Operations
• Describe disaster recovery operations in 3DC environment


Purpose of 3DC Replication

Purpose of 3DC Configuration


• Provide redundancy of production data
• Maintain multiple mirrors of the same data at different remote sites
Two methods:
• TrueCopy and Universal Replicator (3DC TC-UR)
TrueCopy - low RPO, RTO, but has application performance impact
Hitachi Universal Replicator - asynchronous replication at any distance
Protection in region-wide disasters
• 3DC Async-Async (also called 3DC UR-UR or 2 Async link)
Configuration (supported on VSP and USP V only)
• Two instances of Universal Replicator
• Same protection in region-wide disasters
• No application performance impact

ShadowImage and Thin Image Snapshot can be used to mirror
replication data at any site


Configurations

Summary

3DC TC-UR Multi-target and Multi-target with Delta Resync


• Simultaneously copies data from central location to local site with
TrueCopy and to third remote site via HUR
• Delta Resync supports recovery of remote site from synchronous copies
of journal data at Local site if primary site has failed
3DC Async-Async - Multi-target and Multi-target with Delta Resync
• Two instances of HUR simultaneously copy data to different remote sites
• Delta Resync supported (Open Systems only)
3DC Cascade
• Includes 3DC TC-UR, 2DC Cascade Pass-Thru and 3DC UR-UR
• Replicates data from primary site to local site, then to third remote
location with HUR
Four Data Center
• 3DC multi-target and 3DC cascade combined


3DC TC-UR Multi-target

Three storage arrays: primary, local and remote storage systems


• TrueCopy operates between primary and local sites
• Universal Replicator operates between primary and remote sites
• CCI at all sites

(Diagram: the primary data center replicates via TrueCopy, up to 300 km, to
the local data center, and via HUR, at unlimited distance, to the remote data
center; ShadowImage and Thin Image copies can be taken at each site, and CCI
hosts are attached everywhere. Data volumes in the primary site function as
TrueCopy P-VOLs and Universal Replicator P-VOLs concurrently.)

3DC TC-UR Multi-target

(Diagram: mirror ID usage in 3DC TC-UR multi-target: mirror ID 0 carries the
TrueCopy pair to the local site, mirror ID 1 carries the HUR pair to the
remote site, and ShadowImage S-VOLs at the remote site use their own mirror
IDs.)

3DC TC-UR Multi-target with Third Link

Third Link between local and remote sites enables remote
replication in the event of failure at primary site
• Full volume paircreate is required to establish HUR pairs between local
and remote sites
• CCI is required at all sites

(Diagram: the same 3DC multi-target layout, with an additional HUR link and a
remote command device between the local and remote data centers; data volumes
in the local site can function as TrueCopy S-VOLs and Universal Replicator
P-VOLs if needed.)

3DC Multi-Target supports two configurations:


2-Link: The Remote Replication software P-VOL in the primary site functions as
both Remote Replication software P-VOL and HUR P-VOL.
3-Link: A link is provided between the Local site and the remote site. This link
greatly increases overall disaster recovery flexibility by providing a quick way to
resume 3DC operations should there be a communications failure between the
primary and remote sites.


3DC TC-UR Multi-target with Delta Resync

With Delta Resync feature, Third Link between local and remote
sites enables quick resumption of remote replication in the event of
failure at primary site

• Only pairresync is required
• CCI is required at all sites

(Diagram: 3DC TC-UR multi-target with delta resync pairs pre-created on the
third link between the local and remote data centers; remote command devices
are configured at each site.)

(Diagram: three-link multi-target mirror ID association of a 3DC multi-target
delta resync configuration with ShadowImage at the remote site: mirror ID 0
carries the TrueCopy pair, mirror ID 1 the primary HUR pair, and mirror ID 2
the HUR delta resync pair.)

3DC UR-UR Journal group setting

All Journal groups (at all sites) participating in 3DC Async
configurations must have the UR 3DC option enabled
• UR 3DC can only be enabled when Journal groups are created
• Cannot be set afterwards (see note)

Note: This means that existing journal groups with existing HUR pairs cannot be
converted to 3DC Async. Therefore, to include any existing non-3DC Async
volumes in a 3DC Async configuration, all pairs will have to be deleted, the Journal
groups deleted and re-created as 3DC Async groups.


3DC UR-UR Multi-target

Three storage arrays: Primary and two remote storage systems


Same HUR M-JNL groups replicate to both remote sites
Requires Remote Command Devices to manage with CCI
CCI installed at all sites
(Diagram: the primary data center replicates via two instances of HUR, at
unlimited distance, to two remote data centers; remote command devices are
configured at each site, and data volumes in the primary site function as
P-VOLs for both Universal Replicator instances.)

3DC UR-UR Multi-target Mirror IDs

(Diagram: mirror ID association of a 2 Async Link multi-target configuration
with ShadowImage at the remote site: mirror IDs 0 and 1 carry the two HUR
pairs from the same primary volumes, and ShadowImage S-VOLs at the remote
sites use their own mirror IDs.)

3DC UR-UR Multi-target with Delta Resync

Third Link between local and remote sites enables quick resumption
of remote replication in the event of failure at primary site
• Without Delta Resynchronization, a full volume paircreate is required
• If Delta Resync function is installed, then only pairresync is required
• CCI and Remote Command Devices required at all sites

(Diagram: 3DC UR-UR multi-target with delta resync pairs pre-created on the
third link between the two remote data centers; data volumes in the local
site can function as HUR P-VOLs for both replications if needed.)

3DC Multi-Target supports two configurations:

2-Link: The Remote Replication software P-VOL in the primary site functions as
both Remote Replication software P-VOL and HUR P-VOL.
3-Link: A link is provided between the Local site and the remote site. This link
greatly increases overall disaster recovery flexibility by providing a quick way to
resume 3DC operations should there be a communications failure between the
primary and remote sites.


3DC UR-UR Delta Resync Mirror IDs

(Diagram: mirror ID association of a 2 Async Link delta resync configuration
with ShadowImage at the remote site: mirror IDs 0 and 1 carry the two HUR
pairs from the primary site, the HUR delta resync pair between the two remote
sites uses its own mirror ID, and ShadowImage S-VOLs use separate mirror
IDs.)

Cascade 3DC TC-UR

Three sites: primary, local and remote


• TrueCopy software operates between primary and local sites
• Universal Replicator operates between local and remote sites
• Storage Navigator required at all sites
• CCI required at all sites
(Diagram: the primary data center replicates via TrueCopy, up to 300 km, to
the local data center, which replicates via HUR, at unlimited distance, to
the remote data center; ShadowImage and Thin Image copies can be taken at
each site. Data volumes in the local site function as TrueCopy S-VOLs and
Universal Replicator P-VOLs concurrently.)

In the diagram and the following pages:
TrueCopy stands for TrueCopy Remote Replication
HUR stands for Universal Replicator
Thin Image is the new version of Hitachi Copy-on-Write Snapshot; the number
of pairs has increased to 1024, and it works with VSP and the new Hitachi
Unified Storage (HUS)


Cascade 2DC PassThru

Similar to 3DC Cascade configuration (see setup procedure notes)


• Primary Site: TC Sync P-VOLs
• Local site: TC S-VOLs/UR P-VOLs on Dynamic Provisioning volumes
• If System option mode 707 is ON, TrueCopy writes to the UR JNL
• Remote Site: UR S-VOLs (VSP, USP V and USP VM only)

(Diagram: TrueCopy replicates from the primary data center, up to 300 km, to
the local data center, where the HUR journal cascades the data, at unlimited
distance, to the remote data center; Dynamic Provisioning volumes reduce disk
requirements at the local site, and if no host is connected there, a Remote
Command Device is required.)

2DC Cascade PassThru is similar to 3DC Cascade. OPEN Systems only. Enabled on Edit
JNL Volumes Panel.
Allows use of Hitachi Dynamic Provisioning volumes for data volumes at local site
Primary and Remote Sites can contain intermix of any Hitachi enterprise storage
systems
Local site must be Universal Storage Platform V or VM system
This configuration provides synchronous replication to the Local site and asynchronous
replication to the remote site
Two operational modes, dependent on System Option 707
Setup Procedure
1. Install TrueCopy and HUR at appropriate sites.
2. Configure ports and journal groups.
3. Set system option mode 707 on the HUR primary storage system (local site).
4. Create HUR journal groups at the Local and Remote Sites.
5. Create UR pairs in the local site. Use Mirror ID 1.
6. Confirm HUR PAIR status.
7. Create a TrueCopy pair in the primary site.
8. Confirm that the TrueCopy pair status has become PAIR.
Note: This procedure is the opposite of the normal 3DC pair creation order.
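
A hedged command-level sketch of the pair creation order above (group names
URGRP and TCGRP and the journal IDs are assumptions; the mirror ID is taken
from the MU# column of the horcm configuration file):

   # At the local site, create the UR pair first (defined with MU# 1):
   paircreate -g URGRP -vl -f async -jp 0 -js 0
   # After the UR pair reaches PAIR, create the TrueCopy pair at the primary site:
   paircreate -g TCGRP -vl -f never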


Cascade 2DC PassThru Journal group setting

All Journal groups (at all sites) participating in 2DC PassThru
configurations must have the 2DC Cascade option enabled
• Can only be enabled when Journal groups are created

Note: This means that existing journal groups with existing HUR pairs cannot be
converted to 2DC Cascade.


Cascade TC-UR Mirror IDs

(Diagram: mirror ID usage in 3DC Cascade and 2DC Cascade PassThru: mirror ID
0 carries the TrueCopy pair from the primary site to the local site, mirror
ID 1 carries the Universal Replicator pair from the local site to the remote
site, and ShadowImage S-VOLs at the remote site use their own mirror IDs.)

3DC UR-UR Cascade

Three sites: primary, local and remote


• First instance of HUR operates between primary and local sites
• Another instance of HUR operates between local and remote sites
• CCI and Remote Command Devices required at all sites
• Storage Navigator required at all sites

(Diagram: HUR replicates from the primary data center to the local data
center, and a second HUR instance cascades, at unlimited distance, to the
remote data center; remote command devices are configured at each site. Data
volumes in the local site function as Universal Replicator S-VOLs and
Universal Replicator P-VOLs concurrently.)


3DC UR-UR Cascade Mirror IDs

(Diagram: mirror unit number usage in 3DC UR-UR Cascade: one mirror ID
carries the HUR pair from the primary site to remote site 1, another mirror
ID carries the cascaded HUR pair from remote site 1 to remote site 2, and
ShadowImage S-VOLs at remote site 2 use their own mirror IDs.)

4DC

Combines 3DC Multi-target and Cascade


• Four storage arrays: primary, local and two remote subsystems
• TrueCopy or HUR operates between primary and local sites
• Universal Replicator operates between primary and remote site
• CCI and Remote Command Devices connected to all sites
(Diagram: the primary data center replicates to the local data center via
TrueCopy and to the Remote 1 site via HUR, with a further HUR replication to
the Remote 2 site; ShadowImage and Thin Image copies can be taken there.)


4DC Mirror Unit Diagram

(Diagram: mirror unit associations in 4DC: the primary volume functions as
TrueCopy P-VOL toward the TrueCopy S-VOL at the local site and as Universal
Replicator P-VOL toward the Universal Replicator S-VOLs at the Remote 1 and
Remote 2 sites; ShadowImage S-VOLs use their own mirror IDs.)

3DC Configuration items common to both Cascade and Multi-target


• Uni-directional TrueCopy links are required between sites containing
TrueCopy pairs
• Bi-directional Universal Replicator links are required between sites
containing Universal Replicator pairs
Operational minimum is two links in each direction
• The hosts at each site should be network-connected to one another to allow use of CCI
• Prevent host I/O to TrueCopy S-VOL in Local site
• Mandatory Universal Replicator Fence Level “async” for CCI
• Intermix of Hitachi enterprise storage systems
• Data volumes exist in primary, local and remote sites
• Disaster Recovery Extended license is required for VSP, HUS VM,
USP V and USP VM


Configuration Considerations
• No change in pair status conditions due to 3DC
• No change in pair operations in 3DC
Exception: Paircreate, Resync and Resync swap operations that result
in prohibited copy combinations are denied
• Pair status transitions are the same
• Maintenance procedures of the DKCs are the same
• For 3DC Async, all participating Journal groups at all sites must have
HUR 3DC enabled


Operations

To begin 3DC TC-UR replication:
• If TrueCopy pairs exist but are not members of a TC Sync Consistency
group, convert the TC pairs to a Consistency group with pairresync -fg (see
note)
• If necessary, create TrueCopy pairs in a TC Sync Consistency group
• If Cascade, then immediately split the TrueCopy groups
• Create Universal Replicator pairs, matching HUR consistency groups with
TC consistency groups
• If Cascade, resync the TrueCopy pairs

Note: CCI pairresync command with -f[g] <fence> [CTGID] (TrueCopy only):
Changes existing TC Sync volumes to a TC Sync Consistency group
Does not require deletion of the pair
Split the TC group with a normal pairsplit command, then issue pairresync -g
<grpname> -fg <fence level> <CTG ID>
Only valid for a normal resync at the primary site. Not valid with the -swaps or
-swapp option from either the primary or local site
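
A sketch of that conversion (group name TCGRP, fence level data, and CTG ID 7
are assumptions):

   pairsplit -g TCGRP               # normal split of the TC group
   pairresync -g TCGRP -fg data 7   # resync into TC Sync consistency group 7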


To begin 3DC UR-UR (3DC Async-Async) operations
• In all participating storage systems:
When creating new Journal groups, use the 3DC Async setting in the Edit
Journal Volumes panel
If existing Journal IDs will be used, then delete and create new Journal
groups with the 3DC Async setting
• Note that both HUR instances use the same M-JNL groups but different
Mirror IDs
• Create UR pairs in any order
The first copy must complete before starting the second copy


Disaster Recovery with 3DC

Review of scripted Disaster Recovery CCI commands
• horctakeover - Useful for temporary loss of primary or secondary site
Checks the specified volume's or group's attributes with paircurchk
The Takeover Switch then executes the chosen takeover function
• Temporary loss of Remote Site - P-VOL-takeover
• Both sites operational - Swap-takeover
• Temporary loss of Primary Site - S-VOL-takeover
• horctakeoff - New
Changes 3DC multi-target to 3DC cascade while host applications are running
Conditions:
• Temporary primary site failure; failover to the local site has occurred
• Execute horctakeoff
• Execute horctakeover to switch operations to the Remote Site without affecting
host operations at the local site
• Operates on either an individual volume or a volume group

Full discussion of these commands is in the Data Protection module
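
A minimal sketch (group name ORA assumed; depending on the journal
configuration, additional options may be required):

   horctakeoff -g ORA    # re-forms 3DC multi-target into 3DC cascade online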


Situation - 3DC Cascade with failure at primary site
• Split Universal Replicator pairs
• Issue either horctakeover or pairsplit -RS on S-VOLs at the local site
Forces S-VOLs to SSWS status
• Resync Universal Replicator pairs

(Diagram: with the primary site down, the TrueCopy/HUR volumes at the local
data center are forced to SSWS and production runs there; the HUR pair from
the local site keeps the remote data center's S-VOLs current.)

After the primary site recovers, execute horctakeover at the primary site to
change the primary TrueCopy volumes back to P-VOLs.


Situation: 3DC Cascade with failure at both primary and local sites
• Execute horctakeover or pairsplit -RS on Universal Replicator S-VOLs
at the Remote site
• Forces Universal Replicator S-VOLs to SSWS status

(Diagram: with both the primary and local sites down, the HUR S-VOLs at the
remote data center are forced to SSWS and production runs there.)

After the local site recovers:
Execute horctakeover to change the local site Universal Replicator volumes
back to P-VOLs
Re-establish pairs after the Primary Site recovers


Situation: 3DC Multi-Target with failure at Primary Site, failover to
Local site
• Execute horctakeover or pairsplit -RS on S-VOLs at the Local site

(Diagram: with the primary site down, the TrueCopy S-VOLs at the local data
center are forced to SSWS and production runs there; the remote data center
retains its HUR S-VOLs.)

After the Primary Site recovers:
Execute horctakeover to change the Primary Site volumes back to P-VOLs


Situation: 3DC Multi-Target with failure at Primary site, failover to
Remote site
• Execute horctakeover or pairsplit -RS on the Universal Replicator S-VOLs
at the Remote site
• Forces Universal Replicator S-VOLs to SSWS status
• When the Primary Site recovers, execute horctakeover at the primary site

(Diagram: with the primary site down, the HUR S-VOLs at the remote data
center are forced to SSWS and production runs there; the local data center
retains its TrueCopy S-VOLs.)

Situation: 3DC Multi-Target with failure at Local site
• Universal Replicator pairs are not affected
• Resync the TrueCopy pairs after the local site recovers

(Diagram: production continues at the primary data center and the HUR pair to
the remote data center keeps running; only the TrueCopy pair to the local
site is suspended until it recovers.)


3DC Disaster Recovery

3DC Multi-target failback to Primary site
• From Local Site
After recovery of former primary site function:
At the former primary site, use CCI horctakeover operations on TrueCopy
volumes
Resync or re-create the Universal Replicator pairs as needed
• From Remote Site
At the former primary site, use CCI horctakeover operations on Universal
Replicator volumes
Resync or re-create TrueCopy pairs as needed


Module Review

1. Three data center replication supports TrueCopy Asynchronous
software. True or False?
2. TrueCopy can be cascaded behind Universal Replicator. True or
False?
3. The TrueCopy Initial Copy pair can be started after the Universal
Replicator Initial Copy. True or False?



9. Delta
Resynchronization
Module Objectives

Upon completion of this module, you should be able to:


• Describe concepts and specifications of Hitachi Universal Replicator
Multi-target Delta Resync function
• Configure Multi-target Delta Resync
• Describe in detail the sequence of Delta Resync operations
• Outline the new command control interface functionality added to support
Delta Resync


Concepts

Multi-target 3DC review

Hitachi TrueCopy and Hitachi Universal Replicator in 3DC Multi-target
configuration

(Diagram: the P-VOL and journal group at the primary site replicate via
TrueCopy to an S-VOL and journal volume at the local site, and via Universal
Replicator to an S-VOL and journal group at the remote site.)


Multi-target 3DC review

Third link can be provided between local and remote sites to ensure
that a remote replication can continue if loss of primary site occurs
UR pairs can be established using third link when primary site fails

(Diagram: the same 3DC multi-target layout, with a Universal Replicator link
added between the local site's journal group and the remote site's journal
group.)

Delta Resync

Third link UR volumes are pre-created with Delta Resync option and
remain in Hold status until resynced in the event of primary UR failure

(Diagram: the same layout, with the third-link Universal Replicator pairs
between the local and remote sites held in HOLD status so that they can be
quickly resynced if the primary site fails.)


Multi-target 3DC review

VSP Remote Command Device

Remote Command Devices: When the delta resync operation has been performed
and pair status is changed to PAIR, the delta resync P-VOL must be updated from
the host for longer than five minutes. This is required to ensure internal
communications between the local and remote sites.
However, you can work around this five-minute-plus update requirement by setting
up command devices and remote command devices. With remote command devices
set up, communications between the two sites is performed automatically, and the
delta resync is ready to use when the operation is run.
This requires setting up two command devices and two remote command devices
on each site — the local, Local, and remote sites — as explained in the following
general guidelines. Consult the Hitachi Universal Volume Manager User Guide for
more complete information about remote command devices.
1. Create four command devices each on the local, Local, and remote sites.
2. Set up and dedicate two external ports and two target ports on each site for the
command/remote command devices. Configure paths between external ports
and target ports. For details about the external ports, see Hitachi Universal Volume
Manager User Guide. For instructions on setting paths, see Provisioning Guide for
Open Systems.


3. On each site, map a command device via a target port to a device on one of the
other sites. The device on the other site should be mapped to as a remote
command device, using an external port on that system.

4. Repeat the previous step so that two command devices on each site are mapped
to a remote command device on each of the other two sites.


Multi-target 3DC review

Delta Journal Operation


• Normal resync operates by transferring changed tracks noted in the
differential bitmaps
• Delta resync operates by journal copying
If TrueCopy/UR pairs are in PAIR at the time of primary site failure:
• Journal data of Delta Resync Universal Replicator Pairs in local site
matches Journal Data in Primary Site Replicator Pairs up to point of failure
• The takeover operation at the local site conditions the TrueCopy/UR S-VOLs
to be used as P-VOLs
These volumes are already the P-VOLs of the Delta Resync Universal
Replicator pairs
• Applications coming up at the local site result in new Delta Journal Data
• When Delta Resync occurs, only this new Journal Data in the local site must
be sent to the remote site

Journal data in the Local Site are copied to the Universal Replicator secondary
(Remote) site by journal copy. Only the journal data, which are not yet restored to
the secondary data volume in the Universal Replicator secondary site, are copied in
chronological order. When the journal copy completes, journal restore takes place in
the Universal Replicator secondary site.
Note: When the total capacity of stored journal data exceeds 80% of the TrueCopy
secondary site’s Delta Resync journal volume, old journal data will be automatically
deleted. Therefore, if the Delta Resync journal data volume utilization exceeds 80% the
secondary data volume will not be able to be restored completely.
In that case, according to the setting of the journal group option, all tracks of the
Delta Resync pairs in the Local Site will be copied to the Universal Replicator
secondary data volumes in the Remote Site, or delta resync operation finishes
without any processing. If the TrueCopy pairs are in pair state at the time of Primary
Site failure, or they are synchronized periodically, the total capacity of the journal data
which is not restored to the Universal Replicator secondary site will not exceed 80% of
the Delta Resync journal volume.
Usually, if the pair between TrueCopy primary site and secondary site is
synchronized periodically, the total capacity of the journal data that is not restored
to the Universal Replicator secondary site will not exceed 80% of the journal volume.
Though, for example if the Universal Replicator pair is suspended and the pair has
not been resynchronized for a long time, journal data of more than 80% of the


journal volume capacity may be stored before they are restored to Universal
Replicator secondary data volume. In such cases, you may not perform delta resync
operation properly.
Warning: Even if the capacity of the journal data does not exceed 80% of the journal
volume, note that journal data will or may be destroyed in the following cases.
When you restore the TrueCopy pair, then update the P-VOL
When you restore the Universal Replicator pair between the primary site and the
Universal Replicator secondary site, then update the P-VOL
When the retry processing occurs because of a delay of the P-VOL update
When the update of the TrueCopy S-VOL is delayed


Multi-target 3DC review

If TrueCopy volumes in a 3DC TC-UR configuration are normally run
in SPLIT status with periodic resync:
• When the TrueCopy pairs are resynced, all changed data sent to the
TrueCopy S-VOLs in the local site is reflected to the Delta JNL volumes
• By default, if Delta JNL volumes utilization goes above 80%, old JNL data
is discarded
In the event of primary site failure at this point, Delta Resync cannot be
performed
All data at the local site must be sent to the remote site
• For best results, keep the TrueCopy pairs in PAIR status

(Diagram: mirror ID association of a 3DC Multi-target Delta Resync
configuration with ShadowImage at the remote site: mirror ID 0 carries the
TrueCopy pair to the local site, mirror ID 1 the primary Universal Replicator
pair to the remote site, and mirror ID 2 the delta resync pair; ShadowImage
S-VOLs at the remote site use separate mirror IDs.)


Specifications

Delta Universal Replicator between local site and remote site

# Item                   Specification
1 Number of pairs        Normal Universal Replicator specification of 8192 data
                         volumes per Journal Group
2 JNL Groups             Normal Universal Replicator specification
3 Delta Resync Links     Normal Universal Replicator bidirectional paths required
4 Extenders between      Normal Universal Replicator specification
  Local and Remote Site

If the remote site is updated earlier than the local site, Delta resync cannot be
performed. To avoid this problem, journals arriving at the remote site are delayed
by one minute before being written to the Universal Replicator S-VOL.

Note: The total number of pairs between local site and remote site is model-
dependent (16K for Universal Storage Platform, 32K for VSP and USP V).


Pair status review


SMPL - The volume is not currently assigned to a pair. When the initial copy is
started by a paircreate operation, the volume status changes to COPY.
COPY - The initial copy operation is in progress. Data in the primary data volume
is not synchronized with data in the secondary data volume. When the initial copy
is complete, the status will change to PAIR.
PAIR - The volume is paired with another volume. The two volumes are fully
synchronized. All updates from the host to the primary data volume are duplicated
at the secondary data volume.
PSUS - The pair was split by the user (pairsplit -S). The primary data volume and
the secondary data volume are not synchronized.
PSUE - The pair has been split by the primary storage system or the secondary
storage system due to an error or failure. The primary data volume and the
secondary data volume are not synchronized.
Suspending - The primary data volume and the secondary data volume are not
synchronized. This pair is in transition from the PAIR or COPY status to the
PSUS/PSUE status.
Deleting - The primary data volume and the secondary data volume are not
synchronized. This pair is in transition from the PAIR, COPY, or PSUS/PSUE
status to the SMPL status.
(The HOLD and HLDE statuses were added for Delta Resync.)


Configuration

Preliminary Conditions:
• Full 3DC multi-target configuration is defined with all TrueCopy and
Universal Replicator volumes in PAIR status
• Necessary Universal Replicator links are defined between local site and
remote site
• Remote DKC is defined for both local and remote sites
• If desired, HDS Technical Personnel will ensure that SVP Mode 506 is set
ON for all storage systems involved

(Diagram: with the TrueCopy pair between the primary and local site journal
groups and the Universal Replicator pair to the remote site in place, the
delta resync setup steps are: 1. JNLG creation in the local site;
2. Condition check; 3. Delta Universal Replicator pair creation;
4. HOLD status.)

Requirements for Creating Universal Replicator Pair for Delta Resync Operation
To create a Universal Replicator pair for delta resync operation, the following items
are required.
Create the pair in 3DC multi-target configuration.
Use TrueCopy S-VOL in PAIR status as the primary data volume.
Use Universal Replicator data volume in PAIR status as the secondary data
volume.
Use mirror ID 2 (0 is used by TrueCopy and 1 is used by primary Universal
Replicator).
The system option mode 506 must be set to ON at all sites.
In addition to those requirements, all Universal Replicator pairs in the journal group
must satisfy the following requirements when you create more than one Universal
Replicator pair for delta resync operation.
Use the same mirror ID for all pairs.
Use the same restore journal group for all pairs.


To Create Delta Resync Pairs:


• If using 3DC TC/UR, create required JNL groups at local site
• If using 3DC UR/UR, use existing UR JNL groups
• Check that all TrueCopy and UR volumes are in PAIR status
• Using Mirror ID 2, create Delta Resync pairs using the new Journal
groups in the local site and the existing Journal groups in the remote site
Make sure to select a unique Consistency Group ID
• Check that all Delta Resync volumes are in HOLD status

(Diagram: the same setup flow: JNLG creation in the local site, condition
check, delta Universal Replicator pair creation, HOLD status.)


(Diagram: the delta resync pair is created with mirror ID 2 while the
TrueCopy pair (mirror ID 0) and the Universal Replicator pair (mirror ID 1)
are in PAIR status; the steps are JNLG creation in the local site, condition
check, delta Universal Replicator pair creation, HOLD status.)


Configure Port Attributes for the Universal Replicator links from the local
to the remote storage systems


Add DKC to both the Local and Remote Storage Systems
1. Enter the serial number of the remote DKC
Controller ID - specifies the model of the remote DKC
2. Select the ports used for Universal Replicator links
3. Click the Option button

Note: The Port column signifies the Initiator ports; the Pair-Port column
contains the associated RCU Target port

When you assign Logical Paths, use the port allocations you set as Initiator and RCU
Target. Make sure an Initiator and RCU Target are assigned together. If two
Initiators are grouped together, this will cause an error.
Display Features
DKC S/N: Allows you to enter the five-digit serial number of the remote storage
system.
Controller ID: Allows you to enter the controller ID (storage system family ID)
of the remote storage system.
Note: The controller ID for a Universal Storage Platform is 4.
Path Gr. ID: Allows you to enter the path group ID. Path group IDs are used for
identifying groups of logical paths. One path group can contain up to eight
logical paths.
Note: In the current version, you cannot enter path group IDs. Also, you cannot
clear the Default check box. The number of path groups per one remote
subsystem is always 1.
M-R Path: Allows you to specify logical paths from initiator ports on the local
subsystem to RCU target ports on the remote subsystem.
Port: Displays a list of initiator ports on the local subsystem. Select an initiator
port from this drop-down list, or type in a Port Number.


Pair-Port: Displays a list of all ports on the remote subsystem. Select an RCU
target port on the remote subsystem from this drop-down list, or type in a port
number.
Note: When specifying a port, you can use the keyboard to enter the port number.
When you enter the port number, you can abbreviate the port number into two
characters. For example, you can enter 1A instead of CL1-A. You can use
uppercase and lowercase letters.
Option: Opens the DKC Option panel
Cancel: Cancels the settings you made on the Add DKC panel and then closes
the panel.


Create Journal Group on Local Site

1. Select LDEV
2. Click Add
3. If setting up 3DC
Async-Async, select
UR 3DC setting
4. Click Set
5. Repeat this step for
the remote storage
system

The Edit JNL Volumes panel displays similar information about:
JNL Volumes - Journal Volumes
Free Volumes - Volumes not registered in journal groups


For each category of volumes:
Parity Group: Indicates the parity group or the external volume group where a
journal volume belongs
Note: If the letter ‘E’ is displayed at the beginning of a group, the group is an
external volume group. In the current version, however, the panel does not display
external volumes.
LDKC:CU:LDEV: The LCKC number is displayed to the left of the colon, the CU
number is displayed between the colons, and the LDEV number is displayed to
the right
Note: If a pound sign (#) is displayed at the end of a volume, the volume is an
external volume. In the current version, however, the panel does not display
external volumes. Consider the implications carefully before selecting an external
volume as a Journal Volume.
Capacity: Indicates the capacity of a journal volume in gigabytes
Emulation: Indicates the emulation type of the volume


Operation: Displays one of the following:


Blank: This column usually blank
Add: Indicates a volume to be added to a journal group
Delete: Indicates a volume to be deleted from a journal group
JNL Volume Buttons
Add: To register volumes in a journal group, select the volumes from Free
Volumes and click Add
Delete: To delete volumes from a journal group. Select the volumes from JNL
Volumes and click Delete
Parity Group/CU change
To change the display in the Free Volumes list:
Parity Group: Select to display volumes belonging to a parity group. Specify a
parity group number in the text boxes to the right, and then click the Show
button.
PG(Ext.): Select to display external volumes belonging to a parity group. Specify a
parity group number in the text boxes to the right, and then click the Show
button.
CU: Select to display volumes belonging to a CU. Then select a CU from the list
to the right.


3DC TC/UR Delta Resync Failure Option
• Entire - With this setting, when delta resync fails, an entire copy is
automatically executed
Recommended if the customer's priority is to come up at the local site
• None - No operation. In this case, it is necessary to delete the pair
between the Local Site and Remote Site, and to create the pair again
Recommended if the customer wants to come up on the site with the
latest data

Storage Navigator Delta Resync Failure Condition
If the Delta resync failure JNLG option is set to Entire, an entire copy from the
local site to the remote site is automatically started. Only for 3DC TC/UR 1x1x1
configurations.
If the Delta resync failure JNLG option is set to None, you have to delete the pair
between the local site and remote site, then create the pair again.

Delta Resync Universal Replicator pair creation command

(Diagram: on the paircreate panel, Mirror ID 2 is set for the delta resync
pairs, the CT Group ID is manually set to a unique value, and Delta is
selected as the Initial Copy option, creating the delta Universal Replicator
pair alongside the existing TrueCopy and Universal Replicator pairs.)
Initial Copy: Allows you to specify whether to start the initial copy operation after
the volume pair is created. The default is Entire:
Entire: The initial copy operation will start after the volume pair is created.
When the initial copy operation executes, all data on the primary data volume
will be copied to the secondary data volume.
None: The initial copy operation will not start after the volume pair is created.
The primary storage system starts copying of update data as needed.
Note: The user must ensure that the primary data volume and secondary data
volume are already identical when using None.
Delta: An initial copy operation will not start after the volume pair is created.
The status of the volume pair will change to HOLD which means that the pair is
for delta resync operation.
Note: Manually set CT Group ID to unique value.


CCI Support for Delta Resync

Delta Resync CCI paircreate - no copy suspended option (-nocsus)

# Mgmt Software           Command     Description
1 Storage Navigator GUI   Paircreate  Select Delta as the Initial Copy option on the
                                      paircreate operation panel
2 CCI                     paircreate  Specify the -nocsus option

paircreate command with the -nocsus option creates the Delta Resync pairs
• paircreate -g G3 -vl -nocsus -f async <ctgid> -jp <id> -js <id>
The normal CCI pairresync command will execute Delta Resync
when all conditions are met

-FHORC [MU#] or -FCA [MU#]
This option is used to create the cascading configuration with the -g <group> and
-gs <group> options from the local node (takeover node).
-g <group> is used for specifying the cascading P-VOL, and the -gs <group> option
is used for specifying the cascading S-VOL.
This operation ignores the -vl or -vr option, because the S-VOL will be specified
with the -gs <group> option.
-gs <group>
This "s" option is used to specify a group name for the cascading S-VOL (defined
in the configuration definition file).

Specifications

Delta Resync Status Conditions

# Pair/JNLG    Status  Meaning
1 Pair status  HOLD    The journal data for delta resync operation is stored.
               HLDE    The journal data for delta resync operation cannot be
                       created due to failure. (HLDE: HoLD Error)
2 JNLG status  HOLD    The journal data for delta resync operation is stored.
               HLDE    The journal data for delta resync operation cannot be
                       created due to failure.

(Diagram: pair status displayed on the SVP: HOLD while journal data for delta
resync can be created and stored, HLDE when journal data cannot be created
for some reason.)


Sequence of Delta Resync

To execute delta resync:
• Split the primary Universal Replicator group
• Confirm the status of all pairs in the primary Universal Replicator group is
PSUS, PSUE, or HOLD
• Issue either a takeover operation or the CCI command pairsplit -RS on the
TrueCopy S-VOLs (UR S-VOLs if 3DC Async) at the Local site
Status of S-VOLs must be SSWS or PAIR with reversed copy direction
• Issue the Delta Resync command from Storage Navigator or with the CCI
pairresync command
System Option Mode 506 - Recommended
• Controls whether or not Delta Resync will result in a full-volume copy if
there is no Delta Resync JNL Data to be sent (see note)
• If JNL Update data is present, the Delta Resync Universal Replicator pair
will change directly to PAIR
• If the Journal at the Local site fills, Delta Resync will fail (see note)

Note: The delta resync operation steps include first using journal copy to copy the
journal data in the Local site to the UR secondary site. Only the journal data that is
not yet sent to the UR secondary site are copied in chronological order. So, if no
changed journal data is present at the local site, no delta journal is available, and the
delta resync fails. It will default to full volume copy in this case.
In delta resync operation, the status of the UR pair changes to PAIR (not COPY).
This is because the delta resync operation sends journal updates (not changed tracks
from the differential bitmap). Therefore, delta resync operation requires less time to
recover the UR pair after a failure occurs.
When the total capacity of stored journal data exceeds 80% (PFUL Status) of the
journal volume of TrueCopy secondary site, old journal data is automatically
deleted. Therefore, the secondary data volume is not completely restored to the UR
secondary site. In that case, either the entire primary data volume is copied to the
secondary data volume (ALL JOURNAL Copy) or delta resync operation finishes
without any processing.
System Option mode 506 Universal Replicator, Universal Replicator for z/OS
enables Delta Resync with no host update I/O by copying only differential JNL
instead of copying all data.


Mode 506 = ON: Without update I/O: Delta Resync is enabled. With update I/O:
Delta Resync is enabled.
Mode 506 = OFF (default): Without update I/O: Total data copy of Delta Resync
is performed. With update I/O: Delta Resync is enabled.
Note: Even when mode 506 is set to ON, the Delta Resync may fail and only the total
data copy of the Delta Resync function is allowed if the necessary journal data does not
exist on the primary subsystem used for the Delta Resync operation.
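
A hedged end-to-end sketch of this sequence (group names URGRP for the
primary UR pairs, TCGRP for the TrueCopy pairs, and DELTA for the delta
resync pairs are assumptions):

   pairsplit -g URGRP       # split the primary UR group (PSUS)
   pairsplit -g TCGRP -RS   # at the local site: force TC S-VOLs to SSWS
   pairresync -g DELTA      # delta resync; HOLD changes directly to PAIR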


Delta Resync From Local Site

(Diagram: when the delta resync pair status is HOLD, the Resync Mode on the
pairresync panel is Delta; issuing it activates the delta Universal
Replicator pair between the local and remote sites.)


Configuration

Recovery from HLDE status:
• Check that the status is HLDE
• Resync the Delta Pair with the Resync Mode option Return to Standby
• Status should change to HOLD

(Diagram: when the status is HLDE, the Resync Mode on the pairresync panel is
Return to Standby; after the resync, the pair status returns to HOLD.)


Delta Resync with CCI
• During HOLD ERROR (HLDE) recovery
• During Delta resync

Delta Resync pairresync command options

# Aim            Mgmt Software      Command     Description
1 Recovery from  Storage Navigator  pairresync  Select Return to standby as the resync
  HLDE                                          mode on the pairresync panel
                 CCI                pairresync  No option necessary; recovery will
                                                occur if conditions are met
2 Delta Resync   Storage Navigator  pairresync  Select Delta as the resync mode on
                                                the pairresync panel
                 CCI                pairresync  No option necessary; delta resync will
                                                occur if conditions are met


Sequence of Delta Resync

After the Delta resync operation, Universal Replicator between the NEW
primary site and the remote site transitions to PAIR status
If the TrueCopy direction is reversed, the configuration becomes a
mirror image of the initial configuration

(Diagram: the application now runs at the new primary site; TrueCopy
replicates to the new local site, which creates journal data for delta
resync, with its delta pair held in HOLD status while the active pair is in
PAIR.)


Review

3DC Multi-target Delta Resync Checklist:


• Using Storage Navigator:
1. Define Port Attributes for the Delta Resync links from the local CU to
the remote CU
2. Add a remote DKC definition to the local site
• This activates the links added above
3. Create Journal Groups in the local site (to be used for the Delta
Resync pairs)
4. If 3DC UR-UR is required, use 3DC Async setting for all Journal
groups
5. Create Pairs using correct Mirror ID and Initial Copy Delta
• Delta pairs can be created with CCI if desired
6. Issue Delta Resync either from Storage Navigator or CCI


Module Review

1. The mirror ID of the Delta Resync copy group must be different
from that of the Universal Replicator copy group. What mirror ID do
we use for Delta Resync?
2. P-VOLs of the Delta Resync group are the TrueCopy S-VOLs. True
or False?
3. What must be the S-VOLs of the Delta Resync pairs?



10. Data Transport
Technologies
Module Objectives

Upon completion of this module, you should be able to:


• Identify supported topologies for Hitachi Fibre Channel (FC) replication
links
• Identify transport technologies
• Identify the different SAN extension options and compare their
operational characteristics:
Dark fiber
Synchronous Optical Network (SONET)
Synchronous Digital Hierarchy (SDH)
Coarse Wavelength Division Multiplexing (CWDM)
Dense Wavelength Division Multiplexing (DWDM)
• Examine the role of buffer credits in flow control for optical networks
• Identify the advantages of Fibre Channel over Internet Protocol (FCIP)


Supported Link Topologies

Universal Replicator Fibre Channel Connections (topologies)

• Direct connect
• Extended connection through dedicated switches
• Extended connection through DWDM
• Extended connection through WAN converter


Direct Connect: Port Setting = FABRIC OFF, FC-AL (see note)

• Simplest Universal Replicator configuration
• No switches necessary. Runs on dark fiber
• Distance Limitations:
   Shortwave multimode Fibre Channel - 500 meters
   Longwave single-mode Fibre Channel - 10 km

[Diagram: replication links running directly between Data Center A and Data Center B]

Note: Hitachi implements the Point-to-Point configuration as a special case of a two-node loop called Direct Connect. Fibre Channel settings for Direct Connect are FABRIC OFF, FC-AL (same as Arbitrated Loop). Standard point-to-point Fibre Channel (for example, without a converter or extender) uses a short-wave optical connection between N-ports at distances up to 500 meters.
The Universal Replicator link connection is dedicated and must not be mingled with the regular SAN. The application writing data to the Universal Replicator P-VOL cannot be allowed to "see" the secondary volume of the pair (and vice versa); to do so would cause an LVM error. Therefore, some method of hiding each volume of the pair from the other host's operating system must be used. Unless otherwise noted, this requirement exists for all configurations.
Each Fibre Channel link consists of an optical fibre pair. Even though there are two fibres in the pair, they are used to move data in a one-way fashion. As a pair, the bulk of the traffic is intended to be in a single direction (for example, Initiator to RCU Target), with the second fibre used just for acknowledgements.
When Fibre Channel multimode shortwave connections are used, two switches are required for distances greater than 0.5 km (1,640 feet), and distances up to 1.5 km (4,920 feet, 0.93 miles) are supported.
When Fibre Channel single-mode longwave connections are used, two switches are required for distances greater than 10 km (6.2 miles), and distances up to 30 km (18.6 miles) are supported.


Dedicated pairs of FC switches

• Distance Limitations
   Shortwave multimode Fibre Channel - 1.5 km
   Longwave single-mode Fibre Channel - 30 km
• "Hard-zone" switches to prevent host access to links

[Diagram: switches used in matched pairs between Data Center A and Data Center B; remote copy links and host ports must be zoned separately]


Dense Wavelength Division Multiplexer (DWDM)

• Medium Distance to Long Distance FC
• If DWDM converters do not provide buffer crediting
   Use FC switches for buffer credits
   Hard-zone switches to prevent host access to links

[Diagram: data centers connected through paired DWDM devices]

This topology involves the use of a passive optical multiplexing device known as a Dense Wavelength Division Multiplexer (DWDM). In this case, the regular storage area network (SAN) and Universal Replicator traffic can share the DWDM link as long as no cross connections to Universal Replicator ports are possible.
Note: When using DWDM or other passive devices (at distances of more than a few km) that do not perform data store-and-forward or some sort of adequate buffer crediting, it is a requirement to insert a Fibre Channel switch (as shown) on each side of the passive DWDM link to provide a buffer credit function.


Switched Converters - Any distance

• Converters can be FCIP, ATM, Frame Relay, and more.
• Hard-zone switches to prevent host access to links.
• WAN optimizers can provide additional functionality such as compression.

Note: All Universal Replicator link traffic must be in an isolated zone. Hosts must not be able to access any ports which connect to Initiator ports.
This interconnect option involves the use of a box that converts Fibre Channel to something else suitable for very long distance switched-circuit (such as T3 or ATM) or switched-packet (FCIP) transmission.
The requirement for a switch (between the array and converter) is to translate from an N or NL port to an E-Port. The regular SAN can share the same converter boxes only in a way that prevents hosts owning the primary volumes from accessing Universal Replicator ports. Beyond this, the need for LUN Security still applies.


Transport Technologies

Local, Metro and Wide Area Networks (LAN, MAN, WAN)

Technology            Speed                                   Medium                            Application
Frame Relay           1.54Mb/sec - 44Mb/sec                   Twisted pair/coax                 WAN
T-1, T-2, T-3         1.54Mb/sec - 44Mb/sec                   Twisted pair/coax, optical fiber  WAN
Ethernet              10Mb/sec, 100Mb/sec, 1Gb/sec, 10Gb/sec  Twisted pair/coax, optical fiber  LAN to MAN
SONET/SDH             51Mb/sec - 40Gb/sec                     Optical fiber                     MAN
Optical Carrier (OC)  51Mb/sec - 40Gb/sec                     Optical fiber                     WAN

Long Distance

Frame Relay
• Encapsulates IP data packets within frame relay packets
• Efficient protocol-independent WAN transport medium
• Can transmit voice and data
• Lower cost and higher performance than T1 or T3
• Typical speeds - dialup, T1/ T3
• Still prevalent in rural areas
Telephone T Links (also called Digital Signal Links)
• Voice and data
• T1: 1.544Mb/sec
• T3 (comprised of 28 T1 lines): 44.736Mb/sec
• Fractional T3: 3 to 45Mb/sec
• Fractional T1: 256/ 384/ 512/ 768Kb/sec

Frame Relay
Frame relay has been a popular Wide Area Network protocol. A company running
Ethernet can send the Ethernet protocol across a carrier's frame relay network and
have it come out at the destination location in the Ethernet format. Frame relay
encapsulates the data packets being sent inside of the frame relay packet, then
breaks the frame packet apart once it arrives at the destination location. Frame relay is
very good at efficiently handling high-speed data over wide-area networks;
specifically LAN to LAN communications. It offers lower costs and higher
performance for those applications in contrast to the traditional T1 or T3 services. As
the frame relay network is a shared, switched network, there is no need for
dedicated private lines, although special-purpose local loops (either DS0, T1 or T3
level connections) connect each location to a frame switch.
Frame Relay can be deployed using typical connection speeds including dialup, DS0,
T1, and T3. Most companies use T1 loops which allow for port speeds from 56k to
1.5 Mb/sec. If a port speed of 56k will suit your needs, there is no need to purchase a
T1 local loop. A DS0 (one channel of a T1) will provide up to 64 Kb/sec of
throughput for far less than a T1 would cost.
Telephone T Links
T1 is a digital network (1.544Mb/sec) implemented in the early 1960s by AT&T to
support long-haul pulse-code modulation (PCM) voice transmission. There are also
T1-C, T2, T-3 and T-4 networks. T1-C operates at 3.152 Mb/sec. T-2, implemented in the early 1970s to carry one Picturephone or 96 voice channels, operates at 6.312
Mb/sec. T-3 operates at 44.736 Mb/sec and T-4 operates at 274.176 Mb/sec.
T3 (also known as a DS-3) is equal to approximately 672 regular voice-grade
telephone lines, which is fast enough to transmit full-motion, real-time video, and
very large databases over a busy network. A T3 line is typically installed as a major
networking artery for large corporations and universities. A T3 line is comprised of
28 T1 lines, each operating at a total signaling rate of 1.544 Mb/sec.
Fractional T3
Fractional to full DS3 or T3 circuits run from speeds of 3 Mb/sec up to 45 Mb/sec. A
fractional T3 is similar to a full T3, only with some of the channels turned off.
Unfortunately, the T3 loop is still required for this service.
Fractional T1
Essentially a T1 line with some of the channels turned off. Typical speeds for
fractional lines are 256, 384, 512 and 768 Kb/sec. Most providers that offer full
connections also offer fractional service.


SAN Extension to MAN

SONET / SDH

Transmits multiple bit streams over Optical Fiber

• Synchronous Optical Network (SONET); Synchronous Digital Hierarchy (SDH) in Europe
• Now supports Ethernet and DWDM
• Guaranteed performance and quality of service (QoS)
• Performance Issues:
   20-25 microseconds delay at each transmission node
   Each additional intermediate through-node introduces another 10 microsecond delay
   Supported synchronous distance limit is about 300 km

The maximum allowable delay for synchronous replication tasks is generally considered to be one millisecond—500 microseconds each way. Considering that fiber propagation introduces an approximate five microsecond delay for each 0.6 miles (1 km) traveled, the aggregate delay of processing Fibre Channel over SONET/SDH can quickly add up, limiting the distance between the source and target of a replication process to around 300 km.


Dark Fiber

Native Fibre Channel sent directly over dark fiber infrastructure (unused fiber laid by Telcos)
Eliminates the need for additional media and transmission conversion equipment
Distance dependent on:
• Type of fiber deployed
   Longwave Single Mode - 10 km
   Shortwave Multi-mode - 500 m
• Use buffer-to-buffer credits to extend the connection
Fast, low cost deployment

Coarse Wavelength Division Multiplexing (CWDM)

Uses existing dark fiber
Uses Optical Add/Drop Multiplexer filters at each end of the network
• Multiplex and de-multiplex multiple CWDM wavelengths
• Provides up to eight channels of traffic over a single fiber pair
• Carries Fibre Channel, FICON, ESCON, others
• Attenuation reduces the distance between end points to 66 km


Dense Wavelength Division Multiplexing (DWDM)

Optical networking solutions use DWDM technology and supply the infrastructure for high availability of both synchronous and asynchronous data mirroring
DWDM provides a higher concentration of fiber and increased distance with a single-mode connection
It does not have to be dedicated solely to the disk mirroring application
Protocol independent:
• Supports Fibre Channel, ESCON, FICON, Gigabit Ethernet over FC, others
Write acceleration feature allows the extension of the synchronous replication distance limitation

Dense Wavelength Division Multiplexing (DWDM)

Overview of DWDM

[Diagram: independence of bit rates and formats]

• Merges optical traffic onto one common fiber
• Allows high flexibility in expanding bandwidth
• Reduces costly mux/demux function, reuses existing optical signals
• Affected by signal loss (attenuation)

DWDM is a technology that puts data from different sources together on an optical fiber, with each signal carried at the same time on its own separate light wavelength. In addition, DWDM provides:
Separate wavelengths or channels of data can be multiplexed into a light stream transmitted on a single optical fiber.
Each channel is demultiplexed at the end of the transmission back into the original source. Different data formats being transmitted at different data rates can be transmitted together.
Signal Loss
Most general-purpose optical fiber being installed today exhibits loss of 4 to 6 dB per km (a 60% to 75% loss per km) at a wavelength of 850 nm. When the wavelength is changed to 1300 nm, the loss drops to about 3 to 4 dB (50% to 60%) per km. At 1550 nm, it is even lower. Premium fibers are available with loss figures of 3 dB (50%) per km at 850 nm and 1 dB (20%) per km at 1300 nm. Losses of 0.5 dB (10%) per km at 1550 nm are not uncommon. These losses are primarily the result of random scattering of light and absorption by actual impurities in the glass.
The implication is that the greater the loss, the shorter the fiber optic ring will have to be. New fiber optic technologies are being developed to reduce the loss incurred by the fiber itself.
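As a quick check of these figures, dB loss maps to percentage loss as 1 - 10^(-dB/10); a one-line sketch using any POSIX awk:

    # 4 dB of loss per km -> roughly 60% of the light lost per km
    awk 'BEGIN { db = 4; printf "%.0f%% lost per km\n", (1 - 10^(-db/10)) * 100 }'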


Dense Wavelength Division Multiplexing (DWDM)

Deployment in Geographically Dispersed SANs (GeoSANs)

• Performance
   Current DWDM devices employ Write Acceleration, buffer crediting, caching and other similar features to improve performance and extend synchronous replication distances
• Provides Guaranteed Quality of Service:
   Bandwidth allocated to each channel is fixed, not shared
   As additional data, ports or wavelengths are added, the bandwidth allocated to each port stays constant

Synchronous versus Asynchronous Replication:
There is a distinct difference in the performance between synchronous and asynchronous applications, that is, whether or not the application must wait for an acknowledgement to a data transfer command before resuming processing. Even though optical networks can offer transmission at wire speeds, when traversing tens of miles even at the speed of light, propagation delay and device turn-around time become the new limiting factors.


Optical Flow Control

Protocol Basics

A variety of mechanisms designed into hardware components of the network to:
• Increase the distance between two end-points in a storage network
• Optimize the round-trip traffic between a local and remote data center
• Use compression to reduce bandwidth consumption
• Provide network protocol optimizations such as additional inline buffer credits to extend the reach of a Fibre Channel SAN, without reducing the line rate


Buffer Credits

Usage
• Each port must have a buffer available for each Fibre Channel frame that
is sent across the cable or fibre
• The time taken to send a frame across increases as distances increase
• In general, with 2Gb/sec Fibre Channel, one buffer credit is required to
transmit 2KB frames across 1 km
• This translates to ten buffer credits required to transmit 2KB frames
across 10 km
• At 4Gb/sec, each frame would occupy one-half kilometer, requiring 20
credits to keep the 10 km pipe full

Each FC device has a buffer. The size determines how many non-stop frames can be
sent to that device. Each FC device tells other FC devices how large a buffer it has.
When received frames are processed and moved out of the buffer space, the
receiving device tells other devices it has available buffer space again. The term for
this is buffer credits.
The following tables summarize buffer credits required to sustain 1 Gb/sec and 2 Gb/sec throughputs over varying distances.

For sustained throughput of 1 Gb/sec:

Buffer Credits   Distance
1 Credit         2 km
5 Credits        10 km
25 Credits       50 km
50 Credits       100 km
500 Credits      1,000 km

For sustained throughput of 2 Gb/sec:

Buffer Credits   Distance
1 Credit         1 km
5 Credits        5 km
10 Credits       10 km
50 Credits       50 km
100 Credits      100 km
1,000 Credits    1,000 km
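The tables reduce to a rule of thumb — about one credit per kilometer for every 2 Gb/sec of link speed, assuming full-size (~2KB) frames. A small shell sketch of that rule (an estimate only, not vendor sizing guidance):

    # credits ~= distance_km * (link_gbps / 2)
    distance_km=10
    link_gbps=4
    awk -v d="$distance_km" -v g="$link_gbps" \
        'BEGIN { printf "~%d credits to keep a %d km pipe full at %dGb/sec\n", d * g / 2, d, g }'

For the 4Gb/sec, 10 km case on the previous slide this gives the same 20 credits as the bullet list.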


Buffer Credits

Buffer Credits are like a conveyor belt carrying buckets
Sending and receiving ports negotiate the number of filled buckets that can be sent
As buckets are emptied at the receiving port and returned, they are re-filled and sent on their way

[Diagram: data blocks wait for buckets at the sending port; full buckets travel to the receiving port, where buffered data blocks head toward cache; empty buckets return to the sending port]


SAN Extension to WAN

Fibre Channel over IP (FCIP)

"Tunnels" Fibre Channel over IP-based Networks

• Allows interconnection of Fibre Channel SANs with TCP/IP used as the underlying wide-area transport over LANs, MANs, and WANs
• IP provides congestion control and in-order delivery of data
Operation:
• Frames originating on one SAN are wrapped in IP packets and forwarded to the destination SAN
• At the receiving end, the IP header is removed and native Fibre Channel frames are delivered to the fabric
• The only devices that talk IP are the FCIP gateways
• FCIP tunneling requires both IP and Fibre Channel management applications

FCIP
Fibre Channel SANs can be interconnected to meet the needs for remote storage
access. However, by combining IP networking with SAN technology you can extend
the interconnectivity of the SANs across much longer distances. FCIP provides the
transport for traffic going between specific Fibre Channel SANs over LANs, MANs,
and WANs.
FCIP is used to tunnel Fibre Channel traffic between two geographically separate
Fibre Channel SANs. Frames originating on one SAN are wrapped in the IP packets
and forwarded to the destination SAN. At the receiving end the IP header is
removed and native Fibre Channel frames are delivered to the fabric. A Fibre
Channel fabric switch then makes the decision about which end device the frame is
intended for. In terms of discovery, the only devices that have IP addresses are the
FCIP gateways themselves. IP discovery is thus limited to the FCIP gateways, while
Fibre Channel discovery and management is still required for the storage end
devices. Since FCIP tunneling requires both IP and Fibre Channel management
applications, additional overhead is necessary for a tunneled solution.


Fibre Channel over IP (FCIP)

FCIP Protocol
• Transparent Operation for Local and Remote SANs
   FCIP gateways (converters) are the only devices that need to be aware of FCIP encapsulation
   It appears like Fibre Channel to the SAN, and IP to the Local / Metro / Wide Area network
   It allows usage of high-performance backbone-quality IP pipes

[Diagram: a Fibre Channel frame encapsulated within an IP datagram]

This protocol is entirely transparent to existing Fibre Channel SANs and leverages the infrastructure of a modern MAN/WAN network. Some application problems that can be successfully solved using the FCIP protocol are remote backup, data recovery, and shared data access. With high-speed MAN/WAN communications, you can also use synchronous data mirroring and shared distributed access to data storage systems.


Fibre Channel over IP (FCIP)

Latest FCIP technology does not allow Class F admin traffic across the IP Network

[Diagram: local and remote Fibre Channel SANs, each with its own principal switch (one per fabric), joined by FC switches and FCIP converters across a LAN/MAN/WAN IP network; the local SANs remain independent]

Principal Switch Considerations

One principal switch per Fibre Channel fabric
In a multi-site, single Fibre Channel fabric deployment, the principal switch can be only at one site at any moment


WAN Technology

FCIP Review
• Considerations for IP transport
Multiple, geographically separated storage systems that need to be
connected for remote copy applications
Cost concern for DWDM, other network options
Preference of IP over Fibre Channel for backbone
Mature technology - confidence in operation
Remote site data vaulting, data recovery, backup
Requires T3 or faster IP network
Switch administration traffic issues fixed in latest generation
technology


SAN Extension Technology Comparison

Review of commonly available transport media

Connectivity: Dark Fiber
   Pros: Bandwidth; highest quality of service
   Cons: Cost; availability; complexity
   Used for: SAN extension and synchronous replication

Connectivity: DWDM
   Pros: Bandwidth; highest quality of service
   Cons: Cost
   Used for: SAN extension and synchronous replication

Connectivity: Optical Carrier Networks
   Pros: Quality (packet loss and latency); availability; cost efficiency
   Used for: Could be shared or dedicated for different network services

Connectivity: Ethernet (IP) Networks
   Pros: Lowest cost; shared with other data services; highest availability
   Cons: Requires Fibre protocol conversion; highest protocol overhead; latency jitter due to routing
   Used for: Most widely used network services

Module Review

1. What are the basic variations of Universal Replicator Fibre Channel connection topologies?
2. Describe how FCIP handles Fibre Channel frames.
3. Describe the DWDM transport medium.

11. Hitachi Replication Manager Overview
Module Objectives

Upon completion of this module you should be able to:

• Understand the purpose, benefits and components of Hitachi Replication Manager (HRpM)
• List the prerequisites for Replication Manager
• Describe configuring information sources and refresh intervals
• Describe Users and Resource group features
• Describe how to configure and manage Universal Replicator, TrueCopy, ShadowImage, and Copy-on-Write Snapshot with Replication Manager
• Describe the monitoring and alert functions
• Describe how Replication Manager supports application replica management


Purpose and Benefits

Hitachi Replication Manager

[Diagram: Replication Manager layers configuration, scripting, task/scheduler management and reporting on top of Thin Image/Copy-On-Write Snapshot, ShadowImage, TrueCopy and Universal Replicator, working through Business Continuity Manager and CCI - cross-product management]

Replication Manager gives an enterprise-wide view of replication configuration, and allows configuring and managing from a single location. Its primary focus is on integration and usability.
For customers who leverage in-system or distance replication capabilities of their storage arrays, Hitachi Replication Manager is the software tool that configures, monitors, and manages Hitachi storage array based replication products for both open systems and mainframe environments in a way that simplifies and optimizes the:
Configuration
Operations
Task management and automation
Monitoring of the critical storage components of the replication infrastructure
Note: Hitachi Open Remote Copy Manager (HORCM) is the name of the CCI executable.


Configures, monitors, and manages Hitachi replication products for open systems and mainframe environments

Replication Configuration Management
• Enables users to set up all Hitachi replication products without requiring other tools, for both local and remote storage systems
Multiple User Design and Role-based User Access Control
• Achieves stringent access control for multiple users
Task Management
• Allows scheduling and automation of the configuration of replicated data volume pairs

Hitachi Replication Manager configures, monitors and manages Hitachi replication products on both local and remote storage systems. For both open systems and mainframe environments, Replication Manager simplifies and optimizes the configuration and monitoring, operations, task management and automation for critical storage components of the replication infrastructure. Users benefit from a uniquely integrated tool that allows them to better control recovery point objectives (RPOs) and recovery time objectives (RTOs).


Features

Pair Lifecycle Management
• Simplified replication configuration from setup to deletion
   Setup > Definition > Creation (Initial copy) > Operation > Monitoring > Alerting > Deletion
   Provides GUI-based editing and management (shutdown/restart of instances) of underlying horcm.conf files (a sample file appears after this list)
Storage System Configuration Functions
• Set up functionality required for copy pair management
   Setting command devices, DMLU, Journal Groups and pools
   Setting up Remote Paths for remote replication
Copy Pair Creation or Deletion
• Pair Configuration wizard
   Intuitive pair definition screen with topological view
• Task Scheduler
   Scheduler functionality that allows users to execute the copy operations at off-peak time
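For reference, a minimal horcm.conf of the kind Replication Manager edits behind the scenes; every name, address, serial number, and LDEV below is illustrative only:

    # horcm0.conf - one CCI instance on a pair management server
    HORCM_MON
    # ip_address   service   poll(10ms)   timeout(10ms)
    localhost      horcm0    1000         3000

    HORCM_CMD
    # in-band command device on the local array
    \\.\CMD-410001:/dev/sd

    HORCM_LDEV
    # dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
    URGRP         urdev01    410001    00:10            h0

    HORCM_INST
    # dev_group   ip_address   service
    URGRP         remotehost   horcm1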


Centralized Monitoring - Provides a quick alert mechanism of potential problems using SNMP or email
• Unexpected changes in copy status
• Exceeded user-defined thresholds
   Resource utilization (journals and sidefile)
   Recovery Point Objective (RPO) of Target Copy group

Replication Manager can send an alert when a monitored target, such as a copy pair
or buffer, satisfies a preset condition. The conditions that can be set include:
thresholds for copy pair statuses, performance information, and copy license usage.
You can specify a maximum of 1,000 conditions.
Alert notification is useful for enabling a quick response to a hardware failure or for
determining the cause of degradation in transfer performance. Alert notifications are
also useful for preventing errors due to buffer overflow and insufficient copy
licenses, thereby facilitating the continuity of normal operation. Because you can
receive alerts by email or SNMP traps, you can also monitor the replication
environment while you are logged out of Replication Manager.


Components

Replication Manager components

• Management Server with Device Manager and Replication Manager installed
• Pair Management Server (Open Systems) - CCI host
   Device Manager agent
   RAID Manager (CCI)
• Pair Management Server (Mainframe)
   Business Continuity Manager or Mainframe Agent
• Host (Application Server)
   Application Agent
• One Hitachi Replication Manager Server can manage and monitor volumes from multiple Hitachi Device Manager Servers

Management Server: A management server provides management information in response to requests from management clients. Device Manager is prerequisite software for Replication Manager; Replication Manager and Device Manager are installed on the same management server. If multiple sites are used, a management server is required for each site. Also, the management server at the remote site can be used to manage pairs when the local site management server fails.
Pair Management Server (open systems/mainframes): A Pair Management server collects management information, including copy pair statuses and performance information for remote copying. If multiple sites are used, at least one Pair Management server is required for each site. More than one Pair Management server can be set up at each site.
   CCI and a Device Manager agent are installed on each Pair Management server for open systems
   Business Continuity Manager or Mainframe Agent is installed on each Pair Management server for mainframes
A Pair Management server can also be a host (Application server).
TIP: When determining whether to set up Pair Management servers to be independent of hosts, consider security and the workloads on the hosts.
Host (Application Server): Application programs are installed on a host. The
Device Manager agent is optional if the server is used as a host but is not a pair
management server.


Initial Setup

Prerequisites validation
Register all information sources
Refresh information in Replication Manager
Set up refresh and monitoring parameters
Set up users and resource groups
Organize resources - Sites


Prerequisite Software

Install Device Manager, add Replication Manager license

Before starting Replication Manager operations, confirm that:
• Pair Management servers are set up with:
   Device Management Agent
   CCI
   Command Device
• The resources are added to Device Manager:
   Storage Systems (HRpM 7.3.1 required to manage HUS VM)
   Pair Management servers
   Hosts (Device Manager agent optional)
• License keys are installed for replication products, Device Manager and Replication Manager
• Microcode versions are at recommended levels, as required for the program products

Hitachi Device Manager agent: After the Device Manager agents are installed,
configure hdvmagt_account.bat and hdvmagt_schedule.bat
Configure Hitachi Device Manager: After installing Device Manager software, add
the storage systems, hosts, and Pair Management servers to be managed in
Replication Manager.
Note: HDvM 7.0 supports agent-less discovery of hosts using the host data collector.
The agent-less discovery is used for reporting host information and does not
support replication operations.
For performing replication operations using Replication Manager, a Pair Management Server must be set up with the HDvM Agent, CCI, and a Command Device.


Launching

Launch from Hitachi Command Suite main window


Launch from URL

http://<server IP address>:23015/ReplicationManager/
or
http://<server hostname>:23015/ReplicationManager/

In the Web browser address bar, enter the login URL for the management server
where Replication Manager is installed. The Back To Login window appears,
followed by the User Login window.
When you log in to Replication Manager for the first time, you must use the built-in
default user account and then specify Replication Manager user settings. The user ID
and password of the built-in default user account are as follows:
User ID: System
Password: manager (default)
If Replication Manager user settings have already been specified, you can use the
user ID and password of a registered user to log in. If you enabled authentication
using an external authentication server, use the password registered in that server.


Register Information Sources

Provide environment configuration to Replication Manager
Interface with the managed resources for replication operations
Device Manager server on which Replication Manager is installed is automatically registered as an information source
Can register up to 99 additional information sources
Possible information sources:
• Device Manager Server
• Application Agent (MS-SQL and MS-Exchange)
• BC Manager and Mainframe Agent

Before you can use Replication Manager to manage resources, you must register an
information source. In open systems, this information source is the Device Manager
server. In mainframe systems, this information source is either Business Continuity
Manager or Mainframe Agent. Once the information sources are registered, you can
view host information, information about the connected storage systems, and copy
pair configuration information as Replication Manager resources. You can register a
maximum of 100 information sources.


Device Manager agent

Common agent for Device Manager and Replication Manager

Download the agent installer from the Device Manager Web Client
Install the agent using an operating system account with administrator or root permissions
To operate CCI instances running on the Device Manager agent, the service permissions must be changed from LocalSystem to an operating system user with administrator permissions

Note: Refer to the Hitachi Device Manager Agent Installation Guide.

Device Manager agent

Device Manager agent: Runs on a host to collect host and storage system information, and reports that data to the Device Manager server

• Host machine information, such as host names, IP addresses, Host Bus Adapter (HBA) Worldwide Name (WWN), and iSCSI name
• Information about LDEVs allocated to the host, such as LDEV number, storage system, Logical Unit Number (LUN), and LDEV type
• Information such as file system types, mount points, and usage
• Copy pair information, such as pair types and statuses
Replication Manager management server uses this information for displaying and managing the pair information

Device Manager agent

Add new Information Source

Ensure that you have the following Device Manager server information:
IP address or host name
Protocol to be used for communication with Replication Manager (HTTP or
HTTPS)
Port number (the server.http.port value in the server.properties file for the
Device Manager server)
User ID and password with which you can log in to the Device Manager server


Refresh Configuration from Information Sources

Refresh Setting globally for Pair Management Servers and Device Manager Server

Refresh Interval Settings for Agent
Specify the copy pair status refresh interval for the pair management server that belongs to the information source. If you change the pair status refresh interval settings in this item, the new settings replace the settings made for each pair management server in the Edit Interval of Refresh Pair Status - pair-management-server-name dialog box.
Refresh Interval Settings for Device Manager
Specify the copy pair status refresh interval by refreshing Device Manager when monitoring copy pairs that are not managed by the pair management server.


Collect the latest data from information sources

On your first login to Replication Manager, execute the Refresh Configuration option to ensure that the Replication Manager repository gets synchronized with the local Device Manager server. Any addition of a new Information Source should be followed by a Refresh Configuration.


Managing Users and User Permissions

Implements access control in two ways:
• User permissions restrict operations that users can perform
• Resource groups restrict the range of resources that users can access
Provides the following security functions:
• Sets password policy to prevent users from specifying easy-to-guess passwords
• Enables automatic locking of user accounts if successive login attempts fail
• Displays a warning banner in the user login window

Built-in user ID System lets you manage all users in Hitachi Command Suite.
• Cannot change or delete this user ID or its permissions

Sites Views

Groups resources by site for easier GUI management

• Logical sites:
   Consist of hosts, storage systems, applications and copy pair configuration definitions (pair management servers)
   In complex replication environments, storage systems might be located at many sites
   Simplifies resource management
• Provides same functionality as the existing resources drawer with logical structure
• Users can define sites and register resources into them
• A resource can belong to only one site


Two sites - Atlanta and Dallas


Launching Views

Four views allow users to understand the replication environment depending on the perspective:
• Storage Systems view
• Hosts view
• Pair Configurations view
• Applications view

Replication Manager provides the following four functional views that allow you to visualize pair configurations and status of the replication environment from different perspectives:
Hosts view: This view lists open hosts and mainframe hosts and allows you to confirm pair status summaries for each host.
Storage Systems view: This view lists open and mainframe storage systems and allows you to confirm pair status summarized for each. A storage system serving both mainframe and open system pairs is recognized as two different resources in order to differentiate open copy pairs and mainframe copy pairs.
Pair Configurations view: This view lists open and mainframe hosts managing copy pairs with CCI or BCM and allows you to confirm pair status summarized for each host. This view also provides a tree structure along with the pair management structure.
Applications view: This view lists the application and data protection status. This view also provides a tree structure showing the servers and their associated objects (Storage Groups, Information Stores, and Mount Points).


Storage Systems

Additional information available on the tabs

The Storage Systems view provides information about LUNs (Paired and Unpaired),
Journal Groups, copy licenses, command devices and pools.
LUNs (Paired) tab shows the list of LDEVs that are already configured as Copy
Pairs
Clicking on a specific LUN provides detailed information about the Copy
Pair, Copy Type, Pair Status, and much more
A filter dialog is available for LUNs tab, which makes it easier to find target
volumes. You can filter LUNs by using attributes such as Port, HSD, Logical
Group, Capacity, Label and Copy Type
The Cmd Devs tab displays the command devices list configured on the storage
systems
The Pools tab displays detailed information for both Copy on Write and dynamic
provisioning pools
The JNLGs tab displays the list of Journal Groups that are configured on the storage system. This tab is only available for Universal Storage Platform
The Remote Path tab displays the remote paths configured for TrueCopy and
Universal Replicator

HDS Confidential: For distribution only to authorized parties. Page 11-21


Hitachi Replication Manager Overview
Launching Views

The Copy Licenses tab displays the replication related licenses that are installed
on the storage systems
You can also manage (create, edit, delete) resources using the above tabs. Copy
Licenses for program products need to be installed through the Element Manager
for the storage system.


Storage Systems

Perspective of storage systems containing the pairs



Hosts

Perspective of hosts using the pairs



Pair Configuration Servers

Perspective of hosts managing the pairs



Application Servers

Perspective of applications (MS-Exchange/MS-SQL Server) being managed



Universal Replicator Operations

Create Remote Path


Wizard will set Initiator and RCU Target Replication Link attributes and create DKC definitions (see note)

Note: The Select reverse direction path option is grayed out; the reverse link configuration is mandatory for Universal Replicator.
TrueCopy
1. Specify the port for the local storage system CU (MCU) and the port for the
remote storage system CU (RCU).
2. Initiator and RCU Target are set automatically as the attributes of the specified
ports.
3. You can specify either CU Free (to connect only from the local storage system to
a remote storage system using a dynamically assigned MCU-RCU pair) or CU
Specific (to connect each path using a specified MCU and RCU).

Universal Replicator
1. Specify the port for the local storage system and the port for the remote storage
system. You must set paths for both directions.
2. Initiator and RCU Target are set automatically as the attributes of the specified
ports.


Create Journal Wizard

Journal groups are used to keep the journal data for asynchronous data transfer, and Universal Replicator uses journal volumes as volume copy buffers, so journal groups must be set up before creating Universal Replicator volume pairs. Journal groups must be set in each storage system on both the primary and secondary side: the journal volume for the primary site together with the primary volume, and the journal volume for the secondary site together with the secondary volume, are defined as journal groups.
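Outside the wizard, the same journal can be defined from the CCI command line with raidcom; a sketch assuming journal ID 0, a reserved LDEV 1000, and an authenticated HORCM instance 0 (all values illustrative):

    # Create journal group 0 from LDEV 1000; repeat on the remote array
    # for the restore journal, with its own journal ID and LDEV
    raidcom add journal -journal_id 0 -ldev_id 1000 -IH0

    # Verify the journal definition
    raidcom get journal -IH0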


Select Journal Volumes


Set Up Journal Group Options

Inflow Control: Allows you to specify whether to restrict inflow of update I/Os to the
journal volume (in other words, whether to delay response to the hosts).
Yes indicates inflow will be restricted. No indicates inflow will not be restricted.
Note: If Yes is selected and the metadata or the journal data is full, the update I/Os
may stop. (Journal Groups suspended)
Data Overflow Watch: Allows you to specify the time (in seconds) for monitoring
whether metadata and journal data are full. This value must be within the range of 0 to
600 seconds.
Note: If Inflow Control is No, Data Overflow Watch does not take effect and does not
display anything
Path Watch Time: Allows you to specify the interval from when a path gets blocked to
when a mirror gets split (suspended). This value must be within the range of 1 to 60
minutes.
Note: Make sure that the same interval is set to both the master and restore journal
groups in the same mirror, unless otherwise required. If the interval differs between the
master and restore journal groups, these journal groups will not be suspended
simultaneously. For example, if the interval for the master journal group is 5 minutes and
the interval for the restore journal group is 60 minutes, the master journal group will be
suspended in 5 minutes after a path gets blocked, and the restore journal group will be
suspended in 60 minutes after a path gets blocked.


Caution: By default, the factory enables (turns ON) SVP mode 449, disabling the path
watch time option. If you’d like to enable the path watch time option, please disable
mode 449 (turn it OFF).
Note: If you want to split a mirror (suspend) immediately after a path becomes blocked,
please disable SVP modes 448 and 449 (turn OFF).
Forward Path Watch Time: Allows you to specify whether to forward the Path Watch
Time value of the master journal group to the restore journal group. If the Path Watch
Time value is forwarded, the two journal groups will have the same Path Watch Time
value.
Yes: The Path Watch Time value will be forwarded to the restore journal group.
No: The Path Watch Time value will not be forwarded to the restore journal group.
No is the default.
Blank: The current setting of Forward Path Watch Time will remain unchanged.
Caution: This option cannot be specified in the remote site.
Use of Cache: Allows you to specify whether to store journal data in the restore journal
group into the cache.
Use: Journal data will be stored into the cache.
Note: When there is insufficient space in the cache, journal data will also be stored
into the journal volume.
Not Use: Journal data will not be stored into the cache.
Blank: The current setting of Use of Cache will remain unchanged.
Caution: This setting does not take effect on master journal groups. However, if the
horctakeover option is used to change a master journal group into a restore journal
group, this setting will take effect on the journal group.
Speed of Line: Allows you to specify the line speed of data transfer. The unit is Mbps
(megabits per second).
You can specify one of the following: 256, 100, or 10.
Caution: This setting does not take effect on master journal groups. However, if the
horctakeover option is used to change a master journal group into a restore journal
group, this setting will take effect on the journal group.
Delta resync Failure: Allows you to specify the processing that would take place when
delta resync operation cannot be performed.
Entire: All the data in primary data volume will be copied to remote data volume
when delta resync operation cannot be performed. The default is Entire.
None: No processing will take place when delta resync operation cannot be
performed. Therefore, the remote data volume will not be updated. If Delta Resync
pairs are desired, they will have to be created manually.
Caution: This option cannot be specified in the remote site.


Journal Group Status

Create UR pairs


Task Management > Pair Settings

JNLG ID(P) - Select the journal group ID of the primary volume of the copy pair. This list displays the unused journal group IDs from 0 to 255.
JNLG ID(S) - Select the journal group ID of the secondary volume of the copy pair. This list displays the unused journal group IDs from 0 to 255.
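The equivalent pair creation can also be scripted with CCI once the group is defined in horcm.conf — a sketch assuming group URGRP with master journal 0 and restore journal 1 (names and IDs illustrative):

    # -f async selects Universal Replicator; -jp/-js give the primary
    # and secondary journal group IDs; -vl makes the local instance's
    # volume the P-VOL
    paircreate -g URGRP -f async -vl -jp 0 -js 1 -IH0

    # Watch the initial copy progress
    pairdisplay -g URGRP -fcx -IH0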


Task Management > Set Task Options > Schedule

Select whether to execute the tasks immediately, or at a specified date and time.
Execute Immediately
   If you want to execute the task immediately, select this radio button. The task will start when the Pair Configuration Wizard ends.
Execution Date
   Select this radio button to execute the task at the specific date and time that you select from the drop-down list.
Modify Pair Configuration File Only (Do not create Pair)
   Select this check box if you do not want the task to create a copy pair. When the check box is selected, the task only modifies the CCI configuration definition file. This item is displayed when the task type is create.


View Pair Status

Advanced Options on Pair Operations


TrueCopy Operations

Set up copy type as TCS (TrueCopy Sync)

• Select P-VOL or S-VOL
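For comparison with the Universal Replicator example earlier, a hedged CCI equivalent for a TrueCopy Synchronous pair, assuming group TCGRP (illustrative) and a fence level of never:

    # -f sets the TrueCopy fence level (data, status, or never);
    # no journal options apply to synchronous TrueCopy
    paircreate -g TCGRP -f never -vl -IH0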


Specify Copy Group attributes

Pair management servers are required on the primary as well as secondary site.


Pair Configurations View

Specify additional settings


Pair Status Change


ShadowImage Replication Operations

Specify Copy Type


Set Task Options - Schedule

Select whether to execute the tasks immediately, or at a specified date and time. The options (Execute Immediately, Execution Date, and Modify Pair Configuration File Only) are the same as for the Universal Replicator operations described earlier.


Copy-on-Write Snapshot /Thin Image

Create Pool

1. From the Explorer menu, choose Resources and then Storage Systems. The
Storage Systems subwindow appears.
2. Expand the object tree, and then select a storage system under Storage Systems.
The storage-system-name subwindow appears.
3. Click the Open link. The Open subwindow appears.
4. On the Pools page, click Create Pool. The Create Pool Wizard starts.


Create V-VOLs

Replication Manager supports creation of V-VOLs on storage system configurations. After creating V-VOLs, it is necessary to assign LUNs to them in order to create copy pairs. Assignment of LUNs should be done using Device Manager. Replication Manager provides a wizard for creating V-VOLs and associating them with volume pools.


Create Snapshot Pair


Alerts

Monitoring Copy Operations

Up to 1,000 specific alerts can be generated when a monitored target, such as a copy pair or buffer, satisfies a preset condition, for example:
Monitoring Pair Configuration information
• For specific volume
• For specific copy group
• Examine Configuration files
Monitoring Pair Status: Alerts are generated upon a change in status
Monitoring Performance of Remote Copies
• Viewing Copy Progress
• Checking Buffer Usage (M-Journal and R-Journal Utilization)
• Checking Write Delay Time (C/T Delta) for Each Copy Group

You can monitor copy pair configurations in multiple ways using Replication
Manager. You can use a tree view to check the configuration definition file for CCI
that is created by Replication Manager or other products, or to check the copy group
definition file for Business Continuity Manager or Mainframe Agent. You can limit
the range of copy pairs being monitored to those of a host or storage system, and also
check the configuration of related copy pairs. You can also check copy pair
configurations from a copy group perspective.


Create Alert Setting Wizard

Creating Alerts > 1. Introduction

Create Alert Setting Wizard

Creating Alerts > 2. Select Monitoring Type


Create Alert Setting Wizard

Creating Alerts > 3. Select Alert Setting

Create Alert Setting Wizard

Creating Alerts > 4. Edit Alert Action


Create Alert Setting Wizard

Selecting Pair Status Icons for alerts


Create Alert Setting Wizard

Creating Alerts > 5. Confirm > 6. Finish


Example of Email Alert

“DETECTED PAIRS” shows individual pair information for a copy group

Detected Pairs is displayed only when an alert is generated for a copy group and the
alert Automarking feature is enabled (Marking Type: Auto).
To limit the volume of information when an enormous number of pairs are involved,
the display is limited to ten pairs along with the following message:
More than 10 pairs were detected.


Setting Up Alerts

Examples

Alerts — Host > Copy Groups


Examples

Alerts — Host (specific volume)

Examples

Alerts — Copy Licenses


Examples

Alerts — Storage System > Open > Pools



Hitachi Replication Manager Overview
Alert Status

Alert Status

Exporting Alerts



Hitachi Replication Manager Overview
Alert Status

Testing, Editing, or Deleting Alerts



Hitachi Replication Manager Overview
Application Replicas

Application Replicas

Snapshot of an application server
• Saved to a series of secondary volumes
• Immediate or scheduled basis
Requires Application Agent installed on the application server
Supported Applications
• MS-Exchange
• MS-SQL
Replication Manager is used to create, manage, and manually restore replicas
• Supports auto and manual selection of backup targets
• Targets can be rotated

As with copy pair management, the creation and management of application replicas is organized around tasks and storage assets.



Hitachi Replication Manager Overview
Application Replicas

MS SQL Server and MS Exchange Server


Simple Setup and Deployment
Application Agent Installer
• Deploys required components for replica management
Application Agent
• Easily downloaded from HRpM GUI to application servers
• Simple Agent Setup
HRpM
• Hides complex parameters that users normally do not need

HRpM Application Agent



Hitachi Replication Manager Overview
Application Backup and Restore Features

Application Backup and Restore Features

Enhanced Monitoring
• Data Protection Status
Intuitive icon shows the summary status
User can easily identify the possible issues
• Email Notification
Errors can be reported by email so that immediate action can be taken

Protection Status for Hosts and Storage Groups

Email notification settings for Application Agent



Hitachi Replication Manager Overview
Module Review

Module Review

1. Replication Manager is a standalone product. True or False? Why or why not?
2. Name the methods Hitachi Replication Manager software uses to
control access.
3. Describe Replication Manager Sites.
4. Replication Manager V7 allows creation of TrueCopy Synchronous Consistency Groups. True or False?
5. Describe the Replication functions that can be monitored by
Replication Manager.
6. How many Replication Manager Alert conditions are supported?



12. Universal Replicator MxN Consistency Groups
Module Objectives

Upon completion of this module, you should be able to:


• Identify the licenses needed to use MxN Consistency group (MxN CTG)
• Define the concepts of MxN Consistency group function
• Describe the Architecture of MxN Consistency groups with examples
• Manage MxN Consistency group with CCI
• Define the current restrictions on the use of MxN Consistency groups



Universal Replicator MxN Consistency Groups
Licensing

Licensing

Licensing considerations for MxN Consistency group
• All sites
  Disaster Recovery bundle - licenses Hitachi Universal Replicator and TrueCopy
  Disaster Recovery Extended bundle - licenses 3DC and MxN Consistency group
• No license capacity required for journal volumes



Universal Replicator MxN Consistency Groups
Concepts

Concepts

Allows CCI to maintain S-VOL data consistency across multiple Journal Groups
• Extends a consistency group across Journal Groups (up to 4)
• Journal Groups may exist in separate storage systems (up to 4) per site
• Consistency of Consistency group S-VOLs is based on Consistency Q-Markers (see note)
Supported configurations
• 4x4 2DC with intermix of VSP and USP V
• 4x4 2DC with intermix of VSP and HUS VM
Not currently supported
• Any 3DC configuration
• Cascade configurations
• USP V intermix with HUS VM

Note: A CTQM is best understood as an ID that identifies a batch of updates within a given cycle time. CTQMs are used in lieu of timestamps, which are not available for open systems.



Universal Replicator MxN Consistency Groups
Concepts

Configuration Examples

Case 1: 1x1 Configuration with four Journals in one Consistency group

(Diagram: one subsystem at the primary site and one at the secondary site, connected by four HUR journal group pairs that all belong to a single MxN CTG; CCI instances with command devices run at both sites)

Configuration Examples

Case 2: 4x4 Configuration

(Diagram: four MCUs (MCU 1-4) at the primary site, each paired over HUR with a corresponding RCU (RCU 1-4) at the secondary site; all four journal group pairs belong to one MxN CTG, with CCI instances and command devices at both sites)



Universal Replicator MxN Consistency Groups
Concepts

Configuration Examples

Case 3: 4x1 Configuration

(Diagram: four MCUs (MCU 1-4) at the primary site replicating over HUR into a single RCU at the secondary site; the four journal group pairs form one MxN CTG, with CCI instances and command devices at both sites)

Configuration Examples

Case 4: 1x4 Configuration


(Diagram: a single MCU at the primary site replicating over HUR into four RCUs (RCU 1-4) at the secondary site; the four journal group pairs form one MxN CTG, with CCI instances and command devices at both sites)
Universal Replicator MxN Consistency Groups
Concepts

CCI Processes for Managing MxN Consistency Groups

CCI delivers the CTQ-Marker to the MxN Consistency Group
If CCI instances are shut down, then UR will stop committing changes to the S-VOL
• M-JNLs start to fill up
If you plan to shut down CCI for several hours, suspend the MxN Consistency groups first
• Freeze - Halts all Journal Obtains for the relevant Journal groups
• Mark - A command is sent to increment the extended consistency timestamp marker (CTQ-Marker)
  This is not the same as the usual HUR Q-Marker and is updated much more slowly
  The Q-Marker increases once per received I/O, so it may increment tens of thousands of times per second
  The CTQ-Marker increases once per cycle (by default, every three seconds)

CCI Processes for Managing MxN Consistency Groups

If you plan to shut down CCI for several hours, suspend the MxN Consistency groups first (continued)
• Run - Releases the Channel Processors after the extended consistency CTQ-Marker is generated
  All transactions received after the freeze will use the new CTQ-Marker
  The factory estimate is 3% host write I/O elongation at a 1-second cycle time
• Wait - CCI waits before starting the next cycle to insert the next CTQ-Marker
A minimal command sequence for a planned CCI outage is sketched below.
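The following is a minimal, illustrative CCI sequence for suspending the group before a planned CCI shutdown and resuming afterward. The group name mxngrp and instance numbers 6 and 7 follow the examples used elsewhere in this module; the commands are standard CCI, but the names and options should be adapted to the site:

pairsplit -g mxngrp -IH6       # suspend the MxN Consistency group first
horcmshutdown.sh 6 7           # stop the local HORCM instances
                               # ... maintenance window ...
horcmstart.sh 6 7              # restart the HORCM instances
pairresync -g mxngrp -IH6      # re-establish the pairs and resume journal flow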



Universal Replicator MxN Consistency Groups
Concepts

Order of Data Update

The remote CCI instance tracks the incoming CTQ-Markers; the restore point is the most recent CTQM received in all R-JNL groups.

            JNLG1        JNLG2        JNLG3        JNLG4
(newest)    15:11 QM#5   15:08 QM#5   15:09 QM#5   -
            15:10 QM#4   15:07 QM#4   15:05 QM#4   -
            15:03 QM#3   15:02 QM#3   15:04 QM#3   -
            15:00 QM#2   14:55 QM#2   14:57 QM#2   14:59 QM#2
(oldest)    14:58 QM#1   14:54 QM#1   14:56 QM#1   14:53 QM#1

CTQ-Marker #2 is the last update common to all four journal groups, so it becomes the restore point: journal entries up to and including QM#2 are restored, and entries after QM#2 are not restored.



Universal Replicator MxN Consistency Groups
Managing Extended Consistency Groups

Managing Extended Consistency Groups

New Configuration File Parameters

NEW: Journal IDs can now be defined in the horcm files - this allows multiple sets of Journal group IDs in one Copy group
NEW: HORCM_CTQM parameter
Note: Enter the Journal IDs in DECIMAL
horcm6.conf (primary side):

HORCM_MON
#ip_address    service   poll(10ms)  timeout(10ms)
10.17.105.4    horcm6    1000        3000

HORCM_CMD
#dev_name
\\.\CMD-10145

HORCM_LDEV
#dev_group  dev_name  Serial#    CU:LDEV  MU#
mxngrp      pair1     10145:00   00:00    1
mxngrp      pair2     10145:00   00:01    1
mxngrp      pair3     10145:02   00:02    1
mxngrp      pair4     10145:02   00:03    1

HORCM_INST
#dev_group  ip_address   service
mxngrp      10.17.105.5  horcm7

HORCM_CTQM
#dev_group  interval  mode
mxngrp      300

horcm7.conf (secondary side):

HORCM_MON
#ip_address    service   poll(10ms)  timeout(10ms)
10.17.105.5    horcm7    1000        3000

HORCM_CMD
#dev_name
\\.\CMD-10156

HORCM_LDEV
#dev_group  dev_name  Serial#    CU:LDEV  MU#
mxngrp      pair1     10156:01   02:00    1
mxngrp      pair2     10156:01   02:01    1
mxngrp      pair3     10156:03   02:02    1
mxngrp      pair4     10156:03   02:03    1

HORCM_INST
#dev_group  ip_address   service
mxngrp      10.17.105.4  horcm6

HORCM_CTQM
#dev_group  interval  mode
mxngrp      300

Specifying a different JID for an MxN Consistency Group

Specify the Journal ID along with the Serial Number in the HORCM_LDEV parameter.
Note: If JID (Journal ID) is specified in horcm.conf as mentioned above, then the
paircreate command need not specify Journal ID (-jp <jid> -js <jid>) option. This
allows an MxN Consistency Group to contain multiple Journal Groups (up to four).
Those Journal Groups can be in separate storage systems if desired.
If JID (Journal ID) is not specified in horcm.conf, then Journal ID (-jp <jid> -js <jid>)
option of the paircreate command is used. In that case, one Journal Group can
contain multiple CCI groups with one Consistency Group ID. Again, the
Consistency group can span storage systems.
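For example, if no JID is coded in horcm.conf, a hypothetical pair creation that names the journal groups explicitly might look like this (group name, journal IDs, consistency group ID, and instance number are illustrative only):

paircreate -g mxngrp -f async 0 -jp 00 -js 01 -vl -IH6

Here -jp and -js give the P-VOL and S-VOL Journal IDs, -f async 0 assigns Consistency Group ID 0, and -vl indicates that the local instance owns the P-VOLs.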



Universal Replicator MxN Consistency Groups
Managing Extended Consistency Group

Managing Extended Consistency Group

Create Extended Consistency Group

paircreate -g mxngrp -f async 00 -vl -IH6

Creates group mxngrp with MxN Consistency Group ID 00
Journal Group IDs are defined in the HORCM files:
• First Journal Group: M-JNL 00, R-JNL 01
• Second Journal Group: M-JNL 02, R-JNL 03
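A hypothetical verification step after the create (same illustrative group and instance number as above):

pairdisplay -g mxngrp -fxc -IH6

This should show every pair in the group reaching PAIR status once the initial copy completes.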



Universal Replicator MxN Consistency Groups
Managing

Managing

Consistency Group ID

Best practice: Select unique M-JNL and R-JNL group IDs across all participating storage systems
• This eliminates Journal Group ID conflicts when creating MxN Consistency Groups, because Journal Group IDs within an MxN Consistency Group must be unique
However, associated M-JNL and R-JNL IDs can be the same
Inflow Control and other Journal Options should be the same for all Journal Groups participating in an Extended Consistency group

Notes:
HUR supports Consistency Group IDs 00-FF
ShadowImage and TrueCopy support Consistency Group IDs 00-7F



Universal Replicator MxN Consistency Groups
Managing

Storage Navigator Display

Example shows two sets of Journal Groups in Consistency Group 00 (created with CCI)

Sample pairdisplay Results

Enhanced pairdisplay command
• The -v jnl parameter displays the MxN Consistency Group and CTQ-Marker (example invocation below)

Q-Marker: Displays the sequence number
• P-JNL: the latest sequence number generated
• S-JNL: the latest sequence number in the remote cache
Q-CNT: Displays the number of remaining Q-Markers
• P-JNL: the number of sequence numbers waiting to be sent
• S-JNL: the number of sequence numbers waiting to settle to the S-VOL
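An illustrative invocation (group and instance names as defined earlier in this module):

pairdisplay -g mxngrp -v jnl -IH6

The resulting journal lines include the Q-Marker and Q-CNT fields interpreted as described above.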



Universal Replicator MxN Consistency Groups
Managing

Sample pairdisplay Results

Enhanced pairdisplay command
• The -v jnl parameter combined with -fxce displays the MxN Consistency group and CTQ-Marker (example invocation below)
• The LDEV # column shows the LDEV number of the first Journal Volume in the group
• CTGID is always shown as a decimal number, while JID (Journal group ID) displays as a hex number
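An illustrative invocation combining the two option sets (hypothetical group and instance):

pairdisplay -g mxngrp -v jnl -fxce -IH6

Per the notes above, the display then includes the first journal volume's LDEV number, the CTGID in decimal, and the JID in hex.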

Sample pairdisplay Results

Enhanced pairdisplay command: -v jnlt
Shows the timer settings for HUR (example invocation below):
• DOW: Data Overflow Watch setting in the Journal Group Options panel
• PBW: Path Blockade Watch (not settable)
• APW: Path Watch Time setting in the Journal Group Options panel
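Illustrative invocations, using the module's example group and instance; raidvchkscan -v jnlt is the equivalent command referenced later in this module:

pairdisplay -g mxngrp -v jnlt -IH6
raidvchkscan -v jnlt -IH6

Both report the DOW, PBW, and APW timer values for the journal groups.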



Universal Replicator MxN Consistency Groups
Managing

Sample pairdisplay Results

The -v ctg option displays the Inflow Control and timer settings of each group (see note; example invocation below)

CTG: Displays the MxN Consistency Group ID
P/S: The attribute of the volume in the first LDEV of the specified group
Status: The status of the paired volume in the first LDEV of the specified group
AP: Displays the number of active paths in HORC links on the P-VOL, and the number of active paths in HUR links on the P-VOL and S-VOL; 'Unknown' is shown as '-'
U(%): For HUR, the usage rate of the current journal data
Q-Marker: For the P-VOL, the latest sequence number of the MCU P-VOL when the write command was received; for the S-VOL, the latest sequence number on the RCU. This item is valid in the PAIR state.
QM-Cnt: The number of Q-Markers within the Consistency Group (or MxN Consistency Group, if defined) of the unit. If no data is being replicated, HUR/TrueCopy sends a dummy recordset at a regular interval, so QM-Cnt always shows "2" or "3" even if the host has no replication data. This item is valid in the PAIR state.
SF(%): The usage of cache set aside as sidefile, regardless of UR and TC Async
Seq#: The serial number of the RAID storage system
IFC: Displays the INFLOW CONTROL setting in the Storage Navigator Journal Group Options
OT/s: The "offloading timer" (in seconds) set for the Consistency group for UR / TC Async. In UR, this is the same as the "DOW" item shown by raidvchkscan -v jnlt or pairdisplay -v jnlt
CT/m: The "Copy Pending timer" (TC Async only)
RT/m: The "RCU Ready timer" (TC Async only)
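An illustrative invocation (hypothetical group and instance):

pairdisplay -g mxngrp -v ctg -IH6

One line per group is returned, containing the CTG, P/S, Status, AP, U(%), Q-Marker, QM-Cnt, and timer columns explained above.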



Universal Replicator MxN Consistency Groups
Managing

pairsplit Command

All pairsplit commands: the internal behavior is different from 1x1 HUR
• Issues Freeze to the MxN Consistency group on each MCU
• Issues Suspend & Run to create CTQ-Markers for the MxN Consistency group on each MCU
• Commits a minimum matching point of the CTQ-Marker on the RCU (in other words, performs a Journal Restore up to that CTQ-Marker)
• Repeats the above until detecting an End Of Marker (EOM) on all RCUs via the MCUs
• Issues End of Suspend to terminate the suspending state
• [Exception] If a link failure is detected during pairsplit, the commit operation is aborted, keeping the current CTQ-Marker level, and the suspending state terminates without waiting for the EOM. This results in PSUE status for the group.
A sketch of a group split follows.
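A hypothetical group split and verification (illustrative group and instance names):

pairsplit -g mxngrp -IH6
pairdisplay -g mxngrp -fxc -IH6    # all pairs should settle to PSUS once the commit completes

Because the split commits journal data up to a common CTQ-Marker before suspending, the S-VOLs of all participating journal groups are mutually consistent at the resulting restore point.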



Universal Replicator MxN Consistency Groups
Restrictions

Restrictions

Four journal groups per MxN Consistency Group
32K LDEVs per MxN Consistency Group
Inflow Control and other Journal Options should be the same for all Journal Groups participating in an Extended Consistency group
Journal IDs in an Extended Consistency Group must be unique
• The CCI instance delivers the CTQ-Marker to the defined MxN Consistency Group. If you shut down CCI, suspend the MxN Consistency groups first
• If the primary instance is shut down while groups are in PAIR status, the M-DKC stops Journal Obtain, resulting in host I/O write elongation
• If the remote CCI instance is shut down, then HUR will stop committing changes to the S-VOL



Universal Replicator MxN Consistency Groups
Module Review

Module Review

1. Define the purpose of the Open HUR MxN Consistency group function.
2. Identify the licenses needed.
3. Identify supported MxN configurations.
4. State the "best practice rule" for MxN Consistency group configurations.
5. State the preliminary conditions required for MxN Consistency group management with CCI.



Your Next Steps

• Validate your knowledge and skills with certification
• Progress in the learning paths
• Collaborate and share with fellow HDS colleagues
• Register, enroll and view additional course offerings
• Get the latest course and Academy updates
• Review the course description for supplemental courses
• Check your personalized learning path
• Get practical advice and insight with HDS white papers
• Follow Hitachi Data Systems Academy Open (@HDSAcademy)

Learning Center:
http://learningcenter.hds.com
LinkedIn:
http://www.linkedin.com/groups?home=&gid=3044480&trk=anet_ug_hm&
goback=%2Emyg%2Eanb_3044480_*2
Twitter:
http://twitter.com/#!/HDSAcademy
White Papers:
http://www.hds.com/corporate/resources/
Certification:
http://www.hds.com/services/education/certification



Your Next Steps

Learning Paths:
APAC:
http://www.hds.com/services/education/apac/?_p=v#GlobalTabNavi

Americas:
http://www.hds.com/services/education/north-
america/?tab=LocationContent1#GlobalTabNavi

EMEA:
http://www.hds.com/services/education/emea/#GlobalTabNavi

theLoop:
http://loop.hds.com/index.jspa ― HDS internal only



Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

—A— AIX — IBM UNIX.


AaaS — Archive as a Service. A cloud computing AL — Arbitrated Loop. A network in which nodes
business model. contend to send data, and only 1 node at a
time is able to send data.
AAMux — Active-Active Multiplexer.
AL-PA — Arbitrated Loop Physical Address.
ACC — Action Code. A SIM (System Information
Message). AMS — Adaptable Modular Storage.
ACE — Access Control Entry. Stores access rights APAR — Authorized Program Analysis Reports.
for a single user or group within the APF — Authorized Program Facility. In IBM z/OS
Windows security model. and OS/390 environments, a facility that
ACL — Access Control List. Stores a set of ACEs, permits the identification of programs that
so that it describes the complete set of access are authorized to use restricted functions.
rights for a file system object within the
API — Application Programming Interface.
Microsoft Windows security model.
ACP ― Array Control Processor. Microprocessor
APID — Application Identification. An ID to
mounted on the disk adapter circuit board
identify a command device.
(DKA) that controls the drives in a specific
disk array. Considered part of the back end; Application Management — The processes that
it controls data transfer between cache and manage the capacity and performance of
the hard drives. applications.
ACP Domain ― Also Array Domain. All of the ARB — Arbitration or request.
array-groups controlled by the same pair of ARM — Automated Restart Manager.
DKA boards, or the HDDs managed by 1
Array Domain — Also ACP Domain. All
ACP PAIR (also called BED).
functions, paths, and disk drives controlled
ACP PAIR ― Physical disk access control logic. by a single ACP pair. An array domain can
Each ACP consists of 2 DKA PCBs to contain a variety of LVI or LU
provide 8 loop paths to the real HDDs. configurations.
Actuator (arm) — Read/write heads are attached Array Group — Also called a parity group. A
to a single head actuator, or actuator arm, group of hard disk drives (HDDs) that form
that moves the heads around the platters. the basic unit of storage in a subsystem. All
AD — Active Directory. HDDs in a parity group must have the same
physical capacity.
ADC — Accelerated Data Copy.
Array Unit — A group of hard disk drives in 1
Address — A location of data, usually in main
memory or on a disk. A name or token that RAID structure. Same as parity group.
identifies a network component. In local area ASIC — Application specific integrated circuit.
networks (LANs), for example, every node has ASSY — Assembly.
a unique address.
Asymmetric virtualization — See Out-of-band
ADP — Adapter. virtualization.
ADS — Active Directory Service. Asynchronous — An I/O operation whose
initiator does not await its completion before

HDS Confidential: For distribution only to authorized parties. Page G-1


proceeding with other work. Asynchronous or Yottabyte (YB). Note that variations of
I/O operations enable an initiator to have this term are subject to proprietary
multiple concurrent I/O operations in trademark disputes in multiple countries at
progress. Also called Out-of-band the present time.
virtualization.
BIOS — Basic Input/Output System. A chip
ATA —Advanced Technology Attachment. A disk located on all computer motherboards that
drive implementation that integrates the governs how a system boots and operates.
controller on the disk drive itself. Also BLKSIZE — Block size.
known as IDE (Integrated Drive Electronics)
BLOB — Binary Large OBject.
Advanced Technology Attachment.
BP — Business processing.
ATR — Autonomic Technology Refresh.
BPaaS —Business Process as a Service. A cloud
computing business model.
Authentication — The process of identifying an
individual, usually based on a username and BPAM — Basic Partitioned Access Method.
password. BPM — Business Process Management.
AUX — Auxiliary Storage Manager. BPO — Business Process Outsourcing. Dynamic
Availability — Consistent direct access to BPO services refer to the management of
information over time. partly standardized business processes,
-back to top- including human resources delivered in a
pay-per-use billing relationship or a self-
—B— service consumption model.
B4 — A group of 4 HDU boxes that are used to BST — Binary Search Tree.
contain 128 HDDs.
BSTP — Blade Server Test Program.
BA — Business analyst.
BTU — British Thermal Unit.
Back end — In client/server applications, the
Business Continuity Plan — Describes how an
client part of the program is often called the
organization will resume partially or
front end and the server part is called the
completely interrupted critical functions
back end.
within a predetermined time after a
Backup image—Data saved during an archive disruption or a disaster. Sometimes also
operation. It includes all the associated files, called a Disaster Recovery Plan.
directories, and catalog information of the
-back to top-
backup operation.
BADM — Basic Direct Access Method. —C—
BASM — Basic Sequential Access Method. CA — (1) Continuous Access software (see
HORC), (2) Continuous Availability or (3)
BATCTR — Battery Control PCB.
Computer Associates.
BC — (1) Business Class (in contrast with EC,
Cache — Cache Memory. Intermediate buffer
Enterprise Class). (2) Business coordinator.
between the channels and drives. It is
BCP — Base Control Program. generally available and controlled as two
BCPii — Base Control Program internal interface. areas of cache (cache A and cache B). It may
be battery-backed.
BDW — Block Descriptor Word.
Cache hit rate — When data is found in the cache,
BED — Back end director. Controls the paths to
it is called a cache hit, and the effectiveness
the HDDs.
of a cache is judged by its hit rate.
Big Data — Refers to data that becomes so large in
Cache partitioning — Storage management
size or quantity that a dataset becomes
software that allows the virtual partitioning
awkward to work with using traditional
of cache and allocation of it to different
database management systems. Big data
applications.
entails data capacity or measurement that
requires terms such as Terabyte (TB), CAD — Computer-Aided Design.
Petabyte (PB), Exabyte (EB), Zettabyte (ZB)

Page G-2 HDS Confidential: For distribution only to authorized parties.


CAGR — Compound Annual Growth Rate. Centralized management — Storage data
management, capacity management, access
Capacity — Capacity is the amount of data that a
security management, and path
storage system or drive can store after
configuration and/or formatting. management functions accomplished by
software.
Most data storage companies, including HDS,
calculate capacity based on the premise that CF — Coupling Facility.
1KB = 1,024 bytes, 1MB = 1,024 kilobytes, CFCC — Coupling Facility Control Code.
1GB = 1,024 megabytes, and 1TB = 1,024 CFW — Cache Fast Write.
gigabytes. See also Terabyte (TB), Petabyte
CH — Channel.
(PB), Exabyte (EB), Zettabyte (ZB) and
Yottabyte (YB). CH S — Channel SCSI.
CAPEX — Capital expenditure — the cost of CHA — Channel Adapter. Provides the channel
developing or providing non-consumable interface control functions and internal cache
parts for the product or system. For example, data transfer functions. It is used to convert
the purchase of a photocopier is the CAPEX, the data format between CKD and FBA. The
and the annual paper and toner cost is the CHA contains an internal processor and 128
OPEX. (See OPEX). bytes of edit buffer memory. Replaced by
CAS — (1) Column Address Strobe. A signal sent CHB in some cases.
to a dynamic random access memory CHA/DKA — Channel Adapter/Disk Adapter.
(DRAM) that tells it that an associated
CHAP — Challenge-Handshake Authentication
address is a column address. CAS-column Protocol.
address strobe sent by the processor to a
CHB — Channel Board. Updated DKA for Hitachi
DRAM circuit to activate a column address.
Unified Storage VM and additional
(2) Content-addressable Storage.
enterprise components.
CBI — Cloud-based Integration. Provisioning of a
Chargeback — A cloud computing term that refers
standardized middleware platform in the
to the ability to report on capacity and
cloud that can be used for various cloud
utilization by application or dataset,
integration scenarios.
charging business users or departments
An example would be the integration of based on how much they use.
legacy applications into the cloud or CHF — Channel Fibre.
integration of different cloud-based
CHIP — Client-Host Interface Processor.
applications into one application.
Microprocessors on the CHA boards that
CBU — Capacity Backup. process the channel commands from the
CBX —Controller chassis (box). hosts and manage host access to cache.
CCHH — Common designation for Cylinder and CHK — Check.
Head. CHN — Channel adapter NAS.
CCI — Command Control Interface. CHP — Channel Processor or Channel Path.
CCIF — Cloud Computing Interoperability CHPID — Channel Path Identifier.
Forum. A standards organization active in CHSN or C-HSN— Cache Memory Hierarchical
cloud computing. Star Network.
CDP — Continuous Data Protection. CHT — Channel tachyon. A Fibre Channel
CDR — Clinical Data Repository protocol controller.
CDWP — Cumulative disk write throughput. CICS — Customer Information Control System.
CE — Customer Engineer. CIFS protocol — Common internet file system is a
platform-independent file sharing system. A
CEC — Central Electronics Complex.
network file system accesses protocol
CentOS — Community Enterprise Operating primarily used by Windows clients to
System. communicate file access requests to
Windows servers.

HDS Confidential: For distribution only to authorized parties. Page G-3


CIM — Common Information Model. • Data discoverability
CIS — Clinical Information System. • Data mobility
CKD ― Count-key Data. A format for encoding • Data protection
data on hard disk drives; typically used in • Dynamic provisioning
the mainframe environment.
• Location independence
CKPT — Check Point.
• Multitenancy to ensure secure privacy
CL — See Cluster.
• Virtualization
CLI — Command Line Interface.
Cloud Fundamental —A core requirement to the
CLPR — Cache Logical Partition. Cache can be deployment of cloud computing. Cloud
divided into multiple virtual cache fundamentals include:
memories to lessen I/O contention.
• Self service
Cloud Computing — “Cloud computing refers to
• Pay per use
applications and services that run on a
distributed network using virtualized • Dynamic scale up and scale down
resources and accessed by common Internet Cloud Security Alliance — A standards
protocols and networking standards. It is organization active in cloud computing.
distinguished by the notion that resources are CLPR — Cache Logical Partition.
virtual and limitless, and that details of the
Cluster — A collection of computers that are
physical systems on which software runs are
abstracted from the user.” — Source: Cloud interconnected (typically at high-speeds) for
the purpose of improving reliability,
Computing Bible, Barrie Sosinsky (2011)
availability, serviceability or performance
Cloud computing often entails an “as a
(via load balancing). Often, clustered
service” business model that may entail one
computers have access to a common pool of
or more of the following:
storage and run special software to
• Archive as a Service (AaaS) coordinate the component computers'
• Business Process as a Service (BPaas) activities.
• Failure as a Service (FaaS) CM ― Cache Memory, Cache Memory Module.
• Infrastructure as a Service (IaaS) Intermediate buffer between the channels
and drives. It has a maximum of 64GB (32GB x
• IT as a Service (ITaaS)
2 areas) of capacity. It is available and
• Platform as a Service (PaaS) controlled as 2 areas of cache (cache A and
• Private File Tiering as a Service (PFTaas) cache B). It is fully battery-backed (48 hours).
• Software as a Service (Saas) CM DIR — Cache Memory Directory.
• SharePoint as a Service (SPaas) CME — Communications Media and
Entertainment.
• SPI refers to the Software, Platform and
Infrastructure as a Service business model. CM-HSN — Control Memory Hierarchical Star
Network.
Cloud network types include the following:
• Community cloud (or community CM PATH ― Cache Memory Access Path. Access
network cloud) Path from the processors of CHA, DKA PCB
to Cache Memory.
• Hybrid cloud (or hybrid network cloud)
CM PK — Cache Memory Package.
• Private cloud (or private network cloud)
CM/SM — Cache Memory/Shared Memory.
• Public cloud (or public network cloud)
• Virtual private cloud (or virtual private CMA — Cache Memory Adapter.
network cloud) CMD — Command.
Cloud Enabler —a concept, product or solution CMG — Cache Memory Group.
that enables the deployment of cloud CNAME — Canonical NAME.
computing. Key cloud enablers include:

Page G-4 HDS Confidential: For distribution only to authorized parties.


CNS — Cluster Name Space or Clustered Name CSTOR — Central Storage or Processor Main
Space. Memory.
CNT — Cumulative network throughput. C-Suite — The C-suite is considered the most
CoD — Capacity on Demand. important and influential group of
individuals at a company. Referred to as
Community Network Cloud — Infrastructure “the C-Suite within a Healthcare provider.”
shared between several organizations or
CSV — Comma Separated Value or Cluster Shared
groups with common concerns.
Volume.
Concatenation — A logical joining of 2 series of
CSVP — Customer-specific Value Proposition.
data, usually represented by the symbol “|”.
In data communications, 2 or more data are CSW ― Cache Switch PCB. The cache switch
often concatenated to provide a unique (CSW) connects the channel adapter or disk
name or reference (e.g., S_ID | X_ID). adapter to the cache. Each of them is
Volume managers concatenate disk address connected to the cache by the Cache Memory
spaces to present a single larger address Hierarchical Star Net (C-HSN) method. Each
space. cluster is provided with the 2 CSWs, and
Connectivity technology — A program or device's each CSW can connect 4 caches. The CSW
switches any of the cache paths to which the
ability to link with other programs and
devices. Connectivity technology allows channel adapter or disk adapter is to be
programs on a given computer to run connected through arbitration.
routines or access objects on another remote CTG — Consistency Group.
computer.
CTL — Controller module.
Controller — A device that controls the transfer of CTN — Coordinated Timing Network.
data from a computer to a peripheral device
CU — Control Unit (refers to a storage subsystem.
(including a storage system) and vice versa.
The hexadecimal number to which 256
Controller-based virtualization — Driven by the
LDEVs may be assigned).
physical controller at the hardware
microcode level versus at the application CUDG — Control Unit Diagnostics. Internal
software layer and integrates into the system tests.
infrastructure to allow virtualization across CUoD — Capacity Upgrade on Demand.
heterogeneous storage and third party
CV — Custom Volume.
products.
CVS ― Customizable Volume Size. Software used
Corporate governance — Organizational
to create custom volume sizes. Marketed
compliance with government-mandated
under the name Virtual LVI (VLVI) and
regulations.
Virtual LUN (VLUN).
CP — Central Processor (also called Processing
Unit or PU). CWDM — Course Wavelength Division
Multiplexing.
CPC — Central Processor Complex.
CXRC — Coupled z/OS Global Mirror.
CPM — Cache Partition Manager. Allows for
-back to top-
partitioning of the cache and assigns a
partition to a LU; this enables tuning of the —D—
system’s performance. DA — Device Adapter.
CPOE — Computerized Physician Order Entry
DACL — Discretionary access control list (ACL).
(Provider Ordered Entry).
The part of a security descriptor that stores
CPS — Cache Port Slave. access rights for users and groups.
CPU — Central Processing Unit. DAD — Device Address Domain. Indicates a site
CRM — Customer Relationship Management. of the same device number automation
CSS — Channel Subsystem. support function. If several hosts on the
CS&S — Customer Service and Support. same site have the same device number
system, they have the same name.

HDS Confidential: For distribution only to authorized parties. Page G-5


DAP — Data Access Path. Also known as Zero virtual disk data addresses are mapped to
Copy Failover (ZCF). sequences of member disk addresses in a
regular rotating pattern.
DAS — Direct Attached Storage.
DASD — Direct Access Storage Device. Data Transfer Rate (DTR) — The speed at which
data can be transferred. Measured in
Data block — A fixed-size unit of data that is kilobytes per second for a CD-ROM drive, in
transferred together. For example, the X- bits per second for a modem, and in
modem protocol transfers blocks of 128 megabytes per second for a hard drive. Also,
bytes. In general, the larger the block size, often called data rate.
the faster the data transfer rate.
DBL — Drive box.
Data Duplication — Software duplicates data, as
in remote copy or PiT snapshots. Maintains 2 DBMS — Data Base Management System.
copies of data. DBX — Drive box.
Data Integrity — Assurance that information will DCA ― Data Cache Adapter.
be protected from modification and
DCTL — Direct coupled transistor logic.
corruption.
DDL — Database Definition Language.
Data Lifecycle Management — An approach to
DDM — Disk Drive Module.
information and storage management. The
policies, processes, practices, services and DDNS — Dynamic DNS.
tools used to align the business value of data DDR3 — Double data rate 3.
with the most appropriate and cost-effective
DE — Data Exchange Software.
storage infrastructure from the time data is
created through its final disposition. Data is Device Management — Processes that configure
aligned with business requirements through and manage storage systems.
management policies and service levels DFS — Microsoft Distributed File System.
associated with performance, availability,
DFSMS — Data Facility Storage Management
recoverability, cost, and what ever
Subsystem.
parameters the organization defines as
critical to its operations. DFSM SDM — Data Facility Storage Management
Subsystem System Data Mover.
Data Migration — The process of moving data
from 1 storage device to another. In this DFSMSdfp — Data Facility Storage Management
context, data migration is the same as Subsystem Data Facility Product.
Hierarchical Storage Management (HSM). DFSMSdss — Data Facility Storage Management
Data Pipe or Data Stream — The connection set up Subsystem Data Set Services.
between the MediaAgent, source or DFSMShsm — Data Facility Storage Management
destination server is called a Data Pipe or Subsystem Hierarchical Storage Manager.
more commonly a Data Stream.
DFSMSrmm — Data Facility Storage Management
Data Pool — A volume containing differential Subsystem Removable Media Manager.
data only.
DFSMStvs — Data Facility Storage Management
Data Protection Directive — A major compliance
Subsystem Transactional VSAM Services.
and privacy protection initiative within the
DFW — DASD Fast Write.
European Union (EU) that applies to cloud
computing. Includes the Safe Harbor DICOM — Digital Imaging and Communications
Agreement. in Medicine.
Data Stream — CommVault’s patented high DIMM — Dual In-line Memory Module.
performance data mover used to move data Direct Access Storage Device (DASD) — A type of
back and forth between a data source and a storage device, in which bits of data are
MediaAgent or between 2 MediaAgents. stored at precise locations, enabling the
Data Striping — Disk array data mapping computer to retrieve information directly
technique in which fixed-length sequences of without having to scan a series of records.

Page G-6 HDS Confidential: For distribution only to authorized parties.


Direct Attached Storage (DAS) — Storage that is DKU — Disk Array Frame or Disk Unit. In a
directly attached to the application or file multi-frame configuration, a frame that
server. No other device on the network can contains hard disk units (HDUs).
access the stored data.
DKUPS — Disk Unit Power Supply.
Director class switches — Larger switches often
DLIBs — Distribution Libraries.
used as the core of large switched fabrics.
DKUP — Disk Unit Power Supply.
Disaster Recovery Plan (DRP) — A plan that
describes how an organization will deal with DLM — Data Lifecycle Management.
potential disasters. It may include the DMA — Direct Memory Access.
precautions taken to either maintain or
DM-LU — Differential Management Logical Unit.
quickly resume mission-critical functions. DM-LU is used for saving management
Sometimes also referred to as a Business information of the copy functions in the
Continuity Plan.
cache.
Disk Administrator — An administrative tool that DMP — Disk Master Program.
displays the actual LU storage configuration.
DMT — Dynamic Mapping Table.
Disk Array — A linked group of 1 or more
DMTF — Distributed Management Task Force. A
physical independent hard disk drives
standards organization active in cloud
generally used to replace larger, single disk
computing.
drive systems. The most common disk
arrays are in daisy chain configuration or DNS — Domain Name System.
implement RAID (Redundant Array of DOC — Deal Operations Center.
Independent Disks) technology.
Domain — A number of related storage array
A disk array may contain several disk drive
groups.
trays, and is structured to improve speed
and increase protection against loss of data. DOO — Degraded Operations Objective.
Disk arrays organize their data storage into DP — Dynamic Provisioning (pool).
Logical Units (LUs), which appear as linear
DP-VOL — Dynamic Provisioning Virtual Volume.
block paces to their clients. A small disk
array, with a few disks, might support up to DPL — (1) (Dynamic) Data Protection Level or (2)
8 LUs; a large one, with hundreds of disk Denied Persons List.
drives, can support thousands. DR — Disaster Recovery.
DKA ― Disk Adapter. Also called an array control DRAC — Dell Remote Access Controller.
processor (ACP). It provides the control
DRAM — Dynamic random access memory.
functions for data transfer between drives
and cache. The DKA contains DRR (Data DRP — Disaster Recovery Plan.
Recover and Reconstruct), a parity generator DRR — Data Recover and Reconstruct. Data Parity
circuit. Replaced by DKB in some cases. Generator chip on DKA.
DKB — Disk Board. Updated DKA for Hitachi DRV — Dynamic Reallocation Volume.
Unified Storage VM and additional
DSB — Dynamic Super Block.
enterprise components.
DSF — Device Support Facility.
DKC ― Disk Controller Unit. In a multi-frame
DSF INIT — Device Support Facility Initialization
configuration, the frame that contains the
front end (control and memory (for DASD).
components). DSP — Disk Slave Program.
DKCMN ― Disk Controller Monitor. Monitors DT — Disaster tolerance.
temperature and power status throughout DTA —Data adapter and path to cache-switches.
the machine.
DTR — Data Transfer Rate.
DKF ― Fibre disk adapter. Another term for a
DVE — Dynamic Volume Expansion.
DKA.
DW — Duplex Write.

HDS Confidential: For distribution only to authorized parties. Page G-7


DWDM — Dense Wavelength Division ERP — Enterprise Resource Planning.
Multiplexing.
ESA — Enterprise Systems Architecture.
DWL — Duplex Write Line or Dynamic ESB — Enterprise Service Bus.
Workspace Linking.
ESC — Error Source Code.
-back to top-
ESD — Enterprise Systems Division (of Hitachi)
—E— ESCD — ESCON Director.
EAL — Evaluation Assurance Level (EAL1 ESCON ― Enterprise Systems Connection. An
through EAL7). The EAL of an IT product or input/output (I/O) interface for mainframe
system is a numerical security grade computer connections to storage devices
assigned following the completion of a developed by IBM.
Common Criteria security evaluation, an ESD — Enterprise Systems Division.
international standard in effect since 1999.
ESDS — Entry Sequence Data Set.
EAV — Extended Address Volume.
ESS — Enterprise Storage Server.
EB — Exabyte.
ESW — Express Switch or E Switch. Also referred
EC — Enterprise Class (in contrast with BC, to as the Grid Switch (GSW).
Business Class).
Ethernet — A local area network (LAN)
ECC — Error Checking and Correction. architecture that supports clients and servers
ECC.DDR SDRAM — Error Correction Code and uses twisted pair cables for connectivity.
Double Data Rate Synchronous Dynamic ETR — External Time Reference (device).
RAM Memory. EVS — Enterprise Virtual Server.
ECM — Extended Control Memory. Exabyte (EB) — A measurement of data or data
ECN — Engineering Change Notice. storage. 1EB = 1,024PB.
E-COPY — Serverless or LAN free backup. EXCP — Execute Channel Program.
EFI — Extensible Firmware Interface. EFI is a ExSA — Extended Serial Adapter.
specification that defines a software interface -back to top-
between an operating system and platform
firmware. EFI runs on top of BIOS when a —F—
LPAR is activated.
FaaS — Failure as a Service. A proposed business
EHR — Electronic Health Record. model for cloud computing in which large-
scale, online failure drills are provided as a
EIG — Enterprise Information Governance.
service in order to test real cloud
EMIF — ESCON Multiple Image Facility.
deployments. Concept developed by the
EMPI — Electronic Master Patient Identifier. Also College of Engineering at the University of
known as MPI. California, Berkeley in 2011.
Emulation — In the context of Hitachi Data Fabric — The hardware that connects
Systems enterprise storage, emulation is the workstations and servers to storage devices
logical partitioning of an Array Group into in a SAN is referred to as a "fabric." The SAN
logical devices. fabric enables any-server-to-any-storage
EMR — Electronic Medical Record. device connectivity through the use of Fibre
Channel switching technology.
ENC — Enclosure or Enclosure Controller. The
units that connect the controllers with the Failback — The restoration of a failed system
share of a load to a replacement component.
Fibre Channel disks. They also allow for
For example, when a failed controller in a
online extending a system by adding RKAs.
redundant configuration is replaced, the
EOF — End of Field.
devices that were originally controlled by
EOL — End of Life. the failed controller are usually failed back
EPO — Emergency Power Off. to the replacement controller to restore the
I/O balance, and to restore failure tolerance.
EREP — Error REPorting and Printing.

Page G-8 HDS Confidential: For distribution only to authorized parties.


Similarly, when a defective fan or power transmitting data between computer devices; a
supply is replaced, its load, previously borne set of standards for a serial I/O bus
by a redundant component, can be failed capable of transferring data between 2 ports.
back to the replacement part. FC RKAJ — Fibre Channel Rack Additional.
Failed over — A mode of operation for failure- Module system acronym refers to an
tolerant systems in which a component has additional rack unit that houses additional
failed and its function has been assumed by hard drives exceeding the capacity of the
a redundant component. A system that core RK unit.
protects against single failures operating in FC-0 ― Lowest layer on fibre channel transport.
failed over mode is not failure tolerant, as This layer represents the physical media.
failure of the redundant component may FC-1 ― This layer contains the 8b/10b encoding
render the system unable to function. Some scheme.
systems (e.g., clusters) are able to tolerate
FC-2 ― This layer handles framing and protocol,
more than 1 failure; these remain failure
frame format, sequence/exchange
tolerant until no redundant component is
management and ordered set usage.
available to protect against further failures.
FC-3 ― This layer contains common services used
Failover — A backup operation that automatically
by multiple N_Ports in a node.
switches to a standby database server or
network if the primary system fails, or is FC-4 ― This layer handles standards and profiles
temporarily shut down for servicing. Failover for mapping upper level protocols like SCSI
is an important fault tolerance function of an IP onto the Fibre Channel Protocol.
mission-critical systems that rely on constant FCA ― Fibre Adapter. Fibre interface card.
accessibility. Also called path failover. Controls transmission of fibre packets.
Failure tolerance — The ability of a system to FC-AL — Fibre Channel Arbitrated Loop. A serial
continue to perform its function or at a data transfer architecture developed by a
consortium of computer and mass storage
reduced performance level, when 1 or more
of its components has failed. Failure device manufacturers, and is now being
standardized by ANSI. FC-AL was designed
tolerance in disk subsystems is often
achieved by including redundant instances of for new mass storage devices and other
components whose failure would make the peripheral devices that require very high
bandwidth. Using optical fiber to connect
system inoperable, coupled with facilities that
allow the redundant components to devices, FC-AL supports full-duplex data
assume the function of failed ones. transfer rates of 100MBps. FC-AL is
compatible with SCSI for high-performance
FAIS — Fabric Application Interface Standard.
storage systems.
FAL — File Access Library.
FCC — Federal Communications Commission.
FAT — File Allocation Table. FCIP — Fibre Channel over IP, a network storage
Fault Tolerant — Describes a computer system or technology that combines the features of
component designed so that, in the event of a Fibre Channel and the Internet Protocol (IP)
component failure, a backup component or to connect distributed SANs over large
procedure can immediately take its place with distances. FCIP is considered a tunneling
no loss of service. Fault tolerance can be protocol, as it makes a transparent point-to-
provided with software, embedded in point connection between geographically
hardware or provided by hybrid combination. separated SANs over IP networks. FCIP
FBA — Fixed-block Architecture. Physical disk relies on TCP/IP services to establish
sector mapping. connectivity between remote SANs over
FBA/CKD Conversion — The process of LANs, MANs, or WANs. An advantage of
converting open-system data in FBA format FCIP is that it can use TCP/IP as the
transport while keeping Fibre Channel fabric
to mainframe data in CKD format.
FBUS — Fast I/O Bus. services intact.
FC ― Fibre Channel or Field-Change (microcode
update) or Fibre Channel. A technology for

HDS Confidential: For distribution only to authorized parties. Page G-9


FCoE - Fibre Channel over Ethernet. An FPGA — Field Programmable Gate Array.
encapsulation of Fibre Channel frames over
Frames — An ordered vector of words that is the
Ethernet networks.
basic unit of data transmission in a Fibre
FCP — Fibre Channel Protocol. Channel network.
FC-P2P — Fibre Channel Point-to-Point.
Front end — In client/server applications, the
FCSE — Flashcopy Space Efficiency. client part of the program is often called the
FC-SW — Fibre Channel Switched. front end and the server part is called the
FCU— File Conversion Utility. back end.
FD — Floppy Disk or Floppy Drive. FRU — Field Replaceable Unit.
FDDI — Fiber Distributed Data Interface. FS — File System.
FDR — Fast Dump/Restore. FSA — File System Module-A.
FE — Field Engineer.
FSB — File System Module-B.
FED — (Channel) Front End Director.
FSI — Financial Services Industries.
Fibre Channel — A serial data transfer
FSM — File System Module.
architecture developed by a consortium of
computer and mass storage device FSW ― Fibre Channel Interface Switch PCB. A
manufacturers and now being standardized board that provides the physical interface
by ANSI. The most prominent Fibre Channel (cable connectors) between the ACP ports
standard is Fibre Channel Arbitrated Loop and the disks housed in a given disk drive.
(FC-AL). FTP ― File Transfer Protocol. A client-server
FICON — Fiber Connectivity. A high-speed protocol that allows a user on 1 computer to
input/output (I/O) interface for mainframe transfer files to and from another computer
computer connections to storage devices. As over a TCP/IP network.
part of IBM's S/390 server, FICON channels FWD — Fast Write Differential.
increase I/O capacity through the
-back to top-
combination of a new architecture and faster
physical link rates to make them up to 8 —G—
times as efficient as ESCON (Enterprise GA — General availability.
System Connection), IBM's previous fiber
GARD — General Available Restricted
optic channel standard.
Distribution.
FIPP — Fair Information Practice Principles.
Gb — Gigabit.
Guidelines for the collection and use of
personal information created by the United GB — Gigabyte.
States Federal Trade Commission (FTC). Gb/sec — Gigabit per second.
FISMA — Federal Information Security
GB/sec — Gigabyte per second.
Management Act of 2002. A major
compliance and privacy protection law that GbE — Gigabit Ethernet.
applies to information systems and cloud Gbps — Gigabit per second.
computing. Enacted in the United States of
GBps — Gigabyte per second.
America in 2002.
GBIC — Gigabit Interface Converter.
FLGFAN ― Front Logic Box Fan Assembly.
GCMI — Global Competitive and Marketing
FLOGIC Box ― Front Logic Box.
Intelligence (Hitachi).
FM — Flash Memory. Each microprocessor has
GDG — Generation Data Group.
FM. FM is non-volatile memory that contains
microcode. GDPS — Geographically Dispersed Parallel
Sysplex.
FOP — Fibre Optic Processor or fibre open.
GID — Group Identifier within the UNIX security
FQDN — Fully Qualified Domain Name.
model.
FPC — Failure Parts Code or Fibre Channel
gigE — Gigabit Ethernet.
Protocol Chip.

Page G-10 HDS Confidential: For distribution only to authorized parties.


GLM — Gigabyte Link Module. HDDPWR — Hard Disk Drive Power.
Global Cache — Cache memory is used on demand HDU ― Hard Disk Unit. A number of hard drives
by multiple applications. Use changes (HDDs) grouped together within a
dynamically, as required for READ subsystem.
performance between hosts/applications/LUs. Head — See read/write head.
GPFS — General Parallel File System. Heterogeneous — The characteristic of containing
GSC — Global Support Center. dissimilar elements. A common use of this
GSI — Global Systems Integrator. word in information technology is to
describe a product as able to contain or be
GSS — Global Solution Services.
part of a “heterogeneous network,"
GSSD — Global Solutions Strategy and consisting of different manufacturers'
Development. products that can interoperate.
GSW — Grid Switch Adapter. Also known as E Heterogeneous networks are made possible by
Switch (Express Switch). standards-conforming hardware and
software interfaces used in common by
GUI — Graphical User Interface.
different products, thus allowing them to
GUID — Globally Unique Identifier.
communicate with each other. The Internet
-back to top- itself is an example of a heterogeneous
—H— network.
HiCAM — Hitachi Computer Products America.
H1F — Essentially the floor-mounted disk rack
(also called desk side) equivalent of the RK. HIPAA — Health Insurance Portability and
(See also: RK, RKA, and H2F). Accountability Act.
H2F — Essentially the floor-mounted disk rack HIS — (1) High Speed Interconnect. (2) Hospital
(also called desk side) add-on equivalent Information System (clinical and financial).
similar to the RKA. There is a limitation of HiStar — Multiple point-to-point data paths to
only 1 H2F that can be added to the core RK cache.
Floor Mounted unit. See also: RK, RKA, and
H1F. HL7 — Health Level 7.
HLQ — High-level Qualifier.
HA — High Availability.
HLS — Healthcare and Life Sciences.
HANA — High Performance Analytic Appliance,
a database appliance technology proprietary HLU — Host Logical Unit.
to SAP.
H-LUN — Host Logical Unit Number. See LUN.
HBA — Host Bus Adapter — An I/O adapter that HMC — Hardware Management Console.
sits between the host computer's bus and the
Fibre Channel loop and manages the transfer Homogeneous — Of the same or similar kind.
of information between the 2 channels. In Host — Also called a server. Basically a central
order to minimize the impact on host computer that processes end-user
processor performance, the host bus adapter applications or requests.
performs many low-level interface functions Host LU — Host Logical Unit. See also HLU.
automatically or with minimal processor
Host Storage Domains — Allows host pooling at
involvement.
the LUN level and the priority access feature
HCA — Host Channel Adapter.
lets administrator set service levels for
HCD — Hardware Configuration Definition. applications.
HD — Hard Disk. HP — (1) Hewlett-Packard Company or (2) High
HDA — Head Disk Assembly. Performance.
HDD ― Hard Disk Drive. A spindle of hard disk HPC — High Performance Computing.
platters that make up a hard drive, which is HSA — Hardware System Area.
a unit of physical storage within a HSG — Host Security Group.
subsystem.

HDS Confidential: For distribution only to authorized parties. Page G-11


HSM — Hierarchical Storage Management (see Data Migrator).
HSN — Hierarchical Star Network.
HSSDC — High Speed Serial Data Connector.
HTTP — Hyper Text Transfer Protocol.
HTTPS — Hyper Text Transfer Protocol Secure.
Hub — A common connection point for devices in a network. Hubs are commonly used to connect segments of a LAN. A hub contains multiple ports. When a packet arrives at 1 port, it is copied to the other ports so that all segments of the LAN can see all packets. A switching hub actually reads the destination address of each packet and then forwards the packet to the correct port. Device to which nodes on a multi-point bus or loop are physically connected.
Hybrid Cloud — “Hybrid cloud computing refers to the combination of external public cloud computing services and internal resources (either a private cloud or traditional infrastructure, operations and applications) in a coordinated fashion to assemble a particular solution.” — Source: Gartner Research.
Hybrid Network Cloud — A composition of 2 or more clouds (private, community or public). Each cloud remains a unique entity but they are bound together. A hybrid network cloud includes an interconnection.
Hypervisor — Also called a virtual machine manager, a hypervisor is a hardware virtualization technique that enables multiple operating systems to run concurrently on the same computer. Hypervisors are often installed on server hardware, then run the guest operating systems that act as servers. Hypervisor can also refer to the interface that is provided by Infrastructure as a Service (IaaS) in cloud computing. Leading hypervisors include VMware vSphere Hypervisor™ (ESXi), Microsoft® Hyper-V and the Xen® hypervisor.

—I—
I/F — Interface.
I/O — Input/Output. Term used to describe any program, operation, or device that transfers data to or from a computer and to or from a peripheral device.
IaaS — Infrastructure as a Service. A cloud computing business model — delivering computer infrastructure, typically a platform virtualization environment, as a service, along with raw (block) storage and networking. Rather than purchasing servers, software, data center space or network equipment, clients buy those resources as a fully outsourced service. Providers typically bill such services on a utility computing basis; the amount of resources consumed (and therefore the cost) will typically reflect the level of activity.
IBR — Incremental Block-level Replication or Intelligent Block Replication.
ICB — Integrated Cluster Bus.
ICF — Integrated Coupling Facility.
ID — Identifier.
IDE — Integrated Drive Electronics Advanced Technology. A standard designed to connect hard and removable disk drives.
IDN — Integrated Delivery Network.
IDR — Incremental Data Replication.
iFCP — Internet Fibre Channel Protocol. Allows an organization to extend Fibre Channel storage networks over the Internet by using TCP/IP. TCP is responsible for managing congestion control as well as error detection and recovery services. iFCP allows an organization to create an IP SAN fabric that minimizes the Fibre Channel fabric component and maximizes use of the company's TCP/IP infrastructure.
IFL — Integrated Facility for LINUX.
IHE — Integrating the Healthcare Enterprise.
IID — Initiator ID.
IIS — Internet Information Server.
Index Cache — Provides quick access to indexed data on the media during a browse/restore operation.
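To illustrate the hub and switching hub behavior described above, here is a minimal Python sketch (not part of the original glossary; the port numbers, address table and frame layout are invented for the example):

    # A plain hub repeats a frame on every port except the one it arrived on;
    # a switching hub reads the destination address and forwards the frame
    # only to the matching port. Ports and addresses here are invented.

    def hub_forward(in_port, frame, ports):
        return {p: frame for p in ports if p != in_port}

    def switch_forward(in_port, frame, mac_table):
        dest_port = mac_table.get(frame["dst"])
        if dest_port is not None and dest_port != in_port:
            return {dest_port: frame}
        return {}  # unknown destination; a real switch would flood like a hub

    frame = {"dst": "00:1b:2c:3d:4e:5f", "payload": b"hello"}
    print(hub_forward(1, frame, ports=[1, 2, 3, 4]))           # copied to ports 2, 3, 4
    print(switch_forward(1, frame, {"00:1b:2c:3d:4e:5f": 3}))  # port 3 only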


ILM — Information Life Cycle Management.
ILO — (Hewlett-Packard) Integrated Lights-Out.
IML — Initial Microprogram Load.
IMS — Information Management System.
In-band virtualization — Refers to the location of the storage network path, between the application host servers and the storage systems. Provides both control and data along the same connection path. Also called symmetric virtualization.
INI — Initiator.
Interface — The physical and logical arrangement supporting the attachment of any device to a connector or to another device.
Internal bus — Another name for an internal data bus. Also, an expansion bus is often referred to as an internal bus.
Internal data bus — A bus that operates only within the internal circuitry of the CPU, communicating among the internal caches of memory that are part of the CPU chip’s design. This bus is typically rather quick and is independent of the rest of the computer’s operations.
IOC — I/O controller.
IOCDS — I/O Control Data Set.
IODF — I/O Definition file.
IOPH — I/O per hour.
IOS — I/O Supervisor.
IOSQ — Input/Output Subsystem Queue.
IP — Internet Protocol. The communications protocol that routes traffic across the Internet.
IPv6 — Internet Protocol Version 6. The latest revision of the Internet Protocol (IP).
IPL — Initial Program Load.
IPSEC — IP security.
IRR — Internal Rate of Return.
ISC — Initial shipping condition or Inter-System Communication.
iSCSI — Internet SCSI. Pronounced eye skuzzy. An IP-based standard for linking data storage devices over a network and transferring data by carrying SCSI commands over IP networks.
ISE — Integrated Scripting Environment.
iSER — iSCSI Extensions for RDMA.
ISL — Inter-Switch Link.
iSNS — Internet Storage Name Service.
ISOE — iSCSI Offload Engine.
ISP — Internet service provider.
ISPF — Interactive System Productivity Facility.
ISPF/PDF — Interactive System Productivity Facility/Program Development Facility.
ISV — Independent Software Vendor.
ITaaS — IT as a Service. A cloud computing business model. This general model is an umbrella model that entails the SPI business model (SaaS, PaaS and IaaS — Software, Platform and Infrastructure as a Service).
ITSC — Information and Telecommunications Systems Companies.

—J—
Java — A widely accepted, open systems programming language. Hitachi’s enterprise software products are all accessed using Java applications. This enables storage administrators to access the Hitachi enterprise software products from any PC or workstation that runs a supported thin-client internet browser application and that has TCP/IP network access to the computer on which the software product runs.
Java VM — Java Virtual Machine.
JBOD — Just a Bunch of Disks.
JCL — Job Control Language.
JMP — Jumper. Option setting method.
JMS — Java Message Service.
JNL — Journal.
JNLG — Journal Group.
JRE — Java Runtime Environment.
JVM — Java Virtual Machine.
J-VOL — Journal Volume.

—K—
KSDS — Key Sequence Data Set.
kVA — Kilovolt Ampere.
KVM — Kernel-based Virtual Machine or Keyboard-Video Display-Mouse.
kW — Kilowatt.


—L—
LACP — Link Aggregation Control Protocol.
LAG — Link Aggregation Groups.
LAN — Local Area Network. A communications network that serves clients within a geographical area, such as a building.
LBA — Logical block address. A 28-bit value that maps to a specific cylinder-head-sector address on the disk.
LC — Lucent connector. Fibre Channel connector that is smaller than a simplex connector (SC).
LCDG — Link Processor Control Diagnostics.
LCM — Link Control Module.
LCP — Link Control Processor. Controls the optical links. LCP is located in the LCM.
LCSS — Logical Channel Subsystems.
LCU — Logical Control Unit.
LD — Logical Device.
LDAP — Lightweight Directory Access Protocol.
LDEV ― Logical Device or Logical Device (number). A set of physical disk partitions (all or portions of 1 or more disks) that are combined so that the subsystem sees and treats them as a single area of data storage. Also called a volume. An LDEV has a specific and unique address within a subsystem. LDEVs become LUNs to an open-systems host.
LDKC — Logical Disk Controller or Logical Disk Controller Manual.
LDM — Logical Disk Manager.
LDS — Linear Data Set.
LED — Light Emitting Diode.
LFF — Large Form Factor.
LIC — Licensed Internal Code.
LIS — Laboratory Information Systems.
LLQ — Lowest Level Qualifier.
LM — Local Memory.
LMODs — Load Modules.
LNKLST — Link List.
Load balancing — The process of distributing processing and communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server. If 1 server starts to be swamped, requests are forwarded to another server with more capacity. Load balancing can also refer to the communications channels themselves.
LOC — “Locations” section of the Maintenance Manual.
Logical DKC (LDKC) — Logical Disk Controller. An internal architecture extension to the Control Unit addressing scheme that allows more LDEVs to be identified within 1 Hitachi enterprise storage system.
Longitudinal record — Patient information from birth to death.
LPAR — Logical Partition (mode).
LR — Local Router.
LRECL — Logical Record Length.
LRP — Local Router Processor.
LRU — Least Recently Used.
LSS — Logical Storage Subsystem (equivalent to LCU).
LU — Logical Unit. Mapping number of an LDEV.
LUN ― Logical Unit Number. 1 or more LDEVs. Used only for open systems.
LUSE ― Logical Unit Size Expansion. Feature used to create virtual LUs that are up to 36 times larger than the standard OPEN-x LUs.
LVDS — Low Voltage Differential Signal.
LVI — Logical Volume Image. Identifies a similar concept (as LUN) in the mainframe environment.
LVM — Logical Volume Manager.

—M—
MAC — Media Access Control. A MAC address is a unique identifier attached to most forms of networking equipment.
MAID — Massive array of disks.
MAN — Metropolitan Area Network. A communications network that generally covers a city or suburb. MAN is very similar to a LAN except it spans across a geographical region such as a state. Instead of the workstations in a LAN, the workstations in a MAN could depict different cities in a state. For example, the state of Texas could have: Dallas, Austin, San Antonio. The city could be a separate LAN and all the cities connected together via a switch. This topology would indicate a MAN.
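As a worked example of the LBA entry above (not from the original glossary): a logical block address enumerates sectors in order, so it converts to and from a cylinder-head-sector triple with integer arithmetic, lba = (cylinder * heads + head) * sectors_per_track + (sector - 1). The geometry values below are assumed for illustration. A minimal Python sketch:

    # Standard LBA <-> CHS conversion; the geometry values are assumptions.
    HEADS = 16
    SECTORS_PER_TRACK = 63

    def lba_to_chs(lba):
        cylinder, rest = divmod(lba, HEADS * SECTORS_PER_TRACK)
        head, sector_index = divmod(rest, SECTORS_PER_TRACK)
        return cylinder, head, sector_index + 1   # sector numbers are 1-based

    def chs_to_lba(cylinder, head, sector):
        return (cylinder * HEADS + head) * SECTORS_PER_TRACK + (sector - 1)

    print(lba_to_chs(1_000_000))                  # (992, 1, 2)
    print(chs_to_lba(*lba_to_chs(1_000_000)))     # 1000000, so it round-trips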


MAPI — Management Application Programming Interface.
Mapping — Conversion between 2 data addressing spaces. For example, mapping refers to the conversion between physical disk block addresses and the block addresses of the virtual disks presented to operating environments by control software.
Mb — Megabit.
MB — Megabyte.
MBA — Memory Bus Adaptor.
MBUS — Multi-CPU Bus.
MC — Multi Cabinet.
MCU — Main Control Unit, Master Control Unit, Main Disk Control Unit or Master Disk Control Unit. The local CU of a remote copy pair.
MDPL — Metadata Data Protection Level.
MediaAgent — The workhorse for all data movement. MediaAgent facilitates the transfer of data between the data source, the client computer, and the destination storage media.
Metadata — In database management systems, data files are the files that store the database information; whereas other files, such as index files and data dictionaries, store administrative information, known as metadata.
MFC — Main Failure Code.
MG — (1) Module Group. 2 (DIMM) cache memory modules that work together. (2) Migration Group. A group of volumes to be migrated together.
MGC — (3-Site) Metro/Global Mirror.
MIB — Management Information Base. A database of objects that can be monitored by a network management system. Both SNMP and RMON use standardized MIB formats that allow any SNMP and RMON tools to monitor any device defined by a MIB.
Microcode — The lowest-level instructions that directly control a microprocessor. A single machine-language instruction typically translates into several microcode instructions. [Diagram: levels of abstraction from high-level languages (Fortran, Pascal, C) through assembly language and machine language down to the hardware.]
Microprogram — See Microcode.
MIF — Multiple Image Facility.
Mirror Cache OFF — Increases cache efficiency over cache data redundancy.
M-JNL — Primary journal volumes.
MM — Maintenance Manual.
MMC — Microsoft Management Console.
Mode — The state or setting of a program or device. The term mode implies a choice, which is that you can change the setting and put the system in a different mode.
MP — Microprocessor.
MPA — Microprocessor adapter.
MPB — Microprocessor board.
MPI — (Electronic) Master Patient Identifier. Also known as EMPI.
MPIO — Multipath I/O.
MP PK — MP Package.
MPU — Microprocessor Unit.
MQE — Metadata Query Engine (Hitachi).
MS/SG — Microsoft Service Guard.
MSCS — Microsoft Cluster Server.
MSS — (1) Multiple Subchannel Set. (2) Managed Security Services.
MTBF — Mean Time Between Failure.
MTS — Multitiered Storage.
Multitenancy — In cloud computing, multitenancy is a secure way to partition the infrastructure (application, storage pool and network) so multiple customers share a single resource pool. Multitenancy is one of the key ways cloud can achieve massive economy of scale.
M-VOL — Main Volume.
MVS — Multiple Virtual Storage.
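As an illustration of the Mapping entry above (not part of the original glossary): control software converts virtual-disk block addresses into physical disk block addresses. A minimal Python sketch, assuming an invented layout that stripes a virtual disk across 2 physical disks:

    # Convert a virtual block address to a (physical disk, physical block)
    # pair. The two-disk striped layout is an assumption for the example.
    PHYSICAL_DISKS = ["disk0", "disk1"]

    def virtual_to_physical(virtual_block):
        disk_index = virtual_block % len(PHYSICAL_DISKS)
        physical_block = virtual_block // len(PHYSICAL_DISKS)
        return PHYSICAL_DISKS[disk_index], physical_block

    for vb in (0, 1, 2, 3):
        print(vb, "->", virtual_to_physical(vb))
    # 0 -> ('disk0', 0), 1 -> ('disk1', 0), 2 -> ('disk0', 1), 3 -> ('disk1', 1)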


—N—
NAS ― Network Attached Storage. A disk array connected to a controller that gives access to a LAN Transport. It handles data at the file level.
NAT — Network Address Translation.
NDMP — Network Data Management Protocol. A protocol meant to transport data between NAS devices.
NetBIOS — Network Basic Input/Output System.
Network — A computer system that allows sharing of resources, such as files and peripheral hardware devices.
Network Cloud — A communications network. The word "cloud" by itself may refer to any local area network (LAN) or wide area network (WAN). The terms "computing" and "cloud computing" refer to services offered on the public Internet or to a private network that uses the same protocols as a standard network. See also cloud computing.
NFS protocol — Network File System is a protocol that allows a computer to access files over a network as easily as if they were on its local disks.
NIM — Network Interface Module.
NIS — Network Information Service (originally called the Yellow Pages or YP).
NIST — National Institute of Standards and Technology. A standards organization active in cloud computing.
NLS — Native Language Support.
Node ― An addressable entity connected to an I/O bus or network, used primarily to refer to computers, storage devices, and storage subsystems. The component of a node that connects to the bus or network is a port.
Node name ― A Name_Identifier associated with a node.
NPV — Net Present Value.
NRO — Network Recovery Objective.
NTP — Network Time Protocol.
NVS — Non Volatile Storage.

—O—
OCC — Open Cloud Consortium. A standards organization active in cloud computing.
OEM — Original Equipment Manufacturer.
OFC — Open Fibre Control.
OGF — Open Grid Forum. A standards organization active in cloud computing.
OID — Object identifier.
OLA — Operating Level Agreements.
OLTP — On-Line Transaction Processing.
OLTT — Open-loop throughput throttling.
OMG — Object Management Group. A standards organization active in cloud computing.
On/Off CoD — On/Off Capacity on Demand.
ONODE — Object node.
OPEX — Operational Expenditure. This is an operating expense, operating expenditure, operational expense, or operational expenditure, which is an ongoing cost for running a product, business, or system. Its counterpart is a capital expenditure (CAPEX).
ORM — Online Read Margin.
OS — Operating System.
Out-of-band virtualization — Refers to systems where the controller is located outside of the SAN data path. Separates control and data on different connection paths. Also called asymmetric virtualization.

—P—
P-2-P — Point to Point. Also P-P.
PaaS — Platform as a Service. A cloud computing business model — delivering a computing platform and solution stack as a service. PaaS offerings facilitate deployment of applications without the cost and complexity of buying and managing the underlying hardware, software and provisioning hosting capabilities. PaaS provides all of the facilities required to support the complete life cycle of building and delivering web applications and services entirely from the Internet.
PACS — Picture Archiving and Communication System.


PAN — Personal Area Network. A communications network that transmits data wirelessly over a short distance. Bluetooth and Wi-Fi Direct are examples of personal area networks.
PAP — Password Authentication Protocol.
Parity — A technique of checking whether data has been lost or written over when it is moved from 1 place in storage to another or when it is transmitted between computers.
Parity Group — Also called an array group. This is a group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Partitioned cache memory — Separate workloads in a “storage consolidated” system by dividing cache into individually managed multiple partitions. Then customize the partition to match the I/O characteristics of assigned LUs.
PAT — Port Address Translation.
PATA — Parallel ATA.
Path — Also referred to as a transmission channel, the path between 2 nodes of a network that a data communication follows. The term can refer to the physical cabling that connects the nodes on a network, the signal that is communicated over the pathway or a sub-channel in a carrier frequency.
Path failover — See Failover.
PAV — Parallel Access Volumes.
PAWS — Protect Against Wrapped Sequences.
PB — Petabyte.
PBC — Port By-pass Circuit.
PCB — Printed Circuit Board.
PCHIDS — Physical Channel Path Identifiers.
PCI — Power Control Interface.
PCI CON — Power Control Interface Connector Board.
PCI DSS — Payment Card Industry Data Security Standard.
PCIe — Peripheral Component Interconnect Express.
PD — Product Detail.
PDEV — Physical Device.
PDM — Policy based Data Migration or Primary Data Migrator.
PDS — Partitioned Data Set.
PDSE — Partitioned Data Set Extended.
Performance — Speed of access or the delivery of information.
Petabyte (PB) — A measurement of capacity — the amount of data that a drive or storage system can store after formatting. 1PB = 1,024TB.
PFA — Predictive Failure Analysis.
PFTaaS — Private File Tiering as a Service. A cloud computing business model.
PGP — Pretty Good Privacy (encryption).
PGR — Persistent Group Reserve.
PI — Product Interval.
PIR — Performance Information Report.
PiT — Point-in-Time.
PK — Package (see PCB).
PL — Platter. The circular disk on which the magnetic data is stored. Also called motherboard or backplane.
PM — Package Memory.
POC — Proof of concept.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
POSIX — Portable Operating System Interface for UNIX. A set of standards that defines an application programming interface (API) for software designed to run under heterogeneous operating systems.
PP — Program product.
P-P — Point-to-point; also P2P.
PPRC — Peer-to-Peer Remote Copy.
Private Cloud — A type of cloud computing defined by shared capabilities within a single company; modest economies of scale and less automation. Infrastructure and data reside inside the company’s data center behind a firewall. Comprised of licensed software tools rather than on-going services. Example: An organization implements its own virtual, scalable cloud and business units are charged on a per use basis.
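To make the Parity entry above concrete (illustration only, not from the original glossary), here is a minimal Python sketch of an XOR parity check; the data values are invented:

    # XOR all bytes together before a move or transmission, then recompute
    # afterwards; a mismatch shows the data was lost or written over.
    # Note that XOR parity can miss errors that cancel each other out.
    from functools import reduce

    def parity_byte(data):
        return reduce(lambda a, b: a ^ b, data, 0)

    original = b"storage block"
    check = parity_byte(original)

    print(parity_byte(b"storage block") == check)   # True: intact copy
    print(parity_byte(b"storage blocc") == check)   # False: damage detected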


Private Network Cloud — A type of cloud network with 3 characteristics: (1) Operated solely for a single organization, (2) Managed internally or by a third-party, (3) Hosted internally or externally.
PR/SM — Processor Resource/System Manager.
Protocol — A convention or standard that enables the communication between 2 computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be implemented by hardware, software, or a combination of the 2. At the lowest level, a protocol defines the behavior of a hardware connection.
Provisioning — The process of allocating storage resources and assigning storage capacity for an application, usually in the form of server disk drive space, in order to optimize the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator, and it can be a tedious process. In recent years, automated storage provisioning (also called auto-provisioning) programs have become available. These programs can reduce the time required for the storage provisioning process, and can free the administrator from the often distasteful task of performing this chore manually.
PS — Power Supply.
PSA — Partition Storage Administrator.
PSSC — Perl Silicon Server Control.
PSU — Power Supply Unit.
PTAM — Pickup Truck Access Method.
PTF — Program Temporary Fixes.
PTR — Pointer.
PU — Processing Unit.
Public Cloud — Resources, such as applications and storage, available to the general public over the Internet.
P-VOL — Primary Volume.

—Q—
QD — Quorum Device.
QDepth — The number of I/O operations that can run in parallel on a SAN device; also WWN QDepth.
QoS — Quality of Service. In the field of computer networking, the traffic engineering term quality of service (QoS) refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
QSAM — Queued Sequential Access Method.

—R—
RACF — Resource Access Control Facility.
RAID ― Redundant Array of Independent Disks, or Redundant Array of Inexpensive Disks. A group of disks that look like a single volume to the server. RAID improves performance by pulling a single stripe of data from multiple disks, and improves fault-tolerance either through mirroring or parity checking, and it is a component of a customer’s SLA.
RAID-0 — Striped array with no parity.
RAID-1 — Mirrored array and duplexing.
RAID-3 — Striped array with typically non-rotating parity, optimized for long, single-threaded transfers.
RAID-4 — Striped array with typically non-rotating parity, optimized for short, multi-threaded transfers.
RAID-5 — Striped array with typically rotating parity, optimized for short, multithreaded transfers.
RAID-6 — Similar to RAID-5, but with dual rotating parity physical disks, tolerating 2 physical disk failures.
RAIN — Redundant (or Reliable) Array of Independent Nodes (architecture).
RAM — Random Access Memory.
RAM DISK — A LUN held entirely in the cache area.
RAS — Reliability, Availability, and Serviceability or Row Address Strobe.
RBAC — Role Base Access Control.
RC — (1) Reference Code or (2) Remote Control.
RCHA — RAID Channel Adapter.
RCP — Remote Control Processor.
RCU — Remote Control Unit or Remote Disk Control Unit.
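As an illustration of the parity-based fault tolerance behind RAID-5 and RAID-6 above (not from the original glossary; the strip contents are invented), a minimal Python sketch:

    # The parity strip is the XOR of the data strips in a stripe, so any
    # single lost strip equals the XOR of all the surviving strips.
    from functools import reduce

    def xor_strips(strips):
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), strips)

    data = [b"AAAA", b"BBBB", b"CCCC"]    # data strips in one stripe
    parity = xor_strips(data)             # parity strip, written to a 4th disk

    # The disk holding b"BBBB" fails; XOR of the survivors rebuilds it.
    rebuilt = xor_strips([data[0], data[2], parity])
    print(rebuilt)                        # b'BBBB'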


RCUT — RCU Target.
RD/WR — Read/Write.
RDM — Raw Disk Mapped.
RDMA — Remote Direct Memory Access.
RDP — Remote Desktop Protocol.
RDW — Record Descriptor Word.
Read/Write Head — Reads and writes data to the platters; typically there is 1 head per platter side, and each head is attached to a single actuator shaft.
RECFM — Record Format.
Redundant — Describes the computer or network system components, such as fans, hard disk drives, servers, operating systems, switches, and telecommunication links that are installed to back up primary resources in case they fail. A well-known example of a redundant system is the redundant array of independent disks (RAID). Redundancy contributes to the fault tolerance of a system.
Redundancy — Backing up a component to help ensure high availability.
Reliability — (1) Level of assurance that data will not be lost or degraded over time. (2) An attribute of any computer component (software, hardware, or a network) that consistently performs according to its specifications.
REST — Representational State Transfer.
REXX — Restructured extended executor.
RID — Relative Identifier that uniquely identifies a user or group within a Microsoft Windows domain.
RIS — Radiology Information System.
RISC — Reduced Instruction Set Computer.
RIU — Radiology Imaging Unit.
R-JNL — Secondary journal volumes.
RK — Rack additional.
RKAJAT — Rack Additional SATA disk tray.
RKAK — Expansion unit.
RLGFAN — Rear Logic Box Fan Assembly.
RLOGIC BOX — Rear Logic Box.
RMF — Resource Measurement Facility.
RMI — Remote Method Invocation. A way that a programmer, using the Java programming language and development environment, can write object-oriented programming in which objects on different computers can interact in a distributed network. RMI is the Java version of what is generally known as a RPC (remote procedure call), but with the ability to pass 1 or more objects along with the request.
RndRD — Random read.
ROA — Return on Asset.
RoHS — Restriction of Hazardous Substances (in Electrical and Electronic Equipment).
ROI — Return on Investment.
ROM — Read Only Memory.
Round robin mode — A load balancing technique which distributes data packets equally among the available paths. Round robin DNS is usually used for balancing the load of geographically distributed Web servers. It works on a rotating basis in that one server IP address is handed out, then moves to the back of the list; the next server IP address is handed out, and then it moves to the end of the list; and so on, depending on the number of servers being used. This works in a looping fashion.
Router — A computer networking device that forwards data packets toward their destinations, through a process known as routing.
RPC — Remote procedure call.
RPO — Recovery Point Objective. The point in time that recovered data should match.
RPSFAN — Rear Power Supply Fan Assembly.
RRDS — Relative Record Data Set.
RS CON — RS232C/RS422 Interface Connector.
RSD — RAID Storage Division (of Hitachi).
R-SIM — Remote Service Information Message.
RSM — Real Storage Manager.
RTM — Recovery Termination Manager.
RTO — Recovery Time Objective. The length of time that can be tolerated between a disaster and recovery of data.
R-VOL — Remote Volume.
R/W — Read/Write.
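The rotating hand-out described under Round robin mode above can be sketched in a few lines of Python (illustration only; the addresses are invented):

    # Each request takes the next server address; after the last address
    # the rotation wraps back to the first, in a looping fashion.
    import itertools

    servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
    rotation = itertools.cycle(servers)

    for request in range(5):
        print(request, next(rotation))
    # 0 10.0.0.1, 1 10.0.0.2, 2 10.0.0.3, 3 10.0.0.1, 4 10.0.0.2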


—S—
SA — Storage Administrator.
SA z/OS — System Automation for z/OS.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users using a thin client, such as a web browser via the Internet. SaaS has become a common delivery model for most business applications, including accounting (CRM and ERP), invoicing (HRM), content management (CM) and service desk management, just to name the most common software that runs in the cloud. This is the fastest growing service in the cloud market today. SaaS performs best for relatively simple tasks in IT-constrained organizations.
SACK — Sequential Acknowledge.
SACL — System ACL. The part of a security descriptor that stores system auditing information.
SAIN — SAN-attached Array of Independent Nodes (architecture).
SAN ― Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment is a new standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, unlike current IDE (Integrated Drive Electronics) hard drives that use parallel signaling.
SBM — Solutions Business Manager.
SBOD — Switched Bunch of Disks.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division (of Hitachi).
SDH — Synchronous Digital Hierarchy.
SDM — System Data Mover.
SDSF — Spool Display and Search Facility.
Sector — A sub-division of a track of a magnetic disk that stores a fixed amount of data.
SEL — System Event Log.
Selectable segment size — Can be set per partition.
Selectable Stripe Size — Increases performance by customizing the disk access size.
SENC — The SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems on their own and they occasionally require a firmware upgrade.
SeqRD — Sequential read.
Serial Transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests, also called a host.
Server Virtualization — The masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The implementation of multiple isolated virtual environments in one physical server.
Service-level Agreement — SLA. A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISP) provide their customers with a SLA. More recently, IT departments in major enterprises have adopted the idea of writing a service level agreement so that services for their customers (users in other departments within the enterprise) can be measured, justified, and perhaps compared with those of outsourcing network providers. Some metrics that SLAs may specify include:
• The percentage of the time services will be available
• The number of users that can be served simultaneously
• Specific performance benchmarks to which actual performance will be periodically compared
• The schedule for notification in advance of network changes that may affect users
• Help desk response time for various classes of problems
• Dial-in access availability
• Usage statistics that will be provided
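As a worked example of the first SLA metric listed above (not from the original glossary; the outage figure is invented):

    # Availability as the percentage of scheduled time the service was up.
    scheduled_minutes = 30 * 24 * 60     # a 30-day month: 43,200 minutes
    outage_minutes = 43                  # invented unplanned downtime

    availability = 100 * (scheduled_minutes - outage_minutes) / scheduled_minutes
    print(f"{availability:.3f}% available")   # 99.900%, roughly "three nines"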
Service-Level Objective — SLO. Individual performance metrics built into an SLA. Each SLO corresponds to a single performance characteristic relevant to the delivery of an overall service. Some examples of SLOs include: system availability, help desk incident resolution time, and application response time.
SES — SCSI Enclosure Services.
SFF — Small Form Factor.
SFI — Storage Facility Image.
SFM — Sysplex Failure Management.
SFP — Small Form-Factor Pluggable module Host connector. A specification for a new generation of optical modular transceivers. The devices are designed for use with small form factor (SFF) connectors, offer high speed and physical compactness, and are hot-swappable.
SHSN — Shared memory Hierarchical Star Network.
SID — Security Identifier. A user or group identifier within the Microsoft Windows security model.
SIGP — Signal Processor.
SIM — (1) Service Information Message. A message reporting an error that contains fix guidance information. (2) Storage Interface Module. (3) Subscriber Identity Module.
SIM RC — Service (or system) Information Message Reference Code.
SIMM — Single In-line Memory Module.
SLA — Service Level Agreement.
SLO — Service Level Objective.
SLRP — Storage Logical Partition.
SM ― Shared Memory or Shared Memory Module. Stores the shared information about the subsystem and the cache control information (director names). This type of information is used for the exclusive control of the subsystem. Like CACHE, shared memory is controlled as 2 areas of memory and fully non-volatile (sustained for approximately 7 days).
SM PATH — Shared Memory Access Path. The Access Path from the processors of CHA, DKA PCB to Shared Memory.
SMB/CIFS — Server Message Block Protocol/Common Internet File System.
SMC — Shared Memory Control.
SME — Small and Medium Enterprise.
SMF — System Management Facility.
SMI-S — Storage Management Initiative Specification.
SMP — (1) Symmetric Multiprocessing. (2) System Modification Program. An IBM-licensed program used to install software and software changes on z/OS systems.
SMP/E — System Modification Program/Extended.
SMS — System Managed Storage.
SMTP — Simple Mail Transfer Protocol.
SMU — System Management Unit.
Snapshot Image — A logical duplicated volume (V-VOL) of the primary volume. It is an internal volume intended for restoration.
SNIA — Storage Networking Industry Association. An association of producers and consumers of storage networking products, whose goal is to further storage networking technology and applications. Active in cloud computing.
SNMP — Simple Network Management Protocol. A TCP/IP protocol that was designed for management of networks over TCP/IP, using agents and stations.
SOA — Service Oriented Architecture.


SOAP — Simple object access protocol. A way for a program running in one kind of operating system (such as Windows 2000) to communicate with a program in the same or another kind of an operating system (such as Linux) by using the World Wide Web's Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the mechanisms for information exchange.
Socket — In UNIX and some other operating systems, socket is a software object that connects an application to a network protocol. In UNIX, for example, a program can send and receive TCP/IP messages by opening a socket and reading and writing data to and from the socket. This simplifies program development because the programmer need only worry about manipulating the socket and can rely on the operating system to actually transport messages across the network correctly. Note that a socket in this sense is completely soft; it is a software object, not a physical component.
SOM — System Option Mode.
SONET — Synchronous Optical Network.
SOSS — Service Oriented Storage Solutions.
SPaaS — SharePoint as a Service. A cloud computing business model.
SPAN — A section between 2 intermediate supports. See Storage pool.
Spare — An object reserved for the purpose of substitution for a like object in case of that object's failure.
SPC — SCSI Protocol Controller.
SpecSFS — Standard Performance Evaluation Corporation Shared File system.
SPECsfs97 — Standard Performance Evaluation Corporation (SPEC) System File Server (sfs) developed in 1997 (97).
SPI model — Software, Platform and Infrastructure as a service. A common term to describe the cloud computing “as a service” business model.
SRA — Storage Replicator Adapter.
SRDF/A — (EMC) Symmetrix Remote Data Facility Asynchronous.
SRDF/S — (EMC) Symmetrix Remote Data Facility Synchronous.
SRM — Site Recovery Manager.
SSB — Sense Byte.
SSC — SiliconServer Control.
SSCH — Start Subchannel.
SSD — Solid-state Drive or Solid-State Disk.
SSH — Secure Shell.
SSID — Storage Subsystem ID or Subsystem Identifier.
SSL — Secure Sockets Layer.
SSPC — System Storage Productivity Center.
SSUE — Split SUSpended Error.
SSUS — Split SUSpend.
SSVP — Sub Service Processor. Interfaces the SVP to the DKC.
SSW — SAS Switch.
Sticky Bit — Extended UNIX mode bit that prevents objects from being deleted from a directory by anyone other than the object's owner, the directory's owner or the root user.
Storage pooling — The ability to consolidate and manage storage resources across storage system enclosures where the consolidation of many appears as a single view.
STP — Server Time Protocol.
STR — Storage and Retrieval Systems.
Striping — A RAID technique for writing a file to multiple disks on a block-by-block basis, with or without parity.
Subsystem — Hardware or software that performs a specific function within a larger system.
SVC — Supervisor Call Interruption.
SVC Interrupts — Supervisor calls.
S-VOL — (1) (ShadowImage) Source Volume for In-System Replication, or (2) (Universal Replicator) Secondary Volume.
SVP — Service Processor ― A laptop computer mounted on the control frame (DKC) and used for monitoring, maintenance and administration of the subsystem.
Switch — A fabric device providing full bandwidth per port and high-speed routing of data via link-level addressing.
SWPX — Switching power supply.
SXP — SAS Expander.
Symmetric virtualization — See In-band virtualization.
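The Socket entry above can be illustrated with a short Python sketch (not part of the original glossary; the host name is an assumption, and any reachable web server would do):

    # Open a socket, write an HTTP request, read the reply; the operating
    # system transports the TCP/IP messages on the program's behalf.
    import socket

    request = b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    with socket.create_connection(("example.com", 80), timeout=10) as sock:
        sock.sendall(request)
        reply = sock.recv(4096)

    print(reply.split(b"\r\n")[0])   # for example: b'HTTP/1.1 200 OK'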
Synchronous — Operations that have a fixed time relationship to each other. Most commonly used to denote I/O operations that occur in time sequence, i.e., a successor operation does not occur until its predecessor is complete.

—T—
Target — The system component that receives a SCSI I/O command, an open device that operates at the request of the initiator.
TB — Terabyte. 1TB = 1,024GB.
TCDO — Total Cost of Data Ownership.
TCO — Total Cost of Ownership.
TCP/IP — Transmission Control Protocol over Internet Protocol.
TDCONV — Trace Dump CONVerter. A software program that is used to convert traces taken on the system into readable text. This information is loaded into a special spreadsheet that allows for further investigation of the data and more in-depth failure analysis.
TDMF — Transparent Data Migration Facility.
Telco or TELCO — Telecommunications Company.
TEP — Tivoli Enterprise Portal.
Terabyte (TB) — A measurement of capacity, data or data storage. 1TB = 1,024GB.
TFS — Temporary File System.
TGTLIBs — Target Libraries.
THF — Front Thermostat.
Thin Provisioning — Thin provisioning allows storage space to be easily allocated to servers on a just-enough and just-in-time basis.
THR — Rear Thermostat.
Throughput — The amount of data transferred from 1 place to another or processed in a specified amount of time. Data transfer rates for disk drives and networks are measured in terms of throughput. Typically, throughputs are measured in kbps, Mbps and Gb/sec.
TID — Target ID.
Tiered storage — A storage strategy that matches data classification to storage metrics. Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Since assigning data to particular media may be an ongoing and complex activity, some vendors provide software for automatically managing the process based on a company-defined policy.
Tiered Storage Promotion — Moving data between tiers of storage as their availability requirements change.
TLS — Tape Library System.
TLS — Transport Layer Security.
TMP — Temporary or Test Management Program.
TOD (or ToD) — Time Of Day.
TOE — TCP Offload Engine.
Topology — The shape of a network or how it is laid out. Topologies are either physical or logical.
TPC-R — Tivoli Productivity Center for Replication.
TPF — Transaction Processing Facility.
TPOF — Tolerable Points of Failure.
Track — Circular segment of a hard disk or other storage media.
Transfer Rate — See Data Transfer Rate.
Trap — A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the Operating System performs some action, and then returns control to the program.
TSC — Tested Storage Configuration.
TSO — Time Sharing Option.
TSO/E — Time Sharing Option/Extended.
T-VOL — (ShadowImage) Target Volume for In-System Replication.

—U—
UA — Unified Agent.
UBX — Large Box (Large Form Factor).
UCB — Unit Control Block.
UDP — User Datagram Protocol. 1 of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another.
UFA — UNIX File Attributes.
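To illustrate the UDP entry above (illustration only; the loopback address and port number are invented for the example), a minimal Python sketch of one datagram passing between two sockets:

    # No connection setup: the sender just addresses a short datagram
    # to the receiver, which picks it up with recvfrom().
    import socket

    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 50007))

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"short message", ("127.0.0.1", 50007))

    data, addr = receiver.recvfrom(1024)
    print(data, "from", addr)        # b'short message' from ('127.0.0.1', ...)

    sender.close()
    receiver.close()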


UID — User Identifier within the UNIX security model.
UPS — Uninterruptible Power Supply. A power supply that includes a battery to maintain power in the event of a power outage.
UR — Universal Replicator.
UUID — Universally Unique Identifier.

—V—
vContinuum — Using the vContinuum wizard, users can push agents to primary and secondary servers, set up protection and perform failovers and failbacks.
VCS — Veritas Cluster System.
VDEV — Virtual Device.
VDI — Virtual Desktop Infrastructure.
VHD — Virtual Hard Disk.
VHDL — VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language.
VHSIC — Very-High-Speed Integrated Circuit.
VI — Virtual Interface. A research prototype that is undergoing active development, and the details of the implementation may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (for example, copying data, computing checksums) and system call overheads while still preventing 1 process from accidentally or maliciously tampering with or reading data being used by another.
Virtualization — Referring to storage virtualization, virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN, and makes tasks such as archiving, backup and recovery easier and faster. Storage virtualization is usually implemented via software applications. There are many additional types of virtualization.
Virtual Private Cloud (VPC) — Private cloud existing within a shared or public cloud (for example, the Intercloud). Also known as a virtual private network cloud.
VLL — Virtual Logical Volume Image/Logical Unit Number.
VLUN — Virtual LUN. Customized volume. Size chosen by user.
VLVI — Virtual Logic Volume Image. Marketing name for CVS (custom volume size).
VM — Virtual Machine.
VMDK — Virtual Machine Disk file format.
VNA — Vendor Neutral Archive.
VOJP — (Cache) Volatile Jumper.
VOLID — Volume ID.
VOLSER — Volume Serial Numbers.
Volume — A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than 1 volume or for a volume to span more than 1 disk.
VPC — Virtual Private Cloud.
VSAM — Virtual Storage Access Method.
VSD — Virtual Storage Director.
VSP — Virtual Storage Platform.
VSS — (Microsoft) Volume Shadow Copy Service.
VTL — Virtual Tape Library.
VTOC — Volume Table of Contents.
VTOCIX — Volume Table of Contents Index.
VVDS — Virtual Volume Data Set.
V-VOL — Virtual Volume.

—W—
WAN — Wide Area Network. A computing internetwork that covers a broad area or region. Contrast with PAN, LAN and MAN.
WDIR — Directory Name Object or Working Directory.
WDS — Working Data Set.
WebDAV — Web-based Distributed Authoring and Versioning (HTTP extensions).
WFILE — File Object or Working File.
WFS — Working File Set.
WINS — Windows Internet Naming Service.
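As a loose illustration of the storage virtualization idea above (not from the original glossary; the device names and capacities are invented), a minimal Python sketch of several devices presented as one unit:

    # Several storage devices amalgamated into what appears to be a
    # single storage unit with one combined capacity.
    class VirtualStorageUnit:
        def __init__(self, devices):
            self.devices = dict(devices)          # name -> capacity in GB

        @property
        def capacity_gb(self):
            return sum(self.devices.values())

    pool = VirtualStorageUnit({"array-a": 2048, "array-b": 4096, "array-c": 1024})
    print(pool.capacity_gb)                       # 7168: one unit, 3 devices behind it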
WL — Wide Link.
WLM — Work Load Manager.
WORM — Write Once, Read Many.
WSDL — Web Services Description Language.
WSRM — Write Seldom, Read Many.
WTREE — Directory Tree Object or Working Tree.
WWN ― World Wide Name. A unique identifier for an open-system host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix).
WWNN — World Wide Node Name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN ― World Wide Port Name. A globally unique 64-bit identifier assigned to each Fibre Channel port. A Fibre Channel port’s WWPN is permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.

—X—
XAUI — "X"=10, AUI = Attachment Unit Interface.
XCF — Cross System Communications Facility.
XDS — Cross Enterprise Document Sharing.
XDSi — Cross Enterprise Document Sharing for Imaging.
XFI — Standard interface for connecting a 10Gb Ethernet MAC device to an XFP interface.
XFP — "X"=10Gb Small Form Factor Pluggable.
XML — eXtensible Markup Language.
XRC — Extended Remote Copy.

—Y—
YB — Yottabyte.
Yottabyte — A highest-end measurement of data at the present time. 1YB = 1,024ZB, or 1 quadrillion GB. A recent estimate (2011) is that all the computer hard drives in the world do not contain 1YB of data.

—Z—
z/OS — z Operating System (IBM® S/390® or z/OS® Environments).
z/OS NFS — (System) z/OS Network File System.
z/OSMF — (System) z/OS Management Facility.
zAAP — (System) z Application Assist Processor (for Java and XML workloads).
ZCF — Zero Copy Failover. Also known as Data Access Path (DAP).
Zettabyte (ZB) — A high-end measurement of data at the present time. 1ZB = 1,024EB.
zFS — (System) zSeries File System.
zHPF — (System) z High Performance FICON.
zIIP — (System) z Integrated Information Processor (specialty processor for database).
Zone — A collection of Fibre Channel Ports that are permitted to communicate with each other via the fabric.
Zoning — A method of subdividing a storage area network into disjoint zones, or subsets of nodes on the network. Storage area network nodes outside a zone are invisible to nodes within the zone. Moreover, with switched SANs, traffic within each zone may be physically isolated from traffic outside the zone.
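As an illustration of the WWN structure described in the entries above (not from the original glossary; the WWPN value is invented), a minimal Python sketch that extracts the 4-bit NAA prefix from a 64-bit identifier:

    # The top 4 bits of a 64-bit WWN carry the Network Address Authority
    # (NAA) code that identifies the name registration format in use.
    wwpn = 0x50060E8005ABC123        # invented 64-bit World Wide Port Name

    naa = wwpn >> 60                 # shift off all but the top 4 bits
    print(f"NAA = {naa:#x}")         # NAA = 0x5, the IEEE registered format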
Evaluating this Course
Please use the online evaluation system to help improve our courses.
Learning Center Sign-in location: https://learningcenter.hds.com/Saba/Web/Main