Version 5.0
EMC Corporation
171 South Street
Hopkinton, MA 01748-9103
Corporate Headquarters: (508) 435-1000, (800) 424-EMC2
Fax: (508) 435-5374 Service: (800) SVC-4EMC
Copyright © 2001 EMC Corporation. All rights reserved.
Printed December, 2001
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
Trademark Information
EMC2, EMC, MOSAIC:2000, CLARiiON, Navisphere, and Symmetrix are registered trademarks and EMC Enterprise Storage, The Enterprise Storage
Company, The EMC Effect, Connectrix, EDM, SDMS, SRDF, TimeFinder, PowerPath, InfoMover, FarPoint, EMC Enterprise Storage Network, EMC
Enterprise Storage Specialist, EMC Storage Logic, Universal Data Tone, E-Infostructure, and Celerra are trademarks of EMC Corporation.
AIX, DB2, ESCON, IBM, and NetView are registered trademarks, and MVS is a trademark of International Business Machines Corporation.
Compaq and the names of Compaq products referenced herein are either trademarks and/or service marks or registered trademarks and/or service
marks of Compaq.
Computer Associates is a trademark of Computer Associates International, Inc.
HP-UX and OpenView are registered trademarks of Hewlett-Packard Company.
Microsoft, Windows, and Windows NT are registered trademarks of Microsoft Corporation.
Novell is a registered trademark of Novell, Inc., in the United States and other countries.
Oracle is a registered trademark of Oracle Corporation.
Solaris is a registered trademark, and Java is a trademark of Sun Microsystems, Inc.
Tivoli is a trademark of Tivoli Systems, Inc., an IBM Company.
Unicenter TNG is a registered trademark of Computer Associates International, Inc.
All other trademarks used herein are the property of their respective owners.
Preface.......................................................................................................................... xiii
Figures
1-1 Enginuity and Symmetrix Software Relationships ................................. 1-2
1-2 Symmetrix 8230 (Interior View) ................................................................. 1-3
1-3 Symmetrix 8530 (Interior View) ................................................................. 1-4
1-4 Symmetrix 8830 With 384 Disk Devices (Interior View) ........................ 1-5
1-5 Symmetrix 8530 Block Diagram ................................................................. 1-7
1-6 Symmetrix 8530 Channel Director Operator Panel ................................. 1-9
1-7 Track Format for 3390 and 3380 DASD ................................................... 1-12
2-1 Host Cache Use ............................................................................................. 2-2
2-2 Symmetrix Cache Management and Data Flow ...................................... 2-3
2-3 LRU and Age Link Chain Data Flow ......................................................... 2-4
2-4 I/O Response Time (Mainframe Environment) ....................................... 2-6
2-5 I/O Response Time (Open Systems Environment) ................................. 2-7
2-6 Symmetrix I/O Operations ......................................................................... 2-8
2-7 Destaging Operation .................................................................................... 2-9
2-8 Read Operations ......................................................................................... 2-10
2-9 Read Hit ....................................................................................................... 2-11
2-10 Read Miss ..................................................................................................... 2-11
2-11 Write Operations ........................................................................................ 2-12
2-12 Fast Write ..................................................................................................... 2-13
2-13 Delayed Fast Write ..................................................................................... 2-13
3-1 Logical Volume Mapping (8:1) ................................................................... 3-6
3-2 Concatenated Volumes ................................................................................ 3-8
3-3 Striped Data ................................................................................................... 3-8
4-1 Dynamic Sparing Process ............................................................................ 4-7
4-2 Parity Protection Logic ................................................................................ 4-9
5-1 Basic SRDF Configuration ........................................................................... 5-3
5-2 Switched SRDF With Multiple Primary and Secondary Devices .......... 5-4
5-3 Two Production Sites and One Recovery Site ........................................ 5-17
5-4 Data Vaulting Solution .............................................................................. 5-18
5-5 Sites Containing Both Primary and Secondary Devices ........................ 5-19
5-6 SRDF Campus Solution .............................................................................. 5-19
5-7 SRDF Campus Connectivity ...................................................................... 5-20
5-8 Switched SRDF Configurations ................................................................ 5-21
5-9 SRDF Extended Distance Solutions .......................................................... 5-22
5-10 SRDF With and Without FarPoint ............................................................ 5-23
5-11 Concurrent RDF Configuration ................................................................ 5-33
5-12 Primary (Source) and Secondary (Target) Relationships ...................... 5-34
5-13 Failed Link Between Source 2 and Target 1 ............................................ 5-35
5-14 Sources 1, 2, and 3 in a Consistency Group ............................................. 5-35
5-15 Failed Link Between Source 2 and Target 1 ............................................ 5-36
5-16 SRDF Business Continuance ...................................................................... 5-41
5-17 Establish Operation ..................................................................................... 5-42
5-18 Restore Operation ....................................................................................... 5-42
5-19 BCV Functioning As a Primary (Source) SRDF Device ......................... 5-44
5-20 SRDF Multihop ............................................................................................ 5-45
6-1 Initial Configuration ..................................................................................... 6-6
6-2 Establishing a BCV Pair ................................................................................ 6-7
6-3 Splitting a BCV Pair ...................................................................................... 6-9
6-4 Differential Split .......................................................................................... 6-11
6-5 Reestablishing a BCV Pair ......................................................................... 6-13
6-6 Restoring a BCV Device ............................................................................. 6-15
6-7 Incrementally Restoring a BCV Device .................................................... 6-17
7-1 ESN Management Levels of Trust .............................................................. 7-6
7-2 Basic ESN Environment ............................................................................... 7-9
7-3 Fibre Channel Zoning ................................................................................. 7-14
7-4 Volume Access Control .............................................................................. 7-17
7-5 VCMBD Volume ......................................................................................... 7-18
7-6 SID Lock Down Feature ............................................................................. 7-19
7-7 SID Values .................................................................................................... 7-20
A-1 Problem Detection and Resolution Process .............................................. A-2
Tables
1-1 IBM DASD Emulation Characteristics .................................................... 1-11
1-2 4x2 Remote Link Channel Director Sample Configurations ................ 1-17
1-3 4x4 Remote Link Channel Director Sample Configurations ................ 1-18
5-1 SRDF Connectivity Types and Distance Limitations .............................. 5-5
5-2 Source (R1) Volume Accessibility ............................................................ 5-12
5-3 Target (R2) Volume Accessibility ............................................................ 5-12
5-4 Logical Volume Attributes ........................................................................ 5-13
Audience

This guide is part of the EMC ControlCenter v5.0 documentation set,
and is intended for use by system and data storage administrators.
Readers of this guide are expected to be familiar with the following
topics:
◆ Symmetrix Operation
◆ Host Operating Environments
The companion volume to this guide, EMC ControlCenter User Guide,
provides procedures for implementing many of the concepts covered in this
manual.
Conventions Used in This Guide

EMC uses the following conventions for notes, cautions, warnings,
and danger notices.
! CAUTION
A caution contains information essential to avoid data loss or
damage to the system or equipment. The caution may apply to
hardware or software.
Typographical Conventions
EMC uses the following type style conventions in this guide:
Palatino, bold ◆ Dialog box, button, icon, and menu items in text
◆ Selections you can make from the user interface, including
buttons, icons, options, and field names
c:\Program Files\EMC\Symapi\db
Where to Get Help

Obtain technical support by calling your local sales office.
For service, call:
United States: (800) 782-4362 (SVC-4EMC)
Canada: (800) 543-4782 (543-4SVC)
Worldwide: (508) 497-7901
Sales and Customer Service Contacts

For the list of EMC sales locations, please access the EMC home page
at:
http://www.emc.com/contact/
For additional information on the EMC products and services
available to customers and partners, refer to the EMC Powerlink Web
site at:
http://powerlink.emc.com
Your Comments

Your suggestions will help us continue to improve the accuracy,
organization, and overall quality of the user publications. Please send
a message to techpub_comments@emc.com with your opinions of
this guide.
Symmetrix Hardware
Hardware Components
EMC provides a variety of hardware and software solutions to meet
your business demands. The current Symmetrix hardware platforms
are represented by the half-bay Symmetrix 8230 shown in Figure 1-2,
the single bay Symmetrix 8530 shown in Figure 1-3, and the 3-bay
Symmetrix 8830 shown in Figure 1-4.
This section provides:
◆ Component Overview
◆ Symmetrix Block Diagram
Figure 1-2 Symmetrix 8230 (Interior View)

Figure 1-3 Symmetrix 8530 (Interior View)
Figure 1-4 Symmetrix 8830 With 384 Disk Devices (Interior View)
Symmetrix Block Diagram

Figure 1-5 illustrates the interconnection of the major components of
the Symmetrix 8530 system.
The Symmetrix 8230, 8530, and 8830 systems use the same basic architecture
but provide different numbers of host connections and disk drives, cache
capacities, and so on.
Figure 1-5 Symmetrix 8530 Block Diagram
! CAUTION
Do NOT reset the service processor unless under the
supervision of an EMC Customer Engineer.
Figure 1-6 Symmetrix 8530 Channel Director Operator Panel
Disk Devices
Symmetrix systems use industry-standard SCSI Disk Drive
Assemblies (DDAs) for physical disks. Each DDA is configured with
its own controller consisting of control logic, a microprocessor, and a
device-level buffer.
Open Systems SCSI Disk Emulation

On open systems hosts, the Symmetrix system logical disk volumes
appear to the host as physical disk devices at SCSI target ID/Logical
Unit Number (LUN) addresses. All host logical volume manager
software can be used with Symmetrix disk volumes. When using a
SCSI interface to an open systems processor, the Symmetrix system
appears as standard SCSI disk devices with data stored in fixed-block
architecture (FBA) format.
The following paragraphs describe the SCSI disk format and logical
volume structure.
FBA Data and Command Format

FBA disk devices store data in fixed-size blocks (typically 512 bytes).
Disk devices in Symmetrix systems attached to AS/400 hosts are configured
in 520-byte blocks.
Logical Volume Structure (Open Systems)

The channel directors interact with cache memory. Therefore, there is
no physical meaning to cylinders, tracks, and heads on the
Symmetrix logical volume from the front-end point of view. However,
Symmetrix uses a logical geometry definition for its logical volume
structure. This geometry is reflected in the SCSI mode sense data
available to the host.
Symmetrix uses the following logical volume structure:
◆ Each logical volume has n cylinders
◆ Each cylinder has 15 tracks (heads)
◆ Each track has 64 blocks of 512 bytes
Therefore, a Symmetrix logical volume with n cylinders has a usable
block capacity of:
n * 15 * 64
n for each volume is defined during Symmetrix configuration.
To calculate the size of the logical volume:
Number of cylinders * heads * blocks * 512
(n * 15 * 64 * 512)
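The calculation above can be sketched in plain Python (an illustration only, not part of any EMC tool; the 4,096-cylinder volume used in the example is hypothetical):

```python
# Logical volume geometry described above; n (the cylinder count)
# is defined during Symmetrix configuration.
TRACKS_PER_CYLINDER = 15
BLOCKS_PER_TRACK = 64
BYTES_PER_BLOCK = 512  # open systems FBA block size

def volume_capacity_bytes(cylinders: int) -> int:
    """Usable capacity of a Symmetrix logical volume with n cylinders."""
    return cylinders * TRACKS_PER_CYLINDER * BLOCKS_PER_TRACK * BYTES_PER_BLOCK

# Example: a volume defined with 4,096 cylinders.
print(volume_capacity_bytes(4096))  # 2013265920 bytes (about 2 GB)
```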
IBM DASD Disk Emulation

The Symmetrix system appears to mainframe operating systems as a
3990-6, 3990-3, 3990-2, or 2105 controller. The physical disk devices
can appear to the mainframe operating system as a mix of multiple
3390 and 3380 device types. All models of the 3380 or 3390 volumes
can be emulated up to the physical volume sizes installed. A single
Symmetrix system can simultaneously support both 3380 and 3390
device emulations. Table 1-1 lists the Symmetrix characteristics for
some standard IBM device emulation modes. Symmetrix systems
also support nonstandard device sizes, as long as the cylinder count
does not exceed that of the equivalent IBM device type.
a. Not supported by VM/ESA® 1.2.1 (and 1.2.2). Support of this emulation type depends on the operating
system in use.
Mixed Track Geometries

You can configure a Symmetrix system with both 3380 and 3390 track
geometries on the same disk device. A single disk device may contain
up to 128 logical volumes, totaling a maximum of 8,000 logical
volumes per Symmetrix system.
IBM/PCM Data and Command Formats

All Symmetrix models support the count-key-data (CKD) and
extended count-key-data (ECKD™) format used by IBM 3390 and
3380 DASD. For a full description of the channel command words
(CCW) supported, refer to the IBM 3990 Storage Control Reference or
the IBM 3880 Storage Control Model 13 Description.
Figure 1-7 shows the CKD track format emulated for 3390 and 3380
DASD.
Figure 1-7 Track Format for 3390 and 3380 DASD
Track Format
All tracks are written with formatted records. The start and end of
each track are defined by the index marker. Each track has the same
basic format as that shown in Figure 1-7. That is, it has an index
marker, home address (HA), record zero (R0), and one or more data
records (R1 through Rn). These track formats are discussed in the
following sections.
Track Capacity

Track capacity is the maximum capacity achievable when there is one
physical data record per track formatted without a key. Because the
track can contain multiple data records, additional Address Markers,
Count Areas, and gaps reduce the number of bytes available for data.
The track capacity is the number of bytes left for data records after
subtracting the bytes needed for the home address, record zero,
address marker, count area, cyclic check (for error correction), and the
gaps for one data record.
For 3390 emulations, the track capacity is 56,664 bytes. For 3380
emulations, the track capacity is 47,476 bytes.
Open Systems Connectivity

Symmetrix systems connect to UNIX®, Windows NT®, Linux, and
AS/400® systems, with connectivity to these open systems host
interfaces:
◆ FWD SCSI channels
◆ Ultra FWD SCSI channels
◆ Ultra2 FWD SCSI channels
◆ Fibre Channels (2-port and 8-port)
Channel Director Descriptions

Symmetrix channel directors are single cards that occupy one slot on
the Symmetrix backplane. All channel directors interface to host
channels through interface adapter cards connected to the opposite
side of the backplane.
All channel directors contain two PowerPC 750™ microprocessors,
except the 4-port, 4-processor mainframe serial channel director,
which contains four microprocessors. The channel directors process
data from the host and manage access to cache memory over a 4-bus
memory architecture (Figure 1-5 on page 1-7). Each 4-port,
4-processor, mainframe serial channel director supports 4 concurrent
operations. Each 4-port, 2-processor, mainframe serial channel
director supports 2 concurrent operations, as does each Fibre Channel
director. Ultra SCSI channel directors support 4 concurrent
operations.
The following sections describe the Symmetrix channel directors.
Ultra FWD SCSI High Voltage Differential Directors

The Ultra SCSI High Voltage Differential (HVD) director has four
differential-wide interfaces for connection to host systems and one
high-speed path to cache memory.
Ultra SCSI channel directors support data transfer rates up to
40 MB/s when connected to Ultra SCSI channels and 20 MB/s when
connected to FWD channels. Each Ultra SCSI director can support up
to 960 logical volumes (with a maximum of 3,072 logical volumes per
Symmetrix system).
Ultra2 SCSI Low Voltage Differential Directors

The Ultra2 SCSI Low Voltage Differential (LVD) director has four
differential-wide interfaces for connection to host systems and one
high-speed path to cache memory.
Ultra2 SCSI channel directors support data transfer rates up to
80 MB/s when connected to Ultra2 SCSI channels. Each Ultra2 SCSI
director can support up to 960 logical volumes with a maximum of
4,096 logical volumes per Symmetrix system. This board has a 4K
CRC capability that boosts performance on writes to cache memory.
This board requires a special back adapter and disk midplane.
Fibre Channel Directors

The Fibre Channel director has two FC ANSI-compliant, 1-Gigabit
Fibre Channel interfaces for connection to host systems and one
high-speed path to cache memory. The Fibre Channel director
interfaces to the host channels through 2-port, 4-port, 8-port, and
12-port Fibre Channel interface adapters. The Fibre Channel SCSI
ESCON Mainframe Serial Channel Directors

The Symmetrix mainframe serial channel director processes frames
from the mainframe host and manages access to cache memory. EMC
offers a 4-port, 2-processor (4x2) serial channel director and a 4-port,
4-processor (4x4) serial channel director. Both serial channel directors
support data transfer rates up to 17 MB/s with the host.
4-Port, 2-Processor Serial Channel Director — The 4x2 serial
channel director contains two processors (CPUs) and four interfaces
to host mainframe systems. For example, with six 4x2 serial channel
directors installed, the Symmetrix system could logically have 12
serial channel engines and therefore support 12 concurrent operations.
4-Port, 4-Processor Serial Channel Director — The 4x4 serial
channel director contains four processors (CPUs) and four interfaces
to host mainframe systems. For example, with six 4x4 serial channel
directors installed, the Symmetrix system could logically have 24
serial channel engines and therefore support 24 concurrent operations.
Remote Link Directors

The Remote Link Director (RLD) is an ESCON serial channel director
microcode-configured as the link between Symmetrix units in a
Symmetrix Remote Data Facility (SRDF) or SDMS (Symmetrix Data
Migration Services) configuration. EMC offers the RLD in two
models:
◆ Four-port, 2-processor model
◆ Four-port, 4-processor model
The RLD interfaces to Symmetrix channels via a serial channel
interface adapter. The Remote Link Director supports data transfer
rates up to 17 MB/s. The Symmetrix system requires a minimum of
two and supports up to four RLDs in each Symmetrix unit used in an
SRDF or SDMS configuration.
The Remote Link Director can have some of its four ports configured
for remote link connections and some for serial channel connections
with some restrictions as described in the next paragraphs. Table 1-2
and Table 1-3 on page 1-18 show how each of the RLD models can be
configured.
Four-Port, Two-Processor Configuration — When one of the
processors of a four-port, two-processor (4x2) RLD (two ports on each
processor) is configured as an RLD, only one of the ports on that
processor can be configured as a remote link, while the remaining
port is not used. The second processor can be configured as a serial
director, with both ports being used for serial channels.
If two ports of the four-port, two-processor RLD are needed as
remote links to other Symmetrix units, only one port from each
processor can be used. Table 1-2 shows 3 possible configurations for
the 4x2 RLD model.
Table 1-2 4x2 Remote Link Channel Director Sample Configurations

Possible Configurations    Processor a                Processor b
                           Port A       Port B        Port A       Port B
First Configuration        Serial CD    Serial CD     Serial CD    RLD
Second Configuration       Serial CD    Serial CD     RLD          RLD
Third Configuration        Serial CD    Serial CD     Serial CD    Serial CD

(Serial CD = Serial Channel Director)
Remote Fibre Channel Directors

The Remote Fibre Director (also called the Fibre RA or RAF) is a
2-port or 8-port Fibre Channel director microcode-configured as the
link between Symmetrix units in a Symmetrix Remote Data Facility
(SRDF) configuration. The RAF interfaces to Symmetrix channels
through either a 2-port or 8-port Fibre Channel interface adapter. The
RAF Director supports data transfer rates up to 100 MB/s.
The Symmetrix system requires a minimum of two and supports up
to four RAFs in each Symmetrix unit used in an SRDF configuration.
When one of the processors of an RAF is configured as an RAF, the
second processor can be configured as a Fibre Channel director.
Disk Directors

Symmetrix system disk directors manage the interface to the disk
drive assemblies (DDAs), and are responsible for data movement
between the DDAs and cache memory over a 4-bus memory
architecture (Figure 1-5). The DDAs are connected to disk directors
through industry-standard SCSI interfaces with two PowerPC 750
microprocessors per disk director.
Each disk director provides an alternate path to the disk devices of its
dual-initiator disk director pair. That is, if the primary path through a
disk director to a disk device fails, the Symmetrix system accesses that
device through the other disk director in the disk director pair.
ICDA Operation
Intelligent cache configurations allow Symmetrix systems to transfer
data at electronic memory speeds that are much faster than physical
disk speeds. Symmetrix products are based on the principle that the
working set of data at any given time is relatively small when
compared to the total subsystem storage capacity. When this working
set of data is in cache, there is a significant improvement in
performance. The performance improvement achieved depends on
both of the following principles:
Locality of Reference — If a given piece of information is used, there
is a high probability that a nearby piece of information will be used
shortly thereafter.
Data Reuse — If a given piece of information is used, there is a high
probability that it will be reused shortly thereafter.
These cache principles have been used for years on host systems
(CPU and storage devices). Figure 2-1 illustrates this type of host
cache memory use. The cache memory used in this manner is often a
high-speed, high-cost storage unit that functions as an intermediary
between the CPU and main storage.
Figure 2-1 Host Cache Use
Symmetrix Cache Management

Symmetrix systems use the same cache memory principle as host
systems, but with enhanced caching techniques. Figure 2-2 illustrates
cache use in Symmetrix systems.
Figure 2-2 Symmetrix Cache Management and Data Flow
LRU Algorithm

Figure 2-3 illustrates the data flow with the LRU algorithm. Each time
a read hit or write hit occurs, the Symmetrix system marks that cache
slot as most recently used and promotes it to the top of the LRU list.
For each write, a written-to flag is set on the initial write to each cache
block and is cleared when the cache block is destaged. The LRU cache
slot appears at the bottom of the LRU list. Symmetrix cache can be
subdivided into multiple LRU lists.
Figure 2-3 LRU and Age Link Chain Data Flow
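The promotion and written-to flag behavior described above can be sketched in Python (a minimal illustration, not the actual Symmetrix microcode; the slot names are hypothetical):

```python
from collections import OrderedDict

class LRUSlots:
    """Toy model of an LRU list of cache slots with written-to flags."""

    def __init__(self):
        self.slots = OrderedDict()  # track -> written-to flag

    def access(self, track, write=False):
        written = self.slots.pop(track, False)
        # A hit promotes the slot to the top (most recently used end).
        self.slots[track] = written or write

    def oldest(self):
        # The LRU slot appears at the bottom of the list.
        return next(iter(self.slots))

    def destage(self, track):
        self.slots[track] = False  # destaging clears the written-to flag

cache = LRUSlots()
for t in ("t1", "t2", "t3"):
    cache.access(t)
cache.access("t1", write=True)  # write hit promotes t1 and flags it
print(cache.oldest())           # t2 is now the least recently used slot
```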
Prefetch Algorithm

Symmetrix systems continually monitor I/O activity and look for
access patterns. When the second sequential I/O to a track occurs, the
sequential prefetch process is invoked and the next track of data is
read into cache. The intent of this process is to avoid a read miss.
When the host processor returns to a random I/O pattern, the
Symmetrix system discontinues the sequential process.
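The sequential-detection idea above can be illustrated with a short sketch (the function and track numbers are hypothetical, not EMC microcode):

```python
def plan_prefetch(io_tracks):
    """Yield (track, prefetch_next_track) for a stream of track numbers."""
    prev = None
    for track in io_tracks:
        # The second sequential I/O to consecutive tracks invokes prefetch
        # of the following track; a random jump discontinues it.
        sequential = prev is not None and track == prev + 1
        yield track, sequential
        prev = track

# Sequential run triggers prefetch; the jump to track 40 stops it.
print([p for _, p in plan_prefetch([10, 11, 12, 40])])
# [False, True, True, False]
```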
I/O Response Time - Mainframe Environment

In the mainframe environment, I/O response time can be divided
into a queuing time, a pend time, a connect time, and a disconnect
time, as shown in Figure 2-4.
The Queuing Time is the time the I/O request spends on the I/O
Supervisor (IOS) queue waiting for the next event.
The Pend Time consists of:
◆ Control Unit Busy (CUB)
◆ Device Busy (DB)
◆ Director Port Busy
The Connect Time is the length of time the channel processes
commands and transfers data.
The Disconnect Time is:
◆ The length of time it takes to retrieve data from the physical disk
(device seek and latency)
◆ The length of time it takes to reconnect to the host
◆ SRDF write overhead (protocol, line latency, and so on)
I/O Response Time - Open Systems Environment

In the open systems environment, I/O response time can be divided
into a host queuing time, a command connect time, a disconnect time,
and a data connect time, as shown in Figure 2-5.
The Host Queuing Time is the time the request is in the host queue
before it is dispatched on the SCSI bus.
The Command Connect Time is the length of time the channel is
transferring a SCSI command.
The Disconnect Time is the length of time involving device seek and
latency. During this time the SCSI bus can be used by other devices.
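The decomposition above is additive, as a quick sketch shows (the millisecond values are made-up sample numbers, not measured Symmetrix figures):

```python
# Open systems I/O response time components named above, in ms.
components = {
    "host_queuing": 0.5,     # time on the host queue before dispatch
    "command_connect": 0.1,  # SCSI command transfer on the bus
    "disconnect": 6.0,       # device seek and latency (bus free for others)
    "data_connect": 0.4,     # data transfer back to the host
}
response_time = sum(components.values())
print(f"{response_time:.1f} ms")  # 7.0 ms
```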
Symmetrix I/O Operations

There are four basic types of Symmetrix I/O operations (Figure 2-6
on page 2-8):
◆ Read hit
◆ Read miss
◆ Fast write
◆ Delayed fast write
The Symmetrix system performs read operations from cache and
always caches write operations. This cache operation is transparent to
the host operating system. A read operation causes the channel
director to scan the cache directory for the requested data. If the
requested data is in cache, the channel director transfers this data
immediately to the channel with a channel end and device end (or a
SCSI good ending status). If the requested data is not in cache, the
disk director transfers the data from the disk device to cache, and the
channel director transfers the data from cache to the channel.
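The directory scan just described can be modeled in a few lines (a simplified illustration under assumed names, not the actual channel director logic):

```python
def read(track, cache, disk):
    """Serve a read from cache, staging from disk on a miss."""
    if track in cache:
        return cache[track], "hit"    # read hit: transfer immediately
    cache[track] = disk[track]        # read miss: disk director stages
    return cache[track], "miss"       # the track into cache first

disk = {5: b"payload"}
cache = {}
print(read(5, cache, disk)[1])  # miss (retrieved from the disk device)
print(read(5, cache, disk)[1])  # hit  (data now resides in cache)
```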
Figure 2-6 Symmetrix I/O Operations

Figure 2-7 Destaging Operation
Read Operations

There are two types of read operations: read hit and read miss.
Figure 2-8 illustrates the data flow for read operations.
Figure 2-8 Read Operations
Read Hit

In a read hit operation (Figure 2-8), the requested data resides in
cache. The channel director transfers the requested data through the
channel interface to the host and updates the cache directory. Since
the data is in cache, there are no mechanical delays due to seek,
latency, and Rotational Position Sensing (RPS) miss (Figure 2-9).
Figure 2-9 Read Hit
Read Miss

In a read miss operation (Figure 2-8), the requested data is not in
cache and must be retrieved from a disk device. While the channel
director creates space in the cache, the disk director reads the data
from the disk device. The disk director stores the data in cache and
updates the directory table. The channel director then reconnects
with the host and transfers the data. If the requested data is in the
process of being prefetched (sequential read ahead), the miss is
considered to be a short miss. If the requested data is not in the
process of being read into cache, the disk director requests the data
from the drive. This miss is considered to be a long miss. Because the
data is not in cache, the Symmetrix system must search for the data
on disk and then transfer it to the channel. This adds seek and latency
times to the operation (Figure 2-10). During the disconnect time,
other commands can be executed on other devices on the bus, or
commands can queue to the same device.
Figure 2-10 Read Miss
Write Operations

Symmetrix system write operations occur as either fast write or
delayed fast write operations (Figure 2-6 on page 2-8).
Fast Write

A fast write occurs when the percentage of modified data in cache is
less than the fast write threshold. On a host write command, the
channel director places the incoming block(s) directly into cache.
For fast write operations (Figure 2-11), the channel director stores the
data in cache and sends a channel end and device end (or a SCSI
good ending status) to the host computer. The disk director then
asynchronously destages the data from cache to the disk device.
Figure 2-11 Write Operations
Because Symmetrix systems write the data directly to cache and not
to disk, there are no mechanical delays due to seek, latency, and RPS
miss (Figure 2-12).
Figure 2-12 Fast Write
Delayed Fast Write

A delayed fast write occurs only when the fast write threshold has
been exceeded. That is, the percentage of cache containing modified
data is higher than the fast write threshold. If this situation occurs,
the Symmetrix system disconnects the channel directors from the
channels.
The disk directors then destage the Least Recently Used data to disk.
When sufficient cache space is available, the channel directors
reconnect to their channels and process the host I/O request as a fast
write (Figure 2-13). The Symmetrix system continues to process read
operations during delayed fast writes. With sufficient cache present,
this type of cache operation rarely occurs.
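The threshold decision described in the last two sections can be sketched as follows (the 80% threshold is a made-up example value, not an EMC parameter):

```python
FAST_WRITE_THRESHOLD = 0.80  # fraction of cache holding modified data

def classify_write(modified_slots, total_slots):
    """Classify an incoming write per the fast write threshold."""
    if modified_slots / total_slots < FAST_WRITE_THRESHOLD:
        return "fast write"       # store in cache, destage asynchronously
    # Threshold exceeded: destage LRU data first, then process as fast write.
    return "delayed fast write"

print(classify_write(600, 1000))  # fast write
print(classify_write(900, 1000))  # delayed fast write
```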
Figure 2-13 Delayed Fast Write
Fast Write Capabilities

Symmetrix systems cache write operations, eliminating the need to
write data to the disk immediately. This capability results in faster
response times and improved overall subsystem performance.
Channel directors and disk directors dynamically allocate cache
space between reads and writes, depending on I/O activity.
Multiple Channel Directors
The Symmetrix system contains multiple channel directors, each supplying an independent path to cache from the host system. The channel directors support connectivity to either mainframe systems or open systems hosts. Following is a list of the Symmetrix channel directors:
◆ Serial channel directors (mainframe hosts)
◆ Ultra SCSI or Ultra2 SCSI channel directors (open systems hosts)
◆ Fibre Channel directors (open systems hosts)
◆ Remote link channel directors used with SRDF and SDMS
For detailed information on each Symmetrix channel director, refer to
Directors and Cache Cards on page 1-14.
Symmetrix Host Connectivity
Symmetrix systems support open systems host connectivity to UNIX, Windows NT and Windows 2000, and AS/400 hosts through FWD SCSI, Ultra SCSI, Ultra2 SCSI, and Fibre Channel. Symmetrix systems support mainframe host connectivity through ESCON channels.
Parallel Processing
Each channel director and disk director contains two resident microprocessors, and each disk device contains one resident microprocessor. These microprocessors use advanced parallel processing to reduce processing time and improve throughput.
RPS Miss Elimination
In Symmetrix systems, each disk device contains a dedicated microprocessor and segmented data buffer that can temporarily store data until the disk director is ready to read or write data. This eliminates the rotational position sensing (RPS) misses that occur in conventional DASD when the heads are positioned over the desired sector, but the channel path is not ready for read or write operations. The segmented data buffer of the disk device allows multiple operations to occur to the head/disk assemblies.
Channel Speeds and Cable Lengths
Symmetrix system channel speeds (data transfer rates) and cable lengths vary according to the type of channel director. This section describes the data transfer rates and supported cable lengths for the different channel directors.
Ultra SCSI and Ultra2 SCSI Channels
Ultra SCSI and Ultra2 SCSI channel directors support the following data transfer rates:
◆ Ultra SCSI channel directors — up to 40 MB/s when connected to Ultra SCSI channels
◆ Ultra2 SCSI channel directors — up to 80 MB/s when connected to Ultra2 SCSI channels
◆ Ultra2 SCSI channel directors — 20 MB/s when connected to FWD SCSI channels
The data transfer rate is host dependent.
Cable Lengths
Symmetrix supports cable lengths of up to 82 feet (25 meters) to
connect to most SCSI host systems, up to 62 feet (19 meters) when
attaching to most Ultra SCSI host systems, and up to 39.37 feet
(12 meters) when attaching to most Ultra2 SCSI host systems.
For information on SCSI host adapters and cable requirements for your
Symmetrix system, consult your EMC sales representative.
Fibre Channels
Fibre Channels transfer data at speeds of up to 100 MB/s. Symmetrix units support cable lengths from 16 feet (5 meters) to 1,640 feet (500 meters).
Memory Requirements
Your system must be configured with more than the minimum (base) amount of memory for it to use part of that memory as PermaCache. To determine the minimum amount of memory for your Symmetrix configuration, consult your EMC sales representative.
Power Failures
If a power failure occurs, records that have been updated in PermaCache will be destaged to disk.
Symmetrix Hyper-Volumes
Symmetrix Hyper-Volumes provide configuration flexibility by
allowing one physical device to be split into two or more logical
volumes. When splitting a single physical device into multiple logical
volumes, Symmetrix systems allow up to 128 logical volumes to
reside on one physical volume.
Configuration requirements for Symmetrix systems vary according to
the applications used. To configure logical volumes for optimum
Symmetrix system performance, consult your EMC Systems
Engineer.
Example
For example, if the logical-to-physical ratio chosen is 8:1, the logical volume mapping that occurs is similar to that shown in Figure 3-1.
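The arithmetic behind hyper-volume splitting is straightforward and can be sketched as below. The 36 GB drive size and the function name are illustrative assumptions; only the 128-volume limit comes from the text above.

```python
# Sketch of hyper-volume splitting: one physical device divided into
# equal logical volumes. Symmetrix allows up to 128 logical volumes
# per physical volume; sizes here are assumed for illustration.

def split_hyper_volumes(physical_gb, ratio):
    """Return the logical volume sizes for a given logical-to-physical ratio."""
    if not 1 <= ratio <= 128:
        raise ValueError("Symmetrix supports up to 128 logical volumes per physical volume")
    return [physical_gb / ratio] * ratio

# An 8:1 logical-to-physical ratio on an assumed 36 GB drive:
volumes = split_hyper_volumes(36, 8)
print(len(volumes), volumes[0])  # 8 logical volumes of 4.5 GB each
```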
Meta Volume Size Requirements
Symmetrix meta volumes can contain up to 255 devices and up to 3.825 terabytes of storage. Meta volumes can be composed of non-sequential and non-adjacent volumes.
Accessing Data in a Meta Volume
You can address data contained in a meta volume in two different ways:
◆ Concatenated volumes
◆ Striped data
Concatenated Volumes
Concatenated volumes are volume sets that are organized with the first byte of data at the beginning of the first volume (Figure 3-2). Addressing continues to the end of the first volume before any data on the next volume is referenced. When writing to a concatenated volume, the first slice of a physical disk device is filled, then the second, and so on, through subsequent physical disk devices.
Striped Data
Meta volume addressing by striping also joins multiple slices to form a single volume. However, instead of using sequential address space, striped volumes use addresses that are interleaved between slices (Figure 3-3). In data striping, equal-size stripes of data from each participating drive are written alternately to each member of the set.
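The two addressing schemes above can be sketched as simple block-number mappings. The block-level model, member sizes, and function names are assumptions for illustration, not the actual Symmetrix addressing scheme.

```python
# Sketch of meta volume addressing: concatenation fills member volumes
# in order, while striping interleaves fixed-size stripes round-robin
# across members. All sizes are assumed for illustration.

def concatenated_address(block, member_blocks):
    """Map a meta volume block number to (member index, offset within member)."""
    member, offset = divmod(block, member_blocks)
    return member, offset

def striped_address(block, stripe_blocks, members):
    """Interleave stripes alternately across the member volumes."""
    stripe, offset = divmod(block, stripe_blocks)
    member = stripe % members
    offset += (stripe // members) * stripe_blocks
    return member, offset

# Four members of 1,000 blocks each; 10-block stripes (assumed sizes):
print(concatenated_address(2500, 1000))  # (2, 500): third member, halfway in
print(striped_address(25, 10, 4))        # (2, 5): third member, first stripe row
```

Note how nearby logical blocks land on different members under striping, which is what spreads sequential I/O across the set.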
Data Management
This chapter discusses the Symmetrix features and options that affect
data availability and reliability.
The Symmetrix system has many features and options to ensure a
high degree of system and data availability. Many of these features
and options are built into the Symmetrix design. Other availability
options may be purchased separately and implemented into the
Symmetrix system operation.
◆ Symmetrix Reliability and Availability Features...........................4-2
◆ Data Protection Options....................................................................4-5
On-line SCSI-to-Fibre Channel Migration
Symmetrix systems configured with SCSI channel directors can be upgraded to Fibre Channel directors without taking non-SCSI channels off line and without requiring a backup and restore of data. This capability allows customers with SCSI channels to take advantage of the connectivity and distance features offered with Fibre Channel directors. EMC Customer Engineers use a utility to perform the migration.
For more information on Fibre Channel migration, consult your EMC sales
representative.
Symmetrix Data Integrity Protection Features
The Symmetrix system is designed with these data integrity features:
◆ Error checking, correction, and data integrity protection
◆ Disk error correction and error verification
◆ Cache error correction and error verification
◆ Periodic system checks
Error verification prevents temporary errors from accumulating and
resulting in permanent data loss. Symmetrix systems also evaluate
the error verification frequency as a signal of a potentially failing
component.
The periodic system check tests all components as well as microcode
integrity. Symmetrix systems report errors and environmental
conditions to the host system as well as the EMC Customer Support
Center.
Disk Error Correction and Error Verification
The disk directors use idle time to read data and check the polynomial correction bits for validity. If a disk read error occurs, the disk director reads all data on that track into Symmetrix cache memory. The disk director writes several worst-case patterns to that track, searching for media errors.
When the test completes, the disk director rewrites the data from
cache memory to the disk device, verifying the write operation. The
disk microprocessor maps around any bad block (or blocks) detected
during the worst-case write operation, thus skipping defects in the
media. If necessary, the disk microprocessor can reallocate up to 32
blocks of data on that track. To further safeguard the data, each disk
device has several spare cylinders available. If the number of bad
blocks per track exceeds 32 blocks, the disk director rewrites the data
to an available spare cylinder. This entire process is called error
verification.
The disk director increments a soft error counter with each bad block
detected. When the internal soft error threshold is reached, the
Symmetrix service processor automatically dials the EMC Customer
Support Center and notifies the host system of errors through sense
data. The Symmetrix system also invokes Dynamic Sparing (if the
Dynamic Sparing option is enabled). This feature maximizes data
availability by diagnosing marginal media errors before data
becomes unreadable.
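The error verification bookkeeping described above can be sketched as follows. The soft error threshold value and all names are assumptions for illustration; only the 32-block remap limit and the dial-home behavior come from the text.

```python
# Sketch of disk error verification: remap up to 32 bad blocks on a
# track, fall back to a spare cylinder beyond that, and dial home once
# the soft error counter crosses an internal threshold (value assumed).

SPARE_CYLINDER_LIMIT = 32    # blocks the microprocessor can remap per track
SOFT_ERROR_THRESHOLD = 100   # assumed internal dial-home threshold

class DiskDirectorSketch:
    def __init__(self):
        self.soft_errors = 0
        self.dialed_home = False

    def verify_track(self, bad_blocks):
        """Run error verification on one track and return the action taken."""
        self.soft_errors += bad_blocks
        if self.soft_errors >= SOFT_ERROR_THRESHOLD:
            self.dialed_home = True  # notify the EMC Customer Support Center
        if bad_blocks == 0:
            return "ok"
        if bad_blocks <= SPARE_CYLINDER_LIMIT:
            return "remap blocks"    # map around the bad blocks on the track
        return "rewrite to spare cylinder"

d = DiskDirectorSketch()
print(d.verify_track(3))   # remap blocks
print(d.verify_track(40))  # rewrite to spare cylinder
```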
Cache Error Correction and Error Verification
The disk directors use idle time to periodically read cache memory, correct single-bit errors (one hard and one soft), and write the corrected data back to cache memory. This process is called error verification. When the directors detect an uncorrectable error in cache memory, Symmetrix reads the data from disk and takes the defective cache memory block offline until an EMC Customer Engineer can repair it.
Error verification prevents bit errors from accumulating in cache memory, significantly reducing the probability of encountering an uncorrectable error and thereby maximizing data availability.
In the mainframe host environment, Symmetrix reports uncorrectable bit errors as Equipment Checks to the CPU. These errors appear in the IBM EREP file.
For information about the mirroring and BCV devices, refer to Chapter 6,
EMC TimeFinder Operations.
Symmetrix Remote Data Facility (SRDF)
The Symmetrix Remote Data Facility (SRDF) provides an automatic information protection and business continuance solution for mainframe and open systems hosts. SRDF offers host-independent data storage that duplicates production (source) site data at a logical volume level to a recovery (target) site, transparently to users, applications, databases, and host processors.
When a primary (source) device is down, SRDF enables fast
switchover to the secondary (target) copy data so that critical
information is again available in minutes. SRDF provides complete
business continuance capability during the unlikely event of a data
center disaster or during planned events such as daily backups,
scheduled maintenance, and data center migrations or
consolidations. With SRDF, the Symmetrix systems can be as near as
adjacent to one another or hundreds of miles apart.
In either case, the same information protection capabilities are
provided. After a system event, SRDF can resynchronize data to the
source or to the target system at the user’s discretion, thereby
ensuring information and database consistency.
Dynamic Sparing
Dynamic Sparing is another data protection option that you can use in conjunction with mirroring, RAID-S (36 GB drives only), or SRDF. Dynamic Sparing limits the exposure after a drive failure and before drive replacement.
Dynamic Sparing sequence: data volume D1 protected by a dynamic spare (DS); D1 failing, dynamic spare invoked; DS mirrors D1; failed disk replaced and new disk restored as D1; DS returns to the spares pool.
RAID-S Option (Business Online)
The RAID-S option (available on 36 GB disk devices only) provides highly available data with good performance and higher usable storage capacity. The data protection feature is based on a 3+1 volume configuration (three data volumes to one parity volume).
RAID-S Technology
RAID-S employs the same technique for generating parity information as many other commercially available RAID solutions: the Boolean EXCLUSIVE OR (XOR) operation. However, Symmetrix RAID-S reduces the overhead associated with parity computation by moving the operation from controller microcode to the hardware on the XOR-capable disk drives. Additional XOR hardware assist built into the Symmetrix cache memory boards further distributes the XOR function throughout the system to improve performance in the regeneration mode of operation (Figure 4-2 on page 4-9).
The RAID-S data protection feature for all Symmetrix systems
achieved the RAID Advisory Board’s (RAB) second highest data
availability and protection classification — Failure Tolerant Disk
System Plus (FTDS+). All Symmetrix systems with SRDF and RAID-S
achieved the RAID Advisory Board’s highest availability and
protection classification — Disaster Tolerant Disk Systems Plus
(DTDS+).
XOR parity example: data volumes A = 1111, B = 1001, C = 1100; ABC parity = A XOR B XOR C = 1010.
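The parity arithmetic above can be verified directly: parity is the bitwise XOR of the three data volumes, and any one lost volume can be regenerated by XORing the two survivors with the parity.

```python
# RAID-S parity arithmetic from the example above: 3 data volumes + 1
# parity volume. The variable names are illustrative.

a, b, c = 0b1111, 0b1001, 0b1100
parity = a ^ b ^ c
print(f"{parity:04b}")  # 1010, matching the example

# Regeneration mode: rebuild volume B after a failure from the
# surviving data volumes and the parity volume.
regenerated_b = a ^ c ^ parity
print(f"{regenerated_b:04b}")  # 1001, the original contents of B
```

This is why a 3+1 group survives the loss of any single volume: XOR is its own inverse.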
Data Protection Flexibility
RAID-S offers more usable capacity than a mirrored system containing the same number of disk drives. Like the mirroring or Dynamic Sparing options, RAID-S parity protection can be dynamically added or removed. For example, for higher performance requirements and high availability, parity protection on a RAID-S group can be turned off and the volumes in the RAID-S group mirrored. Within the same Symmetrix system, data can be protected through RAID-S, mirroring, and SRDF. Dynamic Sparing can be added to any of these data protection options.
RAID-S Components
A RAID-S group consists of the physical disk devices within the Symmetrix unit that are related to one another for common parity protection. The RAID-S group is defined by the EMC Customer Engineer at the time Symmetrix is installed, and includes disk volumes that are designated as either data volumes or parity volumes.
Introduction to SRDF
What Is SRDF?
Symmetrix Remote Data Facility (SRDF™) is a Symmetrix-based
business continuance and disaster recovery solution sold as a
separate license by EMC.
In simplest terms, SRDF is a configuration of Symmetrix units, the
purpose of which is to maintain multiple, real-time copies of logical
volume data in more than one location. The Symmetrix units can be
in the same room, in different buildings within the same campus, or
hundreds of kilometers apart.
By maintaining real-time copies of data in different physical
locations, SRDF enables you to perform the following operations
with minimal impact on normal business processing:
◆ Disaster recovery
◆ Recovery from planned outages
◆ Remote backup
◆ Data center migration
◆ Data Replication and Mobility
Figure 5-1 Basic SRDF configuration: local host and Symmetrix A (R1 volumes) at Site A (production), connected by SRDF links to remote host and Symmetrix B (R2 volumes) at Site B (backup).
SRDF Configuration Using Switched Fibre Channel
Figure 5-2 shows a more complex switched SRDF configuration using Fibre Channel switches connected through E_Ports. Note that in this configuration, multiple primary (source) R1 devices are remotely connected through a storage area network (SAN) to multiple secondary (target) R2 devices.
Figure 5-2 Switched SRDF With Multiple Primary and Secondary Devices
Maximum distances per cable segment:
◆ SRDF over ESCON, direct fiber, multimode: 3 km with 62.5/125 µm cable; 2 km with 50/125 µm cable
◆ SRDF over Fibre Channel, direct fiber, multimode: 500 m with 50/125 µm cable; 300 m with 62.5/125 µm cable
Monitoring and Controlling SRDF
An EMC representative installs and initially configures SRDF at your site using the Symmetrix service processor.
After SRDF is up and running, you can monitor and control its operation by purchasing the appropriate EMC ControlCenter Base Component software and the appropriate SRDF or TimeFinder™ Control Option software from EMC.
For information about EMC ControlCenter software, contact your
EMC representative.
Primary (Source) Volumes
Primary (source) volumes contain production data that is mirrored in a different Symmetrix unit. Primary or source volumes are also referred to as R1 volumes. Updates to a primary volume are automatically copied to a secondary (target) volume in the other Symmetrix unit.
Primary volumes can be locally protected by:
◆ A dynamic spare (refer to Dynamic Spares on page 5-7 for more
information)
◆ Conventional mirroring (the primary volume is then referred to
as a mirrored pair)
◆ RAID-S protection (the primary volume is a RAID-S data volume
array)
In addition, a primary volume can be paired with a Business
Continuance Volume (BCV) to provide an additional working copy of
the data at the same location.
Secondary (Target) Volumes
Secondary (target) volumes contain a mirrored copy of data from a primary volume. Secondary (target) volumes are also referred to as R2 volumes.
As with primary volumes, secondary volumes can be locally
protected by:
◆ A dynamic spare
◆ Conventional mirroring (the secondary volume is then referred to
as a mirrored pair)
◆ RAID-S protection (the secondary volume is actually a RAID-S
data volume array)
A secondary volume can also be paired with a BCV to provide an
additional working copy of the data at the same location.
Dynamic Spares
Typically, a disk drive does not fail suddenly; it fails over a period of time during which it gives clues that it is failing. The Symmetrix system monitors the performance of all its disk drives and can detect when a disk is failing.
A Symmetrix unit can be configured with physical disk devices
known as dynamic spares, or hot spares. As the name suggests, the
purpose of a dynamic spare is to take the place of a failed or failing
disk device. If a disk drive begins to fail, the Symmetrix system
automatically invokes the dynamic spare.
When a dynamic spare is invoked, it contains no data; the data from
the failed disk must be transferred to the dynamic spare. To do this, a
mirror disk (a disk containing a copy of the data on the failed disk) is
used. The Symmetrix system copies the data from the next available
mirror — in other words, if there is a local mirror (a mirror disk within
the same Symmetrix unit), the Symmetrix system uses the local
mirror. If there is only a remote mirror, the Symmetrix system uses
the remote mirror.
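The copy-source selection described above (prefer a local mirror, fall back to a remote one) can be sketched as a simple lookup. The function name and the dictionary shape are illustrative assumptions.

```python
# Sketch of dynamic spare invocation: the invoked spare holds no data,
# so it is filled from the next available mirror, preferring a local
# mirror (within the same Symmetrix unit) over a remote one.

def spare_copy_source(mirrors):
    """Choose where to copy data from when a dynamic spare is invoked.

    `mirrors` maps a mirror location ("local" or "remote") to whether
    that mirror is currently available.
    """
    if mirrors.get("local"):
        return "copy from local mirror"
    if mirrors.get("remote"):
        return "copy from remote mirror"
    return "no mirror available"

print(spare_copy_source({"local": True, "remote": True}))   # local preferred
print(spare_copy_source({"local": False, "remote": True}))  # falls back to remote
```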
Meta Devices
In Enginuity microcode version 5265 and later, you can create a logical device that spans multiple physical devices. This device is known as a meta device.
Windows NT® supports only logical drives A through Z. If you have
more than 23 physical devices (drives A and B are typically reserved
and C is the boot drive), you can create logical drives that span
multiple physical devices to take advantage of all potential storage.
For example, you could create a drive F that spans two or more
physical devices.
In an SRDF configuration, you can create meta devices for both
primary and secondary volumes.
! CAUTION
Make sure you fully comprehend this section before attempting
any SRDF operations.
Internal State Layer
This section lists the substates in which a source (R1) or target (R2) volume can be for SRDF operations.
Source (R1) Volume States
A source (R1) volume can be in one of the states listed below for SRDF operations:
◆ Read/Write (RW)
In this state, the source (R1) volume is available for read/write
operations. This is the default source (R1) volume state.
◆ Not Ready (NR)
If the source (R1) volume fails, the host continues to see that
volume as available for read/write operations as all reads and/or
writes continue uninterrupted with the target (R2) volume in that
remotely mirrored pair.
Target (R2) Volume States
A target (R2) volume can be in one of the three states listed below for SRDF operations:
◆ Not Ready (NR)
In this state, the target (R2) volume responds intervention
required/unit not ready to the host for all read and write
operations to that volume. The target (R2) volume is unable to
perform any SRDF operations when in this state.
◆ Write-Disabled (RO)
In this state, the target (R2) volume is available for read-only
operations. This is the default target (R2) volume state.
◆ Read/Write (RW)
In this state, the target (R2) volume is available for read/write
operations.
In normal SRDF operation, a target (R2) volume should only change state
between read-only and read/write.
External State Layer
This section lists the substates in which a source (R1) or target (R2) volume can be for host operations. These represent the channel interface states.
Source (R1) Volume States
A source (R1) volume can be in one of the three states listed below. This state is seen by the host connected to the Symmetrix unit in which that volume resides.
◆ Write-Enabled (RW)
In this state, the source (R1) volume is online to the host and
available for read/write operations. This is the default source
(R1) volume state.
◆ Write-Disabled (RO)
In this state, the source (R1) volume responds device
write-protected to the host for all write operations to that
volume.
◆ Not Ready (NR)
In this state, the source (R1) volume responds intervention
required/unit not ready to the host for all host accesses to that
volume.
Target (R2) Volume States
A target (R2) volume can be in one of the following three states. This state is seen by the host connected to the Symmetrix unit in which that volume resides.
◆ Write-Enabled (RW)
In this state, the target (R2) volume is available for read/write
operations. This is the default target (R2) volume state.
Note that since the default SRDF substate is read-only (RO), the overall
default target (R2) volume device state is read-only (RO).
◆ Write-Disabled (RO)
In this state, the target (R2) volume is not available for write
operations from any host that can access that volume.
◆ Not Ready (NR)
In this state, the target (R2) volume responds intervention
required/unit not ready to the host for all host accesses to that
volume.
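The two-layer state model above can be sketched as a small validity check: each SRDF volume carries an internal (SRDF) substate and an external (host/channel) substate. The state names follow the text; the checker itself is an illustrative assumption, not a Symmetrix interface.

```python
# Sketch of the SRDF two-layer volume state model described above.

INTERNAL_STATES = {
    "R1": {"RW", "NR"},        # read/write (default), not ready
    "R2": {"NR", "RO", "RW"},  # not ready, write-disabled (default), read/write
}
EXTERNAL_STATES = {"RW", "RO", "NR"}  # write-enabled, write-disabled, not ready

def valid_state(volume_type, internal, external):
    """Check whether an (internal, external) substate pair is defined."""
    return internal in INTERNAL_STATES[volume_type] and external in EXTERNAL_STATES

print(valid_state("R1", "RW", "RW"))  # True: the default source volume state
print(valid_state("R1", "RO", "RW"))  # False: RO is not an internal R1 substate
```

Since the default R2 internal substate is RO, the overall default target device state is read-only, as the note above points out.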
Host Accessibility
The tables in this section describe the accessibility state of the source (R1) and target (R2) volumes to the host connected to the Symmetrix system containing the source (R1) volumes.
Host access to a particular Symmetrix volume depends on both the
host and SRDF view of that volume’s state. Table 5-2 lists the
accessibility for a source (R1) volume. Table 5-3 lists the accessibility
for a target (R2) volume.
SYNC
This attribute sets the remotely mirrored pair to the synchronous state.
In this state, the Symmetrix unit informs the host with access to that
volume that an I/O sequence has successfully completed only after
the Symmetrix unit containing the target (R2) volume acknowledges
that it has received and checked the data.
In an open systems environment, the Symmetrix unit containing the
source (R1) volume handles each I/O command separately and
informs the host of successful completion when the Symmetrix unit
containing the target (R2) volume checks and acknowledges the
receipt of the data.
This state ensures full synchronization between the source (R1) and
target (R2) volumes prior to the start of a new I/O sequence.
SEMI-SYNC
This attribute sets the remotely mirrored pair to the semi-synchronous state. In this state, the Symmetrix unit containing the source (R1)
volume informs the host of successful completion of the I/O
operation when it receives the data. The remote link director (RLD)
transfers each write to the target (R2) volume as the link path
becomes available. The Symmetrix unit containing the target (R2)
volume checks and acknowledges the receipt of each write. If a new
write is started for a source (R1) volume before the previous write has
completed to the target (R2) volume, the Symmetrix unit containing
the source (R1) volume temporarily disconnects from the host. Once
the previous write operation is completed and acknowledged from
the other Symmetrix unit, it then reconnects to the host and continues
processing.
This state is typically used in Extended Distance configurations to
minimize the impact on performance. It is also used in situations
where the Symmetrix unit containing the source (R1) volume needs
high performance but can tolerate a one I/O gap in data
synchronization.
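The contrast between the two modes can be sketched as the ordered events for one host write. The event strings and function are illustrative assumptions; the ordering follows the text: synchronous mode signals completion only after the remote acknowledgment, while semi-synchronous mode signals completion on local receipt and tolerates a one-I/O gap.

```python
# Sketch of SRDF write completion ordering in SYNC vs. SEMI-SYNC mode.

def write_io(mode, previous_write_outstanding):
    """Return the events, in order, for one host write (illustrative)."""
    if mode == "sync":
        return ["receive data locally",
                "transfer to target (R2)",
                "remote acknowledge",
                "signal host: I/O complete"]
    if mode == "semi-sync":
        events = []
        if previous_write_outstanding:
            # Temporarily disconnect until the prior write is acknowledged:
            events.append("wait for previous write to complete")
        events += ["receive data locally",
                   "signal host: I/O complete",
                   "transfer to target (R2) when link available"]
        return events
    raise ValueError(mode)

print(write_io("semi-sync", previous_write_outstanding=False)[1])
# the host sees completion before the remote transfer happens
```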
Adaptive Copy - Write Pending (AW)
This attribute, when set, affects the synchronized state of the remotely mirrored pair. When this attribute is enabled, the Symmetrix unit acknowledges all writes to the source (R1) volume as if it were a local volume. The new data accumulates in cache until it is successfully written to the source (R1) volume and the RLD has transferred the write to the target (R2) volume. This attribute also has a user-configurable skew (write-pending) threshold that, when exceeded, causes the remotely mirrored pair to operate in the predetermined SRDF state (synchronous or semi-synchronous) while this mode is in effect. As soon as the number of write pendings drops below this value, the remotely mirrored pair reverts to the AW state. The skew is configured at the volume level and may be set to a value between 1 and 65,535.
Adaptive Copy - Disk (AD)
This attribute, when set, affects the synchronized state of the remotely mirrored pair. When this attribute is enabled, the Symmetrix unit acknowledges all writes to source (R1) volumes as if they were local volumes. New data accumulates as invalid tracks on the source (R1) volume for subsequent transfer to the target (R2) volume. The RLD transfers each write to the target (R2) volume whenever a link path becomes available. This attribute also has a user-configurable skew (maximum number of invalid tracks) threshold that, when exceeded, causes the remotely mirrored volume to operate in the predetermined SRDF state (synchronous or semi-synchronous) while this mode is in effect. As soon as the number of invalid tracks for a volume drops below this value, the remotely mirrored pair reverts to AD mode. The skew is configured at the Symmetrix level and may be set to a value between 1 and 9,999.
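The skew behavior shared by both adaptive copy modes can be sketched as below. The function, the default predetermined state, and the backlog model are illustrative assumptions; the skew ranges come from the text.

```python
# Sketch of the adaptive copy skew threshold: while the backlog (write
# pendings for AW, invalid tracks for AD) exceeds the user-configured
# skew, the pair operates in its predetermined SRDF state; once the
# backlog drops below the skew, it reverts to adaptive copy.

def effective_mode(adaptive_mode, backlog, skew, predetermined="semi-sync"):
    """adaptive_mode is "AW" or "AD"; skew ranges are 1-65,535 and 1-9,999."""
    limit = 65535 if adaptive_mode == "AW" else 9999
    if not 1 <= skew <= limit:
        raise ValueError("skew out of range for this adaptive copy mode")
    return predetermined if backlog > skew else adaptive_mode

print(effective_mode("AW", backlog=120, skew=100))  # threshold exceeded
print(effective_mode("AW", backlog=80, skew=100))   # reverted to adaptive copy
```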
Domino Effect
This attribute, when set along with the SYNC attribute, ensures that the data on the source (R1) and target (R2) volumes is fully synchronized. When this attribute is enabled, the Symmetrix unit
forces the other SRDF (source (R1) or target (R2)) volume to the not
ready state and responds intervention required/unit not ready
to the host whenever it detects that one volume in a remotely
mirrored pair is unavailable or a link failure has occurred. After the
problem has been corrected, the not ready volume must be made
ready again to the host using the SRDF utilities. If the failed volume
or link is still not available when the SRDF volume is made ready, the
volume remains not ready.
Under normal operating conditions (domino effect not enabled), a
remotely mirrored volume continues processing I/Os with its host
even when an SRDF volume or link failure occurs. New data written
to the source (R1) or target (R2) volume while its pair is unavailable
or link paths are out of service is marked for later transfer. When a
link path is re-established or the volume becomes available,
resynchronization begins between the source (R1) and target (R2)
volumes. Each source (R1) volume notifies the host when
synchronization completes on that volume.
ESCON Remote Adapters
ESCON remote adapters, known as RAs, are board sets that provide the link connections, fiber optic protocol support, and communications control between two Symmetrix units in an SRDF configuration.
The RA board set that sends data across an SRDF link is known as an
RA-1. An RA-1 functions like an IBM ESCON host channel interface.
The RA board set that receives data sent across an SRDF link is
known as an RA-2. An RA-2 functions like an IBM ESCON storage
director interface.
An ESCON SRDF link consists of an RA-1 board set in one Symmetrix
unit, an RA-2 board set in another Symmetrix unit, and a fiber cable
connecting them.
An RA-1 and its corresponding RA-2 are known as an RA pair. With
Symmetrix 4.x models, there can be multiple RA pairs in an SRDF
configuration, up to a maximum of 16 pairs. With Symmetrix 5
models, an optional four-processor ESCON board can be used for
SRDF, providing a maximum of 32 pairs.
Fibre Channel Remote Adapter
Enginuity microcode versions 5x66 and later support SRDF Fibre Channel emulation. The Remote Adapter for Fibre (RAF or RF) is a two-port Fibre Channel board set that is Enginuity-configured as the link between Symmetrix units in an SRDF or Open Systems SDMS configuration.
SRDF Configurations
This section provides examples of typical SRDF configurations.
In Figure 5-3, two production sites (A and C) send data across SRDF
links to one recovery site (B).
In Figure 5-4, one recovery site (G) provides a data vaulting solution
for six production sites (A through F).
Figure 5-5 illustrates the versatility of SRDF — some sites have either
primary (R1) or secondary (R2) devices, while other sites have both
primary (R1) and secondary (R2) devices.
Figures 5-4 and 5-5: Symmetrix units A through G with primary (R1 = source) and secondary (R2 = target) volumes; some units hold only R1 or only R2 volumes, while others hold both.
Figure: Two concurrent I/O ESCON point-to-point connections and two concurrent I/O Fibre Channel (FC) point-to-point connections between Symmetrix 8430/8730 units, each with 14 host connections and 2 SRDF connections (R1 = primary/source volumes, R2 = secondary/target volumes).
Extended Distance Solution
The extended distance solution is used most often for distances over 60 km, but it can also be used for distances under 60 km if private or leased ESCON fiber cable is too expensive or not available. Figure 5-9 illustrates non-networked and networked extended distance solutions.
Extended Distance Solution Over ESCON
The ESCON SRDF extended distance solution uses leased T1, E1, T3, E3, or ATM high-speed data lines instead of ESCON fiber cables. An architectural converter (ESCON multimode-to-network converter) delivers all data frames. Two architectural extender units, one local and one remote, can support multiple SRDF links based on the type of extended distance equipment. Consult with your extended distance vendor or EMC Systems Engineer for details.
Figure 5-9 Multi-Hop Extended Distance Solutions: (1) SRDF over IP between Campus A and Campus B through FC switches across a WAN/ATM network; (2) DWDM (ESCON or Fibre Channel, ISL only); (3) ESCON channel extension. R1 = primary volumes, R2 = secondary volumes, with BCVs at the remote sites.
SRDF FarPoint
SRDF FarPoint™ is an SRDF feature used with ESCON extended distance solutions (and ESCON campus solutions with links greater than 15 km) to optimize the performance of the SRDF links. This feature works by allowing each RA to transmit multiple I/Os, in series, over each SRDF link.
Figure: SRDF with FarPoint (data frames and their responses travel the SRDF link concurrently between the two Symmetrix units).
Preserving Synchronization
From the point of view of the host, FarPoint does not change the
SRDF protocol — the Symmetrix system still returns a completion
status to the host only after the write operation is performed on the
remote machine. Without FarPoint, the Symmetrix unit waits for one
write operation to complete before sending the next one. With
FarPoint, the Symmetrix unit, while waiting for the status of the first
write operation, uses the free link bandwidth to send the next write
operation. Interaction with the host remains unchanged, so the
Journal Zero condition is fully preserved, and the data on the
remote RDF device is 100 percent consistent from the host's point of
view.
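The performance effect of FarPoint pipelining can be sketched with a simple link-timing model. The timing model, units, and function name are assumptions for illustration; only the serial-vs-pipelined behavior comes from the text.

```python
# Sketch of FarPoint pipelining: without FarPoint, each write waits for
# the previous write's status; with FarPoint, sends overlap the wait
# for earlier statuses, so the link round trip is paid roughly once.

def link_time(num_writes, round_trip, send_time, farpoint):
    """Total time to move num_writes across one SRDF link (arbitrary units)."""
    if not farpoint:
        # One write at a time: each write pays the full round trip.
        return num_writes * (send_time + round_trip)
    # Pipelined: sends proceed back to back while statuses are outstanding.
    return num_writes * send_time + round_trip

print(link_time(10, round_trip=8, send_time=2, farpoint=False))  # 100
print(link_time(10, round_trip=8, send_time=2, farpoint=True))   # 28
```

The longer the link round trip, the larger the gain, which is why FarPoint targets extended distance and long campus links.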
SRDF Unidirectional Link Protocol
If all primary volumes reside in one Symmetrix unit and all secondary (target) volumes reside in another Symmetrix unit, write operations move in one direction, from primary to secondary. A unidirectional link protocol, in which data moves in the same direction over every link in the link group, satisfies this scenario.
SRDF Bidirectional Link Protocol
If each Symmetrix unit contains both primary and secondary volumes, write operations move in both directions over the SRDF link group. For an SRDF configuration in which the Symmetrix units are relatively close together (less than 60 km apart), a bidirectional link protocol can be used. With a bidirectional protocol, data moves in two directions over the same link.
SRDF Dual-Directional Link Protocol
For extended-distance SRDF configurations (using E1, E3, T1, T3, and ATM links and/or network connections) that require data to move in two directions, a dual-directional link protocol is required. With a dual-directional protocol, multiple links are used; some links send data in one direction, while other links send data in the opposite direction.
The link is enabled only when the Symmetrix remote adapter is online and
the link is operational.
Synchronous Mode Used mainly for the campus solution, synchronous mode maintains a
(Real-Time or real-time mirror image of data between the primary and secondary
Journal 0 Mode) volumes. Data must be successfully stored in both the local and
remote Symmetrix units before an acknowledgment is sent to the
local host.
Synchronous mode provides real-time mirroring of data between the
local Symmetrix system and the remote Symmetrix system. Data is
written simultaneously to the cache of both systems — in real time —
before the application I/O is completed, ensuring the highest
possible data availability as shown in the following steps:
1. The local Symmetrix system containing the primary or source
volume receives a write operation from the user application.
2. The write operation immediately moves to the remote Symmetrix
system (containing the secondary or target volume); the local
Symmetrix system does not accept additional write operations to
the primary volume.
3. The remote Symmetrix system sends an acknowledgment to the
local Symmetrix system.
4. The local Symmetrix system sends an I/O complete message to
the local host; the local Symmetrix system now accepts additional
write operations to the primary volume.
In contrast, semi-synchronous mode completes the I/O before synchronizing
data with the remote system, which provides an added performance
advantage.
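The four synchronous-mode steps can be sketched as a toy model (the class and method names are invented for illustration; a real Symmetrix system performs this in microcode):

```python
class RemoteUnit:
    """Stand-in for the remote Symmetrix system holding the secondary (R2) volume."""
    def __init__(self):
        self.cache = {}

    def write(self, track, data):
        self.cache[track] = data   # step 2: write lands in remote cache
        return "ack"               # step 3: acknowledgment back to the local unit

class LocalUnit:
    """Stand-in for the local Symmetrix system holding the primary (R1) volume."""
    def __init__(self, remote):
        self.cache = {}
        self.remote = remote

    def host_write(self, track, data):
        self.cache[track] = data              # step 1: host write in local cache
        ack = self.remote.write(track, data)  # step 2: forwarded to the remote unit
        assert ack == "ack"                   # step 3: wait for the acknowledgment
        return "I/O complete"                 # step 4: only now complete to the host

remote = RemoteUnit()
local = LocalUnit(remote)
status = local.host_write(7, b"payload")
# Both caches hold the data before the host sees "I/O complete".
```

The key property is visible in `host_write`: completion is not returned to the host until the remote acknowledgment has arrived, which is what keeps the two sides within one I/O of each other.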
Adaptive Copying Adaptive copying modes facilitate data sharing and migration. These
Modes modes allow the primary and secondary volumes to be more than
one I/O out of synchronization. The maximum number of I/Os that
can be out of synchronization is known as the maximum skew value.
The default value is equal to the entire logical volume. The maximum
skew value for a volume can be set using the SRDF monitoring and
control software.
There are two adaptive copying modes: Adaptive Copy-Write
Pending (AW) mode and Adaptive Copy-Disk (AD) mode. Both
modes allow write tasks to accumulate on the local system before
being sent to the remote system.
Adaptive Copy-Write With Adaptive Copy-Write Pending mode, write tasks accumulate in
Pending Mode local cache. A background process moves, or destages, the
write-pending tasks to the primary volume and its corresponding
secondary volume on the other side of the SRDF link. When the
maximum skew value is reached, the source volume reverts to its
primary mode of operation, either synchronous or semi-synchronous,
whichever is currently specified. The device remains in the primary
mode until the number of tracks to remotely copy becomes less than
the maximum skew value.
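The skew-based reversion described above can be modeled as follows (the names and thresholds are illustrative, not actual SRDF parameters):

```python
class AdaptiveCopyDevice:
    """Models reversion to the primary mode when the skew limit is reached."""
    def __init__(self, max_skew: int, primary_mode: str = "synchronous"):
        self.max_skew = max_skew
        self.primary_mode = primary_mode
        self.pending_tracks = 0          # tracks waiting to cross the SRDF link

    @property
    def mode(self) -> str:
        # At or above the maximum skew, revert to the primary mode; drop
        # back to adaptive copy once the backlog falls below the maximum.
        if self.pending_tracks >= self.max_skew:
            return self.primary_mode
        return "adaptive-copy"

    def host_write(self):
        self.pending_tracks += 1

    def destage(self):
        if self.pending_tracks:
            self.pending_tracks -= 1

dev = AdaptiveCopyDevice(max_skew=3)
for _ in range(3):
    dev.host_write()
print(dev.mode)   # synchronous: the skew limit has been reached
dev.destage()
print(dev.mode)   # adaptive-copy: below the maximum skew again
```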
! CAUTION
Adaptive Copy Disk mode should not be used if the source
volumes are not protected by local mirroring or RAID-S.
Domino Modes Domino modes effectively stop all write operations to both primary
and secondary volumes if all mirrors in a primary or secondary
mirror group fail, or if all SRDF links in a link group become
unavailable. While such a shutdown temporarily halts production
processing, domino modes can prevent so-called rolling disasters.
There are two types of domino modes: device domino mode and link
domino mode.
Device Domino Mode You set Device Domino mode at the device level. If this mode is set to
Yes for a secondary (target) volume, and the secondary volume
becomes unavailable to its primary volume for any reason, the
primary volume becomes unavailable to its host.
Link Domino Mode You set Link Domino mode at the link group level. If this mode is set
to Yes for an SRDF link group, and the last remaining link in the link
group fails, all primary (source) volumes in the link group become
unavailable to their host.
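The link domino trigger can be expressed as a simple predicate (an illustrative sketch, not SRDF code):

```python
def link_domino_trips(links_up: list[bool], domino_enabled: bool) -> bool:
    """True when domino mode is set and the last remaining SRDF link
    in the group has failed."""
    return domino_enabled and not any(links_up)

print(link_domino_trips([True, False], True))    # False: one link survives
print(link_domino_trips([False, False], True))   # True: last link failed
print(link_domino_trips([False, False], False))  # False: domino mode off
```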
After the problem is corrected, you must reenable the volumes using
one of the SRDF control utilities (the Symmetrix service processor or
EMC ControlCenter software).
With either domino mode, the appropriate primary (source) volumes are
forced off line and all related applications stop. This is an extreme measure. A
more moderate measure (if you are using SRDF in a mainframe environment,
or an Open Systems environment with EMC PowerPath software) is to
implement consistency groups (refer to Write Operations on page 5-37 for
more information).
Invalid Tracks When this feature is enabled for a secondary (target) volume,
Warning SRDF issues a warning if you attempt to recover data from that
secondary volume
when it is not synchronized with its primary (source) volume. In such
cases, another form of data recovery (for example, a tape restore) is
more appropriate.
Concurrent RDF
Enginuity 5567 and later supports the ability for a single primary
volume to be remotely mirrored to two secondary volumes
concurrently. This feature is called Concurrent RDF and is supported
in both ESCON and Fibre Channel RDF configurations.
Concurrent RDF requires that each secondary volume operate in the
same primary mode, either both synchronous or both
semi-synchronous, but allows either (or both) volumes to be placed
into Adaptive Copy mode.
Figure 5-11 shows a concurrent RDF configuration in which one
secondary volume is in synchronous mode, while the other is in
Adaptive Copy. Any combination of synchronous/semi-synchronous
and Adaptive Copy is allowed with the exception of one volume
operating in synchronous mode and the other operating in
semi-synchronous mode.
[Figure 5-11: Concurrent RDF — an R1 primary volume on a Symmetrix 8430/8730 is mirrored to two R2 secondary volumes on separate Symmetrix units, one in synchronous mode and one in Adaptive Copy mode.]
Consistency Groups
A consistency group is a group of Symmetrix devices specially
configured to act in unison to maintain the integrity of a database
distributed across multiple SRDF units controlled by a mainframe
host computer or open systems host computer using EMC PowerPath
software.
How a Consistency Assume that you have an SRDF configuration in which three
Group Works Symmetrix units contain primary (source) devices, and two
additional Symmetrix units contain secondary (target) devices. The
units with primary devices send data to the units with secondary
devices as shown in Figure 5-12.
[Figure 5-12: The three source units sending data to Targets 1 and 2.]
Next, assume that the links between Source 2 and Target 1 fail.
Without a consistency group, Sources 1 and 3 continue to copy data
to the target devices while Source 2 does not (refer to Figure 5-13).
[Figure 5-13: After the link failure, Sources 1 and 3 continue copying data to Targets 1 and 2 while Source 2 does not.]
The result is that the copy of the database spread across Targets 1
and 2 becomes inconsistent.
However, if Sources 1, 2, and 3 belong to a consistency group, as
shown in Figure 5-14, and the link between Source 2 and secondary
Target 1 fails, the consistency group automatically stops Sources 1
and 3 from sending data to Targets 1 and 2, as shown in Figure 5-15.
Thus, the consistency of the database copy (spanning Targets 1 and 2)
remains intact.
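The suspend-on-failure behavior of a consistency group can be sketched as follows (a conceptual model only, not EMC software):

```python
class Source:
    """A Symmetrix unit holding primary (source) devices."""
    def __init__(self, name: str):
        self.name = name
        self.link_up = True
        self.suspended = False

    def can_send(self) -> bool:
        return self.link_up and not self.suspended

class ConsistencyGroup:
    """If any member loses its SRDF link, stop propagation from every member
    so the database copy spread across the targets stays consistent."""
    def __init__(self, sources):
        self.sources = sources

    def check(self):
        if any(not s.link_up for s in self.sources):
            for s in self.sources:
                s.suspended = True

sources = [Source("Source 1"), Source("Source 2"), Source("Source 3")]
group = ConsistencyGroup(sources)

sources[1].link_up = False   # the links between Source 2 and Target 1 fail
group.check()
print([s.can_send() for s in sources])   # [False, False, False]
```

The point of the model is that the failure of one member's link suspends all members, so the remote copy never contains a partial set of dependent writes.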
Consistency Group
Target 1 Target 2
Consistency Group
Target 1 Target 2
Continuous I/O to the primary (source) devices in the consistency group can still
Processing occur even when the devices are suspended on the links. Such
updates are not immediately sent to the remote side. However, they
are propagated after the affected links are again operational, and data
transfer resumes from the primary devices to the secondary devices.
MVS Technical You use host-based EMC software (the CGroup utility) to manage
Considerations and monitor consistency groups within an MVS® environment. The
host software identifies the consistency group devices by VOLSER,
CUU, or SMS storage group. When the devices are initialized, all
related volumes and controllers for the database are grouped to form
the consistency group. If a write operation on a primary volume fails
to reach the corresponding secondary volume, the Symmetrix system
performs a unit check and returns the results to the controlling host
computer.
A suspend channel program must be prebuilt for each controller in a
consistency group. The host software runs the suspend channel
program for each controller in the group, and SRDF suspends the
links for all devices in the group.
For more information about consistency groups and the CGroup utility, refer
to the EMC Symmetrix Consistency Group for MVS Product Guide.
Write Operations
The write task is the most common SRDF operation. This section
describes the write operations for unidirectional, dual-directional,
and bidirectional protocol.
A copy task, rather than a write task, is used for updates to volumes running
in an Adaptive Copy mode.
Read Operations
Read operations, in which the local host reads data from the remote
Symmetrix unit, are performed only to recover from a local data
availability problem. Several events can cause a read operation to
take place:
◆ If data is not in local cache and all of the primary (source) devices
are in a Not Ready state.
◆ If a primary device is in a Ready state but the requested track is
invalid.
◆ If a disk adapter has a problem accessing a primary device, as in
the case of a drive timeout or cyclic redundancy check (CRC)
error. In these cases, the local Symmetrix unit requests the data
from the remote Symmetrix unit.
◆ If a track on a primary device is currently available only from a
RAID rebuild, the local Symmetrix unit requests the track data
from the remote Symmetrix unit. This method is faster than
accessing the data from the RAID rebuild. The Symmetrix system
always performs a full-track read for count-key-data (CKD) and
at least a contiguous-block read for fixed block architecture (FBA)
data.
In a read operation, the remote Symmetrix unit reads data from the
secondary device and sends the data across the link to the local
Symmetrix unit.
In addition to the requested data, the remote Symmetrix unit sends
the corresponding track ID from the secondary device to describe the
transferred data accurately. The remote Symmetrix unit uses the CRC
byte to ensure data integrity.
Recovery Operations
! CAUTION
This section explains recovery in the context of hardware failure.
For adequate protection against data corruption, EMC recommends
that you back up your data regularly.
If the local host or local Symmetrix unit fails, and the primary
volumes are operating in synchronous mode, the remote Symmetrix
unit can be ready for operations in minutes. When the failure occurs,
the primary and secondary volumes are synchronized within one
I/O and there are no invalid tracks on either the primary or
secondary side.
If the remote host has read-only access to the secondary device, it does not
interfere with SRDF operations. SRDF uses cache slot locks on the secondary
side to ensure that data arriving over the link reaches cache intact.
Returning Control to After the local host and the Symmetrix unit containing the primary
the Local Host volumes are restored, production processing can resume on the local
host. Several steps are required to transfer processing from the
remote host back to the local host:
1. Halt processing on the remote host and change the states of the
secondary devices to read only.
2. Bring the Symmetrix unit on which the primary volumes reside to a
Ready state. Make sure that the SRDF links are suspended so that data
movement across the links does not occur.
Recovery for a If the recovery site (the remote host and the Symmetrix unit
Large Number of containing the secondary volumes) has handled production
Invalid Tracks (Open processing for a long period of time, there might be a large number of
Systems Sites Only) invalid tracks (for example, 500 GB) on the primary volumes. In this
case, you can resynchronize the primary and secondary volumes
while the remote host continues production processing. Then, when
there is a relatively small number of invalid tracks on the primary
volumes (for example, 50 GB), you can shut down the remote host
and restart the local host.
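The staged cutover can be sketched as a loop (the track counts and resynchronization rate below are illustrative, not measured values):

```python
def ready_to_cut_over(invalid_tracks: int, threshold: int) -> bool:
    """Shut down the remote host only once the resync backlog is small."""
    return invalid_tracks < threshold

# Hypothetical numbers: a large backlog of invalid tracks, resynchronized
# while the remote host continues production processing.
backlog = 500_000
while not ready_to_cut_over(backlog, threshold=50_000):
    backlog = max(0, backlog - 60_000)   # tracks copied back per pass

# Only a small backlog remains, so the final outage for the cutover is short.
```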
[Figure: A production host attached to Symmetrix 1 (R1), connected over an SRDF link to Symmetrix 2 (R2), which is attached to a business continuance host.]
Concurrent You can temporarily suspend the SRDF links so that you can read and
Operations write data on both the primary and secondary volumes concurrently.
This enables you, for example, to run backups on the secondary
volumes while production processing continues on the primary
volumes (a business continuance practice).
You can then resume the links and copy data from the primary
volumes to the secondary volumes.
This last operation, known as an establish, propagates any updates
made to the primary volumes while the links were suspended,
bringing the secondary volumes current with the most recent data.
[Figure: After completion of concurrent operations, the links between the source (R1) and target (R2) volumes are reestablished, and updates are propagated from the source volumes to the target volumes.]
Using a BCV as a A BCV device may also function as an SRDF primary device if the
Primary (Source) BCV is not locally mirrored (a BCV R1 device). The SRDF remote
Device mirror connection for the BCV R1 device is suspended while the BCV
is paired with a local device.
When a BCV is paired with a primary device, the corresponding
secondary device is disabled. When the BCV is split, the secondary
device is reenabled and resynchronized with the primary device
(Figure 5-19).
Using a BCV as a primary device is useful if you want to duplicate a
system remotely.
You can use a BCV as a primary device in Enginuity microcode version 5x64
and later.
[Figure: SRDF multihop configuration (Hop 1 and Hop 2). Data is remotely mirrored from the host's R1 to the R2 and BCV R1 devices on the intermediate Symmetrix unit; the R2 on the Symmetrix unit at Site C is then resynchronized with the BCV R1 at Site B.]
Performing Remote You can issue SRDF and BCV control commands from a host or server
Backups Without a connected to the local Symmetrix unit. The commands move across
Remote Host the SRDF link to the secondary Symmetrix unit. A host or server does
not have to be located at the secondary location, allowing you to
create a point-in-time copy of your data at the remote site without
needing a host computer or server at the remote site.
SRDF Multihop SRDF multihop enables a BCV on the local side of an SRDF
configuration to perform an incremental synchronization with a
secondary volume on the remote side, copying only changed tracks
from the primary BCV to the secondary volume. This process
eliminates the performance lag caused by a full synchronization
across great distances, making a multitiered SRDF configuration
feasible.
[Figure: Production data is mirrored from the R1 to the remote R2/BCV R1, where an SDDF session tracks changes; a reestablish/differential split then drives a differential synchronization copy to the final R2.]
Overview
EMC TimeFinder is a business continuance solution that allows
customers to use special devices that contain a copy of data from
standard Symmetrix devices, accessible from one or more attached
hosts, while the standard devices remain online for regular I/O
operation from their host(s). Uses for these copies include backup,
restore, decision support, and applications testing.
System Setup One or more hosts can be attached to a Symmetrix unit containing the
BCV devices. Any Symmetrix system, including one configured for
RAID 1 or RAID-S protection mode, sparing, or SRDF, supports the
TimeFinder option.
Components
The main components of TimeFinder are:
◆ Standard devices
◆ BCV devices
BCV devices and standard devices both reside in the same cabinet.
Standard Devices Standard devices are configured for normal Symmetrix
operation under a desired protection method (such as RAID 1,
RAID-S, or SRDF).
The standard device can have any mirror structure (normal, RAID,
RAID with SRDF), as long as the number of mirrors does not exceed
three. This constraint is in place because establishing a BCV pair
requires assigning the BCV device as the next available mirror of the
standard device. Since there is a maximum of four mirrors allowed
per device in a Symmetrix unit, a device already having four mirrors
is not able to accommodate another one.
BCV Devices A BCV device is a standard device used for dynamic mirroring. It has
additional attributes that allow it to independently support host
applications and processes. It may be configured as a single mirror, a
locally mirrored device (requires at least microcode level 5264), or an
SRDF source (R1) device (requires at least microcode level 5264). A
BCV device can be RAID 1 or RDF protected, but it cannot be RAID-S
protected.
Standard Device Each standard device mirror contains a copy of the data contained in
Mirrors the standard device. There can be up to three standard device
mirrors.
BCV Mirror A BCV mirror is a standard device mirror that is assigned upon
creation of the BCV pair.
Components 6-3
EMC TimeFinder Operations
6
Operations Overview
Business continuance operations make use of the components
described in the previous section to provide a foundation for various
host business continuance processes.
TimeFinder offers the following business continuance operations,
which are available through host commands:
◆ Establish a BCV pair
TimeFinder assigns the BCV as the next available mirror of a
standard Symmetrix unit and copies the entire contents of the
standard device to the BCV.
◆ Split a BCV pair
TimeFinder splits the BCV mirror from the standard Symmetrix
unit and makes it available (with the data from the standard
device with which it was paired) to hosts through its separate
device address.
◆ Perform a Differential Split
This variation on the split process is available for locally or
remotely mirrored BCVs with microcode level 5265 or higher.
TimeFinder enables only the updated tracks to be copied to a
BCV’s local or remote mirror on second and subsequent
differential splits.
All mirrors of the BCV are rapidly synchronized to the associated
standard device to the point in time that the differential split
command was issued.
The differential split can significantly reduce the time required for
the split process because only changed tracks need to be
synchronized.
◆ Reestablish a BCV device
TimeFinder reassigns the BCV as the next available mirror of the
standard device to which it was assigned before it was split. Any
data written to the BCV while it was split from the standard
device is replaced on the BCV. The BCV receives its updates from
the standard device.
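These operations can be modeled at the track level with a toy class (illustrative only; the names are invented, and the model ignores cache, mirror structure, locking, and writes made to the BCV itself while split):

```python
class BCVPair:
    """Toy model of a TimeFinder standard/BCV pair."""
    def __init__(self, tracks: int):
        self.standard = ["data"] * tracks
        self.bcv = [None] * tracks
        self.paired = False
        self.changed = set()            # standard tracks written while split

    def establish(self):
        self.bcv = list(self.standard)  # full copy of the standard device
        self.paired = True
        self.changed.clear()

    def split(self):
        self.paired = False             # BCV becomes host-addressable (Vol B)

    def write_standard(self, track: int, data: str):
        self.standard[track] = data
        if self.paired:
            self.bcv[track] = data      # mirrored while the pair exists
        else:
            self.changed.add(track)     # flagged for a later reestablish

    def reestablish(self):
        for t in self.changed:          # copy only the updated tracks
            self.bcv[t] = self.standard[t]
        self.changed.clear()
        self.paired = True

pair = BCVPair(4)
pair.establish()
pair.split()
pair.write_standard(2, "new")           # update while the pair is split
pair.reestablish()
print(pair.bcv[2])                      # new: only track 2 was resynchronized
```

The `changed` set corresponds to the flagged writes mentioned above: it is what lets a reestablish copy only updated tracks instead of repeating the full establish.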
Establish
After configuration and initialization of a Symmetrix unit, BCV
devices contain no data. The BCV devices, like the standard devices,
have unique host addresses and are online and ready to the host(s) to
which they are attached. Figure 6-1 illustrates the initial Symmetrix
configuration prior to performing any TimeFinder operations. In this
figure the host views the Symmetrix M1/M2 mirrored pair as a single
device (Vol A). The host views the BCV device as Vol B.
[Figure 6-1: Initial configuration — the host addresses the M1/M2 mirrored pair as Vol A and the BCV device as Vol B.]
[Figure 6-2: Establish — the BCV becomes Not Ready to the host, is assigned as mirror M3 of the standard device, and the standard device's data is copied to it.]
The BCV device is not available for host use when it is assigned as a BCV
mirror on a standard device. However, any new data written to the standard
device is copied to the BCV device while the BCV pair exists.
Split
After an establish operation and after the standard device mirrors are
synchronized (Figure 6-2 on page 6-7), the BCV device contains a
copy of the data from the standard device. The BCV copy remains
valid until the point in time when a split command is issued. After a
split operation, business continuance processes can be executed with
the BCV device. Figure 6-3 below shows the result of the split
operation.
[Figure 6-3: Split — the BCV is split from the M1/M2 standard device and becomes available to the host as Vol B.]
When the Symmetrix unit receives a split command from the host, it
does the following:
◆ Checks command validity. For example, the Symmetrix unit
makes sure that the standard device has an active BCV mirror, the
standard and BCV devices comprise a BCV pair, and the BCV
device is fully synchronized with the standard device.
◆ Suspends I/O to the standard device until the split operation
completes.
◆ Destages any write pendings to the standard device and the BCV
device.
◆ Splits the BCV device from the BCV pair.
◆ Changes the BCV device state to ready, enabling host access
through its separate address (Vol B).
◆ Resumes operation with the standard device and flags any new
writes to the standard device. (This is necessary for updating the
BCV device if it is re-established with the same standard device at
a later time.)
Once you finish running any business continuance processes on the
BCV device, the following options are available:
◆ Reestablish the BCV pair
◆ Restore data from the BCV device to its standard device
◆ Incrementally restore data from the BCV device to its standard
device
◆ Establish a new BCV pair (consisting of the same BCV device but
with a different standard device)
These TimeFinder operations are described in this chapter.
Differential Split
After an establish operation and after the standard device mirrors are
synchronized (Figure 6-2 on page 6-7), the BCV device contains a
copy of the data from the standard device. This copy is valid until the
point in time when a split or differential split command is
issued. Figure 6-4 shows the result of the differential split operation.
[Figure 6-4: Differential split — the BCV is split from the standard device, and an internal SDDF session on the standard device begins logging track changes.]
◆ Splits the BCV device from the BCV pair. In the case of a
differential split, this automatically opens an internal Symmetrix
differential data facility (SDDF) session on the standard device.
The SDDF session begins to log track changes on the standard
device once the standard and BCV devices are again paired.
Second and subsequent differential splits benefit from the SDDF
session because it enables only those changed tracks to be copied
to the BCV local or remote mirrors (differential split).
The SDDF session is removed on the first nondifferential split
operation.
◆ Changes the BCV device state to Ready, enabling host access
through its separate address (Vol B).
◆ Resumes operation with the standard device, and flags any new
writes to the standard device. (This is necessary for updating the
BCV device if it is reestablished with the same standard device at
a later time.)
Once you finish running any business continuance processes on the
BCV device, the following options are available:
◆ Reestablish the BCV pair
◆ Restore data from the BCV device to its standard device
◆ Incrementally restore data from the BCV device to its standard
device
◆ Establish a new BCV pair (consisting of the same BCV device but
with a different standard device)
Each of these TimeFinder operations is described in the following
sections.
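The role of the SDDF session in a differential split can be sketched as follows (an illustrative model; the class name is invented):

```python
class SDDFSession:
    """Logs tracks changed on the standard device between differential splits."""
    def __init__(self):
        self.changed = set()

    def log_write(self, track: int):
        self.changed.add(track)

    def drain(self) -> set:
        """Return the changed tracks and reset the log for the next cycle."""
        tracks, self.changed = self.changed, set()
        return tracks

session = SDDFSession()
for t in (3, 7, 7, 11):          # writes to the standard device while paired
    session.log_write(t)

# On the next differential split, only the logged tracks are copied to the
# BCV's local or remote mirror instead of the whole device.
to_copy = session.drain()
print(sorted(to_copy))           # [3, 7, 11]
```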
Reestablish
Reestablishing a BCV pair (Figure 6-5) accomplishes the same thing
as the establish process, with one time-saving exception: the standard
device (Vol A) copies to the BCV device only the new data that was
updated on the standard device while the BCV pair was split. Any
changed tracks on the BCV are also overwritten by the data on the
corresponding track on the standard device. This process maximizes
the efficiency of the synchronization.
[Figure 6-5: Reestablish — the BCV becomes Not Ready to the host, is reassigned as mirror M3, and only the changed tracks are synchronized from the standard device.]
The BCV device is not available for host use when it is assigned as a BCV
mirror on a standard device. However, any new data written to the standard
device is copied to the BCV device while the BCV pair exists.
Restore
The restore operation differs from the establish or reestablish
operation in that the entire contents of the BCV device are copied to
the standard device.
The Symmetrix unit performs the following functions when it
receives a restore command from the host:
◆ Checks command validity. For example, it rejects the command if the
BCV device and the standard device are not the same size.
◆ Sets the BCV device not ready to the host.
◆ Assigns the BCV as the next available mirror of the standard
device.
◆ Copies the contents of the BCV device to the standard device and
all its mirrors. For example, in Figure 6-6 the Symmetrix unit
copies the contents of M3 to both M1 and M2, overwriting the
data present on those devices.
The restoration process (Figure 6-6) is complete when the standard
device and the BCV device contain identical data.
[Figure 6-6: Restore — the BCV becomes Not Ready to the host, is assigned as mirror M3, and its contents are copied to M1 and M2.]
The BCV device is not available for host use when it is assigned as a BCV
mirror on a standard device. However, any new data written to the standard
device is copied to the BCV device while the BCV pair exists.
Incremental Restore
The incremental restore process (Figure 6-7) accomplishes the same
thing as the restore process with one time-saving exception: the BCV
(Vol B) copies to the standard device (Vol A) only the new data that
was updated on the BCV device while the BCV pair was split. Any
changed tracks on the standard device are also overwritten by the
data on the corresponding track on the BCV device. This maximizes
the efficiency of the synchronization process.
[Figure 6-7: Incremental restore — the BCV becomes Not Ready to the host, and only the changed tracks are synchronized from the BCV to the standard device.]
The BCV device is not available for host use when it is assigned as a BCV
mirror on a standard device. However, any new data written to the standard
device is copied to the BCV device while the BCV pair exists.
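The two-sided track merge performed by an incremental restore can be sketched as follows (a conceptual helper; the track contents are illustrative):

```python
def incremental_restore(standard: dict, bcv: dict,
                        changed_on_bcv: set, changed_on_standard: set) -> dict:
    """Copy only tracks updated on either side while the pair was split,
    always taking the BCV's copy of the track."""
    for t in changed_on_bcv | changed_on_standard:
        standard[t] = bcv[t]
    return standard

standard = {0: "a", 1: "b-new", 2: "c"}      # track 1 changed on the standard
bcv      = {0: "a", 1: "b",     2: "c-new"}  # track 2 changed on the BCV
result = incremental_restore(standard, bcv, {2}, {1})
print(result)   # {0: 'a', 1: 'b', 2: 'c-new'}: the standard now matches the BCV
```

Note that tracks changed only on the standard device (track 1 here) are overwritten with the BCV's copy, exactly as the text describes.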
ESN Management
Introduction
Enterprise Storage Networks (ESNs) provide a flexible, scalable
infrastructure for managing, sharing, and protecting information.
These networks eliminate connectivity and distance limitations and
allow businesses to consolidate their servers and storage. In order to
cope with today’s explosion of information, Enterprise Storage
Networks have had to dramatically increase in size and complexity.
Without proper storage network management and access control
strategies in place, maintaining the security and integrity of
Enterprise Storage Networks is a difficult task.
Information has become a valuable strategic corporate asset, and IT
administrators need to ensure that it is protected. Businesses must be
able to manage and control access to their data, especially when it is
connected to a network environment. The ability to control access at
the storage subsystem, as well as the storage network, is critical to
creating a secure ESN. EMC understands the importance of ESN
security and has developed products and techniques to facilitate this
level of integrity. This chapter provides an overview of ESN
management and best practices for managing a secure ESN
environment.
The purpose of this chapter is to:
◆ Present a brief overview of ESN management components and
techniques
◆ List and describe the ESN management tools currently available
◆ Provide some best practices for managing a secure ESN
environment
◆ Demonstrate how to manage a secure Enterprise Storage
Network using access control methods such as zoning and
volume access control
◆ Describe methods for controlling management functions
◆ Provide a list of reference materials for more detailed information
[Figure: ESN manageability levels]
Access — Only sees volumes that have been assigned to it. Can only
communicate with members assigned to its zones. No monitor or
manage capabilities.
Monitor — Can monitor and discover some aspects of the storage
network, but cannot make configuration changes. Granted read-only
access to SAN management repositories. Can only communicate with
members assigned to its zones.
Manage — Can manage all aspects of the storage network. Granted full
read/write access to the Volume Configuration Management Database.
Can create and modify zones and volume access privileges for all
members of the storage network. Can be designated as the Control
Station. Hardware and software should be physically secure.
Access — Although these systems are members of the ESN, they do not
have any monitoring or management capabilities. These systems are
configured by management to access a subset of the storage
network’s devices; they are unable to communicate with members
outside of their virtual domain.
Monitor — Represents systems that can discover and examine certain
aspects of the storage network environment, but are unable to modify
it. This level of manageability is similar to read-only permission. In
addition to basic access, these systems are able to collect information
about other elements within the ESN topology.
Manage — Represents systems with the highest level of
manageability. These systems are under the full control of IT
management, which means that their hardware and software
components are physically secure and user access is restricted.
Similar to read/write permission, they have the ability to manage
and control access to any device that is a member of the storage
network. They also have the highest trust rating within the
environment, and are typically designated as Control Stations.
Control Stations, also known as Administrator Hosts, are the most
secure systems in an ESN environment. For this reason, ESN
management functions should be performed from these systems. To
further limit accessibility to critical management applications, EMC
recommends installing the management and control software only on
one or two Control Stations.
[Figure: ESN management topology — a Control Station behind a secure VPN or firewall on the corporate LAN manages servers A, B, and C over out-of-band (IP) and in-band (FC) paths to Symmetrix storage, which presents volumes A, B, and C and the VCM database.]
Hard Zoning Hard zoning creates zones by using the Source ID of the switch. A
SID, otherwise known as a port number or Port ID (PID), is a
combination of the unique domain identifier of a switch and the
physical port number. Although hard zoning is somewhat limited in terms
of flexibility, it offers several benefits:
Security — Hard zoning is sometimes considered more secure than
soft zoning because zoning configuration changes must be performed
at the switch. To modify an existing zone, a user has to physically
move fiber cables from one port on the switch to another. If physical
access to the switch is restricted (for example, the switch is located in
a secure data center), the potential for unauthorized configuration
changes is greatly reduced.
HBA Replacement — In some situations, hard zoning can also
simplify the process of replacing HBAs. When zones are created
using SIDs, HBAs can be replaced without having to modify zone
configurations.
Although hard zoning does have some positive attributes, EMC does
not recommend creating zones based solely on port numbers for the
following reasons:
◆ Switch port replacement and the use of spare ports require
manual changes to the zone configuration.
◆ If the domain ID changes (for example when you reconfigure a set
of independent switches to form a fabric), the zone configuration
will be invalid, increasing the chance of data corruption.
◆ Replacing an HBA also requires reconfiguration of the volume
access control settings on the storage subsystem. This minimizes
the benefit of hard zoning because manual configuration changes
will still be necessary.
◆ Managing ISL congestion by relocating high-traffic port pairs to a
common switch is not handled automatically.
Soft Zoning Soft zoning creates zones by using the WWNs of HBAs and storage
subsystems. A WWN is a unique 64-bit identifier that is factory-set on
HBAs and generated on the FA directors in the Symmetrix system. A
list of all WWNs is maintained by the switch’s Name Server, which is
a database service provided for every switch in the fabric. Two main
advantages to using soft zoning are:
Flexibility — The zone member identification will not change if the
fiber cable connections to ports are rearranged. Flexibility is increased
because devices can be moved from port to port without affecting
zoning configurations.
Fabric Reconfiguration — One of the strategies for correcting ISL
congestion in fabrics includes moving pairs of high-traffic ports into a
common domain (typically a switch). When soft zoning is used, these
connections can be moved to a common domain without affecting
device driver configurations, switch-zoning configurations, or
Symmetrix configurations.
Soft zoning increases the flexibility of the ESN environment and
allows zoning management tasks to be handled at the software level.
However, there are some disadvantages to identifying zone members
exclusively by their WWNs:
◆ Zoning is not enforced at the hardware level. If an unauthorized
user has access to the zoning management software, that user can
change existing zone configurations even without physical access
to ESN hardware components.
Smart Zoning
Smart zoning combines hard and soft zoning techniques to identify
zone members by their Source ID and WWN. Smart zoning offers a
secure method for partitioning a fabric without completely sacrificing
flexibility. Every HBA that is a member of a smart zone has a SID, or
switch port number, associated with its WWN. Once an association
between the SID and WWN is created, it is stored in a database. The
database prevents users with a spoofed WWN from gaining access to
the original member’s devices. Smart zoning offers the following
advantages over conventional hard zoning or soft zoning techniques:
Security — Zoning is enforced at both the hardware and software
levels, eliminating the potential for WWN spoofing.
Zone Flexibility — Smart zoning allows administrators to create and
modify zones at the hardware level using management software.
Smart zoning requires a feature called SID Lock Down to be enabled on the
Symmetrix system. Details regarding the SID Lock Down feature are
included later in this chapter.
Single HBA Zoning
Another zoning technique that increases the stability and reliability of
the ESN environment is single-HBA zoning. Single-HBA zoning
specifies that in each individual zone there should be one and only
one initiator (HBA) participating in that zone. Initiators as well as
target devices (storage ports) can be members of more than one zone.
While it is possible to create zones with more than one initiator,
single-HBA zoning is preferred because it helps isolate
communications and interaction between initiators. This is
particularly useful for reducing the impact of state changes to the
fabric, such as the HBA discovery/login process. EMC recommends
using single-HBA zoning to maximize the performance, efficiency,
and dependability of the Enterprise Storage Network.
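The single-HBA rule can be checked mechanically. The sketch below is illustrative only (the zone contents and WWNs are hypothetical examples, and this is not ESN Manager syntax): it flags any zone that does not contain exactly one initiator, while allowing targets to appear in many zones.

```python
# Illustrative check for single-HBA zoning: every zone must contain
# exactly one initiator (HBA); targets may be shared across zones.
# Zone names and memberships below are hypothetical.

def violates_single_hba(zones, initiators):
    """Return the names of zones that do not have exactly one initiator."""
    bad = []
    for name, members in zones.items():
        hba_count = sum(1 for m in members if m in initiators)
        if hba_count != 1:
            bad.append(name)
    return bad

zones = {
    "zone_a": {"10:00:00:00:c9:21:69:67", "50:06:04:82:bc:61:22:8d"},
    "zone_b": {"10:00:00:00:c9:25:24:70", "50:06:04:82:bc:61:22:8d"},
    # Two initiators in one zone -- breaks the single-HBA rule.
    "zone_c": {"10:00:00:00:c9:21:69:67", "10:00:00:00:c9:25:24:70"},
}
initiators = {"10:00:00:00:c9:21:69:67", "10:00:00:00:c9:25:24:70"}

print(violates_single_hba(zones, initiators))  # ['zone_c']
```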
Zoning Example
Figure 7-3 shows the basic ESN environment from Figure 7-2 on
page 7-9 separated into zones. Two zones have been created, one for
Server A and one for Server B, preventing both hosts from accessing
any of the devices outside of their respective zones. Each zone has a
single HBA that is allowed to access one port on the Symmetrix unit.
Zone A is unable to access any of the ports in Zone B, and vice versa.
Both zones are smart zones that were created using the WWN of the
HBA in each server, the WWN of the Symmetrix Fibre Channel
adapter port, and the SID of the switch.
[Figure 7-3: The basic ESN environment divided into zones. Servers A,
B, and C connect through two Connectrix switches to Symmetrix storage
volumes (A, B, C, and the VCM device); zone members are identified by
HBA WWN, Symmetrix port WWN, and switch SID. A Control Station manages
the switches over a private management LAN (out-of-band IP), with the
corporate LAN separated by a secure VPN or firewall, while servers use
in-band Fibre Channel paths.]
Volume Access Control Example
Figure 7-4 illustrates how volume access control provides another
layer of security to storage networks. EMC manages access control at
the storage level using ESN Manager. When Server A and Server B
log in to the storage network, the WWNs of their HBAs are passed to
the Symmetrix FA ports that are in their respective zones. The
Symmetrix unit records the connection, stores the WWNs in a Login
History table, and builds a filter. This filter, known as the volume
configuration management database (VCMDB), contains records that
specify which volumes may be accessed by an HBA through a
particular FA port (Figure 7-5 on page 7-18). Using the ESN Manager
GUI or CLI, administrators can configure and manage the VCMDB
from the Control Station.
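Conceptually, the VCMDB acts as a filter keyed by HBA WWN and FA port. The following sketch models that lookup with hypothetical records; it is not the actual VCMDB format or an ESN Manager API.

```python
# Hypothetical model of VCMDB filtering: a record maps an HBA WWN and
# an FA port to the set of volumes that HBA may access on that port.

VCMDB = {
    ("10:00:00:00:c9:25:24:70", "3b"): {"29", "2a", "2b"},
    ("50:06:04:82:bc:61:22:8d", "4a"): {"31", "32", "33"},
}

def may_access(wwn, fa_port, volume):
    """True only if the VCMDB grants this HBA the volume via this FA port."""
    return volume in VCMDB.get((wwn, fa_port), set())

print(may_access("10:00:00:00:c9:25:24:70", "3b", "2a"))  # True
print(may_access("10:00:00:00:c9:25:24:70", "4a", "2a"))  # False: wrong FA port
```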
[Figure 7-4: Volume access control example. The same environment as
Figure 7-3, with the VCMDB on the Symmetrix system filtering which
volumes each server's HBA may access through its FA port; the Control
Station manages the VCMDB over the private management LAN.]
VCMDB Access Feature
Since the VCMDB manages all access to volumes in the Symmetrix
unit, user access to the database must be controlled. Installing ESN
Manager on only one or two secure Control Stations helps restrict
user access to the VCMDB by limiting the accessibility of the
management software. However, if an unauthorized user was able to
obtain a copy of the ESN Manager software or the Volume Logix
Administrator CLI, the user could potentially gain access to the
VCMDB from another host on the storage network.
[Figure 7-5: Example VCMDB records. Each record maps an HBA WWN
(10:00:00:00:c9:21:69:67, 10:00:00:00:c9:25:24:70, or
50:06:04:82:bc:61:22:8d) and the FA port it uses (2a, 3b, or 4a) to
the volumes that HBA may access (for example, 29, 2a, 2b or
31, 32, 33).]
The VCMDB Access feature allows only authorized HBAs with valid
records in the database to access and manage the VCMDB. This is
particularly important in an xSP environment where there may be
multiple customers storing data on the same Symmetrix unit.
The VCMDB contains records specifying which devices may be
accessed by a particular HBA through a Symmetrix FA port. By
default, access to the VCMDB is granted to all HBAs that log in to an
FA port. As a result, any host with access to the FA port can alter the
configuration of the database if it has the ESN Manager management
software.
By enabling the VCMDB Access feature, access to the database can be
restricted to the host designated as the Control Station. Only HBAs
with valid records in the database (indicating they have permission to
access the VCMDB) will be given management privileges.
SID Lock Down Feature
The VCMDB Access feature checks the WWNs of HBAs to confirm
that the requesting host has management privileges. However, a
WWN can be spoofed to match the current WWN of another HBA,
thus gaining access to that HBA’s volumes (which may include the
VCMDB). The SID Lock Down feature shown in Figure 7-6 prevents
an unauthorized user from spoofing the WWN of an HBA.
[Figure 7-6: SID Lock Down. The same VCMDB records as Figure 7-5,
with each HBA's WWN now associated with the SID of the switch port it
is attached to, so that the Symmetrix checks the SID as well as the
WWN.]
When the SID Lock Down feature is enabled, the Source ID (SID) of
the switch port that the protected HBA is connected to is added to the
VCMDB record. Once an association between the HBA’s WWN, the
SID, and the FA port is created, the HBA is considered locked.
When a SID is locked, no user with a spoofed WWN can log in. If a
user with a spoofed WWN is already logged in, that user loses all
access through that HBA.
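In effect, SID Lock Down binds a WWN to the SID (and FA port) recorded when the HBA was locked, and rejects any login where the pair no longer matches. A hypothetical sketch of that check (not Symmetrix internals; the second SID value is an invented example):

```python
# Hypothetical sketch of SID Lock Down: once an HBA's WWN is locked to
# a SID and FA port, a login presenting the same WWN from a different
# switch port (a likely spoof) is refused.

locked = {
    # WWN -> (SID, FA port) captured when the HBA was locked
    "10:00:00:00:c9:21:69:67": ("021300", "2a"),
}

def login_allowed(wwn, sid, fa_port):
    if wwn not in locked:
        return True  # not locked: the ordinary WWN-based check applies
    return locked[wwn] == (sid, fa_port)

print(login_allowed("10:00:00:00:c9:21:69:67", "021300", "2a"))  # True
print(login_allowed("10:00:00:00:c9:21:69:67", "041000", "2a"))  # False
```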
Explanation of the SID Value
As mentioned previously, the SID value is a combination of the
unique domain identifier of a switch and the physical port number.
Figure 7-7 provides examples of the SID value.
[Figure 7-7: Examples of the SID value, showing how the switch
domain identifier and the physical port number combine to form a SID
such as 021300.]
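A SID is commonly written as a six-digit hexadecimal value. The split below assumes the usual Fibre Channel 24-bit address layout, where the high byte carries the switch domain and the next byte the physical port; field widths may differ by switch vendor, so treat this as a sketch rather than a definitive decoder.

```python
def split_sid(sid_hex):
    """Split a six-hex-digit SID into its domain and port components.

    Assumed layout (common Fibre Channel addressing): high byte = switch
    domain, middle byte = physical port, low byte typically 00 for
    fabric-attached ports. Vendors may vary; this is a sketch.
    """
    value = int(sid_hex, 16)
    domain = (value >> 16) & 0xFF
    port = (value >> 8) & 0xFF
    return domain, port

print(split_sid("021300"))  # (2, 19): domain 0x02, port 0x13
```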
Obtaining the SID Value
To find the SID value, run the fpath lshosts command, which
displays the contents of the Login History table. The table lists all the
hosts and HBAs that are logged in to Symmetrix FA ports. If the
VCMDB Access feature is enabled, this command must be run from
the ESN Manager Control Station.
If administrators no longer have a path from the locked HBA, they
cannot use ESN Manager to find the SID value. Instead, the SID value
must be obtained from the switch:
McDATA switches — Through the hardware view of EFC Manager
(for example, Connectrix Manager), select a board and then one of the
switch ports. After selecting a port, right-click to display the port
properties window. The SID value is the Fibre Channel ID.
Brocade switches — Telnet to the switch and run nsShow. Look for
the PID value (address ID of the port in hexadecimal) of the WWN of
the locked HBA. The PID value is the SID value.
Enabling both VCMDB Access and SID Lock Down allows access
control to be administered at the hardware and software level. While
enabling both features provides the highest level of security, it may
also increase the number of steps required when making
configuration changes to the environment. For example, if a fiber
cable is moved from one port on a switch to another, the SID value
associated with the HBA changes. If SID Lock Down is enabled, the
administrator must manually reset the SID value. If the feature is
disabled, this extra step is not required.
Security requirements are different for every customer. For some
environments, completely restricting access to management functions
is unnecessary. However, some customers, such as a service provider,
may need the highest level of access control available, even if it
means decreasing the flexibility of the storage network. IT
administrators should determine the appropriate security levels for
their operational needs before activating features such as VCMDB
Access and SID Lock Down.
Symmetrix Access Controls
Symmetrix Access Controls limit configuration and management of
Symmetrix resources in an ESN environment. They allow a storage
administrator to restrict management control to specific device pools
for various systems. When enabled, they limit functions such as
SRDF, TimeFinder, SDR, and Optimizer. Independent of the physical
array, they can be established for an entire Symmetrix unit or a subset
of Symmetrix devices.
Creating Symmetrix Access Controls is a three-step process. The
configuration operations are available through the Symmetrix
command line interface (SYMCLI) and included as part of Symmetrix
Manager, which is a component of ECC. All security definitions are
stored internal to the Symmetrix system, so enforcement is
independent of any single host.
Creating Device Pools — The first step to creating Symmetrix Access
Controls is to define device pools. Device pools represent the lowest
level of granularity for which management security is established.
Device pools contain one or more Symmetrix volumes, located in a
single Symmetrix array. These pools can be defined by specifying
each of the Symmetrix logical volumes or an existing SYMCLI device
group.
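The pool granularity described above can be modeled as a simple membership test: before a management operation runs, the target volumes must all belong to a device pool granted to the requesting host. A hypothetical sketch (the pool names, host names, and volume numbers are invented, and this is not SYMCLI syntax):

```python
# Hypothetical model of Symmetrix Access Controls: a host may manage a
# volume only if the volume belongs to a device pool granted to it.

pools = {
    "pool_payroll": {"0010", "0011", "0012"},
    "pool_web":     {"0020", "0021"},
}
grants = {
    "host_a": {"pool_payroll"},
    "host_b": {"pool_web"},
}

def can_manage(host, volumes):
    """True only if every requested volume is in a pool granted to host."""
    allowed = set().union(*(pools[p] for p in grants.get(host, set())))
    return set(volumes) <= allowed

print(can_manage("host_a", ["0010", "0012"]))  # True
print(can_manage("host_b", ["0010"]))          # False
```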
Conclusion
Access control strategies are essential to creating secure ESN
environments. Not only do IT administrators have to manage access
to ESN hardware and software components, but they must also
control the management privileges of individual user groups.
Protecting the environment from internal misuse is as important as
protecting it from external hackers. The best practices described in
this chapter represent the principal methods for building and
managing a secure storage network. When applied, these methods
provide the discovery, access control, and path management
capabilities necessary to effectively perform management functions.
EMC provides a single storage management framework for
efficiently managing and sharing storage resources. The EMC
ControlCenter suite of products allows administrators to implement
access control strategies in an organized, efficient manner. ECC
eliminates the need for individual management tools from different
vendors by providing an end-to-end management application for
storage networks. From a central location, administrators can plan,
configure, control, tune, and monitor storage resources in the ESN
environment. Reducing complex management tasks allows
administrators to focus IT resources on implementing access control
strategies and creating a secure storage network.
Customer Support
This appendix reviews the EMC process for detecting and resolving
software problems, and provides essential questions that you should
answer before contacting the EMC Customer Support Center.
This appendix covers the following topics:
◆ Overview of Detecting and Resolving Problems .........................A-2
◆ Troubleshooting the Problem ..........................................................A-3
◆ Before Calling the Customer Support Center ...............................A-4
◆ Documenting the Problem...............................................................A-5
◆ Reporting a New Problem ...............................................................A-6
◆ Sending Problem Documentation...................................................A-7
[Flowchart: Problem detection; refer to the Technical Support
appendix in this manual; collect problem information as directed; the
problem is tracked and managed to resolution.]
Please do not request a specific support representative unless one has already
been assigned to your particular system problem.
A
active alert An alert for which one or more trigger values have been met. Active
alerts display in the Active Alerts view in the Console. Respond to an
active alert by right-clicking it and selecting from a list of available
commands. An active alert displays until you remove or reset it or the
conditions that caused the alert to trigger are alleviated.
Administration folder A folder in the tree panel that contains a hierarchical tree of ECC
administration objects, organized by task: alert management, data
collection policies, security management, and so on. Drilling down
through the hierarchy displays increasing levels of detail, such as
specific definitions and templates.
alert definition An alert for which keys, trigger values, and a schedule have been
defined, and optionally autofixes and a management policy. You
create alert definitions using the alert templates or by copying
existing alerts. See also management policy.
alert type - count A count alert monitors values that can be calculated, such as the
percentage of free space in a file system or the size of a file. Count
alerts have numeric values for the triggers.
alert type - state A state alert evaluates whether a condition is true or false, such as
whether a subsystem or database is active or whether an important
file was backed up. State alerts have TRUE or FALSE as the trigger
value.
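The two alert types differ only in how the trigger is evaluated: a count alert compares a calculated numeric value against a numeric trigger, while a state alert matches a true/false condition against a TRUE or FALSE trigger. A minimal sketch (the thresholds and conditions are made-up examples, not actual ECC alert definitions):

```python
# Sketch of the two ControlCenter alert types; values are illustrative.

def count_alert_fires(value, trigger):
    """Count alert: a calculated numeric value compared to a numeric trigger."""
    return value > trigger

def state_alert_fires(condition, trigger):
    """State alert: a true/false condition matched against a TRUE/FALSE trigger."""
    return condition == trigger

print(count_alert_fires(92, 90))        # True, e.g. >90% space utilization
print(state_alert_fires(False, False))  # True, e.g. database is not active
```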
B
back-end configuration The configuration of disk directors and
physical disk devices in a Symmetrix system. Back-end components
are responsible for staging data from the physical disk devices to
cache and the subsequent destaging of data from cache back to the
physical devices.
Backup Agent for TSM (Tivoli Storage Manager) A ControlCenter
agent that allows you to monitor your TSM backup systems, looking
for events that you define, such as failed backups and increasing
space utilization. TSM is a storage management system used to
manage business information in an enterprise-wide Storage Area
Network (SAN) and traditional network environment.
C
Celerra File Server An EMC high-end network file server composed
of two cabinets: one cabinet contains the Symmetrix system, and the
other cabinet contains the Celerra File Server components.
Command History A function within the ControlCenter Console monitoring group that
provides a tabular view of the ControlCenter active commands
associated with managed objects. It shows who executed which
command on which object when.
Common Array Manager The ControlCenter product that
encompasses support for storage systems from other manufacturers
such as Compaq StorageWorks, HDS/HP, RVA/SVA, and IBM ESS.
configured capacity The amount of storage capacity configured into Symmetrix devices. It
is the usable capacity (the amount of storage a host can use to build a
file system or database). See also unconfigured capacity.
connected In regard to fabrics, a state where all the units comprising the fabric
have physically intact links between them to conduct I/O
transactions from any unit to any other unit in the fabric. A fabric
could have some physical links down and still be connected if there
are sufficient physical links to allow I/O to and from all of the units.
Connectivity Agent for SDM A ControlCenter agent that monitors
the VCM database on each Symmetrix system for storage access
control configuration changes. The SDM (storage device masking)
Agent discovers volume access control information for each
Symmetrix system in a storage network and updates the Repository
with configuration changes.
Connectivity Agent for SNMP A ControlCenter agent that manages
information gathered by generic SNMP agents in the storage
network, updates the Console with current connectivity settings,
executes data collection policies, and generates alerts. It is generally
installed on the same host as the ECC Server.
Connectivity Agent for Switches A ControlCenter agent that
monitors switch status through vendor-supplied software. The
Connectivity Agent for Switches discovers topology and fabric
information for switches, runs data collection policies to monitor
topology and fabric status, and updates the Repository with switch
connection data. Typically, a single instance of the Switch Agent is
installed.
connectivity device Any device that indirectly connects hosts with storage arrays. They
may also be devices that allow connections to other connectivity
devices, but with the ultimate purpose of connecting hosts to storage
arrays. Examples include: switches, hubs, bridges, and patch panels.
Connectivity folder A folder in the tree panel that contains a hierarchical tree of
connectivity objects, organized by connectivity device, links, or
unknown ports. Drilling down through the hierarchy displays
increasing levels of detail, such as specific switches and ports.
D
database agents A collective term for the ControlCenter agents that monitor or
manage host databases. Database agents are provided for Oracle and
MVS DB2. The agent typically gathers configuration, status, and
performance data from the database and may support control actions.
Database Tuner (Symmetrix) An EMC application that analyzes
database and storage objects (for example, an Oracle database and
Symmetrix system) from a single location. It monitors and tunes the
database object for improved database performance, optimizes it for
the storage object, and identifies storage devices that are causing
bottlenecks for database access.
data collection policy A formal set of statements used to manage the data collected by
ControlCenter agents. A policy specifies the data to collect and the
collection frequency. Most agents have associated predefined
collection policies and collection policy templates that can be
configured through the ECC Administration task.
data collection policy template A template that provides default
values for the creation of new collection policies. ControlCenter
provides one or more policy templates for each agent. You can
configure your own policies by modifying the collection policy
templates.
device group A group of devices that can be managed using a single device group
name. ControlCenter allows you to view, create, and modify SYMCLI
device groups and to perform operations on the device group. Device
groups (unlike ControlCenter groups) are associated with a particular
host and Symmetrix system. For example, you might set up a group of
all devices used by a particular host. Another group might be all
devices used in a particular database. See also group.
discoverable object A connectivity device in the storage network that can be identified by
an agent. The following attributes of the object must be identified: IP
address, world wide name (WWN), ports, neighboring switches,
type, management information base (MIB), Fibre Channel adapter
(FA) port, director, and serial number.
E
ECC Server The primary interface between the Console(s), Repository, and
agents. It also provides many of the common services to the
ControlCenter infrastructure.
EMC ControlCenter A family of products that enables you to discover, monitor, automate,
provision, and report on host storage resources, networks, and
storage across your entire information environment from a single
Console.
Enterprise System Connection (ESCON) A set of IBM and vendor
products that interconnect S/390 computers with each other, with
attached storage, and with other devices using optical fiber
technology and dynamically modifiable ESCON directors.
F
Framework Integration package (ControlCenter) ControlCenter
software that consists of a component that integrates ControlCenter
into various third-party framework applications such as HP
OpenView or CA Unicenter, and an agent (the Integration Gateway)
that uses SNMP to monitor events and interface to the third-party
application.
front-end configuration The configuration of host channels, SCSI,
ESCON, and Fibre Channel directors, ports, and Symmetrix logical
devices. Front-end components are responsible for handling I/O
requests from hosts and serving the data from cache.
functional device identifier (FDID) A unique ID that the RVA or
SVA uses for an MVS device.
G
gatekeeper A Symmetrix device accessible by the host through which the
ControlCenter agent communicates with the Symmetrix. The
gatekeeper routes low-level SYMAPI commands to the Symmetrix.
H
host agents A group of ControlCenter agents that monitor or manage the host
environment. Host agents are provided for AIX, HP-UX, Novell,
Solaris, Windows, and various MVS subsystems. The agent typically
gathers configuration, status, and performance data from the host on
which it is running and may support control actions.
host bus adapter (HBA) An I/O adapter that sits between the host
computer’s bus and the Fibre Channel loop, and manages the transfer
of information between the two channels.
Hosts folder A folder in the tree panel that contains a hierarchical tree of host
objects, organized by operating system: Solaris, Windows, MVS, and
so on. Drilling down through the hierarchy displays increasing levels
of detail, such as specific hosts, databases, and tablespaces.
hot spare device A powered up physical disk drive that a Symmetrix system can use in
situations such as the failure of a standard (STD), R1, or R2 device.
hyper device A device created by splitting a physical disk into two
or more devices. The host views hyper volumes as individual
physical devices. Also called Symmetrix device (logical device).
I
Integration Gateway A ControlCenter agent that provides an interface from the ECC
Server to management framework applications such as HP
OpenView Network Node Manager, Tivoli NetView, or the CA
Unicenter TNG Framework, enabling those applications to display
ControlCenter information. See also Framework Integration package.
L
login history table (LHT) A table residing on Symmetrix systems
that contains the current and historical login information of host
HBAs logging into each FA in a Symmetrix system. The information
in the table can be used to track changes in a configuration.
log file A file (one for each ControlCenter component) that contains output
messages from component execution. Log files contain messages
with different levels of output that indicate message severity. When a
log file reaches a maximum size, a new log is created. The number of
log files per component is configurable; the default is five. Once the
maximum number of logs are created, the first log is reused.
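The rotation scheme in this definition (a fixed number of logs, with the first reused once the maximum is reached) can be sketched as follows; the count of five is the stated default, and the function name is illustrative:

```python
# Illustrative log rotation: when a log reaches its maximum size, the
# next file is used; after the last file, the first log is reused.

MAX_LOGS = 5  # configurable per component; five is the stated default

def next_log_index(current, max_logs=MAX_LOGS):
    """Index of the log file to write once the current one is full."""
    return (current + 1) % max_logs

print(next_log_index(0))  # 1
print(next_log_index(4))  # 0: maximum reached, the first log is reused
```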
logical unit number (LUN) An addressing model for devices in
which each separately addressable logical unit in a storage system
has a unique LUN ID, which is a hexadecimal number. The default ID
for the first new LUN is the smallest available one; for the next new
LUN, it is the next smallest available, and so on.
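The default-ID rule in this definition (each new LUN takes the smallest ID not already in use) can be sketched as:

```python
def next_lun_id(in_use):
    """Smallest non-negative LUN ID not already assigned."""
    candidate = 0
    while candidate in in_use:
        candidate += 1
    return candidate

# With IDs 0, 1, and 3 assigned, the next new LUN defaults to 2.
print(next_lun_id({0, 1, 3}))  # 2
print(next_lun_id(set()))      # 0: the first new LUN gets the smallest ID
```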
M
managed object Hosts, databases, file systems, storage systems, switches, and other
connectivity devices in the storage network that can be managed by
ControlCenter.
management policy A formal set of statements that defines the users (scripts, SNMP trap,
and so on) that ControlCenter should notify when an alert triggers,
and how those users should be notified. Notification options include:
a message through the Console, an e-mail, and a message to a
management framework such as Hewlett Packard’s OpenView.
mapped capacity Devices that are mapped to front-end ports on a Symmetrix system.
Host systems cannot access devices unless they are mapped to
front-end ports. See also unmapped capacity.
Master Agent A ControlCenter agent that manages the installation, starting, and
stopping of other agents on the host. Required on every host running
an agent (except for the Connectivity Agent for SNMP).
media repository An area on the ECC Server to which ControlCenter components are
downloaded before they are installed. Installation and licensing
information is captured and shared through the media repository
tables.
meta device Meta devices are Symmetrix devices concatenated together to form a
larger device. The Symmetrix devices forming the meta device are all
accessed through the same target/LUN value. The SDR component
reports the Symmetrix meta device number as the device number of
the first device in the group, which is also known as the meta head.
The remaining members of the group are known as meta members.
MVS (OS/390) An operating system from IBM that is installed on most of its
mainframe and large server computers. Payroll, accounts receivable,
transaction processing, database management, and other programs
critical to large businesses typically run on an MVS system.
N
Navisphere An EMC application that manages storage for CLARiiON storage
systems. It configures, monitors, and tunes CLARiiON disk arrays,
provides the Console with the status of CLARiiON systems, and
indicates when an alert occurs by changing the color of the
CLARiiON icon.
net capacity load (NCL) In an RVA/SVA storage array, a statistic
measuring the percentage of the actual total physical capacity of the
subsystem, or one of its partitions (test or production), that is in use.
A value of 70 to 80 is considered normal for NCL.
O
OnAlert An EMC application that provides remote support functionality and,
optionally, dial-home capability to Symmetrix systems.
Open Integration Components The ControlCenter product
encompassing the set of common services available to all EMC
ControlCenter/Open Edition products, including centralized install,
agents for all managed entities, login and access control
administration, a Repository, and so on.
P
Performance view A view displaying Symmetrix performance statistics about various
objects available within ControlCenter. For each object, ongoing
real-time data can be displayed in chart form, or point-in-time data
can be displayed in table form.
port flags Settings assigned to front-end ports that tell a Symmetrix system how
to communicate with different host types and how to behave in
certain situations.
proxy agent A Symmetrix agent that provides indirect access to a host running the
SYMAPI Server and directly connected to a Symmetrix system. The
SYMAPI Server host receives requests for data or commands from the
Symmetrix Agent, acts upon the data request or command, and
replies to the Symmetrix Agent. A proxy host is typically used in a
situation where the SYMAPI Server host is running on a platform not
supported for the Symmetrix Agent, such as MVS.
Q
quality of service (QoS) A function within the ControlCenter
Console that reduces the resources allocated for Business
Continuance Volumes (BCVs) or SRDF copy operations on selected
devices. QoS allows you to control the balance between standard and
BCV/SRDF operations.
R
RA group A logical grouping of source (R1) or target (R2) devices associated
with a remote link director. Up to 16 RA groups may exist in an SRDF
configuration.
Reports folder A folder in the tree panel that contains a hierarchical tree of
ControlCenter and user-defined reports, organized by type. Drilling
down through the hierarchy displays specific reports.
Repository A central, relational database that contains the aggregation of all the
data about your installation’s managed environment.
S
schedule A set of instructions that defines when ControlCenter events should
occur, such as the evaluation of an alert or the collection of statistics.
In a schedule, you can define the interval at which an event occurs
(every 10 seconds, minutes, hours, and so on), the days of the week,
and the days of the year. ControlCenter provides several predefined
schedules, and you can define additional ones.
Solutions Enabler An EMC product included with ControlCenter that can manage and
retrieve configuration, status, and performance information from
Symmetrix systems. The Solutions Enabler components include
SYMCLI, SYMAPI, and SYMAPI Server.
spares partition In the RVA or SVA subsystem, the classification assigned to drives
that have passed media acceptance testing but are currently unused
for data storage. Such drives are called spares.
Status Acknowledged folder A folder in the tree panel in which you
can store managed objects that have a warning or error status. If you
drag an object with a warning or error status (yellow caution or red
X) into this folder, the object’s status will not propagate up the tree
and cause a parent icon to show the warning or error icon when that
parent is collapsed. The purpose is to acknowledge that you saw the
status condition without being reminded of it through the parent
object.
storage allocation The process of finding and configuring suitable storage space for use
by a host.
storage area network (SAN) A special-purpose network (or
subnetwork) that interconnects different kinds of data storage
devices with associated data servers on behalf of a larger network of
users. Typically, a SAN is part of the overall network of computing
resources for an enterprise. A SAN is usually clustered in close
proximity to other computing resources, but may also extend to
remote locations for backup and archival storage, using wide area
network carrier technologies such as asynchronous transfer mode or
synchronous optical networks.
storage device masking (SDM) An access control mechanism for
Symmetrix systems that regulates which host bus adapters in a Fibre
Channel environment can access specific Symmetrix volumes.
Synonymous with LUN masking.
Storage folder A folder in the tree panel that contains a hierarchical tree of storage
system objects, organized by type: Celerra, CLARiiON, HDS,
StorageWorks, Symmetrix, and so on. Drilling down through the
hierarchy displays increasing levels of detail, such as specific storage
systems, host directors, and individual storage devices.
Symmetrix An integrated cache disk array (ICDA) storage array that provides
centralized, sharable enterprise storage. It helps create an information
infrastructure capable of managing large, complex ultra-dynamic
storage area network (SAN) environments by consolidating storage
from multiple heterogeneous hosts onto a single system.
Symmetrix Data Mobility Manager (SDMM) An EMC application
that allows you to configure, monitor, and manage the replication of
data between Symmetrix devices.
Symmetrix Device Reallocation (SDR) A function within the
ControlCenter Console storage allocation task that allows you to map
Symmetrix devices to the front-end director ports of the Symmetrix
system. You must map a device to one or more front-end ports to
make it available to a host.
Symmetrix Manager A set of functions accessed from within the ControlCenter Console
that allows the user to monitor the status, performance, and
configuration of Symmetrix systems and to perform active
Symmetrix commands such as TimeFinder, SRDF and configuration
changes. The Symmetrix Manager also allows you to modify the
configuration of a Symmetrix system. The controllable areas include
logical device allocation, device type definition, meta device
configuration, SDR, and port flag settings.
T
tape agent A ControlCenter agent for MVS that manages tape systems. It has
functions for a Virtual Tape Server (VTS) tape system, a CA-1 tape
software package by CA, a StorageTek tape silo environment, and
RMM (Removable Media Manager—a software product by IBM).
target panel The right panel in the Console, displaying one or more views of task
data for the managed object(s) currently selected in the tree panel.
taskbar A blue bar, located by default below the menu bar, providing access
to five task buttons: Storage Allocation, Monitoring, Performance
Mgt, ECC Administration, and Data Protection. Clicking a task
button opens a drop-down menu offering a selection of views.
template A set of default values for the creation of new alerts. ControlCenter
provides templates for every alert. You can specify your own default
settings by modifying the alert templates.
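The idea of template defaults that user settings can override may be sketched as follows; the field names ("severity", "threshold", "enabled") are illustrative and do not reflect ControlCenter's actual template schema.

```python
# Sketch: applying an alert template's default values when creating a new
# alert. All field names here are hypothetical, for illustration only.
template = {"severity": "warning", "threshold": 90, "enabled": True}
user_settings = {"threshold": 95}          # a user-modified default

# Later keyword entries win, so user settings override template defaults.
alert = {**template, **user_settings}
print(alert)  # {'severity': 'warning', 'threshold': 95, 'enabled': True}
```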
time window In the context of Optimizer, a period in time during which an aspect
of Optimizer’s behavior is controlled. Performance time windows
allow you to specify which samples (past or future) Symmetrix
Optimizer should consider when running its swap generation
algorithm. Swap time windows allow you to specify when
Symmetrix Optimizer should or should not perform swap activity.
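The basic mechanics of a time window can be illustrated with a small check of whether a sample's timestamp falls inside a daily window; the 09:00 to 17:00 window below is an arbitrary example, not an Optimizer default.

```python
from datetime import datetime, time

def in_time_window(sample: datetime, start: time, end: time) -> bool:
    # True when the sample's time of day falls inside the daily window.
    return start <= sample.time() < end

# A performance time window covering business hours, 09:00 to 17:00:
print(in_time_window(datetime(2001, 12, 3, 10, 30), time(9), time(17)))  # True
print(in_time_window(datetime(2001, 12, 3, 18, 0), time(9), time(17)))   # False
```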
topology editing A ControlCenter feature that allows the topology map to
display objects that the Console cannot discover automatically; for
example, elements that are entirely hardware entities with no
software-based management interface. Topology editing lets you place
such an object in the map, supply its basic properties, and manually
define its connectivity and its relationships to other entities in the
topology map.
tree panel The left panel in the Console, displaying all the objects in the storage
environment, organized by type.
trigger The logical operator that evaluates an alert condition (for example,
>90% storage utilization).
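A trigger is simply a comparison of a metric against a threshold; a minimal sketch of evaluating one follows. The operator strings and function name are illustrative, not ControlCenter syntax.

```python
# Sketch of a trigger: a logical operator applied to a metric value and a
# threshold. The operator set shown here is hypothetical.
def trigger_fires(metric_value: float, operator: str, threshold: float) -> bool:
    ops = {
        ">":  metric_value > threshold,
        ">=": metric_value >= threshold,
        "<":  metric_value < threshold,
        "<=": metric_value <= threshold,
    }
    return ops[operator]

# Storage utilization of 92% against a ">90%" alert condition:
print(trigger_fires(92.0, ">", 90.0))  # True
```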
U
unallocated capacity The amount of storage formatted into Symmetrix devices but not yet
allocated to a host. Unallocated capacity includes both mapped and
unmapped devices but does not include unconfigured capacity. See
also allocated capacity.
uncollected free space The space for deleted data that the RVA or SVA has
been informed of (by Deleted Data Space Release (DDSR) processes)
but has not yet freed.
unmapped capacity Devices or capacity that have been configured, but not mapped to
front-end ports on a Symmetrix system. Host systems cannot access
volumes unless they are mapped to front-end ports. See also mapped
capacity.
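The relationships among these capacity categories can be shown with a small accounting example; every figure below is invented for illustration.

```python
# Hypothetical capacity accounting, in GB. All figures are made up.
configured   = 1000   # capacity formatted into Symmetrix devices
unconfigured = 200    # raw capacity not yet formatted into devices
allocated    = 600    # configured capacity assigned to hosts

# Unallocated capacity spans both mapped and unmapped devices, but never
# includes unconfigured capacity.
unallocated = configured - allocated
print(unallocated)                 # 400

# Total capacity of the system:
print(configured + unconfigured)   # 1200
```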
V
VCM database A database, residing on a Symmetrix system, that contains host access
information for Symmetrix volumes. Each Symmetrix system has its
own VCM database.
W
WLA Archiver A ControlCenter agent that retrieves and archives collections of data
from individual agents, and organizes (rolls up) collected data into
summaries for the reports. The summarized data is saved to a data
archive, separate from the Repository.
world wide name (WWN) A unique 48- or 64-bit number assigned by a
recognized naming authority (often via block assignment to a
manufacturer) that identifies a connection or set of connections to a
network. A WWN is assigned for the life of the connection or device.
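WWNs are conventionally written as colon-separated hexadecimal byte pairs. The small parser and formatter below is a sketch of that 48-/64-bit structure, not tied to any EMC tool, and the example WWN value is made up.

```python
def parse_wwn(text: str) -> int:
    # Strip common separators and interpret the hex digits as an integer.
    digits = text.replace(":", "").replace("-", "")
    if len(digits) not in (12, 16):          # 48-bit or 64-bit WWN
        raise ValueError("expected 12 or 16 hex digits")
    return int(digits, 16)

def format_wwn(value: int, bits: int = 64) -> str:
    # Render as colon-separated byte pairs, e.g. "50:06:04:8a:...".
    hexstr = f"{value:0{bits // 4}x}"
    return ":".join(hexstr[i:i + 2] for i in range(0, len(hexstr), 2))

wwn = parse_wwn("50:06:04:8a:cc:c8:6a:32")   # example value, invented
print(format_wwn(wwn))                        # 50:06:04:8a:cc:c8:6a:32
```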
Symbols
    3-5

Numerics
    3390 DASD track format 1-12
    4-port, 2-processor serial channel director 1-16
    4-port, 4-processor serial channel director 1-16

A
    Access control
        assigning actions 7-23
        creating access groups 7-23
        creating device pools 7-22
        of volumes 7-16
        within ESN 7-4
    Access groups
        for access control 7-23
    Active LED
        channel director operator panel 1-8
    Adaptive copy
        disk mode 5-14
        disk mode attribute 5-14
        write pending attribute 5-14
    Adaptive copying mode 5-30
        copy disk mode 5-31
        write-pending mode 5-30
    Administrator hosts
        within ESN 7-7
    Asset discovery
        within ESN 7-4
    Assigning actions
        for access control 7-23
    Attributes
        adaptive copy disk mode 5-14
        adaptive copy write pending 5-14
        domino effect 5-15
        SEMI-SYNC 5-13
        SYNC 5-13
        system-level 5-32
    Availability features 4-2
        cache error correction and verification 4-4
        disk error correction and verification 4-3
    Availability guidelines
        dynamic sparing 4-6
        mirroring 4-5
        SRDF option 4-6

B
    Backplane 1-5
    Backups without a remote host 5-44
    Battery backup 1-6
    BCV
        as a primary (source) device 5-43
        concurrent operations 5-41
        mirror 6-3
        pairs 6-3
        performing remote backups 5-44
        SRDF multi-hop 5-45
    Bidirectional link protocol 5-26
    Block capacity 1-11
    Block format sizes 1-10
    Brocade switches 7-20
    Business continuance operations 5-41 to 6-18
        differential split 6-11
        establish 6-6

H
    HBA replacement
        benefit of hard zoning 7-11
    Health monitoring
        within ESN 7-4
    Help xvii
    Home Address (HA)
        track format 1-13
    Host accessibility
        source volumes 5-12
        target volumes 5-12
    Host channels 1-14
    Hyper-volumes
        open systems 3-6
        per physical device 3-6

I
    I/O operations
        delayed fast write 2-13
        fast write 2-12
        overview 2-6 to 2-9
        read hit 2-11
        read miss 2-11
    I/O response time
        connect time 2-7
        connect time (mainframe) 2-6
        disconnect time 2-6, 2-7
        pend time (mainframe) 2-6
        queuing time 2-7
        queuing time (mainframe) 2-6
    ICDA operation 2-2 to 2-5
    Incremental restore 6-17
    Index marker
        track format 1-13
    Integrated cached disk array (ICDA) 2-2
    Invalid tracks
        threshold 5-14
        warning 5-32
    ISL congestion
        correcting 7-12

J
    Journal 0 mode 5-28
    Journal 1 mode 5-28

L
    LAN/WAN distance limitations 5-5
    Laptop PC 1-6
    Link domino mode 5-31
    Link protocols
        bidirectional 5-26
        dual-directional 5-26
        unidirectional 5-26
    Links
        possible states 5-27
    Local mirroring 5-2
    Local volumes 5-7
    Logical volume attributes 5-13 to 5-15
        adaptive copy 5-14
        domino effect 5-15
        source volumes 5-13 to 5-15
    Logical volume states 5-9 to 5-12
        host accessibility 5-12
        host view 5-11
        source volume 5-10, 5-11
        SRDF view 5-10
        target volume 5-10, 5-11
    Logical volumes
        capacity calculation 1-11
    Logical-to-physical volume relationships 3-6

M
    Main components 1-3 to 1-7
        backplane 1-5
        battery subsystem 1-6
        card cage 1-5
        cooling modules 1-5
        disk devices 1-6
        Ethernet Hub 1-6
        power subsystem 1-6
        service processor 1-6
    Manage systems
        within ESN 7-7
    Management controls
        within ESN 7-22
    McData switches 7-20
    Media Types
        ESCON 5-4
        SRDF campus 5-4
        SRDF extended distance 5-4
        SRDF over IP 5-5