VNX Series
Release 7.1
Using SRDF/A with VNX

Requirements
Software: VNX and Symmetrix Enginuity versions that support the SRDF/A active/passive configuration.
Hardware: Similar source and destination VNX models.
Network: IP data network for communication between the Control Stations of the source and destination VNX for File systems.
Storage: Two attached VNX/Symmetrix DMX system pairs.
Restrictions
* SRDF/A requires a Symmetrix SRDF/A license.
* SRDF/A does not work with Symmetrix 3xxx, 5xxx, or 8xxx versions. Only Symmetrix DMX systems (5670 or later microcode) are supported.
Note: The Solutions Enabler Symmetrix SRDF family documentation, which includes the EMC Symmetrix Remote Data Facility (SRDF) Product Guide, provides additional restrictions that apply to the SRDF/A configuration with Symmetrix DMX. This documentation is available at http://Support.EMC.com, the EMC Online Support website.
* The Symmetrix Enginuity version 5670 microcode supports only one SRDF/A group, which can be dedicated to either an open host or a VNX, but not a mix of both. The 5671 release eases this restriction.
* SRDF/A source and destination sites support only one VNX/Symmetrix system pair.
* SRDF/A does not support partial failovers. When a failover occurs, all file systems associated with SRDF-protected Data Movers fail over. To avoid failover issues, it is critical that SRDF-protected Data Movers mount only file systems consisting of SRDF volumes mirrored at the destination. Local standard (STD) and business continuance (BCV) volumes should have dedicated, locally protected Data Movers, and SRDF volumes should have dedicated, SRDF-protected Data Movers. Chapter 4 provides information on potential failover and restore issues.
* An IP alias cannot be set up or used on the Control Stations when SRDF is configured for disaster recovery purposes.
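The mount restriction above can be spot-checked from a Data Mover's mount list. The sketch below is illustrative only, not an EMC tool: it filters server_mount-style output (reproduced here in a here-document, since the real command needs a live Control Station) down to the non-root file systems, which can then be compared by hand against the set of SRDF-mirrored volumes.

```shell
# Illustrative sketch: extract non-root file systems from server_mount-style
# output. The sample lines are copied from a transcript later in this
# document; on a live system you would pipe `server_mount server_2` instead.
sample=$(cat <<'EOF'
server_2 :
root_fs_2 on / uxfs,perm,rw
root_fs_common on /.etc_common uxfs,perm,ro
fs2 on /fs2 uxfs,perm,rw
fs1a on /fs1a uxfs,perm,rw
EOF
)
# Keep lines of the form "<fs> on <path> <options>", dropping root file systems.
echo "$sample" | awk '$2 == "on" && $1 !~ /^root_fs/ { print $1 }'
```

Any file system this prints that is not backed by SRDF volumes mirrored at the destination would violate the restriction above.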
Management restrictions
* SRDF/A cannot be managed with the EMC Unisphere or EMC ControlCenter software.
Local production Data Mover paired with SRDF/A remote standby Data Mover
Local standby Data Mover paired with SRDF/A remote standby Data Mover
Ensure that the Data Movers on the source and destination VNX systems are of a
similar model type, that is, the local SRDF/A Data Mover and its corresponding remote
standby Data Mover should be the same model or a supported superset. In addition,
the source and destination cabinets must be a similar model type. For example, they
should both be NS series gateway cabinets or CNS-14 cabinets, but not a mix of NS
series gateway and CNS-14 cabinets.
Ensure that the network interfaces for the SRDF/A primary and SRDF/A standby Data Movers are identical and that the same set of network clients can access both the SRDF/A primary and the SRDF/A standby Data Movers.
Ensure that each local standby Data Mover that provides local standby coverage for an SRDF-protected Data Mover has a corresponding remote SRDF/A standby. This is a requirement for active/passive configurations.
/nas/log/dr_log.al
/nas/log/dr_log.al.rll
/nas/log/dr_log.al.err
/nas/log/dr_log.al.trace*
/nas/log/symapi.log*
These log files can also be viewed individually.
* To monitor these logs while the nas_rdf command is running, check the files in the /tmp directory. After the command completes, the logs appear in the /nas/log directory.
To gather more data after a failure, such as a failed restore, access the following sources:
* Disaster recovery (dr*) files: provide state changes, as well as other key informational messages.
* The symapi.log file: logs storage-related errors.
Resolve initialization failures
This section provides a sample failed initialization scenario in which two destination Data Movers (server_2 and server_3), intended to serve as SRDF/A standbys for two source production Data Movers, already have a local standby Data Mover (server_5). This results in an invalid configuration because Data Movers serving as SRDF/A standbys cannot have a local standby Data Mover.
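This invalid configuration can be caught before running nas_rdf -init by checking whether the intended standby Data Movers already have a local standby. The sketch below is only an illustration: it parses nas_server -info -all style output of the shape shown later in this section, with a here-document standing in for the live command.

```shell
# Illustrative sketch: flag Data Movers that already have a local standby.
# The here-document mimics nas_server -info -all output from this section;
# on a Control Station you would pipe the real command instead.
info=$(cat <<'EOF'
name      = server_2
standby   = server_5, policy=auto
name      = server_3
standby   =
EOF
)
echo "$info" | awk '
    $1 == "name" { mover = $3 }
    $1 == "standby" && NF > 2 { sub(/,$/, "", $3)
                                print mover " already has local standby " $3 }'
```

Any Data Mover this reports would have to have its local standby relationship deleted (as in the resolution steps below in this section) before it can serve as an SRDF/A standby.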
This section includes:
* Example 1 for initialization failure on page 79
* Resolution for initialization failure example 1 on page 81
* Example 2 for initialization failure on page 82
Example 1 for initialization failure
[root@cs110_dst nasadmin]# /nas/sbin/nas_rdf -init
Discover local storage devices ...
Discovering storage (may take several minutes)
done
Start R2 dos client ...
done
Start R2 nas client ...
done
Contact cs100_src ... is alive
Please create a new login account to manage RDF site cs100_src
New login: rdfadmin
New password:
BAD PASSWORD: it is based on a dictionary word
Retype new password:
Changing password for user rdfadmin
passwd: all authentication tokens updated successfully
done
Please enter the passphrase for RDF site cs100_src:
Passphrase:
rdfadmin
Retype passphrase:
rdfadmin
operation in progress (not interruptible)...
id = 1
name = cs100_src
owner = 500
device = /dev/ndj1
channel = rdev=/dev/ndg, off_MB=391; wdev=/dev/nda, off_MB=391
net_path = 192.168.97.140
celerra_id = 000190100582034D
passphrase = rdfadmin
Discover remote storage devices ...done
The following servers have been detected on the system (cs110_dst):
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 1 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
Please enter the id(s) of the server(s) you wish to reserve
(separated by spaces) or "none" for no servers.
Select server(s) to use as standby:1 2
server_2 : Error 4031: server_2 : server_2 has a standby
server: server_5
server_3 : Error 4031: server_3 : server_3 has a standby
server: server_5
operation in progress (not interruptible)...
id = 1
name = cs100_src
owner = 500
device = /dev/ndj1
channel = rdev=/dev/ndg, off_MB=391; wdev=/dev/nda, off_MB=391
net_path = 192.168.97.140
celerra_id = 000190100582034D
passphrase = rdfadmin
Please create a rdf standby for each server listed
server server_2 in slot 2, remote standby in slot [2] (or none):none
server server_3 in slot 3, remote standby in slot [3] (or none):none
server server_4 in slot 4, remote standby in slot [4] (or none):none
server server_5 in slot 5, remote standby in slot [5] (or none):none
Resolution for initialization failure example 1
Step 1. List and verify the servers by typing:
# nas_server -list
Output:
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 1 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
Step 2. Delete the local standby relationship by typing:
# server_standby server_2 -delete mover=server_5
Output:
server_2 : done
Step 3. Delete.....by typing:
[root@cs110_dst nasadmin]# /nas/sbin/getreason
Output:
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted
5 - slot_4 contacted
5 - slot_5 contacted
Step 4. Delete.....by typing:
[root@cs110_dst nasadmin]# /nas/sbin/getreason
Output:
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted
5 - slot_4 contacted
5 - slot_5 contacted
Step 5. Delete the.....by typing:
[root@cs110_dst nasadmin]# /nas/bin/nas_server -list
Output:
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 1 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
Step 6. Delete.....by typing:
[root@cs110_dst nasadmin]# /nas/bin/nas_server -info -all
Output:
id = 1
name = server_2
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 2
member_of =
standby =
status :
defined = enabled
actual = online, ready
id = 2
name = server_3
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 3
member_of =
standby =
status :
defined = enabled
actual = online, ready
id = 3
name = server_4
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 4
member_of =
standby = server_5, policy=auto
status :
defined = enabled
actual = online, ready
id = 4
name = server_5
acl = 1000, owner=nasadmin, ID=201
type = standby
slot = 5
member_of =
standbyfor= server_4
status :
defined = enabled
actual = online, ready
Example 2 for initialization failure
The following example highlights the error that occurs when a file system is mounted on a local Data Mover intended to serve as an RDF standby. The example shows new prompts asking the user to change the configuration in another window before proceeding with the initialization.
[root@cs110_dst nasadmin]# /nas/sbin/nas_rdf -init
Discover local storage devices ...
Discovering storage (may take several minutes)
done
Please create a rdf standby for each server listed
server server_2 in slot 2, remote standby in slot [2] (or none): 2
Error 3122: server_2 : filesystem is unreachable: rl64k
Server server_2 has local file system mounted.
Please unmount those file system in another window and try again.
Do you wish to continue? [yes or no]:
********************
$ nas_server -i server_2
id = 1
name = server_2
acl = 1000, owner=nasadmin, ID=201
type = nas
slot = 2
member_of =
standby = server_3, policy=auto
RDFstandby= slot=2
status :
defined= enabled
actual = online, active
$ server_mount server_2 rl64k /rl64k
server_2 : done
Warning 17716815751: server_2 :has a standby server: rdf, filesystem:
rl64k is local, will not be able to failover
[nasadmin@cs100_src ~]$ server_df server_2
server_2 :
Filesystem kbytes used avail capacity Mounted on
rl64k 230393504 124697440 105696064 54% /rl64
mc1 230393504 45333944 185059560 20% /mc1
mc2 460787024 327825496 132961528 71% /mc2
mc1a_ckpt1 230393504 410408 229983096 0% /mc1a_ckpt1
mc1a 230393504 410424 229983080 0% /mc1a
rl32krdf 230393504 696 230392808 0% /caddata/wdc1/32k
root_fs_common 13624 5288 8336 9% /.etc_common
root_fs_2 231944 6152 225792 3% /
Resolve activation failures
This section provides a sample failed activation scenario in which a local file system is
mounted on an SRDF-protected standby Data Mover. The error conditions are illustrated
and the corrective commands are listed after the error.
This section includes:
* Example for activation failure on page 84
* Resolution for activation failure on page 85
Example for activation failure
[root@cs110_dst rdfadmin]# /nas/sbin/nas_rdf -activate
Is remote site cs100_src completely shut down (power OFF)?
Do you wish to continue? [yes or no]: yes
Successfully pinged (Remotely) Symmetrix ID: 000190100582
An RDF 'Failover' operation execution is in progress for device group
'1R2_500_3'.
Please wait...
Write Disable device(s) on SA at source (R1)..............Done.
Suspend RDF link(s).......................................Done.
Read/Write Enable device(s) on RA at target (R2)..........Done.
The RDF 'Failover' operation successfully executed for
device group '1R2_500_3'.
Waiting for nbs clients to die ... done
/net/500 /etc/auto.500 -t 1,ro
/dev/ndj1: recovering journal
/dev/ndj1: clean, 11587/231360 files, 204164/461860 blocks
fsck 1.26 (3-Feb-2002)
Waiting for nbs clients to die ... done
id type acl slot groupID state name
1 1 1000 2 0 server_2
2 4 1000 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
server_2 :
server_2 : going offline
rdf : going active
replace in progress ...failed
failover activity complete
replace_storage:
replace_volume: volume is unreachable
d141,d142,d143,d144,d145,d146,d147,d148
server_3 :
server_3 : going offline
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
An RDF 'Update R1' operation execution is in progress for device
'DEV001' in group '1R2_500_3'. Please wait...
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Device: 034D in (0557,03)................................ Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
The RDF 'Update R1' operation successfully initiated for device
'DEV001' in group '1R2_500_3'.
Resolution for activation failure
Step 1. List and verify the servers by typing:
# nas_server -list
Output:
id type acl slot groupID state name
1 1 1000 2 2 server_2.faulted.rdf
2 4 0 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
Step 2. Unmount all non-SRDF (local) file systems from the Data Mover that failed to activate (in this case, server_2) by typing:
[root@cs110_dst rdfadmin]# server_umount server_2.faulted.rdf -perm fs5
Output:
server_2.faulted.rdf : done
Step 3. Manually activate SRDF for the Data Mover that originally failed by typing:
[root@cs110_dst rdfadmin]# server_standby server_2.faulted.rdf -activate rdf
Output:
server_2.faulted.rdf :
server_2.faulted.rdf : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
Step 4. Verify the configuration by typing:
[root@cs110_dst rdfadmin]# /nas/sbin/getreason
Output:
10 - slot_0 primary control station
5 - slot_2 contacted
5 - slot_3 contacted
5 - slot_4 contacted
5 - slot_5 contacted
Step 5. Delete.....by typing:
[root@cs110_dst rdfadmin]# /nas/bin/nas_server -list
Output:
id type acl slot groupID state name
1 1 0 2 0 server_2
2 4 0 3 0 server_3
3 1 1000 4 0 server_4
4 4 1000 5 0 server_5
Step 6. Delete.....by typing:
[root@cs110_dst rdfadmin]# /nas/bin/server_mount ALL
Output:
server_2 :
root_fs_2 on / uxfs,perm,rw
root_fs_common on /.etc_common uxfs,perm,ro
fs2 on /fs2 uxfs,perm,rw
fs1a on /fs1a uxfs,perm,rw
fst1 on /fst1 uxfs,perm,rw
server_3 :
root_fs_3 on / uxfs,perm,rw,<unmounted>
root_fs_common on /.etc_common uxfs,perm,ro,<unmounted>
server_4 :
root_fs_4 on / uxfs,perm,rw
root_fs_common on /.etc_common uxfs,perm,ro
server_5 :
root_fs_5 on / uxfs,perm,rw,<unmounted>
root_fs_common on /.etc_common uxfs,perm,ro,<unmounted>
Resolve restore failures
This section provides sample restore failure scenarios.
Note: The source-side Control Station 0 must be operational for the restore, and VNX for File services on the source site start only after the restore process successfully completes. A "Waiting for Disaster Recovery to complete" message appears in /var/log/messages, and the source remains in that state until the restore completes. This change, which involves the use of a DR lock, ensures correct sequential operation, so that the source-site services come up correctly under RDF control and no user commands run until the source site is completely restored. Error messages on page 98 provides more information about the associated errors.
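The waiting state described in this note can be checked for mechanically. The following is only a sketch, assuming just that the message text appears as a line in /var/log/messages; a temporary file stands in for the real log so the commands can run anywhere.

```shell
# Illustrative sketch: detect the DR-lock waiting state on the source.
# A temp file stands in for /var/log/messages here; on the source
# Control Station you would grep the real file.
msglog=$(mktemp)
echo 'Jun  1 12:00:00 cs100_src nas: Waiting for Disaster Recovery to complete' >"$msglog"
hits=$(grep -c 'Waiting for Disaster Recovery to complete' "$msglog")
if [ "$hits" -gt 0 ]; then
    echo 'source services are blocked until nas_rdf -restore completes'
fi
rm -f "$msglog"
```

As long as the message keeps appearing, the source site is still under the DR lock and user commands should not be attempted there.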
Topics included are:
* Example 1 for restoration failure on page 87
* Resolution for restoration failure example 1 on page 91
* Example 2 for restoration failure (NS series gateway) on page 92
* Resolution for restoration failure example 2 on page 92
* Example 3 for restoration failure (database lock error) on page 93
* Resolution for restoration failure example 3 on page 95
Example 1 for restoration failure
This example shows an active/passive restore operation started by root from the rdfadmin account on cs110_dst using the nas_rdf -restore command. The output shows the events leading up to the error message on the destination VNX, followed by output from the source after corrective action is taken.
Example 1 restore failure
[root@cs110_dst rdfadmin]# /nas/sbin/nas_rdf -restore
Is remote site cs100_src ready for Storage restoration?
Do you wish to continue? [yes or no]: yes
Contact cs100_src ...
Unable to contact node cs100_src at 192.168.96.58.
Do you wish to continue? [yes or no]: yes
Device Group (DG) Name : 1R2_500_11
DG's Type : RDF2
DG's Symmetrix ID : 000187940255
Target (R2) View Source (R1) View MODES
-------------------------------- ------------------ ---- --------
ST LI ST
Standard A N A
Logical T R1 Inv R2 Inv K T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MDA STATE
-------------------------------- -- -------- ------- ---- --------
DEV001 0000 RW 12 0 NR 0000 WD 0 0 A.. Failed Over
DEV002 0001 RW 4096 0 NR 0001 WD 0 0 A.. Failed Over
DEV003 000D RW 1 0 NR 000D WD 0 0 A.. Failed Over
DEV004 000E RW 1 0 NR 000E WD 0 0 A.. Failed Over
DEV005 000F RW 0 0 NR 000F WD 0 0 A.. Failed Over
DEV006 0010 RW 0 0 NR 0010 WD 0 0 A.. Failed Over
DEV007 0011 RW 0 0 NR 0011 WD 0 0 A.. Failed Over
DEV008 0012 RW 0 0 NR 0012 WD 0 0 A.. Failed Over
DEV009 0013 RW 0 0 NR 0013 WD 0 0 A.. Failed Over
DEV010 0014 RW 0 0 NR 0014 WD 0 0 A.. Failed Over
DEV011 0015 RW 1 0 NR 0015 WD 0 0 A.. Failed Over
DEV012 0016 RW 1 0 NR 0016 WD 0 0 A.. Failed Over
...
DEV161 02D3 RW 0 0 NR 0253 WD 0 0 A.. Failed Over
DEV162 02D7 RW 0 0 NR 0257 WD 0 0 A.. Failed Over
DEV163 0003 RW 0 0 NR 0003 WD 0 0 A.. Failed Over
DEV164 0004 RW 851 0 NR 0004 WD 0 0 A.. Failed Over
DEV165 0005 RW 0 0 NR 0005 WD 0 0 A.. Failed Over
Total -------- --------- ------ ------
Track(s) 4965 0 0 0
MB(s) 155.2 0.0 0.0 0.0
Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive
Copy
D(omino) : X = Enabled, . = Disabled
A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off
+++++ Setting RDF group 1R2_500_11 to SYNC mode.
An RDF 'Update R1' operation execution is in progress for device group
'1R2_500_11'.
Please wait...
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 0000-0001 ...................................... Merged.
Devices: 0003-0005 ...................................... Merged.
Devices: 000D-0015 ...................................... Merged.
Devices: 0016-001E ...................................... Merged.
Devices: 001F-0027 ...................................... Merged.
Devices: 0028-0030 ...................................... Merged.
Devices: 0031-0039 ...................................... Merged.
Devices: 003A-0042 ...................................... Merged.
Devices: 0043-004B ...................................... Merged.
Devices: 004C-0054 ...................................... Merged.
Devices: 0055-005D ...................................... Merged.
Devices: 005E-0066 ...................................... Merged.
Devices: 0067-006F ...................................... Merged.
Devices: 0070-0078 ...................................... Merged.
Devices: 0079-0081 ...................................... Merged.
Devices: 0082-008A ...................................... Merged.
Devices: 008B-008C ...................................... Merged.
Devices: 01DB-01E1 ...................................... Merged.
Devices: 01E2-01E8 ...................................... Merged.
Devices: 01E9-01EF ...................................... Merged.
Devices: 01F0-01F6 ...................................... Merged.
Devices: 01F7-01FD ...................................... Merged.
Devices: 01FE-0204 ...................................... Merged.
Devices: 0205-020B ...................................... Merged.
Devices: 020C-0212 ...................................... Merged.
Devices: 0213-0219 ...................................... Merged.
Devices: 021A-0220 ...................................... Merged.
Devices: 0221-0227 ...................................... Merged.
Devices: 0228-022E ...................................... Merged.
Devices: 022F-0235 ...................................... Merged.
Devices: 0236-023C ...................................... Merged.
Devices: 023D-0243 ...................................... Merged.
Devices: 0244-024A ...................................... Merged.
Devices: 024B-0251 ...................................... Merged.
Devices: 0252-0258 ...................................... Merged.
Devices: 0259-025A ...................................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Done.
The RDF 'Update R1' operation successfully initiated for device group
'1R2_500_11'.
Is remote site cs100_src ready for Network restoration?
Do you wish to continue? [yes or no]: yes
server_2 : done
server_3 : done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
/dev/sdj1: clean, 11464/231360 files, 164742/461860 blocks
fsck 1.26 (3-Feb-2002)
/net/500 /etc/auto.500 -t 0,rw,sync
Waiting for 1R2_500_11 access ...done
An RDF 'Failback' operation execution is in progress for device group
'1R2_500_11'.
Please wait...
Write Disable device(s) on RA at target (R2)..............Done.
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 0000-0001 ...................................... Merged.
Devices: 0003-0005 ...................................... Merged.
Devices: 000D-0015 ...................................... Merged.
Devices: 0016-001E ...................................... Merged.
Devices: 001F-0027 ...................................... Merged.
Devices: 0028-0030 ...................................... Merged.
Devices: 0031-0039 ...................................... Merged.
Devices: 003A-0042 ...................................... Merged.
Devices: 0043-004B ...................................... Merged.
Devices: 004C-0054 ...................................... Merged.
Devices: 0055-005D ...................................... Merged.
Devices: 005E-0066 ...................................... Merged.
Devices: 0067-006F ...................................... Merged.
Devices: 0070-0078 ...................................... Merged.
Devices: 0079-0081 ...................................... Merged.
Devices: 0082-008A ...................................... Merged.
Devices: 008B-008C ...................................... Merged.
Devices: 01DB-01E1 ...................................... Merged.
Devices: 01E2-01E8 ...................................... Merged.
Devices: 01E9-01EF ...................................... Merged.
Devices: 01F0-01F6 ...................................... Merged.
Devices: 01F7-01FD ...................................... Merged.
Devices: 01FE-0204 ...................................... Merged.
Devices: 0205-020B ...................................... Merged.
Devices: 020C-0212 ...................................... Merged.
Devices: 0213-0219 ...................................... Merged.
Devices: 021A-0220 ...................................... Merged.
Devices: 0221-0227 ...................................... Merged.
Devices: 0228-022E ...................................... Merged.
Devices: 022F-0235 ...................................... Merged.
Devices: 0236-023C ...................................... Merged.
Devices: 023D-0243 ...................................... Merged.
Devices: 0244-024A ...................................... Merged.
Devices: 024B-0251 ...................................... Merged.
Devices: 0252-0258 ...................................... Merged.
Devices: 0259-025A ...................................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Done.
Read/Write Enable device(s) on SA at source (R1)..........Done.
The RDF 'Failback' operation successfully executed for device group
'1R2_500_11'.
Waiting for 1R2_500_11 sync ...done
Starting restore on remote site cs100_src ...
failed
----------------------------------------------------------------
Please execute /nasmcd/sbin/nas_rdf -restore on remote site cs100_src
----------------------------------------------------------------
[root@cs100_src rdfadmin]#
Note: The failure occurs after the failback operation is executed for the device group, when the restore
is set to begin on the source VNX.
Resolution for restoration failure example 1
Step 1. Delete.....by typing:
[root@cs100_src nasadmin]# /nasmcd/sbin/nas_rdf -restore
server_2 : rdf : reboot in progress ............
server_2 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_3 : rdf : reboot in progress ............
server_3 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
If the RDF device groups were setup to operate in ASYNCHRONOUS (
SRDF/A ) mode, now would
be a good time to set it back to that mode.
Would you like to set device group 1R1_11 to ASYNC Mode ? [yes or
no]: yes
An RDF Set 'Asynchronous Mode' operation execution is in progress
for device group
'1R1_11'. Please wait...
The RDF Set 'Asynchronous Mode' operation successfully executed
for device group '1R1_11'.
If the RDF device groups were setup to operate in ASYNCHRONOUS (
SRDF/A ) mode, now would
be a good time to set it back to that mode.
Would you like to set device group 1R1_12 to ASYNC Mode ? [yes or
no]: no
Starting Services ...done
[root@cs100_src nasadmin]#
Step 2. Exit as root. This concludes the device-group failback phase.
Step 3. Log in to and manage the source VNX directly from the nasadmin account on the source VNX (cs100_src). If the restoration is still unsuccessful, gather SRDF/A logging information using the script /nas/tools/collect_support_materials.
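If the collect_support_materials script cannot be run, roughly equivalent data can be gathered by hand from the log locations named in this chapter. The following is only a sketch of that manual collection, not the EMC script:

```shell
# Illustrative sketch: bundle the DR and SYMAPI logs into a timestamped
# archive for support. /nas/log is the location named in this chapter;
# the tar step is skipped quietly if the directory does not exist.
stamp=$(date +%Y%m%d_%H%M%S)
archive="/tmp/srdf_logs_$stamp.tar.gz"
if [ -d /nas/log ]; then
    tar czf "$archive" /nas/log/dr_log.al* /nas/log/symapi.log* 2>/dev/null
fi
echo "$archive"
```

The timestamp in the archive name keeps successive collections from overwriting each other when a failure is reproduced more than once.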
Example 2 for restoration failure (NS series gateway)
The following example on an NS600G or NS700G shows a failed restore operation at the
destination.
Example 2 restore failure
[root@Celerra2 nasadmin]# /nas/sbin/nas_rdf -restore
...
Starting restore on remote site Celerra1 ...
Waiting for nbs clients to start ... WARNING: Timed out
Waiting for nbs clients to start ... done
CRITICAL FAULT:
Unable to mount /nas/dos
Starting Services on remote site Celerra1 ...done
Note: "..." indicates that not all lines of the restore output are shown.
Resolution for restoration failure example 2
Step 1. Stop the services at the source as root by typing:
[root@Celerra1 nasadmin]# /sbin/service nas stop
Step 2. Perform a restore at the source as root by typing:
[root@Celerra1 nasadmin]# /nasmcd/sbin/nas_rdf -restore
Output:
...
Waiting for nbs clients to start ... done
Waiting for nbs clients to start ... done
Suspend RDF link(s).......................................Done.
server_2 :
replace in progress ...done
commit in progress (not interruptible)...done
done
server_3 :
Error 4003: server_3 : standby is not configured
Resume RDF
link(s)........................................Done.
Starting Services ...done
Example 3 for restoration failure (database lock error)
The following example shows a restore error that occurs when a server fails to acquire the database lock. The restore completes with the error, but resolving it involves running the server_standby command at the source for the server involved in the lock contention. The error appears in the restore output.
Example 3 restore error
[root@cs0_dst rdfadmin]# /nas/sbin/nas_rdf -restore
Is remote site cs100_src ready for Storage restoration?
Do you wish to continue? [yes or no]: yes
Contact cs0_src ... is alive
Target (R2) View Source (R1) View MODES
-------------------------------- -------------------- ---- --------
ST LI ST
Standard A N A
Logical T R1 Inv R2 Inv K T R1 Inv R2 Inv RDF Pair
Device Dev E Tracks Tracks S Dev E Tracks Tracks MDA STATE
-------------------------------- -- ----------------- --- --------
DEV001 08F2 RW 1 0 RW 37CD WD 0 0 C.D R1 Updated
DEV002 08F3 RW 58 0 RW 37CE WD 0 0 C.D R1 Updated
DEV003 08FA RW 0 0 RW 37D3 WD 0 0 C.D R1 Updated
DEV004 08FB RW 0 0 RW 37D4 WD 0 0 C.D R1 Updated
DEV005 08FC RW 12 0 RW 37D5 WD 0 0 C.D R1 Updated
DEV006 08FD RW 0 0 RW 37D6 WD 0 0 C.D R1 Updated
DEV007 092C RW 3546 0 RW 0629 WD 0 0 C.D R1 Updated
DEV008 0930 RW 2562 0 RW 062D WD 0 0 C.D R1 Updated
DEV009 06F5 RW 0 0 RW 0631 WD 0 0 C.D R1 Updated
Total -------- ----- ------ -----
Track(s) 6179 0 0 0
MB(s) 193.1 0.0 0.0 0.0
Legend for MODES:
M(ode of Operation): A = Async, S = Sync, E = Semi-sync, C = Adaptive
Copy
D(omino) : X = Enabled, . = Disabled
A(daptive Copy) : D = Disk Mode, W = WP Mode, . = ACp off
An RDF 'Update R1' operation execution is in progress for device group
'1R2_500_4'.
Please wait...
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 37CD-37CE ...................................... Merged.
Devices: 37D3-37D6 ...................................... Merged.
Devices: 0629-0634 ...................................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
The RDF 'Update R1' operation successfully initiated for device group
'1R2_500_4'.
Is remote site cs0_src ready for Network restoration?
Do you wish to continue? [yes or no]: yes
server_2 : done
server_3 : done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
/dev/ndj1: clean, 10308/231360 files, 175874/461860 blocks
fsck 1.26 (3-Feb-2002)
Waiting for nbs clients to die ... done
Waiting for nbs clients to die ... done
/net/500 /etc/auto.500 -t 0,rw,sync
Waiting for 1R2_500_4 access ...done
An RDF 'Failback' operation execution is in progress for device group
'1R2_500_4'.
Please wait...
Write Disable device(s) on RA at target (R2)..............Done.
Suspend RDF link(s).......................................Done.
Merge device track tables between source and target.......Started.
Devices: 37CD-37CE ...................................... Merged.
Devices: 37D3-37D6 ...................................... Merged.
Devices: 0629-0634 ...................................... Merged.
Merge device track tables between source and target.......Done.
Resume RDF link(s)........................................Started.
Resume RDF link(s)........................................Done.
Read/Write Enable device(s) on SA at source (R1)..........Done.
The RDF 'Failback' operation successfully executed for device group
'1R2_500_4'.
Waiting for 1R2_500_4 sync ...done
Starting restore on remote site cs0_src ...
Waiting for nbs clients to start ... done
Waiting for nbs clients to start ... done
server_2 :
Error 2201: server_2 : unable to acquire lock(s), try later
server_3 :
server_3 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_4 :
Error 4003: server_4 : standby is not configured
server_5 :
Error 4003: server_5 : standby is not configured
If the RDF device groups were setup to operate in ASYNCHRONOUS ( SRDF/A )
mode,
now would be a good time to set it back to that mode.
Would you like to set device group 1R2_500_4 to ASYNC Mode ? [yes or no]:
yes
Starting Services on remote site cs0_src ...
[root@cs0_dst rdfadmin]# exit
[rdfadmin@cs0_dst rdfadmin]$ exit
[nasadmin@cs0_dst nasadmin]$ exit
Resolution for restoration failure example 3
Action
Run the server_standby command on the source VNX for the server that had the lock contention (in this example, server_2).
Example:
[nasadmin@cs0_src nasadmin]$ server_standby server_2 -restore rdf
Output
server_2 :
server_2 : going standby
rdf : going active
replace in progress ...done
failover activity complete
commit in progress (not interruptible)...done
done
server_2 : Nil
Resolve Data Mover failure after failover activation
If a Data Mover develops hardware issues after failover activation, you can replace the affected Data
Mover and update the hardware information. To update the hardware information, you must
run the setup_slot command twice: first as nasadmin switched (su) to root, and then as
rdfadmin switched (su) to root.
Step 1. Log in to the destination VNX (cs110_dst) as nasadmin and switch (su) to root.
Step 2. Initialize the Data Mover by using this command syntax:
[root@cs110_dst nasadmin]# /nas/sbin/setup_slot -init <x>
where:
<x> = Slot number of the new Data Mover
Example:
To initialize the Data Mover for slot 2, type:
[root@cs110_dst nasadmin]# /nas/sbin/setup_slot -init 2
Initializing server in slot 2 as server_2 ...
Starting PXE service...:done
Reboot server in slot 2, waiting..... 0 0 0 0 0 0 1 1 1 3 3 3 3 3
3 3 4 (154 secs)
Stopping PXE service...:done
Ping server in slot 2 on primary interface ...ok
Ping server in slot 2 on backup interface ...ok
Discover disks attached to server in slot 2 ...
Discovering storage (may take several minutes)
server_2 : done
server_2 : done
server_2 : done
server_2 : done
Synchronize date+time on server in slot 2 ...
server_2 : Mon Aug 17 12:11:05 EDT 2009
server_2 :
Processor = Intel Pentium 4
Processor speed (MHz) = 2800
Total main memory (MB) = 4093
Mother board = CMB-Sledgehammer
Bus speed (MHz) = 800
Bios Version = 03.80
Post Version = Rev. 01.59
server_2 : reboot in progress 0.0.0.0.0.0.0.0.1.1.3.3.3.3.3.4.done
Checking to make sure slot 2 is ready........ 5 5 (63 secs)
Completed setup of server in slot 2 as server_2
This Data Mover (also referred to as Blade) is a MirrorView or RDF
standby Data Mover, log in to the system as rdfadmin and switch
(su) to root, and, regardless of straight (for example, server 2
to server 2) or criss-cross (for example, server 2 to server 3)
configuration, use the CLI command with the same slot id:
/nas/sbin/setup_slot -i 2
[root@cs110_dst nasadmin]#
Step 3. Exit root by typing:
[root@cs110_dst nasadmin]# exit
exit
Step 4. Exit nasadmin by typing:
[nasadmin@cs110_dst ~]$ exit
logout
Step 5. Log in to the destination VNX (cs110_dst) as rdfadmin and switch (su) to root.
Step 6. Initialize the Data Mover by using this command syntax:
[root@cs110_dst rdfadmin]# /nas/sbin/setup_slot -init <x>
where:
<x> = Slot number of the new Data Mover
Example:
To initialize the Data Mover for slot 2, type:
[root@cs110_dst rdfadmin]#
/nas/sbin/setup_slot -init 2
The script will update only hardware related configuration such
as the MAC addresses for the internal network and then reboot the
Data Mover (also referred to as Blade).
server_2 : reboot in progress 0.0.0.0.0.0.0.0.1.1.3.3.3.3.3.4.done
Checking to make sure slot 2 is ready........ 5 5 (64 secs)
Completed setup of server in slot 2 as server_2
[root@cs110_dst rdfadmin]#
CAUTION: Ensure that you run the setup_slot command first as nasadmin switched (su) to root, and
then as rdfadmin switched (su) to root.
If you run the command as rdfadmin before running it as nasadmin, you will get the following error message:
setup_slot has not been run as nasadmin before running it as rdfadmin
user on this Data Mover (also referred to as Blade) The script
will exit without changing the state of the system or rebooting
it. Please do the following to set up this Data Mover correctly:
1. Initialize the new Data Mover for the nasadmin database by
logging in to the system as nasadmin, switching (su) to root, and
using the CLI command:
/nas/sbin/setup_slot -init 2
2. Initialize the new Data Mover for the rdfadmin database by
logging in to the system as rdfadmin user, switching (su) to root,
and using the CLI command:
/nas/sbin/setup_slot -init 2
Step 7. Exit root by typing:
[root@cs110_dst rdfadmin]# exit
exit
Step 8. Exit rdfadmin by typing:
[rdfadmin@cs110_dst ~]$ exit
logout
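The ordering rule in the procedure above (the nasadmin phase of setup_slot must complete before the rdfadmin phase) can be sketched as a simple guard. This is a hypothetical helper, not an EMC tool: the marker file path, function name, and echoed commands are illustrative assumptions.

```shell
# Minimal sketch (not an EMC tool) of the ordering rule: the nasadmin
# phase of setup_slot must be recorded before the rdfadmin phase runs.
marker=/tmp/setup_slot_phase.demo   # illustrative marker path

run_phase() {  # $1 = nasadmin|rdfadmin, $2 = slot number of new Data Mover
  case "$1" in
    nasadmin)
      # First pass: initialize the nasadmin database, then record completion.
      echo "would run as nasadmin/root: /nas/sbin/setup_slot -init $2"
      : > "$marker"
      ;;
    rdfadmin)
      # Second pass: refuse to proceed if the nasadmin phase never ran.
      if [ ! -f "$marker" ]; then
        echo "setup_slot has not been run as nasadmin first" >&2
        return 1
      fi
      echo "would run as rdfadmin/root: /nas/sbin/setup_slot -init $2"
      ;;
    *) return 2 ;;
  esac
}

rm -f "$marker"
run_phase rdfadmin 2 || echo "blocked: nasadmin phase missing"
run_phase nasadmin 2
run_phase rdfadmin 2
rm -f "$marker"
```

Running the rdfadmin phase first is blocked, mirroring the error message shown in the CAUTION above.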
Handle additional error situations
x If you shut down or restart any Data Movers (SRDF-protected or non-SRDF Data Movers)
at the destination while the /nas/sbin/nas_rdf -activate or the /nas/sbin/nas_rdf -restore
command is running, the Control Station cannot find a path to the storage system. With
the communication between the VNX and the backend interrupted, the command fails.
Respond by doing the following:
1. Rerun the /nas/sbin/nas_rdf -activate or the /nas/sbin/nas_rdf -restore command after
the Data Mover is operational.
2. Do not shut down or restart any Data Movers at the destination while these commands
are running.
x When you run the -init command, it changes the ACL for the local Data Movers to 1000.
This prevents DR administrators from inadvertently accessing the local Data Movers in
the failed over state. However, this also prevents Global Users in Unisphere from
accessing this Data Mover in the normal state.
To resolve this problem, the ACL for the local Data Movers is no longer changed when
you run the -init command. Instead, the ACL is changed to 1111 during failover when
you run the -activate command. This prevents DR administrators from accessing the
Data Movers after failover and allows Global Users to access them in the normal state.
During failback, when you run the -restore command, the ACL is changed to 0. If you
initially use ACL 1111 for the local Data Movers on the source side, ensure that you
change the ACL from 0 back to 1111 after failback. Alternatively, you can change the ACL
for the local Data Movers on the source side to some other value, for example, 1000, to
avoid this manual change.
x For some applications, IP address configuration must be considered carefully; otherwise,
the data can become corrupt. Consideration when using applications that
can switch to the NFS copy from the R2 without a restart on page 33 provides more
information.
x When using applications such as Oracle, ensure that you use the correct client-side NFS
mount options. Otherwise, SRDF or TimeFinder copies can contain incorrect data.
Consideration when using applications that require transactional consistency on
page 32 provides more information.
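The rerun guidance in the first item above can be sketched as a wait-then-retry loop. This is a hypothetical sketch, not EMC-provided tooling: the probe command, the 10-second interval, and the 30-attempt cap are all assumptions to adapt to your environment.

```shell
# Hypothetical retry sketch: wait until the affected Data Mover answers a
# probe before rerunning the failed nas_rdf command. Probe, interval, and
# retry cap are illustrative assumptions.
retry_when_up() {  # $1 = probe command, $2 = command to rerun
  tries=0
  while ! sh -c "$1" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge 30 ]; then
      echo "Data Mover did not become operational" >&2
      return 1
    fi
    sleep 10   # pause between probes
  done
  sh -c "$2"   # Data Mover answered; rerun the command
}

# Example invocation (probe and command are placeholders):
# retry_when_up "ping -c 1 server_2" "/nas/sbin/nas_rdf -activate"
```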
Error messages
All event, alert, and status messages provide detailed information and recommended actions
to help you troubleshoot the situation.
To view message details, use any of these methods:
x Unisphere software:
Right-click an event, alert, or status message and select to view Event Details, Alert
Details, or Status Details.
x CLI:
Use this guide to locate information about messages that are in the earlier-release
message format.
x EMC Online Support website:
Use the text from the error message's brief description or the message's ID to search
the Knowledgebase on the EMC Online Support website. After logging in to EMC
Online Support, locate the applicable Support by Product page, and search for the
error message.
EMC Training and Professional Services
EMC Customer Education courses help you learn how EMC storage products work together
within your environment to maximize your entire infrastructure investment. EMC Customer
Education features online and hands-on training in state-of-the-art labs conveniently located
throughout the world. EMC customer training courses are developed and delivered by EMC
experts. Go to the EMC Online Support website at http://Support.EMC.com for course and
registration information.
EMC Professional Services can help you implement your system efficiently. Consultants
evaluate your business, IT processes, and technology, and recommend ways that you can
leverage your information for the most benefit. From business plan to implementation, you
get the experience and expertise that you need without straining your IT staff or hiring and
training new personnel. Contact your EMC Customer Support Representative for more
information.
Appendix A
Portfolio of High-Availability
Options
This appendix illustrates the VNX SRDF high-availability configuration
options. Figure 4 on page 102 shows a configuration featuring active/passive
SRDF in either asynchronous mode (SRDF/A) or synchronous mode
(SRDF/S).
Figure 4 on page 102, Figure 5 on page 102, Figure 6 on page 102, Figure
7 on page 103, Figure 8 on page 103, Figure 9 on page 104, and Figure 10
on page 104 show disaster recovery and business continuance configurations
featuring SRDF/S only, SRDF links, or both, with TimeFinder/FS NearCopy,
TimeFinder/FS FarCopy, or both, with adaptive copy mode (adaptive copy
disk or disk-pending mode).
Note: Prior to the introduction of SRDF/A, adaptive copy mode was referred to as
asynchronous mode SRDF.
For more information on the configuration that best fits your business
needs, contact your local EMC sales organization.
[Diagram: a production-site VNX/Symmetrix pair (PFS on VNX R1 data volumes)
connected over dedicated SRDF links (synchronous or asynchronous mode) to a
DR recovery-site VNX/Symmetrix pair holding a DR mirror of the PFS on VNX
R2 data volumes.]
Figure 4. VNX replication and recovery with active/passive synchronous
SRDF (SRDF/S) or asynchronous SRDF (SRDF/A)
[Diagram: a production-site VNX/Symmetrix pair (PFS on VNX R1 data volumes)
connected over dedicated SRDF links (synchronous mode only) to a combined
DR recovery and business continuance site holding a DR mirror of the PFS on
VNX R2 data volumes and a NearCopy snapshot of the PFS on VNX local BCV
volumes.]
Figure 5. VNX disaster recovery active/passive SRDF/S only with
TimeFinder/FS NearCopy
[Diagram: two combined production and DR recovery sites, each a VNX/Symmetrix
pair with its own PFS on VNX R1 data volumes and a DR mirror of the other
site's PFS on VNX R2 data volumes, connected over dedicated SRDF links
(synchronous mode only).]
Figure 6. VNX disaster recovery active/active SRDF/S only
[Diagram: a production-site VNX/Symmetrix pair with the PFS on VNX local STD
volumes and a BCV snapshot of the PFS on VNX R1 BCV volumes, connected over
dedicated SRDF links (adaptive copy write-pending mode) to a business
continuance recovery site where the FarCopy snapshot of the PFS on VNX R2
BCV volumes is imported to VNX local STD volumes.]
Figure 7. VNX business continuance with TimeFinder/FS FarCopy (version
5.1)
[Diagram: same configuration as Figure 7 (production-site BCV snapshot of the
PFS on VNX R1 BCV volumes replicated over dedicated SRDF links in adaptive
copy write-pending mode to a business continuance recovery site importing
the FarCopy snapshot from VNX R2 BCV volumes), with the two Control Stations
additionally connected over an IP WAN.]
Figure 8. VNX business continuance with TimeFinder/FS FarCopy (version
5.3)
[Diagram: a production-site VNX/Symmetrix pair with the PFS and BCV snapshots
on VNX R1 BCV volumes, replicating over dedicated SRDF links (adaptive copy
write-pending mode) to two business continuance recovery sites (site 1 and
site 2), each importing a FarCopy snapshot of the PFS from VNX R2 BCV volumes
to VNX local STD volumes; the Control Stations are connected over an IP WAN.]
Figure 9. VNX business continuance with redundant FarCopy sites
[Diagram: two production sites (site 1 and site 2), each a VNX/Symmetrix pair
with its own PFS on VNX local STD volumes and a BCV snapshot of the PFS on
VNX R1 BCV volumes, replicating over SRDF (adaptive copy write-pending mode)
to a single business continuance recovery site that imports FarCopy snapshots
of both PFSs from VNX R2 BCV volumes; the Control Stations are connected over
an IP WAN.]
Figure 10. VNX business continuance using TimeFinder/FS FarCopy with
many sites
Glossary
A
active/active
In an EMC Symmetrix (SRDF) or EMC MirrorView/Synchronous
configuration, a bidirectional configuration with two production sites, each acting as the
standby for the other. Each VNX for file has both production and standby Data Movers. If one
site fails, the other site takes over and serves the clients of both sites. In SRDF, each Symmetrix
system contains local source (production) volumes and remote destination volumes. In MirrorView/S,
each VNX for block is configured to have source and destination LUNs and a consistency group.
active/passive
In SRDF or MirrorView/S configurations, a unidirectional setup where one VNX for file, with
its attached Symmetrix system, serves as the source (production) file server and another VNX for file, with
its attached storage, serves as the destination (backup). This configuration provides failover
capability in the event that the source site is unavailable. An SRDF configuration requires
Symmetrix systems as backend storage. A MirrorView/S configuration requires
VNX for block systems as backend storage.
adaptive copy disk-pending mode
SRDF mode of operation in which write data accumulates in global memory on the local system
before being sent to the remote system. This mode allows the primary and secondary volumes
to be more than one I/O out of synchronization. The maximum number of I/Os that can be out
of synchronization is defined using a maximum skew value.
C
Common Internet File System (CIFS)
File-sharing protocol based on the Microsoft Server Message Block (SMB) protocol. It allows users to
share file systems over the Internet and intranets.
D
delta set
In SRDF/Asynchronous (SRDF/A), a predetermined cycle of operation used to asynchronously
transfer host writes from a source to a destination. Each delta set contains groups of I/Os for processing.
The ordering of these I/Os is managed for consistency. In VNX Replicator, a set contains the
block modifications made to the source file system that VNX Replicator uses to update the
destination file system (a read-only, point-in-time, consistent replica of the source file system).
The minimum delta-set size is 128 MB.
dependent write consistency
In SRDF/A, the maintenance of a consistent point-in-time replica of data between a source and
destination through the processing and preservation of all writes to the destination in ordered,
numbered delta sets.
destination VNX for file
Term for the remote (secondary) VNX for file in an SRDF or MirrorView/S configuration. The
destination VNX for file is typically the standby side of a disaster recovery configuration.
Symmetrix configurations often refer to the destination VNX for file as the target VNX for file.
L
local mirror
Symmetrix hardware volume that is a complete replica of a production volume within the same
storage unit. If the production volume becomes unavailable, I/O continues to use the local mirror,
transparent to the host.
See also R1 volume.
M
metavolume
On a VNX for file, a concatenation of volumes, which can consist of disk, slice, or stripe volumes.
Also called a hypervolume or hyper. Every file system must be created on top of a unique
metavolume.
See also disk volume, slice volume, stripe volume, and volume.
MirrorView Synchronous (MirrorView/S)
Software application that synchronously maintains copies of production images (source LUNs)
at a separate location to provide disaster recovery capability. The copied images are continuously
updated to be consistent with the source, and provide the ability for a standby VNX for file to
take over for a failed VNX for file in the event of a disaster on the production site. Synchronous
remote mirrors (source and destination LUNs) remain in synchronization with each other for
every I/O. MirrorView/S requires VNX for block backend storage.
Multi-Path File System (MPFS)
VNX for file feature that allows heterogeneous servers with MPFS software to concurrently
access, directly over Fibre Channel or iSCSI channels, shared data stored on an EMC Symmetrix
or VNX for block storage system. MPFS adds a lightweight protocol called File Mapping Protocol
(FMP) that controls metadata operations.
P
Production File System (PFS)
Production File System on VNX for file. A PFS is built on Symmetrix volumes or VNX for block
LUNs and mounted on a Data Mover in the VNX for file.
R
R1 volume
SRDF term denoting the source (primary) Symmetrix volume.
See also local mirror.
R2 volume
SRDF term denoting the destination (secondary) Symmetrix volume.
See also remote mirror.
remote mirror
In SRDF, a remote mirror is a Symmetrix hardware volume physically located in a remote
Symmetrix system. Using the EMC SRDF technology, the remote system is joined to a local
system with a local mirror. If the local mirror becomes unavailable, the remote mirror is
accessible. In MirrorView/S, a remote mirror is a LUN mirrored in a different VNX for block.
Each remote mirror contains a virtual source LUN (primary image) and its equivalent
destination LUN (secondary image). If the source file system fails, the destination LUN in the
mirror can be promoted to take over, thus allowing access to data at a remote location.
See also R2 volume.
S
SnapSure
On the VNX for file, a feature that provides read-only, point-in-time copies, also known as
checkpoints, of a file system.
SRDF/Asynchronous (SRDF/A)
SRDF extended-distance replication facility providing a restartable, point-in-time remote replica
that lags not far behind the source. Using SRDF/A with VNX for file provides dependent-write
consistency of host writes from a source VNX for file/Symmetrix DMX system pair to a
destination VNX for file/Symmetrix DMX system pair through predetermined timed cycles (delta
sets).
SRDF/Synchronous (SRDF/S)
SRDF completely synchronous disaster recovery configuration option that provides synchronized,
real-time mirroring of file system data between the source Symmetrix system and one or more
remote Symmetrix systems at a limited distance (up to 200 km). SRDF/S can include EMC
TimeFinder