Action
Log in to the clustershell and execute the following command:
cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true
cluster1-02           false   true
cluster1-03           true    true
cluster1-04           true    true
2.
3.
MODULE 2: M-HOST
Exercise 1: Fun with mgwd and mroot
Time Estimate: 20 minutes
Step
1.
Action
On a node that does not own epsilon, log in to your cluster as admin via the
console and enter the systemshell.
::> set diag
::*> systemshell local
2.
In the systemshell, list the running mgwd process (for example, with % ps aux | grep mgwd); the listing looks similar to:
  913  ??  Ss    0:11.76 mgwd -z
 2794  p1  DL+
The above listing shows that the process id of the running instance of mgwd on this node is 913.
Kill mgwd as follows:
% sudo kill <pid of mgwd as obtained from above>
3.
4.
5.
Node                  Health  Eligibility  Epsilon
--------------------- ------- ------------ ------------
cluster1-01           true    true         true
cluster1-02           true    true         false
cluster1-03           true    true         false
cluster1-04           true    true         false
Node                  Health  Eligibility  Epsilon
--------------------- ------- ------------ ------------
cluster1-01           true    false        true
cluster1-02           false   true         false
cluster1-03           false   true         false
cluster1-04           false   true         false
6.
To fix this without rebooting and without manually re-mounting /mroot, restart mgwd.
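One hedged way to do this, assuming mgwd is supervised by spmd/spmctl on this release (the same pattern this lab later uses for notifyd); verify the syntax on your system:
% spmctl | grep mgwd
% ps aux | grep mgwd
If mgwd is not running and was dropped from spmctl, re-add it the same way notifyd is re-added in a later exercise:
% spmctl -e -h mgwd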
7.
In which phase of the boot process could we see this behavior occur?
Action
1.
2.
3.
Create a new system configuration backup of the node and the cluster as follows:
cluster1::*> system configuration backup create -node cluster1-01 -backup-type node -backup-name cluster1-01.node
[Job 164] Job is queued: Local backup job.
::*> job private show
::*> job private show -id [Job id given as output of the backup create command above]
::*> job private show -id [id as above] -fields uuid
::*> job store show -id [uuid obtained from the command above]
4.
The following KB shows how to scp the backup files you created, as well as one of
the system-created backups off to the Linux client:
https://kb.netapp.com/support/index?page=content&id=1012580
Use the following to install p7zip on your Linux client and use it to unzip the backup
files.
# yum install p7zip
This is the recommended practice on live nodes; however, for vsims scp does not work.
So in the current lab setup, drop to the systemshell and cd to /mroot/etc/backups/config.
Unzip the system-created backup file.
cd into one of the folders created by the unzip. There will be another 7z file. Extract it:
% 7za e [file name]
What's in this file?
Extract the file:
% 7za e [file name]
Compare it to what is in /mroot/etc of one of the cluster nodes. What are some of the
differences?
5.
6.
7.
Unzip the node and cluster backups you created. What do you notice about the
contents of these files?
Step
1.
Action
Move a node's root volume to a new aggregate.
Work with your lab partners and do this on only one node.
For live nodes the following KB contains the steps to do this:
https://kb.netapp.com/support/index?page=content&id=1013350&actp=LIST
However, for vsims the root volume that is created by default is only 20MB, which is
too small to hold the cluster configuration information.
Hence follow the steps given below:
2.
Run the following command to create a new 3-disk aggregate on the desired node:
cluster1::> aggr create -aggregate new_root -diskcount 3 -nodes local
[Job 276] Job succeeded: DONE
cluster1::> aggr show -nodes local
Aggregate   Size    Available  Used%  State    #Vols  Nodes        RAID Status
----------- ------- ---------- ------ -------- ------ ------------ ---------------
...                 15.45MB    98%    online        1 cluster1-02  raid_dp, normal
student2    900MB   467.4MB    48%    online        8 cluster1-02  raid_dp, normal
2 entries were displayed.
3.
Ensure that the node does not own epsilon. If it does, run the following commands
to move it to another node in the cluster:
cluster1::> set diag
Node                  Health  Eligibility  Epsilon
--------------------- ------- ------------ ------------
cluster1-01           true    true         false
cluster1-02           true    true         true
cluster1-03           true    true         false
cluster1-04           true    true         false
To move epsilon, run the following command to set it to 'false' on the owning node:
::*> cluster modify -node cluster1-02 -epsilon false
Then, run the following command to modify it to 'true' on the desired node:
::*> cluster modify -node cluster1-01 -epsilon true
Node                  Health  Eligibility  Epsilon
--------------------- ------- ------------ ------------
cluster1-01           true    true         true
cluster1-02           true    true         false
cluster1-03           true    true         false
cluster1-04           true    true         false
4.
Run the following command to set the cluster eligibility on the node to 'false':
::*> cluster modify -node cluster1-02 -eligibility false
Note: This command must be run from a node other than the one being marked ineligible.
5.
Run the following command to reboot the node into maintenance mode:
cluster1::*> reboot local
(system node reboot)
Warning: Are you sure you want to reboot the node? {y|n}: y
login:
Waiting for PIDS: 718.
Waiting for PIDS: 695.
Terminated
.
Uptime: 2h12m14s
System rebooting...
\
Hit [Enter] to boot immediately, or any other key for command
prompt.
Booting...
x86_64/freebsd/image1/kernel data=0x7ded08+0x1376c0 syms=[0x8+0x3b7f0+0x8+0x274a8]
x86_64/freebsd/image1/platform.ko size 0x213b78 at 0xa7a000
NetApp Data ONTAP 8.1.1X34 Cluster-Mode
Copyright (C) 1992-2012 NetApp.
All rights reserved.
md1.uzip: 26368 x 16384 blocks
md2.uzip: 3584 x 16384 blocks
*******************************
*******************************
^CBoot Menu will be available.
Generating host.conf.
*>
6.
Run the following command to set the options for the new aggregate to become the
new root:
Note: It might be required to set the aggr options to CFO instead of SFO:
*> aggr options new_root root
aggr options: This operation is not allowed on aggregates with sfo HA Policy
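If you hit the error above, a hedged workaround is to change the aggregate's HA policy and retry; the ha_policy option name is an assumption to confirm on your release:
*> aggr options new_root ha_policy cfo
*> aggr options new_root root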
7.
.
Uptime: 6m26s
System halting...
\
Hit [Enter] to boot immediately, or any other key for command
prompt.
Booting in 1 second...
8.
Once the node is booted, a new root volume named AUTOROOT will be created. In
addition, the node will not be in quorum yet. This is because the new root volume
will not be aware of the cluster.
login: admin
Password:
***********************
**
SYSTEM MESSAGES
**
***********************
cluster1-02::>
9.
123478563412</uuid>
</d-volume-info>
<d-volume-info>
<name>AUTOROOT</name>
<uuid>30d8f742-fc04-11e1-bbf5-123478563412</uuid>
</d-volume-info>
<d-volume-info>
<name>student2_cifs</name>
<uuid>b8868843-e788-11e1-ab6e-123478563412</uuid>
</d-volume-info>
<d-volume-info>
<name>student2_cifs_child</name>
<uuid>c07f13ce-e788-11e1-ab6e-123478563412</uuid>
</d-volume-info>
<d-volume-info>
<name>student2_nfs</name>
<uuid>c861f83b-e788-11e1-ab6e-123478563412</uuid>
</d-volume-info>
% zsmcli -H 192.168.71.33 d-volume-set-info desired-attrs=size id=30d8f742-fc04-11e1-bbf5-123478563412 volume-attrs='[d-volume-info=[size=+500m]]'
<results status="passed"/>
% zsmcli -H 192.168.71.33 d-volume-list-info id=30d8f742-fc04-11e1-bbf5-123478563412 desired-attrs=size
<results status="passed">
<volume-attrs>
<d-volume-info>
<size>525m</size>
</d-volume-info>
</volume-attrs>
</results>
10.
->
11.
From a healthy node, with all nodes booted, run the following command:
::*> system configuration recovery cluster rejoin -node <the node where new root volume was created>
12.
After the node boots, check the cluster to ensure that the node is back and eligible:
cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true
cluster1-02           true    true
cluster1-03           true    true
cluster1-04           true    true
13.
14.
After the node is in quorum, add the new root vol to the VLDB. This is necessary
because it is a 7-Mode volume and will not be displayed until it is added (see the
hedged sketch after the output below). Then verify with vol show:
cluster1::> set diag
cluster1::*> vol show -vserver cluster1-02
(volume show)
Before the volume is added, only vol0 is listed:
Vserver      Volume    Aggregate            State    Type  Size     Available  Used%
------------ --------- -------------------- -------- ----- -------- ---------- -----
cluster1-02  vol0      aggr0_cluster1_02_0  online   RW    851.5MB  283.3MB    66%

After the volume is added, the new root volume appears as well:
Vserver      Volume    Aggregate            State    Type  Size     Available  Used%
------------ --------- -------------------- -------- ----- -------- ---------- -----
cluster1-02  AUTOROOT  new_root             online   RW    525MB
cluster1-02  vol0      aggr0_cluster1_02_0  online   RW    851.5MB  283.3MB    66%
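A hedged sketch of the kind of command this step calls for, assuming the diag-level counterpart of the volume remove-other-volume command used in step 15 exists on this release (the command name and its parameters are assumptions to verify on your system):
cluster1::*> volume add-other-volume -vserver cluster1-02 -volume AUTOROOT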
15.
Run the following command to remove the old root volume from VLDB:
cluster1::*> vol remove-other-volume -vserver cluster1-02 -volume vol0
(volume remove-other-volume)
Volume    Aggregate  State    Type  Size    Used%
--------- ---------- -------- ----- ------- -----
AUTOROOT  new_root   online   RW    525MB   27%
16.
Destroy the old root vol by running the following commands from the node shell of the
node where the new root volume has been created:
cluster1::*> node run local
Type 'exit' or 'Ctrl-D' to return to the CLI
cluster1-02> vol status vol0
         Volume State      Status                Options
           vol0 online     raid_dp, flex         nvfail=on
                           64-bit
                Volume UUID: 014df353-bbc1-11e1-bb4c-123478563412
        Containing aggregate: 'aggr0_cluster1_02_0'
cluster1-02> vol offline vol0
Volume 'vol0' is now offline.
cluster1-02> vol destroy vol0
Are you sure you want to destroy volume 'vol0'? y
Volume 'vol0' destroyed.
And the old root aggr can be destroyed if desired:
From cluster shell:
cluster1::*> aggr show -node <node where new root vol was created>
Aggregate            Size    Available  Used%  State    #Vols  Nodes        RAID Status
-------------------- ------- ---------- ------ -------- ------ ------------ ---------------
aggr0_cluster1_02_0          899.7MB    0%     online        0 cluster1-02  raid_dp, normal
new_root             900MB   371.9MB    59%    online        1 cluster1-02  raid_dp, normal
student2             900MB   467.2MB    48%    online        8 cluster1-02  raid_dp, normal
3 entries were displayed.
cluster1::*> aggr delete -aggregate <old root aggregate name>
17.
18.
What did you observe about the root vol during this exercise?
Action
1.
2.
3.
4.
5.
List some of the reasons why customers could have this problem.
6.
2.
aggr0_cluster1_03_0
student1
student2
failed
3.
aggr0_cluster1_03_0
student1
student2
cluster1::*> vserver create -vserver test -rootvolume test -aggregate student2 -ns-switch file -rootvolume-security-style unix
Info: Node cluster1-02 that hosts aggregate student2 is offline
Error: create_imp: create txn failed
4.
6.
6.
7.
What happened?
Action
1.
From the clustershell of each node, send a test autosupport as follows (Y takes the
values 1, 2, 3, 4):
::*> system autosupport invoke -node clusterX-0Y -type test
You will see an error such as:
Error: command failed: RPC: Remote system error - Connection refused
2.
3.
wait for incoming events.
And then we check spmctl to see if it's still monitoring notifyd:
cluster-1-01% spmctl | grep notify
In this case, it looks like notifyd got removed from spmctl and we need to re-add it:
cluster-1-01% spmctl -e -h notifyd
cluster-1-01% spmctl | grep notify
Exec=/sbin/notifyd -n;Handle=56548532-c334-4633-8cd8-77ef97682d3d;Pid=15678;State=Running
cluster-1-01% ps aux | grep notify
root 15678 0.0 6.7 112244 50568 ?? Ss 4:06PM 0:02.42 /sbin/notifyd
diag 15792 0.0 0.2 12016 1144 p2 S+ 4:06PM 0:00.00 grep notify
4.
What happens?
MODULE 3: SCON
Exercise 1: Vifmgr and MGWD interaction
Time Estimate: 30 minutes
Step
1.
Action
Try to create an interface:
clusterX::*> net int create -vserver studentY -lif test -role data -data-protocol nfs,cifs,fcache -home-node clusterX-02 -home-port
You see the following error:
Warning: Unable to list entries for vifmgr on node clusterX-02. RPC: Remote system error - Connection refused
{<netport>|<ifgrp>}
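When this warning appears, one hedged diagnostic (not a prescribed lab step) is to check from the systemshell of that node whether vifmgr is running at all, using the same spmctl/ps pattern this lab applies to notifyd:
clusterX-02% spmctl | grep vifmgr
clusterX-02% ps aux | grep vifmgr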
2.
Home Port
-node clusterX-02
3.
4.
5.
6.
Oct 10*
What do you see?
7.
8.
9.
10.
11.
Step
Action
From the clustershell, create a new network interface as follows (Y ∈ {1, 2, 3, 4}):
1.
clusterX::*> net int create -vserver studentY -lif data1 -role data -data-protocol nfs,cifs,fcache -home-node clusterX-0Y -home-port e0c -address 192.168.81.21Y -netmask 255.255.255.0 -status-admin up
(network interface create)
2.
3.
View the mgwd log file on the node where you are issuing the net int create
command and determine the lif-id which is being reported as duplicate.
4.
Execute the following:
clusterX::*> debug smdb table vifmgr_virtual_interface show -node clusterX-0* -lif-id [lifid/vifid determined from step 3]
What do you see?
5.
5.
MODULE 4: NFS
Exercise 1: Mount issues
Time Estimate: 20 minutes
Step
1.
Action
From the Linux Host execute the following:
# mkdir /cmodeY
# mount studentY:/studentY_nfs /cmodeY
You see the following:
mount: mount to NFS server 'studentY' failed: RPC Error: Program not registered.
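A hedged first check from the Linux host (standard Linux tooling, not a prescribed lab step) is to list which RPC programs the server registers; mountd and nfs should appear when the NFS server is actually serving that address:
# rpcinfo -p studentY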
2.
lif   curr-node   curr-port
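A hedged example of a clustershell command that lists each data LIF and the node and port it currently resides on (field names as used by net int show; not necessarily the exact command this step intended):
clusterX::*> net int show -vserver studentY -fields curr-node,curr-port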
3.
Execute the following to start a packet trace from the nodeshell of the node that was
being mounted, and attempt the mount once more:
clusterX::*> run -node clusterX-01
Type 'exit' or 'Ctrl-D' to return to the CLI
clusterX-01> pktt start e0d
e0d: started packet trace
From the Linux Host attempt the mount once more as shown below:
vserver_fs
trend
4.
5.
Step
1.
Action
From the Linux Host attempt to mount volume studentX_nfs.
2.
From clustershell execute the following to find the export policy associated with the
volume studentX_nfs:
cluster1::*> vol show -vserver studentX -volume studentX_nfs -instance
Next use the export-policy rule show to find the properties of the export policy
associated with the volume studentX_nfs
Why did you get an access denied error?
How will you fix the issue?
3.
Now once again attempt to mount studentX_nfs from the Linux Host
# mount studentX:/studentX_nfs /cmode
mount: studentX:/studentX_nfs failed, reason given by server:
No such file or directory
What issue is occurring here?
4.
Now once again attempt to mount studentX_nfs from the Linux Host
# mount studentX:/studentX_nfs /cmode
Is the mount successful?
If yes, cd into the mount point
#cd /cmode
-bash: cd: /cmode: Permission denied
How do you resolve this?
Note: Depending on how you resolved the issue with the export-policy in step 1,
you may not see any error here. In that case, move on to step 4.
If you unmount and remount, does it still work?
5.
Try to write a file into the mount
[root@nfshost cmode]# touch f1
drwxr-xr-x 26 root  root
-rw-r--r--  1 admin admin
drwxrwxrwx 12 root  root
6.
Step
1.
Action
From the Linux Host execute:
# cd /nfsX
-bash: cd: /nfsX: Stale NFS file handle
2.
Unmount the volume from the client and try to re-mount. What happens?
3.
lif
curr-node
4.
Look for volumes with the MSID in the error shown in the vldb log as follows:
From the clustershell, execute the following to find the aggregate where the volume
being mounted (nfs_studentX) lives and on which node that aggregate lives:
cluster1::*> vol show -vserver studentX -volume nfs_studentX -fields aggregate
(volume show)
vserver    volume    aggregate
clusterY-0X
Go to the nodeshell of the node (shown above) that hosts the volume and its
aggregate, use the showfh command, and convert the msid from hex.
::> run -node clusterY-0X
> priv set diag
*> showfh /vol/nfs_studentX
flags=0x00 snapid=0 fileid=0x000040 gen=0x5849a79f
fsid=0x16cd2501 dsid=0x0000000000041e msid=0x00000080000420
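For reference, a quick way to convert the msid above from hex to decimal so it can be matched against the MSID reported in the vldb log (any method works; this uses the shell's printf, available in the systemshell or on the Linux host):
% printf "%d\n" 0x00000080000420
2147484704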
5.
6.
MODULE 5: CIFS
Instructions to Students:
As mentioned in the lab handout the valid windows users in the domain
Learn.NetApp.local are:
a) Administrator
b) Student1
c) Student2
Step
1.
Action
Find the node where the IP(s) for vserver studentX is hosted
From the RDP machine do the following to start a command window
Start->Run->cmd
In the command window type
ping studentX
From the clustershell find the node on which the IP is hosted (Refer to NFS Exercise
3)
Login to the console of that node and execute the steps of this exercise
2.
3.
Note: For all the steps of this exercise, clusterY-0X should be the name of the
local node.
Type the following to verify the name mapping of the Windows user student1.
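The exact command for this step is release-dependent; one plausible hedged form of the diag secd name-mapping lookup (treat the command path and parameters as assumptions to verify):
cluster1::diag secd*> name-mapping show -node local -vserver studentX -direction win-unix -name student1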
4.
5.
Type the following to test a Windows login for your Windows user name in diag secd:
cluster1::diag secd*> authentication login-cifs -node local -vserver studentX -user <username that you have used to RDP to the windows machine>
6.
[ Cache: LSA/learn.netapp.local ]
Queue> Waiting: 0, Max Waiting: 1, Wait Timeouts: 0, Avg Wait: 0.00ms
Performance> Hits: 1, Misses: 4, Failures: 0, Avg Retrieval: 6795.40ms
7.
8.
Debug    OFF
Type the following to set and view the current logging level in secd
cluster1::diag secd*> log set -node clusterY-0X -level err
Setting log level to "Error"
9.
Error    OFF
Type the following to enable tracing in secd to capture the logging level specified
cluster1::diag secd*> trace show -node local
Trace Spec
--------------------------------------
Trace spec has not been set.
cluster1::diag secd*> trace set -node cluster1-01 -traceall yes
Trace spec set successfully for trace-all.
TraceAll:
10.
Type the following to check the secd configuration for comparison with the ngsh settings:
cluster1::diag secd*> config query -node local -source-name cifs-server
kerberos-realm  machine-account  nis-domain  vserver  vserverid-to-name
unix-group-membership  local-unix-user  local-unix-group  kerberos-keyblock
ldap-config  ldap-client-config  ldap-client-schema  kerberos  name-mapping
nfs-  cifs-server-security  dns  virtual-interface  routing-group-routes
cifs-server-options  cifs-preferred-dc  secd-cache-config
vserver: 6
cur_pwd:
01433517c8acbbf66c2e287b4bee56f5d8b707cfb69710737bfb2061
6ebe61fc31163acde2b5a827f3c2d395b89fef15f28a8f514c147906
580cbaa30b4a1361444f76036d2c590222ce1a0feaa56779
new_pwd:
installdate: 1345202787
sid: S-1-5-21-3281022357-2736815186-1577070138-1610
11.
netbios-to-ad-domain  ems-  ldap-groupid-to-name  ldap-userid-to-creds
ldap-groupname-to-id  ldap-username-to-creds  name-to-sid  log-duplicate
sid-to-name  nis-groupname-to-id  nis-groupid-to-name  nis-userid-to-creds
nis-group-membership  nis-username-to-creds  netgroup  bad-route-to-target
schannel-key  lif-
cluster1::diag secd*> cache clear -node clusterY-0X -vserver studentX -cache-name ad-to-netbios-domain
Type the following to clear all caches together
cluster1::diag secd*> restart -node clusterY-0X
12.
From the RDP machine, close the CIFS share \\studentX opened in Windows Explorer.
Action
From the RDP machine, access the CIFS share \\studentX:
Start->Run->\\studentX
What error message do you see?
2.
Does the user seem to be functioning properly? If not, what error do you get?
3.
4.
5.
Which log in systemshell can we look at to see errors for this problem?
6.
7.
8.
The Windows Explorer window that opens when you navigate to Start->Run->\\studentX shows 2 shares:
a) studentX
b) studentX_child
Try to access the shares
What happens?
Do the following:
Enable debug logging for secd on the node that owns your data lifs
Close the CIFS session on the Windows host and run net use /d * from
cmd to clear cached sessions and retry the connection
9.
Given the results of the previous tests, what could the issue be here?
10.
rootvolume
Vserver: studentX
Share: studentX
CIFS Server NetBIOS Name: STUDENTX
Path: /studentX_cifs
Share Properties: oplocks
browsable
changenotify
Symlink Properties:
File Mode Creation Mask:
Directory Mode Creation Mask:
Share Comment:
Share ACL: Everyone / Full Control
File Attribute Cache Lifetime:
cluster1::*> vserver cifs share show -vserver studentX -share-name studentX_child
Vserver: studentX
Share: studentX_child
CIFS Server NetBIOS Name: STUDENTX
Path: /studentX_cifs_child
Share Properties: oplocks
browsable
changenotify
Symlink Properties:
File Mode Creation Mask:
Directory Mode Creation Mask:
Share Comment:
Share ACL: Everyone / Full Control
File Attribute Cache Lifetime:
From the above commands, obtain the names of the volumes being accessed via the
shares.
11.
Now that you know the volumes you are trying to access, use fsecurity show to view
the permissions on them.
cluster1::*> vol show -vserver studentX -volume studentX_cifs -instance
From the output, determine the node on which the volume and its aggregate are
hosted. From the node shell of that node, run:
cluster1-01> fsecurity show /vol/studentX_root
What do you see?
12.
13.
Step
1.
Action
From a client go Start -> Run -> \\studentX\studentX
What do you see?
2.
3.
From the nodeshell of the node where the volume and its aggregate is hosted run:
cluster1-01> fsecurity show /vol/student1_cifs
[/vol/student1_cifs - Directory (inum 64)]
Security style: NTFS
Effective style: NTFS
Unix security:
uid: 0
gid: 0
mode: 0777 (rwxrwxrwx)
4.
From the above command, obtain the sid of the owner of the volume.
From ngsh run:
cluster1::*> diag secd authentication translate -node local -vserver studentX -sid S-1-5-32-544
What do you see?
5.
Action
Try to access \\studentX\studentX
What do you see?
2.
3.
4.
What does the event log show? What about the secd log? (Exercise 2 steps 3 and 8)
From nodeshell of the node that hosts the volume and its aggregate run:
fsecurity show /vol/studentX_cifs
Do the permissions show that access should be allowed?
5.
From clustershell obtain the name of the export-policy associated with the volume as
follows:
cluster1::> volume show -vserver studentX -volume student1_cifs -fields policy
Now view details of the export-policy obtained in the previous command:
cluster1::> export-policy rule show -vserver studentX -policyname <policy name obtained from the above command>
cluster1::> export-policy rule show -vserver studentX -policyname <policy name obtained from the above command> -ruleindex <rule index applicable>
What do you see?
How do you fix the issue?
Step
1.
Action
Review your SAN configuration on the cluster:
- Licenses
- Interfaces
2.
3.
Create an igroup and add the iSCSI IQN of your host to the group.
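A hedged example of what the igroup creation can look like from the clustershell (the igroup name and initiator IQN are illustrative assumptions, not lab values):
cluster1::> lun igroup create -vserver studentX -igroup studentX_ig -protocol iscsi -ostype windows -initiator iqn.1991-05.com.microsoft:labhost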
4.
5.
Map the LUN and access it from the lab host. Format the LUN and write data to it.
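A hedged example of the mapping step (the LUN path and igroup name are illustrative assumptions):
cluster1::> lun map -vserver studentX -path /vol/studentX_iscsi/lun1 -igroup studentX_ig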
6.
From clustershell
cluster1::*> iscsi show
What do you see?
cluster1::*> debug seqid show
What do you see?
7.
EXERCISE 2
TASK 1: TROUBLESHOOT QUORUM ISSUES
STEP ACTION
1.
2.
3.
4.
5.
6.
7.
Team member 2 on the Node 2 ngsh, view the current cluster kernel status:
::*> cluster kernel-service show -instance
8.
Team member 2 on the Node 2 ngsh, bring down the cluster network LIFs on the interface:
::*> net int modify -vserver clusterY-02 -lif clus1,clus2 -status-admin down
STEP ACTION
9.
Team member 2 on the Node 2 ngsh, view the current cluster kernel status:
::*> cluster kernel-service show -instance
10.
Team member 1 on the Node 1 ngsh, view the current cluster kernel status:
::*> cluster kernel-service show -instance
11.
On the Node 2 PuTTY interface, enable the cluster network LIFs on the interface:
::*> net int modify -vserver cluster1-02 -lif clus1,clus2 -status-admin up
12.
Team member 2 on the Node 2 ngsh, view the current cluster kernel status:
::*> cluster kernel-service show -instance
What do you see?
13.
Team member 1 on the Node 1 ngsh, view the current cluster kernel status:
::*> cluster kernel-service show -instance
What do you see?
14.
STEP ACTION
15.
Team member 1 on the Node 1 ngsh, view the current bcomd information:
cluster1::*> debug smdb table bcomd_info show
What do you see?
16.
Team member 2, reboot Node 2 to have it start participating in SAN quorum again:
::*> reboot -node clusterY-02
17.
18.
19.
20.
Verify with the cluster kernel service that both nodes have a status of in quorum (INQ):
::*> cluster kernel-service show -instance
::*> debug smdb table bcomd_info show
In this task, you bring down the LIFs that are associated with a LUN.
STEP ACTION
1.
On your own, disable LIFs that are associated with studentX_iscsi and determine how this action
impacts connectivity to your LUN on the Windows host.
2.
END OF EXERCISE
Step
Action
1.
What are two ways we can see where the nvfail option is set on a volume?
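A hedged illustration of two places the setting is visible (syntax assumptions to verify on your release): in the clustershell, the volume's nvfail field can be queried, and in the nodeshell, vol status lists nvfail=on in its Options column, as seen in the root-volume exercise earlier:
cluster1::> volume show -vserver studentX -volume studentX_nfs -fields nvfail
cluster1-01> vol status studentX_nfs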
2.
3.
4.
MODULE 7: SNAPMIRROR
Exercise 1: Setting up Intercluster SnapMirror
Time Estimate: 20 minutes
Step
1.
Action
From the clustershell of cluster1, run:
cluster1::> snapmirror create -source-path cluster1://student1/student1_snapmirror -destination-path cluster2://student3/student3_dest -type DP -tries 8 -throttle unlimited
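If the create fails, one hedged thing to verify is that the two clusters are actually peered, since intercluster SnapMirror requires a cluster peer relationship; for example:
cluster1::> cluster peer show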
2.
3.
4.
5.
6.
7.
Step
1.
Action
From the clustershell of cluster1, run:
cluster1::*> snapmirror create -source-path cluster1://student1/student1_snapmirror -destination-path cluster2://student3/student3_dest -type DP -tries 8 -throttle unlimited
What error do you see? What might be going wrong?
2.
3.
After correcting the issue, run the following command in the clustershell of cluster2:
cluster2::> snapmirror create -source-path cluster1://student1/student1_snapmirror -destination-path cluster2://student3/student3_dest -type DP -tries 8 -throttle unlimited
::> snapmirror show
What do you see? Is the snapmirror functioning?
How do you get the mirror working if it's not?
4.
After the snapmirror is confirmed as functional, check to see how long it has been
since the last update (snapmirror lag).
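A hedged example of checking the lag from the destination cluster (the lag-time field name is an assumption to verify on your release):
cluster2::> snapmirror show -destination-path cluster2://student3/student3_dest -fields state,lag-time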
Exercise 3: LS Mirrors
Time Estimate: 20 minutes
Step
1.
Action
Create two LS mirrors that point to your studentX_snapmirror volume.
clusterY::*> volume create -vserver studentX -volume studentX_LS_snapmirror -aggregate studentX -size 100MB -state online -type DP
[Job 265] Job succeeded: Successful
What steps did you have to consider? Check the MSIDs and DSIDs for the source
and destination volumes. What do you notice?
clusterY::*> volume show -vserver studentX -fields msid,dsid
2.
Attempt to initialize one of the mirrors using the snapmirror initialize command.
cluster1::*> snapmirror initialize -destination-path cluster1://student1/student1_LS_snapmirror
[Job 276] Job is queued: snapmirror initialize of destination cluster1://student1/student1_LS_snapmirror.
3.
After initializing the LS mirrors, try to update the mirrors using snapmirror update.
clusterY::*> snapmirror update -destination-path clusterY://studentX/studentX_LS_snapmirror
[Job 279] Job is queued: snapmirror update of destination clusterY://studentX/studentX_LS_snapmirror.
clusterY::*> job show
4.
5.
clusterY://studentX/studentX_snapmirror
What do you see in ls on the host? Why?
Modify the source volume's UNIX permissions to 000:
clusterY::*> volume modify -vserver studentX -volume studentX_snapmirror -unix-permissions 000
Queued private job: 163