
NetApp Storage
Clustered ONTAP

Configuration & Operation Instruction

12.2018
Document version 3.6

Document control

date        version no.  author                change/addition
20/07/2013  2.0          Mathias Dubourg       Initial release for Clustered DoT 8.2
19/09/2014  2.1          Julien Lutran         Sophos Antivirus configuration added (§4.20)
16/01/2015  2.2          Mathias Dubourg       Updated with QoS
12/04/2015  2.3          Morgan DE KERSAUSON   Add NFS export §3.8; add IPspace configuration §4.3
04/05/2016  2.4          Antoine Gutierrez     Added Advanced Drive Partitioning, NDMP modes & tape drive operations
18/05/2017  2.5          Antoine Gutierrez     SnapLock configuration added (§4.22)
12/06/2017  2.6          Mathias Dubourg       Updated with SnapMirror Transition + Harvest user
27/06/2017  2.7          Antoine Gutierrez     MetroCluster configuration added (§4.25)
05/09/2017  2.8          Antoine Gutierrez     NFSv4 configuration added (§3.9)
30/11/2017  2.9          Christophe Auchabie   Disk Clearing/Sanitization process
01/12/2017  3.0          Christophe Auchabie   Ndmpcopy usage
08/12/2017  3.1          Christophe Auchabie   XCP Migration Tool
08/12/2017  3.2          Antoine Gutierrez     §4.25 MetroCluster configuration added and updated
15/12/2017  3.3          Christophe Auchabie   Update XCP / NDMPCopy
07/06/2018  3.4          Mathias Dubourg       Update for ONTAP 9.3 (Adaptive QoS, Inline Aggregate-level Data Deduplication)
16/10/2018  3.5          Antoine Gutierrez     SVM DR documentation added
14/12/2018  3.6          Mathias Dubourg       Update with NetApp Volume Encryption
16/01/2019  3.7          François Mocard       Update with SVM DR description
08/02/2019  3.8          François Mocard       Update with SVM DR activation

© copyright, Equant 2006


All rights reserved.
The information contained in this document is the property of Equant and its affiliates and
subsidiary companies forming part of the Equant group of companies (individually or collectively).
No part of this document may be reproduced, stored in a retrieval system, or transmitted in any
form or by any means; electronic, mechanical, photocopying, recording, or otherwise, without the
prior written permission of Equant. Legal action will be taken against any infringement.
Equant is part of the France Telecom group and operates under the name Orange Business Services.


Table of contents

1 Introduction........................................................................................... 6
1.1 Document purpose.................................................................................................... 6
1.2 Audience................................................................................................................... 6
1.3 Acronyms and Abbreviations..................................................................................... 6
2 Architecture...........................................................................................7
2.1 Naming Conventions................................................................................................. 7
3 Operation instructions............................................................................8
3.1 NetApp management tools........................................................................................ 8
3.1.1 Storage Web GUIs....................................................................................... 8
3.1.2 Storage Command Line Interface (CLI).......................................................8
3.1.3 Storage Serial Console................................................................................ 9
3.2 Qtree Clustered Data ONTAP..................................................................................... 9
3.2.1 Modify Qtrees............................................................................................. 9
3.2.2 Display Qtree Statistics.............................................................................10
3.2.3 Reset Qtree Statistics............................................................................... 10
3.2.4 Delete Empty Qtrees................................................................................ 10
3.2.5 Delete Qtrees that Contain Files...............................................................11
3.2.6 Display Information About Qtrees.............................................................11
3.2.7 Rename a Qtree........................................................................................ 12
3.3 Deduplication Clustered Data ONTAP......................................................................12
3.3.1 Inline Aggregate-level Data Deduplication................................................14
3.4 Compression Clustered Data ONTAP........................................................................ 16
3.4.1 Enable Postprocess Compression..............................................................16
3.4.2 Compress and Deduplicate Existing Data.................................................16
3.4.3 View Compression Space Savings.............................................................16
3.4.4 Turn Off All Compression...........................................................................17
3.4.5 Compression Inline Clustered Data ONTAP................................................17
3.4.6 Enable Inline Compression........................................................................17
3.4.7 Compress and Deduplicate Existing Data.................................................17
3.4.8 View Compression Space Savings.............................................................17
3.4.9 Turn Off Inline Compression...................................................................... 18
3.5 RLM Clustered Data ONTAP..................................................................................... 18
3.5.1 Log in to RLM from Administration Host....................................................18
3.5.2 Connect to Storage System Console from RLM.........................................18
3.5.3 RLM Administrative Mode Functions.........................................................18
3.5.4 RLM Advanced Mode Display Information.................................................20
3.5.5 Manage RLM with Data ONTAP..................................................................20
3.5.6 RLM and SNMP Traps................................................................................ 21
3.5.7 Disable SNMP Traps for Only RLM.............................................................21
3.6 Firewall for Clustered Data ONTAP...........................................................................21
3.7 CIFS Clustered Data ONTAP..................................................................................... 21
3.7.1 Check Active Client Connections...............................................................22
3.7.2 Create an Export Policy............................................................................. 22
3.7.3 Add a Rule to an Export Policy..................................................................22
3.7.4 Create a Name Mapping from Windows to UNIX.......................................22
3.8 NFSv3 Clustered Data ONTAP.................................................................................. 22
3.8.1 Create Export Policy.................................................................................. 22
3.8.2 Add a Rule to an Export Policy..................................................................22
3.8.3 Attach the export policy to the vol root of the vserver..............................23
3.8.4 Create the volume/qtree to export with junction path...............................23
3.9 NFSv4 Clustered Data ONTAP.................................................................................. 23
3.9.1 Considerations.......................................................................................... 23
3.9.2 Pre-requisites............................................................................................ 25
3.9.3 Configuration............................................................................................ 25
3.10 Syslog Clustered Data ONTAP.................................................................................. 29
3.10.1 Display Events.......................................................................................... 29
3.10.2 Display Event Status................................................................................. 29
3.11 User Access for Clustered Data ONTAP....................................................................29
3.11.1 Security-Related User Tasks...................................................................... 29
3.11.2 General User Administration..................................................................... 30
3.12 Data Protection on Clustered Data ONTAP...............................................................30


3.12.1 Snapshot Clustered Data ONTAP...............................................................30


3.12.2 Restore Entire Volume from Snapshot Copy..............................................30
3.12.3 Restore File from Snapshot Copy..............................................................31
3.12.4 Volume SnapMirror Async Clustered Data ONTAP......................................31
3.12.5 Run SnapMirror Show................................................................................ 31
3.12.6 Run SnapMirror Modify..............................................................................31
3.12.7 Run SnapMirror Delete.............................................................................. 31
3.13 FlexClone Clustered Data ONTAP............................................................................. 32
3.13.1 Create FlexClone Volume..........................................................................32
3.13.2 View Basic FlexClone Information.............................................................32
3.13.3 View Detailed FlexClone Information........................................................32
3.13.4 Split FlexClone Volumes............................................................................ 33
3.13.5 View Space Estimates............................................................................... 33
3.13.6 Start FlexClone Split................................................................................. 33
3.13.7 Stop FlexClone Split.................................................................................. 33
3.13.8 View FlexClone Split Status.......................................................................33
3.14 QOS......................................................................................................................... 34
3.15 Adaptive QOS.......................................................................................................... 34
4 Configuration instructions.....................................................................36
4.1 Disks....................................................................................................................... 36
4.1.1 Initialize Disks........................................................................................... 36
4.1.2 Advanced Drive partitioning.....................................................................36
4.2 Networking.............................................................................................................. 43
4.2.1 IFGRP Static Multimode Clustered Data ONTAP.........................................43
4.2.2 Create Static Multimode Interface Group..................................................43
4.2.3 Add Ports to Static Multimode Interface Group.........................................43
4.2.4 IFGRP Single-Mode Clustered Data ONTAP................................................43
4.2.5 IFGRP LACP Clustered Data ONTAP...........................................................44
4.2.6 VLAN Clustered Data ONTAP..................................................................... 44
4.2.7 Jumbo Frames Clustered Data ONTAP.......................................................44
4.3 IPSPACE creation..................................................................................................... 44
4.3.1 Ipspace..................................................................................................... 44
4.3.2 Broadcast domain..................................................................................... 44
4.3.3 Vserver creation....................................................................................... 44
4.4 Cluster creation and cluster join.............................................................................. 45
4.4.1 Establish Serial Connection to Storage Controller.....................................45
4.4.2 Validate Storage Controller Configuration.................................................45
4.4.3 Configure Boot Variable............................................................................45
4.4.4 Create Cluster........................................................................................... 46
4.4.5 Join Cluster............................................................................................... 46
4.4.6 Cluster Create for Cluster-Mode................................................................47
4.4.7 Cluster Join for Clustered Data ONTAP......................................................48
4.5 Disk Assign Clustered Data ONTAP..........................................................................48
4.6 Flash Cache Clustered Data ONTAP.........................................................................49
4.7 Aggregates 64 Bit Clustered Data ONTAP................................................................49
4.8 Compression Clustered Data ONTAP........................................................................ 49
4.9 RLM Clustered Data ONTAP..................................................................................... 49
4.10 Service Processor Clustered Data ONTAP.................................................................50
4.11 Storage Failover Clustered Data ONTAP...................................................................51
4.12 License and protocols.............................................................................................. 51
4.12.1 CIFS Clustered Data ONTAP...................................................................... 51
4.12.2 NFSv3 Clustered Data ONTAP...................................................................51
4.12.3 FC Clustered Data ONTAP.........................................................................51
4.12.4 iSCSI Clustered Data ONTAP..................................................................... 52
4.13 DNS Clustered Data ONTAP..................................................................................... 52
4.14 NTP Clustered Data ONTAP...................................................................................... 52
4.15 SNMP Cluster-Mode................................................................................................. 53
4.16 Syslog Clustered Data ONTAP.................................................................................. 53
4.17 AutoSupport HTTPS Clustered Data ONTAP..............................................................54
4.18 User Access for Clustered Data ONTAP....................................................................54
4.19 HTTPS Access Clustered Data ONTAP......................................................................54
4.20 NDMP Clustered Data ONTAP................................................................................... 55
4.20.1 NDMP modes of operation........................................................................ 55
4.21 Tape drives.............................................................................................................. 56


4.22 Antivirus solution for clustered Data ONTAP............................................................57


4.23 SnapLock................................................................................................................. 58
4.23.1 Snaplock configuration............................................................................. 60
4.23.2 Committing files to WORM........................................................................62
4.23.3 SnapLock with SnapVault.......................................................................... 63
4.23.4 Mirroring WORM files................................................................................ 65
4.24 7-Mode Data Transition Using Snapmirror................................................................66
4.25 SVM DR................................................................................................................... 72
4.25.1 Architecture overview............................................................................... 72
4.25.2 NetApp guides.......................................................................................... 74
4.25.3 Limitation and requirements..................................................................... 74
4.25.4 Disaster Recovery preparation..................................................................74
4.25.5 Disaster Recovery activation....................................................................85
4.25.6 Source SVM reactivation...........................................................................89
4.26 MetroCluster............................................................................................................ 95
4.26.1 Architecture overview............................................................................... 95
4.26.2 Limitation and requirements..................................................................... 99
4.26.3 MetroCluster Cabling.............................................................................. 101
4.26.4 ATTO FibreBridge Configuration..............................................................106
4.26.5 Brocade FC Switch Configuration............................................................107
4.26.6 MetroCluster Configuration..................................................................... 111
4.26.7 check the configuration.......................................................................... 114
4.27 Configure Harvest user.......................................................................................... 117
4.28 Disk Sanitization.................................................................................................... 118
4.28.1 Disk Clearing and Disk Sanitization.........................................................118
4.28.2 Certification Templates........................................................................... 121
4.29 NDMPCopy............................................................................................................ 121
4.29.1 Enabling SVM-scoped NDMP on the cluster.............................................121
4.29.2 Configuring LIFs...................................................................................... 122
4.29.3 NDMP operating in 'Vserver-scope'.........................................................125
4.29.4 Lab Testing NDMPCopy........................................................................... 126
4.29.5 NDMPCopy Pre-Requisites....................................................................... 126
4.30 XCP Migration Tool................................................................................................. 126
4.30.1 What Is XCP?........................................................................................... 126
4.30.2 Features.................................................................................................. 126
4.30.3 Prerequisites........................................................................................... 127
4.30.4 XCP Catalog Location.............................................................................. 127
4.30.5 Activate.................................................................................................. 127
4.30.6 Configure and Run................................................................ 128
4.30.7 XCP Lab Testing...................................................................................... 129
4.30.8 Basic commands for XCP Copy & Sync....................................................130
4.31 Volume data encryption with NVE......................................................................... 130
4.31.1 Prerequisites........................................................................................... 130
4.31.2 Configuration of the key management....................................................131
4.31.3 Enabling encryption on a volume............................................................132


1 Introduction
1.1 Document purpose

The Configuration & Operation Instruction (COPI) document provides the main details
needed for:

- Daily storage operations on NetApp filers

- Storage reporting/management

- Storage troubleshooting

1.2 Audience

The COPI is a reference document for all engineering team members who have
to work on this specific product.

The document may also be used by the Implementation team (TIO) and the operational
team (TOO).

This document may be used to support:

 The initial configuration performed by the Implementation team (TIO)
 Change management performed by the operational team (TOO)
 Incident management performed by TIO or TOO

The document cannot be used in any contract or commitment to the customer
about the Equant service.

1.3 Acronyms and Abbreviations

Provide a list of the acronyms and abbreviations used in this document and the
meaning of each.


2 Architecture
The following table describes the situations in which each architecture fits.

Customer needs                              Architecture
High availability                           Two-node cluster (minimum) clustering solution
Service high availability and DR capacity   Two-node cluster (minimum) with SnapMirror asynchronous mirroring
Backup option                               Single-node cluster

For more information about supported architectures, please refer to the TSD
documentation.

2.1 Naming Conventions

For more information about naming conventions, please refer to the Naming
Conventions documentation: ENG-Naming Convention for NetApp


3 Operation instructions
3.1 NetApp management tools

The following management tools are available:

 Storage GUI: NetApp System Manager Web GUI for storage administration
 Storage CLI: use the NetApp CLI if you want to perform some operations not
available through GUI
 Storage Service Processor CLI: use the NetApp Service Processor CLI if the
general CLI is unavailable or if you want to perform controller
takeovers/givebacks or any other maintenance requiring a reboot
 Storage Serial console: use this console only if you can’t connect through
NetApp CLI anymore (network outage, unstable configuration)
 NetApp OnCommand Core: use this central GUI to manage and monitor all
NetApp storage controllers.

3.1.1 Storage Web GUIs

To access the Web Console:

 Orange administrators access the Web GUI by entering in their browser a URL
dedicated to management tasks:

URL format                      Description
https://a.b.c.d:<port>/<path>   NetApp System Manager
https://a.b.c.d:<port>/<path>   NetApp OnCommand Core
https://a.b.c.d:<port>/<path>   …

3.1.2 Storage Command Line Interface (CLI)

To access the CLI:

 You can connect on:


IP Description
a.b.c.d NetApp Storage Controller A
a.b.c.d NetApp Storage Controller A – Service Processor
a.b.c.d NetApp Storage Controller B
a.b.c.d NetApp Storage Controller B – Service Processor
a.b.c.d …

 The credentials can be found in PMR.
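
For illustration, a minimal SSH session from an administration host might look like the
following (the IP address is a placeholder for the cluster management address and the
user name is an assumption; the actual credentials are in PMR):

ssh admin@a.b.c.d
cluster1::> cluster show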


3.1.3 Storage Serial Console

To access the Serial Console:

 On the rear panel, plug a Serial Cable into the RS-232 port
 Configure the terminal or a terminal emulation utility, such as CRT or
HyperTerminal, to use these serial connection parameters:
- 9600 bits per second
- 8-bit No Parity (8N1)
- 1 Stop Bit
- No flow control

 As soon as you connect through the serial port, enter the admin login and the
associated password.
 You are now connected to the CLI.
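
As a minimal sketch, assuming a Linux administration host where the serial adapter
appears as /dev/ttyUSB0 (the device name is an assumption), a terminal emulator can be
attached with the parameters above:

screen /dev/ttyUSB0 9600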

3.2 Qtree Clustered Data ONTAP

3.2.1 Modify Qtrees

To modify an existing qtree to change its security, locking, and permissions properties,
complete the following steps:
1 Use the volume qtree modify command to modify an existing qtree.
2 To modify a qtree, specify the virtual server, volume, and name of the qtree.
3 To modify the attributes of the default qtree qtree0, omit the -qtree parameter from the command
or specify the value "" for the -qtree parameter.
4 Use the following parameters to modify the additional attributes of a qtree:
 Security style. Use the -security-style parameter to specify the security style for the qtree.
Possible security style values include unix (for UNIX mode bits), ntfs (for CIFS ACLs), and
mixed (for mixed NFS and CIFS access).
 Opportunistic locking. Use the -oplock-mode parameter to enable oplocks for the qtree.
 UNIX permissions. Use the -unix-permissions parameter to specify the default UNIX
permissions for the qtree when the value of the -security-style parameter is set to unix or
mixed. Specify UNIX permissions either as a four-digit octal value (for example, 0700) or in
the style of the UNIX ls command (for example, -rwxr-x---).
For information about UNIX permissions, see the UNIX or Linux documentation.
If UNIX permissions are not specified, the qtree inherits them from the volume on which it is
being created.
A quota policy or quota policy rule cannot be applied to qtree0. If a value for an optional
attribute is not specified, the qtree inherits it from the volume on which it resides.
5 Use the volume qtree show command to display information about the modified qtree.
The following example displays a modified qtree named qtree1. In this example:
 The virtual server is named vs0
 The volume containing the qtree is named vol0
 The qtree security style is UNIX
 Oplocks are enabled
node::> volume qtree modify -vserver vs0 -volume vol0 -qtree qtree1 -security-style unix
-oplocks enabled
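
As an additional hedged illustration of the parameters described above, the default UNIX
permissions could be adjusted on the same qtree (the values are examples only):

node::> volume qtree modify -vserver vs0 -volume vol0 -qtree qtree1 -security-style unix
-unix-permissions 0755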

3.2.2 Display Qtree Statistics

To display statistical information about qtrees, complete the following step:


1 Use the volume qtree statistics command to display statistical information about qtrees.
Qtree statistics are available only when the volume containing the qtree is online. They can be
collected from the time the volume is created, when the volume state is set to online, or when the
statistics have been reset by using the volume qtree statistics-reset command. Qtree statistics are
not persistent; they are set to zero if the following activities occur:
 A node is restarted.
 A storage takeover and giveback occurs.
 The volume is set to offline and then to online.
The command output depends on the parameter or parameters specified with the command. If no
parameters are specified, the command displays the following statistical information about all
qtrees:
 Virtual server name
 Volume name
 Qtree name
 NFS operations
 CIFS operations
 Operations since statistics were last reset (advanced privilege level and higher)
 NFS operations since the volume was created (advanced privilege level and higher)
 CIFS operations since the volume was created (advanced privilege level and higher)
 Operations since the volume was created (advanced privilege level and higher)
The following example displays statistics information for all qtrees on the virtual server
named vs0.
node::> volume qtree statistics -vserver vs0
Virtual
Server Volume Qtree NFS Ops CIFS Ops
---------- ------------- ------------ ------- --------
vs0 vol0 qtree1 10876 2678
vs0 vol1 qtree1a 16543 0
vs0 vol2 qtree2 0 0
vs0 vol2 qtree2a 0 0
4 entries were displayed.

3.2.3 Reset Qtree Statistics

To reset qtree statistics for a volume, complete the following step:


1 Use the volume qtree statistics-reset command to reset qtree statistics for a volume.
When resetting qtree statistics, specify the virtual server and volume name.
The following example displays the command to reset statistics for all qtrees on the
virtual server named vs0 and the volume named vol0.
node::> volume qtree statistics-reset -vserver vs0 -volume vol0

3.2.4 Delete Empty Qtrees

To delete an empty qtree, complete the following steps:


1 Use the volume qtree delete command to delete an empty qtree.


To delete a qtree, specify the virtual server name, volume on which the qtree is located, and the
qtree name.
Do not delete the special qtree referred to as qtree0, which in the CLI is denoted by empty
quotation marks ("") and has the ID zero (0).
If there is a quota policy or quota policy rule associated with a qtree, it is deleted when the qtree is
deleted.
2 Use the volume qtree show command to display information about the remaining qtrees.
The following example shows the command to delete a qtree named qtree4. The virtual
server name is named vs0 and the volume containing the qtree is named vol0.
node::> volume qtree delete -vserver vs0 -volume vol0 -qtree qtree4

3.2.5 Delete Qtrees that Contain Files

To force-delete a qtree that contains files, complete the following step:


1 Use the set command to set the privilege level to advanced.
The following example shows the command to set the privilege level to advanced.
node::> set -privilege advanced
Warning: These advanced commands are potentially dangerous; use them only when directed to do
so by NetApp personnel.
Do you want to continue? {y|n}: y
To delete a qtree that contains files, complete the following steps:
1 Use the volume qtree delete command with the -force parameter to delete a qtree that
contains files.
 When deleting a qtree, specify the virtual server name, the volume on which the qtree is
located, and the qtree name.
 Do not delete the special qtree qtree0; in the CLI, it is denoted by empty quotation marks ("")
and has the ID zero (0).
 If there is a quota policy or quota policy rule associated with a qtree, it is deleted when the
qtree is deleted.
2 Use the volume qtree show command to display information about the remaining qtrees.
The following example shows the command to delete a qtree named qtree5, which
contains files. The virtual server is named vs0, and the volume containing the qtree is
named vol0.
node::> volume qtree delete -vserver vs0 -volume vol0 -qtree qtree5 -force true

3.2.6 Display Information About Qtrees

To display information about qtrees for volumes that are online, complete the following
step:
1 Use the volume qtree show command to display information about qtrees.
The command output depends on the parameter or parameters specified with the command. If no
parameters are specified, the command displays the following information about all qtrees:
 Qtree0—when a volume is created, a special qtree referred to as qtree0 is automatically
created for the volume. It represents all of the data stored in a volume that isn't contained in a
qtree. In the CLI output, qtree0 is denoted by empty quotation marks ("") and has the ID zero
(0). Qtree0 cannot be manually created or deleted.
 Virtual server name
 Volume name
 Qtree name
 Security style: UNIX mode bits, CIFS ACLs, or mixed NFS and CIFS permissions
 Whether opportunistic locking is enabled


 Status
The following example displays default information about all qtrees and each qtree ID. On
the virtual server vs0, none of the qtrees were manually created, therefore, only the
qtrees referred to as qtree0 are shown. On the virtual server vs1, the volume vs1_vol1
contains qtree0 and two manually created qtrees, qtree1 and qtree2.
node::> volume qtree show -id
Virtual
Server Volume Qtree Style Oplocks Status ID
---------- ------------- ------------ ------------ --------- -------- ---
vs0 vs0_vol1 "" unix enable readonly 0
vs0 vs0_vol2 "" unix enable normal 0
vs0 vs0_vol3 "" unix enable readonly 0
vs0 vs0_vol4 "" unix enable readonly 0
vs0 root_vs_vs0 "" unix enable normal 0
vs1 vs1_vol1 "" unix enable normal 0
vs1 vs1_vol1 qtree1 unix disable normal 1
vs1 vs1_vol1 qtree2 unix enable normal 2
vs1 root_vs_vs1 "" unix enable normal 0
9 entries were displayed.
To display detailed information about a single qtree, complete the following step:
1 Execute the volume qtree show command with the -instance and -qtree parameters. The
detailed view provides information about UNIX permissions, the qtree ID, and the qtree status.
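
For example, using the qtree names shown in the output above, a detailed view of a
single qtree could be displayed as follows (a sketch; output omitted):

node::> volume qtree show -vserver vs1 -volume vs1_vol1 -qtree qtree1 -instance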
3.2.7 Rename a Qtree

To rename a qtree, complete the following steps:


1 Use the volume qtree rename command to rename a qtree.
When renaming a qtree, specify the following:
 Virtual server name
 Volume on which the qtree is located
 Existing qtree name
 New qtree name
A qtree name cannot contain a forward slash (/) and must be less than 65 characters in length.
2 Use the volume qtree show command to display information about the renamed qtree.
The following example displays the command to rename a qtree from qtree3 to qtree4.
The virtual server is named vs0 and the volume containing the qtree is named vol0.
node::> volume qtree rename -vserver vs0 -volume vol0 -qtree qtree3 -newname qtree4

3.3 Deduplication Clustered Data ONTAP

Table 1 presents the quick-start commands for deduplication.


Table 1) Quick-start commands for deduplication.

volume efficiency on -vserver <vservername> -volume <volname>
    Enable deduplication on the volume.

volume efficiency start -vserver <vservername> -volume <volname> -scan-old-data true
    Optional: run deduplication against data that existed in the volume before deduplication was enabled.

volume efficiency show -vserver <vservername> -volume <volname> -fields progress
    Monitor the status of the deduplication scan.

volume efficiency modify -vserver <vservername> -volume <volname> -policy <policy_name>
    Assign or change a deduplication policy for a volume.

volume efficiency start -vserver <vservername> -volume <volname>
    Manually run deduplication.

volume show -vserver <vservername> -volume <volname> -fields dedupe-space-saved,dedupe-space-saved-percent
    View deduplication space savings in the volume.
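
Putting the quick-start commands together, a minimal sequence for a hypothetical
volume vol01 on Vserver vsm01 (both names are placeholders) would be:

volume efficiency on -vserver vsm01 -volume vol01
volume efficiency start -vserver vsm01 -volume vol01 -scan-old-data true
volume efficiency show -vserver vsm01 -volume vol01 -fields progress
volume show -vserver vsm01 -volume vol01 -fields dedupe-space-saved,dedupe-space-saved-percent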
Table 2 presents the deduplication commands for new data.
Table 2) Deduplication commands for new data.

volume efficiency show
    Display which volumes have deduplication enabled and the current status of any deduplication processes.

volume efficiency policy show
    Display the deduplication scheduling policies available within the cluster.

volume efficiency policy create -vserver <vservername> -policy <policyname> -schedule <cron job schedule> -duration <time interval> -enabled true
    Create a scheduling policy for a specific virtual storage server (Vserver). This policy can then be assigned to a volume. The value for -schedule must correlate to an existing cron job schedule within the cluster. The value for -duration represents how many hours to run deduplication before stopping.

volume efficiency show -fields policy
    Display the assigned scheduling policies for each volume that has deduplication enabled.

volume efficiency modify -vserver <vservername> -volume <volname> -schedule auto
    Modify the deduplication scheduling policy for a specific volume to run only when 20% of new data is written to the volume. The 20% threshold can be adjusted by adding the @num option to auto (auto@num), where num is a two-digit number that specifies the percentage.

volume efficiency start -vserver <vservername> -volume <volname>
    Begin the deduplication process on the specified flexible volume. This process deduplicates any data that has been written to disk after deduplication was enabled on the volume. This process does not deduplicate data that existed on the volume before deduplication was enabled.

volume efficiency stop -vserver <vservername> -volume <volname>
    Suspend an active deduplication operation running on the flexible volume without creating a checkpoint.

volume efficiency show -vserver <vservername> -volume <volname> -fields progress
    Check the progress of post-process deduplication operations running on a volume.

volume show -vserver <vservername> -volume <volname> -fields dedupe-space-saved-percent
    Show the total percentage of space saved in the volume by deduplication (deduplicated / [used + deduplicated] x 100).

volume show -vserver <vservername> -volume <volname> -fields dedupe-space-saved
    Show the total space saved in the volume by deduplication.
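
As an illustration of the policy commands above, the following sketch creates a scheduling
policy, assigns it to a volume, and verifies the assignment. The Vserver, volume, and policy
names are placeholders, and the example assumes that the default daily cron schedule
exists in the cluster:

volume efficiency policy create -vserver vsm01 -policy dedupe_daily -schedule daily -duration 4 -enabled true
volume efficiency modify -vserver vsm01 -volume vol01 -policy dedupe_daily
volume efficiency show -vserver vsm01 -volume vol01 -fields policy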
Table 3 presents the deduplication commands for existing data.


Table 3) Deduplication commands for existing data.

volume efficiency start -vserver <vservername> -volume <volname> -scan-old-data true
    Begin deduplication on the specified flexible volume. Deduplication uses the latest checkpoint if one exists and is less than 24 hours old.

volume efficiency start -vserver <vservername> -volume <volname> -scan-old-data true -use-checkpoint true
    Begin deduplication on the specified flexible volume by using the existing checkpoint information, regardless of the age of the checkpoint information.

volume efficiency start -vserver <vservername> -volume <volname> -scan-old-data true -delete-checkpoint true
    Begin deduplication on the specified flexible volume. Deduplication disregards any checkpoints that exist and bypasses the compression of any blocks that are already deduplicated or locked in Snapshot copies.

volume efficiency show -fields progress
    Check the progress of post-process deduplication operations.
Table 4 presents the commands that disable deduplication.
Table 4) Commands to disable deduplication.

volume efficiency stop -vserver <vservername> -volume <volname>
    Suspend an active deduplication process on the flexible volume without creating a checkpoint.

volume efficiency off -vserver <vservername> -volume <volname>
    Disable deduplication on the specified volume. No additional change logging or deduplication operations are performed, but the flexible volume remains a deduplicated volume and the storage savings are preserved. If this command is used and then deduplication is turned back on for this flexible volume, the flexible volume should be rescanned with the volume efficiency start -scan-old-data true command to gain the maximum savings.

3.3.1 Inline Aggregate-level Data Deduplication

Beginning with ONTAP 9.2, you can perform Cross Volume Sharing in volumes belonging
to the same aggregate using Inline Aggregate-level Deduplication. Cross Volume
Deduplication is enabled by default on AFF systems (no need to activate).
Beginning with ONTAP 9.3, background deduplication jobs run automatically with
Automatic Background Deduplication (ADS) on AFF systems. ADS is enabled by default for
all newly created volumes. The feature uses the block fingerprints created during the
inline deduplication process.

Some useful commands:


The storage aggregate efficiency show command displays storage efficiency
information for all the aggregates. If no parameters are specified, the
command displays the following information for all aggregates (admin privilege):
 Aggregate
 Node
 Cross-vol-background-dedupe State (Enabled, Disabled)


 Cross-vol-inline-dedupe State (Enabled, Disabled)


Example:
cluster:::> storage aggregate efficiency show

Aggregate: aggr0
Node: vivek6-vsim2

Has Cross Volume Deduplication Savings: false


Cross Volume Background Deduplication: false
Cross Volume Inline Deduplication: false

Aggregate: aggr1
Node: vivek6-vsim2

Has Cross Volume Deduplication Savings: true


Cross Volume Background Deduplication: true
Cross Volume Inline Deduplication: true
2 entries were displayed.

The storage aggregate efficiency cross-volume-dedupe revert-to command is used to revert
cross volume deduplication savings on an aggregate (advanced privilege).
Example:
cluster:::> storage aggregate efficiency cross-volume-dedupe revert-to -aggregate aggr1
The revert operation started on aggregate "aggr1" successfully.

cluster:::> storage aggregate efficiency cross-volume-dedupe revert-to -aggregate aggr1


-clean-up true
The revert operation started on aggregate "aggr1" successfully.

The storage aggregate efficiency cross-volume-dedupe show command displays detailed
information about the storage efficiency of all the aggregates. If no parameters are
specified, the command displays the following information for all aggregates (admin
privilege).
Example:
cluster:::> storage aggregate efficiency cross-volume-dedupe show

Aggregate: aggr0
Node: vivek6-vsim2

Has Cross Volume Deduplication Savings: false

--------:Cross Volume Background Deduplication Status:--------


State: false
Progress: -
Operation Status: Idle
Last Operation State: Success
Last Success Operation Begin Time: -
Last Success Operation End Time: -
Last Operation Begin Time: -
Last Operation End Time: -
Last Operation Error: Operation succeeded
Stage: -
Checkpoint Time: -
Checkpoint Operation Type: -
Checkpoint Stage: -

-----------:Cross Volume Inline Deduplication Status:---------


State: false

Aggregate: aggr1
Node: vivek6-vsim2

Has Cross Volume Deduplication Savings: true


--------:Cross Volume Background Deduplication Status:--------


State: true
Progress: -
Operation Status: Idle
Last Operation State: Success
Last Success Operation Begin Time: Wed Aug 30 06:31:50 2017
Last Success Operation End Time: Wed Aug 30 06:31:50 2017
Last Operation Begin Time: Wed Aug 30 06:31:50 2017
Last Operation End Time: Wed Aug 30 06:31:50 2017
Last Operation Error: Operation succeeded
Stage: Cross volume sharing Done
Checkpoint Time: -
Checkpoint Operation Type: -
Checkpoint Stage: -

-----------:Cross Volume Inline Deduplication Status:---------


State: true

2 entries were displayed.

The storage aggregate efficiency cross-volume-dedupe stop command is used to stop cross
volume background deduplication on an aggregate (advanced privilege).
For example:
cluster:::> storage aggregate efficiency cross-volume-dedupe stop -aggregate aggr1
The efficiency operation on aggregate "aggr1" is being stopped.

3.4 Compression Clustered Data ONTAP

Table 5 shows the use cases for compression in clustered Data ONTAP.
Table 5) Use cases for compression in clustered Data ONTAP.

Use Case Procedure


Enable postprocess compression Enable Postprocess Compression

Compress and deduplicate existing data on a Compress and Deduplicate Existing Data
volume

View compression space savings on a volume View Compression Space Savings

Turn off both inline and postprocess compression on Turn Off All Compression
a volume
3.4.1 Enable Postprocess Compression

To enable postprocess compression, complete the following steps:


1 Enable postprocess compression.
volume efficiency on –vserver <<var_vserver01>> –volume <<var_vol01>>
volume efficiency modify –vserver <<var_vserver01>> –volume <<var_vol01>> –compression true

2 View which volumes have postprocess compression enabled.


volume efficiency show -fields compression

3.4.2 Compress and Deduplicate Existing Data

To compress and deduplicate existing data, complete the following steps:


1 Compress and deduplicate existing data on a volume.
volume efficiency start –vserver <<var_vserver01>> –volume <<var_vol01>> -scan-old-data true


2 View the status of the compression operation.


volume efficiency show –vserver <<var_vserver01>> –volume <<var_vol01>> -fields progress

3.4.3 View Compression Space Savings

To view compression space savings, complete the following steps:


1 View the total space savings achieved through the compression of a volume.
volume show –vserver <<var_vserver01>> –volume <<var_vol01>> –fields compression-space-saved

2 View the total percentage of space savings achieved through the compression of a volume.
volume show –vserver <<var_vserver01>> –volume <<var_vol01>> –fields compression-space-saved-percent

3.4.4 Turn Off All Compression

To turn off all data compression, complete the following steps:


1 Disable both inline and postprocess compression on a volume.
volume efficiency modify –vserver <<var_vserver01>> -volume <<var_vol01>> -compression false
-inline-compression false

3.4.5 Compression Inline Clustered Data ONTAP

Table 6 shows some use cases for inline compression of clustered systems.
Table 6) Use cases for compression inline clustered Data ONTAP.

Use Case Procedure


Enable inline compression Enable Inline Compression

Compress and deduplicate existing data on a Compress and Deduplicate Existing Data
volume

View compression space savings on a volume View Compression Space Savings

Turn off inline compression on a volume Turn Off Inline Compression


3.4.6 Enable Inline Compression

To enable inline compression, complete the following steps:


1 Enable inline compression.
volume efficiency on –vserver <<var_vserver01>> –volume <<var_vol01>>
volume efficiency modify –vserver <<var_vserver01>> –volume <<var_vol01>> –compression true –
inline-compression true

2 View which volumes have inline compression enabled.


volume efficiency show -fields inline-compression

3.4.7 Compress and Deduplicate Existing Data

To compress and deduplicate existing data, complete the following steps:


3 Compress and deduplicate existing data on a volume.
volume efficiency start –vserver <<var_vserver01>> –volume <<var_vol01>> -scan-old-data true

4 View the status of the compression operation.


volume efficiency show –vserver <<var_vserver01>> –volume <<var_vol01>> -fields progress

3.4.8 View Compression Space Savings

To view compression space savings, complete the following steps:


5 View the total space savings achieved through the compression of a volume.


volume show –vserver <<var_vserver01>> –volume <<var_vol01>> –fields compression-space-saved

6 View the total percentage of space savings achieved through the compression of a volume.
volume show –vserver <<var_vserver01>> –volume <<var_vol01>> –fields compression-space-saved-percent

3.4.9 Turn Off Inline Compression

To turn off inline compression, complete the following step:


7 Disable inline compression on a volume.
volume efficiency modify –vserver <<var_vserver01>> -volume <<var_vol01>> -inline-compression
false

3.5 RLM Clustered Data ONTAP


Table 7) RLM Clustered Data ONTAP use cases.

Use Case Procedure Name


Access the storage controller through a dedicated Log in to RLM from Administration Host
network interface.

Perform administrative tasks remotely. Connect to Storage System Console from RLM

Configure and define RLM settings through the Manage RLM with Data ONTAP
storage controller interface.
3.5.1 Log in to RLM from Administration Host

To perform administrative tasks remotely, complete the following steps:


Prerequisites:
 The host must have a Secure Shell (SSH) client application that supports SSHv2, and the
account name must be configured with the service-processor application type.
 The RLM must be running firmware version 4.0 or later and configured to use an IPv4
address.
1 Enter the following command from the UNIX host:
ssh <username@RLM_IP_address>

2 When prompted, enter the password.

Notes:
 If six SSH login attempts fail from the same IP address within 10 minutes, the RLM
rejects further SSH login requests and suspends all communication with that IP address for
15 minutes. Communication resumes after 15 minutes, at which time you can log in to the
RLM again.
 The RLM prompt indicates access to the RLM CLI.
3.5.2 Connect to Storage System Console from RLM

To connect to the storage system console from the RLM, complete the following steps:
1 Run the following command at the RLM prompt:
system console

The message Type Ctrl-D to exit displays.


2 Run the system node image show command and press Enter to display the results.
3 Press Ctrl-D to exit and return to the RLM prompt.
cluster1::>
cluster1::> system node image show
(Press Ctrl-D to exit from the storage system console and return to the RLM CLI.)


3.5.3 RLM Administrative Mode Functions

In the RLM administrative mode, use the RLM commands to perform most tasks. Table 8
lists the RLM commands used in administrative mode:
Table 8) RLM commands used in administrative mode.

Function Command
Display system date and time date

Display storage system events logged by the RLM events {all | info | newest | oldest |
search string }

Exit from the RLM CLI exit

Display a list of available commands or help [command]


subcommands of a specified command

Set the privilege level to access the specified mode priv set {admin | advanced | diag}

Display the current privilege level priv show

Reboot the RLM rlm reboot

Display the RLM environmental sensor status rlm sensors [-c]

The -c option, which takes a few seconds to


display, displays current values, rather than
cached values.
Display RLM status rlm status [ -v| -d]

The -v option displays verbose statistics.


The -d option displays RLM debug
information.
Update the RLM firmware rlm update http://path [ -f]

The -f option issues a full image update.


Manage the RSA if it is installed on the storage rsa
system

Log in to the Data ONTAP CLI system console

Press Ctrl-D to return to the RLM CLI.


Dump the system core and reset the system system core

Running this command has the same effect


as pressing the NMI button on a storage
system. The RLM stays operational as long
as input power to the storage system is not
interrupted.
Turn the storage system on or off, or perform a power system power {on | off | cycle}
cycle (which turns system power off and then back on)
    Standby power stays on, even when the storage system is off. During power-cycling, a brief pause occurs before power is turned back on.
    Running the system power command to turn off or power-cycle the storage system might cause an improper shutdown of the system (also known as a dirty shutdown) and is not a substitute for a graceful shutdown using the Data ONTAP system node halt command.

Display status for each power supply, such as system power status
presence, input power, and output power

Reset the storage system using the specified system reset {primary | backup | current}
firmware image The RLM remains operational as long as
input power to the storage system is not
interrupted.
Display the RLM version information, including version
hardware and firmware information
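
For illustration, a short administrative-mode session built from the commands in Table 8
might look like this (the prompt shown is indicative; output is omitted):

RLM node1> date
RLM node1> events newest
RLM node1> rlm status -v
RLM node1> system console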
3.5.4 RLM Advanced Mode Display Information

The RLM advanced commands display more information than is available in


administrative mode, including the RLM command history, the RLM debug file, a list of
environmental sensors, and RLM statistics.
Table 9 lists the additional RLM commands used in advanced mode.
Table 9) RLM commands used in advanced mode.

Function Command
Display the RLM command history or search for audit rlm log audit
logs from the SEL

Display the RLM debug file rlm log debug

Display the RLM messages file rlm log messages

Display a list of environmental sensors, their states, system sensors


and their current values

Display RLM statistics rlm status -v
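
As a hedged example, the advanced-mode commands from Table 9 are reached by raising
the privilege level first (output omitted):

priv set advanced
rlm log debug
system sensors
priv set admin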

3.5.5 Manage RLM with Data ONTAP

Manage the RLM from Data ONTAP using the rlm commands in the nodeshell. Some
management functions include:
 Set up the RLM
 Reboot the RLM
 Display the status of the RLM
 Update the RLM firmware
Table 10 provides a complete list of the Data ONTAP commands needed to manage the
RLM.
Table 10) Data ONTAP commands.

RLM Management Function Command

Display the list of available rlm commands rlm help

Reboot the RLM and trigger the RLM to perform a rlm reboot
self test Any console connection through the RLM is
lost during the reboot.
Initiate the interactive RLM setup script rlm setup

Display the current status of the RLM rlm status



Send a test e-mail to all recipients specified in AutoSupport rlm test autosupport
    For this command to work properly, AutoSupport must be enabled and the recipients and mail host must be configured.
Perform SNMP test on the RLM, forcing the RLM to rlm test snmp
send a test SNMP trap to all configured trap hosts

Update the RLM firmware update

To download and install the new RLM


firmware image, perform the following:
1 Run the system node image get command
2 Run the nodeshell command software install
3 Run the update command
3.5.6 RLM and SNMP Traps

If SNMP is enabled for the RLM, the RLM generates SNMP traps to configured trap hosts
for all down system events.
You can enable SNMP traps for both Data ONTAP and the RLM, or disable the SNMP traps for
only the RLM and leave the SNMP traps for Data ONTAP enabled.
3.5.7 Disable SNMP Traps for Only RLM

To disable SNMP traps for the RLM, complete the following step:
1 Enter the rlm.snmp.traps off command in the nodeshell. The default value is on.
Leave SNMP traps for Data ONTAP enabled. Do not enable SNMP traps for the RLM
when SNMP traps for Data ONTAP are disabled: when SNMP traps for Data ONTAP are
disabled, SNMP traps for the RLM are also disabled.
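
As a sketch, the nodeshell option can also be set from the clustershell with system node
run (the node name is a placeholder):

system node run -node <node01> options rlm.snmp.traps off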
3.6 Firewall for Clustered Data ONTAP

1 From the cluster shell, show the default node-specific firewall options.
firewall show

2 Modify the firewall options specific to a node.


firewall modify -node <<var_node01>> -enabled true -logging false

3 Show the cluster-wide default firewall policy settings.


firewall policy show

4 The purpose of firewall policies is to give the user the flexibility to allow management
services such as ssh, http, and ntp on data interfaces. To modify a particular firewall policy, run
the following command.
firewall policy modify –policy <<var_policy01>>

5 Create a firewall policy.


firewall policy create –policy <<var_policy01>> -service <service>

For -service, enter any of the required system services, such as dns, http, https, ndmp, ntp,
snmp, ssh, or telnet.
6 Clone a firewall policy.


firewall policy clone –policy <<var_policy01>> -new-policy-name <<var_policy02>>

3.7 CIFS Clustered Data ONTAP

Use Case Procedure


The administrator wants to know whether a certain Check Active Client Connections
client is connected through CIFS to a cluster.

The administrator wants to create a new export Create an Export Policy


policy.

The administrator wants to create a new export Add a Rule to an Export Policy
policy rule to govern the behavior of new shares.

The administrator wants to map Windows domain Create a Name Mapping from Windows to UNIX
users to their UNIX IDs.
3.7.1 Check Active Client Connections

1 To check for active client connections, run the network connections active show-clients
command and look for the client host name or IP address.
network connections active show-clients

3.7.2 Create an Export Policy

1 To create an export policy, enter the vserver export-policy create command.


vserver export-policy create -vserver <<var_vserver01>> -policyname <<var_policy01>>

3.7.3 Add a Rule to an Export Policy

1 To create an export rule for an export policy, use the vserver export-policy rule create
command. This makes it possible to define client access to data.
vserver export-policy rule create -vserver virtual_server_name -policyname <<var_policy01>>
-ruleindex 1 -protocol cifs -clientmatch 0.0.0.0/0 -rorule any -rwrule any -anon 65535

3.7.4 Create a Name Mapping from Windows to UNIX

1 To create a name mapping from Windows to UNIX, such as mapping every user in the
domain to the UNIX text equivalent, enter the vserver name-mapping create command.
vserver name-mapping create -vserver <<var_vserver01>> -direction win-unix -position 1
-pattern "<<var_ad_domainname>>\\(.+)" -replacement "\1"
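To verify the result, the configured mapping rules can be listed; a minimal check is:

vserver name-mapping show -vserver <<var_vserver01>> -direction win-unix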

3.8 NFSv3 Clustered Data ONTAP

3.8.1 Create Export Policy

Generally, two export policies are created: the first with a read-only (ro) rule, attached to the root volume and to the volume that is the parent of the qtree, and the second with a read-write (rw) rule, attached to the qtree.

2 To create an export policy, enter the vserver export-policy create command.


vserver export-policy create -vserver <<var_vserver01>> -policyname <<var_policy01>>


3.8.2 Add a Rule to an Export Policy

2 To create an export rule for an export policy, use the vserver export-policy rule create
command. This makes it possible to define client access to data.
3 For the ro rule :
vserver export-policy rule create -vserver virtual_server_name -policyname <<var_policy01>>
-ruleindex 1 -protocol nfs -clientmatch 0.0.0.0/0 -rorule any -rwrule none -anon 65535

4 For the rw rule :


vserver export-policy rule create -vserver virtual_server_name -policyname <<var_policy02>>
-ruleindex 1 -protocol nfs -clientmatch “export_subnet”/xx -rorule any -rwrule any -anon 65535

3.8.3 Attach the export policy to the vol root of the vserver

volume modify -vserver virtual_server_name -volume vsm_vol_root -policy <<var_policy01>>

3.8.4 Create the volume/qtree to export with junction path

Create the volume with junction path and the ro export rule

volume create -vserver virtual_server_name -volume vol_name_01 -aggregate aggr_name -size


xxxGB -policy <<var_policy01>> -space-guarantee none -junction-path /vol_name_01

Create the qtree with the rw export rule:

volume qtree create -vserver virtual_server_name -volume vol_name_01 -qtree qtr01 -security-style
unix -oplock-mode enable -export-policy <<var_policy02>>
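As a sanity check before clients mount the export, the rules attached to both policies can be reviewed; for example:

vserver export-policy rule show -vserver virtual_server_name -policyname <<var_policy01>>
vserver export-policy rule show -vserver virtual_server_name -policyname <<var_policy02>>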

3.9 NFSv4 Clustered Data ONTAP

(NetApp documentation can be found here: http://www.netapp.com/us/media/tr-3580.pdf)

3.9.1 Considerations

- User and group identification (UID/GID) implementation. NFSv4 does not


natively support UID or GID; it supports string-based communication for
users and groups between the server and the client. Therefore, entry of
users and groups must be in the /etc/passwd file on 7-Mode systems or
local UNIX user entries in clustered Data ONTAP. The /etc/passwd file can
also be leveraged in Network Information Service (NIS) or LDAP. This is
mandatory with an NFSv4 environment as per RFC standards. When using
local files, the names must match exactly on the client/server, including
case sensitivity. If names do not match, the user does not map into the
NFSv4 domain and squashes to the “nobody” user. If legacy NFSv3 behavior
is desired, refer to the section in this document regarding the use of NFSv4
numerics.

- User authentication. You must plan how users are authenticated. There are
two options, Kerberos and standard UNIX password authentication.


- Directory and file access. You must plan for access control on files and
directories. Depending on their business needs, they can choose standard
UNIX permissions or Windows-style ACLs.

- Shared file system namespace configuration. Shared file system


namespace allows storage administrators to present NFS exported file
systems to NFS clients. Depending on their business needs, storage
administrators can choose to make the server location transparent to their
users, use distributed directory structures, or structure access controls
grouped by organization or by role. For example, if /corporate is the main
entry point and a storage administrator wants to keep finance and IT
separated, then the entry points for those departments could become
/finance and /IT because / is the pseudo file system entry point for the file
system.

- Consideration for high availability. If high availability is a key business


requirement, you need to consider designing directory/identity service,
authentication service, and file services for high availability. For example, if
Microsoft Active Directory is used for LDAP, then multiple LDAP servers
could exist and update using domain replication, which is native to
Microsoft Active Directory.

- Implementation of pseudo file. In 7-Mode, if automounter is used with “/” as


one of the mounts for NFSv3 that translates to /vol/vol0, client mounts no
longer have the same representation in NFSv4. Due to the pseudo file
system as described later in this paper, the fsid changes as the request
traverses from a pseudo file system to an active file system. This means
that if the automount has an entry to mount “/” from the NetApp storage, it
mounts “/” and no longer mounts “/vol/vol0.” Therefore, these entries
should be updated with the actual pathnames. In clustered Data ONTAP, all
SVMs have a “/” entry point through vsroot. Therefore, the 7-Mode concepts
of vol0 do not apply.

- Single port configuration. In automount maps, the following entry has to be


specified because NFSv4 communicates only over port 2049:

-fstype=nfs4, rw, proto=tcp,port=2049

- UDP mount to TCP mount conversion. Because UDP is no longer supported


in NFSv4, any existing mounts over UDP must be changed to TCP.

- NFSv4 ACL configuration for POSIX ACLs. NFSv4 only supports NFSv4 ACLs,
not POSIX ACLs. Any data that is migrated from third-party storage must
manually set NFSv4 ACLs if they are required.

- Mounting NFSv4 file systems. Mounting file systems over NFSv4 is not the
same for all NFS clients. Clients mount using different syntax based on
which kernel is being used. Some clients mount NFSv4 by default, so the
same considerations apply if NFSv3 is desired. You must plan
appropriately to correctly mount file systems on the NFSv4 client.

- NFSv3 and NFSv4 clients can coexist. The same file system can be mounted
over NFSv3 and NFSv4. However, any ACLs set on a file or directory from
NFSv4 are enforced for NFSv3 mounts as well. Setting permissions on a


folder with mode bits (rwx) affects the NFSv4-style ACL. For example, if a
directory is set to 777 permissions, then the NFSv4 ACL for “everyone” is
added. If a directory is set to 770, then “everyone” is removed from the
NFSv4 ACL. This can be prevented using ACL preservation options,
described in section 5.

- NFSv4 has a stateful personality, unlike NFSv3, which is stateless in nature.


There is a common misconception that if NFSv4 is stateful in nature like
CIFS, some amount of disruption could occur during storage failover and
giveback for NetApp storage controllers in an HA pair. However, with NFSv4,
the session is connected at the application layer and not at the TCP layer.
Lock states are stored in controller memory, which is not persistent.
However, lock states are kept at both the controller and network level, so
the application can send a request to the NetApp storage controller to make
sure of a persistent connection. Therefore, when storage failover and
giveback happen, the break in the TCP connection and lock does not affect
the NFSv4 session, and thus no disruption occurs.

3.9.2 Pre-requisites

- An NFSv4 license must be installed on the cluster.

- We recommend having a dedicated SVM for NFSv4.

- The client system must have all NFSv4 packages installed and
configured.

- It is recommended to configure NFSv4 authentication using an
authentication server such as LDAP (not mandatory, but it offers more
security).

3.9.3 Configuration

On the client, install the NFSv4 packages (this depends on your distribution and OS;
the following was tested with CentOS).

- Install the NFS libraries:

[root@fury ~]# yum install nfs-utils nfs-utils-lib

- Install the NFSv4 ACL tools (if ACLs are used):

[root@fury ~]# yum install rpcbind nfs-utils nfs4-acl-tools -y

On filer:

- Activate NFSv4 on the SVM:

cnaces82::> nfs server modify -vserver svm_nas_02 -v4.0 enabled

- Specify the NFSv4 ID domain:


cnaces82::> nfs server modify -vserver svm_nas_02 -v4.0 enabled -v4-id-domain localdomain

Example:

cnaces82::> vserver nfs modify -vserver svm_nas_02 -v4-id-domain lbs.local

Review the vserver nfs options and modify them as needed.

Here are the options:

[-vserver] <vserver name> Vserver


[[-access] {true|false}] General NFS Access
[ -v3 {enabled|disabled} ] NFS v3
[ -v4.0 {enabled|disabled} ] NFS v4.0
[ -udp {enabled|disabled} ] UDP Protocol
[ -tcp {enabled|disabled} ] TCP Protocol
[ -spinauth {enabled|disabled} ] Spin Authentication
[ -default-win-user <text> ] Default Windows User
[ -v4.0-acl {enabled|disabled} ] NFSv4.0 ACL Support
[ -v4.0-read-delegation {enabled|disabled} ] NFSv4.0 Read Delegation Support
[ -v4.0-write-delegation {enabled|disabled} ] NFSv4.0 Write Delegation Support
[ -v4-id-domain <nis domain> ] NFSv4 ID Mapping Domain
[ -v4.1 {enabled|disabled} ] NFSv4.1 Minor Version Support
[ -rquota {enabled|disabled} ] Rquota Enable
[ -v4.1-pnfs {enabled|disabled} ] NFSv4.1 Parallel NFS Support
[ -v4.1-acl {enabled|disabled} ] NFSv4.1 ACL Support
[ -vstorage {enabled|disabled} ] NFS vStorage Support
[ -default-win-group <text> ] Default Windows Group
[ -v4.1-read-delegation {enabled|disabled} ] NFSv4.1 Read Delegation Support
[ -v4.1-write-delegation {enabled|disabled} ] NFSv4.1 Write Delegation Support


[ -mount-rootonly {enabled|disabled} ] NFS Mount Root Only


[ -nfs-rootonly {enabled|disabled} ] NFS Root Only
[ -v3-ms-dos-client {enabled|disabled} ] NFSv3 MS-DOS Client Support

- Allow only NFS on the dedicated SVM:

cnaces82::> vserver modify -vserver svm_nas_02 -ns-switch file -nm-switch file -allowed-protocols nfs

- Allow NFSv4 user and group IDs as numeric strings:

cnaces82::> vserver nfs modify -vserver svm_nas_02 -v4-numeric-ids enabled

If you use local accounts instead of an authentication server, you must create a
local account on the client and on the cluster. The user on the client and on the cluster
must be exactly the same, including the UID and GID.

- Create user for nfs on Client

[root@fury home]$  useradd nfs4user

[root@fury home]$  passwd nfs4user

- Check the UID

[root@fury home]$  cat /etc/passwd | grep nfs4user

nfs4user:x:1000:1000::/home/nfs4user:/bin/bash

- Check the GID

[root@fury home]$  cat /etc/group | grep 1000

nfs4user:x:1000:

- On the filer, create a user and group with the same UID and GID:

cnaces82::> unix-user create -vserver svm_nas_02 -user nfs4user -id 1000 -primary-gid 1000

cnaces82::> unix-group create -vserver svm_nas_02 -name nfs4group -id 1000

- An NFS export policy must be applied to / and /vol in read-only mode, and
another export policy to the NFS data volume in read-write mode.

Read-only export policy creation:


cnaces82::> export-policy create -vserver svm_nas_02 -policyname nfsv4export_ro

Read & write export policy creation:

cnaces82::> export-policy create -vserver svm_nas_02 -policyname nfsv4export_rw

Export RO policy rule creation:

cnaces82::> export-policy rule create -vserver svm_nas_02 -policyname nfsv4export_ro -clientmatch 0.0.0.0/0 -rorule sys -rwrule never -protocol nfs4 -allow-suid true -anon 65535 -superuser sys -ruleindex 1

Export RW policy rule creation:

cnaces82::> export-policy rule create -vserver svm_nas_02 -policyname nfsv4export_rw -clientmatch 0.0.0.0/0 -rorule sys -rwrule sys -protocol nfs4 -superuser sys

You should replace 0.0.0.0/0 with the appropriate client IP that will mount the NFS
share.

- Each SVM must have a "vol" volume accessible in read-only mode in order
to add a junction path /vol:

cnaces82::> volume create -vserver <vserver> -volume vol -aggregate <aggr> -size 20m -policy <export>_RO -junction-path /vol -space-guarantee none -percent-snapshot-space 0 -snapshot-policy none

If the volume already exists, modify it with the correct export policy:

cnaces82::> volume modify -vserver <vserver> -volume vol -policy <export>_RO

- Create the data volume using the correct UID and GID:

cnaces82::> vol create -vserver svm_nas_02 -volume vol_nfs4data -aggregate aggr_data_cnaces82_02_sata_01 -size 10GB -policy nfsv4export_rw -junction-path /vol/vol_nfs4data -space-guarantee none -percent-snapshot-space 0 -snapshot-policy none -state online -user 1000 -group 1000 -security-style unix -unix-permissions ---rwxr-xr-x

You can also use a qtree.


- Then you can mount the volume on your client:

[nfs4user@fury home]$ sudo mount -t nfs4 -o rw,bg,auto,nfsvers=4,hard,nolock,sec=sys,tcp,timeo=600,rsize=1048576,wsize=1048576,ac,retrans=2 10.1.254.76:/vol_nfs4data /volnfs4
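If the nfs4-acl-tools package was installed, the NFSv4 ACL of the mounted path can be inspected or modified from the client. The path, file name, and principal below are only illustrative:

[nfs4user@fury ~]$ nfs4_getfacl /volnfs4
[nfs4user@fury ~]$ nfs4_setfacl -a "A::nfs4user@lbs.local:rwaxtncy" /volnfs4/testfile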

3.10 Syslog Clustered Data ONTAP

3.10.1 Display Events

1 View events in the storage system.


event log show

2 View more detailed information about a specific event in a specific node.


event log show -node <<var_node_name>> -seqnum <<var_seq_#>>

3.10.2 Display Event Status

1 View more detailed information about a specific type of event such as occurrence, last
occurrence, and number of events dropped.
event status show

3.11 User Access for Clustered Data ONTAP

3.11.1 Security-Related User Tasks

From clustershell, perform the following security-related tasks.


1 Run the following command to create a login account with application permissions.
security login create -username <<var_username>> -application <app_name> -authmethod
<auth_method> -role admin
Table 11 lists the values for the parameters in this command.
Table 11) Login-related parameters.

Parameter / Values
-application console, http, ontapi, service-processor, snmp, and ssh (for example). Specific values depend on the application-level access.
-role admin, none, readonly, vsadmin, vsadmin-protocol, vsadmin-readonly, and vsadmin-volume (for example). Specific values depend on the granularity of access.
-authmethod password, community, and publickey (for example). Depending on the user account, role parameters can be leveraged to control the access methods.
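For example, a read-only account restricted to SSH could be created as follows (the username is a placeholder):

security login create -username <<var_username>> -application ssh -authmethod password -role readonly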

2 Run the following command to unlock or lock the admin account.


security login unlock -username admin
security login lock -username admin

3 Run the following command to lock the diag account.

security login lock -username diag

4 Run the following command to switch to a virtual storage server (Vserver) context.
vserver context -vserver <<var_vserver01>>

5 Run the following command to modify Vserver context user roles.


security login role config modify -role vsadmin-volume -vserver <<var_vserver01>> -change-
delay 5 -username-minsize 3 -username-alphanum disabled -passwd-minsize 8 -passwd-alphanum
enabled -disallowed-reuse 8
Table 12 lists the values for the parameters in this command.
Table 12) Role-related parameters.

Parameter Values
–role vsadmin, vsadmin-volume, vsadmin-protocol and vsadmin-readonly
(for example)
–change-delay 0–1000
–username-minsize 3–16
–username-alphanum Enabled or disabled
–passwd-minsize 3–64
–passwd-alphanum Enabled or disabled
–disallowed-reuse 1–25
3.11.2 General User Administration

6 Run the following command to modify the access-control role of a user's login.
security login modify -username <<var_username>>

7 Run the following command to delete a user's login method.

security login delete -username <<var_username>>

8 Run the following command to change a user password.

security login password -username <<var_username>>

9 Run the following command to lock a user account.

security login lock -username <<var_username>>

10 Run the following command to unlock a user account.

security login unlock -username <<var_username>>


3.12 Data Protection on Clustered Data ONTAP

3.12.1 Snapshot Clustered Data ONTAP

Use Case / Procedure

A volume is corrupted and must be restored in its entirety: Restore Entire Volume from Snapshot Copy

A set of files must be restored, but their exact name or location is not known: Restore Entire Volume from Snapshot Copy

A file must be restored: Restore File from Snapshot Copy


3.12.2 Restore Entire Volume from Snapshot Copy

To perform a Snapshot restore, complete the following step:


1 Run the following CLI operation.
volume snapshot restore -vserver <<var_vserver01>> -volume <<var_vol01>> -snapshot
<<var_snap01>>

3.12.3 Restore File from Snapshot Copy

To restore a single file from a Snapshot copy, complete the following step:
2 Run the following CLI operation.
volume snapshot restore-file -vserver <<var_vserver01>> -volume <<var_vol01>> -snapshot
<<var_snap01>> -path <<var_file_path>>
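Before restoring, the available Snapshot copies of the volume can be listed to identify <<var_snap01>>; for example:

volume snapshot show -vserver <<var_vserver01>> -volume <<var_vol01>>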

3.12.4 Volume SnapMirror Async Clustered Data ONTAP

Use Case Procedure


List SnapMirror relationships. Run SnapMirror Show

Modify SnapMirror relationships. Run SnapMirror Modify

Delete a SnapMirror relationship. Run SnapMirror Delete


3.12.5 Run SnapMirror Show

The snapmirror show command is used to display information about SnapMirror


relationships.
To check the status of the initial transfer, complete the following step:
1 Run the snapmirror show command from the destination or source cluster.
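For example, a single relationship can be checked using the same path notation as the rest of this section:

snapmirror show -destination-path <<var_cluster02>>://<<var_vserver02>>/<<var_vol02>>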
3.12.6 Run SnapMirror Modify

To modify a SnapMirror relationship, complete the following step:


2 Run snapmirror modify from the cluster containing the destination volume.
In the following example, snapmirror modify modifies the number of tries and the schedule of a
relationship.
snapmirror modify -destination-path <<var_cluster02>>://<<var_vserver02>>/<<var_vol02>> -tries
<<var_NumberOfTries>> -schedule "<<var_schedule>>"


3.12.7 Run SnapMirror Delete

The snapmirror delete command can be executed on either the destination or the source
cluster, but with different results. When the command is executed from the destination
cluster, the SnapMirror relationship information as well as the Snapshot copy owners
associated with the relationship are removed from both clusters as long as both clusters
are accessible.
If the source cluster is inaccessible, only the relationship information and associated
Snapshot copy owners on the destination cluster are removed. In this case, a note is
generated indicating that only the destination relationship information and Snapshot copy
owners were removed. The remaining source cluster relationship information and
Snapshot copy owners can be removed by performing a snapmirror delete on the source
cluster independently.
Running snapmirror delete on the source cluster only removes the relationship
information and Snapshot copy owners on the source cluster. It does not contact the
destination cluster. A warning/confirmation request is issued by the command when it is
executed on the source cluster indicating that only the source-side information will be
deleted. It can be overridden by using the -force option on the CLI.
To delete a SnapMirror relationship, complete the following step:
3 Run snapmirror delete.
snapmirror delete <<var_cluster02>>://<<var_vserver02>>/<<var_vol02>>

3.13 FlexClone Clustered Data ONTAP


Table 13) FlexClone Clustered Data ONTAP use cases.

Use Case / Procedure Name

A copy of a volume is required but physical disk space is limited: Create FlexClone Volume

Basic information about cloned volumes is required: View Basic FlexClone Information

Detailed information about cloned volumes is required: View Detailed FlexClone Information

Convert the FlexClone volume to a physical volume: Split FlexClone Volumes


3.13.1 Create FlexClone Volume

To create a FlexClone volume on the Vserver, complete the following step:


1 From the cluster shell, run the volume clone create command.
5 Starting in Data ONTAP 8.2, the term Storage Virtual Machine (SVM) replaces the term
Vserver. However, Vserver continues to appear in GUI fields, wizard pages, and as
commands.
volume clone create -vserver <<var_vserver>> -flexclone <<var_vol01>>_1 -parent-volume
<<var_vol01>> -junction-active true -foreground true -comment "Testing FlexClone creation"

6 The volume clone create command is not supported on Vservers with NetApp Infinite
Volume.
3.13.2 View Basic FlexClone Information

To view basic FlexClone volume information, complete the following step:


2 From the cluster shell, run the volume clone show command.
cluster1::> volume clone show -vserver <<var_vserver>>.


(volume clone show)


Vserver FlexClone Parent-Volume Parent-Snapshot
--------- -------------- --------------- ---------------
vs0 fc_vol_1 test_vol clone_fc_vol_1.0
fc_vol_2 test_vol2 clone_fc_vol_2.0
fc_vol_3 tv9 clone_fc_vol_3.0
tv8 tv7 clone_tv8.0
tv9 test_vol2 clone_tv9.0
5 entries were displayed.

3.13.3 View Detailed FlexClone Information

To view detailed FlexClone volume information, complete the following step:


3 From the cluster shell, run the volume clone show command using the –flexclone option.
cluster1::> volume clone show -vserver <<var_vserver>> -flexclone <<var_vol01>>_2
Vserver Name: vs0
FlexClone Volume: fc_vol_2
FlexClone Parent Volume: test_vol2
FlexClone Parent Snapshot: clone_fc_vol_2.0
Junction Path: -
Junction Active: -
Space Guarantee Style: volume
Space Guarantee In Effect: true
FlexClone Aggregate: test_aggr
FlexClone Data Set ID: 1038
FlexClone Master Data Set ID: 2147484686
FlexClone Size: 47.50MB
Used Size: 128KB
Split Estimate: 0.00B
Inodes processed: -
Total Inodes: -
Percentage complete: -
Blocks Scanned: -
Blocks Updated: -
Comment:
Qos Policy Group Name: pg1

3.13.4 Split FlexClone Volumes

FlexClone volume splitting is used to assign physical space to the virtual volume. The
following sections describe operating procedures for using the clone split command.
3.13.5 View Space Estimates

To view the free disk space estimates for FlexClone volumes residing on the Vserver,
complete the following step:
4 From the cluster shell, run the volume clone split estimate command.
cluster1::> volume clone split estimate -vserver <<var_vserver>>.
(volume clone split estimate)
Split
Vserver FlexClone Estimate
--------- ------------- ----------
vs0 fc_vol_1 851.5MB
fc_vol_3 0.00B
flex_clone1 350.3MB
fv_2 47.00MB
tv9 0.00B
5 entries were displayed.

7 The space estimate reported might differ from the space actually required to perform the
split, especially if the cloned volume is changing while the split is being performed.
8 The volume clone split estimate command is not supported on Vservers with Infinite
Volume.


3.13.6 Start FlexClone Split

To split or separate a FlexClone volume from the parent Vserver, complete the following
step:
5 From the cluster shell, run the volume clone split start command.
volume clone split start -vserver <<var_vserver>> -flexclone <<var_vol01>>_1 -foreground true

9 Both the parent volume and the FlexClone volume are available during this operation.
10 The volume clone split start command is not supported on Vservers with Infinite
Volume.
3.13.7 Stop FlexClone Split

To stop an in-process clone-splitting operation, complete the following step:


6 From the cluster shell, run the volume clone split stop command.
volume clone split stop -vserver <<var_vserver>> -flexclone <<var_vol01>>_1

11 This procedure stops the process of separating the FlexClone volume from its underlying
parent volume without losing any progress achieved during the split operation.
12 The volume clone split stop command is not supported on Vservers with Infinite
Volume.
3.13.8 View FlexClone Split Status

To view the status of active FlexClone volume splitting operations, complete the following
step:
7 From the cluster shell, run the volume clone split show command.
cluster1::> volume clone split show
(volume clone split show)
Inodes Blocks
--------------------- ---------------------
Vserver FlexClone Processed Total Scanned Updated % Complete
--------- ------------- ---------- ---------- ---------- ---------- ----------
vs1 fc_vol_1 0 1260 0 0 0

13 Use the -instance option to view detailed information about all volume-splitting
operations.
The volume clone split show command is not supported on Vservers with Infinite
Volume.

3.14 QOS

The following storage objects can have storage QoS policies applied to them:
 SVM
 FlexVol volume
 LUN
 File

Create a policy group :


cnaces82::> qos policy-group create -policy-group pg_archive -vserver svm_nas_02 -max-
throughput 1000iops


The maximum throughput can be defined as a number of IOPS or MB/s.


Modify a policy group :
cnaces82::> qos policy-group modify -policy-group pg_archive -max-throughput 500iops

Apply a QoS policy at the volume level :


cnaces82::> volume modify -vserver svm_nas_02 -volume test_nas_01 -qos-policy-group pg_archive

Volume modify successful on volume: test_nas_01

Monitor QoS performance :


cnaces82::> qos statistics performance show
Policy Group IOPS Throughput Latency
-------------------- -------- --------------- ----------
-total- 307 4.67MB/s 4.78ms
_System-Best-Effort 218 3.67MB/s 6.72ms
_System-Background 78 0KB/s 0ms
pg_archive 10 1.MB/s 1ms
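To remove a throughput limit, the volume can be detached from the policy group again; as an illustration:

cnaces82::> volume modify -vserver svm_nas_02 -volume test_nas_01 -qos-policy-group none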

3.15 Adaptive QOS

Beginning with ONTAP 9.3, storage QoS supports adaptive QoS. Adaptive QoS
automatically scales a throughput ceiling or floor, maintaining the ratio of IOPS to TBs or
GBs as the size of the volume changes (volumes only). By default, three adaptive QoS
policy groups are defined, but new ones can be created:

cnaces84::> qos adaptive-policy-group show


Expected Peak
Name Vserver Wklds IOPS IOPS
------------ ------- ------ ----------- ------------
extreme cnaces84
0 6144IOPS/TB 12288IOPS/TB
performance cnaces84
0 2048IOPS/TB 4096IOPS/TB
value cnaces84
0 128IOPS/TB 512IOPS/TB

Availability: The create command is available to cluster administrators at the admin
privilege level.
Create an adaptive policy group:
cnaces84::> qos adaptive-policy-group create -policy-group apg_gold_1000 -vserver
svm_fca_se_cifs_01 -expected-iops 1000/TB -peak-iops 2000/TB -absolute-min-iops 100

After you create an adaptive policy group, use the volume create command or volume
modify command to apply the adaptive policy group to a volume.
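As an illustration (the volume name is a placeholder), applying the adaptive policy group created above to an existing volume might look like the following:

cnaces84::> volume modify -vserver svm_fca_se_cifs_01 -volume <<var_vol01>> -qos-adaptive-policy-group apg_gold_1000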

Delete an adaptive policy group:

cnaces84::> qos adaptive-policy-group delete -policy-group apg_gold_1000

Modify an adaptive policy group :


cnaces84::> qos adaptive-policy-group modify -policy-group apg_gold_1000 -expected-iops


2000/TB -peak-iops 5000/TB -absolute-min-iops 500

Rename an adaptive policy group:

cnaces84::> qos adaptive-policy-group rename -policy-group apg_gold_1000 -new-name apg_plat_2000


4 Configuration instructions

4.1 Disks

4.1.1 Initialize Disks

All disks owned by the storage system must be initialized when repurposing or reinstalling
a NetApp storage controller. This action wipes all data from the disks, including
aggregates, volumes, qtrees, and LUNs.
3 The controller and disks must be rebooted to begin disk initialization.
4 The disks that will be initialized must be owned by that node.
To initialize disks, complete the following steps:
8 Reboot the storage controller by running the halt command.
9 At the Loader prompt, type autoboot.
10 Press Ctrl-C to bring up the Boot menu.
*******************************
* *
* Press Ctrl-C for Boot Menu. *
* *
*******************************
mkdir: /cfcard/cores: Read-only file system
^C
The Boot Menu will be presented because an alternate boot method was specified.

11 Select option 4: Clean Configuration and Initialize all Disks.


Please choose one of the following:

(1) Normal Boot.


(2) Boot without /etc/rc.
(3) Change password.
(4) Clean configuration and initialize all disks.
(5) Maintenance mode boot.
(6) Update flash from backup config.
(7) Install new software first.
(8) Reboot node.
Selection (1-8)? 4

12 Enter y to zero the disks, reset the configuration, and to install the new file system.
Zero disks, reset config and install a new file system?: y

13 Enter y to erase all data on the disks.


This will erase all the data on the disks, are you sure?:y
The system now reboots and zeroes all of the drives owned by the node.
5 The initialization and creation of the root volume can take a long time to complete, depending
on the number and type of disks attached. When initialization is complete, the storage system reboots.

4.1.2 Advanced Drive partitioning

Data ONTAP 8.3 introduced a new feature called Advanced Drive Partitioning.

With this feature, a single disk drive can be partitioned into multiple logical
partitions.


In the current shipping releases, a single hard disk drive can be partitioned into
two (one small and one large) logical partitions on entry-level systems.

Additionally, a solid state drive can be partitioned into two logical partitions on
All Flash FAS systems to maximize the usable capacity.

Also, the solid state drives can be partitioned into four equal partitions to be used
on hybrid aggregates as a shared storage pool to maximize the usable capacity.

4.1.2.1 Supported configuration

4.1.2.2 Root-Data HDD partitioning

The ADP Root-Data HDD partitioning is configured at the installation of the


controllers by Netapp.

Activating it during the RUN is not possible unless you erase all data on the filer,
and you reconfigure the filer from scratch.

 How to identify if ADP is configured on clustered Data ONTAP 8.3

To check the aggregates with ADP, run the following command:


::> storage aggregate show -fields uses-shared-disks

This shows aggregates which have been created with ADP disks. When you run
the storage disk show command, the disks which are partitioned by ADP will have
a 'P1' or 'P2' appended to the end. You can also run this command to view the
partitioned disks:

::>storage disk show -container-type shared

 Setting up an active-passive configuration on nodes using root-data


partitioning

When an HA pair is configured to use root-data partitioning by the factory,


ownership of the data partitions is split between both nodes in the pair, for use in
an active-active configuration. If you want to use the HA pair in an active-passive
configuration, you must update partition ownership before creating your data
aggregate.

Before you begin


- You should have decided which node will be the active node and which
node will be the passive node.

- Storage failover must be configured on the HA pair.

About this task

This task is performed on two nodes: Node A and Node B.

All commands are input at the clustershell.

This procedure is designed for nodes for which no data aggregate has been
created from the partitioned disks.

Steps

View the current ownership of the data partitions:

storage aggregate show-spare-disks

Example

You can see that half of the data partitions are owned by one node and half are
owned by the other node. All of the data partitions should be spare.
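The example output is not reproduced here; to check the partitions of one node at a time, the listing can be filtered by original owner, for example:

storage aggregate show-spare-disks -original-owner <<var_node01>>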

Enter the advanced privilege level:

set advanced

For each data partition owned by the node that will be the passive node, assign it
to the active node:


storage disk assign -force -data true -owner active_node_name -disk disk_name

You do not need to include the partition as part of the disk name.

Example

You would enter a command similar to the following example for each data
partition you need to reassign:

storage disk assign -force -data true -owner cluster1-01 -disk 1.0.3

Confirm that all of the partitions are assigned to the active node.

Example

Note that cluster1-02 still owns a spare root partition

Return to administrative privilege:

set admin

Create your data aggregate, leaving at least one data partition as spare:

storage aggregate create new_aggr_name -diskcount number_of_partitions -node active_node_name

The data aggregate is created and is owned by the active node.


4.1.2.3 SSD Storage Pool

 How you use SSD storage pools

To enable SSDs to be shared by multiple Flash Pool aggregates, you place them
in a storage pool. After you add an SSD to a storage pool, you can no longer
manage it as a stand-alone entity—you must use the storage pool to assign or
allocate the storage provided by the SSD.

You create storage pools for a specific HA pair. Then, you add allocation units
from that storage pool to one or more Flash Pool aggregates owned by the same
HA pair. Just as disks must be owned by the same node that owns an aggregate
before they can be allocated to it, storage pools can provide storage only to Flash
Pool aggregates owned by one of the nodes that owns the storage pool.

If you need to increase the amount of Flash Pool cache on your system, you can
add more SSDs to a storage pool, up to the maximum RAID group size for the
RAID type of the Flash Pool caches using the storage pool. When you add an SSD
to an existing storage pool, you increase the size of the storage pool's allocation
units, including any allocation units that are already allocated to a Flash Pool
aggregate.

You should provide one or more spare SSDs for your storage pools, so that if an
SSD in that storage pool becomes unavailable, Data ONTAP can use a spare SSD
to reconstruct the partitions of the malfunctioning SSD. You do not need to
reserve any allocation units as spare capacity; Data ONTAP can use only a full,
unpartitioned SSD as a spare for SSDs in a storage pool.

After you add an SSD to a storage pool, you cannot remove it, just as you cannot
remove disks from an aggregate. If you want to use the SSDs in a storage pool as
discrete drives again, you must destroy all Flash Pool aggregates to which the


storage pool's allocation units have been allocated, and then destroy the storage
pool.

 Creating an SSD storage pool

Storage pools do not support a diskcount parameter; you must supply a disk list
when creating or adding disks to a storage pool.

Steps

Determine the names of the spare SSDs available to you:

storage aggregate show-spare-disks -disk-type SSD

The SSDs used in a storage pool can be owned by either node of an HA pair.

Create the storage pool:

storage pool create -storage-pool sp_name -disk-list disk1,disk2,...

Show the newly created storage pool:

storage pool show -storage-pool sp_name

After the SSDs are placed into the storage pool, they no longer appear as spares
on the cluster, even though the storage provided by the storage pool has not yet
been allocated to any Flash Pool caches. The SSDs can no longer be added to a
RAID group as a discrete drive; their storage can be provisioned only by using the
allocation units of the storage pool to which they belong.
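Allocation units from the storage pool are later provisioned to a Flash Pool aggregate with the storage aggregate add-disks command; a minimal sketch (aggregate and pool names are placeholders) is:

storage aggregate add-disks -aggregate <<var_aggr01>> -storage-pool sp_name -allocation-units 1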

 Adding SSDs to an SSD storage pool

When you add SSDs to an SSD storage pool, you increase the storage pool's
physical and usable sizes and allocation unit size. The larger allocation unit size
also affects allocation units that have already been allocated to Flash Pool
aggregates.

Before you begin

You must have determined that this operation will not cause you to exceed the
cache limit for your HA pair. Data ONTAP does not prevent you from exceeding
the cache limit when you add SSDs to an SSD storage pool, and doing so can
render the newly added storage capacity unavailable for use.

About this task

When you add SSDs to an existing SSD storage pool, the SSDs must be owned by
one node or the other of the same HA pair that already owned the existing SSDs
in the storage pool. You can add SSDs that are owned by either node of the HA
pair.

Steps

View the current allocation unit size and available storage for the storage pool:


storage pool show -instance sp_name

Find available SSDs:

storage disk show -container-type spare -type SSD

Add the SSDs to the storage pool:

storage pool add -storage-pool sp_name -disk-list disk1,disk2…

The system displays which Flash Pool aggregates will have their size increased by
this operation and by how much, and prompts you to confirm the operation.

4.2 Networking

4.2.1 IFGRP Static Multimode Clustered Data ONTAP

This type of interface group requires two or more Ethernet interfaces and an external
switch or switches that follow the 802.3ad (static) IEEE standard.
4.2.2 Create Static Multimode Interface Group

To create a static multimode interface group, complete the following step:


11 Run the following command from the clustershell interface:
network port ifgrp create -node <<var_node01>> -ifgrp <<var_ifgrp01>> -distr-func ip -mode multimode

4.2.3 Add Ports to Static Multimode Interface Group

To add ports to the static multimode interface group, complete the following step:
12 Run the following command from the clustershell interface:
network port ifgrp add-port -node <<var_node01>> -ifgrp <<var_ifgrp01>> -port <<var_port1>>
network port ifgrp add-port -node <<var_node01>> -ifgrp <<var_ifgrp01>> -port <<var_port2>>

14 The interface group name must be formatted as an alphanumeric value, followed by a


numeric value, and ending with an alphanumeric value (for example, a0a).
4.2.4 IFGRP Single-Mode Clustered Data ONTAP

4.2.4.1 Create Single-Mode Interface Group

To create a single-mode interface group, complete the following steps:


1 Run the following command from the clustershell interface to create a single-mode interface group:
network port ifgrp create -node <<var_node01>> -ifgrp <<var_ifgrp01>> -distr-func ip -mode singlemode

2 To add ports to the single-mode interface group, use the following command from the clustershell
interface:
network port ifgrp add-port -node <<var_node01>> -ifgrp <<var_ifgrp01>> -port <<var_port1>>
network port ifgrp add-port -node <<var_node01>> -ifgrp <<var_ifgrp01>> -port <<var_port2>>
4.2.4.2 Control Single-Mode Interface Group Port Favoring

To control manually which ports will be favored or unfavored in the single-mode interface
group, complete the following steps using the ifgrp command from the cluster interface:


1 Enter the nodeshell from the clustershell.


system node run -node <<var_node01>>

2 From within the nodeshell, manually control the port favoring.


ifgrp {favor|nofavor} <<var_interface01>>

3 Exit the nodeshell and return to the clustershell.


exit

15 The interface group name must be formatted as an alphanumeric value, followed by a


numeric value, and ending with an alphanumeric value (for example, a0a).
4.2.5 IFGRP LACP Clustered Data ONTAP

This type of interface group requires two or more Ethernet interfaces and a switch that
supports LACP. Therefore, make sure that the switch is configured properly.
1 Run the following command on the command line. This example assumes that there are two
network interfaces called e0a and e0b and that an interface group called <<var_vif01>> is being
created.
network port ifgrp create -node <<var_node01>> -ifgrp <<var_vif01>> -distr-func ip -mode
multimode_lacp
network port ifgrp add-port -node <<var_node01>> -ifgrp <<var_vif01>> -port e0a
network port ifgrp add-port -node <<var_node01>> -ifgrp <<var_vif01>> -port e0b

16 All interfaces must be in the down status before being added to an interface group.
17 The interface group name must follow the standard naming convention of x0x.
4.2.6 VLAN Clustered Data ONTAP

1 Run the following command from the clustershell to create a VLAN network port. This example
assumes that there is a physical network port called e0d on a cluster node named <<var_node01>>
and that traffic tagging is enabled with the VLAN ID VLAN 10.
network port vlan create -node <<var_node01>> -vlan-name e0d-10

18 A physical network port, a VLAN network port, or an IFGRP can be used.


4.2.7 Jumbo Frames Clustered Data ONTAP

1 To configure a clustered Data ONTAP network port to use jumbo frames (which usually have an
MTU of 9,000 bytes), run the following command from the clustershell:
network port modify –node <<var_node01>> -port <network_port> -mtu <<var_mtu>>

WARNING: Changing the network port settings will cause a serveral second interruption in
carrier.
Do you want to continue? {y|n}: y

6 The network port identified by -port can be a physical network port, a VLAN network port, or
an IFGRP.
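To confirm the new MTU after the change, the port settings can be displayed, for example:

network port show -node <<var_node01>> -port <network_port> -fields mtu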
4.3 IPSPACE creation

4.3.1 Ipspace

network ipspace create -ipspace "ipspace_name"

4.3.2 Broadcast domain

network port broadcast-domain create -broadcast-domain bd_"vlan_id" -mtu "mtu_size" -ipspace "ipspace_name" -ports "node"-01:"ifgrp","node"-02:"ifgrp"


4.3.3 Vserver creation

vserver create -vserver vserver_name -rootvolume "vserver_name"_root -aggregate "aggr_name" -rootvolume-security-style unix -ipspace "ipspace_name"
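The resulting layout can be verified with the following show commands (the names are the same placeholders as above):

network ipspace show
network port broadcast-domain show -ipspace "ipspace_name"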

4.4 Cluster creation and cluster join

4.4.1 Establish Serial Connection to Storage Controller

To establish a serial connection to the storage console port, complete the following steps:
1 Use the serial console cable shipped with the storage controller to connect the serial console port
on the controller to a computer workstation.
19 The serial console port on the storage controller is indicated by IOIOI.
2 Configure access to the storage controller by opening a terminal emulator application, such as
HyperTerminal in a Windows environment or tip in a UNIX environment.
3 Enter the following serial connection settings:
 Bits per second: 9600
 Data bits: 8
 Parity: none
 Stop bits: 1
 Flow control: none
4 Power on the storage controller.
4.4.2 Validate Storage Controller Configuration

To verify that the system is ready and configured to boot to clustered Data ONTAP 8.2,
complete the following steps on all of the nodes:
1 Stop the autoboot process by pressing Ctrl+C.
2 At the LOADER prompt, enter the following command:
printenv bootarg.init.boot_clustered

3 Confirm that the result of the command is true.


4 At the LOADER prompt, enter autoboot.
4.4.3 Configure Boot Variable

To configure the boot variable, complete the following steps:


7 Use this procedure only if validation results are set as false.
1 Stop the autoboot process by pressing Ctrl+C.
2 Enter the following commands:
setenv bootarg.init.boot_clustered true
setenv bootarg.bsdportname e0M

3 At the LOADER prompt, enter autoboot


4 Stop the autoboot process and access the Boot menu by pressing Ctrl+C.
5 Select option 4 Clean configuration and initialize all disks.
6 At the prompt Zero disks, reset config and install a new file system, enter y.
7 Enter y to erase all of the data on the disks.


20 Depending on the number of disks attached, the initialization and creation of the root
volume can take 75 minutes or more to complete.
8 After initialization is complete, wait while the storage system reboots.
4.4.4 Create Cluster

The Cluster Setup wizard is used to create the cluster on the first node. The wizard helps
in completing the following tasks:
8 Configuring the cluster network that connects the nodes (if the cluster consists of two or more
nodes)
9 Creating the cluster admin Storage Virtual Machine (SVM), formerly known as Vserver
10 Adding feature license keys
11 Creating the node management interface for the first node
8 The storage system hardware should be installed and cabled, and the console should be
connected to the node on which you intend to create the cluster.
To create the cluster, complete the following steps:
1 On system boot, verify that the console output displays the cluster setup wizard.
Welcome to the cluster setup wizard.

You can enter the following commands at any time:


"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.

You can return to cluster setup at any time by typing "cluster setup". To accept a default or
omit a question, do not enter a value.

Do you want to create a new cluster or join an existing cluster?


{create, join}:

21 If a login prompt appears instead of the Cluster Setup wizard, you must start the wizard by
logging in by using the factory default settings and then entering the cluster setup
command.
2 Enter the following command to create a new cluster:
create

3 Verify that the system defaults are displayed.


System Defaults:
Private cluster network ports [e0a,e0b].
Cluster port MTU values will be set to 9000.
Cluster interface IP addresses will be automatically generated.
Do you want to use these defaults? {yes, no} [yes]:

4 NetApp recommends accepting the system defaults. To accept the system defaults, press Enter.
5 Follow the prompts to complete the Cluster Setup wizard. NetApp recommends accepting the
defaults. To accept the defaults, press Enter for each prompt.
6 After the Cluster Setup wizard is completed and exits, verify that the cluster is active and that the
first node is healthy by typing cluster show and pressing Enter:
cluster show
Node Health Eligibility
--------------------- ------- ------------
cluster1-01 true true

9 You can access the Cluster Setup Wizard to change any of the values you entered for the
admin SVM or node SVM by using the cluster setup command.


4.4.5 Join Cluster

After creating a new cluster, for each remaining node, use the Cluster Setup wizard to join
the node to the cluster and create its node management interface.
10 The storage system hardware should be installed and cabled, and the console should be
connected to the node on which you intend to create the cluster.
To join the node to the cluster, complete the following steps:
1 Power on the node. The node boots, and the Cluster Setup wizard opens on the console.
Welcome to the cluster setup wizard.

You can enter the following commands at any time:


"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.

You can return to cluster setup at any time by typing "cluster setup". To accept a default or
omit a question, do not enter a value.

Do you want to create a new cluster or join an existing cluster?


{create, join}:

2 To join a cluster, run the following command:


Join

3 Follow the prompts to set up the node and join it to the cluster:
 To accept the default value for a prompt, press Enter.
 To enter a different value for the prompt, type the value and press Enter.
4 After the Cluster Setup wizard is completed and exits, verify that the node is healthy and eligible to
participate in the cluster. Type cluster show and press Enter:
cluster show
Node Health Eligibility
--------------------- ------- ------------
cluster1-01 true true
cluster1-02 true true

5 Repeat this task for each remaining node.


4.4.6 Cluster Create for Cluster-Mode

Table 14) Cluster create for Cluster-Mode prerequisites.

Cluster Detail Cluster Detail Value


Cluster name <<var_clustername>>

Cluster-Mode base license 14-digit license key

Cluster-management IP address <<var_clustermgmt_IP>>

Cluster-management netmask <<var_clustermgmt_netmask>>

Cluster-management port <<var_clustermgmt_port>>

Cluster-management gateway <<var_clustermgmt_gateway>>


The first node in the cluster performs the cluster create operation. All other nodes
perform a cluster join operation. The first node in the cluster is considered Node01.
1 During the first node boot, the Cluster Setup wizard starts running on the console.
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,


"back" - if you want to change previously answered questions, and


"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster?
{create, join}:

11 If a login prompt displays instead of the Cluster Setup wizard, start the wizard by logging in
using the factory default settings and then enter the cluster setup command.
2 Enter the following command to create a new cluster:
create

3 The system defaults are displayed.


System Defaults:
Private cluster network ports [e0a,e0b].
Cluster port MTU values will be set to 9000.
Cluster interface IP addresses will be automatically generated.
Do you want to use these defaults? {yes, no} [yes]:

4 NetApp recommends accepting the system defaults. To accept the system defaults, press Enter.
5 Follow the prompts to complete the Cluster Setup wizard. NetApp recommends accepting the
defaults. To accept the defaults, press Enter for each prompt.
4.4.7 Cluster Join for Clustered Data ONTAP

Table 15) Cluster join for Clustered Data ONTAP prerequisites.

Cluster Detail Cluster Detail Value


Node01 cluster port 1 IP (node already in cluster) <<var_cluster_IP1>>
The first node in the cluster performs the cluster create operation. All other nodes
perform a cluster join operation. The first node in the cluster is considered Node01, and
the node joining the cluster in this example is Node02.
1 During the first node boot, the Cluster Setup wizard starts running on the console.
Welcome to the cluster setup wizard.
You can enter the following commands at any time:
"help" or "?" - if you want to have a question clarified,
"back" - if you want to change previously answered questions, and
"exit" or "quit" - if you want to quit the cluster setup wizard.
Any changes you made before quitting will be saved.
You can return to cluster setup at any time by typing "cluster setup".
To accept a default or omit a question, do not enter a value.
Do you want to create a new cluster or join an existing cluster?
{create, join}:

12 If a login prompt displays instead of the Cluster Setup wizard, start the wizard by logging in
using the factory default settings, and then enter the cluster setup command.
2 Enter the following command to join a cluster:
join

3 The system defaults are displayed.


System Defaults:
Private cluster network ports [e0a,e0b].
Cluster port MTU values will be set to 9000.
Cluster interface IP addresses will be automatically generated.
Do you want to use these defaults? {yes, no} [yes]:

4 NetApp recommends accepting the system defaults. To accept the system defaults, press Enter.
5 Follow the prompts to complete the Cluster Setup wizard. NetApp recommends accepting the
defaults. To accept the defaults, press Enter for each prompt.
22 Alternatively, the node can be joined to the cluster by running the following command:


cluster join -clusteripaddr <<var_cluster_IP1>> -mtu 9000

4.5 Disk Assign Clustered Data ONTAP

To reassign a spare disk to a different node in the same HA pair, complete the following
steps:
1 Run the following command:
storage disk modify -disk <<var_disk01>> -owner <<var_node01>> -force-owner true

13 Disks can be assigned to either storage controller in an HA pair. Use the information obtained
from the sizing done during the presales process to determine how many disks must be assigned to
each node to support each application's workload.
4.6 Flash Cache Clustered Data ONTAP

Complete the following steps to enable Flash Cache on each node:


1 Shut down the storage system and install the Flash Cache module in the PCIe slot.
2 Boot the storage system.
3 Run the following commands on each node:
system node run -node <<var_node01>>
options flexscale.enable on
options flexscale.lopri_blocks off
options flexscale.normal_data_blocks on
exit

14 Data ONTAP 8.1 and later do not require a separate license for Flash Cache.

4.7 Aggregates 64 Bit Clustered Data ONTAP

A 64-bit aggregate containing the root volume is created during the Data ONTAP setup
process. To create additional 64-bit aggregates, determine the aggregate name, the node
on which to create it, and how many disks it will contain.
4.7.1.1 Create New Aggregate

To create a new aggregate, complete the following step:


3 Run the following command:
storage aggr create -aggregate <<var_aggr01>> -nodes <<var_node01>> -B 64 -diskcount
<<var_num_disks>>

15 Leave at least one disk (select the largest disk) in the configuration as a spare. A best practice
is to have at least one disk spare for each disk type and size.
16 Because Data ONTAP 8.2 has a 64-bit default value, the -B 64 option is not needed.
4.8 Compression Clustered Data ONTAP
Table 16) Compression clustered Data ONTAP prerequisite.

Description
Compression can be enabled only on FlexVol volumes that exist within a 64-bit aggregate.
4.8.1.1 Enable Data Compression

To enable postprocess data compression, complete the following steps:


4 Run the following command:


volume efficiency modify -vserver <<var_vserver01>> -volume <<var_vol01>> -compression true
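The efficiency configuration of the volume can then be verified; for example:

volume efficiency show -vserver <<var_vserver01>> -volume <<var_vol01>>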

4.9 RLM Clustered Data ONTAP

Before configuring the Remote LAN Module (RLM), gather information about the network
and the AutoSupport settings.
Configure the RLM using DHCP or static addressing. To use static addressing, first gather
the following information:
12 An available static IP address
13 The netmask of the network
14 The gateway address of the network
15 Autosupport information
As a best practice, configure at least the AutoSupport recipients and mail host and then
configure the RLM; the name or the IP address of the AutoSupport mail host is required.
Data ONTAP automatically sends AutoSupport configuration to the RLM, allowing the RLM
to send alerts and notifications through an AutoSupport message to the system
administrative recipients specified in AutoSupport.
4.9.1.1 Configure RLM

To configure the RLM, complete the following steps:


8 From the nodeshell, enter the following command:
system node run -node <<var_node01>> rlm setup

9 When prompted whether to configure the RLM, enter y.


10 When prompted whether to enable DHCP on the RLM, use one of the following responses:
 To use DHCP addressing, enter y.
 To use static addressing, enter n.
11 If DHCP for the RLM is not enabled, the RLM setup requires static IP information. When prompted,
provide the following information:
 The IP address for the RLM
 The netmask for the RLM
 The IP address for the RLM gateway
 The name or IP address of the AutoSupport mail host
4.10 Service Processor Clustered Data ONTAP

4.10.1.1 Set Up Service Processor

To configure and enable the service processor (SP), complete the following step:
1 Run the following command:
system node service-processor network modify -node <<var_node01>> -address-type <IPv4> -enable
<true> -ip-address <<var_ipaddress>> -netmask <<var_netmask>> -gateway <<var_gateway>>

Where
 -address-type specifies whether the IPv4 or IPv6 configuration of the SP should be modified.
 -enable enables the network interface of the specified IP address type.
 -dhcp specifies whether to use the network configuration from the DHCP server or the
network address that you provide.
Note: You can enable DHCP (by setting -dhcp to v4), but only if you are using IPv4. You cannot
enable DHCP for IPv6 configurations.
 -ip-address specifies the public IP address for the SP.


 -netmask specifies the netmask for the SP (IPv4).


 -prefix-length specifies the network prefix length of the subnet mask for the SP (IPv6).
 -gateway specifies the gateway IP address for the SP.
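
The resulting SP configuration can be verified with the following commands; a hedged verification sketch:
system node service-processor show
system node service-processor network show -node <<var_node01>>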

4.11 Storage Failover Clustered Data ONTAP

To enable storage failover, run the following commands in a failover pair:


1 Enter the cluster license on both nodes.
system node run <<var_node01>> license add <<var_cluster_license>>
system node run <<var_node02>> license add <<var_cluster_license>>

2 Enable failover on one of the two nodes.


storage failover modify -node <<var_node01>> -enabled true

3 Enable HA mode for two-node clusters only.


cluster ha modify -configured true

4 Select Yes if a prompt displays an option to enable SFO.
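
Failover and HA status can then be confirmed; a minimal verification sketch:
storage failover show
cluster ha show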


4.12 License and protocols

To add a new license, complete the following step:


1 Run the following command to install the license:
license add -license-code <<var_license>>
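
Installed licenses can be listed to confirm the result; a minimal verification sketch:
system license show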

4.12.1 CIFS Clustered Data ONTAP

To deploy CIFS, complete the following step:


1 Install the CIFS license.
license add -license-code <<var_cifs_license>>

4.12.2 NFSv3 Clustered Data ONTAP

To deploy NFS, complete the following step:


1 Install the NFS license.
system license add -license-code <<var_nfs_license>>

4.12.3 FC Clustered Data ONTAP

To deploy FC clustered Data ONTAP, complete the following steps:


1 License FCP.
system license add -license-code <<var_fc_license>>

2 If needed, create the FC service on each Vserver. This command also starts the FC service and
sets the FC alias to the name of the Vserver.
vserver fcp create -vserver <<var_vserver01>>

3 If needed, start the FC service on each Vserver.


vserver fcp start -vserver <<var_vserver01>>

4 Verify whether the FC ports are targets or initiators.


system node run -node <<var_node01>> fcadmin config

5 If needed, make an FC port into a target to allow connections into the node.


For example, make a port called <<var_fctarget01>> into a target port by running the
following command:
system node run -node <<var_node01>> fcadmin config -t target <<var_fctarget01>>

Note: If an initiator port is made into a target port, a reboot is required. NetApp recommends
rebooting after completing the entire configuration because other configuration steps might also
require a reboot.
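
The FC service and adapter configuration can be verified afterwards; a hedged sketch (adapter names vary by platform):
vserver fcp show -vserver <<var_vserver01>>
network fcp adapter show -node <<var_node01>>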
4.12.4 iSCSI Clustered Data ONTAP

To configure the iSCSI service on a Vserver, run the following commands:


Note: These steps do not include any host-specific configuration tasks.
1 From the cluster shell, license the iSCSI protocol.
system license add -license-code <<var_iscsi_license>>

2 If needed, create the iSCSI service on each Vserver. This command also starts the iSCSI service
and sets the iSCSI alias to the name of the Vserver.
vserver iscsi create -vserver <<var_vserver01>>

3 If needed, start the iSCSI service on each Vserver.


vserver iscsi start -vserver <<var_vserver01>>
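
The iSCSI service status and target name can then be checked; a hedged verification sketch:
vserver iscsi show -vserver <<var_vserver01>>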

4.13 DNS Clustered Data ONTAP

To configure DNS, complete the following step:


1 Run the following command for the Vserver:
vserver services dns create -vserver <<var_vserver01>> -domains <<var_global_domain_name>>
-name-servers <<var_global_nameserver_IP>> -state enabled

Note: To modify an existing entry, replace the word create with modify in the command.
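
The resulting DNS configuration can be verified with the following command; a hedged sketch (on recent ONTAP releases the equivalent command is vserver services name-service dns show):
vserver services dns show -vserver <<var_vserver01>>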
4.14 NTP Clustered Data ONTAP

4.14.1.1 Configure Time Synchronization for Clustered Data ONTAP 8.2.x

To configure time synchronization on the cluster, complete the following steps on each
node in the cluster:
1 Run the system services ntp server create command to associate the node with the NTP
server.
system services ntp server create -node <<var_node01>> -server
<<var_global_ntp_server_ip_addr>> -version max

2 Run the cluster date show command to verify that the date, system time, and time zone are set
correctly for each node.
Note: All nodes in the cluster should be set to the same time zone.
cluster date show
Node Date Timezone
------------ ------------------- -----------------
cluster1-01 04/06/2013 09:35:15 America/New_York
cluster1-02 04/06/2013 09:35:15 America/New_York
cluster1-03 04/06/2013 09:35:15 America/New_York
cluster1-04 04/06/2013 09:35:15 America/New_York
cluster1-05 04/06/2013 09:35:15 America/New_York
cluster1-06 04/06/2013 09:35:15 America/New_York
6 entries were displayed.

3 To correct any time zone or date values, run cluster date modify to change the date or time
zone on all of the nodes.


cluster date modify -timezone GMT
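
The configured NTP servers can be listed as follows; a hedged sketch (on ONTAP 9.x the equivalent command is cluster time-service ntp server show):
system services ntp server show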

4.15 SNMP Cluster-Mode

1 Configure SNMP basic information, such as the location and contact. When polled, this information
is visible as the sysLocation and sysContact variables in SNMP.
snmp contact "Services Engineering"
snmp location "Firebird Lab"
snmp init 1
options snmp.enable on

2 Configure SNMP traps to send to remote hosts, such as a DFM server or another fault
management system.
snmp traphost add <<var_dfm_server_fqdn>>
4.15.1.1 SNMPv1 Cluster-Mode

1 Set the shared secret plain-text password, which is called a community.


snmp community delete all
snmp community add ro public

Note: Use the delete all command with caution. If community strings are used for other
monitoring products, the delete all command will remove them.
4.15.1.2 SNMPv3 Cluster-Mode

SNMPv3 requires that a user be defined and configured for authentication.


1 Create a user called snmpuser.
security login create -username snmpuser -authmethod usm -application snmp

2 Select all of the default authoritative entities and select md5 as the authentication protocol.
3 Enter an 8-character minimum-length password for the authentication protocol, when prompted.
4 Select des as the privacy protocol.
5 Enter an 8-character minimum-length password for the privacy protocol, when prompted.
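
The SNMPv3 login entry can be verified afterwards; a hedged verification sketch:
security login show -application snmp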
4.16 Syslog Clustered Data ONTAP
Table 17) Syslog prerequisites.

Description
The SMTP server with appropriate mail IDs is configured.

The remote server has syslogd running and listening on the appropriate UDP port.
Complete the following steps to configure syslog on the cluster:
1 Set up the preconfigured SMTP server as the default mail server. Use the mail ID as it appears in
the From: field of the e-mail.
event config modify -mailfrom <<var_admin_username>> -mailserver <<var_site_a_mailhost>>

2 Configure the syslog server as an event destination.


event destination create -name <ems_destination> -syslog <<var_loghost>> -syslog-facility
<syslog_facility>

3 Configure specific events, message names, and severity to specific destinations.


event route add-destination {-messagename <<var_message>> -severity <event_severity> }
-destination <destination_name>

4 Optimize the information sent to the destination on a specific frequency and time interval.


event route modify -messagename <<var_message>> -destination <destination_name>
-frequencythreshold <frequency_threshold_count> -timethreshold <time_threshold_count>
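
Event destinations and routes can be reviewed with the following commands; a hedged verification sketch:
event destination show
event route show -messagename <<var_message>>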

4.17 AutoSupport HTTPS Clustered Data ONTAP

AutoSupport sends support summary information to NetApp through HTTPS.


To configure AutoSupport, complete the following step:
1 Run the following commands:
system node autosupport modify -node * -state enable -transport https -support enable -noteto
<<var_storage_admin_email>>
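
The AutoSupport configuration can be verified, and a test message triggered, with the following commands; a hedged sketch:
system node autosupport show -node *
system node autosupport invoke -node * -type test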

4.18 User Access for Clustered Data ONTAP

Best Practice
Delete or lock the default admin account.
There are two default administrative accounts: admin and diag. The admin account serves
in the role of administrator and is allowed access using all applications. To set up user
access for clustered Data ONTAP, complete the following steps:
1 Create a login method for a new administrator from clustershell.
security login create -username <<var_username>> -authmethod password -role admin -application
ssh
security login create -username <<var_username>> -authmethod password -role admin -application
http
security login create -username <<var_username>> -authmethod password -role admin -application
console
security login create -username <<var_username>> -authmethod password -role admin -application
ontapi
security login create -username <<var_username>> -authmethod password -role admin -application
service-processor

2 Lock the default admin account.


security login lock -username admin
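
Administrator login entries can be reviewed with the following command; a hedged sketch (on ONTAP 9.x the parameter is named -user-or-group-name):
security login show -username <<var_username>>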

4.19 HTTPS Access Clustered Data ONTAP

Secure access to the storage controller must be configured. To configure secure access,
complete the following steps:
1 Increase the privilege level to access the certificate commands.
set -privilege advanced
Do you want to continue? {y|n}: y

2 Generally, a self-signed certificate is already in place. Check it with the following command:
security certificate show

3 If a self-signed certificate does not exist, run the following command as a one-time command to
generate and install a self-signed certificate:
Note: You can also use the security certificate delete command to delete expired
certificates.
security certificate create -vserver <<var_vserver01>> -common-name
<<var_security_cert_common_name>> -size 2048 -country US -state CA -locality Sunnyvale
-organization IT -unit Software -email-addr user@example.com

4 Configure and enable SSL and HTTPS access and disable Telnet access.
system services web modify -external true -sslv3-enabled true
Do you want to continue {y|n}: y
system services firewall policy delete -policy mgmt -service http -action allow


system services firewall policy create -policy mgmt -service http -action deny -ip-list
0.0.0.0/0
system services firewall policy delete -policy mgmt -service telnet -action allow
system services firewall policy create -policy mgmt -service telnet -action deny -ip-list
0.0.0.0/0
security ssl modify -vserver <<var_vserver01>> -certificate <<var_security_cert_common_name>>
-enabled true

Note: It is normal for some of these commands to return an error message stating that the entry
does not exist.
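
Web services and SSL status can be verified afterwards; a hedged verification sketch:
system services web show
security ssl show -vserver <<var_vserver01>>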
4.20 NDMP Clustered Data ONTAP

NDMP must be enabled before it can be used by a NetApp storage controller. To enable
NDMP, complete the following steps:
1 Enable NDMP on each node in the cluster.
system services ndmpd on -node <<var_node01>>

Note: The setting is persistent across reboots.


2 Create an NDMP password for each node in the cluster.
system services ndmpd password –node <<var_node01>>

4.20.1 NDMP modes of operation

Starting with Data ONTAP 8.2, you can choose to perform tape backup and restore
operations either at the node level as you have been doing until now or at the Storage
Virtual Machine (SVM) level. To perform these operations successfully at the SVM level,
NDMP service must be enabled on the SVM.

If you upgrade from Data ONTAP 8.1 to Data ONTAP 8.2, NDMP continues to follow node-
scoped behavior. You must explicitly disable node-scoped NDMP mode to perform tape
backup and restore operations in the SVM-scoped NDMP mode.

If you install a new Data ONTAP 8.2 cluster, NDMP is in the SVM-scoped NDMP mode by
default. To perform tape backup and restore operations in the node-scoped NDMP mode,
you must explicitly enable the node-scoped NDMP mode.

What node-scoped NDMP mode is:


In the node-scoped NDMP mode, you can perform tape backup and restore operations at
the node level. If you upgrade from 8.1 to 8.2, NDMP continues to follow the node-scoped
behavior.

In this mode, you can perform tape backup and restore operations on a node that owns
the volume. To perform these operations, you must establish NDMP control connections
on a LIF hosted on the node that owns the volume or tape devices.
What SVM-scoped NDMP mode is:
Starting with Data ONTAP 8.2, you can perform tape backup and restore operations at the
Storage Virtual Machine (SVM) level successfully if the NDMP service is enabled on the
SVM. You can back up and restore all volumes hosted across different nodes in an SVM of
a cluster if the backup application supports the CAB extension.


An NDMP control connection can be established on different LIF types. In the SVM-scoped
NDMP mode, these LIFs belong to either the data SVM or admin SVM. Data LIF belongs to
the data SVM and the intercluster LIF, node-management LIF, and cluster-management
LIF belong to the admin SVM. The NDMP control connection can be established on a LIF
only if the NDMP service is enabled on the SVM that owns this LIF.
In the SVM context, the availability of volumes and tape devices for backup and restore
operations depends upon the LIF type on which the NDMP control connection is
established and the status of the CAB extension. If your backup application supports the
CAB extension and a volume and tape device share the same affinity, then the backup
application can perform a local backup or restore operation instead of a three-way backup
or restore operation.
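
To move from node-scoped to SVM-scoped operation, node-scoped mode can be disabled and the NDMP service enabled on the SVM; a hedged sketch (check the privilege level required on your release):
system services ndmp node-scope-mode status
system services ndmp node-scope-mode off
vserver services ndmp on -vserver <<var_vserver01>>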

4.21 Tape drives

There are commands for viewing information about tape drives and media changers in a
cluster, bringing a tape drive online and taking it offline, modifying the tape drive
cartridge position, setting and clearing tape drive alias names, and resetting a tape drive.
You can also view and reset tape drive statistics.

You have to access the nodeshell to use some of the commands listed below. You can
access the nodeshell by using the system node run command.

 Bring a tape drive online: storage tape online
 Clear an alias name for a tape drive or media changer: storage tape alias clear
 Enable or disable a tape trace operation for a tape drive: storage tape trace
 Modify the tape drive cartridge position: storage tape position
 Reset a tape drive: storage tape reset (this command is available only at the advanced privilege level)
 Set an alias name for a tape drive or media changer: storage tape alias set
 Take a tape drive offline: storage tape offline
 View information about all tape drives and media changers: storage tape show
 View information about tape drives attached to the cluster: storage tape show-tape-drive or system node hardware tape drive show
 View information about media changers attached to the cluster: storage tape show-media-changer
 View error information about tape drives attached to the cluster: storage tape show-errors
 View all Data ONTAP qualified and supported tape drives attached to each node in the cluster: storage tape show-supported-status
 View aliases of all tape drives and media changers attached to each node in the cluster: storage tape alias show
 Reset the statistics reading of a tape drive to zero: storage stats tape zero tape_name (you must use this command at the nodeshell)
 View tape drives supported by Data ONTAP: storage show tape supported [-v] (you must use this command at the nodeshell; use the -v option to view more details about each tape drive)
 View tape device statistics to understand tape performance and check usage pattern: storage stats tape tape_name (you must use this command at the nodeshell)

4.22 Antivirus solution for clustered Data ONTAP

Virus scanning is available only for CIFS-related traffic. This procedure describes how to
enable antivirus scanning on an SVM.

Prerequisites :

 SVM and Vscan server are integrated in the same AD domain

 SVM CIFS setup is completed (see §4.3.18)


1. Create a user with at least read-only access to the network interface command directory for ontapi

cluster::> security login create -username <antivirus_user> -vserver <vserver> -application


ontapi -authmethod password -role vsadmin

Please enter a password for user '<antivirus_user>':


Please enter it again:

2. Create scanner pool :

Best Practice
Credentials used as service accounts to run the Antivirus Connector service must be added as
privileged users in the scanner pool.
The same service account must be used to run the antivirus engine service.

cluster::> vserver vscan scanner-pool create –vserver <vserver_name> -scanner-pool


<scanner_pool_name> -servers <vscan-server1>,<vscan-server2> -privileged-users
<antivirus_user>

3. Apply scanner policy to scanner pool :

cluster::> vserver vscan scanner-pool apply-policy –vserver <vserver_name> -scanner-pool


<scanner_pool_name> -scanner-policy primary

4. Enable Virus Scanning on SVM :

cluster::> vserver vscan enable -vserver <vserver_name>
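
The scanner pool and the Vscan server connectivity can then be verified; a hedged verification sketch:

cluster::> vserver vscan scanner-pool show -vserver <vserver_name>
cluster::> vserver vscan connection-status show -vserver <vserver_name>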

/!\ To finalize the configuration, please refer to the NetApp Antivirus Solution
for Clustered Ontap INM

1.1 SnapLock

SnapLock is a high-performance compliance solution for organizations that use


WORM storage to retain files in unmodified form for regulatory and governance
purposes. A single license entitles you to use SnapLock in strict Compliance
mode, to satisfy external mandates like SEC Rule 17a-4, and a looser Enterprise
mode, to meet internally mandated regulations for the protection of digital
assets.

Differences between Compliance and Enterprise modes

SnapLock Compliance and Enterprise modes differ mainly in the level at which
each mode protects WORM files:

• Compliance-mode WORM files are protected at the disk level.


You cannot reinitialize a disk that contains Compliance-mode aggregates.

• Enterprise-mode WORM files are protected at the file level.

A related difference involves how strictly each mode manages file deletes:

• Compliance-mode WORM files cannot be deleted during the retention period.

• Enterprise-mode WORM files can be deleted during the retention period by the
compliance administrator, using an audited privileged delete procedure.

After the retention period has elapsed, you are responsible for deleting any files
you no longer need. Once a file has been committed to WORM, whether under
Compliance or Enterprise mode, it cannot be modified, even after the retention
period has expired.



1.1.1 Snaplock configuration

A SnapLock license must be installed on the node.

 Initializing the ComplianceClock

The SnapLock ComplianceClock ensures against tampering that might alter the
retention period for WORM files. You must initialize the system ComplianceClock
on each node that hosts a SnapLock aggregate. Once you initialize the
ComplianceClock on a node, you cannot initialize it again.

Initialize the system ComplianceClock:

# snaplock compliance-clock initialize -node node_name

Do this for each node that will host a SnapLock aggregate.

 Create A Snaplock aggregate

You must create a SnapLock aggregate before creating a SnapLock volume.


The SnapLock mode for the aggregate—Compliance or Enterprise—is
inherited by the volumes in the aggregate.

• You cannot create Compliance aggregates for MetroCluster configurations or


array LUNs.

• You cannot create Compliance aggregates with the SyncMirror option.


• You can destroy or rename an Enterprise aggregate at any time.

• You cannot destroy a Compliance aggregate until the retention period has
elapsed.

• You can never rename a Compliance aggregate.

Command:

# storage aggregate create -aggregate aggregate_name -node node_name


-diskcount number_of_disks -snaplock-type compliance|enterprise

 Create a Snaplock volume

You must create a SnapLock volume for the files or Snapshot copies that you
want to commit to the WORM state. The volume inherits the SnapLock mode
—Compliance or Enterprise—from the SnapLock aggregate, and the
ComplianceClock time from the node.

• The SnapLock aggregate must be online.

• You must have created a standard aggregate and a SVM to host the
SnapLock volume.

Command:

# volume create -vserver SVM_name -volume volume_name -aggregate


aggregate_name

 Setting the retention time

SnapLock uses the default retention period to calculate the retention time.


A SnapLock Compliance or Enterprise volume has three retention period


values:

• Minimum retention period (min), with a default value of 0

• Maximum retention period (max), with a default value of 30 years

• Default retention period, with a default value that depends on the mode:

◦ For Compliance mode, the default is equal to max.

◦ For Enterprise mode, the default is equal to min.

The SnapLock volume must be online.

Set the default retention period for files on a SnapLock volume:

# volume snaplock modify -vserver SVM_name -volume volume_name
-default-retention-period default_retention_period -minimum-retention-period
min_retention_period -maximum-retention-period max_retention_period

For example, a value such as 10days for -default-retention-period sets the default
retention period to 10 days.
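
SnapLock attributes can be verified on the aggregate and the volume; a hedged verification sketch:

# storage aggregate show -aggregate aggregate_name -fields snaplock-type
# volume snaplock show -vserver SVM_name -volume volume_name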

4.22.1 Committing files to WORM

You can use a command or program to commit files to WORM manually, or


use the SnapLock autocommit feature to commit files to WORM automatically.

 Committing files to WORM manually

You commit a file to WORM manually by making the file read-only. You can
use any suitable command or program over NFS or CIFS to change the read-
write attribute of a file to read-only.

Use a suitable command or program to change the read-write attribute of a


file to read-only.

In a UNIX shell, use the following command to make a file named


document.txt read-only:

# chmod -w document.txt

In a Windows shell, use the following command to make a file named


document.txt read-only:

# attrib +r document.txt

 Auto committing files to WORM

The SnapLock autocommit feature lets you commit files to WORM


automatically. You can use the volume snaplock modify command to
configure the autocommit feature for files on a SnapLock volume.


The autocommit period specifies the amount of time that files must remain
unchanged before they are autocommitted. Changing a file before the
autocommit period has elapsed restarts the autocommit period for the file.

- Autocommit files on a SnapLock volume to WORM:

# volume snaplock modify -vserver SVM_name -volume volume_name -


autocommit-period autocommit_period

4.22.2 SnapLock with SnapVault

You can use SnapLock for SnapVault to WORM-protect Snapshot copies on


secondary storage. You perform all of the basic SnapLock tasks on the
SnapVault destination. The destination volume is automatically mounted
read-only, so there is no need to explicitly commit the Snapshot copies to
WORM.

• The source cluster must be running ONTAP 8.2.2 or later.

• The source and destination aggregates must be 64-bit.

• The source volume cannot be a SnapLock volume.

• The source and destination volumes must be created in peered clusters


with peered SVMs. For more information, see the Cluster Peering Express
Guide.

• If volume autogrow is disabled, the free space on the destination volume


must be at least five percent more than the used space on the source
volume.

The following steps describe the procedure for initializing a SnapVault
relationship:


1. Identify the destination cluster.

2. On the destination cluster, install the SnapLock license, initialize the


ComplianceClock, and create a SnapLock aggregate.

3. On the destination cluster, create a SnapLock destination volume of type


DP that is either the same or greater in size than the source volume:

# volume create -vserver SVM_name -volume volume_name


-aggregate aggregate_name -type DP -size size

Note: The SnapLock mode, Compliance or Enterprise, is inherited from the


aggregate. Version-flexible destination volumes are not supported. The
language setting of the destination volume must match the language setting
of the source volume.

The following command creates a 2 GB SnapLock Compliance volume named


dstvolB in SVM2 on the aggregate node01_aggr:

cluster2::> volume create -vserver SVM2 -volume dstvolB -aggregate


node01_aggr -type DP -size 2GB

4. On the destination cluster, set the default retention period

5. On the destination SVM, create a SnapVault policy:

# snapmirror policy create -vserver SVM_name -policy policy_name


-type vault

The following command creates the SVM-wide SnapVault policy SVM1-vault:

SVM2::> snapmirror policy create -vserver SVM2 -policy SVM1-vault


-type vault

6. Add rules to the policy that define Snapshot copy labels and the retention
policy for each label:

# snapmirror policy add-rule -vserver SVM_name -policy policy_name -


snapmirror-label label -keep number_of_snapshot_copies_to_retain

The following command adds a rule to the SVM1-vault policy that defines the
Daily label and specifies that 30 Snapshot copies matching the label should
be kept in the vault:

SVM2::> snapmirror policy add-rule -vserver SVM2 -policy SVM1-


vault -snapmirror-label Daily -keep 30

7. On the destination SVM, create a SnapVault schedule:


# job schedule cron create -name schedule_name -dayofweek
day_of_week -hour hour -minute minute

The following command creates a SnapVault schedule named weekendcron:


SVM2::> job schedule cron create -name weekendcron -dayofweek


"Saturday, Sunday" -hour 3 -minute 0

8. On the destination SVM, create the SnapVault relationship and assign the
SnapVault policy and schedule:
# snapmirror create -source-path source_path -destination-path
destination_path -type XDP -policy policy_name -schedule
schedule_name

The following command creates a SnapVault relationship between the source


volume srcvolA on SVM1 and the destination volume dstvolB on SVM2, and
assigns the policy SVM1-vault and the schedule weekendcron:

SVM2::> snapmirror create -source-path SVM1:srcvolA -destination-


path SVM2:dstvolB -type XDP -policy SVM1-vault -schedule
weekendcron

9. On the destination SVM, initialize the SnapVault relationship:

# snapmirror initialize -destination-path destination_path

The initialization process performs a baseline transfer to the destination


volume. SnapMirror makes a Snapshot copy of the source volume, then
transfers the copy and all of the data blocks it references to the destination
volume

The following command initializes the relationship between the source volume
srcvolA on SVM1 and the destination volume dstvolB on SVM2:

SVM2::> snapmirror initialize -destination-path SVM2:dstvolB
4.22.3 Mirroring WORM files

You can use SnapMirror to replicate WORM files to another geographic location
for disaster recovery and other purposes. Both the source volume and
destination volume must be configured for SnapLock, and both volumes must
have the same SnapLock mode, Compliance or Enterprise. All key SnapLock
properties of the volume and files are replicated.

The source and destination volumes must be created in peered clusters with
peered SVMs.

The following steps describe the procedure for initializing a SnapMirror
relationship:


1. Identify the destination cluster.

2. On the destination cluster, install the SnapLock license, initialize the


ComplianceClock, and create a SnapLock aggregate.

3. On the destination cluster, create a SnapLock destination volume of type


DP that is either the same size as or greater in size than the source volume:

# volume create -vserver SVM_name -volume volume_name


-aggregate aggregate_name -type DP -size size

The following command creates a 2 GB SnapLock Compliance volume


named dstvolB in SVM2 on the aggregate node01_aggr:

cluster2::> volume create -vserver SVM2 -volume dstvolB


-aggregate node01_aggr -type DP -size 2GB

4. On the destination SVM, create a SnapMirror policy:


# snapmirror policy create -vserver SVM_name -policy policy_name

The following command creates the SVM-wide policy SVM1-mirror:


SVM2::> snapmirror policy create -vserver SVM2 -policy SVM1-
mirror

5. On the destination SVM, create a SnapMirror schedule:

# job schedule cron create -name schedule_name -dayofweek


day_of_week -hour hour -minute minute

The following command creates a SnapMirror schedule named


weekendcron:

SVM2::> job schedule cron create -name weekendcron -dayofweek


"Saturday, Sunday" -hour 3 -minute 0

6. On the destination SVM, create a SnapMirror relationship:

# snapmirror create -source-path source_path -destination-path


destination_path -type DP -policy policy_name -schedule
schedule_name

The following command creates a SnapMirror relationship between the


source volume srcvolA on SVM1 and the destination volume dstvolB on
SVM2, and assigns the policy SVM1-mirror and the schedule weekendcron:

SVM2::> snapmirror create -source-path SVM1:srcvolA


-destination-path SVM2:dstvolB -type DP -policy SVM1-mirror
-schedule weekendcron

7. On the destination SVM, initialize the SnapMirror relationship:

# snapmirror initialize -destination-path destination_path

The initialization process performs a baseline transfer to the destination


volume. SnapMirror makes a Snapshot copy of the source volume, then
transfers the copy and all the data blocks that it references to the


destination volume. It also transfers any other Snapshot copies on the


source volume to the destination volume.

The following command initializes the relationship between the source


volume srcvolA on SVM1 and the destination volume dstvolB on SVM2:
SVM2::> snapmirror initialize -destination-path SVM2:dstvolB

1.2 7-Mode Data Transition Using Snapmirror

This section is an extract of the official documentation; please refer to that
documentation for all prerequisites.

You can transition 7-Mode volumes in a NAS and SAN environment to clustered
Data ONTAP volumes by using clustered Data ONTAP SnapMirror commands.
You must then set up the protocols, services, and other configuration on the
cluster after the transition is complete.

Preparing the 7-Mode system for transition

Before you begin


All the 7-Mode volumes that you want to transition must be online.

Steps
1. Add and enable the SnapMirror license on the 7-Mode system:
a. Add the SnapMirror license on the 7-Mode system:
license add license_code
license_code is the license code you purchased.
b. Enable the SnapMirror functionality:
options snapmirror.enable on
2. Configure the 7-Mode system and the target cluster to communicate with
each other by choosing
one of the following options:
• Set the snapmirror.access option to all.
• Set the value of the snapmirror.access option to the IP addresses of all the
LIFs on the
cluster.
• If the snapmirror.access option is legacy and the snapmirror.checkip.enable
option is off, add the SVM name to the /etc/snapmirror.allow file.
• If the snapmirror.access option is legacy and the snapmirror.checkip.enable
option is on, add the IP addresses of the LIFs to the /etc/snapmirror.allow file.
3. Depending on the Data ONTAP version of your 7-Mode system, perform the
following steps:
a. Allow SnapMirror traffic on all the interfaces:
options interface.snapmirror.blocked ""
b. If you are running Data ONTAP version 7.3.7, 8.0.3, or 8.1 and you are using
the IP address of the e0M interface as the management IP address to interact
with 7-Mode Transition Tool, allow data traffic on the e0M interface:
options interface.blocked.mgmt_data_traffic off
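
For example, to restrict SnapMirror access on the 7-Mode system to specific cluster LIF addresses (the second option of step 2), the option can be set as follows; a hedged sketch with placeholder addresses:
system7mode> options snapmirror.access host=192.0.2.130,192.0.2.131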

Preparing the cluster for transition

You must set up the cluster before transitioning a 7-Mode system and ensure
that the cluster meets requirements such as setting up LIFs and verifying
network connectivity for transition.

Before you begin


• The cluster and the SVM must already be set up.

The target SVM must not be in an SVM disaster recovery relationship.


• The cluster must be reachable by using the cluster-management LIF.
• The cluster must be healthy and none of the nodes must be in takeover
mode.
• The target aggregates, which will contain the transitioned volumes, must
have an SFO policy.
• The aggregates must be on nodes that have not reached the maximum
volume limit.
• For establishing an SVM peer relationship when transitioning a volume
SnapMirror relationship,
the following conditions must be met:
The secondary cluster should not have an SVM with the same name as that of
the primary
SVM.
◦ The primary cluster should not have an SVM with the same name as that of
the secondary
SVM.
◦ The name of the source 7-Mode system should not conflict with any of the
local SVMs or
already peered SVMs.
About this task
You can set up intercluster LIFs or local LIFs that are in the default IPspace, on
each node of the cluster to communicate between the cluster and 7-Mode
systems. If you have set up local LIFs, then you do not have to set up
intercluster LIFs. If you have set up both intercluster LIFs and local LIFs, then
the local LIFs are preferred.

Step
1. Create the intercluster LIF on each node of the cluster for communication
between the cluster and 7-Mode system:

a. Create an intercluster LIF by using the network interface create command.


Example
cluster1::> network interface create -vserver cluster1-01 -lif
intercluster_lif -role intercluster -home-node cluster1-01 -home-port
e0c -address 192.0.2.130 -netmask 255.255.255.0

b. Create a static route for the intercluster LIF by using the network route
create
command.
Example
cluster1::> network route create -vserver vs0 -destination
0.0.0.0/0 -gateway 10.61.208.1

c. Verify that you can use the intercluster LIF to ping the 7-Mode system by
using the network ping command.
Example
cluster1::> network ping -lif intercluster_lif -lif-owner cluster1-01 -destination
system7mode
system7mode is alive

Creating a transition peer relationship

You must create a transition peer relationship before you can set up a
SnapMirror relationship for transition between a 7-Mode system and a cluster.


As a cluster administrator, you can create a transition peer relationship


between an SVM and a 7-Mode system by using the vserver peer transition
create command.

Before you begin


• You must have ensured that the name of the source 7-Mode system does not
conflict with any of local SVMs or already peered SVMs.
• You must have created a clustered Data ONTAP volume of type DP to which
the 7-Mode data must be transitioned.
The size of the clustered Data ONTAP volume must be equal to or greater than
the size of the 7-Mode volume.
• You must have ensured that the SVM names do not contain a "."
• If you are using local LIFs, you must have ensured the following:
◦ Local LIFs are created in the default IPspace
◦ Local LIFs are configured on the node on which the volume resides
◦ LIF migration policy is same as the volume node, so that both can migrate to
the same
destination node

About this task


When creating a transition peer relationship, you can also specify a multipath
FQDN or IP address for load balancing the data transfers.

Steps
1. Use the vserver peer transition create command to create a transition peer
relationship.
2. Use the vserver peer transition show to verify that the transition peer
relationship is
created successfully.

Example of creating and viewing transition peer relationships


The following command creates a transition peer relationship between the SVM
vs1 and the 7-Mode system src1 with the multipath address src1-e0d and local
LIFs lif1 and lif2:

cluster1::> vserver peer transition create -local-vserver vs1 -src-filer-name
src1 -multi-path-address src1-e0d -local-lifs lif1,lif2
The following examples show a transition peer relationship between a single
SVM (vs1) and
multiple 7-Mode systems:
cluster1::> vserver peer transition create -local-vserver vs1 -src-filer-name src3
Transition peering created
cluster1::> vserver peer transition create -local-vserver vs1 -src-filer-name src2
Transition peering created
The following output shows the transition peer relationships of the SVM vs1:
cluster1::> vserver peer transition show
Vserver Source Filer Multi Path Address Local LIFs
------- ------------ ----------------- ---------
vs1 src2 - -
vs1 src3 - -

Transitioning a stand-alone volume

Transitioning a stand-alone volume involves creating a SnapMirror relationship,


performing a


baseline transfer, performing incremental updates, monitoring the data copy


operation, breaking the SnapMirror relationship, and moving client access from
the 7-Mode volume to the clustered Data ONTAP volume.

Before you begin


• The cluster and SVM must already be set up.

Steps
1. Copy data from the 7-Mode volume to the clustered Data ONTAP volume:
a. If you want to configure the TCP window size for the SnapMirror relationship
between the 7-Mode system and the SVM, create a SnapMirror policy of type
async-mirror with the
window-size-for-tdp-mirror option.
You must then apply this policy to the TDP SnapMirror relationship between the
7-Mode
system and the SVM.
You can configure the TCP window size in the range of 256 KB to 7 MB for
improving the
SnapMirror transfer throughput so that the transition copy operations get
completed faster.
The default value of TCP window size is 2 MB.
Example
cluster1::> snapmirror policy create -vserver vs1 –policy
tdp_policy -window-size-for-tdp-mirror 5MB -type async-mirror
b. Use the snapmirror create command with the relationship type as TDP to
create a
SnapMirror relationship between the 7-Mode system and the SVM.
If you have created a SnapMirror policy to configure the TCP window size, you
must apply
the policy to this SnapMirror relationship.
Example
cluster1::> snapmirror create -source-path system7mode:dataVol20 -
destination-path vs1:dst_vol -type TDP -policy tdp_policy
Operation succeeded: snapmirror create the relationship with
destination vs1:dst_vol.
c. Use the snapmirror initialize command to start the baseline transfer.
Example
cluster1::> snapmirror initialize -destination-path vs1:dst_vol
Operation is queued: snapmirror initialize of destination
vs1:dst_vol.
d. Use the snapmirror show command to monitor the status.
Example
cluster1::>snapmirror show -destination-path vs1:dst_vol

Source Path: system7mode:dataVol20
Destination Path: vs1:dst_vol
Relationship Type: TDP
Relationship Group Type: none
SnapMirror Schedule: -
SnapMirror Policy Type: async-mirror
SnapMirror Policy: DPDefault
Tries Limit: -
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Relationship Status: Idle
File Restore File Count: -
File Restore File List: -
Transfer Snapshot: -
Snapshot Progress: -
Total Progress: -
Network Compression Ratio: -
Snapshot Checkpoint: -
Newest Snapshot: vs1(4080431166)_dst_vol.1
Newest Snapshot Timestamp: 10/16 02:49:03
Exported Snapshot: vs1(4080431166)_dst_vol.1
Exported Snapshot Timestamp: 10/16 02:49:03
Healthy: true
Unhealthy Reason: -
Constituent Relationship: false
Destination Volume Node: cluster1-01
Relationship ID: 97b205a1-54ff-11e4-9f30-005056a68289
Current Operation ID: -
Transfer Type: -
Transfer Error: -
Current Throttle: -
Current Transfer Priority: -
Last Transfer Type: initialize
Last Transfer Error: -
Last Transfer Size: 152KB
Last Transfer Network Compression Ratio: 1:1
Last Transfer Duration: 0:0:6
Last Transfer From: system7mode:dataVol20
Last Transfer End Timestamp: 10/16 02:43:53
Progress Last Updated: -
Relationship Capability: 8.2 and above
Lag Time: -
Number of Successful Updates: 0
Number of Failed Updates: 0
Number of Successful Resyncs: 0
Number of Failed Resyncs: 0
Number of Successful Breaks: 0
Number of Failed Breaks: 0
Total Transfer Bytes: 155648
Total Transfer Time in Seconds: 6

e. Depending on whether you want to update the clustered Data ONTAP


volume manually or by setting up a SnapMirror schedule, perform the
appropriate action:


2. If you have a schedule for incremental transfers, perform the following steps
when you are ready to perform cutover:
a. Optional: Use the snapmirror quiesce command to disable all future update
transfers.
Example
cluster1::> snapmirror quiesce -destination-path vs1:dst_vol


b. Use the snapmirror modify command to delete the SnapMirror schedule.


Example
cluster1::> snapmirror modify -destination-path vs1:dst_vol -schedule ""
c. Optional: If you quiesced the SnapMirror transfers earlier, use the snapmirror
resume
command to enable SnapMirror transfers.
Example
cluster1::> snapmirror resume -destination-path vs1:dst_vol
3. Wait for any ongoing transfers between the 7-Mode volumes and the
clustered Data ONTAP
volumes to finish, and then disconnect client access from the 7-Mode volumes
to start cutover.
4. Use the snapmirror update command to perform a final data update to the
clustered Data
ONTAP volume.
Example
cluster1::> snapmirror update -destination-path vs1:dst_vol
Operation is queued: snapmirror update of destination vs1:dst_vol.
5. Use the snapmirror show command to verify that the last transfer was
successful.
6. Use the snapmirror break command to break the SnapMirror relationship
between the 7-
Mode volume and the clustered Data ONTAP volume.
Example
cluster1::> snapmirror break -destination-path vs1:dst_vol
[Job 60] Job succeeded: SnapMirror Break Succeeded
7. If your volumes have LUNs configured, at the advanced privilege level, use
the lun
transition 7-mode show command to verify that the LUNs were transitioned.
You can also use the lun show command on the clustered Data ONTAP volume
to view all of
the LUNs that were successfully transitioned.
8. Use the snapmirror delete command to delete the SnapMirror relationship
between the 7-
Mode volume and the clustered Data ONTAP volume.
Example
cluster1::> snapmirror delete -destination-path vs1:dst_vol
9. Use the snapmirror release command to remove the SnapMirror relationship
information
from the 7-Mode system.

Example
system7mode> snapmirror release dataVol20 vs1:dst_vol

After you finish


You must delete the SVM peer relationship between the 7-Mode system and the
SVM when all of the required volumes in the 7-Mode system are transitioned to
the SVM.
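
The transition peer relationship can be removed with the transition peering commands; a hedged sketch reusing the earlier example names:

cluster1::> vserver peer transition delete -local-vserver vs1 -src-filer-name src1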

4.23 SVM DR

SVM disaster recovery is the asynchronous mirroring of SVM data and


configuration. You can choose to replicate all of the SVM configuration or a subset
of it (for example, excluding the network and protocol configuration).

4.23.1 Architecture overview

A replication relationship is configured between SVMs.


Snapmirror software eliminates the need to maintain replication relationships for


each individual volume inside the SVMs. Change management between the two
SVMs is managed automatically.

It can be configured in two different modes (identity-preserve set to true or false,
depending on the network configuration).

To recover from a disaster, you must activate the destination SVM.

Activating the destination SVM involves the following steps (a command sketch is shown after this list):

 quiescing scheduled SnapMirror transfers,

 aborting any ongoing SnapMirror transfers,

 breaking the SVM disaster recovery (DR) relationship

 stopping the source SVM

 starting the destination SVM.
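
A minimal activation sketch, assuming the example SVM names used later in this chapter (svm_test_svmdr on the source cluster cnasimu01, dsvm_test_svmdr on the destination cluster cnasimu02); the snapmirror commands are run on the destination cluster:

cnasimu02::> snapmirror quiesce -destination-path dsvm_test_svmdr:
cnasimu02::> snapmirror abort -destination-path dsvm_test_svmdr:
cnasimu02::> snapmirror break -destination-path dsvm_test_svmdr:
cnasimu01::> vserver stop -vserver svm_test_svmdr
cnasimu02::> vserver start -vserver dsvm_test_svmdr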


4.23.2 NetApp guides

For a full description of the SVM disaster recovery, see:

Preparation Express Guide:

https://library.netapp.com/ecm/ecm_download_file/ECMLP2496254

Configuration Express Guide:

https://library.netapp.com/ecm/ecm_download_file/ECMLP2496252

4.23.3 Limitations and requirements

SVM disaster recovery is supported by ENG from Data ONTAP 9 onwards.

SVMs must use FlexVol volumes (Infinite Volumes are not supported).

The source and destination clusters must not be in a MetroCluster configuration.

Both the source and destination clusters must have a SnapMirror license.

The source and destination clusters must be peered.

The source SVM must not contain DP or TDP volumes (data protection or transition
data protection).

The source SVM must not contain any volume that resides in a FabricPool-enabled
aggregate.

The source SVM root volume must not contain any data other than metadata,
because other data is not replicated. Root volume metadata such as volume
junctions, symbolic links, and directories leading to junctions and symbolic
links is replicated.

The CIFS audit consolidation path must be on a non-root volume.

The SVM root volume must not have any qtrees.

The destination cluster must have at least one non-root aggregate with a
minimum free space of 10 GB for configuration replication.

The destination cluster must have at least one non-root aggregate with
sufficient space for the replicated data.

If any clone parent or clone child volumes are moved by using the volume move
command, then you must move the corresponding volume at the destination
SVM.


4.23.4 Disaster Recovery preparation

4.23.4.1 Deciding whether to replicate SVM network configuration

Depending on the architecture and the need, all or a subset of the SVM
configuration can be replicated:

 Replicate data and all the SVM configuration

 Replicate data and all the SVM configuration except the NAS data LIFs

 Replicate data and a subset of the SVM configuration

See the SVM DR Preparation Express Guide for more details.

4.23.4.2 Preparation

Verify that the source and destination clusters are peered :

cnasimu01::> cluster peer show


Peer Cluster Name Cluster Serial Number Availability Authentication
------------------------- --------------------- -------------- --------------
cnasimu02 1-80-000008 Available ok

Verify the SnapMirror licenses on both clusters:

cnasimu02::> license show


(system license show)

Serial Number: 1-80-000008


Owner: cnasimu02
Package Type Description Expiration
----------------- -------- --------------------- -------------------
Base license Cluster Base License -

Serial Number: 1-81-0000000000000004082368511


Owner: cnasimu02-01
Package Type Description Expiration
----------------- -------- --------------------- -------------------
NFS license NFS License -
SnapMirror license SnapMirror License -

Create the same custom schedules on the destination cluster as on the source
one:

cnasimu01::> job schedule cron show


Cluster Name Description
------------- ----------- -----------------------------------------------------
cnasimu02
5min @:00,:05,:10,:15,:20,:25,:30,:35,:40,:45,:50,:55
8hour @2:15,10:15,18:15
daily @0:10
hourly @:05
monthly 1@0:20
pg-daily @0:10
pg-hourly @:30
pg-weekly Sun@0:15
weekly Sun@0:15


If needed, create new schedules using the command:

cnasimu02::> job schedule cron create -name weekly -dayofweek "Sunday" -hour 0 -minute 15

Ensure that the destination cluster has at least one non-root aggregate with a
minimum of 10 GB free space for configuration replication (the best practice is to have two such aggregates).

cnasimu02::> storage aggregate show

Aggregate Size Available Used% State #Vols Nodes RAID Status


--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_cnasimu02_01_0
3.34GB 167.4MB 95% online 1 cnasimu02-01 raid_dp,
normal
aggr0_cnasimu02_02
3.34GB 165.4MB 95% online 1 cnasimu02-02 raid_dp,
normal
cnasimu02_01_FC_1
24.61GB 24.61GB 0% online 0 cnasimu02-01 raid_dp,
Normal

If needed, create a non-root aggregate of 10GB by using storage aggregate


create command.

4.23.4.3 Creation of the IP space

If you want to replicate all the SVM configuration, the IPspaces of the source and
destination SVMs must have ports belonging to the same subnet.

cnasimu02::> ipspace create -ipspace ips_802


(network ipspace create)

If needed, remove the ports (ifgroup or vlan) from the Default broadcast domain
and create a new one:

cnasimu02::> network port broadcast-domain remove-ports -ipspace Default -broadcast-domain


Default -ports cnasimu02-01:e0d,cnasimu02-02:e0d

cnasimu02::> network port broadcast-domain create -ipspace ips_802 -broadcast-domain


bcast_vlan-802 -mtu 9000 -ports cnasimu02-01:e0d,cnasimu02-02:e0d

Here is a basic configuration:

cnasimu02::> network port broadcast-domain show


IPspace Broadcast Update
Name Domain Name MTU Port List Status Details
------- ----------- ------ ----------------------------- --------------
Cluster Cluster 1500
cnasimu01_02:e0a complete
cnasimu01_02:e0b complete
cnasimu01_01:e0a complete
cnasimu01_01:e0b complete
Default Default 1500
cnasimu01_02:e0c complete
cnasimu01_01:e0c complete


ips_802 bcast_vlan-802
9000
cnasimu01_02:e0d-802 complete
cnasimu01_01:e0d-802 complete

4.23.4.4 Creation of the destination SVM

Create the destination SVM with the subtype dp-destination.

cnasimu02::> vserver create -vserver dsvm_test_svmdr -subtype dp-destination -ipspace ips_802

Note : Cannot specify options other than Vserver name, comment and ipspace for
a Vserver that is being configured as the destination for Vserver DR

Warning: be sure to set the correct IPspace (the IPspace cannot be
modified once the SVM is created).

cnasimu02::> vserver show -vserver dsvm_test_svmdr -fields ipspace


vserver ipspace
--------------- -------
dsvm_test_svmdr ips_802

The SVM is stopped:

cnasimu02::> vserver show


Admin Operational Root
Vserver Type Subtype State State Volume Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
cnasimu02 admin - - - - -
cnasimu02-01
node - - - - -
cnasimu02-02
node - - - - -
dsvm_test_svmdr
data dp-destination stopped - -
running

4.23.4.5 SVM peer relationship

You must create an intercluster SVM peer relationship between the source and
the destination SVMs.

On the destination cluster, create the SVM peer relationship

cnasimu02::> vserver peer create -vserver dsvm_test_svmdr -peer-vserver svm_test_svmdr


-applications snapmirror -peer-cluster cnasimu01

On the source cluster, accept the SVM peer relationship

cnasimu01::> vserver peer accept -vserver svm_test_svmdr -peer-vserver dsvm_test_svmdr


On the destination cluster, verify that the SVM peer relationship is in the peered
state

cnasimu02::> vserver peer show


Peer Peer Peering Remote
Vserver Vserver State Peer Cluster Applications Vserver
----------- ----------- ------------ ----------------- -------------- ---------
dsvm_test_svmdr
svm_test_svmdr
peered cnasimu01 snapmirror svm_test_svmdr

4.23.4.6 SnapMirror policy

There are several default SnapMirror policies which can be used for the
relationship. They can be viewed using the following command:

cnasimu02::> snapmirror policy show


Vserver Policy Policy Number Transfer
Name Name Type Of Rules Tries Priority Comment
------- ------------------ ------ -------- ----- -------- ----------
cnasimu02
DPDefault async-mirror 2 8 normal Asynchronous SnapMirror policy for
mirroring all Snapshot copies and the latest active file system.
SnapMirror Label: sm_created Keep: 1
all_source_snapshots 1
Total Keep: 2

cnasimu02
MirrorAllSnapshots async-mirror 2 8 normal Asynchronous SnapMirror policy for
mirroring all Snapshot copies and the latest active file system.
SnapMirror Label: sm_created Keep: 1
all_source_snapshots 1
Total Keep: 2

If the source and destination SVMs are in different network subnets, and you do
not want to replicate the LIFs, you must create a SnapMirror policy with the
-discard-configs network option.

On the destination cluster :

1. Create a SnapMirror policy to exclude the LIFs from replication by using the
snapmirror policy create command.

2. Verify that the new SnapMirror policy is created by using the snapmirror policy
show command.

You must use the newly created SnapMirror policy when creating the SnapMirror
relationship.

Example

cnasimu02::> snapmirror policy create -vserver dsvm_test_svmdr -policy exclude_LIF -type


async-mirror -discard-configs network

cnasimu02::> snapmirror policy show -vserver dsvm_test_svmdr -instance

Vserver: dsvm_test_svmdr
SnapMirror Policy Name: exclude_LIF


SnapMirror Policy Type: async-mirror


Policy Owner: vserver-admin
Tries Limit: 8
Transfer Priority: normal
Ignore accesstime Enabled: false
Transfer Restartability: always
Network Compression Enabled: false
Create Snapshot: true
Discard Configs: network
Comment: -
Total Number of Rules: 1
Total Keep: 1
Rules:
SnapMirror Label Keep Preserve Warn Schedule Prefix
----------------------------- ---- -------- ---- -------- ----------
sm_created 1 false 0 - -

4.23.4.7 SnapMirror relationship

On the destination cluster.

Create a snapmirror relationship between the source and the destination SVMs :

 You can specify the source SVM and the destination SVM as either paths
or SVM names. If you want to specify the source and destination as paths,
then the SVM name must be followed by a colon.

 Select the policy depending on your need

 Replicate the data and a subset of the configuration by setting the
-identity-preserve option to false

Example :

cnasimu02::> snapmirror create -source-vserver svm_test_svmdr -destination-vserver


dsvm_test_svmdr -type DP -throttle unlimited -policy DPDefault -schedule hourly -identity-
preserve true

Verify the relationship

cnasimu02::> snapmirror show


Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm_test_svmdr:
XDP dsvm_test_svmdr:
Uninitialized
Idle - true -

cnasimu02::> snapmirror show -instance

Source Path: svm_test_svmdr:


Destination Path: dsvm_test_svmdr:
Relationship Type: XDP
Relationship Group Type: vserver
SnapMirror Schedule: hourly
SnapMirror Policy Type: async-mirror
SnapMirror Policy: DPDefault
Tries Limit: -


Throttle (KB/sec): unlimited


Mirror State: Uninitialized
Relationship Status: Idle
File Restore File Count: -
File Restore File List: -

4.23.4.8 Excluding volumes from replication

By default, all rw data volumes of the source SVM are replicated. Volumes can be
excluded from the replication by modifying the –vserver-dr-protection option.

On the source cluster.

cnasimu01::> volume modify -vserver svm_test_svmdr -volume vol_svmdr_nas_01 -vserver-dr-


protection unprotected
Volume modify successful on volume vol_svmdr_nas_01 of Vserver svm_test_svmdr.

Verify the unprotected status

cnasimu01::> volume show -fields vserver-dr-protection


vserver volume vserver-dr-protection
------------ ------ ---------------------
svm_test_svmdr
svm_test_svmdr_root
unprotected
svm_test_svmdr
vol_svmdr_nas_01
unprotected
svm_test_svmdr
vol_svmdr_nas_02
protected

4.23.4.9 CIFS server

If the source SVM has a CIFS configuration and you choose to set identity-preserve
to false, a CIFS server must be created for the destination SVM (a command sketch
is provided after the list below).

 Start the destination SVM

 Create a LIF

 Create a route

 Configure DNS

 Add the preferred domain controller

 Create the CIFS server

 Stop the destination SVM

See the SVM DR Preparation Express Guide for more details.
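
A minimal sketch of these steps is shown below, using the destination cluster prompt from the earlier examples; the LIF name, port, IP addresses, domain name and CIFS server name are assumptions to adapt to your environment, and the express guide remains the authoritative procedure.

cnasimu02::> vserver start -vserver dsvm_test_svmdr
cnasimu02::> network interface create -vserver dsvm_test_svmdr -lif lif_cifs_dr -role data -data-protocol cifs -home-node cnasimu02-01 -home-port e0d-802 -address 192.168.2.224 -netmask 255.255.255.0
cnasimu02::> network route create -vserver dsvm_test_svmdr -destination 0.0.0.0/0 -gateway 192.168.2.1
cnasimu02::> vserver services name-service dns create -vserver dsvm_test_svmdr -domains example.com -name-servers 192.168.2.10
cnasimu02::> vserver cifs domain preferred-dc add -vserver dsvm_test_svmdr -domain example.com -preferred-dc 192.168.2.10
cnasimu02::> vserver cifs create -vserver dsvm_test_svmdr -cifs-server DSVMTESTCIFS -domain example.com
cnasimu02::> vserver stop -vserver dsvm_test_svmdr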


4.23.4.10 Destination SVM initialization

The destination SVM must be initialized for the baseline transfer of data and
configuration details.

On the destination cluster.

Perform a baseline transfer from the source to the destination SVM

cnasimu02::> snapmirror initialize dsvm_test_svmdr:

Once the SVM is snapmirrored, the relationship status will switch from Transferring to Idle :

cnasimu02::> snapmirror show


Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm_test_svmdr:
XDP dsvm_test_svmdr:
Uninitialized
Transferring - true -

cnasimu02::> snapmirror show


Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm_test_svmdr:
XDP dsvm_test_svmdr:
Snapmirrored
Idle - true -

4.23.4.11 NAS LIF configuration

If the source and destination SVMs are in the same network subnets, the LIFs are configured on the destination SVM (discard-configs network option not set in the snapmirror policy).

They are created in the down state.

In case of a disaster recovery, they will switch to the up state.

cnasimu02::> network interface show -vserver dsvm_test_svmdr


Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
dsvm_test_svmdr
lif_file_svm_test_svmdr_01
up/down 192.168.1.224/24 cnasimu02-01 e0d-802 true

Note : the LIF follows the naming convention of the source site which can be
ambiguous (especially in OCB where the VLAN ID is specified in the LIF name)


If the discard-configs network option is set in the snapmirror policy, for example when the source and destination SVMs are in different network subnets, the NAS LIFs and routes must be configured manually on the destination SVM.

4.23.4.12 Destination SVM configuration

If the identity-preserve option is set to false or if the source SVM has SAN configuration, network and protocols must be configured on the destination SVM when a disaster occurs.

The destination SVM must be started and in the running state.

For NAS:

 Create data LIFs and routes

 Configure name services (LDAP, NIS, DNS)

 Configure CIFS / NFS protocols (or both)

 Stop the destination SVM if the source SVM has CIFS configuration

Note : read-only (ro) access for NFS clients can be set up from the destination SVM

For SAN:

 Create data LIFs

 Create igroups for the LUNs

 Map the LUNs to the igroups

 Configure iSCSI / FC protocols (or both)

Note : read-only (ro) access for SAN hosts can be set up from the destination SVM
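
A minimal SAN-side sketch under the same assumptions is shown below (hypothetical LIF, igroup, initiator and LUN names); the iSCSI or FC service must already be licensed and enabled on the destination SVM.

cnasimu02::> network interface create -vserver dsvm_test_svmdr -lif lif_iscsi_dr -role data -data-protocol iscsi -home-node cnasimu02-01 -home-port e0d-802 -address 192.168.2.225 -netmask 255.255.255.0
cnasimu02::> lun igroup create -vserver dsvm_test_svmdr -igroup ig_host01 -protocol iscsi -ostype linux -initiator iqn.1994-05.com.redhat:host01
cnasimu02::> lun map -vserver dsvm_test_svmdr -path /vol/vol_svmdr_san_01/lun01 -igroup ig_host01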

4.23.4.13 SnapMirror monitoring

The SnapMirror relationship status can be monitored to verify that the updates
are occurring.

cnasimu02::> snapmirror show -instance

Source Path: svm_test_svmdr:


Destination Path: dsvm_test_svmdr:
Relationship Type: XDP
Relationship Group Type: vserver
SnapMirror Schedule: hourly
SnapMirror Policy Type: async-mirror
SnapMirror Policy: DPDefault
Tries Limit: -
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Relationship Status: Idle
File Restore File Count: -
File Restore File List: -


Transfer Snapshot: -
Snapshot Progress: -
Total Progress: -
Network Compression Ratio: -
Snapshot Checkpoint: -
Newest Snapshot: vserverdr.2.bed5a4e1-180d-11e9-a9da-
000c2912003c.2019-01-21_090500
Newest Snapshot Timestamp: 01/21 09:05:00
Exported Snapshot: vserverdr.2.bed5a4e1-180d-11e9-a9da-
000c2912003c.2019-01-21_090500
Exported Snapshot Timestamp: 01/21 09:05:00
Healthy: true
Unhealthy Reason: -
Constituent Relationship: false
Destination Volume Node: -
Relationship ID: 2f3cbd01-180e-11e9-a9da-000c2912003c
Current Operation ID: -
Transfer Type: -
Transfer Error: -
Current Throttle: -
Current Transfer Priority: -
Last Transfer Type: update
Last Transfer Error: -
Last Transfer Size: 8.59KB
Last Transfer Network Compression Ratio: -
Last Transfer Duration: 0:0:15
Last Transfer From: svm_test_svmdr:
Last Transfer End Timestamp: 01/21 09:05:15
Progress Last Updated: -
Relationship Capability: -
Lag Time: 0:9:1
Identity Preserve Vserver DR: true
Volume MSIDs Preserved: true
Is Auto Expand Enabled: -
Number of Successful Updates: -
Number of Failed Updates: -
Number of Successful Resyncs: -
Number of Failed Resyncs: -
Number of Successful Breaks: -
Number of Failed Breaks: -
Total Transfer Bytes: -
Total Transfer Time in Seconds: -

Note : SNMP is not supported for that monitoring

4.23.4.14 Data Protection volumes cloning

DP volumes of the destination SVM can be cloned to another SVM in the destination cluster for a specific usage (test, etc.).

Prerequisites:

 The destination cluster must have FlexClone license

 The two SVMs must exist in the same cluster

Note : this task is performed in advanced privilege mode

cnasimu02::> set advanced

Warning: These advanced commands are potentially dangerous; use them only when directed to do
so by NetApp personnel.
Do you want to continue? {y|n}: y


Create a SVM and a FlexClone volume

cnasimu02::*> vserver create -vserver svm_test_clone

cnasimu02::*> volume clone create -vserver svm_test_clone -flexclone vol_svmdr_nas_02_clone -type RW -parent-vserver dsvm_test_svmdr -parent-volume vol_svmdr_nas_01 -junction-active true -parent-snapshot vserverdr.2.bed5a4e1-180d-11e9-a9da-000c2912003c.2019-01-21_090500
[Job 334] Job is queued: Create vol_svmdr_nas_02_clone.

Info: The default export policies of the Vserver "svm_test_clone" will be assigned to the
clone volume. Use the "volume modify"
command to change the policies associated with the clone volume.
[Job 334] Job succeeded: Successful

The FlexClone volume inherits the export policies and Snapshot policies from the
SVM to which it belongs.

The clone status can be checked :

cnasimu02::*> volume clone show


Parent Parent Parent
Vserver FlexClone Vserver Volume Snapshot State Type
------- ------------- ------- ------------- -------------------- --------- ----
svm_test_clone
vol_svmdr_nas_02_clone
dsvm_test_svmdr
vol_svmdr_nas_01
vserverdr.2.bed5a4e1-180d-11e9-a9da-
000c2912003c.2019-01-21_090500
online RW


4.23.5 Disaster Recovery activation

To recover from a disaster, you must activate the destination SVM.

Activating the destination SVM involves quiescing scheduled SnapMirror transfers, aborting any ongoing SnapMirror transfers, breaking the SVM disaster recovery (DR) relationship, stopping the source SVM, and starting the destination SVM.

Notes:

 The ONTAP version of the destination SVM must be at or above the version of the source. This is not a requirement for volume async-mirror and XDP relationships.

 During a disaster, any new data that is written on the source SVM after
the last SnapMirror transfer is lost.

The workflow is described in the following subsections.


4.23.5.1 Quiescing SnapMirror transfers

Before activating the destination SVM, you must quiesce the SVM disaster
recovery relationship to stop scheduled SnapMirror transfers from the source
SVM.

Stop the scheduled SnapMirror transfers by using the snapmirror quiesce command from the destination cluster:

cnasimu02::> snapmirror quiesce -destination-path dsvm_test_svmdr:

Verify that the SnapMirror relationship between the source and the destination
SVMs is in the Quiescing or Quiesced state by using the snapmirror show
command


cnasimu02::> snapmirror show -vserver dsvm_test_svmdr


Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm_test_svmdr:
XDP dsvm_test_svmdr:
Snapmirrored
Quiesced - true -

4.23.5.2 Aborting any ongoing SnapMirror transfers

If necessary, you must abort any ongoing SnapMirror transfers or any long-
running quiesce operations before breaking the SVM disaster recovery
relationship.

Abort any ongoing SnapMirror transfers by using the snapmirror abort command.

cnasimu02::> snapmirror abort -destination-path dsvm_test_svmdr:

Warning : depending on the relationship status, the command can fail

Error: command failed: Cannot perform this operation for destination-path "dsvm_test_svmdr"
because the SnapMirror operation status is "Idle".

Verify that the SnapMirror relationship between the source and destination SVMs
is in the Idle state by using the snapmirror show command.

cnasimu02::> snapmirror show -vserver dsvm_test_svmdr


Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm_test_svmdr:
XDP dsvm_test_svmdr:
Snapmirrored
Idle - true -

4.23.5.3 Breaking the SVM disaster recovery relationship

The SnapMirror relationship must be broken before activating the destination SVM.

From the destination cluster.

cnasimu02::> snapmirror break -destination-path dsvm_test_svmdr:

Notice: Volume quota and efficiency operations will be queued after "SnapMirror break"
operation is complete. To check the status, run "job show -description "Vserverdr Break
Callback
job for Vserver : dsvm_test_svmdr"".

cnasimu02::> job show -description "Vserverdr Break Callback job for Vserver :
dsvm_test_svmdr"
Owning
Job ID Name Vserver Node State
------ -------------------- ---------- -------------- ----------


583 Vserverdr Break Callback


cnasimu02 cnasimu02-01 Success
Description: Vserverdr Break Callback job for Vserver : dsvm_test_svmdr

Verify the mirror state, it must be “Broken-off” :

cnasimu02::> snapmirror show -vserver dsvm_test_svmdr


Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm_test_svmdr:
XDP dsvm_test_svmdr:
Broken-off
Idle - true -

The SVM subtype changes from dp-destination to default

cnasimu02::> vserver show -vserver dsvm_test_svmdr -fields subtype


vserver subtype
--------------- -------
dsvm_test_svmdr default

The type of the volumes changes from DP to RW.

cnasimu02::> volume show -vserver dsvm_test_svmdr


Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
dsvm_test_svmdr
svm_test_svmdr_root
cnasimu02_01_FC_2
online RW 20MB 17.29MB 8%
dsvm_test_svmdr
vol_svmdr_nas_01
cnasimu02_01_FC_1
online RW 1GB 248.1MB 75%

4.23.5.4 Stopping the source SVM

If you chose to set identity-preserve to true or if you want to test the SVM
disaster recovery setup, you must stop the source SVM before activating the
destination SVM.

Before you begin : if the source SVM is available on the source cluster, then you
must have ensured that all clients connected to the source SVM are
disconnected.

From the source cluster.

Stop the source SVM by using the vserver stop command.

cnasimu01::> vserver stop svm_test_svmdr

Check that the SVM is stopped by using the vserver show command.


4.23.5.5 Starting the destination SVM

In case of a disaster, once the source SVM is stopped or unavailable, you must
activate the destination SVM to provide data access.

From the destination cluster.

Start the destination SVM by using the vserver start command.

cnasimu02::> vserver start dsvm_test_svmdr

Check that the SVM is started by using the vserver show command.

4.23.5.6 Configuring the destination SVM

Depending on the SVM DR options, you must configure the destination SVM
(network, etc.).


4.23.6 Source SVM reactivation

If the source SVM exists after a disaster, you can reactivate it and protect it by
re-creating the SVM disaster recovery relationship between the source and the
destination SVMs. If the source SVM does not exist, you must create and set up a
new source SVM and then reactivate it.

The workflow is described in the following subsections.


4.23.6.1 Creating the new source SVM

If the source SVM does not exist, you must delete the SnapMirror relationship
between the source and destination SVMs, delete the SVM peer relationship, and
create and set up a new source SVM to replicate the data and configuration from
the destination SVM.


From the destination cluster :

 Identify the SnapMirror relationship between the source SVM that no longer exists and its destination SVM by using the snapmirror show command.

 Delete the SnapMirror relationship by using the snapmirror delete command.

 Verify that the SnapMirror relationship is deleted by using the snapmirror show command.

 Identify the SVM peer relationship between the source SVM that no longer exists and its destination SVM by using the vserver peer show command.

 Delete the SVM peer relationship by using the vserver peer delete command (see the example below).
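
For illustration, a possible command sequence using the SVM names from the earlier examples (to be adapted) is:

cnasimu02::> snapmirror show -destination-path dsvm_test_svmdr:
cnasimu02::> snapmirror delete -destination-path dsvm_test_svmdr:
cnasimu02::> snapmirror show -destination-path dsvm_test_svmdr:
cnasimu02::> vserver peer show -vserver dsvm_test_svmdr
cnasimu02::> vserver peer delete -vserver dsvm_test_svmdr -peer-vserver svm_test_svmdr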

After that, you must set up the disaster recovery relationship by using the same
method and configuration that you used to set up the SnapMirror relationship
before the disaster.

For example, if you chose to replicate data and all the configuration details when
creating the SnapMirror relationship between the original source SVM and
destination SVM, you must choose to replicate data and all the configuration
details when creating the SnapMirror relationship between the new source SVM
and the original destination SVM.

4.23.6.2 Setting up the existing source SVM

If the source SVM exists after a disaster, you must create the SVM disaster
recovery relationship between the destination and source SVMs and
resynchronize the data and configuration from the destination SVM to the source
SVM.

Steps :

 Creating a SnapMirror relationship

 Resynchronizing the source SVM from the destination SVM

Prerequisites:

 the source cluster and destination cluster must be peered

 the existing source SVM and the destination SVM must be peered


You must set up the disaster recovery relationship by using the same method and
configuration that you used before the disaster.

The SnapMirror relationship must not be initialized.

Verify that the SnapMirror relationship is established, and is in the “Broken-off” state by using the snapmirror show command.

cnasimu01::> snapmirror show


Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
dsvm_test_svmdr:
XDP svm_test_svmdr:
Broken-off
Idle - true -

Resynchronizing the source SVM from the destination SVM

Before activating the source SVM, you must resynchronize the data and
configuration details from the destination SVM to the existing source SVM for
data access.

Warning : the source SVM must not contain any new protected volumes, delete
them if necessary.

From the source cluster

 Resynchronize the source SVM by using the snapmirror resync command

 Verify that the resynchronization operation is complete, and the SnapMirror relationship is in the Snapmirrored state by using the snapmirror show command.

cnasimu01::> snapmirror resync svm_test_svmdr:

cnasimu01::> snapmirror show


Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
dsvm_test_svmdr:
XDP svm_test_svmdr:
Broken-off
Transferring - true -

cnasimu01::> snapmirror show


Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------


dsvm_test_svmdr:
XDP svm_test_svmdr:
Snapmirrored
Idle - true -

Note : if you run the command with the -instance option, you will see that the transfer size corresponds to a resynchronization.

4.23.6.3 Stopping the destination SVM

If you chose to set identity-preserve to true, you must stop the destination SVM
before starting the source SVM to prevent any data corruption.

From the destination cluster :

 stop the destination SVM by using the vserver stop command

 verify that the destination SVM is in the stopped state by using the
vserver show command

4.23.6.4 Updating the SnapMirror relationship

You must update the SnapMirror relationship to replicate the changes from the
destination SVM to the source SVM since the last resynchronization operation.

From the source cluster

 perform a SnapMirror update by using the snapmirror update command.

 Verify that the SnapMirror update operation is complete and the SnapMirror relationship is in the Snapmirrored state.

cnasimu01::> snapmirror update svm_test_svmdr:


cnasimu01::> snapmirror show
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
dsvm_test_svmdr:
XDP svm_test_svmdr:
Snapmirrored
Idle - true -

4.23.6.5 Breaking the SVM disaster recovery relationship

You must break the SnapMirror relationship created between the source and the
destination SVMs for disaster recovery before reactivating the source SVM.

From the source cluster.

 Break the SVM disaster recovery relationship by using the snapmirror break command.


 Verify that the SnapMirror relationship between the source and the
destination SVMs is in the Broken-off state by using the snapmirror show
command.

cnasimu01::> snapmirror break svm_test_svmdr:

Notice: Volume quota and efficiency operations will be queued after "SnapMirror break"
operation is complete. To check the status, run "job show -description "Vserverdr Break
Callback
job for Vserver : svm_test_svmdr"".

cnasimu01::> snapmirror show


Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
dsvm_test_svmdr:
XDP svm_test_svmdr:
Broken-off
Idle - true -

The source SVM continues to be in the Stopped state and the subtype changes from dp-destination to default. The type of the volumes in the source SVM changes from DP to RW.

cnasimu01::> vserver show -vserver svm_test_svmdr

Vserver: svm_test_svmdr
Vserver Type: data
Vserver Subtype: default
Vserver UUID: 16421db1-14ec-11e9-b005-000c294463d3
Root Volume: svm_test_svmdr_root
Aggregate: simu01_fcal_001
NIS Domain: -
Root Volume Security Style: unix
LDAP Client: -
Default Volume Language Code: C.UTF-8
Snapshot Policy: default
Comment:
Quota Policy: default
List of Aggregates Assigned: -
Limit on Maximum Number of Volumes allowed: unlimited
Vserver Admin State: stopped
Vserver Operational State: stopped
Vserver Operational State Stopped Reason: admin-state-stopped
Allowed Protocols: nfs, cifs, fcp, iscsi, ndmp
Disallowed Protocols: -
Is Vserver with Infinite Volume: false
QoS Policy Group: -
Caching Policy Name: -
Config Lock: false
IPspace Name: ips_802
Foreground Process: -

4.23.6.6 Starting the source SVM

To provide data access from the source SVM after a disaster, you must reactivate the source SVM by starting it.

From the source cluster.

 start the source SVM by using the vserver start command.


 Verify that the source SVM is in the running state and the subtype is
default by using the vserver show command.

cnasimu01::> vserver start -vserver svm_test_svmdr


[Job 951] Job succeeded: DONE

cnasimu01::> vserver show -vserver svm_test_svmdr

Vserver: svm_test_svmdr
Vserver Type: data
Vserver Subtype: default
Vserver UUID: 16421db1-14ec-11e9-b005-000c294463d3
Root Volume: svm_test_svmdr_root
Aggregate: simu01_fcal_001
NIS Domain: -
Root Volume Security Style: unix
LDAP Client: -
Default Volume Language Code: C.UTF-8
Snapshot Policy: default
Comment:
Quota Policy: default
List of Aggregates Assigned: -
Limit on Maximum Number of Volumes allowed: unlimited
Vserver Admin State: running
Vserver Operational State: running
Vserver Operational State Stopped Reason: -
Allowed Protocols: nfs, cifs, fcp, iscsi, ndmp
Disallowed Protocols: -
Is Vserver with Infinite Volume: false
QoS Policy Group: -
Caching Policy Name: -
Config Lock: false
IPspace Name: ips_802
Foreground Process: -

4.23.6.7 Resynchronizing the destination SVM from the source SVM

You can protect the reactivated source SVM by resynchronizing the data and
configuration details from the source SVM to the destination SVM.

If a SnapMirror relationship does not exist, then create a SnapMirror relationship by using the snapmirror create command.

From the destination cluster

 resynchronize the destination SVM from the source SVM by using the
snapmirror resync command.

 Verify that the resynchronization operation is complete and the SnapMirror relationship is in the Snapmirrored state by using the snapmirror show command.

cnasimu02::> snapmirror resync dsvm_test_svmdr:


cnasimu02::> snapmirror show


Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm_test_svmdr:
XDP dsvm_test_svmdr:
Broken-off
Transferring - true -
svm_test_vmware_srm:vol_svm_srm_nas_01
XDP dsvm_test_vmware_srm:vol_dsvm_srm_nas_02
Snapmirrored
Idle - true -
2 entries were displayed.

cnasimu02::> snapmirror show


Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm_test_svmdr:
XDP dsvm_test_svmdr:
Snapmirrored
Idle - true -
svm_test_vmware_srm:vol_svm_srm_nas_01
XDP dsvm_test_vmware_srm:vol_dsvm_srm_nas_02
Snapmirrored
Idle - true -
2 entries were displayed.

4.24 MetroCluster

For service description please refer to the TSD:

http://renicm1011.equant.com/dicos/ciiengin.nsf/vTous/MDUG-9B9BMH

4.24.1 Architecture overview


4.24.2 Limitation and requirements


4.24.3 MetroCluster Cabling

Shelf numbering is similar to the numbering of a regular cluster, but shelf numbering alternates between the two sites.

Because one controller might own all shelves after a disaster, all shelf numbers in
the disaster-recovery (DR) group must be unique.


Although there are two management ports, only one is configured for use in
MetroCluster configurations. The entire bridge is a field replaceable unit (FRU).
You cannot replace individual parts.

For clarity, the images show only the input output modules (IOMs), not the entire
disk shelves. The intershelf cabling is the same as for any cluster, and the first
and final ports are each connected to a bridge.


The switch configuration files determine which ports are used for which purposes.


The cabling connections in the diagram conform to the settings in the NetApp reference configuration file (RCF), or golden configuration image file, which you load on the switches during the switch configuration.

Always make cable connections to the specified ports on the FC switches and on
the ATTO bridges.

The example shows the onboard UTA2 ports on a FAS8040 system that is
configured for FC. The onboard UTA2 ports are connected to the Brocade FC
switches.

You can also install FC host bus adapters (HBAs) in expansion slots and use those
ports instead of, or in addition to, the onboard ports.


This example uses Brocade 6505 switches. You can also use Brocade 6510 and Cisco 9148 switches. The ports in the diagram are as configured by the appropriate RCF files.

You must license the correct ports when you use the switches. Not all ports are
licensed by default. You should also always check the port configuration on the
Support site.

4.24.4 ATTO FibreBridge Configuration

Before using a FibreBridge bridge in a live environment, check that the firmware
level is the most recent one. Use the Interoperability Matrix Tool (IMT) to verify
the current version.

Set all of the required items. The connectivity mode is set to ptp-loop by default. You need to change that mode to ptp. Set the bridge name to something that identifies its position in the MetroCluster configuration. After you configure all of the settings, save the configuration and restart the bridge.

After you set the IP address, you can use a browser to access the bridge.

When you log in to the FibreBridge bridge, there is no prompt symbol. The word
“Ready” is displayed. Type your command below that word.

If you type a command incorrectly and then press the Backspace key to fix it, the
command fails. If you make a mistake typing a command, retype it.

To display all of the disks, enter the sastargets command. A list of the disks
appears, followed by a list of the input output modules (IOMs) in each shelf.
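
A possible configuration sequence on the bridge console is sketched below; the command names are those used in NetApp MetroCluster installation guides, and the values (IP address, subnet mask, FC port, bridge name) are assumptions to verify against the FibreBridge manual for your firmware level.

set IPAddress MP1 10.10.99.143
set IPSubnetMask MP1 255.255.255.0
set FCConnMode 1 ptp
set BridgeName bridge_A_1a
SaveConfiguration Restart
sastargets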

4.24.5 Brocade FC Switch Configuration

Ensure that you download the correct file for the site. The Site-A file has the
zoning information. When the links are enabled, the zoning is transferred to the
Site-B switches.


Download the appropriate reference configuration file (RCF):

1. On the NetApp Support site, navigate to the download page at NetApp Downloads > Software.

2. In the list of products, find the row for Fibre Channel Switch.

3. From the list, select Brocade.

4. On the Fibre Channel Switch for Brocade page, click View & Download.

5. On the Fibre Channel Switch - Brocade page, click MetroCluster.

6. Follow the directions on the MetroCluster Configuration Files for Brocade Switches Description page to download and run the files.

FC switches support trunking of Inter-Switch Links (ISLs) and the use of ISLs of different lengths and speeds in different fabrics. However, within one fabric, ISLs must be of the same length and the same speed.

FC switches do not support the following features:

- Time-division multiplexing (TDM), native FC routing, and FCIP extensions are not supported for the MetroCluster FC switch fabric.

- Third-party encryption devices are not supported on any link in the MetroCluster FC switch fabric, including the ISL links across the WAN.

- Compression in the MetroCluster FC switch fabric is not supported.

- The Brocade Virtual Fabric (VF) feature is not supported.

Use the following switch naming convention: {Brocade|Cisco}<Model>-FAB{1|2}-SW{1–4}-D<ID>-SITE{A|B}{1|2}

For information regarding configuring a compatible Cisco switch for a back-end fabric, see
http://mysupport.netapp.com/NOW/download/software/metrocluster_cisco/sanswitch/download.shtml

The RCF for domain 5 and domain 6 have the zone configurations and they are
pulled to the domain 7 and domain 8 when the ISLs are synced.


For more information about zoning, see
https://library.netapp.com/ecmdocs/ECMLP2494091/html/GUID-C1BDC1C0-A02E-4678-A41E-0AA3C4E7A804.html

You must manually calculate and set the ISL distance. Calculation and setting of
ISL distance is not included in the RCF file.

Due to the behavior of virtual interface over Fibre Channel (FC-VI) ports, you
must set the distance to 1.5 times the actual distance. The distance is actual
fabric cable length.

The value for the ISL is calculated as follows (in which the real_distance is
rounded up to the kilometer):

1.5 x real_distance = value.

If the value is 10 or less, use the LE parameter without specifying the value.

If the value is greater than 10 (if the distance is greater than 6 km), use the LS
parameter and specify the value.

The number one (1) in the configuration command is the vc_link_init value. A
value of 1 uses the ARB fill word, which is the default. A value of 0 uses IDLE. The
required value might depend on the link that is used.


The fill word must be set to the same value as the fill word that is configured for
the remote port. Therefore, if the remote port link initialization and fill word
combination is idle-idle, the fill word for the long-distance link must be set to
”idle.”

If the combination of remote port link initialization and fill-word is set to arbff-
arbff, idle-arbff, or aa-then-ia, then the fill word for the long-distance link must be
set to ”arb.”

If you use dense wavelength division multiplexing (DWDM) connections, you should use the -fecdisable parameter to disable forward error correction.
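
As a worked illustration of the rule above, assume a hypothetical 20 km fabric cable length: 1.5 x 20 = 30, which is greater than 10, so the LS parameter is used with a value of 30 and a vc_link_init value of 1. A possible Brocade command sequence is sketched below; the port number is an assumption, and the exact portcfglongdistance syntax should be checked against the FOS release in use.

switch_A_1:admin> portdisable 10
switch_A_1:admin> portcfglongdistance 10 LS 1 -distance 30
switch_A_1:admin> portenable 10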


4.24.6 MetroCluster Configuration

Create intercluster logical interfaces (LIFs) at both sites, and then create the peer
relationship.

During the cluster peering process, authentication must occur. You provide a
passphrase that you create and enter in both clusters during the peer-creation
step.

First, on cluster A:

1. Run the cluster peer create -peer-addrs <intercluster LIFs on the remote cluster> command.

2. Type the passphrase.

3. Type the passphrase again.

Next, on cluster B:

1. Run the cluster peer create -peer-addrs <intercluster LIFs on the remote cluster> command.

2. Type the passphrase.

3. Type the passphrase again.
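
A minimal sketch of the peering step is shown below, with hypothetical intercluster LIF addresses; the passphrase is entered interactively and must match on both clusters.

cluster_A::> cluster peer create -peer-addrs 192.0.2.66,192.0.2.67
(enter and confirm the passphrase when prompted)

cluster_B::> cluster peer create -peer-addrs 192.0.2.76,192.0.2.77
(enter the same passphrase)

cluster_A::> cluster peer show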


The MetroCluster configuration process requires two data aggregates to exist on each of the two clusters in Data ONTAP 8.3. Preferably, each aggregate is on a separate node for resiliency reasons. The disk count in the storage aggregate create command includes the disks for creating the mirror. In Data ONTAP 8.3.1 and later, only one node is required for each cluster. Two data aggregates are still recommended.

The metrocluster configure command creates two system metadata volumes in each cluster. The volumes are each 10 GB. The volumes are used to queue and log Configuration Replication Service (CRS) transactions for configuration replication to the other cluster over the cluster peering network. After MetroCluster is configured, you can create more mirrored data aggregates, but you cannot create unmirrored aggregates.
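
A minimal sketch of these two steps is shown below, with hypothetical aggregate names, node names and disk counts; the -mirror true option creates the SyncMirror plexes, the same aggregate creation is repeated on the partner cluster, and metrocluster configure is then run from one site only.

cluster_A::> storage aggregate create -aggregate aggr_data_a1 -node cluster_A-01 -diskcount 10 -mirror true
cluster_A::> storage aggregate create -aggregate aggr_data_a2 -node cluster_A-02 -diskcount 10 -mirror true

cluster_A::> metrocluster configure
cluster_A::> metrocluster operation show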


CRS is enabled as part of the metrocluster configure command. The command starts the process of checking the communication between the two sites. It also starts the configuration replication. When everything is done, a confirmation message is displayed.

To view the status of the MetroCluster environment, use the metrocluster operation show command. The operation that was run most recently is shown; for example, immediately after the configure command, the status of the configure operation is displayed.

4.24.7 Checking the Configuration

In Data ONTAP 8.3 and later, Brocade and Cisco switches can also be included for
health-monitoring purposes. You must manually add each management IP
address and then use the storage switch show command to verify that the
switches have all been included for monitoring.

The polling interval is 15 minutes. Something might fail and you might not see it
immediately on the CLI, but AutoSupport sends an alert.


Add the FibreBridge devices in the same way as the switches: by adding each
management-port IP address. Then verify that the bridges have all been added
correctly. Bridge monitoring supports only SNMPv1. However, currently ATTO
firmware prohibits changes to the community string.

After the MetroCluster cluster has been configured, use the metrocluster show
command to see the status of the cluster.

The site from which the command is issued is listed as the Local site, and the
other site is the Remote site. In the example, the command was issued from Site
A.
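
A minimal sketch of adding the back-end devices to health monitoring and checking the overall state, with hypothetical management IP addresses:

cluster_A::> storage switch add -address 10.10.99.141
cluster_A::> storage switch show

cluster_A::> storage bridge add -address 10.10.99.143
cluster_A::> storage bridge show

cluster_A::> metrocluster show
cluster_A::> metrocluster check run
cluster_A::> metrocluster check show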


You must use Config Advisor 5.1 or later.

In the top-left corner of the main Config Advisor window, from the Profile menu,
select the MetroCluster in clustered Data ONTAP execution profile. Select the
type of switches that you are using and complete the login information for the
nodes and switches.

The information on the login page is arranged differently when you select MetroCluster. All of the Cluster A information is in the left column, and all of the Cluster B information is in the right column.

Save the query so that you can use it again without having to reenter all of the
information. Click Collect Data.


4.25 Configure Harvest user

You should use a non-privileged user to connect Harvest to your storage systems. The password for this user is available on the PMR site.

Here are the required privileges and how to create a dedicated user :

Configure role

security login role create -role netapp-harvest-role -access readonly -cmddirname "version"
security login role create -role netapp-harvest-role -access readonly -cmddirname "cluster identity show"
security login role create -role netapp-harvest-role -access readonly -cmddirname "cluster show"
security login role create -role netapp-harvest-role -access readonly -cmddirname "system node show"
security login role create -role netapp-harvest-role -access readonly -cmddirname "statistics"
security login role create -role netapp-harvest-role -access readonly -cmddirname "lun show"
security login role create -role netapp-harvest-role -access readonly -cmddirname "network interface show"
security login role create -role netapp-harvest-role -access readonly -cmddirname "qos workload show"


Configure user

 Clustered Data ONTAP <= 8.2.x

security login create -username netapp-harvest -application ontapi -role netapp-harvest-role -authmethod password

 Clustered Data ONTAP >= 8.3

security login create -user-or-group-name netapp-harvest -application ontapi -role netapp-harvest-role -authmethod password

4.26 Disk Sanitization

For certification, a NetApp quote for Professional Services (PS) has to be requested.

NetApp works with a certification partner for these topics.

4.26.1 Disk Clearing and Disk Sanitization

Where is Disk Clearing and Disk Sanitization Defined

 US Department of Defense Standard

– “ISFO Process Manual V3 14 June 2011”

– De facto standard for Disk Clearing and Disk Sanitization.

– Has been revised several times and has had several name changes. They
are all outdated and should no longer be referenced.

 “DOD 5220.22-M NISPOM”

 “NIST Special Publication 800-88 Guidelines for Media Sanitization”

 “ODAA Process Guide for C&A of Classified Systems under NISPOM”

What is Disk Clearing / Disk Sanitization

 Disk Clearing

– A procedure by which classified information is removed in such a manner that known non-laboratory attacks (i.e., keyboard attacks) will be unable to recover the information.

 Disk Sanitization

– A procedure by which classified information is completely removed and even a laboratory attack using known techniques or analysis will not recover any information. Sanitization of memory and media is required if a system is being “released” to users with access level lower than the accreditation level.

– Note that memory is required to be overwritten as well for both. The tools
available to the NetApp PSE/PSCs don’t include a method to overwrite a
NetApp storage controller’s memory.

– Acceptable methods of disk destruction include incineration, grinding/sanding the surface to dust, smelting, or acid.

– Shredding and degaussing are not acceptable methods of disk sanitization through destruction.

– Requirements for tracking disks once they are sanitized is included in the
standard. NetApp doesn’t do tracking of disks once they are returned.

 The preferred term to describe the NetApp service offering is “Disk Erasure”, not
“Disk Clearing”, or “Disk Sanitization”.

How Can This be Done in DataONTAP

 Disk Sanitization Command

– Requires a special zero dollar license.

– Can not be uninstalled without reloading DataONTAP.

 Disk Clearing Operations

– Overwrite all addressable locations with a single character utilizing an approved overwrite utility.

 Disk Sanitization Operations

– Overwrite all addressable locations with a pattern, and then its complement, and finally with another unclassified pattern.

– The above counts as three cycles; sanitization is not complete until three cycles are successfully completed.

– Once complete, there is a requirement to verify a sample. Tools to verify a sample of disk are not available to NetApp PSE/PSCs.

– If any part of the disk can not be written to, the disk must be destroyed,
according to DoD standards. NetApp does not make a service available for
disk destruction; however, NetApp does have an offering for non-returning
of disks.

– An acceptable set of patterns to use is supplied in the US Department of Defense document.

– Use of a random pattern is no longer part of the disk sanitization requirements.


– Three passes of a single set of writes is clearly called out in the current
standard. The documentation clarifies that the standard is not three of
each pass, for a total of 9 writes as was mistakenly assumed by numerous
implementers in the past.

What are the DataONTAP Commands

 Disk Clearing Command

disk sanitize start -f -p 0x00 -c 1 DISK

 Disk Sanitization Command

disk sanitize start -f -p 0x00110101 -p 0x11001010 -p 0x10010111 -c 1 DISK

 Important Notes

– It is only possible to run the disk sanitization command against a single disk.

– The disk sanitization command can not be run on broken or failed disks.

– The customer may request that NetApp perform a ‘Disk Sanitization’ even without the ability to sanitize the storage controller cluster’s memory.

– NetApp PSE/PSCs only perform “Disk Clearing”, as there are significant requirements for tracking disks once they have been “Sanitized”.
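
In clustered ONTAP these are nodeshell commands; a minimal sketch of invoking them from the clustershell is shown below, with a hypothetical node name and disk ID (the disk must be a spare owned by that node).

cluster1::> system node run -node cluster1-01 -command "disk sanitize start -f -p 0x00 -c 1 0a.00.3"
cluster1::> system node run -node cluster1-01 -command "disk sanitize status"
cluster1::> system node run -node cluster1-01 -command "sysconfig -r"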

What are the Specific Tasks?

 Get signoff from the customer to sanitize a system.

– Need to ensure that the customer understands that this operation can not
be undone.

– See sample signoff text, select the one based upon if this is a paid
engagement or not.

 Install Disk Sanitization license on the NetApp storage controller.

 Make sure that the motherboard, shelf and disk firmware are up to date.

 Remove all failed disks from the storage controller. These disks will need to be disposed of by the customer.

 If all disks are part of a single root aggregate, you will need to build a new volume
and aggregate composed of a minimal number of disks.

– Copy the active root volume to the newly created aggregate.

– Make the new root volume the boot volume.

– Reboot the storage controller to make the change live.

 Destroy all aggregates, except for the root aggregate.


 Destroy all volumes, except for the root volume.

 Run the appropriate DataONTAP command for each disk to start the disk clearing
or sanitization process.

 Wait for process to complete. Progress can be checked via the “disk sanitize
status” command and the “sysconfig –r” command.

 Make note of disks that fail the sanitize process. They will need to be removed
and disposed of appropriately by the customer. Note that there may be an
additional charge for non-return of disks.

 Capture the final output of the “sysconfig –r” command.

 Reboot the system to maintenance mode and perform a 4a.

 Fill out the statement of completion.

– See attached sample, select the sample text based upon if this is a paid
engagement or not.

4.26.2 Certification Templates

Authorization For Disk Erasure

The customer, REPLACE_NAME_HERE requests that disk erasure work be performed


according to US Department of Defense Standard ISFO Process Manual V3 14 June 2011
on the following NetApp storage controllers:
 REPLACE_NAME, SN# REPLACE_SSN

 REPLACE_NAME, SN# REPLACE_SSN

The customer understands that the disk erasure process is non-reversible once started and all existing data on the storage controllers named above will be non-recoverable.
This work will be performed under NetApp purchase number REPLACE_PO_NUMBER.
Signed for Customer: _________________________
Print name: _________________________
Date: _________________________

Completion of Disk Erasure Work

Disk erasure work was performed on the following NetApp storage controllers using the
built in DataONTAP tools:
 REPLACE_NAME, SN# REPLACE_SSN

 REPLACE_NAME, SN# REPLACE_SSN

The process followed meets the disk clearing requirements detailed in the US Government publication, “ISFO Process Manual V3 14 June 2011”, the generally accepted industry authority on device erasure.
This work was performed without charge to the customer.
Signed for Customer: _________________________
Print name: _________________________
Date: ________________________


4.27 NDMPCopy

4.27.1 Enabling SVM-scoped NDMP on the cluster

You can configure SVM-scoped NDMP on the cluster by enabling SVM-scoped NDMP mode and the NDMP service on the cluster (admin SVM). Turning off node-scoped NDMP mode enables SVM-scoped NDMP mode on the cluster.

Enable SVM-scoped NDMP mode by using the system services ndmp command with the
node-scope-mode parameter.

cluster1::> system services ndmp node-scope-mode off


NDMP node-scope-mode is disabled.

Enable NDMP service on the admin SVM by using the vserver services ndmp on
command.

cluster1::> vserver services ndmp on -vserver cluster1

The authentication type is set to challenge by default and plaintext authentication is disabled.

Verify that NDMP service is enabled by using the vserver services ndmp show command.

cluster1::> vserver services ndmp show


Vserver Enabled Authentication type
------------- --------- -------------------
cluster1 true challenge
vs1 false challenge

If you are using an NIS or LDAP user, the user must be created on the respective server. You cannot use an Active Directory user.

Create a backup user with the admin or backup role by using the security login create command. You can specify a local backup user name or an NIS or LDAP user name for the -user-or-group-name parameter.

The following command creates the backup user backup_admin1 with the backup role:

cluster1::> security login create -user-or-group-name backup_admin1 -application ssh -authmethod password -role backup
Please enter a password for user 'backup_admin1':
Please enter it again:

Generate a password for the admin SVM by using the vserver services ndmp generate-password command. The generated password must be used to authenticate the NDMP connection by the backup application.


cluster1::> vserver services ndmp generate-password -vserver cluster1 -user backup_admin1

Vserver: cluster1
User: backup_admin1
Password: qG5CqQHYxw7tE57g

4.27.2 Configuring LIFs

You must identify the LIFs that will be used for establishing a data connection between the data and tape resources, and for the control connection between the admin SVM and the backup application. After identifying the LIFs, you must verify that firewall and failover policies are set for the LIFs, and specify the preferred interface role.

Identify the intercluster, cluster-management, and node-management LIFs by using the network interface show command with the -role parameter.

The following command displays the intercluster LIFs:

cluster1::> network interface show -role intercluster


Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster1 IC1 up/up 192.0.2.65/24 cluster1-1 e0a true
cluster1 IC2 up/up 192.0.2.68/24 cluster1-2 e0b true

The following command displays the cluster-management LIF:

cluster1::> network interface show -role cluster-mgmt


Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------- ------- ----
cluster1 cluster_mgmt up/up 192.0.2.60/24 cluster1-2 e0M true

The following command displays the node-management LIFs:

cluster1::> network interface show -role node-mgmt


Logical Status Network Current Current Is
Vserver Interface Admin/Oper Address/Mask Node Port Home
----------- ---------- ---------- ------------------ ------------ ------ ------
cluster1 cluster1-1_mgmt1 up/up 192.0.2.69/24 cluster1-1 e0M true
cluster1-2_mgmt1 up/up 192.0.2.70/24 cluster1-2 e0M true

Ensure that the firewall policy is enabled for NDMP on the intercluster, cluster-
management
(cluster-mgmt), and node-management (node-mgmt) LIFs:

Verify that the firewall policy is enabled for NDMP by using the system services firewall
policy show command.

The following command displays the firewall policy for the cluster-management LIF:

cluster1::> system services firewall policy show -policy cluster


Vserver Policy Service Allowed
------- ------------ ---------- -----------------
cluster cluster dns 0.0.0.0/0
http 0.0.0.0/0
https 0.0.0.0/0


ndmp 0.0.0.0/0
ndmps 0.0.0.0/0
ntp 0.0.0.0/0
rsh 0.0.0.0/0
snmp 0.0.0.0/0
ssh 0.0.0.0/0
telnet 0.0.0.0/0
10 entries were displayed.

The following command displays the firewall policy for the intercluster LIF:
cluster1::> system services firewall policy show -policy intercluster
Vserver Policy Service Allowed
------- ------------ ---------- -------------------
cluster1 intercluster dns -
http -
https -
ndmp 0.0.0.0/0, ::/0
ndmps -
ntp -
rsh -
ssh -
telnet -
9 entries were displayed.

The following command displays the firewall policy for the node-management LIF:

cluster1::> system services firewall policy show -policy mgmt


Vserver Policy Service Allowed
------- ------------ ---------- -------------------
cluster1-1 mgmt dns 0.0.0.0/0, ::/0
http 0.0.0.0/0, ::/0
https 0.0.0.0/0, ::/0
ndmp 0.0.0.0/0, ::/0
ndmps 0.0.0.0/0, ::/0
ntp 0.0.0.0/0, ::/0
rsh -
snmp 0.0.0.0/0, ::/0
ssh 0.0.0.0/0, ::/0
telnet -
10 entries were displayed.

If the firewall policy is not enabled, enable the firewall policy by using the system services
firewall policy modify command with the –service parameter.

The following command enables firewall policy for the intercluster LIF:

cluster1::> system services firewall policy modify -vserver cluster1 -policy intercluster -service ndmp 0.0.0.0/0

Ensure that the failover policy is set appropriately for all the LIFs:
Verify that the failover policy for the cluster-management LIF is set to broadcast-domain-
wide, and the policy for the intercluster and node-management LIFs is set to local-only by
using the network interface show –failover command.

The following command displays the failover policy for the cluster-management,
intercluster, and node-management LIFs:

cluster1::> network interface show -failover


Logical Home Failover Failover
Vserver Interface Node:Port Policy Group
---------- ----------------- ----------------- -------------------- --------


cluster cluster1_clus1 cluster1-1:e0a local-only cluster


Failover Targets:
.......
cluster1 cluster_mgmt cluster1-1:e0m broadcast-domain-wide Default
Failover Targets:
.......
IC1 cluster1-1:e0a local-only Default
Failover Targets:
IC2 cluster1-1:e0b local-only Default
Failover Targets:
.......
cluster1-1 cluster1-1_mgmt1 cluster1-1:e0m local-only Default
Failover Targets:
......
cluster1-2 cluster1-2_mgmt1 cluster1-2:e0m local-only Default
Failover Targets:
......

If the failover policies are not set appropriately, modify the failover policy by using the
network interface modify command with the -failover-policy parameter.

cluster1::> network interface modify -vserver cluster1 -lif IC1 -failover-policy local-only

Specify the LIFs that are required for data connection by using the vserver services ndmp
modify command with the preferred-interface-role parameter.

cluster1::> vserver services ndmp modify -vserver cluster1 -preferred-interface-role intercluster,cluster-mgmt,node-mgmt

Verify that the preferred interface role is set for the cluster by using the vserver services
ndmp show command.

cluster1::> vserver services ndmp show -vserver cluster1


Vserver: cluster1
NDMP Version: 4
.......
.......
Preferred Interface Role: intercluster, cluster-mgmt, node-mgmt

4.27.3 NDMP operating in 'Vserver-scope'

In 'Vserver-scope,' NDMP is 'cluster-aware' and utilizes NDMP Protocol Extensions to establish efficient data connections throughout the entire cluster. This extension is called Cluster Aware Backup (CAB). When CAB is being used, an NDMP connection can be made to any node in the cluster and have all cluster resources (all volumes and all tape devices) available. Depending on the LIF type, there are still some limitations with NDMP and CAB. The CAB Extension is only available in clustered Data ONTAP 8.2 and later and requires the Backup Application to support NDMP as well as the CAB Extension (not all third-party vendors support NDMP Extensions).

The following list shows what resources are available in 'Vserver-scope' without CAB:

- Node Mgmt LIF: volumes available for backup or restore: all volumes hosted by the node; tape devices available: tape devices connected to the node hosting the node-management LIF.
- Data LIF: volumes available: only volumes that belong to the Vserver hosted by the node that hosts the data LIF; tape devices available: none.
- Cluster Mgmt LIF: volumes available: all volumes hosted by the node that hosts the cluster-management LIF; tape devices available: none.
- InterCluster LIF: volumes available: all volumes hosted by the node that hosts the InterCluster LIF; tape devices available: tape devices connected to the node hosting the InterCluster LIF.

The following list shows what resources are available in 'Vserver-scope' when CAB is supported by the Backup Application:

- Node Mgmt LIF: volumes available for backup or restore: all volumes hosted by the node; tape devices available: tape devices connected to the node hosting the node-management LIF.
- Data LIF: volumes available: all volumes that belong to the Vserver that hosts the data LIF; tape devices available: none.
- Cluster Mgmt LIF: volumes available: all volumes in the cluster; tape devices available: all tape devices in the cluster.
- InterCluster LIF: volumes available: all volumes in the cluster; tape devices available: all tape devices in the cluster.

4.27.4 Lab Testing NDMPCopy

4.27.4.1 Benefits :

 NetApp internal "native" command
 Flows can be done through the intercluster internal network (between NetApp cluster nodes)

4.27.4.2 Drawbacks:

 1 full copy + 2 incrementals maximum
 250 MB/s per stream limit
 16 streams maximum concurrently
 Needs to be run in the nodeshell (which may disappear in future ONTAP releases; see the example below)
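
A minimal sketch of an ndmpcopy run from the nodeshell is shown below; the node name, LIF addresses, paths and credentials are placeholders to adapt, -l 0 is the baseline transfer and -l 1 / -l 2 are the two possible incremental passes.

cluster1::> system node run -node cluster1-01
cluster1-01> ndmpcopy -sa <user>:<password> -da <user>:<password> -l 0 <source_lif_ip>:<source_path> <destination_lif_ip>:<destination_path>
cluster1-01> ndmpcopy -sa <user>:<password> -da <user>:<password> -l 1 <source_lif_ip>:<source_path> <destination_lif_ip>:<destination_path>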

4.27.5 NDMPCopy Pre-Requisites

Destination Volume must have sufficient room to store the Source Volume data.
Destination Volume must have a sufficient maximum number of files (max files > Source Volume max files).

Example:

Target
Total Files (for user-visible data): 21251126
Files Used (for user-visible data): 21251126
SM
Total Files (for user-visible data): 999999995
Files Used (for user-visible data): 65186433

Command to change the maximum number of files on a volume:

volume modify DV_C4U_DA6_OFF_FRA_01_target -vserver svm_fcav2_CUBE4Demo_file_nmd_01 -files 65186500

4.28 XCP Migration Tool

4.28.1 What Is XCP?

XCP is a high-performance NFSv3 migration tool for fast and reliable migrations from
third-party storage to NetApp and NetApp to NetApp transitions. The tool supports
discovery, logging, and reporting and a wide variety of sources and targets.

4.28.2 Features

XCP runs on a Linux client as a command-line tool. XCP is packaged as a single binary file
that is easy to deploy and use.

- Core Engine Innovations: extreme performance (often ~25 times that of comparable tools) in high-file-count environments; multiple layers of granularity (qtrees, subdirectories, criteria-based filtering); easy deployment (64-bit Linux host-based software)
- "show": discovery of servers and file systems
- "scan": reports and listings to scope the directories, files, and data in the file systems
- "copy": any to NetApp (third-party file systems, UNIX SAN/DAS/NAS to FAS, E-Series, the NetApp Mars® operating system file systems); block third party as target
- "verify": three levels of assurance: stats, structure, and full data bit by bit
- "resume": fast log-based recovery of in-progress jobs
- Resume (scan): if a scan operation is interrupted, the user can resume it, provided the "-newid" option was used in the scan to enable logging
- Resume (verify): if a verify operation is interrupted, the user can resume it
- "sync": differential incremental updates from source to target at the file level
- "multiple syncs": the sync command enables the user to perform multiple incremental updates from the migration source to the target
- License Management Portal: easy download of online and offline licenses at https://xcp.netapp.com
- Logging and Reporting: events, sources, targets, files, data, performance
4.28.3 Prerequisites

XCP runs on a Linux client host as a CLI tool. It is easy-to-use, single-file software and
does not involve a complex installation procedure. Users can download the binary from
https://support.netapp.com/eservice/toolchest.
XCP is available for internal, partner, and customer use. Download and activate a free
90-day renewable license from https://xcp.netapp.com/.
The following are the minimum system requirements.
• 64-bit Intel or AMD server, minimum 4 cores and 8GB RAM

• 20MB of disk space for the xcp binary and at least 50MB for the logs
• Recent Linux distribution (RHEL 5.11 or later or kernel 2.6.18-404 or later)
• No other active applications
• Access to log in as root or run sudo commands
• Network connectivity to source and destination NFS exports (a quick check is sketched below)
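
The last item can be sanity-checked from the Linux host before running XCP. This is only a
suggested check; the address below is a placeholder for a data LIF of the source or
destination SVM (the lab address from section 4.28.8 is reused here), and showmount only
returns data if the showmount feature is enabled on the SVM:

# rpcinfo -p 10.197.185.211      (confirms the portmapper, mount and NFS services answer)
# showmount -e 10.197.185.211    (lists the exports visible to this host, if enabled)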

4.28.4 XCP Catalog Location

XCP saves operation reports and metadata in an NFSv3-accessible catalog directory.
Provisioning the catalog is a one-time preinstallation task requiring:
• A NetApp NFSv3 export, for security and reliability.
• At least 10 disks or SSDs in the aggregate containing the export, for performance.
• Storage configured to allow root access to the catalog export for the IP addresses of all
Linux clients used to run XCP (multiple XCP clients can share a catalog location).
• Approximately 1GB of space for every 10 million objects (directories + files + hard
links) to be indexed. Each copy that can be resumed or synchronized, and each scan that can
be searched offline, requires an index.

Note: Store XCP catalogs separately; they should not be on either the source or the
destination NFS export. XCP maintains metadata and reports in the catalog location
specified during initial setup. This location must be specified and updated before you run
any operation with XCP; edit the xcp.ini file at /opt/NetApp/xFiles/xcp/ using an
appropriate Linux file editor. A provisioning sketch follows.
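
For illustration, the catalog export could be provisioned on ONTAP roughly as follows. The
cluster, Vserver, aggregate, volume and policy names as well as the client address are
hypothetical placeholders, and the export rule must be aligned with local security policy:

cluster1::> volume create -vserver svm_nfs_01 -volume xcp_catalog -aggregate aggr_data_01
            -size 100g -junction-path /xcat -security-style unix
cluster1::> vserver export-policy create -vserver svm_nfs_01 -policyname xcp_catalog
cluster1::> vserver export-policy rule create -vserver svm_nfs_01 -policyname xcp_catalog
            -clientmatch 10.197.185.50 -protocol nfs3 -rorule sys -rwrule sys -superuser sys
cluster1::> volume modify -vserver svm_nfs_01 -volume xcp_catalog -policy xcp_catalog

The matching catalog line in xcp.ini would then point at a data LIF of that SVM, for
example catalog = <data-lif>:/xcat, as shown with the echo example in section 4.28.6.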

4.28.5 Activate

The license file must be located in the XCP local configuration directory,
/opt/NetApp/xFiles/xcp:
• Run any xcp command to allow XCP to autocreate the configuration directory (the error
“License file /opt/NetApp/xFiles/xcp/license not found” is expected).

# ./xcp show localhost


• Move the license file to /opt/NetApp/xFiles/xcp and activate XCP. Note: The licenses
allow XCP to connect to NetApp for host activations, statistics collection, and other
functions. You can configure other license options for secure sites through
xcp.netapp.com.

# mv ./license /opt/NetApp/xFiles/xcp
# ./xcp activate

4.28.6 Configure and Run

After activating XCP, configure the NFSv3 catalog location. XCP is then ready to run.


• Edit or replace xcp.ini with the catalog location; for example, to use export xcat on
server atlas:

# echo -e '[xcp]\ncatalog = atlas:/xcat' > /opt/NetApp/xFiles/xcp/xcp.ini

• For basic usage and a few examples, run xcp with no arguments:

Use

xcp [scan] [options] path


xcp show [options] servers
xcp copy/verify [options] source_path target_path
xcp sync/resume [options] -id name
Path Format
server:export[:subdirectory]
Multipath Format
server1addr1,server1addr2,...:export[:subdirectory]

Documentation

commands and options: xcp help [command]


features, performance tuning, examples: xcp help info

Examples

Query a server to see its RPC services and NFS exports;


print a human-readable tree report for one of the NFS exports;
and list all the files from the root of a subdirectory:
xcp show server.abc.com
xcp scan -stats server.abc.com:/tmp
xcp scan -l server.abc.com:/tmp:/test

Copy from a local SAN or DAS filesystem (requires local NFS service):
xcp copy localhost:/home/smith cdot:/target
Three-level verification: compare stats, attributes, and full data:
xcp verify -stats localhost:/home/smith cdot:/target
xcp verify -nodata localhost:/home/smith cdot:/target
xcp verify localhost:/home/smith cdot:/target

Please run "xcp help" to see the commands and options


Please run "xcp help info" to see the user guide, including more examples...

4.28.7 XCP Lab Testing

4.28.7.1 Benefits:

• Allows incremental runs (based on the indexed catalog)
• Can run parallel processes/tasks (7 by default)
• Can be scaled out across several VMs (sharing the same catalog)

4.28.7.2 Drawbacks:

• One or more VMs are needed to run XCP
• A shared NFS catalog is needed on independent storage
  (independent from the source/destination data being copied)
• NFS rights for the XCP VMs are needed on all source/destination exports
• Migration flows go across the VMs' data interfaces
• A free license must be requested (licensed to the owner's name)
• Hard links are not handled; they are replaced by soft links
• Qtrees are not created/managed by xcp
• Subdirectories holding a new file system (FS junction) are not scanned
• .snapshot directories are ignored

4.28.8 Basic commands for XCP Copy & Sync

List the exports available on a server:
xcp show 10.197.185.211

Scan an NFS export:
xcp scan 10.197.185.211:vol/OP_C46_DA6_PRI_CRP_54
Result: the file listing, followed by:
36,419 scanned, 7.55 MiB in (4.30 MiB/s), 61.4 KiB out (35.0 KiB/s), 1s.

Copy from source to destination (7 parallel processes by default):
xcp copy -newid test_xcp -parallel 7 10.197.185.211:vol/OP_C46_DA6_PRI_CRP_54 10.197.185.211:OP_C46_DA6_PRI_CRP_54_target

Sync the data, based on the test_xcp catalog entry:
xcp sync -id test_xcp

Verify source against destination:
xcp verify 10.197.185.211:vol/OP_C46_DA6_PRI_CRP_54 10.197.185.211:OP_C46_DA6_PRI_CRP_54_target
Result:
36,419 scanned, 36,419 indexed, 100% found (35,694 have data), 35,694 compared, 100% verified (data, attrs, mods), 60.8 GiB in (204
MiB/s), 151 MiB out (506 KiB/s), 5m4s.
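
Putting these commands together, a typical migration sequence (a sketch only, reusing the
lab paths above; the actual cut-over steps, such as stopping client I/O on the source, are
outside the scope of this example) is:

xcp copy -newid test_xcp 10.197.185.211:vol/OP_C46_DA6_PRI_CRP_54 10.197.185.211:OP_C46_DA6_PRI_CRP_54_target
xcp sync -id test_xcp      (repeat while the source is still in production)
xcp sync -id test_xcp      (final pass once client I/O has been stopped)
xcp verify 10.197.185.211:vol/OP_C46_DA6_PRI_CRP_54 10.197.185.211:OP_C46_DA6_PRI_CRP_54_target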

4.29 Volume data encryption with NVE

4.29.1 Prerequisites

1. Determine whether your cluster version supports NVE: version -v

NVE is not supported if the command output displays the text "no-DARE" (for "no
Data At Rest Encryption").

Example: The following command determines whether NVE is supported on cluster1.

cluster1::> version -v
NetApp Release 9.1.0: Tue May 10 19:30:23 UTC 2016 <1no-DARE>

The text "1no-DARE" in the command output indicates that NVE is not supported
on your cluster version.

2. An NVE license entitles you to use the feature on all nodes in the cluster. You must
install the license before you can encrypt data with NVE (with administrator
privilege).

Install the NVE license for a node: system license add -license-code
license_key


Example: The following command installs the license with the key
AAAAAAAAAAAAAAAAAAAAAAAAAAAA.

cluster1::> system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA

Verify that the license is installed by displaying all the licenses on the cluster:
system license show. For complete command syntax, see the man page for the command.

Example : The following command displays all the licenses on cluster1:

cluster1::> system license show

The NVE license package name is "VE".

4.29.2 Configuration of the key management

Configuring external key management :

You can use one or more external key management servers to secure the keys that
the cluster uses to access encrypted data. An external key management server is a
third-party system in your storage environment that serves keys to nodes using the
Key Management Interoperability Protocol (KMIP).

Not yet validated => Contact the ENG if needed

Enabling onboard key management (with administrator privilege) :

You can use the Onboard Key Manager to secure the keys that the cluster uses to
access encrypted data. You must enable Onboard Key Manager on each cluster that
accesses an encrypted volume or a self-encrypting disk (SED).

You must run the security key-manager setup command each time you add a node
to the cluster. In MetroCluster configurations, you must run security key-manager
setup on the local cluster first, then on the remote cluster, using the same
passphrase on each. Starting with ONTAP 9.5, you must run security key-manager
setup and security key-manager setup -sync-metrocluster-config yes on the local
cluster and it will synchronize with the remote cluster.

By default, you are not required to enter the key manager passphrase when a node
is rebooted. Starting with ONTAP 9.4, you can use the -enable-cc-mode true option
to require that users enter the passphrase after a reboot.

Note: After a failed passphrase attempt, you must reboot the node again.

Starting with ONTAP 9.5, ONTAP Key Manager supports Trusted Platform Module
(TPM). TPM is a secure crypto processor and micro-controller designed to provide
hardware-based security. Support for TPM is automatically enabled by ONTAP on
detection of the TPM device driver. If you are upgrading to ONTAP 9.5, you must
create new encryption keys for your data after enabling TPM support.

Steps :

1. Start the key manager setup wizard: security key-manager setup


cnaces84::> security key-manager setup


Welcome to the key manager setup wizard, which will lead you through
the steps to add boot information.

Enter the following commands at any time


"help" or "?" if you want to have a question clarified,
"back" if you want to change your answers to previous questions, and
"exit" if you want to quit the key manager setup wizard. Any changes
you made before typing "exit" will be applied.

Restart the key manager setup wizard with "security key-manager setup". To accept a
default
or omit a question, do not enter a value.
Would you like to configure onboard key management? {yes, no} [yes]:
Enter the cluster-wide passphrase for onboard key management. To continue the
configuration, enter the passphrase, otherwise type "exit":

the input value is invalid for type <text (size 32..256)>

Enter the cluster-wide passphrase for onboard key management

You can type "back", "exit", or "help" at any question.

Enter the cluster-wide passphrase for onboard key management. To continue the
configuration, enter the passphrase, otherwise type "exit":
Re-enter the cluster-wide passphrase:
After configuring onboard key management, save the encrypted
configuration data
in a safe location so that you can use it if you need to perform a manual
recovery
operation. Copy the passphrase to a secure location (PMR) outside the
storage system for future use.

All key management information is automatically backed up to the replicated database (RDB)
for the cluster. You should also back up the information manually for use in case of a
disaster; the backup data is displayed with the security key-manager backup show command
(see the example further below). The keys themselves can be listed with security
key-manager key show:

cnaces84::> security key-manager key show

Node: cnaces84-01
Key Store: onboard
Key ID Used By
---------------------------------------------------------------- --------
00000000000000000200000000000100078596BDE0CDA32B81023B5E5DC6BF44
NSE-AK
0000000000000000020000000000010087EB0C61C8DDBD677F477C8557E76897
NSE-AK

Node: cnaces84-02
Key Store: onboard
Key ID Used By
---------------------------------------------------------------- --------
00000000000000000200000000000100078596BDE0CDA32B81023B5E5DC6BF44
NSE-AK
0000000000000000020000000000010087EB0C61C8DDBD677F477C8557E76897
NSE-AK


4 entries were displayed.
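
For the manual backup mentioned above, the following command displays the backup data to
be copied, in full, to a secure location outside the storage system (the backup text block
itself is deliberately not reproduced here):

cnaces84::> security key-manager backup show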

4.29.3 Enabling encryption on a volume

You can enable encryption on a new volume or on an existing volume. You must have
installed the NVE license and enabled key management before you can enable volume
encryption. NVE is FIPS-140-2 level 1 compliant.

Enabling encryption on a new volume :

You can use the volume create command to enable encryption on a new volume.
Steps :
1. Create a new volume and enable encryption on the volume: volume create
-vserver SVM_name -volume volume_name -aggregate aggregate_name
-encrypt true

For complete command syntax, see the man page for the command.

Example

The following command creates a volume named vol1 on aggr1 and enables
encryption on the volume:

cluster1::> volume create -vserver vs1 -volume vol1 -aggregate aggr1 -encrypt true

The system creates an encryption key for the volume. Any data you put on the volume
is encrypted.

2. Verify that the volume is enabled for encryption: volume show -is-encrypted
true

For complete command syntax, see the man page for the command.

Example

The following command displays the encrypted volumes on cluster1:

cluster1::> volume show -is-encrypted true

Vserver  Volume  Aggregate  State   Type  Size   Available  Used%
-------  ------  ---------  ------  ----  -----  ---------  -----
vs1      vol1    aggr2      online  RW    200GB  160.0GB    20%

Enabling encryption on an existing volume with the volume encryption conversion start command

Starting with ONTAP 9.3, you can use the volume encryption conversion start
command to enable encryption on an existing volume. Once you start a conversion
operation, it must complete. If you encounter a performance issue during the
operation, you can run the volume encryption conversion pause command to pause
the operation, and the volume encryption conversion restart command to resume the
operation.


Note: You cannot use volume encryption conversion start to convert a SnapLock or
FlexGroup volume.

Steps :
1. Enable encryption on an existing volume: volume encryption conversion start
-vserver SVM_name -volume volume_name

For complete command syntax, see the man page for the command.

Example

cnaces84::> volume encryption conversion start -vserver svm_fca_se_file_04 -volume volmwinad1

Warning: Conversion from non-encrypted to encrypted volume scans and encrypts all of
the data in the specified volume. It may take a significant amount of time, and
may degrade performance during that time. Are you sure you still want to continue?
{y|n}: y
Conversion started on volume "volmwinad1". Run "volume encryption conversion show
-volume volmwinad1 -vserver svm_fca_se_file_04" to see the status of this operation.
• The system creates an encryption key for the volume. The data on the volume is
encrypted.
• Verify the status of the conversion operation: volume encryption conversion show
For complete command syntax, see the man page for the command.
Example

cnaces84::> volume encryption conversion show -volume volmwinad1 -vserver svm_fca_se_file_04

Vserver Name: svm_fca_se_file_04
Volume Name: volmwinad1
Start Time: 11/22/2018 16:30:02
Status: Phase 2 of 2 is in progress.

When the conversion operation is complete, verify that the volume is enabled for
encryption:
volume show -is-encrypted true
For complete command syntax, see the man page for the command.
Example

cnaces84::> volume show -is-encrypted true


Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm_fca_se_file_04
volmwinad1 aggr_data_cnaces84_01_sas_01
online RW 80GB 80.00GB 0%

Enabling encryption on an existing volume with the volume move start command

You can use the volume move start command to enable encryption on an existing
volume. You must use volume move start in ONTAP 9.2 and earlier. You can use the same
aggregate or a different aggregate.

Steps :
1. Move an existing volume and enable encryption on the volume: volume move
start -vserver SVM_name -volume volume_name -destination-aggregate
aggregate_name -encrypt-destination true|false


For complete command syntax, see the man page for the command.

Example

The following command moves an existing volume named vol1 to the destination
aggregate aggr2 and enables encryption on the volume:
cluster1::> volume move start -vserver vs1 -volume vol1 -destination-aggregate aggr2 -encrypt-destination true

The system creates an encryption key for the volume. The data on the volume is
encrypted.

2. Verify that the volume is enabled for encryption: volume show -is-encrypted
true

For complete command syntax, see the man page for the command.

Example

The following command displays the encrypted volumes on cluster1:

cluster1::> volume show -is-encrypted true

Vserver  Volume  Aggregate  State   Type  Size   Available  Used%
-------  ------  ---------  ------  ----  -----  ---------  -----
vs1      vol1    aggr2      online  RW    200GB  160.0GB    20%
