NETAPP UNIVERSITY
Clustered Data
ONTAP Administration
Student Guide
Course ID: STRSW-ILT-D8CADM-REV04
Catalog Number: STRSW-ILT-D8CADM-REV04-SG
Content Version: 1.0
ATTENTION
The information contained in this course is intended only for training. This course contains information and activities that,
while beneficial for the purposes of training in a closed, non-production environment, can result in downtime or other
severe consequences in a production environment. This course material is not a technical reference and should not,
under any circumstances, be used in production environments. To obtain reference materials, refer to the NetApp product
documentation that is located at http://now.netapp.com/.
COPYRIGHT
© 2015 NetApp, Inc. All rights reserved. Printed in the U.S.A. Specifications subject to change without notice.
No part of this document covered by copyright may be reproduced in any form or by any means (graphic, electronic, or
mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system) without prior written
permission of NetApp, Inc.
TRADEMARK INFORMATION
NetApp, the NetApp logo, Go Further, Faster, ASUP, AutoSupport, Campaign Express, Customer Fitness, CyberSnap,
Data ONTAP, DataFort, FilerView, Fitness, Flash Accel, Flash Cache, Flash Pool, FlashRay, FlexCache, FlexClone,
FlexPod, FlexScale, FlexShare, FlexVol, GetSuccessful, LockVault, Manage ONTAP, Mars, MetroCluster, MultiStore,
OnCommand, ONTAP, ONTAPI, RAID DP, SANtricity, SecureShare, Simplicity, Simulate ONTAP, Snap Creator,
SnapCopy, SnapDrive, SnapIntegrator, SnapLock, SnapManager, SnapMirror, SnapMover, SnapProtect, SnapRestore,
Snapshot, SnapValidator, SnapVault, StorageGRID, Tech OnTap, and WAFL are trademarks or registered trademarks of
NetApp, Inc. in the United States and/or other countries.
Other product and service names might be trademarks of NetApp or other companies. A current list of NetApp trademarks
is available on the Web at http://www.netapp.com/us/legal/netapptmlist.aspx.
2015 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.
TABLE OF CONTENTS
WELCOME .......................................................................................................................................................... 1
MODULE 1: EXPLORING DATA ONTAP STORAGE FUNDAMENTALS .................................................... 1-1
MODULE 2: HARDWARE AND INITIAL SETUP ........................................................................................... 2-1
MODULE 3: INITIAL STORAGE SYSTEM CONFIGURATION ..................................................................... 3-1
MODULE 4: STORAGE MANAGEMENT ....................................................................................................... 4-1
MODULE 5: NETWORK MANAGEMENT ...................................................................................................... 5-1
MODULE 6: IMPLEMENTING NAS PROTOCOLS ........................................................................................ 6-1
MODULE 7: IMPLEMENTING SAN PROTOCOLS ........................................................................................ 7-1
MODULE 8: SNAPSHOT COPIES .................................................................................................................. 8-1
MODULE 9: MANAGING STORAGE SPACE ................................................................................................ 9-1
MODULE 10: DATA PROTECTION .............................................................................................................. 10-1
MODULE 11: MONITORING YOUR STORAGE SYSTEM ........................................................................... 11-1
MODULE 12: UPGRADING AND TRANSITIONING TO CLUSTERED DATA ONTAP .............................. 12-1
BONUS MODULE A: INFINITE VOLUMES .................................................................................................... A-1
BONUS MODULE B: ENGAGING NETAPP SUPPORT ................................................................................ B-1
BONUS MODULE C: ONCOMMAND INSIGHT WALKTHROUGH ............................................................... C-1
BONUS MODULE D: DATA ONTAP PHYSICAL STORAGE MAINTENANCE ............................................ D-1
BONUS MODULE E: CLUSTERED DATA ONTAP ARCHITECTURE.......................................................... E-1
2015 NetApp, Inc. All rights reserved. NetApp Proprietary Limited Use Only
Logistics
The Class: schedule, structure, activities, participation rules
Resources: materials, support
LOGISTICS
Learn Together
Participants learn, share, and participate through video, whiteboard, polling, and chat.
LEARN TOGETHER
Intermediate courses:
Data ONTAP NFS Administration
Data ONTAP SMB (CIFS) Administration
Data ONTAP SAN Administration
Data ONTAP Protection Administration
Advanced topics:
Performance
Troubleshooting
Tools
Enterprise Applications
Course Objectives
By the end of this course, you should be able to:
COURSE OBJECTIVES
Course Agenda
Day 1
Module 1: Exploring Data ONTAP Storage Fundamentals
Module 2: Hardware and Initial Setup
Module 3: Initial Storage System Configuration
Module 4: Storage Management
Day 2
Module 5: Network Management
Module 6: Implementing NAS Protocols
Module 7: Implementing SAN Protocols
Module 8: Snapshot Copies
Day 3
Module 9: Managing Storage Space
Module 10: Data Protection
Module 11: Monitoring Your Storage System
Module 12: Upgrading and Transitioning to Clustered Data ONTAP
COURSE AGENDA
Bonus Modules
Bonus Module A: Infinite Volumes
Bonus Module B: Engaging NetApp Support
Bonus Module C: OnCommand Insight Walkthrough
Bonus Module D: Data ONTAP Physical Storage Maintenance
Bonus Module E: Clustered Data ONTAP Architecture
BONUS MODULES
Lab Environment (Remote Desktop access)
Windows Server 2012 R2: username Administrator
CentOS 6.5: username root
Cluster login: admin (case-sensitive)
NetApp Support: http://support.netapp.com/
NetApp University: http://www.netapp.com/us/servicessupport/university/index.aspx
NetApp University Support: http://netappusupport.custhelp.com
Module 1
Traditional storage silos: separate islands of storage for Windows, Linux cluster, and UNIX environments, with more silos added for future needs.
New Demands on IT
Agility at Scale
SLAs: from negotiated to service-driven
Provisioning: from weeks to minutes
Availability: from maintenance windows to no outage windows
Economics: exploit data
Infrastructure: shared and consolidated
Operating modes:
Earlier than 8.3: available as either Data ONTAP operating in 7-Mode or clustered Data ONTAP
8.3 and later: available only as clustered Data ONTAP
Nondisruptive operations (NDO)
Proven efficiency
Seamless scalability
NetApp storage with FAS
NetApp and third-party arrays with FlexArray virtualization
Improvements to SnapMirror functionality
Larger Flash Pool cache sizes
Support for using Microsoft Management Console (MMC) to manage files and file shares
IPspaces
System Setup 3.0
System Setup 3.0: The System Setup tool is designed to improve the initial overall customer experience.
System Setup 3.0 supports Data ONTAP 8.3 and can be used to set up new FAS2200, FAS3200, and
FAS8000 storage systems.
OnCommand System Manager 8.3: System Manager 8.3 provides manageability support for new NetApp
storage platforms, innovative clustered Data ONTAP features, and commonly used customer workflows.
System Manager 8.3 is hosted on a Data ONTAP cluster, and it enhances the simplicity and management of
Data ONTAP 8.3 environments.
Automated NDU: Earlier releases of the Data ONTAP operating system enabled nondisruptive upgrades
(NDUs). But the Data ONTAP 8.3 operating system greatly automates and simplifies the upgrade process.
Whether processed through rolling upgrades or batch upgrades, upgrades to later versions of Data ONTAP
will be simple, nondisruptive, and automated.
Improvements to SnapMirror functionality: The Data ONTAP 8.3 operating system provides key data-protection
benefits through expanded SnapMirror fan-in and fan-out ratios and improvements to SnapMirror
compression performance.
Larger Flash Pool cache sizes: The maximum supported Flash Pool cache sizes increase considerably. This
increase can help to improve the I/O performance that is so crucial for this company.
Support for using Microsoft Management Console (MMC) to manage files and file shares: With support for
MMC functionality, admins can manage elements of their NetApp storage directly from the Microsoft
Management Console, so they can spend less time managing their data and more time on strategic company
tasks.
Improvements for transitioning data onto clustered Data ONTAP (using NetApp 7-Mode Transition Tool
2.0): The 7-Mode Transition Tool is easy to use and greatly simplifies migration from Data ONTAP operating
in 7-Mode to Data ONTAP 8.3. Enhancements include the migration of MetroCluster configurations,
migration of volumes that contain LUNs, removal of the /vol path from all junctions, and the ability to keep
7-Mode volumes online during and after storage cutover.
IPv6 enhancements: Data ONTAP enables the creation of logical interfaces (LIFs) with IPv6 addresses. Newly
supported IPv6 features include IPspaces, intercluster peering (including SnapMirror over IPv6), MetroCluster,
and DNS load balancing.
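As an illustration, an IPv6 data LIF can be created from the clustershell in much the same way as an IPv4 LIF. The names used here (SVM vs1, LIF lif6, node c1-01, port e0c) and the address are hypothetical, and the exact parameters can vary by release; verify against the product documentation before use:
c1::> network interface create -vserver vs1 -lif lif6 -role data -home-node c1-01 -home-port e0c -address 2001:db8::10 -netmask-length 64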
FAS Portfolio
Flash-accelerated hybrid storage
2x price/performance ratio
Cornerstone for cloud services
Storage-efficiency technologies
Models scale from the entry-level FAS2520, FAS2552, and FAS2554 (up to 576 TB, 144 drives, 4-TB VST Flash) through the FAS8020, FAS8040, and FAS8060 to the FAS8080EX (up to 5,760 TB, 1,440 drives, 72-TB VST Flash).
NetApp FAS unified storage connects to the corporate LAN and provides both NAS (file-level access over NFS and SMB) and SAN (block-level access over iSCSI, FC, and FCoE).
Lesson 1
High-Availability Configurations
7-Mode high availability:
Fault tolerance
Takeover within client timeout values
Seamless giveback without client disruption
Clustered high availability:
Ability to perform nondisruptive operations
Hardware and software upgrades
Hardware maintenance
HIGH-AVAILABILITY CONFIGURATIONS
High-availability (HA) pairs provide hardware redundancy that, along with redundant disk cabling, provides
the basis for nondisruptive operations and fault tolerance. Each node can take over its partner's
storage and network traffic in case of an outage, and return it when the problem is resolved.
The controllers are connected to each other through an HA interconnect. Each node continually monitors its
partner, mirroring the data for each other's nonvolatile memory (NVRAM or NVMEM). If both controllers
are in the same chassis, the interconnect is internal and requires no external cabling. Otherwise, external
cabling is required to connect the two controllers.
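As an illustrative sketch, takeover and giveback can be observed and driven from the clustershell; the node name c1-02 is hypothetical, and these commands should be used only in a lab environment:
c1::> storage failover show
c1::> storage failover takeover -ofnode c1-02
c1::> storage failover giveback -ofnode c1-02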
Clusters are built for continuous operation; no single failure on a port, disk, card, or motherboard will cause
data to become inaccessible in a system. Clustered scaling and load balancing are both transparent.
Clusters provide a robust feature set, including data protection features such as Snapshot copies, intracluster
asynchronous mirroring, SnapVault backups, and NDMP backups.
Capacity Scaling
Rapid and seamless deployment of new storage or applications or both
No required downtime
Movement that is transparent to clients
CAPACITY SCALING
In this example, more capacity is needed for Project B. You can increase the capacity by adding disks to an
HA pair, and then you can transparently move some of the data to the new storage. You can then expand the
amount of storage that is dedicated to Project B.
This expansion and movement are transparent to client machines.
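This kind of transparent movement is performed with the volume move commands; the SVM, volume, and aggregate names below are hypothetical:
c1::> volume move start -vserver vs_b -volume projb_vol -destination-aggregate aggr_new
c1::> volume move show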
Linear Scaling
LINEAR SCALING
Clustered Data ONTAP solutions can scale from 1 to 24 nodes and are primarily managed as one large
system. More importantly, to client systems, a cluster looks like a single file system. The performance of the
cluster scales linearly to multiple gigabytes per second of throughput, and the capacity scales to petabytes.
Clusters are a fully integrated solution. This example shows a 16-node cluster that includes 10 FAS systems
with 6 disk shelves each, and 6 FAS systems with 5 disk shelves each.
Secure Multi-Tenancy
Tenants: departments, customers, and applications
Secure Multi-Tenancy: Storage Containers
Clustered: storage virtual machines (SVMs), including node SVMs, an administrative SVM, and data SVMs, on HA pairs joined by the cluster interconnect
7-Mode: vFiler units on an HA pair, with vFiler0 as the administrative unit in Data ONTAP
Consolidation and ease of management: Application service providers can consolidate your storage
needs. You can maintain domain infrastructure while providing multidomain storage consolidation. You
can reduce management costs while offering independent, domain-specific storage management.
Security: Security is one of the key concerns when storage is consolidated, either within an organization
or by an application service provider. Different vFiler units or SVMs can have different security systems
within the same storage system or cluster.
Delegation of management: Role-based access control (RBAC) provides administrator access that is
specific to a vFiler unit or an SVM.
Lesson 2
Administrative Interfaces: Methods
CLI (clustered example: c1::> aggr create)
GUI (7-Mode and clustered)
Configuration files (7-Mode)
Administrative Interfaces: Shells
7-Mode: admin shell and system shell
Clustered: clustershell, node shell, and system shell
1. cluster>
2. x::storage aggregate*>
3. cluster#
4. ::cluster999>
Administrative Interfaces
Privilege Levels
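The privilege level is changed with the set command. For example (the prompts and warning text shown are illustrative and can vary by release):
c1::> set -privilege advanced
Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.
Do you want to continue? {y|n}: y
c1::*> set -privilege admin
Note the asterisk in the prompt, which indicates that the session is at an elevated privilege level.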
Clustershell
Command Scope
c1::> storage aggregate
c1::storage aggregate> modify
Clustershell
Question Mark
c1::> storage aggregate
c1::storage aggregate> modify ?
[-aggregate] <aggregate name>                        Aggregate
[ -free-space-realloc {on|off|no_redirect} ]
[ -ha-policy {sfo|cfo} ]                             HA Policy
[ -percent-snapshot-space <percent> ]
[ -space-nearly-full-threshold-percent <percent> ]   Aggregate Nearly Full Threshold Percent
[ -space-full-threshold-percent <percent> ]
[ -hybrid-enabled {true|false} ]                     Hybrid Enabled
[ -force-hybrid-enabled|-f [true] ]
[ -maxraidsize|-s <integer> ]
...
Enter: cluster
What command scope are you in now?
Is there a show subcommand?
Enter: ?
Is a show command available?
Enter: show
How do you exit to the root command scope?
Clustershell
Tab Completion
c1::storage aggregate> modify <Tab>
Clustershell
Scope Return
c1::storage aggregate> ..
c1::storage> top
c1::>
Clustershell
Additional Features
1. cluster>
2. x::storage aggregate*>
3. cluster#
4. ::cluster999>
Storage pools
Protection workflow support for version-flexible SnapMirror technology
Service Processor management support
SVM workflows
Summarizing disks
IPv6 support
On-box operation (OnCommand System Manager 3.1.x is still a supported off-box option)
References
http://www.NetApp.com
https://Twitter.com/NetApp
https://www.facebook.com/NetApp
http://www.youtube.com/user/NetAppTV
REFERENCES
Exercise
EXERCISE
Please refer to your exercise guide.
Module 2
FAS2500
FAS8000
2015 NetApp, Inc. All rights reserved. NetApp Proprietary Limited Use Only
The FAS8000 series: These new midrange storage platforms enable flash and clustering to improve IT
performance and agility. The FAS8000 series offers up to 80% more performance and 100% more capacity
than the FAS3200 series. The FAS8000 series is flash-ready, with up to 18 terabytes (TB) of flash to increase
performance. The FAS8000 series uses scalable, cluster-ready architecture to meet new business demands.
FAS Configurations
Single-chassis standalone configuration
High-availability (HA) configuration
Single-controller configuration with an I/O expansion module (IOXM)
FAS CONFIGURATIONS
This slide shows an example of the FAS6200 series controller configuration. Although configurations can
vary with models, they have a few things in common:
Single-chassis configuration:
Standalone configuration with controller and blank panel: good for single-node configurations where high
availability is not a requirement, or where the controllers of an HA pair are spread out in the data center.
Standalone configuration with controller and I/O expansion module (IOXM): good for configurations that
require additional PCIe cards.
NOTE: This configuration is available only with the FAS3200 and FAS6200 series.
High-availability (HA) configuration with two controllers: Sometimes called HA-in-a-box; the HA
interconnect is handled within the chassis connection (rather than over external cables).
SAN (departmental and enterprise) and NAS (departmental, NFS) access over the LAN to volumes and aggregates, with a third-party back-end array (Hitachi Data Systems in this example)
Integrate data protection between different types of storage arrays from different vendors
Use storage virtualization to achieve higher use and scalability
Create NAS and SAN gateways for your current storage arrays for NFS, CIFS, iSCSI, FC, and FCoE network protocols
Disk Shelves
DS424x (DS4243, DS4246): 4U, 24 disks
DS4486: 4U, 48 disks, tandem drive carriers
DS2246: 2U, 24 disks
Disk speeds: 15K SAS, 7.2K SATA, and SSD (DS4243, DS4246, DS4486); 10K SAS and SSD (DS2246)
DS4486 is a 4U, 48-disk shelf (using tandem drive carriers) with dual 6-Gbps I/O modules.
Supports high-capacity (7.2K RPM) HDDs
NOTE: Although Data ONTAP supports FC shelves, they are no longer sold with new systems.
FAS Configurations
Single-Node Cluster
Advantages:
Low cost
Controller and disk shelves have redundant fans and power supplies to keep hardware running
Disadvantages:
Storage system is the single point of failure
Loss of a controller or cable could lead to loss of data or loss of data access
The controller (Node 1) connects to its disk shelves over SAS or FC (simplified); multipath HA (dual-path cabling) is recommended.
Represents the failure of a single hardware component that can lead to loss of data access or loss of data
Does not include multiple or rolling hardware errors, such as triple disk failure or dual disk-shelf module
failure
All hardware components that are included with the storage system have demonstrated very good reliability
with low failure rates. If a hardware component such as a controller or adapter fails, you can replace the failed
component, but client applications and users will not be able to access their data until the system is repaired.
FAS Configurations
Switchless Two-Node Cluster
Advantages:
Fault tolerance: when a node fails, a takeover occurs and the partner node continues to serve the failed node's data
NDO: during maintenance and upgrades, takeover occurs while the partner is being upgraded
Disadvantage:
No shelf-loss protection (data duplication) is included
The HA pair (Controller or Node 1 and Controller or Node 2), each with its own disk shelves, is connected by the HA interconnect and the cluster interconnect (SAS or FC, simplified).
Switched cluster: nodes and their disk shelves form HA pairs, connected by HA interconnects and by redundant cluster interconnect switches (SAS or FC, simplified).
Supports up to 12-node configurations
This switch is recommended for clusters larger than 12 nodes or platforms that support four cluster
interconnects per node.
Two switches per cluster are required for redundancy and bandwidth.
Eight ISLs are required between the switches.
Although they are supported, the Cisco Nexus 5010 and Nexus 5020 switches are no longer being sold. The
NetApp CN1610 and Cisco Nexus 5596 replace the 5010 and 5020 respectively. NetApp clusters do not
support the Cisco Nexus 5548.
For switch setup and configuration information:
Clustered Data ONTAP Switch Setup Guide for Cisco Switches
CN1601 and CN1610 Switch Setup and Configuration Guide
7-Mode Configurations
Mirrored HA Pairs and MetroCluster Software
Advantages:
Mirrored HA pairs maintain two complete copies of all mirrored data
MetroCluster software provides failover to another site that contains a nearly real-time copy of the data at the failed site
Disadvantages:
Cost is higher
Each node is managed separately and has its own disks, resources, and mirrors of the other node
The HA pair (Controller 1 and Controller 2) is connected by the HA interconnect; each node's disk shelves hold its own data plus a mirror of its partner's data (SAS or FC, simplified).
MetroCluster: Cluster A in Data Center A and Cluster B in Data Center B
Hardware Universe
Hardware Universe is:
A web-based tool for employees, partners, and customers
A consolidated hardware specifications tool for: controllers, adapters, shelves, disks, cabinets, and cables
HARDWARE UNIVERSE
Hardware Universe (HWU) is a web-based tool that replaces the System Configuration Guide. HWU provides
you with a visual presentation of the complete NetApp line of hardware products. HWU provides a powerful
configuration resource for NetApp employees, partners, and customers by consolidating hardware
specifications for the following products and components:
You can make a side-by-side comparison of the various controllers in terms of system capacity, memory size,
maximum spindle count, and other features so that you can decide which controllers will meet your
requirements.
In addition, you can save your personal queries for re-use, or draw from your last 20 queries. This
functionality is a convenient way to revisit your favorite configurations over time.
Hardware Universe is also available for iOS and Android mobile phones and tablets. To download the mobile
HWU apps, visit http://app.netapp.com/public/hardware.html.
This Module
If the system is new and does not require a software upgrade (or downgrade), simply start the setup process.
If the system requires an upgrade or downgrade for any reason, install the software first. After the software installation is complete, initialize the disks. (This initialization takes a while.)
When the system boots completely, run a setup procedure to set up and configure the system or cluster. After the configuration is complete, you can create storage resources.
Hardware Setup
Connect:
HA interconnect
Controllers to disk shelves
Controllers to networks
Any tape devices
Controllers and disk shelves to power
HARDWARE SETUP
Connect controllers to disk shelves. Verify that shelf IDs are set properly.
If required for your controller type, connect the NVRAM HA cable between partners. The connections can be through the chassis, 10-Gigabit Ethernet (10-GbE), or InfiniBand, depending on your storage controllers.
Connect controllers to networks.
If present, connect any tape devices. (This task can be performed later.)
Connect controllers and disk shelves to power.
HA Interconnect Links
Can be either:
External cables (dedicated 10-GbE or InfiniBand)
Internal interconnect (over the backplane in the chassis)
Failover
Disk firmware
Heartbeats
Version information
Virtual Target Interconnect (VTIC) for FC SAN
HA INTERCONNECT LINKS
HA interconnects connect the two nodes of each HA pair for all controllers. These connections are internally
provided over the backplane in the chassis of a dual-controller configuration. For chassis with single
controllers, a dedicated InfiniBand or 10-GbE link is required, depending on the model and enclosure. Visit
the NetApp Support site to see the appropriate hardware configuration guide for your model storage
controller.
The following types of traffic flow over the HA interconnect links:
Failover: These directives relate to performing storage failover (SFO) between the two nodes, whether the failover is planned (negotiated) or unplanned.
Disk firmware: Nodes in an HA pair coordinate the update of disk firmware. While one node is updating
the firmware, the other node must not perform any I/O to that disk.
Heartbeats: Regular messages demonstrate availability.
Version information: The two nodes in an HA pair must be kept at the same major and minor revision
levels for all software components.
For 7-Mode only, the HA interconnect also provides Virtual Target Interconnect (VTIC), which connects the
two nodes in an HA pair. In FC SAN environments, VTIC enables LUNs to be served through target ports on
both nodes. For example, the output of igroup show -v displays the FC initiator that is logged in on
physical ports and a port that is called vtic.
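In clustered Data ONTAP, a quick way to confirm that the HA interconnect and failover relationships are healthy is the storage failover show command. The node names and output below are illustrative, not from a real system:

```
c1::> storage failover show
Node      Partner   Possible  State Description
--------  --------  --------  -------------------
node1     node2     true      Connected to node2
node2     node1     true      Connected to node1
```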
Disk-Shelf Cabling
[Diagram: combined SAS and ACP shelf cabling, plus HA cabling, for Controller 1 and Controller 2 of an HA pair; each controller is cabled to Shelf 1 and Shelf 2 in Stack 1 and in Stack 2, with the HA interconnect between the controllers.]
DISK-SHELF CABLING
This diagram combines SAS and ACP shelf cabling and the HA cabling for controller 1 and controller 2 of an
HA pair. For a complete course in Data ONTAP Cluster-Mode cabling, see the Data ONTAP 8 Cabling
course (STRHW-WBT-DOTCABL).
ACP (Alternate Control Path) is a protocol that enables Data ONTAP to manage and control a SAS-connected storage shelf
subsystem. It uses a separate network (an alternate path) from the data path, so management communication
is not dependent on the data path being intact and available.
You do not need to actively manage the SAS-connected storage shelf subsystem. Data ONTAP automatically
monitors and manages the subsystem without operator intervention. However, you must provide the required
physical connectivity and configuration parameters to enable the ACP functionality.
NOTE: You can install SAS-connected storage shelves without configuring ACP. However, for maximum
storage availability and stability, you should always have ACP configured and enabled.
After you enable ACP, you can use the storage show acp and acpadmin list_all commands
(available through the node shell in clustered Data ONTAP) to display information about your ACP
subsystem.
Because ACP communication is on a separate network, it does not affect data access.
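As a sketch, the two commands mentioned above can be run through the node shell from the clustershell; the node name here is a placeholder:

```
c1::> system node run -node node1 -command storage show acp
c1::> system node run -node node1 -command acpadmin list_all
```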
NOTE: Although FC shelves are supported in Data ONTAP, they are no longer sold with new systems.
Best practices:
A single-controller configuration must use a dual path.
FAS22xx systems with external storage must use a dual path.
A dual path is recommended for greater resiliency.
Best practices:
HA pair configurations must use multipath high availability (MPHA).
Networks
[Diagram: a four-node cluster (node1 through node4) connected to three distinct networks: the cluster interconnect with ISLs, the data network with ISLs (Ethernet, FC, or converged), and the management network.]
NETWORKS
Data ONTAP operating in 7-Mode and clustered Data ONTAP begin to differ the most when it comes to
networking. Because clustered Data ONTAP is essentially a cluster of HA pairs, a cluster network or cluster
interconnect is needed for all the nodes to communicate with each other.
On this slide you see a four-node cluster and three distinct networks. 7-Mode and clustered Data ONTAP both
require data and management connectivity, which could coexist on the same network.
In multinode configurations, clustered Data ONTAP also requires a cluster interconnect for cluster traffic. In a
two-node configuration, the cluster interconnect can be as simple as two cables between the nodes, or a
cluster network if expansion is desired. In clusters of more than two nodes, a cluster network is required.
Single-node clusters do not require a cluster interconnect unless the cluster is expanded later.
Two cluster connections to each node are typically required for redundancy and improved cluster traffic flow.
For larger clusters that use higher-end platforms (FAS8040 or FAS8060) that are running clustered Data
ONTAP 8.2.1 or later, four cluster interconnects are recommended.
For proper configuration of the NetApp CN1601 and CN1610 switches, refer to the CN1601 and CN1610
Switch Setup and Configuration Guide.
Communication Connections
Console connection (using ANSI-9600-8N1)
Remote management device connection (dependent on
model):
Service Processor (SP)
Remote LAN Module (RLM)
Console
Management
COMMUNICATION CONNECTIONS
Each controller should have a console connection, which is required to get to the firmware and the boot menu
(for the setup, installation, and initialization options, for example). A remote management device connection,
although not required, is helpful in the event that you cannot get to the UI or console. Remote management
enables remote booting, the forcing of core dumps, and other actions.
Use a serial console port to set up and monitor the storage system. When you set up your system, use a serial
cable to connect to your PC. An RJ45 port that is marked IOIOI is located on the rear panel. Connect the DB9
end to a serial port on a host computer.
Properties:
Each node must have two connections to the dedicated cluster network. Each node should have at least one
data connection, although these data connections are necessary only for client access. Because the nodes are
clustered together, it is possible to have a node that participates in the cluster with its storage and other
resources but doesn't field client requests. Typically, however, each node has data connections.
The cluster connections must be on a network that is dedicated to cluster traffic. The data and management
connections must be on a network that is distinct from the cluster network.
Management Interfaces
e0M interface:
Dedicated for management traffic
Used for Data ONTAP administrative tasks
RLM or SP interface:
Is used to manage and provide remote management capabilities for the
storage system
Provides remote access to the console, as well as monitoring, troubleshooting, logging, and alerting features
Remains operational even when the storage system is down, as long as the system has input power
Command to set up SP:
system node service-processor
MANAGEMENT INTERFACES
Some storage system models include an e0M interface. This interface is dedicated to Data ONTAP
management activities. An e0M interface enables you to separate management traffic from data traffic on
your storage system for better security and throughput.
To set up a storage system that has the e0M interface, remember this information:
The Ethernet port that is indicated by a wrench icon on the rear of the chassis connects to an internal
Ethernet switch.
Follow the Data ONTAP setup script.
In environments where dedicated LANs isolate management traffic from data traffic, e0M is the preferred management interface.
Configure e0M separately from the RLM or SP configuration.
Both configurations require unique IP and MAC addresses to enable the Ethernet switch to direct traffic
to either the management interfaces or the RLM or SP.
For more information on configuring remote support, refer to the Data ONTAP System Administration Guide
and Data ONTAP Remote Support Agent Configuration Guide.
Enhancements to SP in Data ONTAP 8.2.1:
Fans
Field replaceable unit (FRU) tracking
Advanced sensor management
SP enhancements are available for the FAS8000, FAS6200, FAS3200, and FAS2200 platforms.
Administrative Interfaces
Use these methods to administer the storage system:
CLI connections:
GUI connections:
ADMINISTRATIVE INTERFACES
Types of administrative interfaces:
Most of the time, you use the NetApp OnCommand System Manager program as your UI connection.
OnCommand System Manager is a web-based graphical management interface that enables you to manage
storage systems and storage objects, such as disks, volumes, and aggregates.
You can also administer NetApp storage systems through management software such as:
SnapManager
SnapProtect
Snap Creator Framework
Third-party management software using the NetApp Manageability Software Development Kit (SDK)
The CLI enables you to execute all Data ONTAP administrative commands, except some Windows server
administrative commands.
The management device enables you to remotely execute all Data ONTAP administrative commands.
Powering On a System
1. Power on network switches.
POWERING ON A SYSTEM
This order is the recommended order for powering on the hardware devices in a cluster.
Firmware
Use LOADER firmware.
FIRMWARE
CompactFlash
USB flash
Deletes all data on the disks that are owned by the controller
Creates a new root aggregate and root volume for configuration
Congratulations!
Review
Basic Steps for Setting Up a System
This Module
4. Initialize disks.
5. System setup: Create a cluster on the first node, then join additional nodes to the cluster.
6. Complete the initial configuration.
Additional Training
Clustered Data ONTAP Installation Workshop
System installation and configuration for clustered Data ONTAP
ADDITIONAL TRAINING
References
Clustered Data ONTAP System Administration Guide for Cluster Administrators
Clustered Data ONTAP High-Availability Configuration Guide
Clustered Data ONTAP Remote Support Agent Configuration Guide
Clustered Data ONTAP Switch Setup Guide for Cisco Switches
REFERENCES
Exercise
Module 2:
Hardware and Initial Setup
EXERCISE
Please refer to your exercise guide.
Module 3
This Module
4. Initialize disks.
5. System setup: Create a cluster on the first node, then join additional nodes to the
cluster.
Lesson 1
Administrative Access
Default system administrator account:
Data ONTAP operating in 7-Mode: root
Clustered Data ONTAP: admin
Clustered Data ONTAP administrators are associated with cluster or data SVMs.
Administrator accounts are created with role-based access control (RBAC):
Data ONTAP 7-Mode
system> useradmin
Clustered Data ONTAP
c1::> security login role create
ADMINISTRATIVE ACCESS
You can use the default system administration account for managing a storage system, or you can create
additional administrator user accounts to manage administrative access to the storage system.
You might want to create an administrator account for these reasons:
You can specify administrators and groups of administrators with differing degrees of administrative
access to your storage systems.
You can limit an administrator's access to specific storage systems by providing an administrative
account on only those systems.
Creating different administrative users enables you to display information about who is performing which
commands on the storage system.
RBAC:
Manages a set of capabilities for users and administrators
Enables you to monitor user and administrator actions
To implement RBAC:
Create a role with specific capabilities
Create a group with one or more assigned roles
Create one or more users, and assign them to a group or groups
[Diagram: users are assigned to groups, groups are granted roles, and roles carry capabilities.]
User: Locally created or from a domain; must be assigned to a group when it is created
Role: Set of capabilities assigned to a group
Capability: Privilege granted to a role to execute commands: login, CLI, API, and security rights
Groups: Collection of users or domains that are granted one or more roles
You use RBAC to define sets of capabilities. You then assign a set of capabilities to one or more users or
groups of users.
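In clustered Data ONTAP, roles are assigned directly to login accounts. As a hedged sketch (the role name, user name, and command directory are hypothetical, and exact parameter names can vary by release), the steps above look roughly like this:

```
c1::> security login role create -vserver c1 -role opsview -cmddirname "storage aggregate" -access readonly
c1::> security login create -vserver c1 -username pat -application ssh -authmethod password -role opsview
```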
RBAC
Predefined Roles in Clustered Data ONTAP
Cluster-scoped roles:
admin
backup
readonly
autosupport
none
c1::> security login role show -vserver c1
SVM-scoped roles:
vsadmin-backup
vsadmin-volume
vsadmin-readonly
vsadmin-protocol
c1::> security login role show -vserver svm1
RBAC
Custom Roles
Role name
Command directory
Query
c1::> security login role create
c1::> security login modify -vserver svm1 -user ken -role svm1vols
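As an illustrative sketch of these three elements (the SVM, role, user, and query values are hypothetical, and exact parameter names can vary by release), a custom role limited to volume commands for volumes on one aggregate might be created and assigned like this:

```
c1::> security login role create -vserver svm1 -role svm1vols -cmddirname "volume" -access all -query "-aggr aggr1"
c1::> security login modify -vserver svm1 -username ken -application ssh -role svm1vols
```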
Administrative Security
Use the security login command to configure role-based
administrative access to the cluster
Configure by application: console, HTTP, SNMP, SSH, and the ONTAPI
interface library
To enable and disable security audit logging, use:
c1::> security audit modify -cliset on -httpset on -cliget on -httpget on
ADMINISTRATIVE SECURITY
-cliset: Audit set requests (create or modify operations) made by using the clustershell
-cliget: Audit get requests (view operations) made by using the clustershell
-httpset: Audit set requests made by management tools that use the HTTP protocol
-httpget: Audit get requests made by management tools that use the HTTP protocol
Lesson 2
Licensing
LESSON 2: LICENSING
If you delete a previously installed license and want to reinstall it in Data ONTAP 8.2 or later
If you perform a controller replacement procedure for a node in a cluster that is running Data ONTAP 8.2
or later
License Types
Standard license:
Locked to a node
The feature functions on the cluster if at least one licensed node is running
Site license:
A single license that enables the feature on the entire cluster
Is not carried with nodes that are removed from the cluster
Evaluation license:
Also known as a demo license
A temporary license with an expiration date
Cluster-wide and not locked to a node
LICENSE TYPES
Standard license: A standard license is a node-locked license. It is issued for a node with a specific
system serial number and is valid only for the node that has the matching serial number. Installing a
standard, node-locked license entitles a node to the licensed functionality. It does not entitle the entire
cluster to use the feature. For the cluster to be enabled, though not entitled, to use the licensed
functionality, at least one node must be licensed for the functionality. However, if only one node in a
cluster is licensed for a feature, and that node fails, then the feature will no longer function on the rest of
the cluster until the licensed node is restarted.
Site license: A site license is not tied to a specific system serial number. When you install a site license,
all nodes in the cluster are entitled to the licensed functionality. The system license show
command displays site licenses under the cluster serial number. If your cluster has a site license and you
remove a node from the cluster, the node does not carry the site license with it, and that node is no longer
entitled to the licensed functionality. If you add a node to a cluster that has a site license, the node is
automatically entitled to the functionality that is granted by the site license.
Evaluation license: An evaluation license is a temporary license that expires after a certain period of
time. It enables you to try certain software functionality without purchasing an entitlement. It is a clusterwide license, and it is not tied to a specific serial number of a node. If your cluster has an evaluation
license for a package and you remove a node from the cluster, the node does not carry the evaluation
license with it.
License Commands
c1::> license ?
(system license)
add
clean-up
delete
show
status>
LICENSE COMMANDS
Data ONTAP enables you to manage feature licenses in the following ways:
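For example, licenses can be added, listed, and removed from the clustershell. The license code and serial number shown here are placeholders, not valid values:

```
c1::> system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA
c1::> system license show
c1::> system license delete -serial-number 1-80-000011 -package SnapMirror
```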
Licenses on the cluster for the selected feature
Lesson 3
Ken Asks
KEN ASKS
Policy examples:
Firewall and security
Export, quota, file, and data
Snapshot copy and SnapMirror
Quality of service (QoS)
Firewall
System health
SnapMirror
Volume efficiency
Volume FlexCache
Volume quota
Volume Snapshot
SVM CIFS group
SVM data
SVM export
SVM fpolicy
SVM security file-directory
QoS policy-group
Failover
Policy Example
[Diagram: policyA and policyB each contain Rule1 (criteria1), Rule2 (criteria2), and Rule3 (criteria3), with a property attached to each rule. Firewall policy examples: fwall_policy1 has rules for 192.168.1.0/24 ssh and 192.168.1.0/24 http; fwall_policy2 has rules for 192.168.21.0/24 ssh, 192.168.22.0/24 ssh, and 192.169.23.0/24 ssh, each with an allow property.]
The example is a firewall policy that allows or denies access to a protocol for specific IP address ranges.
POLICY EXAMPLE
A policya concept that is specific to clustered Data ONTAPis a collection of rules that are created and
managed by the cluster or SVM administrator. Policies are predefined as defaults or they are created to
manage the properties of various types of objects. Examples of policy use include firewall, security, export,
quota, storage quality of service, and replication rules.
Some rules are indexed, meaning that you can specify the order in which each rule is considered for use in a
particular situation. Rule indexes can be modified, and additional rules can be inserted in the list by
specifying the rules new position in the list.
In Data ONTAP operating in 7-Mode, you have to create rules at the object level. As a result, there is a
separate rule for every instance of the object, even if the rule is identical to others. With clustered Data
ONTAP policies, you create the set of rules one time, and then you associate it with all objects that adhere to
the same set of rules.
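As a hedged illustration of this model (the SVM, policy, volume, and subnet values are hypothetical), an export policy and its rules are created once and then associated with any volume that should follow them:

```
c1::> vserver export-policy create -vserver svm1 -policyname expol1
c1::> vserver export-policy rule create -vserver svm1 -policyname expol1 -ruleindex 1 -clientmatch 192.168.1.0/24 -rorule sys -rwrule sys
c1::> volume modify -vserver svm1 -volume vol1 -policy expol1
```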
Job Schedules
Job schedules can be used:
Globally (by all virtual storage systems and SVMs)
For functions that can be automated
For Snapshot, SnapMirror, and SnapVault events, for example
JOB SCHEDULES
Schedules apply to Data ONTAP 7-Mode and clustered Data ONTAP. Schedules are used to control events
that are automated and time-based. The most common examples are data replication and AutoSupport
messages.
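For example, a cron-style schedule that fires nightly at 02:15 might be created as follows in the clustershell (the schedule name is hypothetical):

```
c1::> job schedule cron create -name nightly215 -hour 2 -minute 15
c1::> job schedule show
```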
Lesson 4
The cluster-wide ntp command does not work until the entire
cluster is running Data ONTAP 8.3 or later.
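Once the entire cluster is running Data ONTAP 8.3 or later, cluster-wide NTP servers can be configured from the clustershell; the server name below is a placeholder:

```
c1::> cluster time-service ntp server create -server time.example.com
c1::> cluster time-service ntp server show
```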
Lesson 5
Ken Asks
KEN ASKS
AutoSupport messages are sent by HTTP or HTTPS, or by SMTP through an email server.
AutoSupport messages are generated:
When triggering events occur
When you initiate a test message
When the system reboots
Daily (logs only)
Weekly (logs, configuration, and health data)
Is automatically triggered by the kernel once a week to send information to the email addresses that are
specified in the autosupport.to option. In addition, you can use the options command to manually
invoke the AutoSupport mechanism to send this information.
Sends a message in response to events that require corrective action from the system administrator or
NetApp technical support.
Sends a message when the system reboots.
BATTERY_LOW
Disk failure
DISK_FAIL!!!
OVER_TEMPERATURE_SHUTDOWN!!!
REBOOT
SHELF_FAULT
WEEKLY_LOG
Unsuccessful HA takeover
HA takeover of a node
HA giveback occurred
Configuring AutoSupport
Data ONTAP 7-Mode
1. system> options autosupport.support.enable on
2. system> options autosupport.support.transport [smtp|http|https]
3. system> options autosupport.mailhost xx.xx.xx.xx
4. system> options autosupport.from bob@learn.local
5. system> options autosupport.to support@netapp.com
6. system> options autosupport.noteto tom@learn.local
7. system> options autosupport.enable on
8. system> options autosupport.doit testing asup
CONFIGURING AUTOSUPPORT
AutoSupport configuration involves identifying the transport of choice (SMTP, HTTP, or HTTPS) and the
details that are necessary to transport the message to NetApp. The steps to configure AutoSupport on Data
ONTAP operating in 7-Mode are quite different from the steps on clustered Data ONTAP, but they both
involve basically the same information, including mail host, the from email address, and any recipients,
including NetApp Support. You can also use the noteto option to send notifications to internal and
external recipients without sending the entire AutoSupport payload.
After configuring AutoSupport, always send a test message to verify that you get the desired result. For
testing your AutoSupport configuration on 7-Mode, NetApp recommends that you use the message TEST or
TESTING. In clustered Data ONTAP, sending a message of the type test is sufficient.
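As a sketch, the equivalent clustered Data ONTAP configuration uses the system node autosupport command set (the host names and addresses are placeholders):

cluster1::> system node autosupport modify -node node1 -state enable -transport https -mail-hosts smtp.learn.local -from bob@learn.local -to support@netapp.com -noteto tom@learn.local
cluster1::> system node autosupport invoke -node node1 -type test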
My AutoSupport
Key Features
[Diagram: NetApp systems send AutoSupport messages to the AutoSupport data warehouse; My AutoSupport and the My AutoSupport Mobile App present the data to NetApp, SSC partners, and customers.]
MY AUTOSUPPORT
My AutoSupport is a suite of web-based applications hosted on the NetApp Support site and accessible via
your web browser. Using the data from AutoSupport, My AutoSupport proactively identifies storage
infrastructure issues through a continuous health-check feature and automatically provides guidance on
remedial actions that help increase uptime and avoid disruptions to your business.
My AutoSupport provides four primary functions.
First, it identifies risks and provides best practice tips. For example, My AutoSupport might find a
configuration issue, a bad disk drive, or version incompatibility on your system.
Second, My AutoSupport can compare your hardware and software versions and alert you to potential
obsolescence. For example, My AutoSupport alerts you about end-of-life (EOL) issues or an upcoming
support contract expiration date.
Third, My AutoSupport provides performance and storage utilization reports to help you proactively plan
capacity needs.
Finally, My AutoSupport provides new system visualization tools and transition advisor tools for clustered
Data ONTAP systems.
If you plan any changes to your controllers, NetApp recommends manually triggering an AutoSupport message before you make the changes. This manually triggered AutoSupport message provides a "before" snapshot for comparison, in case a problem arises later.
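Such a baseline can be captured from the clustered Data ONTAP CLI; for example (the message text is arbitrary):

cluster1::> system node autosupport invoke -node * -type all -message "before planned maintenance"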
References
Clustered Data ONTAP System Administration Guide for
Cluster Administrators
Clustered Data ONTAP Software Setup Guide
REFERENCES
Exercise
EXERCISE
Please refer to your exercise guide.
Module 4
Storage Management
This Module
1.
2.
3.
4. Initialize disks.
5. System setup: Create a cluster on the first node, then join additional nodes to the cluster.
6.
7.
8.
9.
Lesson 1
Logical Layer
FlexVol volumes
Aggregate
RAID Groups
Physical Layer
Disks
A traditional volume is contained by a single, dedicated aggregate. Because a traditional volume is tightly
coupled with its containing aggregate, no other volumes can get their storage from an aggregate that
contains a traditional volume. Only Data ONTAP operating in 7-Mode uses traditional volumes, and they
are not recommended.
A FlexVol volume allocates only a portion of the available space within an aggregate. One or more
volumes can be on an aggregate. This type is the default volume type.
An infinite volume is a single, scalable volume that can store up to 2 billion files and tens of petabytes of
data. An infinite volume uses storage from multiple aggregates on multiple nodes. (Only clustered Data
ONTAP uses infinite volumes.)
Data ONTAP is optimized for writes. It can write any file system block (except the one that contains the root inode) to any location on disk, it can write blocks to disk in any order, and it improves RAID performance by writing multiple blocks to the same stripe as a full-stripe write.
Physical Layer
Disks
Unowned disks can be assigned to a controller automatically or manually, and can then be assigned to aggregates.
Physical Layer
Disk Types
Data ONTAP Disk Type | Disk Class       | Industry-Standard Disk Type | Description
BSAS                 | Capacity         | SATA                        | Bridged SAS-SATA
FSAS                 | Capacity         | NL-SAS                      | Near-line SAS
mSATA                | Capacity         | SATA                        | SATA in multi-disk carrier shelf
SAS                  | Performance      | SAS                         | Serial-attached SCSI
SSD                  | Ultraperformance | SSD                         | Solid-state drive
ATA                  | Capacity         | SATA                        | Serial ATA
FC-AL                | Performance      | FC                          | Fibre Channel
LUN                  | N/A              | LUN                         | Array LUN
VMDK                 | N/A              | VMDK                        | Virtual disks
SAS, BSAS, FSAS, solid-state drive (SSD), and MSATA disks use the SAS connection type.
SAS-connected storage shelves are connected to the controller on a daisy chain that is called a stack.
FC and ATA disks use the FC connection type with an arbitrated-loop topology (FC-AL).
FC-connected storage shelves are connected to the controller on a loop.
Data ONTAP also supports storage arrays and virtual storage (Data ONTAP-v):
Array LUNs use the FC connection type, with either point-to-point or switched topology.
An array LUN is a logical storage device backed by storage arrays and used by Data ONTAP as a disk. These
LUNs are referred to as array LUNs to distinguish them from the LUNs that Data ONTAP serves to clients.
The disk show command displays these as a LUN disk type.
NetApp Cloud ONTAP runs as a virtual machine and uses Virtual Machine Disk (VMDK).
You cannot combine different connection types in the same loop or stack. However, for MetroCluster
configurations, the FC and SAS connection types can be combined in a bridged connection, with FC on the
controller side and SAS on the shelf side. The bridged connection can be used in either a direct-attached
topology or a switched topology.
Physical Layer
Array LUNs
Unowned Array LUNs
Aggregate
Storage
A LUN must be created on the enterprise storage array by using the vendor's best practices.
A logical relationship must be created manually between the array LUN and Data ONTAP, where Data
ONTAP is the owner.
An array LUN can only be part of a RAID 0 aggregate. RAID protection for the array LUN is on the
enterprise storage array, not Data ONTAP.
NOTE: Array LUN reconfiguration, such as resizing the array LUN, must be done from the storage array.
Before such activities can occur, you must release Data ONTAP ownership of the array LUN.
Physical Layer
RAID Groups
RAID Group
Double-Parity Disk
Parity Disk
Hot Spares
If this guideline is impossible to follow, any RAID group with fewer disks should have only one disk less
than the largest RAID group.
NOTE: The SSD RAID group size can be different from the RAID group size for the HDD RAID groups in a Flash Pool aggregate. Usually, you should ensure that you have only one SSD RAID group for a Flash Pool aggregate, to minimize the number of SSDs that are required for parity.
The reliability and smaller size (faster rebuild times) of performance HDDs can support a RAID
group size of up to 28, if needed.
NetApp recommends that you do not mix 10K RPM and 15K RPM disks in the same aggregate.
Mixing 10K RPM disks with 15K RPM disks in the same aggregate effectively throttles all disks down to
10K RPM. This throttling results in longer times for corrective actions such as RAID reconstructions.
Recommendations about spares vary by configuration and situation. For information about best practices for
working with spares, see Technical Report 3437: Storage Subsystem Resiliency Guide.
Physical Layer
RAID Types
RAID 4: data disks plus one parity disk
RAID-DP (default): data disks plus one parity disk and one dParity disk
RAID 0 (striping): one or many array LUNs; the storage array provides RAID protection, Data ONTAP does not
RAID-DP uses two parity disks to ensure data recoverability even if two disks within the RAID group
fail.
RAID 4 uses one parity disk to ensure data recoverability if one disk within the RAID group fails.
For array LUNs, Data ONTAP stripes data across the array LUNs using RAID 0. The storage arrays, not Data
ONTAP, provide the RAID protection for the array LUNs that they make available to Data ONTAP.
RAID 0 does not use any parity disks; it does not provide data recoverability if any disks in the RAID group
fail.
NOTE: NetApp imposes a five-disk minimum for RAID-DP and a four-disk minimum for RAID 4. This minimum is enforced at the aggregate level, not at the RAID group level.
Physical Layer
Aggregates
Composed of disks or array LUNs in RAID groups (rg)
Types:
[Diagram: storage system with aggregate aggrA containing plex0 with RAID groups rg0 and rg1, plus hot spares in pool0]
When you create a new aggregate, the default is a 64-bit format aggregate.
64-bit aggregates have much larger size limits than 32-bit aggregates, which are limited to 16 TB.
64-bit and 32-bit aggregates can coexist on the same storage system.
NOTE: NetApp recommends using only 64-bit aggregates in clustered Data ONTAP 8.2 and later.
For information about best practices for working with aggregates, see Technical Report 3437: Storage
Subsystem Resiliency Guide.
Physical Layer
Aggregate Types
Aggregate types:
Root Aggregate
Root (aggr0)
Automatically created during
system initialization
Should only contain the node
root volume with log files and
configuration information
Should not contain user data
Data Aggregate
[Diagram: data aggregate with rg0 containing data disks, a parity disk, and a dParity disk]
Logical Layer
FlexVol Volumes
FlexVol volumes:
[Diagram: files and a LUN stored in FlexVol volumes within an aggregate]
[Diagram: FlexVol volumes vol1, vol2, and vol3 in aggregate aggr1, which is built from RAID groups RG1 and RG2]
Because the volume and the aggregate are managed separately, you can create small FlexVol volumes (20
MB or larger) and then increase or decrease the size of the volumes in increments as small as 4 KB.
You can create FlexVol volumes almost instantaneously.
You can guarantee space reservations (full or thick provisioning), so any client user or machine is
guaranteed the ability to write to the full size of the volume.
Blocks are not allocated until they are needed (in other words, you are guaranteeing space in the
aggregates, but not the actual blocks).
If you do not guarantee space reservations (by using thin-provisioning), space is not guaranteed for the
client user or machine.
You can increase or decrease the size of a FlexVol volume without disruption in a few seconds, using only
one command.
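For example, a small volume can be created with a full space guarantee and later resized in one step (the SVM, aggregate, and volume names are placeholders):

cluster1::> volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size 20MB -space-guarantee volume
cluster1::> volume modify -vserver svm1 -volume vol1 -size 1GB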
Logical Layer
Files and LUNs
[Diagram: files and a LUN stored in FlexVol volumes within an aggregate]
Logical Layer
Qtrees and Directories
[Diagram: namespace under /vol containing volume /vol1 (with qtree /tree1) and volume /vol2 (with directory /dir1)]
Quotas: You can limit the size of the data used by a particular project by placing all of that project's files
into a qtree and applying a tree quota to the qtree.
Security style: If you have a project that needs to use NTFS-style security because the members of the
project use Windows files and applications, you can group the data for that project in a qtree and set its
security style to NTFS, without requiring that other projects also use the same security style.
CIFS oplocks settings: If a project uses a database that requires CIFS oplocks to be off, you can set CIFS
oplocks to off for that project's qtree while allowing other projects to retain CIFS oplocks.
Backups (7-Mode only): You can use qtrees to keep your backups more modular, to add flexibility to
backup schedules, or to limit the size of each backup to one tape.
Qtrees are similar to directories in that they partition volumes and can have quotas set. In most cases,
directories can be created on a FlexVol volume that is being shared to clients by the users or administrator.
Use of qtrees, directories, or neither depends on the use case and administrative requirements.
NOTE: NetApp encourages the use of volumes rather than qtrees in clustered Data ONTAP.
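A qtree with its own security style and oplock setting can be created in one step; a sketch with placeholder names:

cluster1::> volume qtree create -vserver svm1 -volume vol1 -qtree tree1 -security-style ntfs -oplock-mode disable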
Lesson 2
inode
When a write comes into system memory, the write is not put straight down to the disk.
The write is written into the NVRAM, which is battery-backed.
Then the write is sent to the NVRAM in the HA partner.
These writes are collected in system memory and NVRAM from different LUNs or different files.
When enough writes are collected, or every 10 seconds (whichever comes first), the WAFL file system
looks at the disk subsystem and chooses a place with enough free space.
WAFL chooses a segment across all the disks in the RAID group.
WAFL puts a group of writes that are from the same LUN or file next to each other.
The writes need to be located together later, when they are read back.
RAID is calculated in memory, which helps to maintain fast write performance. The operation by which the
calculated RAID stripes are written to disk is called a consistency point (CP). At the moment when the CP
occurs, system memory and disks are consistent.
A CP occurs:
[Diagram: LIFs providing client access to volumes]
[Diagram: a write travels from the host to the node that owns the volume (1) and is processed into memory and NVRAM (2a, 2b, 2c).]
Write is acknowledged to the host.
Write is sent to storage in a consistency point (CP).
[Diagram: indirect write path (steps 1 through 5) across the nodes of a cluster.]
Write is simultaneously processed into system memory (3a) and logged in NVRAM (3b) and in the NVRAM mirror of the partner node of the HA pair (3c).
Write is acknowledged to the host.
The write is sent to storage in a CP.
Consistency Points
Certain circumstances trigger a CP:
[Diagram: a new Snapshot copy preserving Block A]
CONSISTENCY POINTS
Certain circumstances trigger a CP:
An NVRAM buffer fills up, and it is time to flush the writes to disk.
A ten-second timer runs out.
A resource is exhausted or hits a predefined scenario that indicates that it is time to flush the writes to
disk.
In the latter case, all other CP types occur. This situation can happen if Snapshot copies are created or the
system halts.
CP
When you terminate services gracefully, the storage system commits all write requests to disk and clears
NVRAM.
When you boot the storage system, the boot process checks whether the shutdown was clean.
In a dirty shutdown:
When power is suddenly removed from the storage system, the NVRAM battery preserves the contents of
the memory.
When you boot the storage system, the NVRAM is signaled to replay its content into system memory.
[Diagram: direct read path from the host to the node that owns the volume.]
[Diagram: indirect read path across the nodes of a cluster.]
1. The read request is sent from the host to the storage system through a NIC or an HBA.
2. The read request is sent to the storage controller that owns the volume.
3. If the read is in system memory, it is sent to the host; otherwise, the system keeps looking for the data.
4. Flash Cache (if it is present) is checked and, if the blocks are present, they are brought into memory and then sent to the host; otherwise, the system keeps looking for the data.
5. The block is read from storage, brought into memory, and then sent to the host.
Due to the asymmetric logical unit access (ALUA) multipath I/O configuration on the host, SAN access is
always direct if the system is configured properly.
Lesson 3
[Diagram: Virtual Storage Tier: Flash Cache on the controller and Flash Pool at the storage level, between the server and storage.]
The Flash Cache feature is controller-based, provides acceleration of random-read operations, and
generally provides the highest performance solution for file services workloads.
The Flash Pool feature is implemented at the disk-shelf level, allowing SSDs and traditional HDDs to be
combined in a single Data ONTAP aggregate. In addition to read caching, Flash Pool technology also
provides write caching and is particularly well-suited for OLTP workloads, which typically have a higher
percentage of write operations.
Both VST technologies improve overall storage performance and efficiency and are simple to deploy and
operate.
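As a sketch, an existing HDD aggregate is typically converted to a Flash Pool aggregate by enabling hybrid mode and then adding SSDs (the aggregate name and disk count are placeholders):

cluster1::> storage aggregate modify -aggregate aggr1 -hybrid-enabled true
cluster1::> storage aggregate add-disks -aggregate aggr1 -disktype SSD -diskcount 4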
[Diagram: Flash Pool combines SSD performance with HDD capacity.]
Improved cost performance with fewer spindles, less rack space, and lower power and cooling
requirements
Highly available storage with a simple administrative model
Improved cost-to-performance and cost-to-capacity ratios compared to those of an SSD and SATA
combination with pure FC SAS
Predictable and better degraded mode operation across controller failures and with takeover and giveback
Automatic, dynamic, policy-based placement of data on appropriate tiers of storage (hard disks or SSDs)
at WAFL-block granularity for either data or system metadata
Read-cached blocks:
Are a cached copy of the
blocks from the hard disk tier
Still exist on the hard disk tier
Write-cached blocks:
Storage Pool
(STORAGEPOOL1)
Allocation Unit
Storage Pool (STORAGEPOOL1)
[Diagram: storage pool STORAGEPOOL1 spanning SSDs shared by Node1 and Node2.]
Storage Pool (STORAGEPOOL1)
[Diagram: allocation units from storage pool STORAGEPOOL1 used as SSD RAID groups: Aggr1 with HDD rg0, HDD rg1, and SSD rg2; Aggr2 with HDD rg0, HDD rg1, HDD rg2, SSD rg3, and SSD rg4. Each RAID group contains data and parity.]
[Diagram: RAID 4 SSD allocation units spread across aggregates Aggr5, Aggr6, and Aggr7.]
SSD storage pools can contain only SSDs. HDDs cannot be added to an SSD storage pool.
SSD storage pools can contain between 2 and 28 SSDs. If an SSD storage pool contains more SSDs than
the maximum RAID 4 group size for SSDs, then that pool cannot be used for a Flash Pool aggregate with
a RAID 4 cache.
All SSDs in an SSD storage pool must be owned by the same HA pair.
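In clustered Data ONTAP 8.3, SSD storage pools are created and their allocation units added to a Flash Pool aggregate with commands like the following (the pool and aggregate names and counts are placeholders):

cluster1::> storage pool create -storage-pool STORAGEPOOL1 -disk-count 4
cluster1::> storage aggregate add-disks -aggregate aggr1 -storage-pool STORAGEPOOL1 -allocation-units 1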
SyncMirror technology
Physical SSDs
The loss of one SSD affects all RAID groups that include a partition of that SSD. In this situation, every
Flash Pool aggregate that has cache allocated from the SSD storage pool that contains the affected SSD
has one or more RAID groups in reconstruction.
If the Flash Pool cache is not properly sized, then contention for the cache can exist between the Flash Pool aggregates that share that cache. This risk can be mitigated through proper cache sizing and quality-of-service (QoS) controls.
Storage pools are another storage object to manage. In addition, when multiple aggregates share a storage
resource, customers must take that into account whenever they operate on the shared resource. For
example, suppose that the customer wants to destroy an aggregate to free up its storage and move that
storage to a different node. The customer cannot move the SSDs in the storage pool until the customer
destroys every aggregate to which storage was allocated from that storage pool, as well as destroying the
storage pool itself.
Data ONTAP 8.3 SSD partitioning for Flash Pool cache support has a few limitations in a clustered
environment:
FLASH POOL
What is it?
Storage-level, RAID-protected cache
(specific to aggregates)
A plug-and-play device
What does it do?
With random-overwrite-heavy
workloads; for example, OLTP
6290: 24 tebibytes (TiB) / 96 TiB
?: 18 TiB / 72 TiB
?: 12 TiB / 48 TiB
8020: 6 TiB / 24 TiB
6210, 3250: 4 TiB / 16 TiB
3270: 2 TiB / 8 TiB
3220: 1.6 TiB / 6.4 TiB
3240: 1.2 TiB / 3.2 TiB
2240, 2220: 800 GiB
Lesson 4
1. Add Disks (unowned disks)
2. Assign Ownership (spare disks)
3. Create Aggregate
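These steps map to clustered Data ONTAP commands like the following (the node, disk, and aggregate names are placeholders):

cluster1::> storage disk show -container-type unassigned
cluster1::> storage disk assign -disk 1.0.11 -owner cluster1-01
cluster1::> storage aggregate create -aggregate aggr1 -node cluster1-01 -diskcount 5 -raidtype raid_dp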
Identifying Disks
The shelf ID and bay designate the specific shelf and bay
number where the disk is located.
[Diagram: DS4486 shelf with bays 0 through 23, each holding a 3.0 TB disk.]
Before 8.3:
IDENTIFYING DISKS
Disks are numbered in all storage systems. Disk numbering enables you to interpret messages that are displayed on your screen, such as command output or error messages, and to quickly locate a disk that is associated with a displayed message.
With Data ONTAP 8.2.x and earlier, disks are numbered based on a combination of their node name, slot
number, and port number, and either the loop ID for FC-AL-attached shelves or the shelf ID and bay number
for SAS-attached shelves.
With Data ONTAP 8.3 and later, when a node is part of a functioning cluster, the disk name is independent of
the nodes to which the disk is physically connected and from which the client accesses the disk.
Data ONTAP assigns the stack ID. Stack IDs are unique across the cluster and they start with 1.
The shelf ID is set on the storage shelf when the shelf is added to the stack or loop. If a shelf ID conflict exists
for SAS shelves, then the shelf ID is replaced with the shelf serial number in the drive name.
The bay is the position of the disk within its shelf. Clients can find the bay map in the administration guide for
the storage shelf. The position is used only for multidisk carrier storage shelves. For carriers that house two
disks, the position can be 1 or 2.
During system boot, before the node has joined the cluster or if certain cluster components become
unavailable, drive names revert to the classic format, based on physical connectivity.
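Combined, these identifiers form the post-8.3 drive name in the format stack-id.shelf-id.bay. For example (the values are illustrative), drive 1.2.4 sits in stack 1, shelf 2, bay 4, and can be displayed with:

cluster1::> storage disk show -disk 1.2.4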
Disk Ownership
A disk is not usable until it is assigned to a controller
Disk ownership determines which controller owns a disk:
Ownership is automatically assigned (default)
Ownership can be manually assigned or changed
Software disk ownership is made persistent by writing the ownership information onto the
disk
DISK OWNERSHIP
Disks are not usable in Data ONTAP until ownership is assigned to a controller. Fortunately, Data ONTAP
automatically assigns disks to a controller in the initial setup and checks occasionally to see whether new disks
have been added. When a disk is assigned, the disk ownership information is written to the disk so that the
assignment remains persistent.
Ownership can be modified or removed. A disk's data contents are not destroyed when the disk is marked as
unowned; only the disk's ownership information is erased. Unowned disks that reside on an FC-AL loop where
owned disks exist have ownership information applied automatically, to guarantee that all disks on the
same loop have the same owner.
Automatic ownership assignment is enabled by default and is invoked at the following times:
The automatic ownership assignment can also be manually initiated by using the disk assign command
with the auto parameter.
If your system is not configured to assign ownership automatically, or if your system contains array LUNs,
you must assign ownership manually.
NOTE: It is a NetApp best practice to unassign only spare disks.
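As a sketch, automatic assignment can be disabled and ownership assigned or removed manually in clustered Data ONTAP (the node and disk names are placeholders):

cluster1::> storage disk option modify -node cluster1-01 -autoassign off
cluster1::> storage disk assign -disk 1.0.16 -owner cluster1-01
cluster1::> storage disk removeowner -disk 1.0.16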
Spare Disks
Spare disks are used to:
Increase aggregate capacity
Replace failed disks
SPARE DISKS
You can add spare disks to an aggregate to increase its capacity or to replace a failed disk. If the spare is
larger than the other data disks, it becomes the parity disk, but it does not use the excess capacity unless
another disk of similar size is added; the second disk of the larger size has full use of its additional capacity.
Zeroing used disks:
After you assign ownership to a disk, you can add that disk to an aggregate on the storage system that owns it,
or leave it as a spare disk on that storage system. If the disk has been used previously in another aggregate,
you should zero the disk to reduce delays when the disk is used.
Zeroing disks in Data ONTAP 7-Mode: use the disk zero spares command.
Zeroing disks in clustered Data ONTAP: use the storage disk zerospares command.
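For example, the current spares can be listed first and then zeroed; this is a sketch, and field names can vary by release:
c1::> storage disk show -container-type spare
c1::> storage disk zerospares
In 7-Mode, system> disk zero spares zeroes all non-zeroed spare disks on the controller.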
Spare disk selection when replacing a failed disk:
Exact match: the preferred replacement
Larger size: the excess capacity is unused
Different speed: performance can be affected
No replacement available: the RAID group runs in degraded mode
If the available hot spares are not the correct size, Data ONTAP uses one that is the next size up, if there
is one.
The replacement disk is downsized to match the size of the disk it is replacing; the extra capacity is not
available.
If the available hot spares are not the correct speed, Data ONTAP uses one that is a different speed.
Using disks with different speeds within the same aggregate is not optimal. Replacing a disk with a
slower disk can cause performance degradation, and replacing a disk with a faster disk is not cost-effective.
If no spare exists with an equivalent disk type or checksum type, the RAID group that contains the failed disk
goes into degraded mode; Data ONTAP does not combine different disk types or checksum types within a
RAID group.
NOTE: Degraded mode is intended to be a temporary condition until an appropriate spare disk can be added.
Do not run in degraded mode for more than 24 hours.
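A hedged sketch of checking for this condition in clustered Data ONTAP (field and option names can vary by release): the raidstatus field reports degraded RAID groups, and failed disks can be listed directly:
c1::> storage aggregate show -fields raidstatus
c1::> storage disk show -broken
If a RAID group reports degraded, add or assign an appropriate spare so that reconstruction can begin.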
Aggregates
Aggregate: Pool of storage
Plex: Used for mirrored aggregates
[Diagram: an aggregate contains plex0, which contains RAID groups rg0 and rg1.]
AGGREGATES
Aggregates provide storage to volumes. They are composed of RAID groups of disks or array LUNs, but not
both. The Data ONTAP operating system organizes the disks or array LUNs in an aggregate into one or more
RAID groups. RAID groups are then collected into one or two plexes, depending on whether RAID-level
mirroring (SyncMirror technology) is in use.
The Data ONTAP storage architecture contains:
Aggregates: Each aggregate contains a plex or plexes, a RAID configuration, and a set of assigned
physical disks to provide storage to the volumes that the aggregate contains.
Plexes: Each plex is associated with an aggregate and contains RAID groups. Typically, an aggregate has
only one plex. Aggregates that use SyncMirror technology have two plexes (plex0 and plex1); plex1
contains a mirror of the plex0 data.
RAID groups: Each RAID group contains physical disks and is associated with a plex. A RAID group
has either a RAID 4 or RAID-DP configuration.
Disks: Disks play different roles at different times, depending on the state of the disk.
Disk states:
Data
Parity
Double-parity
Spare
Broken
Unowned
Uninitialized (not zeroed)
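These states can be observed from the clustered Data ONTAP CLI; the container-type field reports how each disk is currently used. A sketch (output columns vary by release):
c1::> storage disk show -fields container-type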
Flash Pool
A Flash Pool aggregate combines HDD RAID groups and an SSD RAID group, each with its own data disks, a parity disk, and a dParity disk.
The SSD RAID group size and type can be different from the HDD RAID group size.
All RAID groups in an aggregate should have the same number of disks. If this guideline is impossible to
follow, any RAID group with fewer disks should have only one disk less than the largest RAID group.
The recommended range of HDD RAID group size is between 12 and 20 disks.
The reliability of performance disks can support a RAID group size of up to 28 disks, if needed.
The recommended range of SSD RAID group size is between 20 and 28. The reason for a higher SSD
recommendation is to minimize the number of SSDs required for parity.
If you can satisfy the first guideline with multiple RAID group sizes, you should choose the larger size.
Guidelines for SSD RAID groups in Flash Pool aggregates: The SSD RAID group size can be different
from the RAID group size for the HDD RAID groups in a Flash Pool aggregate. Usually, you should ensure
that you have only one SSD RAID group for a Flash Pool aggregate, to minimize the number of SSDs
required for parity.
For information about best practices for working with aggregates, see Technical Report 3437: Storage
Subsystem Resiliency Guide.
To see the physical and usable capacity for a specific disk, see the Hardware Universe at hwu.netapp.com.
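As a sketch of how a Flash Pool aggregate is typically enabled in clustered Data ONTAP (the aggregate name and disk count are illustrative):
c1::> storage aggregate modify -aggregate aggr1 -hybrid-enabled true
c1::> storage aggregate add-disks -aggregate aggr1 -disktype SSD -diskcount 6
The first command marks the existing HDD aggregate as eligible for SSDs; the second adds the SSD RAID group.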
[Slide: FAS2554 and FAS2552 layout — each node has a dedicated root aggregate, user data aggregates, and spares across the 12 internal disks; storage efficiency = data disks ÷ total disks, for example 2 data HDDs of 12 HDDs total = 17%.]
[Slide: FAS2520 with root-data partitioning — each of the 12 internal drives is divided into a root partition and a data partition; root partitions hold the node root aggregates, and data partitions are used as data, parity, and spare partitions in the user aggregates.]
Array LUNs
Virtual disks that are created for use with Data ONTAP-v
Disk types that are unavailable as internal disks: ATA, FC-AL, and mSATA
Aggregate Formats
64-bit aggregate:
Maximum size: dependent on controller model and Data ONTAP version
Is the default
32-bit aggregate:
Maximum size: 16 TB
Not supported in Data ONTAP 8.3 and later
AGGREGATE FORMATS
Aggregates have two formats, with different maximum sizes:
For more information about 64-bit aggregates, see Technical Report 3786, A Thorough Introduction to 64-Bit
Aggregates.
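In clustered Data ONTAP, the format of existing aggregates can be checked with the block-type field. A sketch (the field name assumes a recent release):
c1::> storage aggregate show -fields block-type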
Lesson 5
Data SVM
With FlexVol Volumes
[Diagram: a data SVM contains FlexVol volumes, data LIFs that provide client access over NFS, CIFS, iSCSI, and FC, and a management LIF for the SVM administrator.]
FlexVol Volumes
Overview
FlexVol Volumes
Types
NAS: Contain file systems for user data
SAN: Contain LUNs
[Diagram: FlexVol volumes in an aggregate; a NAS volume contains file data, and a SAN volume contains a LUN.]
FlexVol volumes are used:
As node root volumes to hold state data for the node and for the cluster
As the root of an SVM namespace
To store user data within an SVM
Data ONTAP 7-Mode:
Create: volumes are created under /vol
Resize: system> vol size vol1 [[+|-]<size>[k|m|g|t]]
Destroy: system> vol destroy vol1 (the volume must be offline)
Clustered Data ONTAP:
Create: system> volume create -vserver svm1 -volume vol1 -aggregate aggr1 -size <size>[KB|MB|GB|TB]
Resize: system> volume modify -vserver svm1 -volume vol1 -size [+|-]10gb
Destroy: system> volume delete -vserver svm1 -volume vol1 (the volume must be offline)
References
Data ONTAP 8.2 System Administration Guides
REFERENCES
Exercise
Module 4: Storage
Management
EXERCISE
Refer to your exercise guide.
Module 5
Network Management
Networks
[Diagram: a four-node cluster with three networks — the cluster interconnect (one or two cluster ports per switch, with the cluster switches joined by two, four, or eight inter-switch links (ISLs)), the management network (two management ports per node), and the data network (Ethernet, FC, or converged). Networking is redundant throughout.]
NETWORKS
Networking is where Data ONTAP operating in 7-Mode and clustered Data ONTAP differ most. Because a
clustered Data ONTAP system is essentially a cluster of high-availability (HA) pairs, you need a cluster
network or cluster interconnect for all the nodes to communicate with each other. You should always keep
this principle in mind: If a node cannot see the cluster interconnect, it is not part of the cluster. Therefore, the
cluster interconnect requires adequate bandwidth and resiliency.
This graphic shows a four-node cluster and three distinct networks. 7-Mode and clustered Data ONTAP
require both data and management connectivity, which can coexist on the same data network.
In multi-node configurations, clustered Data ONTAP also requires a cluster interconnect for cluster traffic. In
a two-node configuration, the cluster interconnect can be as simple as cabling the two nodes or using switches
if expansion is desired. In clusters of more than two nodes, switches are required. Single-node clusters do not
require a cluster interconnect if the environment does not require high availability and nondisruptive
operations (NDO).
Two cluster connections to each node are typically required for redundancy and improved cluster traffic flow.
For the larger clusters that use higher-end platforms (FAS8040, FAS8060, and FAS8080) that are running
clustered Data ONTAP 8.2.1, four cluster interconnects are the default. Optionally, a FAS8080 can be
configured to use 6 cluster interconnect ports with expansion 10-gigabit Ethernet network interface cards (10GbE NICs).
For proper configuration of the NetApp CN1601 and CN1610 switches, refer to the CN1601 and CN1610
Switch Setup and Configuration Guide.
Lesson 1
Network Ports
[Diagram: network port hierarchy — physical ports e2a and e3a are combined into interface group a0a; VLANs a0a-50 and a0a-80 are created on the interface group; LIFs smv1-mgmt and smv1-data1 are virtual interfaces on top of the VLANs.]
Port Types
Physical port
Ethernet
FC
Unified Target Adapter (UTA)
UTA is a 10-GbE port
UTA2 is configured as either 10-GbE or 16-Gbps FC
Virtual port
Interface group (ifgrp)
Virtual LAN (VLAN)
PORT TYPES
Port types can be either physical or virtual.
Physical:
Ethernet port: 1-Gb or 10-Gb Ethernet (10-GbE) ports that can be used in NFS, CIFS, and iSCSI
environments
FC port: 4-Gbps, 8-Gbps, or 16-Gbps port that can be used as a target in FC SAN environments. It can be
configured as an initiator for disk shelves or tape drives.
Unified Target Adapter (UTA) port: 10-GbE port that can be used in NFS, CIFS, iSCSI, and FCoE
environments
Unified Target Adapter 2 (UTA2) port: Configured as either a 10-GbE Ethernet or 16-Gbps FC port
10-Gb ports can be used in NFS, CIFS, iSCSI, and FCoE environments
16-Gbps FC ports can be used as targets in FC SAN environments
NOTE: UTA2 FC ports are not supported with DS14 disk shelves or FC tape drives.
Virtual:
Interface group: An interface group implements link aggregation by providing a mechanism to group
together multiple network interfaces (links) into one logical interface (aggregate). After an interface group
is created, it is indistinguishable from a physical network interface.
VLAN: Traffic from multiple VLANs can traverse a link that interconnects two switches by using VLAN
tagging. A VLAN tag is a unique identifier that indicates the VLAN to which a frame belongs. A VLAN
tag is included in the header of every frame that is sent by an end-station on a VLAN. On receiving a
tagged frame, a switch identifies the VLAN by inspecting the tag, then forwards the frame to the
destination in the identified VLAN.
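The physical and virtual ports on each node can be listed from the clustered Data ONTAP CLI; for example:
c1::> network port show
The output includes each port's node, link status, and MTU and, in Data ONTAP 8.3, its IPspace and broadcast domain.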
SAS
ACP
Console port
(also SP)
4 x Unified Target Adapter 2 (UTA2) ports can be configured as either 10-GbE or 16-Gbps FC for data
e0e/0e and e0f/0f, and e0g/0g and e0h/0h, are port pairs
Set port mode command is ucadmin (7-Mode and clustered Data ONTAP)
1 x private management port that is used as an alternate control path (ACP) for SAS shelves
1 x console port (can be configured for SP)
Supported: two cluster interconnects (e0a and e0c) and two data (e0b and e0d)
Recommended: four cluster interconnects (switched clusters only)
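As noted above, ucadmin sets the UTA2 port personality. A hedged sketch for clustered Data ONTAP (the node and adapter names are illustrative; the adapter must be offline, and a reboot is required for the new mode to take effect):
c1::> ucadmin show
c1::> ucadmin modify -node c1-01 -adapter 0e -mode cna
In 7-Mode, the equivalent is ucadmin modify -m cna 0e.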
7-Mode configuration:
Same as clustered Data ONTAP, except that 4 x 10-GbE ports are used for data.
Open slots can be used for Flash Cache, FC-VI, UTA2, 10-GbE, 4-Gbps, 8-Gbps, or 16-Gbps FC, or GbE cards.
In Data ONTAP 7-Mode:
When the adapter type is initiator, use the storage disable adapter command to bring the
adapter offline.
When the adapter type is target, use the fcp config command to bring the adapter offline.
In clustered Data ONTAP:
When the adapter type is initiator, use the run local storage disable adapter command to
bring the adapter offline.
When the adapter type is target, use the network fcp adapter modify command to bring the
adapter offline.
For more information about configuring FC ports, refer to the Data ONTAP SAN Administration Guide for
your release, or attend the NetApp University SAN Implementation course.
Interface Groups
Interface groups enable link aggregation of one or more Ethernet interfaces:
Single-mode (active-standby)
Multimode:
Static (active-active)
Dynamic (LACP)
[Diagram: 1-GbE and 10-GbE interfaces combined into interface groups.]
INTERFACE GROUPS
The following network terms are described as they are implemented within Data ONTAP:
Be aware that different vendors refer to interface groups by the following terms:
Virtual aggregations
Link aggregations
Trunks
EtherChannel
In single-mode link aggregation, one interface is active, and the other interface is inactive (on standby).
In multimode, all links in the link aggregation are active.
A dynamic multimode interface group can detect loss of link status and data flow.
Multimode requires a compatible switch to implement configuration.
Data ONTAP link aggregation complies with the IEEE 802.3ad standard: static aggregation for static
multimode interface groups, and Link Aggregation Control Protocol (LACP) for dynamic multimode
interface groups.
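For example, a dynamic multimode (LACP) interface group can be created in clustered Data ONTAP as follows (the node, ifgrp, and port names are illustrative):
c1::> network port ifgrp create -node c1-01 -ifgrp a0a -distr-func ip -mode multimode_lacp
c1::> network port ifgrp add-port -node c1-01 -ifgrp a0a -port e2a
c1::> network port ifgrp add-port -node c1-01 -ifgrp a0a -port e3a
The -distr-func parameter selects the load-balancing method (for example, ip, mac, or port).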
The interface group name must be in an a<number><letter> format.
Interface groups that are created by using the ifgrp create and ifgrp favor commands are not
persistent across reboots unless they are added to the /etc/rc file or unless you use System Manager.
For more information about load balancing, please refer to TR-4182: Ethernet Storage Best Practices for
Clustered Data ONTAP Configurations.
There are two methods to achieve path redundancy when using iSCSI in clustered Data ONTAP: using
interface groups, or configuring hosts to use multipath I/O over multiple distinct physical links. Because
the use of multipath I/O is required, interface groups might add very little value.
For more information, refer to TR-4182: Ethernet Storage Best Practices for Clustered Data ONTAP
Configurations.
VLANs
[Diagram: two data switches, a router, and a management switch; clients on VLAN70, management on VLAN170, tenant A on VLAN171, and tenant B on VLAN172.]
VLANS
A port or interface group can be subdivided into multiple VLANs. Each VLAN has a unique tag that is
communicated in the header of every packet. The switch must be configured to support VLANs and the tags
that are in use. In Data ONTAP, a VLAN's ID is configured into the name. For example, VLAN "e0a-70" is a
VLAN with tag 70 configured on physical port e0a. VLANs that share a base port can belong to the same or
different IPspaces, and it follows that the base port could be in a different IPspace than its VLANs.
Different configurations of LIFs, failover groups, VLANs, and interface groups are possible in a clustered
Data ONTAP environment. The best practice recommendation is to use a configuration that takes advantage
of the cluster-wide failover capabilities of failover groups, the port aggregation functionality of interface
groups, and the security aspects of VLANs.
For more examples, refer to the Clustered Data ONTAP 8.2 Network Management Guide and TR-4182:
Ethernet Storage Best Practices for Clustered Data ONTAP Configurations.
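For example, in clustered Data ONTAP, a VLAN with tag 70 can be created on port e0a as follows (the node name is illustrative):
c1::> network port vlan create -node c1-01 -vlan-name e0a-70
c1::> network port vlan show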
Creating VLANs
Data ONTAP 7-Mode
system> vlan create e4 10 20 30
system> ifconfig -a
e0a: flags=0x80e08866<BROADCAST,RUNNING,MULTICAST,VLAN> mtu 1500
ether 00:0c:29:56:54:7e (auto-1000t-fd-up) flowcontrol full
CREATING VLANS
You can create a VLAN for ease of administration, confinement of broadcast domains, reduced network
traffic, and enforcement of security policies.
7-Mode:
In 7-Mode, you can use the vlan create command to include an interface in one or more VLANs, as
specified by the VLAN identifier, enable VLAN tagging, and optionally enable GVRP (enabled with the -g
option). VLANs that you create by using the vlan create command are not persistent across reboots
unless you add them to the /etc/rc file or you use System Manager.
Clustered Data ONTAP:
In clustered Data ONTAP, VLANs are handled in a similar way.
[Diagram: VLANs can be created directly on physical ports or on interface groups that are built from physical ports.]
Lesson 2
IPspaces
LESSON 2: IPSPACES
The MultiStore feature for Data ONTAP software was created to enable service providers to partition the
resources of a single storage system so that it appears as multiple virtual storage systems on a network.
The IPspace feature was created for MultiStore to enable a single storage system to be accessed by clients
from more than one disconnected network, even if those clients are using the same IP address.
Clustered Data ONTAP has had a feature similar to MultiStore virtual storage systems, and IPspaces were
introduced to clustered Data ONTAP in version 8.3.
Conceptually, IPspaces in 7-Mode and clustered Data ONTAP are similar, but the configuration is very
different. In this lesson, only clustered Data ONTAP 8.3 examples are discussed. For information on how to
configure IPspaces for MultiStore environments in 7-Mode, refer to the MultiStore Management Guide for
7-Mode for the version of Data ONTAP that you are configuring.
IPspaces
Overview
[Diagram: a storage service provider (SSP) point of presence with three IPspaces — Default (SVM 1 and SVM 2; 192.168.0.5 in the 192.168.0.0 network), Company A (SVM_A-1 and SVM_A-2; 10.1.2.5 in the 10.0.0.0 network), and Company B (SVM_B-1 and SVM_B-2; also 10.1.2.5 in the 10.0.0.0 network). Each IPspace has its own routing table.]
IPSPACES: OVERVIEW
The IPspace feature enables a storage system or cluster to be accessed by clients from more than one
disconnected network, even if those clients are using the same IP address.
An IPspace defines a distinct IP address space in which virtual storage systems can participate. IP addresses
that are defined for an IPspace are applicable only within that IPspace. A distinct routing table is maintained
for each IPspace. No cross-IPspace traffic routing happens. Each IPspace has a unique loopback interface that
is assigned to it. The loopback traffic on each IPspace is completely isolated from the loopback traffic on
other IPspaces.
Example:
A storage service provider (SSP) needs to connect customers of companies A and B to a storage system on
the SSP's premises. The SSP creates storage virtual machines (SVMs) on the cluster, one per customer, and
provides a dedicated network path from one SVM to company A's network and from the other SVM to
company B's network.
This deployment should work if both companies use nonprivate IP address ranges. However, because the
companies use the same private addresses, the SVMs on the cluster at the SSP location have conflicting IP
addresses.
To overcome this problem, two IPspaces are defined on the cluster, one per company. Because a distinct
routing table is maintained for each IPspace, and no cross-IPspace traffic is routed, the data for each company
is securely routed to its respective network, even if the two SVMs are configured in the 10.0.0.0 address
space.
Additionally, the IP addresses that are referred to by the various configuration files (such as the /etc/hosts file,
the /etc/hosts.equiv file, and the /etc/rc file) are relative to that IPspace. Therefore, IPspaces enable the SSP
to use the same IP address for the configuration and authentication data for both SVMs, without conflict.
IPspaces
Defaults
Two IPspaces and two system SVMs are created automatically when the
cluster is initialized:
Default IPspace
A container for ports, subnets, and SVMs that serve data for configurations that
do not need separate IPspaces for clients
Also contains the cluster management and node management ports
For the Default IPspace, a system SVM named after the cluster is created
Cluster IPspace
IPSPACES: DEFAULTS
Two special IPspaces are created by default when the cluster is first created, and a special SVM is created for
each IPspace.
Default IPspace
This IPspace is a container for ports, subnets, and SVMs that serve data. If your configuration does not
need separate IPspaces for clients, all SVMs can be created in this IPspace. This IPspace also contains the
cluster management and node management ports.
Cluster IPspace
This IPspace contains all cluster ports from all nodes in the cluster. It is created automatically when the
cluster is created. It provides connectivity to the internal private cluster network. As additional nodes join
the cluster, cluster ports from those nodes are added to the Cluster IPspace.
A system SVM exists for each IPspace. When you create an IPspace, a default system SVM of the same name
is created:
The system SVM for the Cluster IPspace carries cluster traffic between nodes of a cluster on the
internal private cluster network. It is managed by the cluster administrator, and it has the name Cluster.
The system SVM for the Default IPspace carries management traffic for the cluster and nodes,
including the intercluster traffic between clusters. It is managed by the cluster administrator, and it uses
the same name as the cluster.
The system SVM for a custom IPspace that you create carries management traffic for that SVM. It is
managed by the cluster administrator, and it uses the same name as the IPspace.
One or more SVMs for clients can exist in an IPspace. Each client SVM has its own data volumes and
configurations, and it is administered independently of other SVMs.
IPspaces
Managing IPspaces
You can create IPspaces when you need your SVMs to have their own
secure storage, administration, and routing:
c1::> network ipspace create -ipspace IPspace_A
c1::> network ipspace create -ipspace IPspace_B
NOTE: A system SVM with the same name as the IPspace name is automatically
created.
If required, you can change the name of an existing IPspace (except for the two system-created IPspaces)
by using the network ipspace rename command.
If you no longer need an IPspace, you can delete it by using the network ipspace delete
command.
NOTE: There must be no broadcast domains, network interfaces, or SVMs associated with the IPspace you
want to delete. The system-defined Default and Cluster IPspaces cannot be deleted.
You can display the list of IPspaces that exist in a cluster, and you can view the SVMs, broadcast domains,
and ports that are assigned to each IPspace.
After you create an IPspace and before you create its SVMs, you must create a broadcast domain that defines
the ports that will be part of the IPspace.
IPspaces
Verifying IPspaces
To view IPspaces:
c1::> network ipspace show
IPspace    Vserver List                 Broadcast Domains
---------- ---------------------------- -----------------
Cluster    Cluster                      Cluster
Default    svm1, svm2, c1               Default
IPspace_A  SVM_A-1, SVM_A-2, IPspace_A  bcast_A
IPspace_B  SVM_B-1, SVM_B-2, IPspace_B  bcast_B
NOTE: The IPspace_A and IPspace_B SVMs are system SVMs. The output also shows the broadcast domain
and port assignments for each IPspace.
Broadcast Domains
Overview
Broadcast domains enable you to group network ports that belong to the same
layer 2 network
The ports in the group can then be used by an SVM for data or management traffic
[Diagram: the Default broadcast domain spans all nodes; the Company A and Company B broadcast domains each span one HA pair.]
The Default broadcast domain, which was created automatically during cluster initialization, has been
configured to contain a port from each node in the cluster.
The Company A broadcast domain has been created manually, and it contains one port from each of the
nodes in the first HA pair.
The Company B broadcast domain has been created manually, and it contains one port from each of the
nodes in the second HA pair.
The Cluster broadcast domain is also created automatically during cluster initialization, but it is not
shown on this slide.
The two broadcast domains were created by the system administrator specifically to support the customer
IPspaces.
Broadcast Domains
Defaults
Cluster broadcast domain: its ports are used for cluster communication and include all cluster ports
from all nodes in the cluster
The Default broadcast domain contains ports that are in the Default IPspace. These ports are used
primarily to serve data. Cluster management and node management ports are also in this broadcast
domain.
The Cluster broadcast domain contains ports that are in the Cluster IPspace. These ports are used for
cluster communication and include all cluster ports from all nodes in the cluster.
If you have created unique IPspaces to separate client traffic, you need to create a broadcast domain in each of
those IPspaces. If your cluster does not require separate IPspaces, then all broadcast domains, and all ports,
reside in the system-created Default IPspace.
Broadcast Domains
Managing Broadcast Domains
Broadcast domains that you create can be renamed or deleted; however, the system-created
Cluster and Default broadcast domains cannot be renamed or deleted.
To make system configuration easier, when a broadcast domain is created, a failover group
with the same name and the same ports is created automatically. All failover groups related
to the broadcast domain are removed when you delete the broadcast domain.
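A minimal sketch of creating a broadcast domain with the names used in this module (the port assignments c1-01:e0f and c1-02:e0f are assumptions for illustration):

c1::> network port broadcast-domain create -ipspace IPspace_A
-broadcast-domain bcast_A -mtu 1500 -ports c1-01:e0f,c1-02:e0f

A failover group named bcast_A, containing the same ports, is created automatically.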
Broadcast Domains
Verifying Broadcast Domains
1. broadcast domain
2. SVM
3. port
4. root volume
Subnets
Overview
[Diagram: a subnet defined within each of the Default, Company A, and Company B broadcast domains.]
SUBNETS: OVERVIEW
Subnets enable you to allocate specific blocks, or pools, of IP addresses for your Data ONTAP network
configuration. This allocation enables you to create LIFs more easily when you use the network
interface create command, by specifying a subnet name instead of having to specify IP address and
network mask values.
A subnet is created within a broadcast domain, and it contains a pool of IP addresses that belong to the same
layer 3 subnet. IP addresses in a subnet are allocated to ports in the broadcast domain when LIFs are created.
When LIFs are removed, the IP addresses are returned to the subnet pool and are available for future LIFs.
It is recommended that you use subnets because they make the management of IP addresses much easier, and
they make the creation of LIFs a simpler process. Additionally, if you specify a gateway when defining a
subnet, a default route to that gateway is added automatically to the SVM when a LIF is created using that
subnet.
Subnets
Managing Subnets
To create a subnet:
c1::> network subnet create -subnet-name subnet_A -broadcast-domain bcast_A
-ipspace IPspace_A -subnet 10.1.2.0/24 -gateway 10.1.2.1
-ip-ranges 10.1.2.91-10.1.2.94 -force-update-lif-associations true
NOTE: The broadcast domain and IPspace where you plan to add the subnet
must already exist, and subnet names must be unique within an IPspace.
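With the subnet in place, a LIF can draw its IP address and network mask from it; a sketch that matches the SVM_A-1 LIF shown later in this lesson:

c1::> network interface create -vserver SVM_A-1 -lif SVM_A-1_lif2
-role data -data-protocol nfs -home-node c1-02 -home-port e0f
-subnet-name subnet_A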
Subnets
Subnets and Gateways
Subnets
Verifying Subnets
c1::> network subnet show
Subnet                     Broadcast                 Avail/
Name      Subnet           Domain    Gateway         Total  Ranges
--------- ---------------- --------- --------------- ------ ---------------------------
Default   192.168.0.0/24   Default   192.168.0.1     10/50  192.168.0.101-192.168.0.150
subnet_A  10.1.2.0/24      bcast_A   10.1.2.1        4/4    10.1.2.91-10.1.2.94
subnet_B  10.1.2.0/24      bcast_B   10.1.2.1        4/8    10.1.2.91-10.1.2.98
Lesson 3
Network Interfaces
Network Interfaces
Overview
[Diagram: the network stack. Logical: LIFs svm1-mgmt and svm1-data1. Virtual: VLANs a0a-50 and a0a-80 on interface group a0a. Physical: ports e2a and e3a.]
Logical Interfaces
Overview
Logical Interfaces
Managing LIFs
role
All IP-based LIFs (except cluster LIFs) are compatible with physical
ports, interface groups, and VLANs
Cluster LIFs can only be on physical ports
The underlying physical network port must be configured to the administrative up status.
If you are planning to use a subnet name to allocate the IP address and network mask value for a LIF, the
subnet must already exist.
You can create IPv4 and IPv6 LIFs on the same network port.
You cannot assign both NAS and SAN protocols to the same LIF.
The supported protocols are CIFS, NFS, FlexCache, iSCSI, and FC.
The data-protocol parameter must be specified when the LIF is created, and it cannot be modified
later.
If you specify none as the value for the data-protocol parameter, the LIF does not support any data
protocol.
The home-node parameter is the node to which the LIF returns when the network interface
revert command is run on the LIF.
The home-port parameter is the port or interface group to which the LIF returns when the network
interface revert command is run on the LIF.
All the name mapping and host-name resolution services, such as DNS, Network Information
Service (NIS), Lightweight Directory Access Protocol (LDAP), and Active Directory, must be
reachable from the data, cluster-management, and node-management LIFs of the cluster.
A cluster LIF should not be on the same subnet as a management LIF or a data LIF.
When using a subnet to supply the IP address and network mask, if the subnet was defined with a
gateway, a default route to that gateway is added automatically to the SVM when a LIF is created using
that subnet.
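The home-node and home-port parameters come into play when a LIF has been migrated away from home; a sketch using the LIF and node names from this module's examples:

c1::> network interface migrate -vserver SVM_A-1 -lif SVM_A-1_lif2
-destination-node c1-01 -destination-port e0f
c1::> network interface revert -vserver SVM_A-1 -lif SVM_A-1_lif2

The revert command returns the LIF to its home node and home port.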
Logical Interfaces
Nodes, Ports, and LIFs Example
Examine on
next slide
Node 1 has two data LIFs that are assigned to the first port, one for each SVM. The IP address is also
listed.
Not shown: There are LIFs on the first port of the other three nodes, one for each SVM.
Node 2 has two data LIFs that are assigned to the second port, one for each SVM that is assigned to
Company A.
Not shown: There are two LIFs on the first node also. It is recommended to put LIFs on both nodes of an
HA pair.
Node 4 has two data LIFs that are assigned to the second port, one for each SVM that is assigned to
Company B.
Not shown: There are two LIFs on the third node also. It is recommended to put LIFs on both nodes of an
HA pair.
In a NAS environment, the name is not the actual host name that is associated with the IP address. The name
is an internal name that can be used as the host name for the IP address in the DNS. In a NAS environment,
all these IP addresses can share one host name, such that a DNS round robin picks an IP address every time
that the host name is used; for example, for an NFS mount command.
This graphic shows how an environment can randomly distribute client connections across a cluster while the
cluster looks to every user and every client as if there is only one storage host.
Logical Interfaces
LIF Attributes
To view a LIF:
c1::> network interface show -vserver SVM_A-1 -lif SVM_A-1_lif2
(Output edited for readability.)
Vserver Name: SVM_A-1
Logical Interface Name: SVM_A-1_lif2
Role: data
Data Protocol: nfs
Home Node: c1-02
Home Port: e0f
Current Node: c1-02
Current Port: e0f
...
Is Home: true
Network Address: 10.1.2.92
Netmask: 255.255.255.0
...
Subnet Name: subnet_A
...
IPspace of LIF: IPspace_A
Network Components
NETWORK COMPONENTS
This graphic shows clustered Data ONTAP 8.3 from a data network component perspective. Clustered Data
ONTAP requires data and management connectivity, which could coexist on the same network. In multinode
configurations, clustered Data ONTAP also requires a cluster interconnect for cluster traffic.
Two cluster connections to each node are typically required for redundancy and improved cluster traffic flow.
Lesson 4
Nondisruptive LIF
Configuration
Data ONTAP 8.0: Failover rules (network interface failover) were the primary way to
control failover based on port role and priority.
Data ONTAP 8.1: Failover groups (network interface failover-groups) became the
primary method to control failover. Failover rules were deprecated.
Data ONTAP 8.3: Failover groups and failover policies were changed to work with broadcast domains.
There are fewer failover groups and more failover policies.
Conceptually, LIF failover is similar in the different versions of clustered Data ONTAP, but the configuration
is very different. This lesson discusses only examples of clustered Data ONTAP 8.3. For more information
about how to configure LIF failover in older versions of clustered Data ONTAP, refer to the Network
Management Guide for the version of clustered Data ONTAP that you are configuring.
The ports that are added to a failover group can be network ports, VLANs, or interface groups.
All the ports that are added to the failover group must belong to the same broadcast domain.
A single port can reside in multiple failover groups.
If you have LIFs in different VLANs or broadcast domains, you must configure failover groups for each
VLAN or broadcast domain.
Failover groups do not apply in SAN iSCSI or FC environments.
You can configure a LIF to fail over to a specific group of network ports by applying a
failover policy and a failover group to the LIF. You can also disable a LIF from failing
over to another port.
NOTE: LIFs for SAN protocols do not support failover; therefore, these LIFs are always set to disabled.
Failover Groups
Broadcast DomainBased
These failover groups are created automatically based on the network ports
that are present in the particular broadcast domain:
Additional failover groups are created for each broadcast domain that you
create
The failover group has the same name as the broadcast domain, and it contains
the same ports as those in the broadcast domain
Failover Groups
User-Defined
Failover Policies
Failover Policy         Details
----------------------  ----------------------------------------------
broadcast-domain-wide
system-defined          Recommended for nondisruptive software updates
local-only
sfo-partner-only
disabled
FAILOVER POLICIES
These default policies should be used in most cases.
Failover
Managing Failover Groups and LIFs
You can also add and remove targets from a failover group:
network interface failover-groups add-targets
network interface failover-groups remove-targets
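A minimal sketch of a user-defined failover group and of applying it to a LIF (the group name fg_A is an assumption; the ports must belong to the same broadcast domain):

c1::> network interface failover-groups create -vserver SVM_A-1
-failover-group fg_A -targets c1-01:e0f,c1-02:e0f
c1::> network interface modify -vserver SVM_A-1 -lif SVM_A-1_lif2
-failover-group fg_A -failover-policy broadcast-domain-wide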
Lesson 5
Network Management
Clustered Data ONTAP 8.3: You control how LIFs in an SVM use your network for outbound traffic by
configuring routing tables and static routes. If you have defined a default gateway when creating a subnet,
a default route to that gateway is added automatically to the SVM that uses a LIF from that subnet.
Earlier than clustered Data ONTAP 8.3: You can control how LIFs in an SVM use your network for
outbound traffic by configuring routing groups and static routes. A set of common routes are grouped in a
routing group that makes the administration of routes easier.
This lesson discusses only clustered Data ONTAP 8.3 examples. For more information on how to configure
LIF failover in earlier versions of clustered Data ONTAP, refer to the Network Management Guide for the
version of clustered Data ONTAP that you are configuring.
Routing Management
Overview
Outbound traffic of LIFs in an SVM can be controlled by using route tables and static
routes:
Route tables:
Route tables contain routes that are automatically created in an SVM when a service or
application is configured for the SVM
Routes are configured for each SVM, identifying the SVM, subnet, and destination
Because route tables are per-SVM, routing changes to one SVM do not pose a risk of
corrupting another SVM route table
The system SVM of each IPspace has its own route table
Static routes:
A static route is a defined route between a LIF and a specific destination IP address
The route can use a gateway IP address
NOTE: If a default gateway is defined when creating a subnet, a default route to that gateway is
added automatically to the SVM that uses a LIF from that subnet.
Route tables:
Static route:
Routes are configured for each SVM, identifying the SVM, subnet, and destination.
Because route tables are per-SVM, routing changes to one SVM do not pose a risk of corrupting another SVM
route table.
Routes are automatically created in an SVM when a service or application is configured for the SVM. Like data
SVMs, the system SVM of each IPspace has its own route table because LIFs can be owned by system SVMs
and the system SVMs might need route configurations that are different from those on data SVMs.
A static route is a defined route between a LIF and a specific destination IP address; the route can use a gateway
IP address.
If you have defined a default gateway when creating a subnet, a default route to that gateway is added
automatically to the SVM that uses a LIF from that subnet.
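If the subnet was defined without a gateway, a default route can be added manually; a sketch using the gateway address from the subnet_A example:

c1::> network route create -vserver SVM_A-1 -destination 0.0.0.0/0
-gateway 10.1.2.1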
Routing Management
Managing Routes
c1::> network route show
(Output edited; the Metric column shows 20.)
Host-Name Resolution
Overview
You can use the vserver services dns hosts command for configuring the hosts table that resides in
the root volume of the admin SVM.
By default, the order of lookup for the admin SVM is hosts table first and then DNS.
It is best to configure DNS on the admin SVM at the time of cluster creation.
If you want to configure DNS later, use the vserver services dns create command.
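A sketch of configuring DNS on the admin SVM after cluster creation (the domain follows the examples in this course; the name-server address is a hypothetical value):

c1::> vserver services dns create -vserver c1 -domains learn.netapp.local
-name-servers 192.168.0.11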
Host-Name Resolution
Table Entries
Lesson 6
Different client connections use different bandwidth; therefore, LIFs can be migrated based on the load
capacity.
When new nodes are added to the cluster, LIFs can be migrated to the new ports.
If all LIFs in a load balancing zone have the same weight, LIFs are selected with equal probability.
When manually assigning load balancing weights to LIFs, you must consider conditions such as load, port
capacity, client requirements, CPU usage, throughput, open connections, and so on.
For example, in a cluster that has 10-GbE and 1-GbE data ports, the 10-GbE ports can be assigned a
higher weight so that the port is returned more frequently when any request is received.
[Diagram: DNS load balancing with a delegated zone or forwarder for SVM1.NETAPP.COM. (1) The client's name lookup is delegated to the cluster, and (4) an appropriately loaded LIF is chosen from LIF1 through LIF4, each homed on a port e0e.]

To create a LIF in a DNS load balancing zone:

c1::> net int create -vserver svm1 -lif lif1 -role data -home-node c1-01 -home-port e0e
-address 192.168.0.131 -netmask 255.255.255.0 -dns-zone svm1.netapp.com
[Diagram: DNS round-robin load balancing. The DNS server holds an A record for each LIF IP address (LIF1 through LIF4, each homed on a port e0e). (2) Clients mount by using the host name. (3) The DNS server is configured for round-robin load balancing, so successive lookups rotate across the LIF addresses.]
[Diagram: automatic LIF rebalancing, shown for node1 port e0e with an NFSv3 client. (2) Create a failover group that contains each port. (3) Modify the LIF to subscribe to the failover group and enable automatic LIF rebalancing.]
[Slide callouts: the load balancing feature is protocol dependent; specify the DNS zone; set automatic LIF rebalancing to true.]
References
Clustered Data ONTAP Network Management Guide
REFERENCES
Exercise
Module 5: Network
Management
EXERCISE
Please refer to your exercise guide.
Module 6
Implementing NAS Protocols
Unified Storage
Review
[Diagram: unified storage. From the corporate LAN, NAS protocols (NFS and CIFS) provide file-level access to a file system, and SAN protocols (iSCSI, FC, and FCoE) provide block-level access, all on the same NetApp FAS system.]
4. Initialize disks.
5. System setup: Create a cluster on the first node, then join additional nodes
to the cluster.
This Module
Lesson 1
[Diagram: the UNIX1 client mounts an NFS volume from the server at /mnt/NFSvol, and the WIN1 client maps an SMB volume as Disk 2 (E:) via \\system\SMBvol, alongside its local Disk 1 (C:).]
NFSv3 Implementation
Targets and Access in Clustered Data ONTAP
Create a projects volume under the SVM's root:
c1::> volume create -vserver vsNFS
-aggregate aggr1_system_01 -volume Projects
-size 20MB -state online -type RW
-policy default -security-style unix
-junction-path /Projects -junction-active true
Or:
[Diagram: the Projects volume junctioned under the SVM root, alongside the Theseus volume.]
Junctions
From the storage system:

c1::> volume show -vserver vs1 -volume * -fields junction-path
vserver  volume    junction-path
-------  --------  -------------
vs1      acct      /acct
vs1      pro_1     /project1
vs1      pro_2     /project2
vs1      pro_3     /project3
vs1      vs1_root  /

From the client:

unix1# ls -l
drwxr-xr-x. 2 root root 4096 Mar 15 2014 acct
drwxr-xr-x. 2 root root 4096 Mar 15 2014 project1
drwxr-xr-x. 2 root root 4096 Mar 15 2014 project2
drwxr-xr-x. 2 root root 4096 Mar 15 2014 project3
JUNCTIONS
Volume junctions are a way to join individual volumes together into a single logical namespace. Volume
junctions are transparent to CIFS and NFS clients. When NAS clients access data by traversing a junction, the
junction appears to be an ordinary directory.
A junction is formed when a volume is mounted to a mount point below the root and is used to create a file-system tree. The top of a file-system tree is always the root volume, which is represented by a slash mark (/).
A junction points from a directory in one volume to the root directory of another volume.
A volume must be mounted at a junction point in the namespace to allow NAS client access to contained data.
Although specifying a junction point is optional when a volume is created, data in the volume cannot be
exported and a share cannot be created until the volume is mounted to a junction point in the namespace. A
volume that was not mounted during volume creation can be mounted post-creation. New volumes can be
added to the namespace at any time by mounting them to a junction point.
NOTE: Mounting volumes to junction paths is accomplished on the storage system.
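Mounting a volume to a junction point post-creation can be sketched as follows, using the vs1 volume names from this lesson's examples:

c1::> volume mount -vserver vs1 -volume pro1 -junction-path /project1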
Unmounting
c1::> volume unmount -vserver vs1 -volume pro1
[Diagram: volumes acct, pro1, pro2, and pro3 junctioned directly under the root volume.]

SVM  volume    junction path
---  --------  -------------
vs1  acct      /acct
vs1  pro1      /project1
vs1  pro2      /project2
vs1  pro3      /project3
vs1  vs1_root  /
[Diagram: the project volume junctioned under the root volume, with pro1, pro2, and pro3 junctioned under it.]

SVM  volume    junction path
---  --------  -------------
vs1  acct      /acct
vs1  project   /project
vs1  pro1      /project/pro1
vs1  pro2      /project/pro2
vs1  pro3      /project/pro3
vs1  vs1_root  /
[Diagram: project is a directory in the root volume, with pro1, pro2, and pro3 junctioned under it.]

SVM  volume    junction path
---  --------  -------------
vs1  acct      /acct
vs1  pro1      /project/pro1
vs1  pro2      /project/pro2
vs1  pro3      /project/pro3
vs1  vs1_root  /
Lesson 2
Deploying NFS
NFS
[Diagram: the UNIX1 client mounts the server's NFSvol volume at /mnt/NFS.]
NFS
NFS is a distributed file system that enables users to access resources, such as volumes that are located on
remote storage systems, as if the resources were located on their local computer system.
NFS provides its services through a client-server relationship.
Storage systems that allow their file systems and other resources to be available for remote access are
called servers.
The computers that use a server's resources are called clients.
The procedure of making file systems available is called exporting.
The act of a client accessing an exported file system is called mounting.
When a client mounts a file system that a server exports, users on the client machine can view and interact
with the mounted file systems on the server within the permissions granted.
NFSv3 Implementation
Enable NFS
Data ONTAP 7-Mode
system> options nfs.v3.enable on
Best Practice:
Configure NAS protocols
with OnCommand System
Manager.
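In clustered Data ONTAP, the equivalent is to create an NFS server on the SVM with NFSv3 enabled; a sketch, assuming the vsNFS SVM used later in this lesson:

c1::> vserver nfs create -vserver vsNFS -v3 enabled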
Client Specification
Host: Use the host name or IP address.
/vol/acct rw=unix1
/vol/acct rw=192.168.0.10
Netgroup: Use the group name.
/vol/acct rw=@mygroup
Subnet: Specify the subnet address.
/vol/acct rw=192.168.0.0/24
/vol/acct rw=192.168.0.0
255.255.255.0
DNS Subdomain:
/vol/acct rw=.learn.netapp.local
CLIENT SPECIFICATION
Data ONTAP controls access to its exported resources according to the authentication-based and file-based
restrictions that are specified. With authentication-based restrictions, you can specify which client machines
can connect to the storage system.
When the storage system receives a request to mount an exported resource, it looks up the name of the client
that is making the request. The storage system takes the client IP address and looks up the corresponding host
name that matches that address. Data ONTAP relies on correct resolution of client names and IP addresses to
provide basic connectivity for storage systems on the network. If you are unable to access the storage system
data or establish sessions, there might be problems with host-name resolution on your storage system or on a
name server.
Host: Typically, the UNIX host system that is connected to the storage system
Netgroup: A network-wide group of machines that are granted identical access to certain network
resources for security and organizational reasons
Subnet: A physical grouping of connected network devices. Nodes on a subnet tend to be located in close
physical proximity to each other on a LAN.
DNS subdomain: A domain that is part of a larger domain. A DNS hierarchy consists of the root-level
domain at the top, underneath which are the top-level domains, followed by second-level domains, and
finally the subdomains.
With a netgroup, each element is listed in a triple format: host name, user name, domain name. The host name
entry must be fully qualified if the specified host is not in the local domain. The user name is ignored because
it is used only for mounts. The domain name is either empty or the local domain name. The @ symbol is used
in 7-Mode to indicate that the name following the @ symbol is a netgroup, not a host name.
The following netgroup file contains three netgroups:
trustedhosts (host1,,)(host2,,)
untrustedhosts (host3,,)(host4,,)(host5,,)
allhosts trustedhosts untrustedhosts
Clustered Data ONTAP Administration: Implementing NAS Protocols
7-Mode
Exporting
7-MODE: EXPORTING
You can export or unexport a file system path, making it available or unavailable to NFS clients, by editing
the /etc/exports file or running the exportfs command. To specify which file system paths Data ONTAP
exports automatically when NFS starts, edit the /etc/exports file.
7-Mode
Rules for Exporting Resources
You do not need to create a separate export entry for each export
Apply a single policy to many exports
Export project/pro1:
Mounts
Use the mount command on the client to mount an exported
NFS resource from the storage system.
unix1# mkdir /mnt/project1
MOUNTS
On an NFS client, you mount a remote file system after NFS is started. Usually, only a privileged user
can mount file systems with NFS. However, you can enable users to mount and unmount selected file systems
by using the mount and umount commands if the user option is set in /etc/fstab. This setting can reduce
traffic by having file systems mounted only when they are needed. To enable user mounting, create an entry
in /etc/fstab for each file system to be mounted.
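For example, a hypothetical /etc/fstab entry that sets the user option might look like this (the storage system name, export path, and mount point are assumptions):

```
# device                     mount point     type  options         dump pass
system1:/vol/vol1/project1   /mnt/project1   nfs   user,noauto,rw  0    0
```

With such an entry, an unprivileged user could run mount /mnt/project1 when the data is needed and umount /mnt/project1 afterward.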
Verifying Mounts
To verify exports on a target:
mount
When used without options, this command displays all
mounted file systems
VERIFYING MOUNTS
To verify exported resources, use the mount command in UNIX systems:
In versions earlier than Data ONTAP 8.3, clients cannot use the showmount -e command to view the NFS
exports list. Instead, only the root volume (/) is displayed.
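As an illustration, the client-side check can be scripted; the sample mount output line below assumes a typical Linux mount format and a hypothetical storage system name:

```python
def nfs_mounts(mount_output):
    """Return {mount_point: remote_source} for NFS entries in `mount` output."""
    result = {}
    for line in mount_output.splitlines():
        # Typical Linux format: "src on /mnt/point type nfs (options)"
        parts = line.split()
        if "nfs" in line and len(parts) >= 3 and parts[1] == "on":
            result[parts[2]] = parts[0]
    return result

sample = "system1:/vol/vol1/project1 on /mnt/project1 type nfs (rw,addr=10.0.0.5)"
mounts = nfs_mounts(sample)
```
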
Slide: UNIX file permission bits. A permission listing shows the file type and sticky bit, followed by read,
write, and execute bits for Owner Permissions, Group Permissions, and World Permissions. The example
listing also shows the owner (nobody), the file size (5274), and the date (Oct 3).
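The owner, group, and world permission bits can be decoded programmatically; a small sketch (the helper name is mine):

```python
def mode_to_rwx(mode):
    """Render the low 9 permission bits as the familiar rwxrwxrwx string:
    owner, then group, then world."""
    flags = "rwxrwxrwx"
    return "".join(
        flags[i] if mode & (1 << (8 - i)) else "-" for i in range(9)
    )

# 644 octal: owner read+write, group read, world read
assert mode_to_rwx(0o644) == "rw-r--r--"
```
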
Lesson 3
Diagram: on the Windows client, Disk 1 is the local C: drive, and Disk 2 (E:) is mapped to \\system\SMBvol,
a share named SMBvol on the server.
SMB Implementation
Enable SMB
Data ONTAP 7-Mode
system> cifs setup
Best Practice:
Configure NAS protocols
with OnCommand System
Manager.
Share Permissions
Share permissions can be managed by:
The CLI
OnCommand System Manager
Microsoft Management Console (MMC) snap-ins, such as Computer
Management (clustered Data ONTAP starting with 8.3, and 8.x 7-Mode)
SHARE PERMISSIONS
Share permissions apply only to users who access the resource over the network. They apply to all files and
folders in the shared resource.
Full Control: Full Control is the default permission that is assigned to the Administrators group on the
local computer. Full Control allows all Read and Change permissions, plus the ability to change
permissions (NTFS files and folders only).
Read: Read is the default permission that is assigned to the Everyone group. Read allows:
Change: Change is not a default permission for any group. The Change permission allows all Read
permissions, plus:
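As a hedged illustration, 7-Mode share permissions can be set from the CLI with the cifs access command; the share name and account names below are hypothetical:

```
system> cifs access eng "ENGDOM\Engineers" "Full Control"
system> cifs access eng Everyone Read
```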
Name       Mount Point   Description
----       -----------   -----------
datatree1
Creating Shares
Data ONTAP 7-Mode
system> cifs shares -add <share_name> <path>
system> cifs shares -change <share_name>
[-comment description]
[-forcegroup name]
[-maxusers n]
CREATING SHARES
A CIFS share is a named access point in a volume that enables CIFS clients to view, browse, and manipulate
files on a file server. There are certain guidelines that you should take into consideration when creating CIFS
shares.
When you create a share, you must provide all of the following information:
When you create a share, you can optionally specify a description for the share. The share description appears
in the Comment field when you browse the shares on the network.
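Putting the options together, a hedged 7-Mode example that creates a share with a description might look like this (the share name, path, comment, and user limit are assumptions):

```
system> cifs shares -add eng /vol/vol1/eng -comment "Engineering data" -maxusers 50
```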
UI
Use the Run dialog box
Map a drive
SMB Sessions
A client establishes a session with a storage system upon the
first share access
Access is based on user authentication and share access
rules
WIN1
\\system\pro1
qtree_pro1
Bob
SMB SESSIONS
An SMB session is established between an authenticated user on an SMB client and an SMB server.
Full Control: Users can see the contents of a file or folder, change existing files and folders, create new
files and folders, and run programs in a folder.
Modify: Users can change existing files and folders but cannot create new ones.
Read and Execute: Users can see the contents of existing files and folders and can run programs in a
folder.
Read: Users can see the contents of a folder and open files and folders.
Write: Users can create new files and folders and make changes to existing files and folders.
MMC Support
Features
Close a session
Close a file
MMC Support
Limitations
MMC does not update instantaneously, so customers might experience a lag between the moment that
they create a share or set security permissions and the moment that MMC displays that share.
SMB sessions and file enumeration are managed through a node-scoped view. For example, an SMB
administrator who connects to a LIF that is hosted on node 3 and who tries to view open files in MMC
will not see a file that was opened by an SMB user who is connected to a LIF that is hosted on node 8.
Some MMC features are not supported. These features include management of local users and groups,
Windows Performance Monitor (PerfMon), and Live View audit.
Namespace References
Clustered Data ONTAP File Access Management Guide for
NFS
TR-4129: Namespaces in Clustered Data ONTAP
NAMESPACE REFERENCES
NFS References
Clustered Data ONTAP File Access Management Guide for NFS
Clustered Data ONTAP NFS Configuration Express Guide
TR-4067: Clustered Data ONTAP NFS Implementation Guide
Additional training:
Data ONTAP NFS Administration instructor-led training
NFS REFERENCES
SMB References
Clustered Data ONTAP File Access Management Guide for CIFS
Clustered Data ONTAP CIFS/SMB Configuration Express Guide
TR-4191: Best Practices Guide for Clustered Data ONTAP Windows
File Services
Additional training:
Data ONTAP SMB (CIFS) Administration instructor-led training
SMB REFERENCES
Exercise
EXERCISE
Refer to your exercise guide.
Module 7
Unified Storage
Review
Diagram: a NetApp FAS provides unified storage. NAS protocols (NFS and CIFS) provide file-level access
to a file system over the corporate LAN, while SAN protocols (iSCSI, FC, and FCoE) provide block-level
access.
This Module
Lesson 1
Diagram: LUNs are presented to hosts such as Solaris and AIX.
SAN Protocols
Which protocols are used in a
Data ONTAP SAN?
FC
iSCSI
FCoE
FCoE uses Data Center
Bridging Ethernet (DCB
Ethernet) capabilities to
encapsulate the FC frame
Diagram: protocol stacks. FC carries FC frames natively; iSCSI carries SCSI over TCP/IP over Ethernet;
FCoE encapsulates the FC frame directly over Ethernet.
SAN PROTOCOLS
LUNs on a NetApp storage system can be accessed through either of the following:
In all cases, the transport portals (FC, FCoE, or iSCSI) carry encapsulated SCSI commands as the data
transport mechanism.
a. 4
b. 8
c. 16
d. 24
What Is a LUN?
A logical representation of a SCSI disk
Logical Blocks: 512 bytes
Diagram: the host sees the LUN as a SCSI disk.
WHAT IS A LUN?
A LUN in Data ONTAP is a logical representation of an attached SCSI disk. As we learned earlier, SAN is
often called block-based storage. The block refers to the logical blocks that the host writes to, just as it
would write to an attached SCSI disk. Traditionally, these logical blocks are 512 bytes per sector.
Hard disk manufacturers have started using 4096-byte (4-KB) sectors, called Advanced Format, in new hard
disk platforms. At this time, Data ONTAP LUNs are using the traditional SCSI standard of 512 bytes per
sector.
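The 512-byte logical block size means that a logical block address (LBA) maps to a byte offset by simple multiplication; the sketch below also shows how many 512-byte sectors one 4096-byte (Advanced Format) block spans. The function names are mine:

```python
SECTOR = 512  # traditional SCSI logical block size, in bytes

def lba_to_offset(lba, sector=SECTOR):
    """Byte offset of a logical block address on a 512-bytes/sector LUN."""
    return lba * sector

def sectors_per_block(block=4096, sector=SECTOR):
    """Number of 512-byte sectors spanned by one 4-KB block."""
    return block // sector

offset = lba_to_offset(100)   # byte offset of LBA 100
ratio = sectors_per_block()   # 512-byte sectors per 4-KB block
```
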
NOTE: This slide is meant to simplify the understanding of a LUN.
Diagram: the host I/O stack (application, file system, SCSI driver) reaches the storage over Ethernet (e0a)
or FC (0a), connected through a switch, to the target (controller or SVM). On the target, SAN services, HA,
and WAFL present the LUN from a FlexVol volume; on the host, the LUN appears as Disk 2 (E:) alongside
the local Disk 1 (C:).
Diagram: a data SVM hosts LIF1 and LIF2 on ports e1a and e1b of an HA pair; the nodes are joined by the
cluster interconnect.
iSCSI Architecture
Multipathing software is required.
Diagram: a Windows initiator (iqn.1999-04.com.a:system) connects over Ethernet to target portal groups on
the SVM vs_iscsi. The igroup My_IP_igroup (Protocol: iSCSI, OS Type: Windows, ALUA: true, Port set:
myportset) maps the LUN, which resides in a FlexVol volume, to the host, where it appears as Disk 2 (E:).
ISCSI ARCHITECTURE
Data is communicated over ports. In an Ethernet SAN, the data is communicated by means of Ethernet ports.
In an FC SAN, the data is communicated over FC ports. For FCoE, the initiator has a converged network
adapter (CNA) and the target has a unified target adapter (UTA).
Diagram: in 7-Mode, three vFiler units serve data from an HA pair through interfaces such as e0a and port
0a; in clustered Data ONTAP, a data SVM with LIF1 and LIF2 presents a LUN from a FlexVol volume.
iSCSI Nodes
Each node has a unique name that is
called an iSCSI Qualified Name (IQN).
Initiator: iqn.1995-02.com.microsoft:base.learn.netapp.local
Target (SVM vs_iscsi): iqn.1992-08.com.netapp:sn.000:vs
The LUN resides in a FlexVol volume.
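Both names follow the IQN format iqn.&lt;yyyy-mm&gt;.&lt;reversed-domain&gt;[:&lt;identifier&gt;]. A loose sketch that splits the names shown above (the function is illustrative, not a full validator):

```python
def parse_iqn(iqn):
    """Split an iSCSI Qualified Name into (date, naming authority, identifier)."""
    head, _, ident = iqn.partition(":")   # identifier may itself contain colons
    fields = head.split(".")
    if fields[0] != "iqn":
        raise ValueError("not an IQN")
    date = fields[1]                      # yyyy-mm the authority registered
    authority = ".".join(fields[2:])      # reversed domain name
    return date, authority, ident

date, auth, ident = parse_iqn("iqn.1992-08.com.netapp:sn.000:vs")
```
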
ISCSI NODES
Diagram: LIF1 and LIF2, each on port 0a of a node in the HA pair, belong to the data SVM.
WWN Format
Clustered Data ONTAP Identification
1. Verify the port details:
Node: cluster1-02
Adapter: 0a

Vserver  Target Name               Status Admin
-------  -----------------------   ------------
svm1     20:37:00:a0:98:13:d5:d4   up

         Logical      Status       Network                   Current      Current
Vserver  Interface    Admin/Oper   Address/Mask              Node         Port
-------- -----------  -----------  ------------------------  -----------  -------
svm1     n1_fc_lif1   up/up        20:38:00:a0:98:13:d5:d4   cluster1-01  0a
         n1_fc_lif2   up/up        20:39:00:a0:98:13:d5:d4   cluster1-02  0a
Lesson 2
Diagram: a Windows host runs Microsoft Multipath I/O (MPIO) with a Device-Specific Module (DSM) and
reaches LUNa over two Ethernet paths to the HA pair.
Diagram: a host multipath driver uses initiator ports fc0 and fc1 to reach target ports 0d and 0e on each node
of the HA pair, providing multiple paths to LUNa.
Data ONTAP 7-Mode FC active/non-optimized paths are over the high-availability (HA) interconnect.
Clustered Data ONTAP active/non-optimized paths are over the cluster interconnect.
E-Series controller active/non-optimized paths are over the dual-active storage system backplane.
ALUA Multipathing
Asymmetric logical unit access (ALUA) identifies a group of
target ports that provide a common failover behavior for a LUN.
Access states:
Active/optimized
Active/non-optimized
Standby (not used by Data ONTAP)
Unavailable
ALUA MULTIPATHING
ALUA, also called Target Port Group Support (TPGS), identifies a set of one or more SCSI target ports that
are unified by a purpose.
Active/optimized: While in the active/optimized state, all the target ports in the Target Port Group can
immediately access the LUN.
Active/non-optimized: While in the active/non-optimized state, the device server supports all commands
that the LUN supports. Specific commands, especially those that involve data transfer or caching, might
execute with lower performance than they would if the Target Port Group were in the active/optimized
state.
Unavailable: The target port returns a CHECK CONDITION status with the sense key set to NOT
READY and an additional sense code of LOGICAL UNIT NOT ACCESSIBLE, TARGET PORT IN
UNAVAILABLE STATE.
NOTE: Do not confuse a Target Port Group (a group of target ports) with a portal group (sometimes called a
Target Portal Group on the storage), which is a list of IP addresses and ports that listen for iSCSI
connections.
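A host multipath driver prefers active/optimized paths and falls back to active/non-optimized ones. A toy model of that preference, using the access states listed above (the path names are invented):

```python
PREFERENCE = ["active/optimized", "active/non-optimized"]

def usable_paths(paths):
    """Pick the best ALUA access state present and return its paths.

    `paths` maps path name -> ALUA access state.
    """
    for state in PREFERENCE:
        chosen = [p for p, s in paths.items() if s == state]
        if chosen:
            return chosen
    return []  # only standby/unavailable paths remain

paths = {
    "lif1": "active/optimized",
    "lif2": "active/non-optimized",
}
best = usable_paths(paths)
```
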
Diagram: a direct path. I/O arrives on a LIF on the node that owns the LUN.
Diagram: indirect paths. I/O arrives on a LIF on a different node and crosses the cluster interconnect to reach
the node that owns the LUN.
Lesson 3
LUN Access
Diagram: an initiator connects over Ethernet to an IP SAN and on to the target (controller or SVM), an HA
pair.
For 7-Mode: Enter the IP address for one of your interfaces on the storage system or target vFiler unit and click
OK.
For clustered Data ONTAP: Enter the IP address for one of your LIFs on the target SVM and click OK.
5. Click the Targets tab, then select the discovered target's IQN and click Connect.
6. In the Connect To Target dialog box, select Enable multi-path and click OK.
7. Verify that the target now has a status of Connected (this step is shown on the next slide).
1. Select the
IQN.
2. Click Connect.
1. Click Properties.
Verifying Session
Verify sessions:
Data ONTAP 7-Mode
system> iscsi session show
VERIFYING SESSION
Creating an igroup
Create an igroup:
Data ONTAP 7-Mode
system> igroup create -i -t windows ig_myWin2 iqn.1991-05.com.microsoft:winfrtp2qb78mr
Verify an igroup:
Data ONTAP 7-Mode
system> igroup show
CREATING AN IGROUP
Verifying igroups
Verify igroups:
Data ONTAP 7-Mode
system> igroup show -v
ALUA: true
Initiators: iqn.1991-05.com.microsoft:winfrtp2qb78mr (logged in)
VERIFYING IGROUPS
Creating a LUN
Create a fully provisioned LUN:
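The command itself is not shown on the slide; a hedged 7-Mode example of creating a fully provisioned Windows LUN might look like this (the size, OS type, and path are assumptions):

```
system> lun create -s 10g -t windows /vol/vol1/lun1
```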
CREATING A LUN
Mapping a LUN
Map a LUN to an igroup:
Data ONTAP 7-Mode
system> lun map /vol/vol1/lun1 ig_myWin2
MAPPING A LUN
Diagram: each node in the cluster hosts LIF1 and LIF2, but the LUN is reported only on the node that owns
it and on its HA partner.
Local nodes: The node that owns the LUN and its partner report
the LUN to the host (also called reporting nodes).
Remote nodes: All other nodes in the cluster do not report the LUN.
For information about Selective LUN Mapping and LUN Mobility,
see the SAN Implementation course.
In Windows, a LUN
appears as a disk.
Bring the disk online by right-clicking the box to the left of the disk and selecting Online.
To initialize the disk, right-click again and select Initialize Disk.
4. Verify the
configuration and click
Finish.
SAN References
Clustered Data ONTAP SAN Administration Guide
SAN REFERENCES
Exercise
EXERCISE
Please refer to your exercise guide.
Module 8
Snapshot Copies
Ken Asks
My company backs up all data to tape, but
tape is expensive and time consuming.
We need a quick, inexpensive, space-efficient way to instantly back up the data that
we use every day. We would also like our
users to retrieve backed-up data without
needing a storage administrator to intervene.
How can we accomplish all of that?
KEN ASKS
Lesson 1
Snapshot Technology
A Snapshot copy is a local read-only image of the active file system at a
point in time.
The benefits of Snapshot technology are:
Nearly instantaneous application data backups
Fast recovery of data that is lost due to:
Accidental data deletion
Accidental data corruption
SnapRestore
SnapDrive
FlexClone
SnapProtect
SnapManager
SnapMirror
SnapVault
Deduplication
SNAPSHOT TECHNOLOGY
Snapshot technology is a key element in the implementation of the WAFL (Write Anywhere File Layout) file
system:
The Data ONTAP operating system automatically creates and deletes Snapshot copies of data in volumes to
support commands that are related to Snapshot technology.
Restore Through NAS Client:
  UNIX: .snapshot directory
  Windows: ~snapshot directory
  Entire volume or individual file
Restore Through SnapRestore:
  License required
  Can be restored manually or by using management tools
Diagram: block sharing between the active file system and Snapshot copies. Snapshot copies SNAP 1 and
SNAP 2 point to the same blocks (E, F) as the production file system; when a write changes a block, the
Snapshot copies keep the old block while the active file system points to the new one.
d. Only when the file system data changes after the Snapshot
copy is created
The Snapshot copy points to the same disk blocks as the root
inode
New Snapshot copies consume only the space that is required for
the inode itself
Lesson 2
Ken Asks
How can I make sure that my volumes
don't fill up with Snapshot copies?
KEN ASKS
Snapshot Reserve
Diagram: within aggregate space, 95% of the volume is available to the active file system and 5% is set aside
as the Snapshot reserve.
SNAPSHOT RESERVE
The snap reserve command determines the percentage of the storage space that is set aside for Snapshot
copies.
You can change the percentage of storage space that is set aside for the Snapshot copies of a volume. By
default, volume Snapshot copies are stored in the Snapshot reserve storage space. The Snapshot reserve space
is not counted as part of the volume's disk space that is allocated for the active file system. When a Snapshot
copy is first created, none of the Snapshot reserve is consumed. The Snapshot copy is protecting the active
file system at the point in time when the Snapshot copy was created. As the Snapshot copy ages, and the
active file system changes, the Snapshot copy begins to own the data blocks that were deleted or changed by
the current active file system. The Snapshot copy begins to consume the Snapshot reserve space. The amount
of disk space that is consumed by Snapshot copies can grow, depending on the length of time that a Snapshot
copy is retained and the rate of change of the volume.
In some cases, if the Snapshot copy is retained for a long period of time, and the active file system has a high
rate of change, the Snapshot copy can consume 100% of the Snapshot reserve, which is the full 5% of the disk
space that is set aside for Snapshot copies. If the Snapshot copy is not deleted, the Snapshot copy can
consume a portion of the disk space that is intended for the active file system. You monitor and manage
Snapshot copies so that disk space is properly managed.
NOTE: Even if the Snapshot reserve is set to 0%, you can still create Snapshot copies. If there is no Snapshot
reserve, Snapshot copies, over time, consume blocks from the active file system.
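The split between active file system and Snapshot reserve is simple percentage arithmetic. The helper below is a hypothetical illustration (not a NetApp tool) of how a 5% reserve partitions a volume's space:

```python
# Hypothetical helper illustrating how the Snapshot reserve percentage
# partitions a volume's space; not part of any NetApp product.

def space_split(volume_bytes, reserve_pct=5):
    """Return (active_file_system_bytes, snapshot_reserve_bytes)."""
    reserve = volume_bytes * reserve_pct // 100
    return volume_bytes - reserve, reserve

# A 100-GiB volume with the default 5% reserve:
active, reserve = space_split(100 * 1024**3)
# active is 95 GiB of usable space; reserve is 5 GiB held for Snapshot copies
```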
Lesson 3
Ken Asks
When are Snapshot copies triggered? Can I
create one myself on demand? Do I have
control over a schedule? Can I back up
different volumes on different schedules?
Snapshot Commands
These basic 7-Mode and clustered Data ONTAP commands enable you to
create and manage Snapshot copies.
SNAPSHOT COMMANDS
Here are the basic Snapshot commands that you use on the storage system CLI:
To create and delete Snapshot copies, use the snap create command for 7-Mode or the volume
snapshot create command for clustered Data ONTAP.
To modify the Snapshot reserve on 7-Mode, use the snap reserve command. On a cluster, you
modify the Snapshot reserve at the volume level.
Use snap sched on 7-Mode to manipulate Snapshot schedules. Clustered Data ONTAP uses snapshot
policies to apply schedules to volumes.
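The commands above can be sketched as follows. These are illustrative only; "vol1", "svm1", and "mysnap" are placeholder names:

```shell
# Illustrative examples; vol1, svm1, and mysnap are placeholders.
# 7-Mode:
system> snap create vol1 mysnap
system> snap reserve vol1 10
system> snap sched vol1 0 2 6
# Clustered Data ONTAP:
c1::> volume snapshot create -vserver svm1 -volume vol1 -snapshot mysnap
c1::> volume modify -vserver svm1 -volume vol1 -percent-snapshot-space 10
```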
[Slide table: snap sched actions — display the Snapshot schedule for the named volume; change the Snapshot schedule for the named volume. Schedule types include daily and hourly.]
Snapshot Policies
Create a job schedule
c1::> job schedule cron create -name 4hrs -dayofweek all
-hour 4 -minute 0
SNAPSHOT POLICIES
Two Snapshot policies are automatically created: default and none. If a volume uses none as its Snapshot
policy, no Snapshot copies of it are created. Create Snapshot policies by using the volume snapshot
policy create command and cluster-level schedules.
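A policy can then be built on a cluster-level schedule such as the "4hrs" schedule shown above. This is a hypothetical sketch; "every4hrs", "svm1", and "vol1" are placeholder names:

```shell
# Hypothetical example; every4hrs, svm1, and vol1 are placeholder names.
# Keep the six most recent copies made by the 4hrs schedule:
c1::> volume snapshot policy create -vserver svm1 -policy every4hrs \
    -enabled true -schedule1 4hrs -count1 6
# Apply the policy to a volume:
c1::> volume modify -vserver svm1 -volume vol1 -snapshot-policy every4hrs
```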
Lesson 4
Ken Asks
Snapshot copies are a great feature. How
can clients find where the copies are stored
and how can they restore lost files?
Recovering Data
Two ways to recover Snapshot data:
Copy from a Snapshot copy
Use SnapRestore data recovery software (requires a SnapRestore license; restores entire volumes)
RECOVERING DATA
You can use Snapshot copies to recover data in two ways:
Copy a file from a Snapshot directory: To copy a lost or corrupted file from a Snapshot copy, navigate to
the Snapshot directory on the client host and locate the Snapshot copy that contains the correct version of
the file. You can copy the file to the original location and overwrite existing data or copy the file to a new
location.
Use the SnapRestore feature to recover data: To revert a volume or a file from a Snapshot copy, you need
the SnapRestore license. You can revert a volume or file from the storage CLI or from the OnCommand
System Manager interface. You can also revert a volume or file by using NetApp data protection software
solutions such as SnapManager, Snap Creator Framework, SnapProtect, or SnapDrive.
weekly.2014-09-15_0015
daily.2014-09-18_0010
daily.2014-09-19_0010
hourly.2014-09-19_0605
hourly.2014-09-19_0705
hourly.2014-09-19_0805
hourly.2014-09-19_0905
hourly.2014-09-19_1005
hourly.2014-09-19_1105
hourly.2014-09-19_1205
snapmirror.3_2147484677.2014-09-19_114126
[Diagram: the vol0 tree contains etc, usr, var, home, and a .snapshot directory; .snapshot holds daily.2014-09-18_0010 (files as of the previous midnight) and daily.2014-09-17_0010 (files as of the night before last) for both home and vol0.]
[Diagram: production volume Prod with Snapshot copies SNAP #1 (S1) and SNAP #2 (S2); writes allocate new blocks while the Snapshot copies continue to reference the original E and F blocks.]
[Diagram: the production volume's active file system (blocks A, B, C, D) alongside Snapshot copies SNAP #1 (S1) and SNAP #2 (S2), which share E and F blocks with the active file system.]
The Data ONTAP operating system displays a warning message and prompts you to confirm your decision to
revert the file. Press Y to confirm that you want to revert the file. If you do not want to proceed, press Ctrl+C
or press N for no.
If you confirm the reversion, the file that already exists in the active file system is overwritten by the
version in the Snapshot copy.
References
Clustered Data ONTAP Data Protection Guide
Exercise
EXERCISE
Please refer to your exercise guide.
Module 9
Lesson 1
Thin Provisioning
Thick provisioning of volumes uses a space guarantee for a volume or file. A guarantee of volume
requires that space in the aggregate be reserved for the volume when the volume is created. A guarantee
of file guarantees space for LUNs in the volume. Thick provisioning is a conservative approach that
prevents administrators from overcommitting space to an aggregate. It simplifies storage management at
the risk of wasting unused space.
Thin provisioning of volumes uses a space guarantee of none. It does not require that space within the
aggregate be reserved for the volume when the volume is created. It is a more aggressive approach that
makes it possible to overcommit an aggregate. This approach requires more complex storage
management.
Thin Provisioning
[Diagram: with dedicated provisioning, App 1, App 2, and App 3 each occupy their own spindles (8, 6, and 6) at roughly 40% use, wasting the rest; with thin provisioning, the same applications share pooled capacity on 12 spindles.]
THIN PROVISIONING
If you compare the NetApp storage-utilization approach to the competition's approach, you find one feature that
stands out. Flexible dynamic provisioning with FlexVol technology provides high storage use rates and
enables customers to increase capacity without the need to physically reposition or repurpose storage devices.
NetApp thin provisioning enables users to oversubscribe data volumes, which results in high use models. You
can think of this approach as just-in-time storage.
To manage thin provisioning on a cluster, use the volume command.
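A thin-provisioned volume is simply created with a space guarantee of none. This is an illustrative sketch; "svm1", "aggr1", and "vol_thin" are placeholder names:

```shell
# Illustrative only; svm1, aggr1, and vol_thin are placeholders.
# Create a 1-TB volume without reserving space in the aggregate:
c1::> volume create -vserver svm1 -volume vol_thin -aggregate aggr1 \
    -size 1TB -space-guarantee none
# Verify the guarantee setting:
c1::> volume show -vserver svm1 -volume vol_thin -fields space-guarantee
```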
Lesson 2
Deduplication and
Compression
Deduplication
NetApp deduplication is application agnostic and applies to primary storage, backup data, and archival data.
DEDUPLICATION
Deduplication can be thought of as the process of unduplicating data. The term deduplication was first
coined by database administrators many years ago, as a way of describing the process of removing duplicate
records after two databases had been merged.
In the context of disk storage, deduplication refers to any algorithm that searches for duplicate data objects
(for example, blocks, chunks, and files) and discards those duplicates. When duplicate data is detected, it is
not retained, but instead a data pointer is modified so that the storage system references an exact copy of the
data object that is already stored on disk. This deduplication feature works well with datasets that have a lot of
duplicated data (for example, full backups).
When NetApp deduplication is configured, it runs as a background process that is transparent to any client
that accesses data from a storage system. This feature allows a reduction of storage costs by reducing the
actual amount of data that is stored over time. For example, if a 100-GB full backup is made on the first night,
and then a 5-GB change in the data occurs during the next day, the second nightly backup only needs to store
the 5 GB of changed data, which amounts to a 95% space reduction for the second backup. Full backups can
yield more than a 90% space reduction, and incremental backups average about 30%. With
nonbackup scenarios, such as virtual machine images, space savings of up to 40% can be realized.
To estimate your own savings, visit the NetApp deduplication calculator at http://www.secalc.com.
Deduplication in Action
Example: Three files in three different home directories on a single volume
= Identical blocks
Original file presentation.ppt: 20 blocks
Identical file presentation.ppt: 20 blocks
Edited file presentation.ppt: 10 blocks added
With NetApp deduplication, 30 total blocks
DEDUPLICATION IN ACTION
In this example, one user creates a Microsoft PowerPoint presentation (presentation.ppt) that includes 20
blocks of data. Then a second user copies the presentation to another location. Finally, a third user copies the
presentation to a third location and edits the file, adding 10 blocks.
When the files are stored on a storage system for which deduplication is configured, the original file is saved,
but the second copy (because it is identical to the original file) merely references the original file's location on
the storage system. The edits to the file in the third location (the additional 10 blocks) are saved to the storage
system, but all unedited blocks are referenced back to the original file.
With NetApp deduplication, 30 blocks are used to store 70 blocks of data, and the space that is required for
storage is reduced by about 57%.
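The block accounting in this example can be simulated with a few lines of code. This is a hypothetical sketch of the counting logic only (real deduplication compares block signatures, not whole block contents in memory):

```python
# Hypothetical simulation of block-level deduplication accounting;
# not NetApp code. Each file is modeled as a list of block contents,
# and identical blocks are stored only once.

def dedupe(files):
    """Return (logical_blocks, physical_blocks) for a set of files."""
    logical = sum(len(blocks) for blocks in files)
    physical = len({blk for blocks in files for blk in blocks})
    return logical, physical

original = [f"block{i}" for i in range(20)]          # presentation.ppt
identical_copy = list(original)                      # byte-identical copy
edited = original + [f"new{i}" for i in range(10)]   # 10 blocks added

logical, physical = dedupe([original, identical_copy, edited])
# 70 logical blocks are stored in 30 physical blocks
```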
Configuring Deduplication
2015 NetApp, Inc. All rights reserved. NetApp Proprietary Limited Use Only
CONFIGURING DEDUPLICATION
Deduplication improves physical storage-space efficiency by eliminating redundant data blocks within a
FlexVol volume. Deduplication works at the block level on an active file system and uses the WAFL (Write
Anywhere File Layout) block-sharing mechanism. Each block of data has a digital signature that is compared
with all the other blocks in the data volume. If an exact match is identified, the duplicate block is discarded,
and a data pointer is modified so that the storage system references the copy of the data object that is stored
on disk. The deduplication feature works well with datasets that have large quantities of duplicated data or
white space. You can configure deduplication operations to run automatically or according to a schedule. You
can run deduplication on new or existing data on any FlexVol volume.
Postprocess compression
Data Compression
Lesson 3
FlexClone Volumes
Manipulation
Projection operations
Upgrade testing
The Data ONTAP operating system enables you to create a volume duplicate in which the original volume
and clone volume share the same disk space for storing unchanged data.
[Diagram: in aggr01, the parent volume vol01, a Snapshot copy of the parent, and the clone share disk space.]
A FlexClone volume is a point-in-time, writable copy of the parent volume. Changes that are made to the
parent volume after the FlexClone volume is created are not reflected in the FlexClone volume.
You can clone only FlexVol volumes. To create a copy of a traditional volume, you must use the vol
copy command, which creates a distinct copy with its own storage.
FlexClone volumes are fully functional volumes that are managed, as is the parent volume, by using the
vol command. Likewise, FlexClone volumes can be cloned.
FlexClone volumes always exist in the same aggregate as parent volumes.
FlexClone volumes and parent volumes share the same disk space for common data. Therefore, creating a
FlexClone volume is instantaneous and requires no additional disk space (until changes are made to the
clone or parent).
A FlexClone volume is created with the same space guarantee as the parent.
You can sever the connection between the parent and the clone. This severing is called splitting the
FlexClone volume. Splitting removes all restrictions on the parent volume and causes the FlexClone to
use its own storage.
IMPORTANT: Splitting a FlexClone volume from its parent volume deletes all existing Snapshot copies
of the FlexClone volume and disables the creation of new Snapshot copies while the splitting operation is
in progress.
Quotas that are applied to a parent volume are not automatically applied to the clone.
When a FlexClone volume is created, existing LUNs in the parent volume are also present in the
FlexClone volume, but these LUNs are unmapped and offline.
Cloning
[Diagram: vol1 and its clone reside in the same aggregate; they share common data blocks, and each also has its own unique data blocks.]
CLONING
A FlexClone volume is a point-in-time, space-efficient, writable copy of the parent volume. The FlexClone
volume is a fully functional standalone volume. Changes that are made to the parent volume after the
FlexClone volume is created are not reflected in the FlexClone volume, and changes to the FlexClone volume
are not reflected in the parent volume.
FlexClone volumes are created in the same virtual server and aggregate as the parent volume, and FlexClone
volumes share common blocks with the parent volume. While a FlexClone copy of a volume exists, the parent
volume cannot be deleted or moved to another aggregate. You can sever the connection between the parent
and the FlexClone volume by executing a split operation.
A FlexClone split causes the FlexClone volume to use its own disk space, but the FlexClone split enables you
to delete the parent volume and to move the parent or the FlexClone volume to another aggregate.
To manage cloning on a cluster, use the volume clone command.
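The clone lifecycle described above can be sketched as follows. These are hypothetical examples; "svm1", "vol1", and "vol1_clone" are placeholder names:

```shell
# Hypothetical names throughout.
# Create a writable clone that shares blocks with the parent:
c1::> volume clone create -vserver svm1 -flexclone vol1_clone \
    -parent-volume vol1
# Sever the clone from its parent so it uses its own storage:
c1::> volume clone split start -vserver svm1 -flexclone vol1_clone
# Check the progress of the split operation:
c1::> volume clone split show
```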
Splitting Volumes
With a volume and a Snapshot
copy of that volume, create a
clone of the volume
[Diagram: Volume 1 and a Snapshot copy of Volume 1 produce the cloned volume.]
SPLITTING VOLUMES
Splitting a FlexClone volume from its parent removes any space optimizations that are currently employed by
the FlexClone volume. After the split, both the FlexClone volume and the parent volume require the full
space allocation that is specified by their space guarantees. After the split, the FlexClone volume becomes a
normal FlexVol volume.
When splitting clones, consider these important facts:
When you split a FlexClone volume from its parent, all existing Snapshot copies of the FlexClone volume
are deleted.
During the split operation, no new Snapshot copies of the FlexClone volume can be created.
Because the clone-splitting operation is a copy operation that could take some time to complete, the Data
ONTAP operating system provides the vol clone split stop and vol clone split
status commands to stop clone-splitting or to check the status of a clone-splitting operation.
The clone-splitting operation is executed in the background and does not interfere with data access to
either the parent or the clone volume.
If you take the FlexClone volume offline while clone-splitting is in progress, the splitting operation is
suspended. When you bring the FlexClone volume back online, the splitting operation resumes.
After a FlexClone volume and its parent volume have been split, they cannot be rejoined.
Lesson 4
Quotas
Quotas
Limit resource use
[Diagram: quota policies applied to vol1 and its qtrees qtree1, qtree2, and qtree3.]
QUOTAS
Quotas provide a way for you to restrict and track the disk space and number of files that are used by users,
groups, and qtrees. You apply quotas to specific volumes and qtrees. Clustered Data ONTAP enables you to
apply user and group quota rules to qtrees.
You can use quotas to:
Limit the amount of disk space or the number of files that can be used by a user or group
Limit the amount of disk space or the number of files that can be contained by a qtree
Track the amount of disk space or the number of files that are used by a user, group, or qtree without
imposing a hard limit
Warn users when their disk use or file use reaches a predefined threshold
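A tree quota of the kind listed above can be sketched as follows. This is an illustrative example; "svm1", "vol1", and "qtree1" are placeholder names:

```shell
# Illustrative placeholders throughout.
# Limit qtree1 in vol1 to 10 GB of disk space:
c1::> volume quota policy rule create -vserver svm1 -policy-name default \
    -volume vol1 -type tree -target qtree1 -disk-limit 10GB
# Activate quotas on the volume so the rule takes effect:
c1::> volume quota on -vserver svm1 -volume vol1
```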
A quota policy is a set of quota rules for all the volumes of a storage virtual machine (SVM).
[Diagram: quota policies, each containing per-volume quota rules for vol1, vol2, and vol3.]
Lesson 5
[Diagram: cluster nodes node2, node3, and node4 with their disk shelves; volumes a1, a3, b1, b3, and c5 reside on aggregates such as aggr27 and aggr42.]
defer_on_failure
abort_on_failure
force
wait
7. The client accesses the destination volume, and the source volume is
cleaned up.
If the default action, defer_on_failure, is specified, the job tries to cut over until the cutover
attempts are exhausted. If it fails to cut over, it moves into the cutover deferred state. The volume move
job waits for the user to issue a volume move trigger-cutover command to restart the cutover
process.
If the abort_on_failure action is specified, the job tries to cut over until cutover attempts are
exhausted. If the system fails to cut over, it performs a cleanup and ends the operation.
If the force action is specified, the job tries to cut over until the cutover attempts are exhausted, and
then forces the cutover to occur at the expense of disrupting the clients.
If the wait action is specified, when the job reaches the decision point, it does not cut over
automatically. Instead, the job waits for the user to issue a volume move trigger-cutover
command as the signal to try the cutover.
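The wait workflow above can be sketched as follows. These are hypothetical examples; "svm1", "vol1", and "aggr42" are placeholder names:

```shell
# Hypothetical names; -cutover-action wait defers the cutover until it is
# triggered manually.
c1::> volume move start -vserver svm1 -volume vol1 \
    -destination-aggregate aggr42 -cutover-action wait
# Monitor the move, then trigger the cutover when ready:
c1::> volume move show -vserver svm1 -volume vol1
c1::> volume move trigger-cutover -vserver svm1 -volume vol1
```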
Transparent to clients
[Diagram: a volume moves between aggregates (aggr27 on node2, aggr42 on node4) without disrupting clients.]
Lesson 6
Growing Aggregates
Growing Aggregates
aggr1
rg1
rg0
Add 3 disks
aggr2
rg1
rg0
rg2
Add 6 disks
Data ONTAP 7-Mode
system> disk show -n
system> aggr add aggr1 10
Clustered Data ONTAP
c1::> storage disk show -spare -owner node3
c1::> storage aggregate add-disks -aggregate aggr1 -diskcount 3
GROWING AGGREGATES
You can add disks to an aggregate so that it can provide more storage to its associated volumes. You do this
by adding available spare disks to an existing aggregate. When adding disks, you should consider the size of
your RAID groups and plan to fill complete RAID groups to maximize the amount of usable space that is
gained in comparison to the number of disks that are used for parity. In the aggr2 example, six disks are added
to the aggregate, but only one more data disk adds capacity to the aggregate compared to adding three disks.
Other points to consider when adding disks:
Lesson 7
Volume Autosize
Overview
Volume Autosize
Configuration
-volume
[-commitment {try|disrupt|destroy}]
Specifies which Snapshot copies and LUN clones can be automatically deleted to reclaim space
[-defer-delete {scheduled|user_created|prefix|none}]
Determines the order in which Snapshot copies can be deleted
[-delete-order {newest_first|oldest_first}]
Specifies whether the oldest Snapshot copy and the oldest LUN clone, or the newest Snapshot copy and
the newest LUN clone, are deleted first
[-defer-delete-prefix <text>]
Specifies the prefix string for the -defer-delete prefix parameter. The option is not applicable
for LUN clones.
[-target-free-space <percent>]
Specifies the free space percentage at which the automatic deletion of Snapshot copies and LUN clones
must stop. Depending on the -trigger, Snapshot copies and LUN clones are deleted until you reach
the targeted free space percentage.
[-trigger {volume|snap_reserve|space_reserve}]
Specifies the condition that starts the automatic deletion of Snapshot copies and LUN clones
[-destroy-list <text>]
Clustered Data ONTAP only: Specifies a comma-separated list of data backing functions that are affected
if the automatic deletion of the Snapshot copy that is backing that service is triggered
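The parameters above combine into a single configuration command. This is a hypothetical example; "svm1" and "vol1" are placeholder names:

```shell
# Hypothetical example; svm1 and vol1 are placeholders. Deletion starts
# when the volume is nearly full, removes the oldest copies first, and
# stops once 20% of the volume is free.
c1::> volume snapshot autodelete modify -vserver svm1 -volume vol1 \
    -enabled true -trigger volume -delete-order oldest_first \
    -target-free-space 20
```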
References
Clustered Data ONTAP Logical Storage Management Guide
TR-4148-0313: Operational Best Practice - Thin Provisioning
TR-3966: Compression and Deduplication for Clustered Data ONTAP,
Deployment and Implementation Guide
Exercise
EXERCISE
Refer to your exercise guide.
Module 10
Data Protection
(Figure: data protection use cases. Servers are protected by application-consistent backup to backup storage, disaster recovery storage, archive, and compliance storage.)
Features and Key Benefits

HA Pair
Provides continuous data access by transferring the data service of an unavailable controller to the surviving partner.
Key benefits: transparent to clients; multiple paths to all storage shelves; nondisruptive software upgrade.

SnapMirror
A disaster recovery solution that mirrors data to a different storage controller or cluster.
Key benefits: reduced bandwidth utilization; thin replication; replication management across storage tiers with a single tool.

SnapVault
A data protection solution that provides extended and centralized disk-to-disk backup for storage systems.
Key benefits: drastically reduced backup times; smaller backup footprint; fast application and virtual machine recovery.

MetroCluster
A self-contained HA disaster recovery solution that achieves continuous data availability for mission-critical applications.
Key benefits: automated, transparent site-to-site failover; continuous availability and zero data loss; easy deployment.
Lesson 1
HA Pairs
A high-availability (HA) pair contains two nodes whose controllers are directly connected through an HA interconnect.
A node can take over its partner's storage to provide continued data service if the partner goes down.
HA pairs are components of the cluster, but only the nodes in the HA pair can take over each other's storage.
HA PAIRS
HA pair controllers are connected to each other through an HA interconnect. This connection allows one node
to serve data that resides on the disks of its failed partner node. Each node continually monitors its partner,
mirroring the data for each other's NVRAM or NVMEM. The interconnect is internal and requires no
external cabling if both controllers are in the same chassis.
Takeover is the process in which a node takes over the storage of its partner.
Giveback is the process in which that storage is returned to the partner.
HA pairs are components of the cluster in clustered Data ONTAP. Although both nodes in the HA pair are
connected to other nodes in the cluster through a cluster interconnect, only the nodes in the HA pair can take
over each other's storage.
Although single-node clusters are supported, clusters that contain two or more nodes must be arranged in HA
pairs. If you join two single nodes into a Data ONTAP cluster, you must configure the two nodes as an HA
pair.
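The HA relationship can be inspected and controlled from the clustershell. As a sketch (node names such as cluster1-01 are placeholders; verify the available options in the storage failover man pages for your release):
c1::> storage failover show
c1::> storage failover modify -node cluster1-01 -enabled true
The show output lists, for each node, its partner and whether takeover is currently possible.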
A unique namespace
Resource pool (disks, aggregates, volumes)
Network access (interfaces)
MultiStore units, which are confined to a node
Data aggregates are treated a little differently. Data can still be served from the node that has taken over.
Additionally, the client might not even be mounted to the node in the HA pair that is failing over. When the
system creates an aggregate, it assumes that the aggregate is for data and assigns the storage failover (SFO)
HA policy to the aggregate. With the SFO policy, the data aggregates will fail over first and fail back last in a
serial manner.
Hardware-assisted takeover speeds up the takeover process by using a node's remote management device
(Service Processor [SP] or Remote LAN Module [RLM]) to detect failures and quickly initiate the takeover,
rather than waiting for Data ONTAP to recognize that the partner's heartbeat has stopped. Without hardware-assisted takeover, if a failure occurs, the partner waits until it notices that the node is no longer giving a
heartbeat, confirms the loss of heartbeat, and then initiates the takeover.
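Hardware-assisted takeover status can be checked, and the feature enabled, with commands of this general form (the node name is a placeholder):
c1::> storage failover hwassist show
c1::> storage failover modify -node cluster1-01 -hwassist true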
(Figure: an HA pair within a cluster. Node 1 owns n1_aggr0, aggr1, and aggr2; Node 2 owns n2_aggr0 and aggr3. The two nodes are joined by the HA interconnect, and both connect to the cluster interconnect.)
(Figure: Node 1 has stopped. Node 2 has taken over n1_aggr0, aggr1, and aggr2, in addition to its own n2_aggr0 and aggr3.)
Automatic or manual giveback is initiated with the storage failover giveback command.
n1_aggr0 is given back to node 1 to boot the node.
Data aggregate giveback occurs one aggregate at a time.
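A planned takeover and giveback, for example for maintenance on node 1, follows this general pattern (node names are placeholders):
c1::> storage failover takeover -ofnode cluster1-01
c1::> storage failover giveback -ofnode cluster1-01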
(Figure: giveback. Node 2 returns n1_aggr0 and then the data aggregates aggr1 and aggr2 to Node 1.)
HA Failover Summary
HA FAILOVER SUMMARY
Select the statement that is true about giveback with storage failover in
clustered Data ONTAP.
1. The surviving partner simultaneously returns ownership of all the
aggregates to its partner node.
2. The surviving partner returns ownership of all the aggregates and the
data LIFs to its partner.
3. The surviving partner returns ownership of the root aggregate to its
partner node first, and then returns the other aggregates.
4. I/O resumes only when all aggregates are returned to the partner node.
Lesson 2
SnapMirror Software
SnapMirror Technology
(Figure: SnapMirror replication from a source volume to a destination volume.)
SNAPMIRROR TECHNOLOGY
SnapMirror copies are disk-to-disk online backups. Data protection mirror copies are simpler, faster, more
reliable, and easier to restore than tape backups are, although data protection mirror copies are not portable for
storing offsite. A typical use of data protection mirror copies is to put them on aggregates of SATA disks that
use RAID-DP technology and then mirror data to them daily during the least active time in the cluster.
Data protection mirror copies are not meant for client access, although they can be mounted into the
namespace by an administrator. Junctions cannot be followed in a data protection mirror copy, so access is
given to only the data that is contained in that data protection mirror copy, not to any other volumes that are
mounted to the source read/write volume.
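As a sketch, creating and initializing a data protection mirror within a cluster involves a DP-type destination volume and a SnapMirror relationship; the SVM, volume, aggregate, and schedule names below are examples only:
c1::> volume create -vserver svm1 -volume vol1_dp -aggregate aggr2 -size 10g -type DP
c1::> snapmirror create -source-path svm1:vol1 -destination-path svm1:vol1_dp -type DP -schedule daily
c1::> snapmirror initialize -destination-path svm1:vol1_dp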
SnapMirror Features
Volume replication
Qtree replication
SVM-to-SVM replication
Cluster-to-cluster replication
Data protection mirror copies
Load-sharing mirror copies
Asynchronous mirroring
Synchronous mirroring
SNAPMIRROR FEATURES
SnapMirror software provides asynchronous data protection mirror copies at the volume level.
Data ONTAP operating in 7-Mode adds support for replication at the qtree level, and also semi-synchronous
and synchronous mirror replication in real time.
Clustered Data ONTAP adds support for replication among SVMs and among clusters. It also adds the ability
to balance loads among nodes in a cluster with load-sharing mirrors.
Intercluster Replication
Replication between clusters for Disaster Recovery
Data transfers on intercluster network
(Figure: a read/write (RW) source volume is replicated over an intercluster LIF connection, across the WAN through the intercluster network, to a data protection (DP) destination volume.)
INTERCLUSTER REPLICATION
Intercluster SnapMirror replication, as opposed to traditional intracluster mirroring, gives you the flexibility
to create an asynchronous SnapMirror volume on a cluster other than the source volume's cluster, for data
protection. The replication is carried out across the WAN by using intercluster LIFs. You can use intercluster
SnapMirror replication to store online copies of your data offsite, for disaster recovery.
To use intercluster SnapMirror replication, you must license the feature on both participating clusters.
You need a full mesh intercluster network to support node failover and volume moves of the source or
destination volumes. For the network to be full mesh, every intercluster LIF on every node in the cluster must
be able to connect to every intercluster LIF on every node in the peer cluster.
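The setup, in outline, requires an intercluster LIF on each node, a cluster peer relationship, and an SVM peer relationship. The addresses, ports, and names below are placeholders:
c1::> network interface create -vserver cluster1-01 -lif ic1 -role intercluster -home-node cluster1-01 -home-port e0e -address 192.168.1.20 -netmask 255.255.255.0
c1::> cluster peer create -peer-addrs 192.168.2.20
c1::> vserver peer create -vserver svm1 -peer-vserver svm_dr -applications snapmirror -peer-cluster cluster2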
(Figure: a load-sharing mirror. Clients reading /vol_b are served by the mirror copy of origin volume B; the writable origin remains accessible at /.admin/vol_b.)
Lesson 3
SnapVault Software
SnapVault Backups
Reduce backup times from hours or days to minutes
Provide 100% success rates for backup reliability
SNAPVAULT BACKUPS
SnapVault software leverages block-level incremental replication for a reliable, low-overhead backup
solution. It provides efficient data protection by copying only the data blocks that have changed since the last
backup, instead of copying entire files. As a result, you can back up more often while reducing your storage
footprint because no redundant data is moved or stored.
With direct backups between NetApp systems, disk-to-disk vault backups minimize the need for external
infrastructure and appliances. By default, vault transfers retain storage efficiency on disk and over the
network, further reducing network traffic. You can also configure additional deduplication, compression, or
both on the destination volume. However, if additional compression is configured on the destination volume,
storage efficiencies from source to destination are not retained over the network.
The key advantages of vault backups for clusters include reduction of backup times from hours or days to
minutes, 100% success rates for backup reliability, reduction of disk capacity requirements by 90% or more,
simplified management across enterprise applications, and minimized network traffic.
For more information about backing up FlexVol volumes to a backup vault, see the Clustered Data ONTAP
Data Protection Guide.
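A vault relationship is created much like a mirror but with type XDP and a vault policy; the names here are examples, and XDPDefault is the vault policy that ships with clustered Data ONTAP:
c1::> snapmirror create -source-path svm1:vol1 -destination-path svm_bkup:vol1_vault -type XDP -schedule daily -policy XDPDefault
c1::> snapmirror initialize -destination-path svm_bkup:vol1_vault
The policy controls which Snapshot copy labels are transferred and how many copies are retained on the destination.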
SnapVault Functions
Lesson 4
MetroCluster Software
(Figure: MetroCluster spans Cluster A in Data Center A and Cluster B in Data Center B.)
Existing 7-Mode MetroCluster Customers
New MetroCluster Customers
Existing Clustered Data ONTAP Customers
1. Stretch
2. Fabric
3. Hybrid
4. Cloud
Lesson 5
Data Protection Interfaces: OnCommand Unified Manager, SnapProtect, SnapManager, and SnapDrive
1. SnapDrive
2. SnapManager
3. SnapMirror
4. SnapProtect
5. SnapVault
References
Clustered Data ONTAP Data Protection Guide
REFERENCES
Exercise
EXERCISE
Please refer to your exercise guide.
Module 11
Lesson 1
OnCommand System Manager
OnCommand Unified Manager
OnCommand Workflow Automation
OnCommand Performance Manager
OnCommand Insight
Unified Manager Discovery Process
The Unified Manager server pings the cluster management logical interface (LIF) at the cluster management IP address. If the management LIF responds, the cluster and its objects (nodes, aggregates, and volumes) are added. If there is no response, the cluster is not added; check LIF reachability, or check whether the nodes are down.
Availability
Capacity
Performance
Protection
The Quick Takes area provides information about the health of your storage objects.
The Unresolved Incidents and Risks area displays events that are categorized as incidents and risks.
Incidents refer to issues that have already affected the storage objects.
Risks refer to issues that may impact the storage objects.
You can integrate OnCommand Workflow Automation with Unified Manager to execute workflows for your
storage classes. You can also monitor SVMs that have an infinite volume but do not have storage classes.
When Unified Manager is integrated with Workflow Automation, the reacquisition of Workflow Automation
cached data is triggered.
NOTE: A storage class is a definition of aggregate characteristics and volume settings. You can define
storage classes, and you can associate one or more storage classes with an infinite volume.
Manage Annotations
MANAGE ANNOTATIONS
Annotation types enable you to annotate storage objects based on the priority of the data that they contain.
You can annotate volumes, clusters, and SVMs. Data-priority is the default annotation type; it has the values
mission-critical, high, and low. You can create custom annotations. You can also view custom annotation
information in an alert email and in the Event details page and Object details page.
Reporting
REPORTING
Unified Manager reports display the current status of the storage so that you can make important decisions,
such as storage procurement based on the current usage. Reports provide a full view of storage objects, such
as a list of volumes, disk shelves, and aggregates. You can run reports, delete reports, create custom reports,
save a customized report, and import reports. Reports can be scheduled and shared with multiple recipients.
Lesson 2
Event Management
Notifications of Events
The system displays event messages that include:
Message name
Severity level
Description
Corrective action, if applicable
NOTIFICATIONS OF EVENTS
The system collects and displays information about events that occur on your cluster. You can manage the
event destination, event route, mail history records, and SNMP trap history records. You can also configure
event notification and logging.
ALERT
CRITICAL
ERROR
WARNING
NOTICE
INFORMATIONAL
DEBUG
Event Notifications
A three-step process to set up:
1. Modify the event configuration to contain the mail host and other attributes.
2. Create at least one event destination.
3. Modify routes to use a destination.
Examples:
c1::> event config modify -mailfrom bob@learn.local
-mailserver xx.xx.xx.xx
c1::> event destination create -name crits -mail
tom@learn.local
c1::> event route modify -messagename coredump*
-destinations crits
EVENT NOTIFICATIONS
On clustered Data ONTAP, you can further configure the system to send notifications to certain destinations
when an event of interest occurs on the cluster. Unlike AutoSupport messages, the event message is only a
notification rather than complete system diagnostic information. The notification can be associated with any
event.
The event route associates a given event message with an event destination. You modify a message's
destination value to indicate the email address to which the notification should be sent. You can perform this
action on all notifications at the same time by using a regular expression when specifying the event name in
the event route modify command.
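After the routes are modified, the configuration can be verified with show commands, for example:
c1::> event route show -messagename coredump* -fields destinations
c1::> event destination show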
Event Destinations
An event destination is a named combination of any or all of
the following:
The email destination
The SNMP destination
The syslog destination
EVENT DESTINATIONS
An event destination is a named combination of the email destination, the SNMP destination, or the syslog
destination. You can associate a named destination with a specific event message by using an event route.
Event Routes
Are associations between event messages and event
destinations
Allow for frequency thresholds and time thresholds:
Prevent floods of event notifications
Stop notifications for a specific number of iterations or for a
period of time (for example, if you know that a disk is bad and you
want to be reminded only once a day)
EVENT ROUTES
Event routes have nothing to do with network routes but are merely associations between event messages and
receivers of notifications that are associated with the messages.
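As a sketch, to be reminded of a recurring message only after a threshold is reached, the route's thresholds can be modified; the message name below is illustrative, and the exact parameter names and units should be confirmed in the event route modify man page:
c1::> event route modify -messagename disk.failed* -frequencythreshold 5 -timethreshold 86400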
Lesson 3
Data Storage
Are the aggregates online?
Data ONTAP 7-Mode
system> aggr status
DATA STORAGE
For the most part, these commands are self-explanatory. Most show commands provide a view of what's
happening in a particular area of the cluster. Also, most show commands have some powerful query
capabilities which, if you take the time to learn them, can help you to pinpoint potential problems.
In the clustered Data ONTAP command volume show -state !online, the exclamation point means
not (negation). Therefore, this command shows all volumes that do not have a state of online. Because
you'll want to know about other states that exist, it is important to use !online rather than offline.
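The clustered Data ONTAP checks, then, look like this; an empty result means that everything is online:
c1::> storage aggregate show -state !online
c1::> volume show -state !online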
Storage Failover
Is storage failover happy?
Data ONTAP 7-Mode
system> cf status
STORAGE FAILOVER
When the aggregates of one node fail over to the HA partner node, the aggregate that contains the vol0
volume of that node goes, too. Each node needs its vol0 to boot, so when the rebooted node begins to boot, it
signals the partner to do a giveback of that one aggregate and then waits for that to happen. If storage failover
(SFO) is working properly, giveback happens quickly, and the node has its vol0 and can boot. When it gets far
enough in its boot process, the rest of the aggregates are given back. If problems exist, you probably see the
rebooted node go into a "waiting for giveback" state. If this happens, it is possible that its aggregates are stuck
in a transition state between the two nodes and might not be owned by either node. In this situation, contact
NetApp Global Support.
Networking
Data ONTAP 7-Mode
NETWORKING
You can verify that all the network configuration, the ports, and the interfaces are functioning properly.
On clustered Data ONTAP, if the physical ports are fine, verify that the LIFs are working properly and note
which ones are home and which ones aren't home. If the LIFs are not home, it doesn't mean that a problem
exists, but this condition might give you a sense of what is happening.
Lesson 4
(Figure: the counter hierarchy. The object volume contains the instances vol1, vol2, and vol3; each instance exposes counters, for example avg_latency: 54.6us, 84.1us, and 53.8us.)
Command Syntax
Using Counter Manager commands:
Collect a sample; use wildcards to collect all objects or instances:
Clustered Data ONTAP
c1::> statistics start | stop
c1::> statistics samples show
7-Mode
system> stats start
system> stats stop
7-Mode
system> stats show
COMMAND SYNTAX
To begin using Counter Manager, use the start and stop options to collect a measured sample of data. You can
use object and instance parameters to narrow down the data that is being collected, or you can use wildcards
with these parameters to collect all objects or instances. When you have a sample to work with, you can use
the show commands with various object, instance, and counter values to further filter results. In clustered
Data ONTAP, you can use wildcards to specify subsets or thresholds of values for objects, instances, and
counters.
You can also simply collect instantaneous snapshots of current statistics by using the -i option in
7-Mode or the show-periodic option in clustered Data ONTAP. Use ? and tab completion, or view the
man pages for more details.
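Tying this together, a minimal sample-based collection in clustered Data ONTAP might look like the following; the object and instance names are examples only:
c1::> statistics start -object volume -instance accountsPayVol -sample-id smpl1
c1::> statistics stop -sample-id smpl1
c1::> statistics show -sample-id smpl1 -counter total_ops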
Example:
Object = volume
Instance = accountsPayVol
Counter = total_ops
Filtering Statistics
Show groups of counters or individual counters
Can show multiple iterations
FILTERING STATISTICS
Displaying statistics in Data ONTAP 7-Mode requires the use of the stats show command on the most
recently collected sample. The stats show command shows groups of counters or individual counters and
can show multiple iterations of counters. The stats show command can be very useful for protocol
latencies.
Lesson 5
System Logs
Log messages can be sent to:
The console
7-Mode: /etc/messages file
Clustered Data ONTAP: /mroot/etc/log/mlog/messages.log file
SYSTEM LOGS
The system log contains information and error messages that the storage system displays on the console and
logs in message files. In 7-Mode, use an NFS or CIFS client to access the /etc/messages file. In clustered Data
ONTAP, use the debug log command to access the /mroot/etc/log/mlog/messages.log file. You can use
OnCommand System Manager to access system logs in either 7-Mode or clustered Data ONTAP.
Core Files
Data ONTAP 7-Mode
Located in /etc/crash
Named as core.<n>.nz
Clustered Data ONTAP
User-space core files are:
Located in /mroot/etc/crash/cores
Named as <procname>.core.<pid>
CORE FILES
7-Mode core files are stored in /etc/crash. The n in the file name is a number that can be matched with a
date and time of panic based on the panic message in the /etc/messages log file.
User-space core dumps are named according to the process name (for example, mgwd) and use the process ID
(pid) of the instance of the process that generates the core file.
Kernel core dumps include the sysid, which is not the node name but a numerical representation of the node.
The date and time in the core dump name indicate when the panic occurred.
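In clustered Data ONTAP, saved cores and the status of core generation can be viewed from the clustershell, for example:
c1::> system node coredump show
c1::> system node coredump status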
The Remote LAN Module (RLM) is an out-of-band connection to a node that allows for some management of
the node, even when the node is inaccessible from the console and UI. The RLM connection has a separate IP
address and its own shell. Examples of RLM commands are system power off, system power on,
system reset, and system console.
If the node is in bad shape, enter the following from the Remote LAN Module (RLM) session or the
Service Processor (SP) session:
RLM> system core
To access logs:
http://cluster-mgmt-ip/spi/cluster1-01/etc/log/
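The log URL follows a fixed pattern that generalizes to any node in the cluster. A small sketch that assembles it (the function name is illustrative; the cluster address and node name are placeholders):

```python
def spi_log_url(cluster_mgmt_ip, node_name):
    """Build the URL for browsing a node's /etc/log directory
    through the cluster management LIF (spi web service)."""
    return f"http://{cluster_mgmt_ip}/spi/{node_name}/etc/log/"

print(spi_log_url("cluster-mgmt-ip", "cluster1-01"))
# prints http://cluster-mgmt-ip/spi/cluster1-01/etc/log/
```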
References
TR-4211: NetApp Storage Performance Primer for Clustered Data ONTAP
TR-4150-0313: Operational Best Practice AutoSupport
NetApp Knowledge Base: https://kb.netapp.com
REFERENCES
You can find the technical triage templates at
https://kb.netapp.com/support/index?page=content&cat=TRIAGE&channel=HOW_TO.
Exercise
EXERCISE
Please refer to your exercise guide.
Module 12
Clustered Data ONTAP Administration: Upgrading and Transitioning to Clustered Data ONTAP
Lesson 1
Nondisruptive Upgrade
Measure of NDO
(figure) Nondisruptive operations span three categories, from planned to unplanned events:
Lifecycle operations (planned events): capacity and performance management
Maintenance operations (planned events): software upgrade; hardware replacement and upgrade
Infrastructure resiliency (unplanned events): resiliency during hardware and software failure
Lifecycle operations: These are operations that a customer performs to optimize the storage environment
to meet business SLAs while maintaining the most cost-optimized solution. These operations include
moving datasets around the cluster to different tiers of storage and storage controllers to optimize the
performance level of the dataset and manage capacity allocations for future growth of the dataset.
Maintenance operations: At the next level of NDO, components of the storage subsystem are
maintained and upgraded without incurring any outage of data. Examples include replacing any hardware
component, from a disk or shelf fan to a complete controller head, shelf, or system. The idea is that data is
immortal and potentially lives forever, but hardware does not, so maintenance and replacement of
hardware will happen one or more times over the lifetime of a dataset.
Infrastructure resiliency: Infrastructure resiliency is the basic building block for the storage subsystem.
It prevents a customer from having an unplanned outage when a hardware or software failure occurs.
Infrastructure resiliency is based on redundant field replaceable units (FRUs), multipath high-availability
(HA) controller configurations, RAID, and WAFL (Write Anywhere File Layout) proprietary software
enhancements that help with failures from a software perspective. If a node suffers a hardware or
software failure, HA failover enables the partner node in the HA pair to take over.
For more granular information about storage subsystem resiliency against failures, refer to TR-3450 and the
Storage Subsystem Best Practices Guide.
Upgrading Nodes
Two boot images exist on each node
UPGRADING NODES
A nondisruptive upgrade (NDU) is a mechanism that uses HA-pair controller technology to minimize client
disruption during an upgrade of Data ONTAP or controller firmware. This procedure allows each node of
HA-pair controllers to be upgraded individually to a newer version of Data ONTAP or firmware.
Nondisruptive Upgrade
Rolling Upgrade
(figure) A 12-node cluster is upgraded from Data ONTAP 8.1 to 8.2 one HA pair at a time.
* Based on a 60-minute average upgrade time per HA pair
Nondisruptive Upgrade
Batch Upgrade
(figure) A 12-node cluster is divided into two batches (Batch 1 and Batch 2); the nodes within a batch are upgraded from Data ONTAP 8.1 to 8.2 in parallel, one batch after the other.
* Based on a 60-minute average upgrade time per HA pair
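Using the 60-minute-per-HA-pair average cited on these slides, the time difference between a rolling and a batch upgrade can be estimated with a back-of-the-envelope sketch (the function names and the two-batch split are illustrative, matching the 12-node example):

```python
def rolling_minutes(nodes, minutes_per_pair=60):
    """Rolling upgrade: one HA pair (2 nodes) is upgraded at a time."""
    return (nodes // 2) * minutes_per_pair

def batch_minutes(nodes, batches=2, minutes_per_pair=60):
    """Batch upgrade: all HA pairs within a batch upgrade in parallel,
    so total time scales with the number of batches."""
    return batches * minutes_per_pair

print(rolling_minutes(12))  # 360 minutes (6 HA pairs, one after another)
print(batch_minutes(12))    # 120 minutes (two batches of 3 HA pairs)
```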
4. HTTPS
5. HTTP
6. CIFS
Automated NDU
Data ONTAP 8.3 and Later
(figure) Automated NDU selects the method by cluster size: a 6-node cluster is upgraded from 8.3 to 8.3.x with a rolling upgrade, and a 12-node cluster is upgraded in two batches (Batch 1, then Batch 2).
One-touch upgrade:
Simplifies and streamlines the upgrade experience
Avoids human error
Eliminates the need to download the image to every node in the cluster, saving /mroot space on n-2 nodes
If any errors occur, automated NDU guides the user through the further actions to take
The Data ONTAP 8.3 operating system automates the NDU process.
1. First, the Data ONTAP 8.3 operating system automatically installs the target Data ONTAP image on each
node in a cluster.
2. The Data ONTAP 8.3 operating system validates the cluster components to ensure that the cluster can be
upgraded nondisruptively.
3. Based on the number of nodes in the cluster, the operating system executes a rolling or batch upgrade in
the background. Clusters with two to six nodes use a rolling upgrade, whereas clusters with more than six
nodes use a batch upgrade.
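The node-count rule above can be sketched as a simple selection function (the function is illustrative only, not an actual Data ONTAP interface):

```python
def ndu_method(node_count):
    """Choose the automated-NDU strategy by cluster size:
    clusters with two to six nodes use a rolling upgrade;
    clusters with more than six nodes use a batch upgrade."""
    if node_count < 2:
        raise ValueError("automated NDU operates on HA pairs; at least 2 nodes required")
    return "rolling" if node_count <= 6 else "batch"

print(ndu_method(6))   # rolling
print(ndu_method(12))  # batch
```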
The multistep manual process that administrators need to perform on each node has been automated into three
commands for the entire cluster.
Note that automated NDU requires that all nodes in the cluster start at the generally available distribution of
the Data ONTAP 8.3 operating system and upgrade to a later release.
For additional information about Data ONTAP upgrades, see the Clustered Data ONTAP 8.3 Upgrade and
Revert/Downgrade Guide.
Lesson 2
Transition Fundamentals
(figure) The transition journey has four phases:
Evaluation (operating on 7-Mode): clustered Data ONTAP evaluation and ROI assessment
Adoption (7-Mode): decision to adopt clustered Data ONTAP and move the 7-Mode environment
Transition (mixed 7-Mode and clustered Data ONTAP): transition of the 7-Mode environment to clustered Data ONTAP
Operations (clustered Data ONTAP): operating on clustered Data ONTAP
Transition Fundamentals
Transition Fundamentals
Overview
(figure) The unified transition methodology moves through four phases: IDENTIFY, DESIGN, IMPLEMENT, and TRANSITION. The phases cover Define Scope, Environment Discovery, Transition Planning, Cluster Design, Deploy & Configure, Environment Updates, and Data Migration.
Transition Fundamentals
A Conceptual Framework
(figure) The same IDENTIFY, DESIGN, IMPLEMENT, and TRANSITION phases, with their activities (Define Scope, Environment Discovery, Transition Planning, Cluster Design, Deploy & Configure, Environment Updates, and Data Migration). The framework is covered in the web-based training "Introduction to the Unified Transition Methodology Framework."
RapidData Migration
Solution
Targets NFS v3 customers
Source discovery
Automates data migration
Cache maintains client
performance
Minimally disruptive per-client
cutover
Application-based migration
Oracle Automatic Storage Management (ASM)
Microsoft Exchange Database Availability Group (DAG)
Virtualization environment tools
Appliance-based migration
RapidData Migration tool
DTA2800
Web-based courses:
NetApp Transition Fundamentals
Introduction to the Unified Transition
Methodology Framework
References
Clustered Data ONTAP 8.2 Upgrade and Revert/Downgrade Guide
REFERENCES
NetApp University
NetApp
Explore certification
NetApp Support
Welcome to Knowledgebase
The NetApp Support page is your introduction to all products and solutions support:
http://mysupport.netapp.com. Use the Getting Started link
(http://mysupport.netapp.com/info/web/ECMP1150550.html) to establish your support account and hear from
the NetApp CEO. Search for products, downloads, tools, and documentation, or link to the NetApp Support
Community (http://community.netapp.com/t5/Products-and-Solutions/ct-p/products-and-solutions).
Join the Customer Success Community to ask support-related questions, share tips, and engage with other
users and experts.
https://forums.netapp.com/
Search the NetApp Knowledgebase to leverage the accumulated knowledge of NetApp users and product
experts.
https://kb.netapp.com/support/index?page=home
Bonus Module A
Infinite Volumes
(figure) An SVM with Infinite Volume: clients access the infinite volume over NFS and CIFS through data LIFs, and the SVM admin manages it through a management LIF.
Storage virtual machines (SVMs) with Infinite Volume contain only one infinite volume:
One junction path, which is /NS by default
Can be used for NFS and CIFS (SMB 1.0) only
(figure) An infinite volume spans multiple nodes, is mounted at one junction path (/NS), and is served by many data LIFs. It comprises:
1 namespace constituent (NS)
Namespace mirror(s) (NSm)
Minimum of 2 data constituents (DC1, DC2)
1 junction path
Infinite Volume
Namespace Constituent
(figure) The namespace constituent (NS) holds the directory structure and maps each file (F1 through F6) to the data constituent (DC1 or DC2) that stores its data, for example F1 on DC1 and F2 on DC2. The namespace mirror (NSm) protects the namespace constituent.
Infinite Volume
Namespace Mirror
(figure) The namespace mirror (NSm):
10 TB container
Intracluster volume
SnapMirror copy of namespace constituent
Provides
Infinite Volume
Data Constituents
(figure) Files F1 through F6 are stored across the data constituents (DC1 and DC2); the namespace constituent (NS) and namespace mirror (NSm) are separate constituents within the same infinite volume.
(figure) SVM for Infinite Volume: incoming files are placed on the data constituents, and the namespace constituent records each location (F1 on DC1, F2 on DC2, F3 on DC1, F4 on DC2).
Files are distributed in a round-robin fashion to the data constituents based on the capacity threshold
(preference is given to data constituents with the most available space).
Files do not span data constituents; each file is written completely to one data constituent only and
is not striped.
When a write request comes in, the namespace constituent is updated with an empty file handle, and the
data file is written on a data constituent based on the capacity threshold. Then the data file location is
updated on the namespace constituent and acknowledged back to the client.
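The placement rule described above, preferring the data constituent with the most available space, can be sketched as follows (the function, constituent names, and sizes are illustrative only):

```python
def place_file(constituents):
    """Pick the data constituent with the most available space.
    `constituents` maps a constituent name to its available bytes."""
    return max(constituents, key=constituents.get)

# Illustrative free-space figures for two data constituents
free_space = {"DC1": 400, "DC2": 900}
target = place_file(free_space)
print(target)  # DC2, since it has the most available space
```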
(figure) SVM for Infinite Volume: the namespace constituent (NS) tracks where each file resides (F1 on DC1, F2 on DC2, F3 on DC1, F4 on DC2) across the data constituents, with the namespace mirror (NSm) protecting the namespace.
(figure) SVM for Infinite Volume: a lookup of F2 in the namespace constituent returns its location, DC2, and the I/O is directed to that data constituent.
Infinite Volume
(figure) An infinite volume spanning three storage classes (Class 1, Class 2, Class 3) with different media and efficiency settings: a 10-TB SSD tier; a 100-TB SAS tier with dedupe enabled (Class 2); and a 200-TB SATA tier with dedupe and compression enabled. Files F1 through F4 are placed by class.
Per-class settings include: deduplication enabled/disabled, compression enabled/disabled, inline compression enabled/disabled, storage efficiency policy, and space guarantee.
Infinite Volume
(figure) The same three storage classes (10TB SSD; 100TB SAS with dedupe enabled as Class 2; 200TB SATA with dedupe and compression enabled), shown with files F1 through F4 placed according to class.
Bundled Workflows
(figure) A two-class infinite volume, accessed over NFS and CIFS through LIF1 through LIF8 and junctioned at /NS, with the namespace constituent (NS), namespace mirror (NSm), and data constituents DC1 through DC16 distributed across SAS and SATA aggregates:
SAS drives: deduplication enabled; volume efficiency policy runs weekends, 6 a.m. to midnight; QoS policy: background
SATA drives: deduplication enabled; compression enabled (background and inline); volume efficiency policy runs midnight to 6 a.m.
(figure) Files F2, F3, and F4 are written across the SAS and SATA data constituents of the two-class infinite volume, with the namespace constituent (NS) and namespace mirror (NSm) tracking file locations across DC1 through DC16.
(table; partial) Infinite volume attributes compared: Intended For, Scalability, Junction Path, SVM, Protocol Access, System Type, and Files (up to 2 billion files; maximum file size 16 TB).
(table, continued) Additional attributes: Intended For Workloads (backup and replication), Data Protection, Storage Efficiency Features, FlexClone, and Number of Data Copies Required (single data copy per cluster).
Bonus Module B
Engaging NetApp Support
Lesson 1
Experienced Support Professionals
Customer Support Centers
Global Logistics
Award-Winning Support
Website
Support@netapp.com
Call
Key Features
Download software releases and patches
Download product documentation
Use the comprehensive Knowledgebase
Manage your service contracts
Access technical resources
Log and monitor problem reports
Share information through interactive
community sessions
With Login
Support Community
https://forums.netapp.com/community/support
Example: How do I change the J7 jumper on the AT-FCX module to get it running in 10Gb mode?
Community Forums
You can ask questions, exchange ideas, or get feedback from other community members on the NetApp
Community public forum. You must be a member to participate and connect to the NetApp community.
AutoSupport
Key Features
Sophisticated monitoring for faster incident management and resolution
Known as Phone Home
Sends weekly AutoSupport messages to NetApp
(figure) NetApp systems send AutoSupport messages to NetApp.
AUTOSUPPORT
NetApp AutoSupport is an integrated, efficient monitoring and reporting technology that checks the health of
your AutoSupport-enabled NetApp systems on a continual basis. This "call home" feature in the Data
ONTAP software for all NetApp systems collects detailed performance data and sends that diagnostic data
back to NetApp, where it is automatically analyzed for any issues that might affect system stability and
performance.
My AutoSupport
Key Features
(figure) AutoSupport messages flow from NetApp systems into the AutoSupport data warehouse at NetApp, where the data is accessible to NetApp, SSC partners, and customers.
MY AUTOSUPPORT
My AutoSupport is a suite of web-based applications hosted on the NetApp Support site and accessible via
your web browser. Using the data from AutoSupport, My AutoSupport proactively identifies storage
infrastructure issues through a continuous health-check feature and automatically provides guidance on
remedial actions that help increase uptime and avoid disruptions to your business.
My AutoSupport provides four primary functions.
First, it identifies risks and provides best practice tips. For example, My AutoSupport might find a
configuration issue, a bad disk drive, or version incompatibility on your system.
Second, My AutoSupport can compare your hardware and software versions and alert you to potential
obsolescence. For example, My AutoSupport alerts you about end-of-life (EOL) issues or an upcoming
support contract expiration date.
Third, My AutoSupport provides performance and storage utilization reports to help you proactively plan
capacity needs.
Last, My AutoSupport provides new system visualization tools and transition advisor tools for clustered Data
ONTAP systems.
If you plan any changes to your controllers, NetApp recommends manually triggering an AutoSupport
message before you make the changes. This manually triggered AutoSupport message provides a "before"
snapshot for comparison, in case a problem arises later.
Upgrade Advisor
UPGRADE ADVISOR
With the clustered Data ONTAP Dashboard, you can easily generate a Data ONTAP upgrade plan for a
cluster. Using the Upgrade Advisor link from the left navigation pane of the dashboard, select the nodes for
which you need an upgrade plan and then specify the target Data ONTAP version to generate an upgrade plan
for one or more nodes in the cluster.
Key Features
Web-based utility
Provides detailed configuration
information for NetApp products
that work with third-party
components
Method 2
Core Name                               Saved   Panic Time
-------------------------------------   ------  ------------------
core.101178384.2007-08-28.07_18_45.nz   true    8/28/2007 03:18:45
core.101178384.2007-09-13.21_16_49.nz   true    9/13/2007 17:16:49
core.101178745.2007-10-04.13_15_14.nz   true    10/4/2007 09:15:14
Method 1: Enable remote read-only HTTPS access to the root volume of each node.
NOTE: This option is available with clustered Data ONTAP 8.1.1 and later.
For information on how to enable remote read-only HTTPS access, see article 1013814: How to enable
remote access to a node's root volume in a cluster.
1. Copy the file from the root volume to a local workstation:
https://<cluster-mgmt-ip>/spi/<node_name>/etc/crash/
2. Once the file is on a local workstation, you can upload it using https://upload.netapp.com.
3. For more information, see article 1010364: How to upload a core file for analysis.
B-14
2015 NetApp, Inc. This material is intended only for training. Reproduction is not authorized.
Method 2:
1. Run the system coredump upload command.
You can upload a core to NetApp from the storage system, provided that it has access to the Internet. The
syntax for the command is:
::>system node coredump upload -node <node_name> -corename
core.<id>.<date_time>.nz -location ftp://ftp.netapp.com/to-ntap/ -type
kernel -casenum <case_number>
2. Log in with user name anonymous and any valid email address as the password.
The type and case number fields are specific for the type of core being uploaded (application or kernel) and
the specific case number opened for this issue.
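When uploading several cores, assembling the command line for each node and case can be scripted. A hedged sketch (the helper function is illustrative only; the actual command is the one shown above):

```python
def coredump_upload_cmd(node, corename, case_number,
                        location="ftp://ftp.netapp.com/to-ntap/",
                        core_type="kernel"):
    """Build the clustershell command line for uploading a core to NetApp.
    core_type is "kernel" or "application", and case_number is the
    support case opened for the issue."""
    return (f"system node coredump upload -node {node} "
            f"-corename {corename} -location {location} "
            f"-type {core_type} -casenum {case_number}")

cmd = coredump_upload_cmd("cluster1-01",
                          "core.101178384.2007-08-28.07_18_45.nz",
                          "2001234567")
print(cmd)
```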
Lesson 2
Resource
https://kb.netapp.com/support/index?page=content&id=1013073&locale=en_US
References
NetApp Support web site
http://mysupport.netapp.com
https://kb.netapp.com/support/index?page=home
https://forums.netapp.com/community/support
http://support.netapp.com/matrix
REFERENCES
Here are some links to learn more about the NetApp support resources.
The NetApp Support website: A website where you can access technical resources and log and monitor
problem reports. You are required to create an account to access the site.
The NetApp Knowledgebase: A self-help knowledgebase of articles and tips on NetApp products.
The NetApp Support Community: A website where you can ask product usage questions and exchange tips
and suggestions. You are required to create an account to access the site.
The Interoperability Matrix Tool: A search tool that shows which NetApp and third-party products are
supported in a particular configuration. You are required to create an account to access this tool.
Bonus Module C
OnCommand Insight
Walkthrough
Ken Asks
Lesson 1
Asset Dashboard: Upper Screen
[Slide callouts: toolbar; Launch Java (thick) client; administration and settings; online help; current user]
The Help topics option includes getting started, installation, and configuration information for Insight 7.0.
The Data source support matrix option opens a detailed matrix for this version of Insight.
The Check for updates option indicates whether a new Insight version is available.
The Support option opens the NetApp Support page.
The Java UI Help option describes the original Insight client features that you might need to use with the new
Insight 7.0 features.
The About option lists the Insight version, build numbers, and copyright information.
The Admin icon opens the web UI configuration and troubleshooting tools. If a circled number appears on this
icon, the number is the total of all items that require your attention. Check the buttons in the Admin group to see
how these items are divided among the options.
The Launch Java UI icon opens the original Insight client. You need to use the Java UI to define annotations,
business entities, policies, and thresholds.
The Current User Logged in as <user role> icon displays the role of the person who is logged in and
provides the logout option.
Asset Dashboard: Search, Navigation, and Number of Problems
[Slide callouts: search for a specific resource; browser navigation; number of potential problems; asset dashboard global status charts]
[Slide callout: Action menu]
Asset Dashboard: Lower Screen
[Slide callouts: top 10 storage pools; current capacity information]
Asset Dashboard: Heat Maps
[Slide callouts: time segment; last refresh; device configuration summary; correlation information; performance charts]
Virtual machine
Volume
Internal volume
Physical host
Storage pool
Storage
Datastore
Hypervisor
Application
Node
3 hours
24 hours
3 days
7 days
All data
Performance Charts
Select the checkboxes above the charts to determine which types of data are displayed in the charts. The types
of data vary depending on the type of the base resource. Move your pointer over the graphs to display more
details for any point on the graph. Select different time icons to display different segments of the data.
Top Correlated Resources
The Top correlated resources list shows the resources that have a high correlation on one or more performance metrics with the base resource. Use the checkboxes and links in this list to display additional information:
Select the checkbox in front of the resource name to add the data from that resource to the charts. Each
resource is displayed in a different color.
Click the linked letter "T" beside the checkbox, and select whether to include the Total, Read only, or
Write only data in the performance charts. Total data is the default.
Click a linked resource name to open a page of summary data for that resource.
Click the linked percentage beside a resource name to open a box that compares the type of correlation
that resource has with the base resource.
If the correlated resources list does not contain a resource that you need in the performance charts, use the
Search Assets box to locate the resource and add it to the performance data.
Violations
Lesson 2
OnCommand Insight
[Slide callouts: single pane of glass; end-to-end visibility; reporting; value-add product]
ONCOMMAND INSIGHT
Insight is a single solution that enables cross-domain, multivendor, and E-Series resource management and
analysis across networks, storage, and servers in physical and virtual environments.
Insight improves operational efficiency by providing a "single pane of glass," enabling end-to-end visibility into the storage environment, and generating meaningful reports on storage costs for chargeback and showback.
Insight is a value-add product. It is currently priced per terabyte of capacity for multivendor storage environments.
Bonus Module D
Data ONTAP Physical Storage Maintenance
Clustered Data ONTAP Administration: Bonus Module D: Data ONTAP Physical Storage Maintenance
Validation Methods
Data ONTAP uses various methods to validate data:
VALIDATION METHODS
Disk-level checksums: Two checksum types are available for disks that are used by Data ONTAP: BCS
(block) and AZCS (zoned). Both checksum types provide the same resiliency capabilities. BCS optimizes for
data access speed and reserves the smallest amount of capacity for the checksum for disks with 520-byte
sectors. AZCS provides enhanced storage utilization and capacity for disks with 512-byte sectors. You cannot
change the checksum type of a disk. To determine the checksum type of a specific disk model, see the
Hardware Universe.
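As a rough illustration of why BCS reserves only a small amount of capacity, the arithmetic below assumes the 520-byte-sector layout described above (512 data bytes plus an 8-byte checksum per sector). This is illustrative arithmetic, not NetApp code.

```python
# BCS layout on 520-byte-sector disks: 512 bytes of data plus an
# 8-byte checksum in every sector.
SECTOR_BYTES = 520
DATA_BYTES = 512

def bcs_usable_fraction() -> float:
    """Fraction of raw sector capacity available for data under BCS."""
    return DATA_BYTES / SECTOR_BYTES

# About 98.5% of each sector holds data; roughly 1.5% is checksum reserve.
print(round(bcs_usable_fraction() * 100, 2))
```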
Media-level scrubbing: The purpose of the continuous media scrub is to detect and correct media errors to
minimize the chance of storage system disruption due to a media error while a storage system is in degraded
or reconstruction mode.
By default, Data ONTAP runs continuous background media scrubbing for media errors on all storage system
disks. If a media error is found, Data ONTAP uses RAID to reconstruct the data and repairs the error. Media
scrubbing is a continuous background process. Therefore, you might observe disk LEDs blinking on an
apparently idle storage system. You might also observe some CPU activity even when no user workload is
present.
Because continuous media scrubbing searches only for media errors, its impact on system performance is
negligible. In addition, the media scrub attempts to exploit idle disk bandwidth and free CPU cycles to make
faster progress. However, any client workload results in aggressive throttling of the media scrub resource.
RAID-level scrubbing: RAID-level scrubbing means checking the disk blocks of all disks in use in
aggregates (or in a particular aggregate, plex, or RAID group) for media errors and parity consistency. If Data
ONTAP finds media errors or inconsistencies, it uses RAID to reconstruct the data from other disks and
rewrites the data.
RAID-level scrubs help improve data availability by uncovering and fixing media and checksum errors while
the RAID group is in a normal state. (For RAID-DP, RAID-level scrubs can also be performed when the
RAID group has a single-disk failure.) RAID-level scrubs can be scheduled or run manually.
[Slide: hot spare; copy; fix or fail]
Rebuild time
Performance degradation
Potential data loss due to additional disk failure during reconstruction
[Slide callouts: exact match; larger size: unused capacity; different speed: performance; degraded mode: no replacement]
If the available hot spares are not the correct size, Data ONTAP uses one that is the next size up, if there
is one.
The replacement disk is downsized to match the size of the disk it is replacing; the extra capacity is not
available.
If the available hot spares are not the correct speed, Data ONTAP uses one that is a different speed.
Using disks with different speeds within the same aggregate is not optimal. Replacing a disk with a
slower disk can cause performance degradation, and replacing a disk with a faster disk is not cost-effective.
If there is no spare with an equivalent disk type or checksum type, the RAID group that contains the failed
disk goes into degraded mode; Data ONTAP does not combine effective disk types or checksum types within
a RAID group.
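The spare-selection preference described above can be sketched as a small function: prefer an exact size match, otherwise take the next size up (which is downsized in use), and fall back to degraded mode when no suitable spare exists. This is a hypothetical illustration of the stated policy, not NetApp code, and it ignores the disk-type, checksum-type, and speed matching also discussed above.

```python
def select_spare(failed_size_gb, spare_sizes_gb):
    """Return the size of the spare to use, or None for degraded mode."""
    if failed_size_gb in spare_sizes_gb:
        return failed_size_gb                  # exact match preferred
    larger = [s for s in spare_sizes_gb if s > failed_size_gb]
    if larger:
        return min(larger)                     # next size up; downsized in use
    return None                                # no suitable spare: degraded mode

print(select_spare(900, [900, 1800]))          # exact match
print(select_spare(900, [450, 1800, 3600]))    # next size up
print(select_spare(900, [450]))                # degraded mode
```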
Degraded Mode
Degraded mode occurs when a disk in a RAID group fails.
The failed disk (or disks, for RAID-DP) is rebuilt on a spare disk (if available).
DEGRADED MODE
If one disk in a RAID group fails, the system operates in degraded mode. In degraded mode, the system
does not operate optimally, but no data is lost. Within a RAID 4 group, if a second disk fails, data is lost;
within a RAID-DP group, if a third disk fails, data is lost. The following AutoSupport message is broadcast:
[monitor.brokenDisk.notice:notice].
If the maximum number of disks has failed in a RAID group (two for RAID-DP, one for RAID 4) and there are no suitable spare disks available for reconstruction, the storage system automatically shuts down after the period of time specified by the raid.timeout option. The default timeout value is 24 hours. See this FAQ for more information: https://kb.netapp.com/support/index?page=content&id=2013508
Therefore, you should replace failed disks and used hot-spare disks as soon as possible. You can use the options raid.timeout command to modify the timeout interval. However, keep in mind that, as the timeout interval increases, the risk of subsequent disk failures also increases.
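The failure-tolerance rule above can be summarized in a few lines: a RAID 4 group loses data on a second concurrent disk failure, and a RAID-DP group on a third. This is a sketch of the stated rule only, not NetApp code.

```python
# Concurrent disk failures each RAID type can survive, per the notes above.
TOLERANCE = {"raid4": 1, "raid_dp": 2}

def group_state(raid_type, failed_disks):
    """Classify a RAID group by the number of concurrently failed disks."""
    limit = TOLERANCE[raid_type]
    if failed_disks == 0:
        return "normal"
    if failed_disks <= limit:
        return "degraded"      # no data lost, but rebuild to spares promptly
    return "data loss"

print(group_state("raid4", 1))     # degraded
print(group_state("raid_dp", 2))   # degraded
print(group_state("raid_dp", 3))   # data loss
```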
Disk Replacement
To replace a data disk with a spare disk:
Data ONTAP 7-Mode
system> disk replace start [-m] old_disk_name spare_name
-m if no speed match
DISK REPLACEMENT
You can use the storage disk replace command to replace disks that are part of an aggregate without
disrupting data service. You do this to swap out mismatched disks from a RAID group. Keeping your RAID
groups homogeneous helps optimize storage system performance.
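In clustered Data ONTAP, the equivalent nondisruptive operation is the storage disk replace command. A sketch follows, with placeholder disk names; verify the exact syntax for your release in the command reference:

```
cluster1::> storage disk replace -disk 1.0.16 -replacement 1.0.18 -action start
```

The copy runs in the background, and the original disk is returned to the spare pool when the copy completes.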
Disk Sanitization
A way to protect sensitive data by making recovery of the data impossible
The process of physically obliterating data by overwriting disks with three successive byte patterns or with random data
Administrators can specify the byte patterns or use the Data ONTAP default pattern
DISK SANITIZATION
Disk sanitization is the process of physically obliterating data by overwriting disks with specified byte
patterns or with random data so that recovery of the original data is impossible. Use the disk sanitize
command to ensure that no one can recover the data on the disks.
The disk sanitize command uses three successive default or user-specified byte overwrite patterns for
up to seven cycles per operation. Depending on the disk capacity, the patterns, and the number of cycles, the
process can require several hours. Sanitization runs in the background. You can start, stop, and display the
status of the sanitization process.
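The overwrite scheme described above can be modeled as a toy simulation: up to seven cycles, each applying three successive byte patterns to the disk contents. The patterns here are illustrative placeholders, and real sanitization happens inside Data ONTAP, not in host code.

```python
def sanitize(disk: bytes, patterns=(0x55, 0xAA, 0x3C), cycles=1) -> bytes:
    """Overwrite every byte with each of three patterns, per cycle."""
    assert 1 <= cycles <= 7 and len(patterns) == 3
    for _ in range(cycles):
        for p in patterns:
            disk = bytes([p]) * len(disk)   # full overwrite with this pattern
    return disk

wiped = sanitize(b"sensitive-data", cycles=3)
# After sanitization, only the last byte pattern remains on "disk".
print(wiped == bytes([0x3C]) * len(b"sensitive-data"))
```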
Shelf replacement process (from the slide):
Verify that nodes are available and that no disk reconstruction is running; document the storage configuration and the maximum capacity, spindle, and volume counts.
Identify the shelves to replace.
Identify volumes and aggregates: record details of the volumes and aggregates residing on the shelves to be removed.
Add new storage shelves as required (which might require a new HBA) and verify limits.
Create new aggregates, the same size as or larger than the originals.
Evacuate the data.
Delete all aggregates on the old shelves and remove disk ownership on the evacuated shelves.
Verify and remove the shelves: if removing an entire stack, remove all cables; if removing some shelves from a stack, recable to bypass the removed shelves on path A, then on path B.
The disk hardware has reached the end of a capital depreciation period.
The disk hardware has reached the end of hardware support.
New-generation storage technology is available.
All the shelves on a stack or loop are replaced in one operation. This process is also known as a shelf stack, or
loop, upgrade. A common scenario is upgrading from 4-Gbps FC-AL or 3-Gbps SAS disk shelves to 6-Gbps
SAS disk shelves.
The steps to perform a shelf stack upgrade use clustered Data ONTAP features that are standard in 8.1 and later versions.
For information about nondisruptive shelf removal, see Technical Report 4277: Nondisruptively Replace a
Complete Disk Shelf Stack with Clustered Data ONTAP.
Bonus Module E
Clustered Data ONTAP
Architecture
Clustered Data ONTAP Administration: Bonus Module E: Clustered Data ONTAP Architecture
Components
Three major software components on every node:
The network module
The data module
The SCSI module
COMPONENTS
The modules refer to separate software state machines that are accessed only by well-defined APIs. Every
node contains a network module, a SCSI module, and a data module. Any network or SCSI module in the
cluster can talk to any data module in the cluster.
The network module and the SCSI module translate client requests into Spin Network Protocol (SpinNP)
requests and vice versa. The data module, which contains the WAFL (Write Anywhere File Layout) file
system, manages SpinNP requests. The cluster session manager (CSM) is the SpinNP layer between the
network, SCSI, and data modules. The SpinNP protocol is another form of remote procedure call (RPC) interface. It is used as the primary intracluster traffic mechanism for file operations among network, SCSI, and data modules.
The members of each replicated database (RDB) unit on every node in the cluster are in constant
communication with each other to remain synchronized. The RDB communication is like the heartbeat of
each node. If the heartbeat cannot be detected by the other members of the unit, the unit corrects itself in a
manner that is discussed later in this course. The four RDB units on each node are the blocks configuration
and operations manager (BCOM), the volume location database (VLDB), VifMgr, and management.
[Slide: node anatomy: the M-host with its RDB units (mgwd, VLDB, VifMgr, and BCOM), the CSM carrying cluster traffic, the data module, vol0, and a data SVM root volume with volumes Vol1 and Vol2]
Protocols:
TCP/IP and UDP/IP
NFS and CIFS
SpinNP
Protocols:
FC
SCSI
SpinNP
TCP/IP
The CSM
Provides a communication mechanism between any network or SCSI module and any data module
Provides a reliable transport for SpinNP traffic
Is used regardless of whether the network or SCSI module and the data module are on the same node or on different nodes
[Slide: two nodes, each with network and SCSI modules, a CSM, and a data module; the root volumes, vol0, and data volumes Vol1 through Vol4 are spread across both nodes]
[Slide: nodes joined by the cluster interconnect; each node runs a network module, a SAN module, and a data-module stack of WAFL, RAID, storage, and NVRAM]
[Slide: data module detail: network protocols, WAFL, RAID, storage, physical memory, and NVRAM (mirrored to the HA partner), serving clients, the CSM, and management]
Data SVMs
Characteristics
Data SVMs
Relationships to Volumes and LIFs
[Slide: four SVMs, including PetCo, RonCo, and QuekCo, each with its own namespace, SVM root volume, and data volumes]
Namespaces
A namespace is the file system of a data SVM
NAMESPACES
A namespace is a file system. A namespace is the external, client-facing representation of an SVM. A
namespace consists of volumes that are joined together through junctions. Each SVM has one namespace, and
the volumes in one SVM cannot be seen by clients that are accessing the namespace of another SVM.
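A namespace built from volumes joined at junctions can be sketched as a path lookup: a client path resolves to whichever mounted volume owns the deepest matching junction. The junction paths and volume names below are hypothetical, and real resolution is done by Data ONTAP, not client code.

```python
# Per-SVM namespace: junction path -> volume mounted at that junction.
junctions = {
    "/": "svm_root",
    "/home": "vol_home",
    "/projects": "vol_projects",
}

def volume_for(path: str) -> str:
    """Return the volume serving `path` (deepest matching junction wins)."""
    matches = (j for j in junctions
               if path == j or path.startswith(j.rstrip("/") + "/"))
    return junctions[max(matches, key=len)]

print(volume_for("/home/alice"))   # served by vol_home
print(volume_for("/etc"))          # falls through to the SVM root volume
```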
Is usually mirrored
The RDB
The RDB is the key to maintaining high-performance consistency in a distributed environment
The RDB maintains data that supports the cluster, not the user data in the namespace
THE RDB
The RDB units do not contain user data. The RDB units contain data that helps to manage the cluster. These
databases are replicated; that is, each node has its own copy of the database, and that database is always
synchronized with the databases on the other nodes in the cluster. RDB database reads are performed locally
on each node, but an RDB write is performed to one master RDB database, and then those changes are
replicated to the other databases throughout the cluster. When reads of an RDB database are performed, those
reads can be fulfilled locally without the need to send requests over the cluster interconnects.
The RDB is transactional: it guarantees that when data is written to a database, either all of it is written successfully or all of it is rolled back. No partial or inconsistent database writes are committed.
Four RDB units (the VLDB, management, VifMgr, and BCOM) exist in every cluster, which means that four
RDB unit databases exist on every node in the cluster.
Management Gateway
Is also known as the M-host
MANAGEMENT GATEWAY
The management RDB unit contains information that is needed by the management gateway daemon (mgwd)
process on each node. The kind of management data that is stored in the RDB is written infrequently and read
frequently. The management process on a given node can query the other nodes at run time to retrieve a great
deal of information, but some information is stored locally on each node, in the management RDB database.
VifMgr
Runs as vifmgr
VIFMGR
The VifMgr is responsible for creating and monitoring NFS, CIFS, and iSCSI LIFs. It also handles automatic
NAS LIF failover and manual migration of NAS LIFs to other network ports and nodes.
The RDB
Details
For each of the units, one node is the master and the other nodes are secondaries
The master node for each unit might be different from the master nodes for the other units
Writes for an RDB unit go to its master and are then propagated to the secondaries through the cluster interconnect
The RDB
Terminology
RDB Databases
[Slide: four-node cluster (node1 through node4), each node holding the four RDB unit databases]
RDB DATABASES
This slide shows a four-node cluster. The four databases that are shown for each node are the four RDB units
(management, VLDB, VifMgr, and BCOM). Each unit consists of four distributed databases. Each node has
one local database for each RDB unit.
The databases that are shown on this slide with dark borders are the masters. Note that the master of any
particular RDB unit is independent of the master of the other RDB units.
The node that is shown on this slide with a dark border has epsilon (the tie-breaking ability).
On each node, all the RDB databases are stored in the vol0 volume.
Quorum
Overview
Example: If the VLDB goes out of quorum, during the brief time that
the database is out, no volumes can be created, deleted, or moved;
however, access to the volumes from clients is not affected.
QUORUM: OVERVIEW
For a particular RDB unit, a master can be elected only when a majority of that unit's local replicas on
eligible nodes are connected and healthy. The master is elected when the local replicas agree on the first
reachable healthy node in the RDB site list. A healthy node is one that is connected, can communicate with
the other nodes, has available CPU cycles, and has reasonable I/O performance.
The master of a given unit can change. For example, when the node that is the master of the management
unit reboots, a new management master must be elected by the remaining members of the management unit.
A local unit goes out of quorum when cluster communication is interrupted, for example because of a node
reboot or a cluster interconnect hiccup that lasts a few seconds. Because the RDB units
always work to monitor and maintain a good state, the local unit comes back in quorum automatically. When
a local unit goes out of quorum and then comes back into quorum, the RDB unit is synchronized again. Note
that the VLDB process on a node might go out of quorum even though the VifMgr process on that same node is
unaffected.
When a unit goes out of quorum, reads from that unit can be performed, but writes to that unit cannot. That
restriction is enforced so that no changes to that unit happen during the time that a master is not agreed upon.
In addition to the example above, if the VifMgr goes out of quorum, access to LIFs is not affected, but no LIF
failover can occur.
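The majority rule, with epsilon acting as a tie-breaker, can be illustrated with a small calculation. This is a simplification for teaching purposes (modeling epsilon as an extra fractional vote), not the actual RDB algorithm:

```python
def in_quorum(healthy_eligible, total_eligible, epsilon_holder_healthy):
    """Return True if a majority of eligible nodes is healthy.

    Epsilon is modeled here as an extra fractional vote, so that exactly
    half the eligible nodes -- if they include the epsilon holder -- still
    form a majority. Simplified illustration, not ONTAP internals.
    """
    votes = healthy_eligible + (0.5 if epsilon_holder_healthy else 0.0)
    return votes > total_eligible / 2

# Four-node cluster: two healthy nodes alone are a tie, so out of quorum ...
print(in_quorum(2, 4, epsilon_holder_healthy=False))  # False
# ... but two healthy nodes that include the epsilon holder are a majority.
print(in_quorum(2, 4, epsilon_holder_healthy=True))   # True
# A plain majority needs no tie-breaker.
print(in_quorum(3, 4, epsilon_holder_healthy=False))  # True
```

The same arithmetic shows why an even split is the interesting case: epsilon only matters when the healthy nodes are exactly half of the eligible nodes.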
Quorum
Details
QUORUM: DETAILS
Marking a node as ineligible (by using the cluster modify command) means that the node no longer
affects RDB quorum or voting. If you mark the epsilon node as ineligible, epsilon is automatically given to
another node.
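For reference, the RDB ring state can be inspected with the advanced-privilege `cluster ring show` command, and eligibility is changed with `cluster modify`. The session below is abbreviated and illustrative; the node names and output values are representative, not captured from a real system:

```
cluster1::> set -privilege advanced
cluster1::*> cluster ring show -unitname vldb
Node      UnitName Epoch    DB Epoch DB Trnxs Master    Online
--------- -------- -------- -------- -------- --------- ---------
node1     vldb     5        5        99       node2     secondary
node2     vldb     5        5        99       node2     master
...
cluster1::*> cluster modify -node node3 -eligibility false
```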
The epsilon node holds epsilon for the entire cluster, not per RDB unit
(unlike the masters, which are chosen separately for each unit)
Two-Node Clusters
Two-node clusters are a special case:
TWO-NODE CLUSTERS
From Ron Kownacki, author of the RDB:
Basically, quorum majority doesn't work well when down to two nodes and there's a failure, so RDB is
essentially locking the fact that quorum is no longer being used and enabling a single replica to be artificially
writable during that outage.
The reason we require a quorum (a majority) is so that all committed data is durable: if you successfully
write to a majority, you know that any future majority will contain at least one instance that has seen the
change, so the update is durable. If we didn't always require a majority, we could silently lose committed
data. So in two nodes, the node with epsilon is a majority and the other is a minority, so you would only
have one-directional failover (need the majority). So epsilon gives you a way to get majorities where you
normally wouldn't have them, but it only gives unidirectional failover because it's static.
In two-node (high-availability mode), we try to get bidirectional failover. To do this, we remove the
configuration epsilon and make both nodes equal, and form majorities artificially in the failover cases. So
quorum is two nodes available out of the total of two nodes in the cluster (no epsilon involved), but if there's
a failover, you artificially designate the survivor as the majority (and lock that fact). However, that means you
can't fail over the other way until both nodes are available, they sync up, and drop the lock; otherwise you
would be discarding data.
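The two-node behavior that Kownacki describes can be sketched as a small state model. This is a simplified illustration with invented names (the "lock" here is a plain attribute, not an ONTAP internal):

```python
# Simplified model of two-node HA quorum: no epsilon; on a failure the
# survivor is artificially declared the majority and that fact is locked.
# Failover in the other direction is refused until both nodes are back
# and have resynchronized. Illustration only, not NetApp's implementation.

class TwoNodeCluster:
    def __init__(self):
        self.up = {"nodeA": True, "nodeB": True}
        self.locked_survivor = None  # set when one node fails

    def fail(self, node):
        other = "nodeB" if node == "nodeA" else "nodeA"
        if self.locked_survivor and self.locked_survivor != other:
            # The locked survivor itself went down: the other node cannot
            # take over until both nodes sync up and drop the lock,
            # because otherwise committed data could be discarded.
            raise RuntimeError("failover refused: replicas not in sync")
        self.up[node] = False
        self.locked_survivor = other  # survivor becomes the "majority"

    def recover_and_sync(self):
        # Both nodes available again: sync the replicas, then drop the lock.
        self.up = {"nodeA": True, "nodeB": True}
        self.locked_survivor = None

    def writable(self):
        both_up = all(self.up.values())
        return both_up or self.locked_survivor is not None

c = TwoNodeCluster()
c.fail("nodeA")       # nodeB survives and is locked as the majority
print(c.writable())   # True: a single replica is artificially writable
c.recover_and_sync()  # both nodes sync up; the lock is dropped
c.fail("nodeB")       # now failover in the other direction is allowed
print(c.writable())   # True
```

If `nodeB` failed while it still held the lock from the first outage, the model raises an error instead of failing over, which is the "can't fail over the other way" restriction in the quotation.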
M-Host
[Diagram: one node's software stack: the M-host with vol0, which stores the RDB units (Mgwd, VLDB, VifMgr, and BCOM); the data module serving a data SVM's root volume and data volumes Vol1 and Vol2; and the CSM carrying cluster traffic over the cluster interconnect]
THANK YOU